# Synthetic Image Data for Deep Learning

Jason W. Anderson, BMW IT Research Center, Greenville, SC <EMAIL_ADDRESS>
Marcin Ziolkowski, BMW IT Research Center, Greenville, SC <EMAIL_ADDRESS>
Ken Kennedy, BMW IT Research Center, Greenville, SC <EMAIL_ADDRESS>
Amy W. Apon, School of Computing, Clemson University, Clemson, USA <EMAIL_ADDRESS>

###### Abstract

Realistic synthetic image data rendered from 3D models can be used to augment image sets and train image classification and semantic segmentation models. In this work, we explore how high-quality physically-based rendering and domain randomization can efficiently create a large synthetic dataset based on production 3D CAD models of a real vehicle. We use this dataset to quantify the effectiveness of synthetic augmentation using U-net and Double-U-net models. We found that, for this domain, synthetic images were an effective technique for augmenting limited sets of real training data. We observed that models trained on purely synthetic images had a very low mean prediction IoU on real validation images. We also observed that adding even very small amounts of real images to a synthetic dataset greatly improved accuracy, and that models trained on datasets augmented with synthetic images were more accurate than those trained on real images alone. Finally, we found that in use cases that benefit from incremental training or model specialization, pretraining a base model on synthetic images provided a sizeable reduction in the training cost of transfer learning, allowing up to 90% of the model training to be front-loaded.

## I Introduction

In the field of image classification and segmentation with deep learning systems, access to labelled training images of sufficient quantity and quality can be a formidable barrier to training an accurate model. Collecting, segmenting, and labelling high quality images can be prohibitively expensive in both time and monetary cost. In some cases, the barrier can be lowered by pretraining a model with a generic dataset such as ImageNet[1] and then fine-tuning it on a smaller set of images more directly related to the project goals. However, depending on the specificity requirements for the final model, a generalized dataset may not be useful. A common alternative to vast quantities of readily available general images and costly task-specific images is synthetic image generation, where a 3D computer model of a scene relevant to the deep learning model is rendered to an image, segmented and/or classified, and then used to augment the training data available to the model. Synthetic data has been used successfully in a growing body of research, in many cases reducing the overall cost of training a model.

Advantages to using synthetic images are not limited to overcoming the time and safety constraints of capturing and annotating real images. 3D modeling systems are very flexible: scenes and assets can be changed and re-rendered at a cost likely far less than that of the real-world equivalent. For example, in the use cases presented in this work, the cost of changing the vehicle CAD model to a brand new vehicle or a new model year and then generating a new training set is far less than that of acquiring new real-world examples, especially when the goal is to have a working detection system before the model enters production.
The costs of developing a synthetic image generation pipeline specific to a model's goals can be further recuperated when similar images can be used to train other models, potentially requiring only minor alterations to the generator. While the body of work on using synthetic images in deep learning models has broadened in recent years, we have found little exploration of using synthetic images to pretrain a multistage segmentation model such as the recently proposed DoubleU-net, which has been shown to be highly accurate in some applications. Our motivation in this work is to explore the performance effects of training such a model on various combinations of synthetic and real images. We present our research on synthetic image training in the context of a real-world anomaly detection system, including the results of testing on a large set of annotated proprietary production images. We believe the methodology presented here can be readily applied to other systems, and we make the case that synthetic images can replace real images and still achieve a potentially useful level of performance.

The remainder of this paper is organized as follows. Section II describes concepts related to deep learning systems, synthetic data generation, and the specific models used in this paper. Section III describes the technologies and processes used to generate synthetic images. Section IV shows how synthetic data can augment real images to improve model accuracy. Section V compares those results to techniques using pretrained models and transfer learning. Finally, Section VI summarizes our key results and addresses further questions remaining for exploration.

## II Background and Related Work

Synthetic data can be used to train deep learning models in a number of ways. At one extreme, a model may be trained on only synthetic images, which can be useful where acquiring examples of the desired detection conditions is time consuming or unsafe. For example, sufficient examples of rare flaws in products on an assembly line could be time consuming to capture for a quality control model, and examples of unsafe conditions may be challenging to acquire for a video surveillance system. There has been some success with using purely synthetic data to train models [2, 3, 4], and it may be a good option depending on the use case. Real images, if they exist, can be used as all or part of the test set to prove the model's accuracy. Next, synthetic images may be mixed with real images in some combination, augmenting the size and/or variation of the training set presented to the model. In published research, this method has been used to successfully decrease model training cost or improve model accuracy, and in some cases both [5, 6, 7]. Finally, synthetic images can also be used to pretrain a model in a two-stage process, either by fitting a model to the synthetic set and then increasing the model bias toward real-world examples by iterating over the real image set, or in a multi-model system such as DoubleU-net[8], which is the primary focus of this paper. This method is similar to using generalized image sets to pretrain a system (such as robotic vision) on patterns common to the real world, followed by secondary training to adapt the model to a specific environment.

Jhang et al.[9] demonstrated training a Faster R-CNN[10] object detection model using synthetic images annotated with Unity Perception and generated at scale with Unity Simulation.
They found that while a model trained purely on a large (400,000) set of synthetic images performed poorly at detecting objects in situations with occlusions and low lighting, augmenting the synthetic images with a small number of real images significantly improved detection accuracy over a model trained purely on a small (760) set of real images. Their work was inspired by and complements findings from Hinterstoisser et al.[4], who described a method for domain randomization by composing a backdrop of random objects in front of which the objects of interest are rendered and labeled. Our process is distinguished by using a randomly oriented "skybox" surrounding the subject of interest, which achieves domain randomization with lowered scene complexity and randomized reflections. Another method of domain adaptation is to insert simulated objects of interest into real images, as in [11]. Rendered images of 3D scenes have long been used to train object detection models, as exemplified by [12] and [13]. More recently, advances in 3D rendering techniques have made photorealistic image generation practical [14, 15, 16]. Other researchers have applied full domain randomization[17] to synthetic image generation with varying degrees of success [4, 18, 19]. Our approach is a hybrid between full domain randomization and photorealistic rendering, varying the lighting and subject/background orientation and randomly sampling from a set of realistic textures. The approaches described in [20], [21] and [22] are most similar to our own in this regard. Successful specialization of U-net models has been achieved [23, 24] using VGG encoders[25] pretrained on the ImageNet[1] dataset. Recent research has shown that models developed using synthetic data can be used as a basis for more specific models. This transfer learning can be applied to many tasks, such as enhancing detection of object position [26, 11] and separating target objects from visual distractors [27].

## III Synthetic Image Generation

In this section we describe the tools and workflow developed for creating synthetic images, followed by our experiment designs for validating the output images and using the generated data to train a deep learning model. Software used includes Unity 2020.1, Unity High Definition Rendering Pipeline (HDRP) 7.4.1, and PiXYZ Plugin 2019.2.1.14.

### III-A 3D Modeling

The image generator was built as a set of scene descriptions, models, and scripts in the Unity 3D game development platform. For our use case, a vehicle model was translated from its native CATIA V5 CAD format into a Unity asset with the PiXYZ plugin. Importing the CAD object was relatively labor intensive due to a technical difficulty in mapping part materials to Unity textures, which is an area of current work. The work-around for our purposes was to manually assign textures to the approximately 10,000 visible surfaces in the imported Unity asset.

### III-B Realistic Rendering

In general, synthetic images for model training need to exemplify the characteristics of real images that the model relies on for accurate classification. While these qualities could be vastly different depending on the model, for our use case we needed images that embody the broad range of shadows and reflections seen in the production environment.
Rather than attempting to identify and optimize for the most important image features, our approach was to create images as accurately as possible, with the goal of being indistinguishable from real images to a human observer. Images were rendered using the Unity High Definition Rendering Pipeline. We relied on a number of Unity features designed for high rendering accuracy, and avoided many approximation features designed to improve rendering performance in a game setting requiring high framerates with limited hardware resources. Our settings for image accuracy were largely influenced by guidelines from Unity[28]. The Unity "camera" object was configured to mimic the properties of the physical camera used to capture the real images. A full disclosure and justification of the rendering settings we used would be lengthy and beyond the scope of this paper, and will be made available on publication. We found several external resources very helpful in creating realistic image rendering, especially in our use case with automotive models. In particular, Unity's automotive industry-focused Measured Materials library[29] helped us simulate the paint, glass, rubber, and plastic textures of a real vehicle. Skyboxes were sampled from the Unity HDRI pack, captured using techniques described by Lagarde et al.[30].

### III-C Domain Randomization

We chose a hybrid approach to domain randomization, rendering the image subject as accurately as possible with ambient lighting similar to the production environment. Randomized attributes included the subject position relative to the camera within plausible constraints, vehicle exterior paint colors from a set of possible values, and a single light source (the sun) with varying position. To separate the subject from the background, we used a background skybox with a very "busy" texture, and then randomized its orientation on all three axes for every scene. This served a secondary purpose in creating randomized reflection patterns on all surfaces of the vehicle. Randomization of objects in the scene was accomplished with a set of scripts written in C#, Unity's native language for game logic.

### III-D Segment Labeling

We labeled image segments by capturing multiple images from each randomized scene: one fully rendered image, and then one false-color image for each segment. This could have been achieved in many ways, but the approach we found to be most performant in Unity was to maintain a second "mask" copy of the subject model, completely colored with an "unlit" black texture and locked to the same position as the color model. Two identical cameras in the same position were used, one able to see the color model, background, and lighting, and the other able to see only the mask model. After the normal image was captured with the color camera, the segment capture phase would iterate through the groups of components comprising each segment, recolor the group with an unlit white texture, capture an image with the mask camera, and then recolor the group back to the unlit black texture. Figure 1 shows the resulting image segments. This approach had the performance advantage of minimizing the retexturing of materials on the model. It also allowed us to capture occlusions by components not part of the segment of interest, such as the door handles in the example images.

Figure 1: A 3D generated image (top left) in addition to a series of one-hot encoded masks segmenting each object class.
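Downstream of Unity, converting the per-segment mask captures into training labels is straightforward. As a minimal Python/NumPy sketch of assembling the one-hot label tensors shown in Figure 1, assuming one grayscale mask image is written per feature class (the file naming and binarization threshold are illustrative, not details from the paper):

```python
import numpy as np
from PIL import Image

def assemble_one_hot_labels(mask_paths, threshold=128):
    """Stack per-segment mask captures into a one-hot label tensor.

    mask_paths: one grayscale mask image path per feature class, where
    segment pixels were rendered with the unlit white texture.
    Returns an array of shape (H, W, num_classes) with 0/1 values.
    """
    masks = []
    for path in mask_paths:
        img = np.asarray(Image.open(path).convert("L"))
        masks.append((img >= threshold).astype(np.uint8))  # binarize
    return np.stack(masks, axis=-1)

# Hypothetical usage: eight feature classes from one randomized scene.
# labels = assemble_one_hot_labels(
#     [f"scene_0001_class{i}.png" for i in range(8)])
```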
## IV Model Training

To validate the effectiveness of the synthetic image generator, we conducted experiments comparing models trained with varying amounts of real labelled images augmented with synthetic data. Our available data consisted of 14,125 labelled images of real vehicles in a production line, each of which contained one or more examples of eight distinct feature classes. From this dataset, a 10% holdout set was randomly selected for validating models, leaving 12,712 images in the real dataset $R$ for training. The frequency of each feature's appearance is described in Table I, where the subset of the real image set $R$ with one or more pixels belonging to a feature class $f$ is given as $R_f = \{e \mid e \in R \text{ and } f \in e\}$, with an example frequency of $|R_f|/|R|$. Using the synthetic image generator described in Section III, we rendered a set of 40,406 synthetic images and labels $S$ with the same feature classes as $R$. Due to a slightly smaller horizontal range of camera freedom, some classes were represented more or less heavily in the synthetic set, as detailed in Table I. However, as we weight each class equally in our metrics and present aggregate statistics over the entire dataset, we deemed that the example frequency weights would not affect the conclusions.

TABLE I: Feature Example Frequency in Image Sets

| feature | examples ($R$) | frequency ($R$) | examples ($S$) | frequency ($S$) |
|---|---|---|---|---|
| back door | 5,994 | 47.09% | 40,231 | 99.57% |
| back window | 5,854 | 45.99% | 40,263 | 99.65% |
| rear window | 4,844 | 38.05% | 24,080 | 59.60% |
| front door | 6,599 | 51.84% | 22,308 | 55.21% |
| front window | 5,985 | 47.02% | 26,171 | 64.77% |
| door handle | 4,670 | 36.69% | 40,084 | 99.20% |
| mirror | 3,897 | 30.62% | 6,932 | 17.16% |
| tail light | 4,511 | 35.44% | 8,501 | 21.04% |

### IV-A Training Methodology

Images and labels were used to train U-net[31] convolutional neural network models implemented in TensorFlow[32] 2.0.0 and Keras[33] 2.2.4-tf. Models were trained on an NVIDIA DGX-2 with Tesla V100 GPUs running Ubuntu 18.04.4 LTS. In this section, all U-net model structures and parameters are identical with the exception of the input datasets. The U-net implementation was derived from code provided by Jha et al. in their DoubleU-net supplement, to be consistent with the further work in Section V. From the original U-net description, the only significant difference is the use of batch normalization[34] after the convolutional layers along the contracting path, which resulted in more consistent training and better generalization in our use case. A hyperparameter search using real and synthetic datasets revealed optimal parameters that were similar enough to avoid differentiation between the domains. As the purpose of this work is to explore the tradeoffs of synthetic vs. real data, we chose parameters that resulted in consistent and stable training sessions rather than strictly optimizing for the highest possible accuracy. For our datasets, a dropout probability of 0.30, a batch size of 64, and a learning rate of 0.0020 resulted in models that converged quickly and consistently within a reasonable limit on training time and generalized well to the validation data. As synthetic data can be seen as a form of data augmentation, we chose to forego traditional augmentation techniques (randomized cropping, gamma shifts, etc.) to present clear results, with the single exception of randomly flipping all training images horizontally to match the real dataset's imaging of both sides of the vehicle. During training, models were evaluated each epoch against the disjoint validation set. To prevent overfitting, we used an early stopping mechanism to halt training and revert to the best weights if no improvement in validation set prediction loss was made over 30 epochs.
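A minimal Keras sketch of this training configuration follows. The dropout probability, learning rate, batch size, and early stopping criteria are the values reported above; `build_unet`, the input resolution, and the loss function are illustrative assumptions rather than details from the paper:

```python
import tensorflow as tf

# `build_unet` is an illustrative placeholder for the U-net builder
# derived from the DoubleU-net supplement; it is not code from the paper.
from model import build_unet

# Input resolution and loss are assumptions; the hyperparameters below
# are the reported values.
model = build_unet(input_shape=(256, 256, 3), num_classes=8, dropout=0.30)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0020),
    loss="binary_crossentropy",
)

# Halt if validation loss fails to improve for 30 epochs, then revert
# to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=30, restore_best_weights=True
)

# model.fit(train_x, train_y, batch_size=64, epochs=10_000,
#           validation_data=(val_x, val_y), callbacks=[early_stop])
```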
#### IV-A1 Metrics

While the image generation and training techniques share applicability with object detection and instance segmentation models that have more actionable metrics, we quantify the performance of a standard multiclass U-net segmentation model simply with the per-pixel mean intersection-over-union (mean IoU), using uniform class weighting and a prediction threshold of 50%.
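For concreteness, a minimal NumPy sketch of this metric, assuming one-hot ground truth and per-class model outputs in [0, 1]:

```python
import numpy as np

def mean_iou(y_true, y_pred, threshold=0.5):
    """Per-pixel mean IoU with uniform class weighting.

    y_true: (N, H, W, C) one-hot ground-truth masks.
    y_pred: (N, H, W, C) per-class model outputs in [0, 1],
            thresholded at 50% as described above.
    """
    pred = y_pred >= threshold
    true = y_true.astype(bool)
    inter = np.logical_and(true, pred).sum(axis=(0, 1, 2))
    union = np.logical_or(true, pred).sum(axis=(0, 1, 2))
    # Classes absent from both truth and prediction score 0 here; how
    # to handle that edge case is our choice, not the paper's.
    iou = inter / np.maximum(union, 1)
    return iou.mean()  # uniform weighting across the C classes
```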
### IV-B Real Dataset Supplementation

Figure 2: Aggregated mean prediction IoU (a) of U-net models trained on random samples from real and synthetic datasets. Models augmented with synthetic data showed up to 24.9% higher prediction accuracy (b) than the baseline, particularly with limited amounts of real training images. The $p$-values (c) of one-sided T-tests, $H_{a}:\overline{IoU}(M_{r,s})>\overline{IoU}(M_{r,0})$, show significant accuracy increases ($p\leq 0.05$, highlighted) in most models trained with 256 or fewer real images.

To determine how supplementing a dataset of real images with synthetic images would affect model training and accuracy, we trained instances of multiple model classes with different mixtures of images from both sets. Subsets of the real image set $R$ of sizes $N=\{0,16,32,\ldots,8192\}$ were paired with subsets of the synthetic image set $S$ from the same size range, forming the axes of the 11x11 matrices shown in Figure 2, with a model class at each intersection. For each model class, random samples from $R$ and $S$ were used to train individual U-net segmentation models with the parameters reported above. The number of models trained in each class was sufficient that the confidence interval ($\alpha=0.95$) width of the mean truth/prediction IoU measurements on the real image validation set was less than 5% of the mean value, requiring between 7 and 30 model instances for each image set size pair. We refer to the resulting set of segmentation models as $M$, where $m_{r,s,i}\in M:r\in N,s\in N,i\in[0..|M_{r,s}|)$ is one instance of a class of U-net models trained on $(r,s)$ random images from datasets $R$ and $S$, and we report aggregate statistics over the model class $M_{r,s}$ at each cell in the matrices of Figure 2.

Figure 2(a) aggregates the mean IoU predictions of each trained model class on the unseen validation set from the real image domain. We observe a general trend of increasing accuracy with larger samples of real images, with diminishing returns as the training images grow to sufficiently represent the domain features. Along the horizontal axis, we see that augmentation with synthetic data tended to increase accuracy, with greater yields in models trained on smaller real datasets. We also observe that models trained on purely synthetic data tend to predict the real domain poorly, even with thousands of examples.

Figure 3: Mean IoU of model predictions on a validation set of 1176 real images. Each IQR plot describes between 7 and 30 individual U-net models trained on random subsets of real and synthetic images. In general, augmenting smaller ($\leq 256$) sets of real images resulted in higher accuracy and less variation in the trained models, with diminishing returns as the real data became sufficiently representative of the domain.

To discuss the results of synthetic data augmentation, we first look at the effects of augmentation on model reliability. Figure 3 shows the summary statistics of mean validation set predictions for model classes trained on purely real images and those augmented with 2048 synthetic images, detailing columns 0 and 2048 from Figure 2. Models trained with smaller random samples of real images tended to show more variation in their resulting prediction accuracy. We observe that augmentation tended to increase mean accuracy and decrease variance in models trained with fewer than 256-512 real images.

Augmenting the real training sample with varying amounts of synthetic data yields better results depending on how accurate the model is to begin with. Figure 2(b) reshapes the data in Figure 2(a) as a percentage increase in mean prediction IoU relative to that of the pure real set (column 0). We can see that augmenting models trained with 512 or more real images results in only a marginal increase, at best 0.6%. However, in models trained with 256 or fewer real images, the accuracy increase is substantial, up to 25.0% when only 16 real images are available. We can also see that the addition of any amount of real images results in models that are more accurate than those trained on synthetic data alone. This is supported by the $p$-values of one-sided T-tests, $H_{a}:\overline{IoU}(M_{r,s})>\overline{IoU}(M_{r,0})~\forall~r,s\in N$, shown in Figure 2(c) with $p<0.05$ highlighted.

Figure 4: Mean prediction IoU of U-net models on real images, viewed by the ratio of real to synthetic data in the training datasets. Each trend exhibits an inflection point where accuracy decreased, presumably due to the limited capacity of the model to encompass both the real and synthetic domains.

Figure 2(b) also shows that in some cases, particularly those with 512 or more real images, the addition of large amounts of synthetic data correlates with a slight decrease in prediction accuracy, presumably due to dilution of the samples from the real domain and a limited capacity of the model to encompass both the real and synthetic domains. We can observe this trend more clearly when viewing the relationship between real and synthetic image set sizes as a ratio, shown in Figure 4. Each real image set size exhibits an inflection point where accuracy declines, which we suspect is dependent on the capacity of the model and the similarity between real and synthetic data in a particular use case.

To visualize the differences in prediction accuracy, Figure 5 presents the segmentation maps predicted by 10 different models, trained on 16-256 real images and augmented with either 0 or 2048 synthetic images. In contrast to the randomly selected images used to train the models in Figure 2, each real dataset larger than 16 images is a superset of the smaller datasets, and the same real datasets and the same 2048-image synthetic dataset are reused in each of the augmented models. For this example image, the quality of the predictions is fairly low in the pure real models, limiting usefulness depending on the use case.
The addition of synthetic images results in clearly defined door/window boundaries with even the smallest real training set, and better identification of smaller features such as the door handles at 64 real images, compared to requiring 128 without augmentation.

Figure 5: Segmentation map predictions of U-net models trained with pure real images (top) vs. the same training sets augmented with 2048 synthetic images (bottom). The input image and ground truth are shown on the left for reference.

## V Transfer Learning

Another potential use case for synthetic data is in pretraining models for later improvement with real data, either as a base for multiple specialized models or as a starting point for incremental training as real data becomes available. Our results from the previous section indicate that U-net models trained with 256 or fewer images from our real image dataset suffer from low applicability to new images, so in this section we focus on pretrained model refinement with small numbers of real images. The goals and requirements for transfer learning can vary widely, but in our exploration we focus on use cases stemming from the unavailability of real labelled training images and from the need to specialize a general model for a particular task. As such, we quantify results in terms of accuracy (in this case, mean prediction IoU on real data) and the training time of the model specialization.

### V-A U-net

There are many strategies for transfer learning with the U-net model, most involving freezing, reinitializing, adding, or removing layers. It is beyond the scope of this work to explore the many factors involved in choosing the optimal strategy for a particular use case. We instead focus on a relatively simple technique that compares well to our work with a more advanced model in the next subsection, which is to train a U-net with purely synthetic data and then continue training with real images while optionally freezing or replacing part of the model. Our base synthetic-trained U-net model uses the parameters described in the previous section, trained with a larger dataset of 36,480 synthetic images, and achieved a mean prediction IoU of $0.954$ on the holdout set from the same synthetic domain. Accuracy on segmentation of real images was similar to the experiments with large pure synthetic datasets in the previous section, reaching a mean prediction IoU of only $0.618$ on that domain. Starting with an identical U-net base model initialized with random weights, experiments were configured as follows (a minimal sketch of the initialization variants follows the list):

* synth-random: only the contracting path (encoder) was initialized with weights from the pretrained base, allowing the untrained expanding path (decoder) to train completely on real data;
* synth-synth: both the encoder and decoder were initialized with pretrained base weights;
* VGG19-random: the encoder part of the model was replaced with VGG19, detailed below, and the decoder left with random weights;
* VGG19-synth: the encoder was replaced with VGG19, and the decoder initialized with pretrained base weights;
* control: the base model was used without freezing or replacing layers, and the initial random weights were unchanged. Note that this is the same configuration as the models in the previous section, and the resulting model is trained on purely real data.
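A minimal Keras sketch of the first two variants follows; the VGG19 variants would instead swap in a `tf.keras.applications.VGG19` encoder. The builder signature and the layer-naming convention are assumptions about our implementation, not details from the paper:

```python
import tensorflow as tf
from model import build_unet  # same illustrative placeholder as above

def make_transfer_unet(pretrained_weights, init_decoder, freeze_encoder):
    """Sketch of the synth-random / synth-synth initialization variants.

    init_decoder=False -> synth-random (encoder weights only);
    init_decoder=True  -> synth-synth (encoder and decoder weights).
    Identifying layers by an "encoder" substring in their names is an
    assumption about the builder.
    """
    base = build_unet(input_shape=(256, 256, 3), num_classes=8, dropout=0.30)
    base.load_weights(pretrained_weights)
    model = build_unet(input_shape=(256, 256, 3), num_classes=8, dropout=0.30)
    for src, dst in zip(base.layers, model.layers):
        in_encoder = "encoder" in dst.name
        if in_encoder or init_decoder:
            dst.set_weights(src.get_weights())  # copy pretrained weights
        if in_encoder and freeze_encoder:
            dst.trainable = False               # frozen-encoder variant
    return model  # recompile before training so `trainable` takes effect
```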
Finally, we doubled the above configurations with another parameter, choosing either to freeze the layers of the encoder portion of the model or to allow the secondary training with real data to propagate and update the encoder weights. Our expectation was that freezing the encoder section of the model would reduce training time, as there were fewer parameters to update with each back-propagation, but could reduce the model's ability to adapt to the new data. Table II details the number of trainable parameters and mean training time per image-epoch for the resulting model architectures, which indeed shows decreased time per image with fewer parameters to update. For some experiments, the encoder layers of the model were replaced with a VGG19[25] model pretrained with weights from ImageNet[1], following the same procedure as the work done in [8] for comparability. With the models initialized with pretrained weights, we continued training using randomly selected subsets of real images until convergence, using the stopping criteria described in the previous section. All permutations of model variant and real image sample size were repeated 30 times.

TABLE II: Model Size and Training Time

| model | variant | total parameters (millions) | trainable parameters (millions) | training time per epoch-image (s) |
|---|---|---|---|---|
| U-net | | 7.77 | 7.77 | 0.0130 |
| U-net | frozen encoder | 7.77 | 3.05 | 0.0120 |
| U-net | VGG19 encoder | 23.86 | 23.86 | 0.0176 |
| U-net | frozen VGG19 encoder | 23.86 | 3.83 | 0.0153 |
| W-net | frozen 1st U-net | 10.11 | 2.34 | 0.0146 |
| W-net | frozen VGG19 encoder | 26.59 | 6.56 | 0.0195 |

Figure 6: Comparisons of mean prediction IoU (a) and training time (b) of secondary training of pretrained U-net models, with the weights of the contracting path (encoder) either trainable or frozen.

We first compare on the frozen/trainable encoder variable, visualized in Figure 6. In models using VGG19 as the encoder, we observed greater prediction accuracy and lower training time when the encoder was frozen, while models using our encoder pretrained on synthetic data tended to perform better when the encoder was not frozen during secondary training. This is perhaps due to the large difference in the number of encoder neurons, as propagating the training feedback from each example through the larger VGG19 encoder is more costly and less impactful. We speculate that limiting the neurons being updated each epoch led to faster model convergence, while the models with more trainable weights slowed in training progress enough to trigger early stopping. The training logs support this conjecture, showing extremely slow improvement before training was terminated. It is possible that, given enough time, the accuracy differences between trainable and frozen versions of the same model would shrink. However, since all models use the same early stopping criteria, we present the results as comparable in a practical sense. In the remainder of this work, comparisons with these models use the better-performing frozen encoders in the case of VGG19, and trainable encoders for the models pretrained on synthetic data.

Figure 7: Limiting the model permutations to encoder type and decoder initial weights (synthetic pretrained vs. random), we observed a sizeable tradeoff between mean prediction IoU (a) and training time (b) when compared to models initialized from randomness.
Next, we compare the mean prediction accuracy of the retrained models with frozen encoders to the control models trained from randomly initialized weights. In cases with 64 or fewer real images, we saw an increase in accuracy over a control model trained on purely real data. However, in larger real image classes, and against all control models trained on a mix of real and synthetic data, we saw significantly lower accuracy in the specialized models. We again speculate that the model training may have slowed enough to trigger our early termination criteria, and that a combination of a refined learning rate, adjusted early termination parameters, and lengthened training time may result in improved accuracy. Our goal in this work is comparability between experiments, though, so we present these results as a baseline to be improved upon. In comparing the prediction accuracy of U-net models with different decoder weights, we saw mixed results; the pretrained synthetic data weights appeared to result in lower performance in models with synthetic-weighted encoders trained on 16 or 32 real images, while having the opposite effect in models with VGG19 encoders. In models trained on 64 or more real images, the results were less clear, and a two-sided T-test showed insufficient difference to conclude that the results are drawn from different distributions at $p=0.05$. Comparing the encoder paths of the different model classes was more consistent, in that the U-net default layers trained with synthetic data resulted in higher mean prediction accuracy than models using the VGG19 encoder trained on ImageNet, across all real data sample sizes. We conclude from these findings that a relatively small encoder (4.72M parameters) trained on a few thousand images drawn from a synthetic domain similar to the target can outperform the already impressive feature extraction of a large (23.03M parameters) encoder trained on over a million generic real images.

Figure 7 compares the training times of retrained models to those of the control model for each real sample size class, with results between 10.0% and 20.8% of the time required for the control. The reduction can be accounted for by both the smaller number of trainable parameters in the retrained models with frozen encoders and the number of epochs required to converge. When the mean training time of 11,648 seconds for the purely synthetic pretraining (r=0, s=2048) is included, the total training time for a retrained U-net is comparable to that of the control.

### V-B Double-U-net

Since the introduction of U-net in 2015, a number of derivative models have been proposed that improve its applicability to certain use cases. One of these, the Double-U-net[8], improves upon the localization of segment instances by dividing the task between, as the name suggests, two linked U-net models. The first U-net, using a VGG19 encoder trained on ImageNet, outputs feature maps from each level of the encoding process as well as an intermediate segmentation map from the decoder. The segmentation map is paired with the original image as input to the second U-net, while the feature map outputs of the first U-net are linked to corresponding layers of the second U-net decoder. The authors' results showed impressive accuracy gains over a standard U-net on a variety of medical segmentation datasets.
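To make the wiring concrete, here is a minimal Keras functional-API sketch of that two-stage arrangement, in which the first network's intermediate segmentation map gates the image fed to the second network and the first encoder's feature maps are passed along as extra skip connections. The model signatures and shapes are illustrative assumptions, not the authors' code:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_wnet(first_unet, build_second_unet, input_shape=(256, 256, 3)):
    """Sketch of DoubleU-net-style wiring.

    first_unet: Keras model mapping an image to
        (seg_map, [encoder_feature_maps...]); signature is illustrative.
    build_second_unet: function mapping (gated_image, extra_skips) to a
        segmentation map, standing in for the second (trainable) U-net.
    """
    image = layers.Input(shape=input_shape)
    seg1, enc_feats = first_unet(image)
    # Pixel-wise multiplication gates the input image with the first
    # network's segmentation map (shapes assumed broadcast-compatible).
    gated = layers.Multiply()([image, seg1])
    # The first encoder's feature maps are concatenated into the second
    # decoder alongside the second encoder's own skip connections.
    seg2 = build_second_unet(gated, extra_skips=enc_feats)
    return tf.keras.Model(inputs=image, outputs=seg2)
```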
As an exercise in applying transfer learning to a more complex model, we chose the Double-U-net (abbreviated W-net for the remainder of this work) because of its intuitive design as a logical extension of the standard U-net, as well as our experience and success using the model in production use cases. Our experiments in this section expand on the previous section for ease of comparison, with the caveat that we made some implementation choices toward this goal while potentially sacrificing some peak performance. For example, the authors of W-net used squeeze-excite blocks[35] at the end of each convolutional block, which is not part of the original U-net specification. Additionally, in our image set, vehicle features were largely scale-invariant, as the images were captured from a fixed viewpoint with low variation in the vehicle's distance from the camera. This warranted omission of the Atrous Spatial Pyramid Pooling (ASPP) block between the encoder and decoder in each U-net, which was used in [8] to handle feature scaling. We conducted a limited exploration and found these features to contribute little to no performance gain in our particular use case, so we believe that the simplified model is a better comparison to the transfer learning results on a simple U-net in the previous section. Our W-net implementation is simply two U-net models, identical to the implementation described in the previous section, with the following two additions. First, as in [8], the U-nets are connected with a pixel-wise multiplication layer, such that the second U-net receives the original image augmented with the segmentation map output of the first U-net. Second, the encoder layer-wise feature maps from the first U-net are concatenated to the inputs of the second U-net decoder, in the same manner as the feature maps from the second U-net encoder. Following the work in the previous section and as an analog to [8], we chose to construct Double-U-nets with two model variations. In the first model, we use a U-net trained on synthetic data as described above, with the entire first U-net frozen. The second model, analogous to [8], uses a frozen VGG19 encoder and a trainable uninitialized decoder. In both models, the second U-net is initialized with random weights and is fully trainable. Our hyperparameter search revealed optimal parameters very close to those used to train the individual U-nets, so we opted to keep the original parameters for comparability.

Figure 8: Results of transfer learning on U-net and W-net models with (first) encoders trained on synthetic data or VGG19/ImageNet, compared to the training of control models initialized with random weights. The mean prediction IoU (a) and training time (b) suggest improved accuracy of synthetic-trained encoders, but in some cases at a time cost.

Our results, shown in Figure 8, show accuracy improvements using the W-net model with the VGG19 encoder over all training image size classes, and similar or better results with the synthetic-trained first U-net. The accuracy improvements correlate with an increase in training cost, however, especially for the VGG19-based models with more layers to train. The conclusion we draw from these results is that secondary training with a multipart model like W-net can be a viable accuracy enhancement if the time cost can be justified.
## VI Conclusions

We found that, for this image segmentation problem, synthetic images were an effective technique for augmenting limited sets of real training data. We observed that models trained on purely synthetic images had a very low mean prediction IoU on real validation images. We also observed that adding even very small amounts of real images to a synthetic dataset greatly improved accuracy, and that models trained on datasets augmented with synthetic images were more accurate than those trained on real images alone. We noted that for this domain, 256 to 512 images seemed to be enough to train a reasonably accurate model, with rapidly diminishing returns on adding synthetic images to the mix, eventually resulting in lower accuracy as the real:synthetic ratio dropped. In use cases that benefit from incremental training or model specialization, we found that pretraining on synthetic images provided a usable base model for transfer learning. While we observed that models trained in a single session outperformed those pretrained on synthetic images and retrained on real data, we also saw that up to 90% of the total training time could be completed in the pretraining phase. We conclude that synthetic image generation can be beneficial to segmentation model training when insufficient images are available to train a satisfactory model. However, testing must be done to find the break point beyond which adding more synthetic images no longer results in higher mean accuracy.

## VII Future Work

A natural progression from this work is to study the characteristics of synthetic data and identify features that contribute to model accuracy and can be adapted to more closely resemble the real domain, while separating less important features that should be randomized. Recent work in the field of Generative Adversarial Networks (GANs) could be used to automate the feature identification process and help design more robust synthetic image rendering processes. Another interesting topic would be exploring how synthetic images can be used in conjunction with other effective data augmentation techniques, which unfortunately was beyond the scope of this work.

## References

* [1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _2009 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2009, pp. 248–255.
* [2] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” in _European Conference on Computer Vision_. Springer, 2016, pp. 102–118.
* [3] H. Su, C. R. Qi, Y. Li, and L. J. Guibas, “Render for CNN: Viewpoint estimation in images using CNNs trained with rendered 3d model views,” in _Proceedings of the IEEE International Conference on Computer Vision_, 2015, pp. 2686–2694.
* [4] S. Hinterstoisser, O. Pauly, H. Heibel, M. Martina, and M. Bokeloh, “An annotation saved is an annotation earned: using fully synthetic training for object detection,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops_, Oct. 2019.
* [5] X. Peng, B. Sun, K. Ali, and K. Saenko, “Learning deep object detectors from 3d models,” in _Proceedings of the IEEE International Conference on Computer Vision_, 2015, pp. 1278–1286.
* [6] D. Dwibedi, I. Misra, and M. Hebert, “Cut, paste and learn: Surprisingly easy synthesis for instance detection,” in _Proceedings of the IEEE International Conference on Computer Vision_, 2017, pp. 1301–1310.
* [7] B. Sun and K. Saenko, “From Virtual to Reality: Fast Adaptation of Virtual Object Detectors to Real Domains,” in _BMVC_, vol. 1, no. 2, 2014, p. 3.
* [8] D. Jha, M. A. Riegler, D. Johansen, P. Halvorsen, and H. D. Johansen, “DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation,” _arXiv preprint arXiv:2006.04868_, 2020.
* [9] Y.-C. Jhang, A. Palmar, B. Li, S. Dhakad, S. K. Vishwakarma, J. Hogins, A. Crespi, C. Kerr, S. Chockalingam, C. Romero, A. Thaman, and S. Ganguly, “Training a performant object detection ML model on synthetic data using Unity Perception tools,” Sep. 2020. [Online]. Available: https://blogs.unity3d.com/2020/09/17/training-a-performant-object-detection-ml-model-on-synthetic-data-using-unity-perception-tools/
* [10] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 39, no. 6, pp. 1137–1149, 2016.
* [11] M. Yan, I. Frosio, S. Tyree, and J. Kautz, “Sim-to-real transfer of accurate grasping with eye-in-hand observations and continuous control,” _arXiv preprint arXiv:1712.03303_, 2017.
* [12] R. Nevatia and T. O. Binford, “Description and recognition of curved objects,” _Artificial Intelligence_, vol. 8, no. 1, pp. 77–98, 1977.
* [13] D. G. Lowe, “Three-dimensional object recognition from single two-dimensional images,” _Artificial Intelligence_, vol. 31, no. 3, pp. 355–395, 1987.
* [14] T. Hodaň, V. Vineet, R. Gal, E. Shalev, J. Hanzelka, T. Connell, P. Urbina, S. N. Sinha, and B. Guenter, “Photorealistic image synthesis for object instance detection,” in _2019 IEEE International Conference on Image Processing (ICIP)_. IEEE, 2019, pp. 66–70.
* [15] Y. Zhang, S. Song, E. Yumer, M. Savva, J.-Y. Lee, H. Jin, and T. Funkhouser, “Physically-based rendering for indoor scene understanding using convolutional neural networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2017, pp. 5287–5295.
* [16] Z. Li and N. Snavely, “CGIntrinsics: Better intrinsic image decomposition through physically-based rendering,” in _Proceedings of the European Conference on Computer Vision (ECCV)_, 2018, pp. 371–387.
* [17] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 23–30.
* [18] J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox, and S. Birchfield, “Deep object pose estimation for semantic robotic grasping of household objects,” _arXiv preprint arXiv:1809.10790_, 2018.
* [19] J. Borrego, A. Dehban, R. Figueiredo, P. Moreno, A. Bernardino, and J. Santos-Victor, “Applying domain randomization to synthetic data for object category detection,” _arXiv preprint arXiv:1807.09834_, 2018.
* [20] C. Mitash, K. E. Bekris, and A. Boularias, “A self-supervised learning system for object detection using physics simulation and multi-view pose estimation,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 545–551.
* [21] A. Prakash, S. Boochoon, M. Brophy, D. Acuna, E. Cameracci, G. State, O. Shapira, and S. Birchfield, “Structured domain randomization: Bridging the reality gap by context-aware synthetic data,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 7249–7255.
* [22] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: Bridging the reality gap by domain randomization,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, 2018, pp. 969–977.
* [23] V. Iglovikov and A. Shvets, “TernausNet: U-net with VGG11 encoder pre-trained on ImageNet for image segmentation,” _arXiv preprint arXiv:1801.05746_, 2018.
* [24] M. Frid-Adar, A. Ben-Cohen, R. Amer, and H. Greenspan, “Improving the segmentation of anatomical structures in chest radiographs using U-net with an ImageNet pre-trained encoder,” in _Image Analysis for Moving Organ, Breast, and Thoracic Images_. Springer, 2018, pp. 159–168.
* [25] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_, 2014.
* [26] T. Inoue, S. Choudhury, G. De Magistris, and S. Dasgupta, “Transfer learning from synthetic to real images using variational autoencoders for precise position detection,” in _2018 25th IEEE International Conference on Image Processing (ICIP)_. IEEE, 2018, pp. 2725–2729.
* [27] F. Zhang, J. Leitner, M. Milford, and P. Corke, “Sim-to-real transfer of visuo-motor policies for reaching in clutter: Domain randomization and adaptation with modular networks,” _world_, vol. 7, no. 8, 2017.
* [28] P. Y. Donzallaz, “How to set up Unity’s High Definition Render Pipeline for high-end visualizations,” Jan. 2020. [Online]. Available: https://blogs.unity3d.com/2020/01/09/how-to-set-up-unitys-unitys-high-definition-render-pipeline-for-high-end-visualizations/
* [29] E. Martin and L. Vo Van, “We have you covered with the measured materials library,” Feb. 2019. [Online]. Available: https://blogs.unity3d.com/2019/02/08/we-have-you-covered-with-the-measured-materials-library/
* [30] S. Lagarde, S. Lachambre, and C. Jover, “An artist-friendly workflow for panoramic HDRI,” in _ACM SIGGRAPH 2016 Courses_, ser. SIGGRAPH ’16. New York, NY, USA: Association for Computing Machinery, 2016. [Online]. Available: https://doi.org/10.1145/2897826.2927353
* [31] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2015, pp. 234–241.
* [32] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/
* [33] F. Chollet _et al._, “Keras,” https://keras.io, 2015.
* [34] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in _International Conference on Machine Learning_. PMLR, 2015, pp. 448–456.
* [35] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018, pp. 7132–7141.
††thanks: These authors contributed equally to this work.

# Quantum simulation of the two-dimensional Weyl equation in a magnetic field

Y. Jiang¹, M.-L. Cai¹ ², Y.-K. Wu¹, Q.-X. Mei¹, W.-D. Zhao¹, X.-Y. Chang¹, L. Yao¹ ², L. He¹, Z.-C. Zhou¹, and L.-M. Duan¹ <EMAIL_ADDRESS>

¹Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, 100084, P. R. China
²HYQ Co., Ltd., Beijing, 100176, P. R. China

###### Abstract

Quantum simulation of 1D relativistic quantum mechanics has been achieved in well-controlled systems like trapped ions, but properties like spin dynamics and response to external magnetic fields that appear only in higher dimensions remain unexplored. Here we simulate the dynamics of a 2D Weyl particle. We show the linear dispersion relation of the free particle and the discrete Landau levels in a magnetic field, and we explicitly measure the spatial and spin dynamics, from which the conservation of helicity and properties of antiparticles can be verified. Our work extends the application of an ion trap quantum simulator in particle physics with the additional spatial and spin degrees of freedom.

Relativistic quantum mechanics Greiner (2000); Peskin and Schroeder (2018) combines two of the most important theories of modern physics: special relativity and quantum mechanics. It possesses negative-energy solutions, which naturally lead to the prediction of antimatter, and it also explains the half-integer value of spins and their magnetic moments. The Weyl equation Weyl (1929) is one of the simplest relativistic quantum mechanical equations. Its solution, a Weyl fermion, is a spin-$1/2$ particle with zero mass and a featured linear dispersion relation, and can be derived from the famous Dirac equation Greiner (2000); Peskin and Schroeder (2018) in the massless limit. As one of the fundamental particles allowed by relativistic quantum mechanics Greiner (2000); Peskin and Schroeder (2018), the Weyl fermion was long believed to describe neutrinos, which are however extremely difficult to detect and are now known to have nonzero masses Fukuda _et al._ (1998); Ahmad _et al._ (2001).
This leaves many properties of the Weyl particles to be analyzed only theoretically, or through the idea of quantum simulation Georgescu _et al._ (2014); Cirac and Zoller (2012) using other well-controlled quantum systems. Indeed, quantum simulation of relativistic quantum mechanical systems has been proposed Freedman _et al._ (2002); Lamata _et al._ (2007); Lan _et al._ (2011); Jordan _et al._ (2012); Hauke _et al._ (2013); Li _et al._ (2019) and performed Gerritsma _et al._ (2010, 2011); Zhang _et al._ (2018); Kokail _et al._ (2019) in various physical systems like trapped ions Leibfried _et al._ (2003); Blatt and Roos (2012); Bruzewicz _et al._ (2019). To date, Weyl fermions have been realized in photonic crystals Lu _et al._ (2015) and in condensed matter systems Xu _et al._ (2015); Lv _et al._ (2015), but in these systems only the spectral or transport properties can be measured Armitage _et al._ (2018), while direct study of the Weyl particle dynamics is still lacking. On the other hand, massive Dirac particles have been simulated in an ion trap Gerritsma _et al._ (2010, 2011), which can be reduced to massless Weyl particles by tuning the experimental parameters. Nevertheless, to minimize the required degrees of freedom to be controlled, the experiments so far have been restricted to dynamics in 1D, where interactions with external magnetic fields and the evolution of spin states become trivial.

In this work, we report the quantum simulation of a 2D Weyl fermion, which allows us to explore much richer spatial and spin dynamics. Let us start from the 3D Dirac equation for a charged particle in a magnetic field Greiner (2000) (we have chosen natural units by setting $\hbar=1$ and $c=1$ for simplicity)

$i\frac{\partial\psi}{\partial t}=\hat{H}\psi=[\boldsymbol{\hat{\alpha}}\cdot(\boldsymbol{\hat{p}}-e\boldsymbol{\hat{A}})+m\hat{\beta}]\psi,$ (1)

where $m$ is the mass of the particle, $e$ the electric charge, $\boldsymbol{\hat{A}}$ the vector potential of the magnetic field, and $\boldsymbol{\hat{p}}$ the momentum operator. $\hat{\alpha}_{j}$ ($j=x,y,z$) and $\hat{\beta}$ are Dirac matrices satisfying $\{\hat{\alpha}_{i},\hat{\alpha}_{j}\}=2\hat{I}\delta_{ij}$, $\hat{\beta}^{2}=\hat{I}$ and $\{\hat{\alpha}_{j},\hat{\beta}\}=0$. In 3D space, we thus need four matrices anti-commuting with each other, and therefore the Dirac matrices, as well as the Dirac spinor $\psi$, need to have a dimension of at least four Greiner (2000).

Figure 1: Schematic of simulating a 2D Weyl fermion using an ion trap. We apply a pair of Raman laser beams to a trapped ion, with different frequency components resonant with the red and the blue sidebands of both the $x$ and $y$ oscillation modes, to couple the internal states of the ion to its spatial oscillation modes. By further tuning the phase of these frequency components in the Raman laser, we can choose to couple to $\hat{\sigma}_{x}$ or $\hat{\sigma}_{y}$ of the internal states, and to different quadratures $\hat{x}$ or $\hat{p}$ of each mode, which finally gives us the Hamiltonian in Eq. (2).

Figure 2: Linear dispersion relation of a free Weyl particle and its spectrum in a magnetic field. (a) Spin dynamics $\langle\hat{\sigma}_{y}(t)\rangle$ from an initial state $|+z\rangle|\alpha_{x}=ip/\sqrt{2}\rangle|\alpha_{y}=0\rangle$ with $B=0$ in Eq. (2). Each data point consists of 500 experimental shots and the error bars are estimated from the standard deviation of 10 repetitions. The solid curves are theoretical predictions under the same parameters.
(b) From the slope of the early-time spin dynamics (inset), we can extract the average energy $E(p)$ for a free Weyl particle with momentum $p$. A linear dispersion relation is observed, shown as the black fitting line. The error bars are standard deviations of 5 repetitions. (c) Spin dynamics $\langle\hat{\sigma}_{z}(t)\rangle$ in a magnetic field with $eB=1$ from an initial state $|+z\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$. Due to the larger number of data points and the longer evolution time, here we repeat each point only 200 times and the error bars are one standard deviation of the average value. The red curve is the theoretical prediction with the motional decoherence included. (d) The energy spectrum of a Weyl fermion in a magnetic field can be obtained by a Fourier transform of the spin dynamics. The first three peaks at $2E_{0}=0$, $2E_{1}=2\pi\times 8.3\,$kHz and $2E_{2}=2\pi\times 11.7\,$kHz agree well with the theoretical $E_{n}\propto\sqrt{n}$ scaling, while higher peaks are more difficult to distinguish due to the frequency resolution limited by the total evolution time of $600\,\mu$s. The inset is the ideal result for an evolution time of $5\,$ms without decoherence.

Now if we go to the massless limit $m\to 0$, the matrix $\hat{\beta}$ is removed and hence we only require three anti-commuting matrices. In this situation we can simply set $\hat{\alpha}_{j}=\hat{\sigma}_{j}$ ($j=x,y,z$), where $\hat{\sigma}_{j}$'s are the Pauli operators. This gives us the 3D Weyl equation Greiner (2000) in a magnetic field. We can further define a spin operator $\boldsymbol{\hat{S}}=\boldsymbol{\hat{\sigma}}/2$ such that the total angular momentum $\boldsymbol{\hat{J}}=\boldsymbol{\hat{L}}+\boldsymbol{\hat{S}}$ is conserved ($[\boldsymbol{\hat{J}},\hat{H}]=0$) when the magnetic field is zero ($\boldsymbol{\hat{A}}=0$), where $\hat{L}_{i}=\epsilon_{ijk}\hat{x}_{j}\hat{p}_{k}$ is the orbital angular momentum and $\epsilon_{ijk}$ the Levi-Civita symbol Greiner (2000). Finally, for a minimal model to demonstrate nontrivial spatial and spin dynamics in a magnetic field, we consider a uniform field along the $z$ axis such that $\hat{A}_{y}=B\hat{x}$ and $\hat{A}_{x}=\hat{A}_{z}=0$. The momentum in the $z$ direction is conserved so that we can restrict to the 2D case, which gives us the Hamiltonian to be simulated

$\hat{H}=\hat{\sigma}_{x}\hat{p}_{x}+\hat{\sigma}_{y}(\hat{p}_{y}-eB\hat{x}).$ (2)

Figure 3: Spin and spatial dynamics and conservation of helicity. (a), (b) Spin dynamics in a magnetic field $eB=1$ from the initial state $|+x\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$. Each point is measured 500 times and the error bar is estimated as one standard deviation of the average. Solid curves are theoretical results under the same parameters. (c), (d) Evolution of the kinetic momentum $\boldsymbol{\hat{\pi}}=\boldsymbol{\hat{p}}-e\boldsymbol{\hat{A}}$ under the same conditions. Each quadrature is measured by applying an additional spin-dependent force and fitting the early-time evolution (see Supplementary Materials), with error bars estimated from one standard deviation of the fitting. The error bar of $\langle\hat{\pi}_{y}\rangle=\langle\hat{p}_{y}\rangle-\langle\hat{x}\rangle$ is further computed from those of $\langle\hat{p}_{y}\rangle$ and $\langle\hat{x}\rangle$, as shown in the inset. (e) We plot the ratio between the $x$ and $y$ components of the spin (red) and the kinetic momentum (blue).
The two values are close to each other and follow the same tendency even when the ratios themselves change by orders of magnitude, although small differences exist. Solid curves are the ideal theoretical results. (f) We further compute the azimuthal angles of the spin and the kinetic momentum in the $x$-$y$ plane. The data points are distributed around the diagonal, which indicates that the spin and the kinetic momentum roughly align with each other during the time evolution in a magnetic field, namely the conservation of helicity. Our experimental scheme is sketched in Fig. 1. A pair of counter-propagating $355\,$nm laser beams is shone on a trapped ${}^{171}\mathrm{Yb}^{+}$ ion to create a spin-dependent force Choi _et al._ (2014); Cai _et al._ (2021). The Raman laser is at an angle of $45^{\circ}$ to both the $x$ and the $y$ directions, and the trap frequencies are $\omega_{x}=2\pi\times 2.35\,$MHz and $\omega_{y}=2\pi\times 1.98\,$MHz. By tuning the Raman laser resonant to the red (blue) sideband of the mode $j$ ($j=x,\,y$), we get the Hamiltonian $\hat{H}^{r(b)}_{j}(\Omega,\,\phi)=\Omega[\hat{\sigma}_{-(+)}\hat{a}_{j}^{{\dagger}}e^{i\phi}+h.c.]/2$ where $\hat{\sigma}_{+(-)}$ is the raising (lowering) operator of the qubit, $\hat{a}_{j}(\hat{a}_{j}^{\dagger})$ the annihilation (creation) operator of the corresponding mode, $\Omega$ the sideband Rabi frequency, and $\phi$ is controlled by the phase of the Raman laser. Now we apply four frequency components, driving both sidebands of the two oscillation modes simultaneously as $\hat{H}=\hat{H}_{x}^{r}((1-r)\Omega,\,\pi/2)+\hat{H}_{x}^{b}((1+r)\Omega,\,\pi/2)+\hat{H}_{y}^{r}(\Omega,\,\pi)+\hat{H}_{y}^{b}(\Omega,\,0)=(\Omega/\sqrt{2})[\hat{\sigma}_{x}\hat{p}_{x}+\hat{\sigma}_{y}(\hat{p}_{y}-r\hat{x})]$ where we define $\hat{x}=(\hat{a}_{x}+\hat{a}_{x}^{\dagger})/\sqrt{2}$ and $\hat{p}_{x}=i(\hat{a}_{x}^{\dagger}-\hat{a}_{x})/\sqrt{2}$ and similarly for the $y$ mode. This gives us the desired Hamiltonian with $r=eB$. A characteristic property of the Weyl fermion is its linear dispersion relation in free space. With the above spin-dependent force, we can initialize the phonon states into $|\psi_{m}\rangle=|\alpha_{x}=ip_{x}/\sqrt{2}\rangle|\alpha_{y}=ip_{y}/\sqrt{2}\rangle$ with expectation values of the momentum $\langle\hat{p}_{x}\rangle=p_{x}$ and $\langle\hat{p}_{y}\rangle=p_{y}$. Without squeezing, such a coherent state has a continuous distribution of the momentum; hence we also expect a continuous distribution of the energy of the Weyl particle and will focus on its expectation value. Note that, for a momentum in the $x$-$y$ plane with an angle $\tan\theta=p_{y}/p_{x}$, we have the positive-energy and negative-energy states when the spin is parallel or anti-parallel to it, namely the two eigenstates of $\hat{\sigma}_{\theta}\equiv\hat{\sigma}_{x}\cos\theta+\hat{\sigma}_{y}\sin\theta$. Now if we initialize the momentum state $|\psi_{m}\rangle$ with spin $|+z\rangle$, evolve the system under the free Weyl fermion Hamiltonian, and measure the early-time dynamics for a spin perpendicular to $\hat{\sigma}_{\theta}$ as $\hat{\sigma}_{\theta}^{\perp}\equiv-\hat{\sigma}_{x}\sin\theta+\hat{\sigma}_{y}\cos\theta$, we get $\langle\hat{\sigma}_{\theta}^{\perp}(t)\rangle\approx-t[(\langle+\sigma_{\theta}|\langle\psi_{m}|)\hat{H}(|+\sigma_{\theta}\rangle|\psi_{m}\rangle)-(\langle-\sigma_{\theta}|\langle\psi_{m}|)\hat{H}(|-\sigma_{\theta}\rangle|\psi_{m}\rangle)]=-2E(p)t$ (see Supplementary Materials supplementary ).
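This early-time relation can be checked numerically. The following minimal sketch uses QuTiP Johansson _et al._ (2013), the toolbox cited in our references; the Fock truncation, the time grid, and the choice of units with $\Omega/\sqrt{2}=1$ are illustrative assumptions, not the experimental settings.

```python
import numpy as np
import qutip as qt

N = 25  # Fock-space truncation per mode (illustrative assumption)
ax = qt.tensor(qt.destroy(N), qt.qeye(N), qt.qeye(2))
ay = qt.tensor(qt.qeye(N), qt.destroy(N), qt.qeye(2))
sx = qt.tensor(qt.qeye(N), qt.qeye(N), qt.sigmax())
sy = qt.tensor(qt.qeye(N), qt.qeye(N), qt.sigmay())
px = 1j * (ax.dag() - ax) / np.sqrt(2)
py = 1j * (ay.dag() - ay) / np.sqrt(2)

H_free = sx * px + sy * py  # Eq. (2) with eB = 0, in units of Omega/sqrt(2)

tlist = np.linspace(0.0, 0.1, 11)  # early times only
for p in (0.5, 1.0, 1.5, 2.0):
    # |+z>|alpha_x = i p / sqrt(2)>|alpha_y = 0>, so <p_x> = p, <p_y> = 0
    psi0 = qt.tensor(qt.coherent(N, 1j * p / np.sqrt(2)),
                     qt.coherent(N, 0), qt.basis(2, 0))
    res = qt.sesolve(H_free, psi0, tlist, e_ops=[sy])
    slope = np.polyfit(tlist, res.expect[0], 1)[0]
    print(f"p = {p:.1f}:  E(p) ~ {-slope / 2:.3f}")  # <sigma_y(t)> ~ -2 E(p) t
```

The printed estimates of $E(p)$ should grow approximately linearly with $p$, mirroring the dispersion of Fig. 2(b).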
For example, in Fig. 2(a) we plot the spin dynamics $\langle\sigma_{y}(t)\rangle$ from the initial state $|+z\rangle|\alpha_{x}=ip/\sqrt{2}\rangle|\alpha_{y}=0\rangle$ under various values of $p$, from which we extract a linear dispersion relation $E(p)\propto p$ as shown in Fig. 2(b). In the presence of a magnetic field, the continuous spectrum collapses into discrete Landau levels Landau (1930). For the massless Weyl particle, these energy levels have a unique scaling $E_{n}^{\mathrm{Weyl}}=\sqrt{2neB}$ compared with the Dirac particle $E_{n}^{\mathrm{Dirac}}=\sqrt{m^{2}+2neB}$ Lamata _et al._ (2011), which in the nonrelativistic limit gives us the well-known result $E_{n}^{\mathrm{NR}}=neB/m$ Landau (1930). In Fig. 2(c), we plot the spin dynamics $\langle\sigma_{z}(t)\rangle$ in a magnetic field $eB=1$, which oscillates at the frequency differences between energy levels. Through a Fourier transform, we can identify discrete peaks in the spectrum as shown in Fig. 2(d). Theoretically, these peaks are located at $2E_{n}$ (inset, see Supplementary Materials supplementary ), where the factor of $2$ comes from the positive and negative energy states. In this experiment, the evolution time up to $600\,\mu$s (which should be much shorter than the decoherence time of the motional states) restricts our frequency resolution so that high energy levels cannot be distinguished, but the first three peaks at $n=0,\,1,\,2$ already show good agreement with the predicted $\sqrt{n}$ scaling. Next we examine the spatial and spin dynamics of a Weyl particle in the magnetic field. As we can see from Fig. 3(a)-(d), in general, the components of the spin or the momentum are not conserved (except for $p_{y}$ in the inset, which is related to our gauge choice). However, as shown in Fig. 3(e) and (f), the directions of the spin $\boldsymbol{S}$ and the kinetic momentum $\boldsymbol{\pi}\equiv\boldsymbol{p}-e\boldsymbol{A}$ stay roughly the same during the evolution. This is known as the conservation of helicity $\hat{h}\equiv\boldsymbol{\hat{\sigma}}\cdot\boldsymbol{\hat{\pi}}/|\boldsymbol{\hat{\pi}}|$ Greiner (2000) in a magnetic field, where $|\boldsymbol{\hat{\pi}}|$ represents the magnitude of the kinetic momentum. Strictly speaking, this conservation is proved for the scattering problem where the incoming and the outgoing particles do not feel the magnetic field Goldhaber (1977), and from the theoretical curves we do observe small fluctuations between the orientations of these two vectors even under the ideal evolution. On the other hand, what we simulate here is more like the Larmor precession of a spin in a uniform magnetic field. By showing that the spin precession speed matches the spatial rotation rate $\omega=eB/m$ of the particle, this naturally explains the Landé $g$-factor of 2 for spin-1/2 particles, which had to be added into the theory by hand before relativistic quantum mechanics Greiner (2000). Figure 4: Trajectories of 2D Weyl particles with opposite helicities. Using the method of Fig. 3, we can plot the trajectory in a magnetic field $eB=1$ from the initial state $|+x\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$ (red) and $|-x\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$ (blue). These correspond to a Weyl particle with positive helicity and an anti-particle with negative helicity, respectively. Theoretically, the particle circles in real space with 2D Zitterbewegung, as shown by the solid curves. Here our measurement precision cannot resolve the Zitterbewegung, but it can clearly be seen that the particle and the anti-particle have opposite charges as they rotate in opposite directions in a magnetic field. 
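The $\sqrt{n}$ Landau-level scaling discussed above can also be reproduced numerically by continuing the QuTiP sketch from the previous snippet: evolving $\langle\hat{\sigma}_{z}(t)\rangle$ under Eq. (2) and Fourier transforming should produce spectral peaks near $2E_{n}=2\sqrt{2neB}$. The value $eB=1$ and the time grid below are again illustrative, not the experimental parameters.

```python
# Continues the setup above (ax, ay, sx, sy, px, py with truncation N).
x = (ax + ax.dag()) / np.sqrt(2)
r = 1.0  # r = eB (illustrative assumption)
H = sx * px + sy * (py - r * x)  # Eq. (2) in units of Omega/sqrt(2)
sz = qt.tensor(qt.qeye(N), qt.qeye(N), qt.sigmaz())

psi0 = qt.tensor(qt.coherent(N, 1j), qt.coherent(N, 0), qt.basis(2, 0))
tlist = np.linspace(0.0, 60.0, 1200)
sz_t = qt.sesolve(H, psi0, tlist, e_ops=[sz]).expect[0]

# Angular frequencies of the spectrum; dominant peaks sit near 2*E_n
freqs = np.fft.rfftfreq(len(tlist), d=tlist[1] - tlist[0]) * 2 * np.pi
spec = np.abs(np.fft.rfft(sz_t - np.mean(sz_t)))
print("expected 2E_n:", [round(2 * np.sqrt(2 * n * r), 2) for n in (1, 2, 3)])
```

Comparing the locations of the dominant peaks in `spec` with the printed values illustrates the same analysis as Fig. 2(d).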
Following the same method, we can measure the spatial dynamics of a Weyl fermion with different initial spin states, and we plot the 2D trajectories in Fig. 4. For an initial state $|+x\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$ (red), namely a helicity of $+1$, the trajectory starts to the right and goes clockwise. On the other hand, for an initial state $|-x\rangle|\alpha_{x}=i\rangle|\alpha_{y}=0\rangle$ (blue) with a helicity of $-1$, the trajectory points in the opposite direction and goes counterclockwise. This is because the two situations correspond to a particle and its antiparticle, respectively, with opposite masses and charges. By initializing them with the same momentum, we thus get opposite initial velocities, and they bend in different directions due to the opposite signs of their charges. To sum up, we have simulated a 2D Weyl particle with its characteristic spectral properties using a single trapped ion and two of its spatial oscillation modes. Unlike a neutrino, the simulated particle has a nonzero charge and demonstrates nontrivial spatial and spin dynamics in a magnetic field. Our scheme therefore provides a direct approach to studying the dynamics of a class of elementary particles that are allowed by relativistic quantum mechanics but have not been discovered in nature. Theoretically, there is also 2D Zitterbewegung on the trajectories (solid curves in Fig. 4), whose amplitude is, however, comparable to the error bars in our measurement results and therefore requires future efforts to improve the experimental precision. By further setting the trap frequencies in the three spatial directions to be comparable and by pointing the Raman laser at a nonzero angle to all these axes, we can generalize this scheme to a 3D Weyl particle with even richer dynamics. Moreover, our scheme of using spatial oscillation modes in higher dimensions can also be generalized to multi-ion cases such as the quantum simulation of the spin model Monroe _et al._ (2021) and the spin-phonon coupled system Mei _et al._ (2021). Our work thus demonstrates the trapped-ion system as a powerful quantum simulator, and significantly extends its application in particle physics by providing more spatial and spin degrees of freedom beyond 1D. ###### Acknowledgements. This work was supported by Tsinghua University Initiative Scientific Research Program, Beijing Academy of Quantum Information Sciences, and Frontier Science Center for Quantum Information of the Ministry of Education of China. Y.-K. W. acknowledges support from the start-up fund from Tsinghua University. ## References * Greiner (2000) Walter Greiner, _Relativistic Quantum Mechanics: Wave Equations_ , 3rd ed. (Springer, 2000). * Peskin and Schroeder (2018) Michael Peskin and Daniel V. Schroeder, _An introduction to quantum field theory_ (CRC press, 2018). * Weyl (1929) Hermann Weyl, “Elektron und Gravitation. I,” Zeitschrift für Physik 56, 330–352 (1929). * Fukuda _et al._ (1998) Y. Fukuda, T. Hayakawa, E. Ichihara, K. Inoue, K. Ishihara, H. Ishino, Y. Itow, T. Kajita, J. Kameda, S. Kasuga, K. Kobayashi, Y. Kobayashi, Y. Koshio, M. Miura, M. Nakahata, S. Nakayama, A. Okada, K. Okumura, N. Sakurai, M. Shiozawa, Y. Suzuki, Y. Takeuchi, Y. Totsuka, S. Yamada, M. Earl, A. Habig, E. Kearns, M. D. Messier, K. Scholberg, J. L. 
Stone, L. R. Sulak, C. W. Walter, M. Goldhaber, T. Barszczxak, D. Casper, W. Gajewski, P. G. Halverson, J. Hsu, W. R. Kropp, L. R. Price, F. Reines, M. Smy, H. W. Sobel, M. R. Vagins, K. S. Ganezer, W. E. Keig, R. W. Ellsworth, S. Tasaka, J. W. Flanagan, A. Kibayashi, J. G. Learned, S. Matsuno, V. J. Stenger, D. Takemori, T. Ishii, J. Kanzaki, T. Kobayashi, S. Mine, K. Nakamura, K. Nishikawa, Y. Oyama, A. Sakai, M. Sakuda, O. Sasaki, S. Echigo, M. Kohama, A. T. Suzuki, T. J. Haines, E. Blaufuss, B. K. Kim, R. Sanford, R. Svoboda, M. L. Chen, Z. Conner, J. A. Goodman, G. W. Sullivan, J. Hill, C. K. Jung, K. Martens, C. Mauger, C. McGrew, E. Sharkey, B. Viren, C. Yanagisawa, W. Doki, K. Miyano, H. Okazawa, C. Saji, M. Takahata, Y. Nagashima, M. Takita, T. Yamaguchi, M. Yoshida, S. B. Kim, M. Etoh, K. Fujita, A. Hasegawa, T. Hasegawa, S. Hatakeyama, T. Iwamoto, M. Koga, T. Maruyama, H. Ogawa, J. Shirai, A. Suzuki, F. Tsushima, M. Koshiba, M. Nemoto, K. Nishijima, T. Futagami, Y. Hayato, Y. Kanaya, K. Kaneyuki, Y. Watanabe, D. Kielczewska, R. A. Doyle, J. S. George, A. L. Stachyra, L. L. Wai, R. J. Wilkes, and K. K. Young (Super-Kamiokande Collaboration), “Evidence for oscillation of atmospheric neutrinos,” Phys. Rev. Lett. 81, 1562–1567 (1998). * Ahmad _et al._ (2001) Q. R. Ahmad, R. C. Allen, T. C. Andersen, J. D. Anglin, G. Bühler, J. C. Barton, E. W. Beier, M. Bercovitch, J. Bigu, S. Biller, R. A. Black, I. Blevis, R. J. Boardman, J. Boger, E. Bonvin, M. G. Boulay, M. G. Bowler, T. J. Bowles, S. J. Brice, M. C. Browne, T. V. Bullard, T. H. Burritt, K. Cameron, J. Cameron, Y. D. Chan, M. Chen, H. H. Chen, X. Chen, M. C. Chon, B. T. Cleveland, E. T. H. Clifford, J. H. M. Cowan, D. F. Cowen, G. A. Cox, Y. Dai, X. Dai, F. Dalnoki-Veress, W. F. Davidson, P. J. Doe, G. Doucas, M. R. Dragowsky, C. A. Duba, F. A. Duncan, J. Dunmore, E. D. Earle, S. R. Elliott, H. C. Evans, G. T. Ewan, J. Farine, H. Fergani, A. P. Ferraris, R. J. Ford, M. M. Fowler, K. Frame, E. D. Frank, W. Frati, J. V. Germani, S. Gil, A. Goldschmidt, D. R. Grant, R. L. Hahn, A. L. Hallin, E. D. Hallman, A. Hamer, A. A. Hamian, R. U. Haq, C. K. Hargrove, P. J. Harvey, R. Hazama, R. Heaton, K. M. Heeger, W. J. Heintzelman, J. Heise, R. L. Helmer, J. D. Hepburn, H. Heron, J. Hewett, A. Hime, M. Howe, J. G. Hykawy, M. C. P. Isaac, P. Jagam, N. A. Jelley, C. Jillings, G. Jonkmans, J. Karn, P. T. Keener, K. Kirch, J. R. Klein, A. B. Knox, R. J. Komar, R. Kouzes, T. Kutter, C. C. M. Kyba, J. Law, I. T. Lawson, M. Lay, H. W. Lee, K. T. Lesko, J. R. Leslie, I. Levine, W. Locke, M. M. Lowry, S. Luoma, J. Lyon, S. Majerus, H. B. Mak, A. D. Marino, N. McCauley, A. B. McDonald, D. S. McDonald, K. McFarlane, G. McGregor, W. McLatchie, R. Meijer Drees, H. Mes, C. Mifflin, G. G. Miller, G. Milton, B. A. Moffat, M. Moorhead, C. W. Nally, M. S. Neubauer, F. M. Newcomer, H. S. Ng, A. J. Noble, E. B. Norman, V. M. Novikov, M. O’Neill, C. E. Okada, R. W. Ollerhead, M. Omori, J. L. Orrell, S. M. Oser, A. W. P. Poon, T. J. Radcliffe, A. Roberge, B. C. Robertson, R. G. H. Robertson, J. K. Rowley, V. L. Rusu, E. Saettler, K. K. Schaffer, A. Schuelke, M. H. Schwendener, H. Seifert, M. Shatkay, J. J. Simpson, D. Sinclair, P. Skensved, A. R. Smith, M. W. E. Smith, N. Starinsky, T. D. Steiger, R. G. Stokstad, R. S. Storey, B. Sur, R. Tafirout, N. Tagg, N. W. Tanner, R. K. Taplin, M. Thorman, P. Thornewell, P. T. Trent, Y. I. Tserkovnyak, R. Van Berg, R. G. Van de Water, C. J. Virtue, C. E. Waltham, J.-X. Wang, D. L. Wark, N. West, J. B. Wilhelmy, J. F. 
Wilkerson, J. Wilson, P. Wittich, J. M. Wouters, and M. Yeh (SNO Collaboration), “Measurement of the rate of ${\nu}_{e}+\mathit{d}\rightarrow\mathit{p}+\mathit{p}+{\mathit{e}}^{-}$ interactions produced by ${}^{8}\mathrm{B}$ solar neutrinos at the Sudbury Neutrino Observatory,” Phys. Rev. Lett. 87, 071301 (2001). * Georgescu _et al._ (2014) I. M. Georgescu, S. Ashhab, and Franco Nori, “Quantum simulation,” Rev. Mod. Phys. 86, 153–185 (2014). * Cirac and Zoller (2012) J. Ignacio Cirac and Peter Zoller, “Goals and opportunities in quantum simulation,” Nat. Phys. 8, 264–266 (2012). * Freedman _et al._ (2002) Michael H. Freedman, Alexei Kitaev, and Zhenghan Wang, “Simulation of topological field theories by quantum computers,” Communications in Mathematical Physics 227, 587–603 (2002). * Lamata _et al._ (2007) L. Lamata, J. León, T. Schätz, and E. Solano, “Dirac equation and quantum relativistic effects in a single trapped ion,” Phys. Rev. Lett. 98, 253005 (2007). * Lan _et al._ (2011) Z. Lan, N. Goldman, A. Bermudez, W. Lu, and P. Öhberg, “Dirac-Weyl fermions with arbitrary spin in two-dimensional optical superlattices,” Phys. Rev. B 84, 165115 (2011). * Jordan _et al._ (2012) Stephen P. Jordan, Keith S. M. Lee, and John Preskill, “Quantum algorithms for quantum field theories,” Science 336, 1130–1133 (2012). * Hauke _et al._ (2013) P. Hauke, D. Marcos, M. Dalmonte, and P. Zoller, “Quantum simulation of a lattice Schwinger model in a chain of trapped ions,” Phys. Rev. X 3, 041018 (2013). * Li _et al._ (2019) De-Sheng Li, Chun-Wang Wu, Lin-Ze He, Wei Wu, and Ping-Xing Chen, “Quantum simulation of the Weyl equation with a trapped ion,” Quantum Information Processing 18, 151 (2019). * Gerritsma _et al._ (2010) R. Gerritsma, G. Kirchmair, F. Zähringer, E. Solano, R. Blatt, and C. F. Roos, “Quantum simulation of the Dirac equation,” Nature 463, 68–71 (2010). * Gerritsma _et al._ (2011) R. Gerritsma, B. P. Lanyon, G. Kirchmair, F. Zähringer, C. Hempel, J. Casanova, J. J. García-Ripoll, E. Solano, R. Blatt, and C. F. Roos, “Quantum simulation of the Klein paradox with trapped ions,” Phys. Rev. Lett. 106, 060503 (2011). * Zhang _et al._ (2018) Xiang Zhang, Kuan Zhang, Yangchao Shen, Shuaining Zhang, Jing-Ning Zhang, Man-Hong Yung, Jorge Casanova, Julen S. Pedernales, Lucas Lamata, Enrique Solano, and Kihwan Kim, “Experimental quantum simulation of fermion-antifermion scattering via boson exchange in a trapped ion,” Nature Communications 9, 195 (2018). * Kokail _et al._ (2019) C. Kokail, C. Maier, R. van Bijnen, T. Brydges, M. K. Joshi, P. Jurcevic, C. A. Muschik, P. Silvi, R. Blatt, C. F. Roos, and P. Zoller, “Self-verifying variational quantum simulation of lattice models,” Nature 569, 355–360 (2019). * Leibfried _et al._ (2003) D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, “Quantum dynamics of single trapped ions,” Rev. Mod. Phys. 75, 281–324 (2003). * Blatt and Roos (2012) R. Blatt and C. F. Roos, “Quantum simulations with trapped ions,” Nature Physics 8, 277–284 (2012). * Bruzewicz _et al._ (2019) Colin D. Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M. Sage, “Trapped-ion quantum computing: Progress and challenges,” Applied Physics Reviews 6, 021314 (2019). * Lu _et al._ (2015) Ling Lu, Zhiyu Wang, Dexin Ye, Lixin Ran, Liang Fu, John D. Joannopoulos, and Marin Soljačić, “Experimental observation of Weyl points,” Science 349, 622–624 (2015). 
* Xu _et al._ (2015) Su-Yang Xu, Ilya Belopolski, Nasser Alidoust, Madhab Neupane, Guang Bian, Chenglong Zhang, Raman Sankar, Guoqing Chang, Zhujun Yuan, Chi-Cheng Lee, Shin-Ming Huang, Hao Zheng, Jie Ma, Daniel S. Sanchez, BaoKai Wang, Arun Bansil, Fangcheng Chou, Pavel P. Shibayev, Hsin Lin, Shuang Jia, and M. Zahid Hasan, “Discovery of a Weyl fermion semimetal and topological Fermi arcs,” Science 349, 613–617 (2015). * Lv _et al._ (2015) B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, Z. Fang, X. Dai, T. Qian, and H. Ding, “Experimental discovery of Weyl semimetal TaAs,” Phys. Rev. X 5, 031013 (2015). * Armitage _et al._ (2018) N. P. Armitage, E. J. Mele, and Ashvin Vishwanath, “Weyl and Dirac semimetals in three-dimensional solids,” Rev. Mod. Phys. 90, 015001 (2018). * Choi _et al._ (2014) T. Choi, S. Debnath, T. A. Manning, C. Figgatt, Z.-X. Gong, L.-M. Duan, and C. Monroe, “Optimal quantum control of multimode couplings between trapped ion qubits for scalable entanglement,” Phys. Rev. Lett. 112, 190502 (2014). * Cai _et al._ (2021) M.-L. Cai, Z.-D. Liu, W.-D. Zhao, Y.-K. Wu, Q.-X. Mei, Y. Jiang, L. He, X. Zhang, Z.-C. Zhou, and L.-M. Duan, “Observation of a quantum phase transition in the quantum Rabi model with a single trapped ion,” Nature Communications 12, 1126 (2021). * (27) See Supplementary Materials for details about the experimental setup, measurement and numerical simulation methods, which includes Ref. [28]. * Johansson _et al._ (2013) J. R. Johansson, P. D. Nation, and Franco Nori, “QuTiP 2: A Python framework for the dynamics of open quantum systems,” Computer Physics Communications 184, 1234–1240 (2013). * Landau (1930) L. Landau, “Diamagnetismus der Metalle,” Zeitschrift für Physik 64, 629–637 (1930). * Lamata _et al._ (2011) L. Lamata, J. Casanova, R. Gerritsma, C. F. Roos, J. J. García-Ripoll, and E. Solano, “Relativistic quantum mechanics with trapped ions,” New Journal of Physics 13, 095003 (2011). * Goldhaber (1977) Alfred S. Goldhaber, “Dirac particle in a magnetic field: Symmetries and their breaking by monopole singularities,” Phys. Rev. D 16, 1815–1827 (1977). * Monroe _et al._ (2021) C. Monroe, W. C. Campbell, L.-M. Duan, Z.-X. Gong, A. V. Gorshkov, P. W. Hess, R. Islam, K. Kim, N. M. Linke, G. Pagano, P. Richerme, C. Senko, and N. Y. Yao, “Programmable quantum simulations of spin systems with trapped ions,” Rev. Mod. Phys. 93, 025001 (2021). * Mei _et al._ (2021) Q. Mei, B. Li, Y. Wu, M. Cai, Y. Wang, L. Yao, Z. Zhou, and L. Duan, “Experimental realization of Rabi-Hubbard model with trapped ions,” (2021), arXiv:2110.03227 [quant-ph] .
# On the bimodal Gumbel model with application to environmental data Cira E. G. Otiniano<EMAIL_ADDRESS>Departamento de Estatística, Universidade de Brasília, 70910-900, Brasília, Brazil Roberto Vila <EMAIL_ADDRESS>Departamento de Estatística, Universidade de Brasília, 70910-900, Brasília, Brazil Pedro C. Brom<EMAIL_ADDRESS>Departamento de Estatística, Universidade de Brasília, 70910-900, Brasília, Brazil Marcelo Bourguignon∗<EMAIL_ADDRESS>Departamento de Estatística, Universidade Federal do Rio Grande do Norte, 59078-970, Natal/RN, Brazil ###### Abstract The Gumbel model is a very popular statistical model due to its wide applicability, for instance in the course of certain survival, environmental, financial or reliability studies. In this work, we introduce a bimodal generalization of the Gumbel distribution that can serve as an alternative for modeling bimodal data. We derive the analytical shapes of the corresponding probability density function and hazard rate function and provide graphical illustrations. Furthermore, we discuss the properties of this density, such as the mode, bimodality, moment generating function and moments. Our results were verified using the Markov chain Monte Carlo simulation method. The maximum likelihood method is used for parameter estimation. Finally, we also carry out an application to real data that demonstrates the usefulness of the proposed distribution. Keywords. Gumbel distribution $\cdot$ BG $\cdot$ MCMC. Mathematics Subject Classification (2010). MSC 60E05 $\cdot$ MSC 62Exx $\cdot$ MSC 62Fxx. ###### Contents 1. 1 Introduction 2. 2 The bimodal Gumbel model 3. 3 Some properties of the BG distribution 1. 3.1 Cumulative distribution function 2. 3.2 Stochastic representation 3. 3.3 Rate of a random variable with BG distribution 4. 3.4 Bimodality 5. 3.5 The hazard function 6. 3.6 Moment-Generating Function 7. 3.7 Moments 4. 4 Graphical illustrations and simulation results 1. 4.1 Graphical illustration 2. 4.2 Simulation results 5. 5 Maximum likelihood estimation 6. 6 Real-world data analysis 7. 7 Conclusions ## 1 Introduction Asymmetrical models for a real-valued random variable, such as the Gumbel and generalized extreme value distributions, have been extensively utilized for modeling various random phenomena encountered, for instance, in the course of certain environmental, financial or reliability studies. Let $Y_{1},Y_{2},\dots,Y_{n}$ be a series of independent random variables with common distribution function $F$, and $M_{n}=\max\{Y_{1},Y_{2},\dots,Y_{n}\}$. The Gumbel distribution is one of the extreme value distributions, characterized by Fisher and Tippett (1928) [4] as a limit distribution for maxima. That is, if there exist normalizing sequences $a_{n}$ and $b_{n}>0$ such that $P[(M_{n}-a_{n})/b_{n}\leq x]$ converges to a non-degenerate distribution function $G(x)$, then $G$ is an extreme value distribution. The distribution $G$ must be one of three types: Fréchet, Gumbel or negative Weibull. Among these three distributions, the Gumbel distribution (see Gumbel (1958) [5]) is the only one with a light tail. For this reason it can be considered a good alternative for modeling extreme data whose tails are not heavy. 
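This limit is easy to make concrete with a small simulation. The Python sketch below is our own illustration (the exponential parent, the block size $n=500$, and the $10^{4}$ replications are arbitrary choices); for Exp(1) parents one may take $a_{n}=\ln n$ and $b_{n}=1$, and the normalized maxima settle onto the standard Gumbel law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 500, 10_000                 # block size and replications (assumptions)
blocks = rng.exponential(size=(reps, n))
m = blocks.max(axis=1) - np.log(n)    # a_n = ln(n), b_n = 1 for Exp(1) parents

# Compare with the standard Gumbel; a large p-value indicates agreement
print(stats.kstest(m, stats.gumbel_r.cdf))
```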
A random variable $Y$ has a Gumbel distribution with location parameter $\mu\in\mathbb{R}$ and scale parameter $\sigma>0$, denoted by $Y\sim F_{G}(\cdot;\mu,\sigma)$, if its probability density function (PDF) and cumulative distribution function (CDF) are given, respectively, by $\displaystyle f_{G}(y;\mu,\sigma)=\frac{1}{\sigma}\exp\biggl\{-\Big(\frac{y-\mu}{\sigma}\Big)-\exp\Big[-\Big(\frac{y-\mu}{\sigma}\Big)\Big]\biggr\}\quad$ (1.1) and $\displaystyle F_{G}(y;\mu,\sigma)=\displaystyle\exp\biggl\{-\exp\Big[-\Big(\frac{y-\mu}{\sigma}\Big)\Big]\biggr\},\quad y\in\mathbb{R}.$ (1.2) Generalizations of the Gumbel distribution have been proposed by several authors. Pinheiro and Ferrari (2015) [13] carried out a comprehensive review of generalizations of the Gumbel distribution and, after comparing them, concluded that some of these distributions suffer from overparameterization. Another generalization of the Gumbel is the Exponentiated Gumbel Type-2 of Okorie (2016) [11]; see also the references in [12] and [13]. Despite its broad applicability in many fields, the Gumbel distribution is not suitable for modeling bimodal data, and none of the models cited above are able to capture bimodality. In this context, in this paper, we propose the bimodal Gumbel (BG) distribution as an alternative model for extreme data with more than one mode. Our approach consists of introducing bimodality in (1.1) following Elal-Olivero (2010) [3]. Results involving bimodality in other related probabilistic models can be found, for example, in Martinez et al. (2013) [9] and Çankaya et al. (2015) [2]; and more recently, in Vila et al. (2020-2021) [16, 17, 18, 19]. The advantage of our model in comparison to other generalizations of the Gumbel distribution is its small number of parameters and the fact that it can be used to model extreme data with one or two modes. This paper is organized as follows. In Section 2, we define the BG distribution. In Section 3, we provide general properties of the BG distribution, including the cumulative distribution function, hazard rate function, mode, bimodality, moment generating function and moments, and a stochastic representation. In Section 4, we provide graphical illustrations. Estimation of the parameters by maximum likelihood is investigated in Section 5. In Section 6, we discuss an application to real data. Some conclusions are addressed in Section 7. ## 2 The bimodal Gumbel model We say that a real-valued random variable $X$ has a bimodal Gumbel (BG) distribution with parameters $\mu\in\mathbb{R},\sigma>0$ and $\delta\in\mathbb{R}$, denoted by $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$, if its PDF is given by $\displaystyle f_{\rm BG}(x;\mu,\sigma,\delta)={\displaystyle\big[(1-\delta x)^{2}+1\big]\exp\biggl\{-\Big(\frac{x-\mu}{\sigma}\Big)-\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr\}\over\textstyle\sigma\big[1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}\big]},\quad x\in\mathbb{R},$ (2.1) where $\gamma$ is Euler's constant. Let $\displaystyle Z_{\delta}=1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}$ (2.2) be the normalization constant of the BG distribution (2.1). Using this notation, note that $f_{\rm BG}(x;\mu,\sigma,\delta)={1\over Z_{\delta}}\,[(1-\delta x)^{2}+1]\,f_{\rm G}(x;\mu,\sigma)$ and that $f_{\rm BG}(x;\mu,\sigma,0)=f_{\rm G}(x;\mu,\sigma)$. In other words, we introduce bimodality in the Gumbel distribution (1.1) through the quadratic factor $(1-\delta x)^{2}+1$. 
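The density (2.1) is straightforward to code directly. The following Python sketch is our own illustration (independent of the bgumbel R package used later); it evaluates $f_{\rm BG}$ and confirms the normalization constant $Z_{\delta}$ of (2.2) numerically, with arbitrary parameter values.

```python
import numpy as np
from scipy.integrate import quad

EULER = 0.57721566490153286  # Euler-Mascheroni constant

def f_bg(x, mu, sigma, delta):
    """Bimodal Gumbel density, Eq. (2.1)."""
    z = (x - mu) / sigma
    gumbel = np.exp(-z - np.exp(-z)) / sigma           # Eq. (1.1)
    Z = 1 + delta**2 * sigma**2 * np.pi**2 / 6 \
          + (delta * mu + delta * sigma * EULER - 1)**2  # Eq. (2.2)
    return ((1 - delta * x)**2 + 1) * gumbel / Z

total, _ = quad(f_bg, -np.inf, np.inf, args=(1.0, 2.0, 1.0))
print(total)  # ~1.0, confirming the normalization by Z_delta
```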
The BG density (2.1) is well defined, because $\displaystyle\int_{-\infty}^{\infty}f_{\rm BG}(x;\mu,\sigma,\delta)\,{\rm d}x$ $\displaystyle=$ $\displaystyle{1\over Z_{\delta}}\,[1+\mathbb{E}(1-\delta Y)^{2}],\quad Y\sim F_{G}(\cdot;\mu,\sigma)$ $\displaystyle=$ $\displaystyle{1\over Z_{\delta}}\,\big\{1+\delta^{2}\mathrm{Var}(Y)+[\delta\,\mathbb{E}(Y)-1]^{2}\big\}=1,$ where $\mathbb{E}(Y)=\mu+\sigma\gamma$ and $\mathrm{Var}(Y)=\sigma^{2}{\pi^{2}\over 6}$. The CDF of a BG random variable $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$, defined for $x\in\mathbb{R}$, is given by (see Subsection 3.1 for more details) $\displaystyle F_{\rm BG}(x;\mu,\sigma,\delta)=$ $\displaystyle{\displaystyle\big[2-\delta\mu(2-\delta\mu)\big]\exp\biggl\{-\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr\}+\delta^{2}\sigma^{2}I\biggl(2;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+{\displaystyle 2\delta(1-\delta\mu)\Biggl\{{(x-\mu)}\,\exp\biggl\{-\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr\}-\sigma\Gamma\biggl(0,\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr)\Biggr\}\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ (2.3) where $\Gamma(a,b)$ is the upper incomplete gamma function and $\displaystyle I(k;a,b)=(-1)^{k}\int_{a}^{b}\ln^{k}(v)\exp(-v)\,{\rm d}v,\quad k\in\mathbb{N}\cup\{0\},\ 0\leqslant a<b\leqslant+\infty$ (2.4) defines the incomplete moments of the random variable $V\sim F_{G}(\cdot;0,1)$. For $k>1$, closed form solutions for the definite integral $I(k;a,b)$ are not available in terms of commonly used functions. From formula (2.3), when $\delta=0$, $F_{\rm BG}(x;\mu,\sigma,0)=\exp\{-\exp[-(\frac{x-\mu}{\sigma})]\}=F_{G}(x;\mu,\sigma)$, $x\in\mathbb{R}$, which recovers (1.2). ###### Remark 1. Let $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ and let $g(\cdot)$ be a real-valued Borel measurable function. From the definition of expectation and by using the PDF of the BG distribution (2.1), we have $\displaystyle\mathbb{E}[g(X)]$ $\displaystyle=$ $\displaystyle{1\over Z_{\delta}}\,\mathbb{E}\big[g(Y)(1-\delta Y)^{2}+g(Y)\big]$ $\displaystyle=$ $\displaystyle{1\over Z_{\delta}}\,\big\{2\mathbb{E}[g(Y)]-2\delta\mathbb{E}[Yg(Y)]+\delta^{2}\mathbb{E}[Y^{2}g(Y)]\big\},$ where $Y\sim F_{G}(\cdot;\mu,\sigma)$ and $Z_{\delta}$ is as in (2.2). ## 3 Some properties of the BG distribution In this section, some mathematical properties of the BG distribution are discussed, such as a closed expression for the CDF, the rate, the modes, bimodality, the hazard function and the moments. ### 3.1 Cumulative distribution function In this subsection, we derive in detail the closed expression (2.3) for the CDF of a BG random variable $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$. Indeed, from Remark 1 with $g(X)=\mathds{1}_{X\leqslant x}$, we obtain $\displaystyle F_{\rm BG}(x;\mu,\sigma,\delta)=\frac{1}{Z_{\delta}}\big[2F_{\rm G}(x;\mu,\sigma)-2\delta\mathbb{E}\left(Y\mathds{1}_{Y\leq x}\right)+\delta^{2}\mathbb{E}\left(Y^{2}\mathds{1}_{Y\leq x}\right)\big],$ (3.1) where $Y\sim F_{G}(\cdot;\mu,\sigma)$ and $Z_{\delta}$ is as in (2.2). 
Taking the change of variables $z=\exp[-({y-\mu})/{\sigma}],\ {\rm d}z=-({z/\sigma}){\rm d}y,\ y=\mu-\sigma\ln(z),$ and using a binomial expansion, we have $\displaystyle\mathbb{E}\big(Y^{k}\mathds{1}_{Y\leq x}\big)$ $\displaystyle=$ $\displaystyle\int_{\exp[-(\frac{x-\mu}{\sigma})]}^{+\infty}[\mu-\sigma\ln(z)]^{k}\exp(-z)\,{\rm d}z$ (3.2) $\displaystyle=$ $\displaystyle\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}\mu^{k-i}\sigma^{i}\int_{\exp[-(\frac{x-\mu}{\sigma})]}^{+\infty}\ln^{i}(z)\exp(-z)\,{\rm d}z$ $\displaystyle=$ $\displaystyle\sum_{i=0}^{k}\binom{k}{i}\mu^{k-i}\sigma^{i}\,I\biggl(i;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr),$ where $k\in\mathbb{N}\cup\{0\}$ and $I(i;a,b)$ is as in (2.4). By combining (3.1) and (3.2), and by using the relations $\displaystyle I(0;a,+\infty)$ $\displaystyle=$ $\displaystyle\exp(-a),$ $\displaystyle I(1;a,+\infty)$ $\displaystyle=$ $\displaystyle-\exp(-a)\ln(a)-\Gamma(0,a),$ we get formula (2.3). 
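Because (2.3) mixes the incomplete integral $I(2;\cdot,+\infty)$ with $\Gamma(0,\cdot)$, a numerical sanity check is reassuring. The sketch below is ours; it reuses `f_bg` from the previous snippet, evaluates $I(k;a,+\infty)$ by quadrature, uses the identity $\Gamma(0,a)=E_{1}(a)$, and compares (2.3) with direct integration of the density at an arbitrary point.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # Gamma(0, a) = E_1(a)

EULER = 0.57721566490153286
mu, sigma, delta = 1.0, 2.0, 1.0  # arbitrary test values

def I(k, a):
    """I(k; a, +inf) from Eq. (2.4), by numerical quadrature."""
    val, _ = quad(lambda v: (-np.log(v))**k * np.exp(-v), a, np.inf)
    return val

def F_bg(x):
    """Closed-form CDF, Eq. (2.3)."""
    Z = 1 + delta**2 * sigma**2 * np.pi**2 / 6 \
          + (delta * mu + delta * sigma * EULER - 1)**2
    a = np.exp(-(x - mu) / sigma)
    g = np.exp(-a)  # Gumbel CDF at x
    return ((2 - delta * mu * (2 - delta * mu)) * g
            + delta**2 * sigma**2 * I(2, a)
            + 2 * delta * (1 - delta * mu) * ((x - mu) * g - sigma * exp1(a))) / Z

num, _ = quad(lambda t: f_bg(t, mu, sigma, delta), -np.inf, 2.5)
print(F_bg(2.5), num)  # the two values should agree
```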
### 3.2 Stochastic representation Suppose $Y_{k}$ has a weighted Gumbel distribution with parameters $\mu\in\mathbb{R}$ and $\sigma>0$. That is, if $Y\sim F_{\rm G}(\cdot;\mu,\sigma)$, by (3.2), $Y_{k}$ has CDF given by, for each $x\in\mathbb{R}$ and $k=0,1,2,\ldots$, $\displaystyle F_{Y_{k}}(x)={\mathbb{E}\big(Y^{k}\mathds{1}_{Y\leq x}\big)\over\mathbb{E}(Y^{k})}=\frac{\displaystyle\sum_{i=0}^{k}\binom{k}{i}\mu^{k-i}\sigma^{i}\,I\biggl(i;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr)}{\displaystyle\sum_{i=0}^{k}\binom{k}{i}\mu^{k-i}\sigma^{i}\,I(i;0,+\infty)},$ (3.3) where $I(k;a,b)$ are the incomplete moments defined in (2.4). Note that $F_{Y_{0}}(x)=F_{\rm G}(x;\mu,\sigma)$. Let $W$ be a discrete random variable taking the values $W=1$, $W=2$ or $W=3$, with probabilities $\displaystyle p_{1}={2\over Z_{\delta}},\quad p_{2}=-{2(\mu+\sigma\gamma)\delta\over Z_{\delta}},\quad p_{3}={[\sigma^{2}{\pi^{2}\over 6}+(\mu+\sigma\gamma)^{2}]\delta^{2}\over Z_{\delta}},$ respectively, where $[\delta>0\,\text{and}\,\mu+\sigma\gamma<0]$ or $[\delta<0\,\text{and}\,\mu+\sigma\gamma>0]$, and $Z_{\delta}$ is as in (2.2). It is straightforward to see that $p_{1}+p_{2}+p_{3}=1$. Assume that $T=\sum_{k=1}^{3}{1\over k}\,Y_{k-1}\delta_{W,k}\delta_{W,l},\quad l=1,2,3,$ and that $W$ is independent of $Y_{l}$, for each $l=1,2,3$. Here $\delta_{x,y}$ is the Kronecker delta function, i.e., it equals 1 if the variables are equal, and 0 otherwise. ###### Proposition 3.1. The following holds $\displaystyle X=WT\ \ \text{if and only if}\ \ X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta).$ (3.4) ###### Proof. By the law of total probability and by independence, we get $\displaystyle\mathbb{P}(X\leqslant x)=\mathbb{P}(WT\leqslant x)$ $\displaystyle=\sum_{l=1}^{3}\mathbb{P}(WT\leqslant x|W=l)\mathbb{P}(W=l)$ $\displaystyle=\sum_{l=1}^{3}\mathbb{P}(Y_{l-1}\leqslant x|W=l)\mathbb{P}(W=l)=\sum_{l=1}^{3}F_{Y_{l-1}}(x)\,p_{l}.$ By using the CDF of the weighted Gumbel distribution, given in (3.3), and by the definitions of the $p_{l}$'s and $Z_{\delta}$, the above expression is $\displaystyle={2\over Z_{\delta}}\,F_{Y_{0}}(x)-{2(\mu+\sigma\gamma)\delta\over Z_{\delta}}\,F_{Y_{1}}(x)+{[\sigma^{2}{\pi^{2}\over 6}+(\mu+\sigma\gamma)^{2}]\delta^{2}\over Z_{\delta}}\,F_{Y_{2}}(x)$ $\displaystyle={2\over Z_{\delta}}\,F_{\rm G}(x;\mu,\sigma)-{2(\mu+\sigma\gamma)\delta\over Z_{\delta}}\,\frac{\displaystyle\sum_{i=0}^{1}\binom{1}{i}\mu^{1-i}\sigma^{i}\,I\biggl(i;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr)}{\displaystyle\sum_{i=0}^{1}\binom{1}{i}\mu^{1-i}\sigma^{i}\,I(i;0,+\infty)}$ $\displaystyle\quad+{[\sigma^{2}{\pi^{2}\over 6}+(\mu+\sigma\gamma)^{2}]\delta^{2}\over Z_{\delta}}\,\frac{\displaystyle\sum_{i=0}^{2}\binom{2}{i}\mu^{2-i}\sigma^{i}\,I\biggl(i;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr)}{\displaystyle\sum_{i=0}^{2}\binom{2}{i}\mu^{2-i}\sigma^{i}\,I(i;0,+\infty)}$ $\displaystyle={\displaystyle\big[2-\delta\mu(2-\delta\mu)\big]\exp\biggl\{-\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr\}+\delta^{2}\sigma^{2}I\biggl(2;\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big],+\infty\biggr)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle\quad+{\displaystyle 2\delta(1-\delta\mu)\Biggl\{{(x-\mu)}\,\exp\biggl\{-\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr\}-\sigma\Gamma\biggl(0,\exp\Big[-\Big(\frac{x-\mu}{\sigma}\Big)\Big]\biggr)\Biggr\}\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle\stackrel{{\scriptstyle(2.3)}}{{=}}F_{\rm BG}(x;\mu,\sigma,\delta),$ where $\Gamma(a,b)$ is the upper incomplete gamma function. Then the statement in (3.4) follows. ∎ ### 3.3 Rate of a random variable with BG distribution Following Klugman et al. (1998) [7], for a continuous random variable $X$ with density function $f_{X}(x)$, the rate of the random variable is given by $\displaystyle\tau_{X}=-\lim_{x\to\infty}{{\rm d}\ln\big[f_{X}(x)\big]\over{\rm d}x}.$ A simple computation shows that $\tau_{{\rm BG}(\mu,\sigma,\delta)}={1/\sigma}.$ In what follows we compare the rates of random variables with known distributions: Inverse-gamma, Log-normal, Generalized-Pareto, BWeibull (see Vila et al. 2020 [18]), BGamma (see Vila et al. 2020 [17]), BG, exponential and Normal: $\displaystyle\tau_{{\rm InvGamma}(\alpha,\sigma)}=\tau_{{\rm LogNorm}(\mu,\kappa^{2})}$ $\displaystyle=\tau_{{\rm GenPareto}(\alpha,\sigma,\zeta)}=\tau_{\rm BWeibull(\alpha<1,\sigma,\delta)}=0$ $\displaystyle<\tau_{{\rm BG}(\mu,\sigma,\delta)}=\tau_{\rm BWeibull(\alpha=1,\sigma,\delta)}=\tau_{{\rm BGamma}(\alpha,1/\sigma,\delta)}=\tau_{{\rm exp}(1/\sigma)}=1/\sigma$ $\displaystyle<\tau_{\rm BWeibull(\alpha>1,\sigma,\delta)}=\tau_{{\rm Normal}(\mu,\kappa^{2})}=+\infty.$ In other words, far enough out in the tail, every BG distribution looks like an exponential distribution. 
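This tail behavior can be checked in a few lines: the slope of $\ln f_{\rm BG}$ should approach $-1/\sigma$ as $x\to\infty$. A quick numerical illustration (ours, reusing `f_bg` from above):

```python
import numpy as np

mu, sigma, delta = 1.0, 2.0, 1.0
h = 1e-4
for x in (20.0, 40.0, 80.0):
    # central-difference slope of the log-density
    slope = (np.log(f_bg(x + h, mu, sigma, delta))
             - np.log(f_bg(x - h, mu, sigma, delta))) / (2 * h)
    print(x, slope)  # tends to -1/sigma = -0.5
```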
### 3.4 Bimodality ###### Proposition 3.2. A point $x\in\mathbb{R}$ is a mode of the BG density (2.1) if it is a root of the following non-polynomial function: $\displaystyle g(x)={1\over\sigma}\biggl\{\exp\Big[-\Big({x-\mu\over\sigma}\Big)\Big]-1\biggr\}-{2\delta(1-\delta x)\over(1-\delta x)^{2}+1}.$ (3.5) ###### Proof. Taking the derivative of $f_{\rm BG}(x;\mu,\sigma,\delta)$ with respect to $x$, we have $\displaystyle{f^{\prime}_{\rm BG}(x;\mu,\sigma,\delta)}=f_{\rm BG}(x;\mu,\sigma,\delta)g(x).$ (3.6) Hence, the proof follows. ∎ Let $\mathcal{C}$ be the set formed by all $(\mu,\sigma,\delta)\in\mathbb{R}\times(0,+\infty)\times\mathbb{R}$ such that the following hold: $\displaystyle\delta>\max\biggl\{1,{1\over\sigma}\Big[\exp\Big({\mu\over\sigma}\Big)-1\Big]\biggr\},$ (3.7) $\displaystyle{2\delta(1+\delta)\over(1+\delta)^{2}+1}<{1\over\sigma}\Big\{\exp\Big({1+\mu\over\sigma}\Big)-1\Big\},$ (3.8) $\displaystyle{2\delta(1-2\delta)\over(1-2\delta)^{2}+1}<{1\over\sigma}\biggl\{\exp\Big[-\Big({2-\mu\over\sigma}\Big)\Big]-1\biggr\},$ (3.9) $\displaystyle{2\delta(1-3\delta)\over(1-3\delta)^{2}+1}>{1\over\sigma}\biggl\{\exp\Big[-\Big({3-\mu\over\sigma}\Big)\Big]-1\biggr\}.$ (3.10) ###### Remark 2. By considering $\mu=\sigma=1$ and $\delta>{\rm e}-1$, we have that $(\mu,\sigma,\delta)\in\mathcal{C}$. That is, the set $\mathcal{C}$ is non-empty. ###### Lemma 3.3. If $(\mu,\sigma,\delta)\in\mathcal{C}$ then the function $g(x)$ has at least three distinct real roots. ###### Proof. Since $(\mu,\sigma,\delta)\in\mathcal{C}$, a simple inspection of the definition (3.5) of $g(x)$ shows that * • $\displaystyle g(-1)={1\over\sigma}\Big\{\exp\Big({1+\mu\over\sigma}\Big)-1\Big\}-{2\delta(1+\delta)\over(1+\delta)^{2}+1}>0,$ because of condition (3.8); * • $\displaystyle g(0)={1\over\sigma}\Big\{\exp\Big({\mu\over\sigma}\Big)-1\Big\}-\delta<0$, because of condition (3.7); * • $\displaystyle g(2)={1\over\sigma}\biggl\{\exp\Big[-\Big({2-\mu\over\sigma}\Big)\Big]-1\biggr\}-{2\delta(1-2\delta)\over(1-2\delta)^{2}+1}>0$, because of condition (3.9); * • $\displaystyle g(3)={1\over\sigma}\biggl\{\exp\Big[-\Big({3-\mu\over\sigma}\Big)\Big]-1\biggr\}-{2\delta(1-3\delta)\over(1-3\delta)^{2}+1}<0$, because of condition (3.10). Since $g(x)$ is a continuous real-valued function, by the Intermediate Value Theorem, there are points $r_{1},r_{2},r_{3}$, with $-1<r_{1}<0$, $0<r_{2}<2$ and $2<r_{3}<3$, such that $g(r_{i})=0$, $i=1,2,3$. ∎ Let $\mathcal{D}$ be the set formed by all $x\in\mathbb{R}$ such that $\displaystyle 2\delta^{2}\,{(1-\delta x)^{2}-1\over(1-\delta x)^{2}+1}$ $\displaystyle<$ $\displaystyle-{1\over\sigma^{2}}\exp\Big[-\Big({x-\mu\over\sigma}\Big)\Big]\quad\text{where}\ (\mu,\sigma,\delta)\in\mathcal{C}.$ ###### Remark 3. The set $\mathcal{D}$ is non-empty. To see this, just take $\mu=\sigma=1$ and $\delta=2$ $(>{\rm e}-1)$. By Remark 2, $(\mu,\sigma,\delta)\in\mathcal{C}$. In this case we have $\mathcal{D}=(0.132178,0.937349)$. ###### Lemma 3.4. If $(\mu,\sigma,\delta)\in\mathcal{C}$ then the function $g(x)$ has no root outside the interval $(-1,3)$. Furthermore, if $-1<r_{1}<0$, $0<r_{2}<2$ and $2<r_{3}<3$ are the roots of $g(x)$ found in Lemma 3.3, such that $r_{2}\in\mathcal{D}\subset(0,2)$, then these roots are unique. ###### Proof. 
Taking the derivative of $g(x)$ with respect to $x$, we have $\displaystyle g^{\prime}(x)=-{1\over\sigma^{2}}\exp\Big[-\Big({x-\mu\over\sigma}\Big)\Big]-2\delta^{2}\,{(1-\delta x)^{2}-1\over(1-\delta x)^{2}+1}.$ Since, for $x\leqslant 0$ or $x\geqslant 2/\delta$, $2\delta^{2}\,{[(1-\delta x)^{2}-1]/[(1-\delta x)^{2}+1]}\geqslant 0,$ we obtain $\displaystyle g^{\prime}(x)\leqslant-{1\over\sigma^{2}}\exp\Big[-\Big({x-\mu\over\sigma}\Big)\Big]<0\quad\text{for}\ x\leqslant 0\ \text{or}\ x\geqslant 2/\delta.$ In other words, outside of the interval $(-1,3)$ the function $g(x)$ has no zeros, because it is decreasing on $(-\infty,0]\cup[2/\delta,+\infty)$ with $2/\delta<2$ (by condition (3.7)). This implies that the roots $-1<r_{1}<0$ and $2<r_{3}<3$ of $g(x)$ are unique in their respective intervals. Finally, since $r_{2}\in\mathcal{D}$, the function $g(x)$ crosses the abscissa axis at the single point $r_{2}$. Therefore, we conclude that the root $r_{2}$ is also unique in $\mathcal{D}\subset(0,2)$. ∎ ###### Remark 4. Let $\mu=\sigma=1$ and $\delta=2$ $(>{\rm e}-1)$. By Remark 3, $\mathcal{D}=(0.132178,0.937349)$. Computationally it can be verified that $-1<r_{1}=-0.0896138<0$, $0<r_{2}=0.389792<2$ and $2<r_{3}=2.79117<3$ are the only roots of $g(x)$, with $r_{2}\in\mathcal{D}$. ###### Theorem 1 (Bimodality). If $r_{2}\in\mathcal{D}$ then the BG distribution (2.1) is bimodal. ###### Proof. By Lemma 3.4 the function $g(x)$ in (3.5) has exactly three roots $r_{1},r_{2},r_{3}$ with $r_{1}<r_{2}<r_{3}$. Since $f_{\rm BG}(x;\mu,\sigma,\delta)\to 0$ as $x\to\pm\infty$, it follows that the BG density (2.1) increases on the intervals $(-\infty,r_{1})$ and $(r_{2},r_{3})$, and decreases on $(r_{1},r_{2})$ and $(r_{3},+\infty)$. That is, $r_{1}$ and $r_{3}$ are two maximum points and $r_{2}$ is the unique minimum point. Therefore, the bimodality of $f_{\rm BG}(x;\mu,\sigma,\delta)$ is guaranteed. ∎ ### 3.5 The hazard function The survival and hazard functions of the BG distribution, denoted by SF and HR, are given by $S_{\rm BG}(x;\mu,\sigma,\delta)=1-F_{\rm BG}(x;\mu,\sigma,\delta)$ and $H_{\rm BG}(x;\mu,\sigma,\delta)={f_{\rm BG}(x;\mu,\sigma,\delta)/[1-F_{\rm BG}(x;\mu,\sigma,\delta)]}.$ Considering the three distinct real roots $r_{1},r_{2},r_{3}$ of the function $g(x)$ found in Lemma 3.3, we state the following result. ###### Proposition 3.5. If $r_{2}\in\mathcal{D}$ then the HR $H_{\rm BG}(x;\mu,\sigma,\delta)$ of the BG distribution (2.1) has the following monotonicity properties: * 1) It is increasing for each $x<r_{1}$ or $r_{2}<x<r_{3}$, * 2) It is decreasing for each $x\in\mathcal{D}$. ###### Proof. As a by-product of the proof of Theorem 1, note that the density $f_{\rm BG}(x;\mu,\sigma,\delta)$ is increasing on the intervals $x<r_{1}$ or $r_{2}<x<r_{3}$. Since $1-F_{\rm BG}(x;\mu,\sigma,\delta)$ is a decreasing function, the hazard function on $x<r_{1}$ or $r_{2}<x<r_{3}$ is the product of two increasing and nonnegative functions, and the first item follows. By Glaser (1980), to prove the second item it is enough to show that the function $G_{X}(x)$, defined as $\displaystyle G_{X}(x)=-{f_{\rm BG}^{\prime}(x;\mu,\sigma,\delta)\over f_{\rm BG}(x;\mu,\sigma,\delta)}\stackrel{{\scriptstyle(3.6)}}{{=}}-g(x),$ is decreasing for each $x\in\mathcal{D}$. But this is immediate since $g(x)$ is increasing on this interval. ∎ 
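The roots quoted in Remark 4, and hence the two modes and the antimode guaranteed by Theorem 1, can be reproduced numerically by bracketing $g(x)$ on the intervals from Lemma 3.3. The following sketch is our own illustration:

```python
import numpy as np
from scipy.optimize import brentq

mu, sigma, delta = 1.0, 1.0, 2.0  # the setting of Remark 4

def g(x):
    """Mode equation, Eq. (3.5)."""
    return (np.exp(-(x - mu) / sigma) - 1) / sigma \
           - 2 * delta * (1 - delta * x) / ((1 - delta * x)**2 + 1)

# sign changes on (-1,0), (0,2), (2,3) per Lemma 3.3
roots = [brentq(g, *iv) for iv in [(-1, 0), (0, 2), (2, 3)]]
print(roots)  # ~ [-0.0896138, 0.389792, 2.79117]; r1, r3 modes, r2 antimode
```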
### 3.6 Moment-Generating Function ###### Theorem 2. If $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$, $t<\min\{0,-m/\sigma\}$ and $m\in\mathbb{N}\cup\{0\}$, we have $\displaystyle\mathbb{E}\big[X^{m}\exp(tX)\big]$ $\displaystyle=(-1)^{m+2}\exp(t\mu)\,\dfrac{\delta^{2}\sigma^{m+2}\,\Gamma^{(m+2)}(1-\sigma t)+\delta\sigma^{m+1}\big[2-\delta\mu(m+2)\big]\,\Gamma^{(m+1)}(1-\sigma t)}{1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+\dfrac{\exp(t\mu)\,\sum_{i=0}^{m}(-1)^{i}\sigma^{i}\mu^{m-i}\big[2\binom{m}{i}-2\delta\mu\binom{m+1}{i}+\delta^{2}\mu^{2}\binom{m+2}{i}\big]\Gamma^{(i)}(1-\sigma t)}{1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ where $\Gamma^{(i)}(x)={{\rm d}^{i}\Gamma(x)/{\rm d}x^{i}}$. ###### Proof. Taking $g(X)=X^{m}\exp(tX)$ in Remark 1, $\displaystyle\mathbb{E}\big[X^{m}\exp(t\,X)\big]={1\over Z_{\delta}}\,\big\{2\,\mathbb{E}\big[Y^{m}\exp(t\,Y)\big]-2\delta\,\mathbb{E}\big[Y^{m+1}\exp(t\,Y)\big]+\delta^{2}\,\mathbb{E}\big[Y^{m+2}\exp(t\,Y)\big]\big\},$ (3.11) where $Y\sim F_{\rm G}(\cdot;\mu,\sigma)$ and $Z_{\delta}$ are as in (1.2) and (2.2), respectively. Taking the change of variables $z=\exp[-({y-\mu})/{\sigma}],\ {\rm d}z=-({z/\sigma}){\rm d}y,\ y=\mu-\sigma\ln(z),$ and using a binomial expansion, we get, for $k=m,m+1,m+2$, $\displaystyle\mathbb{E}\big[Y^{k}\exp(tY)\big]$ $\displaystyle=$ $\displaystyle\exp(\mu t)\int_{0}^{+\infty}[\mu-\sigma\ln(z)]^{k}z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z,\quad k\in\mathbb{N}\cup\{0\}$ (3.12) $\displaystyle=$ $\displaystyle\exp(\mu t)\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}\mu^{k-i}\sigma^{i}\int_{0}^{+\infty}\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z.$ We claim that $\displaystyle\int_{0}^{+\infty}\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z=\Gamma^{(i)}(1-\sigma t),\quad i=0,1,\ldots,k;\ k=m,m+1,m+2.$ (3.13) Note that, by combining (3.13), (3.12) and (3.11), the proof of the theorem follows. Indeed, since ${\partial^{j}z^{\alpha-1}\over\partial\alpha^{j}}=\ln^{j}(z)\,z^{\alpha-1}$, $j=1,2,\ldots,i$, $\displaystyle\int_{0}^{+\infty}\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z=\int_{0}^{+\infty}\dfrac{\partial^{j}}{\partial(1-\sigma t)^{j}}\big[\ln^{i-j}(z)\,z^{(1-\sigma t)-1}\exp(-z)\big]\,{\rm d}z.$ (3.14) Let us note that the following conditions hold true. * • The derivatives $\displaystyle\dfrac{\partial^{j}}{\partial(1-\sigma t)^{j}}\big[\ln^{i-j}(z)\,z^{(1-\sigma t)-1}\exp(-z)\big]=\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)$ exist and are continuous in $1-\sigma t$ for all $z$ and all $1-\sigma t$, for $j=1,2,\ldots,i$. 
* • By using the following inequalities $\displaystyle|\ln(z)|\leqslant{1\over z}\,\mathds{1}_{\{x>0:x<1\}}(z)+z\,\mathds{1}_{\{x>0:x\geqslant 1\}}(z),$ $\displaystyle|\ln^{i}(z)|\leqslant{1\over z^{i-1}}\,\mathds{1}_{A_{i}}(z)+z^{i-1}\,\mathds{1}_{B_{i}}(z),\quad i=2,3,\ldots,k,$ where $A_{i}=\big\{x>0:x<{\rm e}^{iW({i-1\over i})/(i-1)}\big\}$, $B_{i}=\big\{x>0:x\geqslant{\rm e}^{iW({i-1\over i})/(i-1)}\big\}$, and $W$ is the product logarithm function (or Lambert $W$ function), we get $\displaystyle\biggl|\dfrac{\partial^{j}}{\partial(1-\sigma t)^{j}}\big[\ln^{i-j}(z)\,z^{(1-\sigma t)-1}\exp(-z)\big]\biggr|=\big|\ln^{i}(z)\big|\,z^{(1-\sigma t)-1}\exp(-z)$ $\displaystyle\leqslant\begin{cases}z^{(1-\sigma t)-1}\exp(-z)&\text{if}\ i=0,\\[5.69046pt] \big[z^{(1-\sigma t)-2}\,\mathds{1}_{\{x>0:x<1\}}(z)+z^{(1-\sigma t)}\,\mathds{1}_{\{x>0:x\geqslant 1\}}(z)\big]\exp(-z)&\text{if}\ i=1,\\[5.69046pt] \big[z^{(1-\sigma t)-i}\,\mathds{1}_{A_{i}}(z)+z^{(1-\sigma t)+i-2}\,\mathds{1}_{B_{i}}(z)\big]\exp(-z)&\text{if}\ i=2,3,\ldots,k,\end{cases}\eqqcolon G(z,1-\sigma t).$ Furthermore, the integral $\displaystyle\int_{0}^{+\infty}G(z,1-\sigma t)\,{\rm d}z\leqslant\begin{cases}\Gamma(1-\sigma t)&\text{if}\ i=0,\ t<1/\sigma,\\[5.69046pt] \Gamma(-\sigma t)+\Gamma(2-\sigma t)&\text{if}\ i=1,\ t<0,\\[5.69046pt] \Gamma(2-\sigma t-i)+\Gamma(i-\sigma t)&\text{if}\ i=2,3,\ldots,k,\ t<(2-k)/\sigma,\end{cases}$ is finite, for $k=m,m+1,m+2$. * • The integral $\int_{0}^{+\infty}\ln^{i-j}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z$ exists because of the last item above. Under the above three conditions, by the Leibniz integral rule, we can interchange the derivative with the integral in (3.14). Hence, for $j=1,2,\ldots,i$, $\displaystyle\int_{0}^{+\infty}\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z=\dfrac{{\rm d}^{j}}{{\rm d}(1-\sigma t)^{j}}\int_{0}^{+\infty}\ln^{i-j}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z.$ Letting $j=i$, we obtain $\displaystyle\int_{0}^{+\infty}\ln^{i}(z)\,z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z$ $\displaystyle=$ $\displaystyle\dfrac{{\rm d}^{i}}{{\rm d}(1-\sigma t)^{i}}\int_{0}^{+\infty}z^{(1-\sigma t)-1}\exp(-z)\,{\rm d}z$ $\displaystyle=$ $\displaystyle\Gamma^{(i)}(1-\sigma t),\quad i=0,1,\ldots,$ where in the last line we used the definition of the gamma function, $\Gamma(\alpha)=\int_{0}^{+\infty}z^{\alpha-1}\exp(-z)\,{\rm d}z$. Then the claim (3.13) follows, and this completes the proof of the theorem. ∎ By taking $m=0$ in Theorem 2 we obtain a closed expression for the moment-generating function of the BG distribution. ###### Corollary 3.6. If $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ then the moment-generating function $M_{X}(t)=\mathbb{E}[\exp(tX)]$, with $t<0$, is given by $\displaystyle M_{X}(t)={\exp(\mu t)\Gamma(1-\sigma t)\big[2-2\mu\delta+\mu^{2}\delta^{2}+2\sigma\delta(1-\mu\delta)\psi(1-\sigma t)+{\sigma^{2}\delta^{2}\over\Gamma(1-\sigma t)}\,\Gamma^{(2)}(1-\sigma t)\big]\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ where $\psi(x)=\Gamma^{\prime}(x)/\Gamma(x)$ is the digamma function and $\Gamma^{(2)}(x)={\rm d}^{2}\Gamma(x)/{\rm d}x^{2}$. ###### Remark 5. Letting $\delta=0$ in Corollary 3.6 we obtain the known formula $M_{Y}(t)=\exp(\mu t)\Gamma(1-\sigma t)$, where $Y\stackrel{{\scriptstyle d}}{{=}}X\sim F_{\rm G}(\cdot;\mu,\sigma)$. Note that, as a by-product of the proof of Theorem 2, in this case it is sufficient to take $t<1/\sigma$ instead of $t<0$. 
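Corollary 3.6 can be verified against direct numerical integration of $e^{tx}f_{\rm BG}(x)$, using $\Gamma'(x)=\Gamma(x)\psi(x)$ and $\Gamma^{(2)}(x)=\Gamma(x)[\psi^{2}(x)+\psi'(x)]$. A sketch (ours; arbitrary parameters, reusing `f_bg` from above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, digamma, polygamma

EULER = 0.57721566490153286
mu, sigma, delta, t = 1.0, 2.0, 1.0, -0.3  # t < 0 as required

Z = 1 + delta**2 * sigma**2 * np.pi**2 / 6 \
      + (delta * mu + delta * sigma * EULER - 1)**2
u = 1 - sigma * t
psi, psi1 = digamma(u), polygamma(1, u)
gamma2_over_gamma = psi**2 + psi1  # Gamma''(u) / Gamma(u)

closed = np.exp(mu * t) * gamma(u) * (
    2 - 2 * mu * delta + mu**2 * delta**2
    + 2 * sigma * delta * (1 - mu * delta) * psi
    + sigma**2 * delta**2 * gamma2_over_gamma) / Z

numeric, _ = quad(lambda x: np.exp(t * x) * f_bg(x, mu, sigma, delta),
                  -np.inf, np.inf)
print(closed, numeric)  # the two values should agree
```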
###### Remark 6. The characteristic function of $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$, denoted by $\phi_{X}(t)$, can be obtained from the moment-generating function by the relation $M_{X}(t)=\phi_{X}(-it)$. ### 3.7 Moments ###### Theorem 3. If $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ then $\displaystyle\mathbb{E}(X^{k})={\displaystyle\delta^{2}\sigma^{k+2}I(k+2;0,+\infty)-\delta\sigma^{k+1}\big[2-\delta\mu(k+2)\big]I(k+1;0,+\infty)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+{\sum_{i=0}^{k}\sigma^{i}\mu^{k-i}\big[2\binom{k}{i}-2\delta\mu\binom{k+1}{i}+\delta^{2}\mu^{2}\binom{k+2}{i}\big]I(i;0,+\infty)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ where $I(i;a,b)$ is defined in (2.4). ###### Proof. From Remark 1 with $g(X)=X^{k}$, we obtain $\displaystyle\mathbb{E}(X^{k})$ $\displaystyle=$ $\displaystyle{1\over Z_{\delta}}\,\big[2\,\mathbb{E}(Y^{k})-2\delta\,\mathbb{E}(Y^{k+1})+\delta^{2}\,\mathbb{E}(Y^{k+2})\big],$ (3.15) where $Y\sim F_{\rm G}(\cdot;\mu,\sigma)$ is as in (1.2). By taking $x\longrightarrow+\infty$ in (3.2), the Lebesgue dominated convergence theorem gives $\displaystyle\mathbb{E}(Y^{k})=\lim_{x\to+\infty}\mathbb{E}\big(Y^{k}\mathds{1}_{Y\leq x}\big)=\sum_{i=0}^{k}\binom{k}{i}\mu^{k-i}\sigma^{i}\,I(i;0,+\infty).$ (3.16) By combining (3.15) and (3.16), the proof follows. ∎ ###### Corollary 3.7. If $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ then $\displaystyle\mathbb{E}(X)$ $\displaystyle={\delta^{2}\sigma^{3}\big[2\zeta(3)+\gamma^{3}+{\gamma\pi^{2}\over 2}\big]-\delta\sigma^{2}(2-3\delta\mu)\big(\gamma^{2}+{\pi^{2}\over 6}\big)+\mu\big[2-\delta\mu(2-\delta\mu)\big]+\sigma\big[2-\delta\mu(4-3\delta\mu)\big]\gamma\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ (3.17) $\displaystyle\mathbb{E}(X^{2})$ $\displaystyle={\delta^{2}\sigma^{4}\big[8\gamma\zeta(3)+\gamma^{4}+\gamma^{2}\pi^{2}+{3\pi^{4}\over 20}\big]-2\delta\sigma^{3}(1-2\delta\mu)\big[2\zeta(3)+\gamma^{3}+{\gamma\pi^{2}\over 2}\big]+\mu^{2}\big[2-\delta\mu(2-\delta\mu)\big]\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+{2\sigma\mu\big[2-\delta\mu(3-2\delta\mu)\big]\gamma+2\sigma^{2}\big[1-3\delta\mu(1-\delta\mu)\big]\big(\gamma^{2}+{\pi^{2}\over 6}\big)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ (3.18) $\displaystyle\mathbb{E}(X^{3})$ $\displaystyle={\delta^{2}\sigma^{5}\big[20\gamma^{2}\zeta(3)+{10\pi^{2}\zeta(3)\over 3}+24\zeta(5)+\gamma^{5}+{5\gamma^{3}\pi^{2}\over 3}+{3\gamma\pi^{4}\over 4}\big]\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+{\mu^{3}\big[2-\delta\mu(2-\delta\mu)\big]+\sigma\mu^{2}\big[6-\delta\mu(8-5\delta\mu)\big]\gamma+2\sigma^{2}\mu\big[3-\delta\mu(6-5\delta\mu)\big]\big(\gamma^{2}+{\pi^{2}\over 6}\big)\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}}$ $\displaystyle+{2\sigma^{3}\big[1-\delta\mu(4-5\delta\mu)\big]\big[2\zeta(3)+\gamma^{3}+{\gamma\pi^{2}\over 2}\big]-\delta\sigma^{4}(2-5\delta\mu)\big[8\gamma\zeta(3)+\gamma^{4}+\gamma^{2}\pi^{2}+{3\pi^{4}\over 20}\big]\over 1+\delta^{2}\sigma^{2}{\pi^{2}\over 6}+(\delta\mu+\delta\sigma\gamma-1)^{2}},$ where $\gamma$ is the Euler-Mascheroni constant and $\zeta(s)$ is the Riemann zeta function. ###### Proof. 
By combining Theorem 3 with the following values of the improper integral $I(k;0,+\infty)$ in (2.4), for $k=0,1,2,3,4$ and $5$, $\displaystyle\begin{array}[]{lllll}I(0;0,+\infty)&=&1,\\[2.84544pt] I(1;0,+\infty)&=&\gamma,\\[2.84544pt] I(2;0,+\infty)&=&\gamma^{2}+{\pi^{2}\over 6},\\[2.84544pt] I(3;0,+\infty)&=&2\zeta(3)+\gamma^{3}+{\gamma\pi^{2}\over 2},\\[2.84544pt] I(4;0,+\infty)&=&8\gamma\zeta(3)+\gamma^{4}+\gamma^{2}\pi^{2}+{3\pi^{4}\over 20},\\[2.84544pt] I(5;0,+\infty)&=&20\gamma^{2}\zeta(3)+{10\pi^{2}\zeta(3)\over 3}+24\zeta(5)+\gamma^{5}+{5\gamma^{3}\pi^{2}\over 3}+{3\gamma\pi^{4}\over 4},\end{array}$ (3.25) the proof follows. ∎ For the variance of the BG distribution, ${\rm Var}(X)=\mathbb{E}(X^{2})-\big[\mathbb{E}(X)\big]^{2},$ (3.26) just substitute (3.17) and (3.18) into (3.26). ###### Remark 7 (Standardized moments). Let $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ with $\mathbb{E}(X)$ and $\sqrt{{\rm Var}(X)}>0$, both finite. Newton's binomial expansion gives $\displaystyle\mathbb{E}\biggl[{X-\mathbb{E}(X)\over\sqrt{{\rm Var}(X)}}\,\biggr]^{n}={(-1)^{n}\big[\mathbb{E}(X)\big]^{n}\over\big[{\rm Var}(X)\big]^{n/2}}\,\sum_{k=0}^{n}\binom{n}{k}(-1)^{-k}\,\big[\mathbb{E}(X)\big]^{-k}\,\mathbb{E}(X^{k}).$ (3.27) By combining (3.27), Theorem 3, the identities in (3.25) and the following relation $\displaystyle\begin{array}[]{lllll}I(6;0,+\infty)=20\gamma(2\gamma^{2}+\pi^{2})\zeta(3)+40[\zeta(3)]^{2}+144\gamma\zeta(5)+\gamma^{6}+{5\gamma^{4}\pi^{2}\over 2}+{9\gamma^{2}\pi^{4}\over 4}+{61\pi^{6}\over 168},\end{array}$ closed formulas for the skewness and kurtosis of the random variable $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ can be obtained. ## 4 Graphical illustrations and simulation results ### 4.1 Graphical illustration Here, the flexibility of the BG model is shown. Note that the BG model can be unimodal or bimodal. Figures 1 and 2 show how the BG density function is influenced by the shape parameters $\delta$ and $\mu$, since $\sigma$ is a scale parameter. Figure 1 shows that the bimodality of the BG model becomes more evident the greater the absolute value of $\delta$. The density tends to become unimodal when the values of $\delta$ and $\mu$ are far apart. In the bimodal case, as $\mu$ increases the left mode becomes smaller while the right mode becomes larger; see Figure 2. All graphics were generated with our bgumbel package [1]. Figure 1: BG density graphs for $\mu=1$, $\sigma=2$ and $\delta$ varying as shown in the legend. Figure 2: BG density graphs for $\delta=1$, $\sigma=2$ and $\mu$ varying as shown in the legend. ### 4.2 Simulation results In this section, we compare the sample mean and variance with the population mean and variance, (3.17) and (3.26). In addition, we compare the cumulative distribution function given in (2.3) with the empirical distribution. For that, we present a Markov chain Monte Carlo (MCMC) simulation study with various sample sizes. In the MCMC simulation we consider the Metropolis-Hastings algorithm as the pseudo-random number (PRN) generator [10], [6]. We chose this method instead of the inverse transform method because the BG cumulative distribution (2.3) does not admit analytical inversion. The PRNs were generated using the MCMCpack package [8] available in the R program [14]. The mean, variance and the corresponding bias of the sample estimates were computed over $10^{5}$ iterations of MCMC. These results are shown in Table 1. The performance of the MCMC algorithm was tested by varying the number of iterations, $n$. 
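For readers who prefer a self-contained illustration of such a sampler, a minimal random-walk Metropolis-Hastings sketch in Python follows (the study itself uses R's MCMCpack; the proposal scale, burn-in, and seed below are our own arbitrary choices, and `f_bg` is the density coded earlier):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, delta = -2.0, 1.0, -1.0  # first parameter set of Table 1

def mh_sample(n, step=2.0, burn=5_000):
    x = mu  # arbitrary starting point
    out = np.empty(n)
    for i in range(-burn, n):
        prop = x + step * rng.standard_normal()
        # accept with probability min(1, f(prop) / f(x))
        if rng.random() < f_bg(prop, mu, sigma, delta) / f_bg(x, mu, sigma, delta):
            x = prop
        if i >= 0:
            out[i] = x
    return out

s = mh_sample(100_000)
print(s.mean(), s.var())  # compare with E(X) = -1.0640, Var(X) = 5.0126
```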
Graphically, the convergence of the Markov chain is shown in Figure 3 for the first set of parameters in Table 1. For the other parameter sets in Table 1, the performance of the algorithm is illustrated in Appendix A, Figures 6-14. The value of $n$ varies from 1000 to 100000 for each parameter set. In all these figures the transition probability density function (TPDF), defined by the MCMCpack package, is in blue and the stationary density in red. We can conclude that the algorithm [1] is efficient for $n$ greater than 100000. Table 1: Comparison between the sample moments and population moments ($n=10^{5}$). $\mu$ | $\sigma$ | $\delta$ | sample mean | $E(X)$ | Bias (mean) | sample variance | $Var(X)$ | Bias (variance) ---|---|---|---|---|---|---|---|--- $-$2 | 1 | $-$1 | $-$1.0931 | $-$1.0640 | $-$0.0291 | 4.9268 | 5.0126 | $-$0.0858 $-$1 | 2 | $-$1 | 3.9504 | 4.0170 | $-$0.0666 | 17.126 | 18.016 | $-$0.8902 $-$1 | 2 | $-$2 | 3.9901 | 3.9909 | $-$0.0008 | 21.032 | 21.575 | $-$0.5435 $-$2 | 2 | $-$1 | 2.0065 | 1.9512 | 0.0553 | 24.104 | 24.592 | $-$0.4883 Figure 3: Transition probability density function (TPDF, left) and transition cumulative distribution function (TCDF, right), varying $n$ from $10^{3}$ to $10^{5}$. ## 5 Maximum likelihood estimation In this section, we determine the maximum likelihood estimates (MLEs) of the parameters of the BG distribution. Let $X\sim F_{\rm BG}(\cdot;\mu,\sigma,\delta)$ be a random variable with PDF $f_{\rm BG}(x;\Theta)$ defined in (2.1), where $\Theta=(\mu,\sigma,\delta)$. Let $X_{1},\dots,X_{n}$ be a random sample of $X$ and $x=(x_{1},\dots,x_{n})$ the corresponding observed values. The log-likelihood function for $\Theta=(\mu,\sigma,\delta)$ is given by $\displaystyle\ell(\Theta;x)=-n\ln Z_{\delta}-n\ln\sigma+\sum_{i=1}^{n}\left\{\ln\big[(1-\delta x_{i})^{2}+1\big]-\Big(\frac{x_{i}-\mu}{\sigma}\Big)-\exp\Big[-\Big(\frac{x_{i}-\mu}{\sigma}\Big)\Big]\right\}.$ (5.1) The first-order partial derivatives of $Z_{\delta}$ are given by $\displaystyle\textstyle\frac{\partial Z_{\delta}}{\partial\mu}=\textstyle 2\delta\big[\delta(\mu+\sigma\gamma)-1\big];$ $\displaystyle\textstyle\frac{\partial Z_{\delta}}{\partial\sigma}=\frac{\delta^{2}\sigma\pi^{2}}{3}+2\delta\gamma\big[\delta(\mu+\sigma\gamma)-1\big];$ $\displaystyle\textstyle\frac{\partial Z_{\delta}}{\partial\delta}=\frac{\delta\sigma^{2}\pi^{2}}{3}+2\big[\delta(\mu+\sigma\gamma)-1\big](\mu+\sigma\gamma).$ Then, the MLEs of $\mu,\sigma,\delta$ are the solutions of the following system of equations: $\displaystyle\frac{\partial\ell(\Theta;x)}{\partial\mu}$ $\displaystyle=-\frac{n}{Z_{\delta}}\frac{\partial Z_{\delta}}{\partial\mu}+\frac{n}{\sigma}-\frac{1}{\sigma}\sum_{i=1}^{n}\exp\Big[-\Big(\frac{x_{i}-\mu}{\sigma}\Big)\Big]=0;$ $\displaystyle\frac{\partial\ell(\Theta;x)}{\partial\sigma}$ $\displaystyle=-\frac{n}{Z_{\delta}}\frac{\partial Z_{\delta}}{\partial\sigma}-\frac{n}{\sigma}+\frac{1}{\sigma^{2}}\sum_{i=1}^{n}(x_{i}-\mu)\left\{1-\exp\Big[-\Big(\frac{x_{i}-\mu}{\sigma}\Big)\Big]\right\}=0;$ $\displaystyle\frac{\partial\ell(\Theta;x)}{\partial\delta}$ $\displaystyle=-\frac{n}{Z_{\delta}}\frac{\partial Z_{\delta}}{\partial\delta}-2\sum_{i=1}^{n}{x_{i}(1-\delta x_{i})\,\over(1-\delta x_{i})^{2}+1}=0.$ The MLEs $\widehat{\mu},\widehat{\sigma}$ and $\widehat{\delta}$ of $\mu,\sigma$ and $\delta$ are defined as the values that maximize the log-likelihood function in (5.1). In general, there is no closed form for the MLEs and, in practice, they must be obtained by numerical methods.
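In practice this maximization is handled by a general-purpose optimizer. A minimal Python sketch follows (the paper's own implementation is the bgumbel R package [1]; the starting values and the Nelder-Mead method here are our own choices, with `f_bg` and the MH draws `s` from the earlier snippets):

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, x):
    """Negative of the log-likelihood (5.1), up to sign."""
    mu, sigma, delta = theta
    if sigma <= 0:
        return np.inf
    return -np.sum(np.log(f_bg(x, mu, sigma, delta)))

x = s  # observed sample, e.g. the MH draws from the previous sketch
theta0 = np.array([np.median(x), np.std(x), 0.5])  # rough starting values
fit = minimize(negloglik, theta0, args=(x,), method="Nelder-Mead")
print(fit.x)  # MLEs (mu, sigma, delta); compare with the generating values
```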
In general there is no closed form for the MLEs, and in practice they must be obtained by numerical methods. Since the second-order partial derivatives of $Z_{\delta}$ are $\displaystyle\begin{array}[]{lllll}&\frac{\partial^{2}Z_{\delta}}{\partial\mu^{2}}=2\delta^{2};&\frac{\partial^{2}Z_{\delta}}{\partial\sigma^{2}}=\frac{\delta\pi^{2}}{3}+2\delta^{2}\gamma^{2};\\\\[5.69046pt] &\frac{\partial^{2}Z_{\delta}}{\partial\delta^{2}}=\frac{\sigma\pi^{2}}{3}+2(\mu+\sigma\gamma)^{2};&\frac{\partial^{2}Z_{\delta}}{\partial\mu\partial\sigma}=\frac{\partial^{2}Z_{\delta}}{\partial\sigma\partial\mu}=2\delta^{2}\gamma;\\\\[5.69046pt] &\frac{\partial^{2}Z_{\delta}}{\partial\mu\partial\delta}=\frac{\partial^{2}Z_{\delta}}{\partial\delta\partial\mu}=4\delta(\mu+\sigma\gamma);&\frac{\partial^{2}Z_{\delta}}{\partial\sigma\partial\delta}=\frac{\partial^{2}Z_{\delta}}{\partial\delta\partial\sigma}=\frac{\sigma\pi^{2}}{3}+4\delta\gamma(\mu+\sigma\gamma);\end{array}$ the second-order partial derivatives of the log-likelihood function $\ell(\Theta;x)$ are given by $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\mu^{2}}$ $\displaystyle=-D_{\mu,\mu}(\Theta;n)-\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\exp\Big{[}-\Big{(}\frac{x_{i}-\mu}{\sigma}\Big{)}\Big{]};$ $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\sigma^{2}}$ $\displaystyle=-D_{\sigma,\sigma}(\Theta;n)+\frac{n}{\sigma^{2}}+\frac{1}{\sigma^{3}}\sum_{i=1}^{n}(x_{i}-\mu)\left\\{2-\Big{[}2-\Big{(}\frac{x_{i}-\mu}{\sigma}\Big{)}\Big{]}\exp\Big{[}-\Big{(}\frac{x_{i}-\mu}{\sigma}\Big{)}\Big{]}\right\\};$ $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\delta^{2}}$ $\displaystyle=-D_{\delta,\delta}(\Theta;n)-2\sum_{i=1}^{n}{x_{i}^{2}\big{[}(1-\delta x_{i})^{2}-1\big{]}\over\big{[}(1-\delta x_{i})^{2}+1\big{]}^{2}}.$ In turn, the second-order mixed derivatives of $\ell(\Theta;x)$ can be written as $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\mu\partial\sigma}$ $\displaystyle=\frac{\partial^{2}\ell(\Theta;x)}{\partial\sigma\partial\mu}=-D_{\mu,\sigma}(\Theta;n)-\frac{n}{\sigma^{2}}+\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\Big{[}1-\Big{(}\frac{x_{i}-\mu}{\sigma}\Big{)}\Big{]}\exp\Big{[}-\Big{(}\frac{x_{i}-\mu}{\sigma}\Big{)}\Big{]};$ $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\mu\partial\delta}$ $\displaystyle=\frac{\partial^{2}\ell(\Theta;x)}{\partial\delta\partial\mu}=-D_{\mu,\delta}(\Theta;n);$ $\displaystyle\frac{\partial^{2}\ell(\Theta;x)}{\partial\sigma\partial\delta}$ $\displaystyle=\frac{\partial^{2}\ell(\Theta;x)}{\partial\delta\partial\sigma}=-D_{\sigma,\delta}(\Theta;n),$ where $\displaystyle\textstyle D_{u,v}(\Theta;n)=\frac{\partial}{\partial u}\big{(}\frac{n}{Z_{\delta}}\frac{\partial Z_{\delta}}{\partial v}\big{)}=\frac{n}{Z_{\delta}}\big{(}\frac{\partial^{2}Z_{\delta}}{\partial u\partial v}-{1\over Z_{\delta}}\frac{\partial Z_{\delta}}{\partial u}\frac{\partial Z_{\delta}}{\partial v}\big{)},\quad u,v\in\\{\mu,\sigma,\delta\\}.$ (5.2) Thus, the elements of the Fisher information matrix $I_{X}(\Theta)$ are defined by $\displaystyle[I_{X}(\Theta)]_{jk}=\mathbb{E}\biggl{[}{\partial\ln f_{\rm BG}({X};{\Theta})\over\partial\theta_{j}}\ {\partial\ln f_{\rm BG}({X};{\Theta})\over\partial\theta_{k}}\biggr{]},$ for $\theta_{j},\theta_{k}\in\\{\mu,\sigma,\delta\\}$ and $j,k=1,2,3$. Under known regularity conditions, the elements $[I_{X}(\Theta)]_{jk}$ can be written as $\mathbb{E}\big{[}-{\partial^{2}\ln f_{\rm BG}({X};{\Theta})\over\partial\theta_{j}\partial\theta_{k}}\big{]}$.
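As an illustration of the numerical maximization, the sketch below (our own Python illustration; the paper's actual fits use the bgumbel R package [1]) minimizes the negative of the log-likelihood (5.1) with a derivative-free method. The constant $Z_{\delta}$ is evaluated by quadrature, so no closed-form expression for it needs to be assumed, and the array `x` is a hypothetical stand-in for real data.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def bg_log_norm(mu, sigma, delta):
    # Z_delta = E[(1 - delta*Y)^2 + 1] for Y ~ Gumbel(mu, sigma),
    # evaluated by quadrature so that no closed form is presupposed.
    def integrand(t):
        z = (t - mu) / sigma
        return ((1.0 - delta * t) ** 2 + 1.0) * np.exp(-z - np.exp(-z)) / sigma
    return np.log(quad(integrand, -np.inf, np.inf)[0])

def neg_loglik(theta, x):
    # Negative of the log-likelihood (5.1).
    mu, sigma, delta = theta
    if sigma <= 0.0:
        return np.inf
    z = (x - mu) / sigma
    ll = (-x.size * (bg_log_norm(mu, sigma, delta) + np.log(sigma))
          + np.sum(np.log((1.0 - delta * x) ** 2 + 1.0) - z - np.exp(-z)))
    return -ll

# Hypothetical data; in Section 6 `x` would be the centralized block maxima.
x = np.random.default_rng(0).gumbel(loc=-2.0, scale=2.0, size=200)
fit = minimize(neg_loglik, x0=np.array([-2.0, 2.0, -0.5]), args=(x,),
               method="Nelder-Mead")
print(fit.x)  # (mu_hat, sigma_hat, delta_hat)
```

Standard errors can then be estimated from the inverse of the observed information, consistent with the Fisher information matrix derived above.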
Hence, by considering the following notations: $\displaystyle\begin{array}[]{lllll}F_{1}(X)&=&\exp\big{[}{-\big{(}\frac{X-\mu}{\sigma}\big{)}}\big{]},\\\\[8.5359pt] F_{2}(X)&=&\big{[}1-\big{(}\frac{X-\mu}{\sigma}\big{)}\big{]}\exp\big{[}{-\big{(}\frac{X-\mu}{\sigma}\big{)}}\big{]},\\\\[8.5359pt] F_{3}(X)&=&(X-\mu)\big{\\{}2-\big{[}2-\big{(}\frac{X-\mu}{\sigma}\big{)}\big{]}\exp\big{[}{-\big{(}\frac{X-\mu}{\sigma}\big{)}}\big{]}\big{\\}},\\\\[8.5359pt] F_{4}(X)&=&{X^{2}\big{[}(1-\delta X)^{2}-1\big{]}/\big{[}(1-\delta X)^{2}+1\big{]}^{2}},\end{array}$ we get $\displaystyle I_{X}(\Theta)=\scalebox{0.95}{\mbox{$\displaystyle\begin{bmatrix}D_{\mu,\mu}(\Theta;1)+\frac{1}{\sigma^{2}}\mathbb{E}[F_{1}(X)]&D_{\mu,\sigma}(\Theta;1)+\frac{1}{\sigma^{2}}-\frac{1}{\sigma^{2}}\mathbb{E}[F_{2}(X)]&D_{\mu,\delta}(\Theta;1)\\\\[8.5359pt] D_{\mu,\sigma}(\Theta;1)+\frac{1}{\sigma^{2}}-\frac{1}{\sigma^{2}}\mathbb{E}[F_{2}(X)]&D_{\sigma,\sigma}(\Theta;1)-\frac{1}{\sigma^{2}}-\frac{1}{\sigma^{3}}\mathbb{E}[F_{3}(X)]&D_{\sigma,\delta}(\Theta;1)\\\\[8.5359pt] D_{\mu,\delta}(\Theta;1)&D_{\sigma,\delta}(\Theta;1)&D_{\delta,\delta}(\Theta;1)+2\mathbb{E}[F_{4}(X)]\ \end{bmatrix}$}},$ where $D_{u,v}(\Theta;n)$ is as in (5.2). We emphasize that, by using Theorem 2 and Corollary 3.6, closed expressions for the expectations $\mathbb{E}[F_{1}(X)]$, $\mathbb{E}[F_{2}(X)]$ and $\mathbb{E}[F_{3}(X)]$ can be found. Furthermore, by using Theorem 3, a simple argument shows that $F_{4}(X)$ is an integrable random variable. ## 6 Real-world data analysis In this section, we use the BG model to fit a data set of daily pressure observations from the state of Mato Grosso do Sul, Brazil, covering the years 2015 to 2020. The data series employed were taken from the Corumbá automatic station of INMET (Instituto Nacional de Meteorologia) at https://portal.inmet.gov.br/dadoshistoricos. The required numerical evaluations are implemented using the R software [14], [6], and [8]. The maximum likelihood estimates of the parameters were obtained as in Section 5, using the bgumbel package [1]. For a random sample of the pressure $\\{X_{t}\\}_{t=1}^{T}$, $T=1774$, we fit the maximum values by the Gumbel and BG models. For this, we used the block maxima technique. That is, we form $\tau=29$ non-overlapping subsamples of length $N=60$ and take $\\{m_{i}\\}_{i=1}^{\tau}$, where $m_{i}=\max\\{x_{(i-1)N+1},\dots,x_{iN}\\}$, $\forall i=1,\ldots,\tau=[T/N]$ ($[\,\cdot\,]$ denotes the integer part). We applied the Ljung–Box test [15] to verify the null hypothesis of serial independence of the new sample of maxima. The test statistic did not reject the null hypothesis at the $1.7\%$ significance level for the $28$ blocks of size $N=60$ and the last block of size $34$. The descriptive statistics of the sample $\\{m_{i}\\}_{i=1}^{\tau}$ and of the sample centered by the mean are shown in Table 2. Table 2: Descriptive statistics Distribution | Mean | Median | Maximum | Minimum | Std Dev ---|---|---|---|---|--- Extreme values | 1009.2 | 1010.0 | 1017.9 | 1001.1 | 4.9301 Centralized extreme values | 0.0000 | 0.7893 | 8.7393 | $-$8.0607 | 4.9301 The parameter estimates and their corresponding standard error estimates (SE) from the BG and Gumbel models are shown in Table 3. The results of goodness-of-fit tests based on the Kolmogorov–Smirnov (KS) test, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) indicate that the BG model is a better fit, judging by the $p$-values and the lower AIC and BIC values; see Table 3.
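The block maxima construction described above can be sketched in a few lines of Python (our own illustration; the `pressure` array is a hypothetical stand-in for the INMET series):

```python
import numpy as np

def block_maxima(series, N=60):
    # Non-overlapping block maxima; a shorter final block is kept,
    # matching the last block of size 34 mentioned in the text.
    n_blocks = int(np.ceil(len(series) / N))
    return np.array([series[i * N:(i + 1) * N].max() for i in range(n_blocks)])

# `pressure` is a hypothetical stand-in for the T = 1774 daily observations.
pressure = np.random.default_rng(0).normal(1009.0, 5.0, size=1774)
m = block_maxima(pressure)      # sample of maxima {m_i}
m_centered = m - m.mean()       # centralized extreme values (cf. Table 2)
```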
Figures 4 and 5 also indicate that the BG model fits the data better than the simple Gumbel model. Table 3: Parameter estimates of the Gumbel and BG models. Model | Parameters | Estimate | SE | KS $p$-value | AIC | BIC ---|---|---|---|---|---|--- | $\mu$ | $-$3.6972 | 0.4613 | | | BG | $\sigma$ | 2.4661 | 0.2222 | 0.8239 | 169.8228 | 173.8194 | $\delta$ | $-$0.4128 | 0.1220 | | | Gumbel | $\mu$ | $-$2.4080 | 0.8667 | 0.6576 | 173.2287 | 175.8932 | $\beta$ | 4.3326 | 0.6405 | | | Figure 4: On the left: centralized extreme values histogram versus fitted BG and Gumbel models. On the right: empirical CDF versus theoretical CDF of the BG and Gumbel models. Figure 5: QQ-plot, empirical versus theoretical, for the BG model. ## 7 Conclusions In this paper, we proposed an extension of the Gumbel distribution based on a quadratic transformation that generates bimodal density functions, with an additional bimodality parameter that modifies the mode of the distribution, yielding an alternative model for maxima events. In this generalization, the Gumbel distribution appears as a particular case. We provided a mathematical treatment of the new model including the mode, bimodality, moment generating function and moments. We performed the modelling under a frequentist approach, and the parameters were estimated by maximum likelihood. Finally, we performed a statistical modeling of real data using the newly proposed model. The application demonstrated the practical relevance of the new model and its advantage over the Gumbel model. #### Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) (Finance Code 001). #### Disclosure statement There are no conflicts of interest to disclose. ## References * [1] P. C. Brom, C. E. G. Otiniano, R. Vila, M. B. Pereira (2021). Bimodal Gumbel Distribution, Package bgumbel. https://CRAN.R-project.org/package=bgumbel. * [2] M. N. Çankaya, Y. M. Bulut, F. Z. Doğru, O. Arslan (2015). A bimodal extension of the generalized gamma distribution. Revista Colombiana de Estadística, 38:371–384. * [3] D. Elal-Olivero (2010). Alpha-skew-normal distribution. Proyecciones Journal of Mathematics, 29:224–240. * [4] R. A. Fisher, L. H. C. Tippett (1928). Limiting forms of the frequency distribution of the largest or smallest member of a sample. Mathematical Proceedings of the Cambridge Philosophical Society, 24:180–190. * [5] E. J. Gumbel (1958). Statistics of Extremes. Columbia University Press, New York. * [6] W. K. Hastings (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57:97–109. * [7] S. Klugman, H. Panjer, G. Willmot (1998). Loss Models: From Data to Decisions. Wiley, New York. * [8] A. D. Martin, K. M. Quinn, J. H. Park (2011). MCMCpack: Markov Chain Monte Carlo in R. Journal of Statistical Software, 42(9):1–21. URL https://www.jstatsoft.org/v42/i09/. * [9] E. Z. Martinez, J. A. Achcar, A. A. Jacome, J. S. Santos (2013). Mixture and non-mixture cure fraction models based on the generalized modified Weibull distribution with an application to gastric cancer data. Computer Methods and Programs in Biomedicine, 112:343–355. * [10] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1091. * [11] I. E.
Okorie, A. C. Akpanta, J. Ohakwe (2016). The Exponentiated Gumbel Type-2 Distribution: Properties and Application. International Journal of Mathematics and Mathematical Sciences, vol. 2016, 10 pages. * [12] I. E. Okorie, A. C. Akpanta, J. Ohakwe, D. C. Chikezie, E. O. Obi (2017). The Kumaraswamy G Exponentiated Gumbel type-2 distribution. Afrika Statistika, vol. 12, No. 3. * [13] E. C. Pinheiro, S. L. P. Ferrari (2015). A comparative review of generalizations of the Gumbel extreme value distribution with an application to wind speed data. Journal of Statistical Computation and Simulation, 86:2241–2261. * [14] R-Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. * [15] A. Trapletti (2016). R: Box-Pierce and Ljung-Box Tests. stat.ethz.ch. Retrieved 2016-06-05. URL https://stat.ethz.ch/R-manual/R-devel/library/stats/html/box.test.html. * [16] R. Vila, J. Leao, H. Saulo, M. N. Shahzad, M. Santos-Neto (2020). On a bimodal Birnbaum-Saunders distribution with applications to lifetime data. Brazilian Journal of Probability and Statistics, 34:495–518 * [17] R. Vila, L. Ferreira, H. Saulo, F. Prataviera, E. M. M. Ortega (2020). A bimodal gamma distribution: Properties, regression model and applications. Statistics: A Journal of Theoretical and Applied Statistics, 54:469–493. * [18] R. Vila, M. N. Çankaya (2021). A Bimodal Weibull Distribution: Properties and Inference. To appear in Journal of Applied Statistics. * [19] R. Vila, H. Saulo, J. Roldan (2021). On some properties of the bimodal normal distribution and its bivariate version. Preprint, http://arxiv.org/abs/2106.00097. ## Appendix Figure 6: TPDF (left) and TCDF (right), n = 1000. Figure 7: TPDF (left) and TCDF (right), n = 10000. Figure 8: TPDF (left) and TCDF (right), n = 100000. Figure 9: TPDF (left) and TCDF (right), n = 1000. Figure 10: TPDF (left) and TCDF (right), n = 10000. Figure 11: TPDF (left) and TCDF (right), n = 100000. Figure 12: TPDF (left) and TCDF (right), n = 1000. Figure 13: TPDF (left) and TCDF (right), n = 10000. Figure 14: TPDF (left) and TCDF (right), n = 100000.
Department of Physics and Astronomy, University of Nebraska, Lincoln, NE 68588, USA # Leptogenesis triggered by a first-order phase transition Peisi Huang and Ke-Pan Xie ###### Abstract We propose a new scenario of leptogenesis, which is triggered by a first-order phase transition (FOPT). The right-handed neutrinos (RHNs) are massless in the old vacuum, while they acquire a mass in the new vacuum bubbles, and the mass gap is huge compared with the FOPT temperature. The ultra-relativistic bubble walls sweep the RHNs into the bubbles, where the RHNs experience fast decay and generate the lepton asymmetry, which is further converted to the baryon asymmetry of the Universe (BAU). Since the RHNs are out of equilibrium inside the bubble, the generated BAU does not suffer from the thermal bath washout. We first discuss the general feature of such a FOPT leptogenesis mechanism, and then realize it in an extended $B-L$ model. The gravitational waves from $U(1)_{B-L}$ breaking could be detected at future interferometers. ## 1 Introduction Leptogenesis is a class of mechanisms for solving the matter-antimatter asymmetry problem of the Universe Fukugita:1986hr ; Luty:1992un ; Davidson:2008bu . In this paradigm, the heavy right-handed neutrinos (RHNs) $\nu_{R}$ decay to the Standard Model (SM) particles via the CP violating Dirac Yukawa interaction $\lambda_{D}\bar{\ell}_{L}\tilde{H}\nu_{R}$, generating a lepton asymmetry which is then converted to a baryon asymmetry of the Universe (BAU) via the electroweak (EW) sphaleron process. A particularly attractive and elegant feature of this paradigm is that the same coupling can also explain the origin of neutrino masses Davis:1994jw ; Super-Kamiokande:1998kpq ; KamLAND:2002uet via Type-I seesaw Minkowski:1977sc . In the conventional thermal leptogenesis scenario, the generated BAU is determined by the competition between the enhancement from the CP violating phase in the $\lambda_{D}$ matrix and the washout effects from the thermal bath. Typically, the washout processes are so efficient that only $\mathcal{O}(10^{-2})$ of the originally generated BAU survives till today Buchmuller:2005eh . If different generations of RHNs have a mass hierarchy, then the CP violating phase has an upper bound proportional to the lightest $\nu_{R}$ mass (denoted as $M_{1}$), known as the Davidson-Ibarra bound Davidson:2002qv , which requires $M_{1}\gtrsim 10^{9}$ GeV to generate the observed BAU. (If the masses of at least two generations of $\nu_{R}$’s are nearly degenerate, the CP violating phase can be resonantly enhanced to $\mathcal{O}(1)$, independent of the $\nu_{R}$ mass; in that case, successful leptogenesis can occur for $\mathcal{O}({\rm TeV})$-scale RHNs Flanz:1996fb ; Pilaftsis:1997jf ; Pilaftsis:2003gt ; Dev:2017wwc . If some of the RHNs do not reach thermal equilibrium before the EW sphaleron process is switched off, then leptogenesis may even apply to sub-EW-scale $\nu_{R}$ Akhmedov:1998qx ; Drewes:2017zyw .) Figure 1: A sketch of leptogenesis triggered by a FOPT. The blue and white regions represent the new vacuum bubble (in which $\left\langle\phi\right\rangle\neq 0$) and the old vacuum background (in which $\left\langle\phi\right\rangle=0$), respectively. The FOPT occurs at temperature $T_{p}$, and the bubble expands at a wall velocity $v_{w}$. Inside a bubble, $\nu_{R}$ gains a huge mass $M_{1}\gg T_{p}$, such that the $\nu_{R}$’s that have penetrated the bubble decay quickly, generating the BAU.
The possible washout effects (some of which are illustrated inside the yellow rectangle) are suppressed since $M_{1}/T_{p}\gg 1$. In this article, we propose a new scenario of leptogenesis, which is triggered by a first-order phase transition (FOPT). The idea is quite simple: in many models such as the $B-L$ or Majoron model, RHNs obtain masses through the vacuum expectation value (VEV) of a scalar field $\phi$. If in the early Universe $\phi$ experiences a FOPT from $\left\langle\phi\right\rangle=0$ to $\left\langle\phi\right\rangle\neq 0$, then during the phase transition $\nu_{R}$ would be massless in the old vacuum, while massive in the new vacuum bubbles. If the mass gap is much larger than the FOPT temperature $T_{p}$, then the $\nu_{R}$’s that have penetrated into the new vacuum will be out of equilibrium and decay rapidly, generating the lepton asymmetry and hence the BAU. Since $M_{1}\gg T_{p}$, the washout effects are Boltzmann suppressed, and hence almost all the generated BAU can survive till today. The mechanism is sketched in Fig. 1. The idea of generating the BAU via the fast decay of heavy particles crossing or being produced at the bubble wall was first proposed in Ref. Baldes:2021vyz , where the general features are discussed, and a benchmark model with a heavy color-triplet scalar is given. Our work provides the first detailed study of applying this idea to leptogenesis. This scenario is distinct from the mechanisms involving the generation and diffusion of chiral and/or lepton asymmetry in the vicinity of the bubble wall Chung:2009cb ; Chiang:2016vgf ; Guo:2016ixx ; DeVries:2018aul ; Xie:2020wzn ; Cline:2021dkf (see also Pascoli:2016gkf ; Long:2017rdo ). (See Refs. Pilaftsis:2008qt ; Shuve:2017jgj for leptogenesis during a second-order phase transition.) One might be concerned that, in the case $M_{1}/T_{p}\gg 1$, the RHNs do not have sufficient energy to cross the wall, as the average kinetic energy of $\nu_{R}$ is $\mathcal{O}(T_{p})$. Were that true, most of the RHNs would be trapped in the old vacuum Hong:2020est ; Baker:2021nyl ; Kawana:2021tde ; Baker:2021sno ; Arakawa:2021wgz , and only a tiny fraction of them could be “filtered” to the new vacuum Baker:2019ndr ; Chway:2019kft ; Chao:2020adk ; Baker:2021zsf , resulting in a much suppressed $\nu_{R}$ number density in the $\left\langle\phi\right\rangle\neq 0$ phase, and the resultant BAU would also be negligible, as pointed out in Shuve:2017jgj ; Ahmadvand:2021vxs . This issue, however, can be solved, provided that the bubbles are expanding at an ultra-relativistic velocity, i.e. $\gamma_{w}\equiv(1-v_{w}^{2})^{-1/2}\gg 1$. In that case, in the wall rest frame the RHNs have an average kinetic energy $\mathcal{O}(\gamma_{w}T_{p})$, and almost all the $\nu_{R}$’s can penetrate into the bubble, yielding an unsuppressed number density $\sim T_{p}^{3}$ inside the new vacuum. We will show that $\gamma_{w}\gg 1$ can be easily achieved in a supercooling FOPT. See Refs. Katz:2016adq ; Azatov:2020ufh ; Azatov:2021ifm ; Azatov:2021irb for other cosmological implications of the ultra-relativistic walls. Compared with the conventional thermal leptogenesis scenario, our FOPT leptogenesis scenario has an enhanced RHN number density ($T_{p}^{3}$ instead of $(M_{1}T_{p})^{3/2}e^{-M_{1}/T_{p}}$) and does not suffer from thermal washout effects (which in general suppress the BAU by a factor of $\mathcal{O}(10^{-2})$).
Therefore, we naively expect the CP violating phase needed in the FOPT scenario to be much smaller than in the conventional scenario, and hence the FOPT scenario to be able to explain the BAU at a lower $M_{1}\lesssim 10^{9}$ GeV assuming the Davidson-Ibarra bound. However, the FOPT scenario suffers from washout and dilution effects after the FOPT. This is because the ultra-relativistic wall requires a strong FOPT, which releases a large amount of latent heat and then reheats the Universe to a high temperature $T_{\rm rh}>T_{p}$. It is difficult to satisfy $M_{1}/T_{\rm rh}\gg 1$, which is the condition to suppress the washout effect after reheating. In addition, the generated BAU will be diluted by a factor of $(T_{p}/T_{\rm rh})^{3}$. In this article, we will provide a realization of the FOPT leptogenesis scenario in the classically conformal $B-L$ model with $M_{1}\gtrsim 10^{11}$ GeV, taking into account the reheating washout and dilution effects. In the same parameter space, conventional thermal leptogenesis generates a BAU much smaller than the observed value. Therefore, our research extends the parameter space for leptogenesis. This article is organized as follows. Before moving to the concrete model building, we will first study the dynamics of the FOPT leptogenesis in Section 2, keeping the discussion as general as we can. Then Section 3 introduces a concrete extended $B-L$ model and demonstrates the parameter space realizing a FOPT leptogenesis scenario. The possible gravitational wave (GW) signals are also studied. Finally, we conclude in Section 4. ## 2 Dynamics of the FOPT leptogenesis ### 2.1 Basic setup In this section, we do not specify a concrete model. The discussions apply to any model that contains the following two features. First, the RHNs $\nu_{R}^{i}$ have the Majorana Yukawa interaction $\mathcal{L}\supset-\sum_{i,j}\frac{1}{2}\left(\lambda_{R}^{ij}\bar{\nu}_{R}^{i,c}\nu_{R}^{j}\frac{\phi}{\sqrt{2}}+\text{h.c.}\right),$ (1) where $i$, $j$ are family indices, $\phi$ is a real scalar field that experiences a FOPT from $\left\langle\phi\right\rangle=0$ to $\left\langle\phi\right\rangle=v_{p}$ at temperature $T_{p}$. Therefore, the RHNs are massless in the old vacuum but obtain masses $M_{R}^{ij}=\lambda_{R}^{ij}v_{p}/\sqrt{2}$ inside the new vacuum bubble. For simplicity we set $M_{R}^{ij}={\rm diag}\\{M_{1},M_{2},M_{3}\\}$ and let $\nu_{R}^{1}$ be the lightest RHN. The second feature is that the RHNs should couple to the SM leptons and bosons via the Dirac Yukawa interaction $\mathcal{L}\supset-\sum_{i,j}\left(\lambda_{D}^{ij}\bar{\ell}_{L}^{i}\tilde{H}\nu_{R}^{j}+\text{h.c.}\right),$ (2) where $\ell_{L}^{i}=(\nu_{L}^{i},e_{L}^{i})^{T}$ is the lepton doublet, and $\tilde{H}=i\tau^{2}H^{*}$ is the charge conjugation of the Higgs doublet. Eq. (2) allows the RHNs to decay via $\nu_{R}^{i}\to\ell_{L}^{j}H/\bar{\ell}_{L}^{j}H^{*}$, and hence to generate the lepton asymmetry. The magnitude of the $\lambda_{D}^{ij}$ matrix can be estimated from the seesaw relation $m_{\nu}\approx|\lambda_{D}|^{2}v_{\rm EW}^{2}/(2M_{R})$ as $|\lambda_{D}|\approx 10^{-2}\times\left(\frac{M_{R}}{10^{11}~{}{\rm GeV}}\right)^{1/2}\left(\frac{m_{\nu}}{0.05~{}{\rm eV}}\right)^{1/2},$ (3) where $v_{\rm EW}=246$ GeV is the Higgs VEV.
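A quick numerical check of Eq. (3) (our own back-of-envelope sketch, in Python):

```python
v_ew = 246.0      # GeV, Higgs VEV
m_nu = 0.05e-9    # GeV, i.e. a 0.05 eV light-neutrino mass
M_R = 1.0e11      # GeV, RHN mass scale

# Invert the seesaw relation m_nu ~ |lambda_D|^2 v_EW^2 / (2 M_R):
lam_D = (2.0 * m_nu * M_R) ** 0.5 / v_ew
print(lam_D)      # ~1.3e-2, consistent with Eq. (3)
```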
The CP violating effect is characterized by the RHN decay width asymmetry $\epsilon_{i}=\frac{\sum_{j}\Gamma(\nu_{R}^{i}\to\ell_{L}^{j}H)-\Gamma(\nu_{R}^{i}\to\bar{\ell}_{L}^{j}H^{*})}{\sum_{j}\Gamma(\nu_{R}^{i}\to\ell_{L}^{j}H)+\Gamma(\nu_{R}^{i}\to\bar{\ell}_{L}^{j}H^{*})},$ (4) which is related to the imaginary part of $(\lambda_{D}\lambda_{D}^{\dagger})^{2}$. A nonzero $\epsilon_{1}$ is needed for the generation of the BAU. According to the Davidson-Ibarra bound Davidson:2002qv , $|\epsilon_{1}|\leqslant\frac{3}{8\pi}\frac{M_{1}(m_{3}-m_{1})}{v_{\rm EW}^{2}}\approx 10^{-5}\times\left(\frac{M_{1}}{10^{11}~{}{\rm GeV}}\right)\left(\frac{m_{\nu}}{0.05~{}{\rm eV}}\right).$ (5) We can see that $\epsilon_{1}$ is quite small even for a rather heavy $\nu_{R}^{1}$. This is the basic setup of the FOPT leptogenesis mechanism. When applying this mechanism, we allow a concrete model to have more ingredients, such as a $Z^{\prime}$ boson from the gauged $U(1)_{B-L}$ group or other additional scalars and fermions. To realize our mechanism, three things must be checked. First, right after penetration, the RHNs should decay instead of annihilating with each other or scattering with the particles in the thermal bath. Second, since the penetrated RHNs are typically boosted, so are their decay products, and it is necessary to check that they do not cause additional washout effects for the BAU. Third, after the FOPT, the Universe is reheated to $T_{\rm rh}$ and we have to confirm that the thermal bath washout effects are still Boltzmann suppressed even at this temperature. Also, the dilution factor $(T_{p}/T_{\rm rh})^{3}$ should be included. All those issues will be addressed one by one in the subsequent subsections. ### 2.2 RHNs right after penetration In the vicinity of the bubble wall, we can model the bubble expansion as a one-dimensional problem: the wall is a plane perpendicular to the $z$-axis, moving with velocity $-v_{w}$, where $v_{w}>0$. The $z\to-\infty$ region is the old phase with $\left\langle\phi\right\rangle=0$, where the RHNs are assumed to be in thermal equilibrium. Therefore, the lightest RHN $\nu_{R}^{1}$ follows a boosted massless Fermi-Dirac distribution in the wall frame $f_{\mathfrak{s}}^{\rm wa}(p_{x},p_{y},p_{z})=\frac{1}{e^{\gamma_{w}(E_{0}-v_{w}p_{z})/T_{p}}+1},$ (6) where $E_{0}=\sqrt{p_{x}^{2}+p_{y}^{2}+p_{z}^{2}}$. The corresponding particle number density is $n_{\mathfrak{s}}^{\rm wa}=g_{\nu}\int\frac{d^{3}p}{(2\pi)^{3}}f_{\mathfrak{s}}^{\rm wa}(p_{x},p_{y},p_{z})=\gamma_{w}\times g_{\nu}\frac{3\zeta_{3}}{4\pi^{2}}T_{p}^{3}\equiv\gamma_{w}n_{\mathfrak{s}}^{\rm pl},$ (7) where $\zeta_{3}\approx 1.202$, and $g_{\nu}=2$ is the spin degeneracy factor. $n_{\mathfrak{s}}^{\rm wa}$ is enhanced by a factor of $\gamma_{w}$ compared with $n_{\mathfrak{s}}^{\rm pl}$ in the plasma frame, which can be understood as the Lorentz contraction of the volume element. Note that we use a superscript “wa” (“pl”) to label the wall frame (plasma frame), and a subscript “$\mathfrak{s}$” (“$\mathfrak{h}$”) to label the old vacuum with $\left\langle\phi\right\rangle=0$ (new vacuum with $\left\langle\phi\right\rangle=v_{p}$), respectively. In the wall frame the average $z$-component momentum is $\left\langle p_{z}\right\rangle^{\rm wa}_{\mathfrak{s}}=7\pi^{4}v_{w}\gamma_{w}T_{p}/(135\zeta_{3})$, and hence the RHNs are boosted along the $+z$ direction.
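Eq. (7) can be cross-checked by direct numerical integration of the boosted distribution (6); the following sketch (our own numerics, in natural units with $T_{p}=1$ and an illustrative $v_{w}$) reproduces the $\gamma_{w}$ enhancement:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import zeta

Tp, vw, g_nu = 1.0, 0.99, 2                 # natural units; illustrative v_w
gamma_w = 1.0 / np.sqrt(1.0 - vw**2)

# Boosted massless Fermi-Dirac distribution of Eq. (6), with E0 = |p|:
f = lambda p, c: 1.0 / (np.exp(gamma_w * p * (1.0 - vw * c) / Tp) + 1.0)

# n = g_nu * Int d^3p/(2pi)^3 f, using d^3p = 2*pi*p^2 dp dcos(theta):
n_numeric = g_nu / (4.0 * np.pi**2) * dblquad(
    lambda p, c: p**2 * f(p, c), -1.0, 1.0, 0.0, 50.0 * gamma_w)[0]

n_eq7 = gamma_w * g_nu * 3.0 * zeta(3) / (4.0 * np.pi**2) * Tp**3
print(n_numeric / n_eq7)                    # -> 1 up to quadrature error
```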
If $\gamma_{w}$ is large enough, $\left\langle p_{z}\right\rangle^{\rm wa}_{\mathfrak{s}}$ is enhanced enough that most $\nu_{R}^{1}$’s have sufficient energy to overcome the mass gap $M_{1}$ between the new and old vacua. Hereafter we only consider the limit $\gamma_{w}\gtrsim M_{1}/T_{p}\gg 1$, in which the $\nu_{R}^{1}$’s can generally cross the wall and enter the new vacuum. By energy conservation in the wall frame, after crossing the wall the average energy and momentum of $\nu_{R}^{1}$ in the new vacuum should be $\left\langle E\right\rangle^{\rm wa}_{\mathfrak{h}}\sim\gamma_{w}T_{p},\quad\left\langle p_{z}\right\rangle^{\rm wa}_{\mathfrak{h}}\sim\sqrt{\gamma_{w}^{2}T_{p}^{2}-M_{1}^{2}}\sim\gamma_{w}T_{p}-\frac{M_{1}^{2}}{2\gamma_{w}T_{p}}.$ (8) Transforming back to the plasma frame, one obtains the typical $\nu_{R}^{1}$ energy and momentum after penetrating into the bubble $\left\langle E\right\rangle^{\rm pl}_{\mathfrak{h}}\sim M_{1}\frac{M_{1}}{T_{p}},\quad\left\langle p_{z}\right\rangle^{\rm pl}_{\mathfrak{h}}\sim-M_{1}\frac{M_{1}}{T_{p}},$ (9) which means in the plasma frame the $\nu_{R}^{1}$’s that have entered the new vacuum are boosted in the $-z$ direction by a Lorentz factor of $\gamma_{1}\equiv M_{1}/T_{p}\gg 1$. In other words, in the plasma frame, part of the wall kinetic energy is converted into the rest mass and kinetic energy of the RHNs that enter the bubble. This causes an energy loss of the wall and serves as a source of the friction force acting on the wall, as we will discuss in Eq. (38). After entering the new vacuum, a $\nu_{R}^{1}$ may decay, or annihilate with another $\nu_{R}^{1}$, or scatter with the particles in the plasma. When calculating these interaction rates, it is convenient to work in the “$\nu_{R}^{1}$ gas frame”, which is boosted along the $-z$ direction with a Lorentz factor $\gamma_{1}$. In that frame, the $\nu_{R}$’s are on average at rest, with a relative velocity $v^{\rm ga}_{\rm rel}\sim T_{p}/M_{1}$ to each other, and the number density is $n^{\rm ga}_{\mathfrak{h}}\approx\gamma_{1}n^{\rm pl}_{\mathfrak{s}}$ Baldes:2021vyz . In the gas frame, the $\nu_{R}^{1}$ decay rate is $\Gamma_{D}=\frac{|\lambda_{D}|^{2}}{8\pi}M_{1}\approx\frac{m_{\nu}}{4\pi}\left(\frac{M_{1}}{v_{\rm EW}}\right)^{2},$ (10) where we have assumed a one-flavor SM final state for simplicity, and the second approximate equality follows from the seesaw relation. Depending on the concrete model, the $\nu_{R}^{1}$’s can annihilate with each other via various channels. For example, the Majorana interaction Eq. (1) induces the annihilation to a pair of scalars, i.e. $\nu_{R}^{1}\nu_{R}^{1}\to\phi\phi$, if kinematically allowed, while the Dirac interaction Eq. (2) induces $\nu_{R}^{1}\nu_{R}^{1}\to\ell_{L}\bar{\ell}_{L}/HH^{*}$. If the model is embedded into a gauged $U(1)_{B-L}$ group, then the $\nu_{R}^{1}\nu_{R}^{1}\to Z^{\prime*}\to f\bar{f}$ and $\nu_{R}^{1}\nu_{R}^{1}\to Z^{\prime}Z^{\prime}$ channels may be important, where $Z^{\prime}$ and $f$ denote the $U(1)_{B-L}$ gauge boson and SM fermions, respectively. Then the annihilation rate can be expressed as $\Gamma_{\rm ann}=\sum_{X}n_{\mathfrak{h}}^{\rm ga}\left\langle\sigma_{\nu_{R}^{1}\nu_{R}^{1}\to X}v_{\rm rel}\right\rangle_{\rm ga},$ (11) summing over all possible annihilation final states. The subscript “ga” of $\left\langle\sigma v\right\rangle$ is to remind us that this is an average performed in the gas frame. $\Gamma_{\rm ann}$ scales as $T_{p}^{3}/M_{1}^{2}$.
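For orientation, Eq. (10) can be evaluated numerically; since the $\mathcal{O}(1)$ prefactors of $\Gamma_{\rm ann}$ and $\Gamma_{\rm sca}$ are model dependent, this sketch (our own illustration, using the benchmark mass adopted later in Section 3) evaluates only the decay rate:

```python
import numpy as np

v_ew, m_nu = 246.0, 0.05e-9   # GeV; m_nu = 0.05 eV
M1 = 2.5e11                   # GeV, the benchmark mass adopted in Section 3

# Gas-frame decay rate of Eq. (10), using the seesaw relation:
Gamma_D = m_nu / (4.0 * np.pi) * (M1 / v_ew) ** 2
print(Gamma_D)                # ~4e6 GeV: the decay is extremely fast
```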
Another possible fate of the penetrated $\nu_{R}^{1}$’s is to scatter with the particles in the plasma. The Dirac Yukawa interaction can mediate scattering channels such as $\nu_{R}^{1}\ell_{L}\to q_{L}\bar{t}_{R}$ or $\nu_{R}^{1}t_{R}\to q_{L}\bar{\ell}_{L}$ and their charge conjugations and crossings. The corresponding interaction rates are $\Gamma_{\rm sca}=\sum_{a,X}\gamma_{1}n_{a}^{\rm pl}\left\langle\sigma_{\nu_{R}^{1}a\to X}\right\rangle_{\rm ga},$ (12) summing over all possible initial states $a$ and final states $X$. In the gas frame, the plasma species $a$ is boosted by a Lorentz factor of $\gamma_{1}$, therefore the number density is enhanced by $\gamma_{1}$ compared with $n_{a}^{\rm pl}\sim T_{p}^{3}$ in the plasma frame, and we have taken the relative velocity between $\nu_{R}^{1}$ and $a$ to be approximately 1. The scattering cross section $\langle\sigma_{\nu_{R}^{1}a\to X}\rangle_{\rm ga}$ scales as $1/M_{1}^{2}$, thus $\Gamma_{\rm sca}\sim T_{p}^{2}/M_{1}$. For the sake of leptogenesis, we want the $\nu_{R}^{1}$’s to decay rather than annihilate with each other or scatter with the particles in the plasma, i.e. $\Gamma_{D}>\Gamma_{\rm ann},\quad\Gamma_{D}>\Gamma_{\rm sca}.$ (13) Under this condition, the $\nu_{R}^{1}$’s swept by the bubble wall decay immediately and generate a BAU of $Y_{B}^{p}=-c_{s}\epsilon_{1}\frac{n_{\mathfrak{s}}^{\rm pl}}{s}=-c_{s}\epsilon_{1}\frac{135\zeta_{3}}{4\pi^{4}g_{*}},$ (14) where $s=(2\pi^{2}/45)g_{*}T^{3}$ is the entropy density, $g_{*}\approx 100$ is the number of relativistic degrees of freedom, and $c_{s}=28/79$ is the conversion factor from the lepton asymmetry to the BAU. As the CP asymmetry $\epsilon_{1}$ is bounded by Eq. (5), we see that the maximal value of the BAU is proportional to $M_{1}$. ### 2.3 The boosted decay products of RHNs In the plasma frame, the $\nu_{R}^{1}$’s in the new vacuum are moving along the $-z$ direction with a typical energy $E_{1}=\gamma_{1}M_{1}=M_{1}^{2}/T_{p}$. The decay products $\ell_{L}H/\bar{\ell}_{L}H^{*}$ share the same order of energy and hence are also boosted. These out-of-equilibrium SM particles interact with other SM particles in the plasma, causing cascade scatterings, which might reduce the BAU. Following Ref. Baldes:2021vyz , we model the energy of the particles produced in the $n$-th step of the cascade scattering as $E_{1}/2^{n}$. The washout effect is mainly due to the possibility that the energetic particles fuse into an on-shell RHN, i.e. $\ell_{L}H\to\nu_{R}^{1}\to\bar{\ell}_{L}H^{*}$, and the corresponding rate can be estimated as Baldes:2021vyz $\Gamma_{\rm on}\approx\frac{\Gamma_{\ell_{L}H}\Gamma_{\bar{\ell}_{L}H^{*}}}{\Gamma_{D}}\frac{M_{1}T_{p}}{E_{1}^{2}}\exp\left\\{-\frac{M_{1}^{2}}{4E_{1}T_{p}}\right\\}\approx\frac{2^{2n}T_{p}^{3}}{4M_{1}^{3}}\Gamma_{D}e^{-2^{n}/4},$ (15) where we have approximated $\Gamma_{\ell_{L}H}\approx\Gamma_{\bar{\ell}_{L}H^{*}}\approx\Gamma_{D}/2$. We can see that the washout rate decreases very quickly as $n$ increases, so we only need to account for the first step of scattering, i.e. $n=1$. Being charged under the SM gauge groups, the boosted $\ell_{L}/\bar{\ell}_{L}$ and $H/H^{*}$ particles also thermalize via elastic EW scattering with the SM particles in the plasma. The thermalization rate can be estimated by calculating the energy loss of a boosted lepton in an elastic scattering with another SM particle in the thermal bath.
The two incoming particles have momenta $p_{1}^{\mu}=\left(\frac{E_{1}}{2^{n}},0,0,\frac{E_{1}}{2^{n}}\right),\quad p_{2}^{\mu}=\left(T_{p},0,0,-T_{p}\right),$ (16) respectively, and they scatter through exchanging a $t$-channel $W/Z$ boson. It is straightforward to show that the energy loss of the boosted lepton is $\delta E_{1}\approx-\hat{t}/(4T_{p})$ in the plasma frame, and the scattering cross section is $\frac{d\sigma}{d\hat{t}}=\frac{1}{16\pi\hat{s}^{2}}|i\mathcal{M}|^{2}\approx\frac{1}{16\pi\hat{s}^{2}}\frac{g_{2}^{4}\hat{s}^{2}}{\hat{t}^{2}}=\frac{\pi\alpha_{W}^{2}}{\hat{t}^{2}},$ (17) where $g_{2}$ is the gauge coupling of the $SU(2)_{L}$ group, and $\hat{s}$, $\hat{t}$ are the Mandelstam variables. Therefore, the thermalization rate can be estimated as Baldes:2021vyz $\Gamma_{\rm th}=\frac{n_{\rm EW}^{\rm pl}}{E_{1}/2^{n}}\int_{-\hat{s}}^{-m_{W}^{2}}d\hat{t}\frac{d\sigma}{d\hat{t}}\delta E_{1}=\frac{\zeta_{3}g_{\rm EW}2^{n}\alpha_{W}^{2}T_{p}^{3}}{4\pi M_{1}^{2}}\ln\frac{3M_{1}^{2}}{5\pi 2^{n}\alpha_{W}T_{p}^{2}},$ (18) where $n_{\rm EW}^{\rm pl}$ is the number density of the particles that participate in such EW elastic scattering, and the corresponding number of degrees of freedom is $g_{\rm EW}=46$, including the SM fermions and gauge bosons as well as the Higgs doublet. The upper limit of integration of $\hat{t}$ is set to $-m_{W}^{2}$ to avoid infrared divergence, where $m_{W}^{2}=20\pi\alpha_{W}T_{p}^{2}/3$ is the thermal mass of the $W$ boson Weldon:1982aq . We see that $\Gamma_{\rm th}$ increases rapidly with $n$. To avoid washout from the boosted decay products, we require $\Gamma_{\rm th}\big{|}_{n=1}>\Gamma_{\rm on}\big{|}_{n=1},\quad\Gamma_{\rm th}\big{|}_{n=1}>H_{p},$ (19) where $H_{p}$ is the Hubble constant at the FOPT temperature. Note that $H_{p}$ is not solely determined by temperature, as the vacuum energy from the potential could dominate the energy of the Universe in the case of a supercooling FOPT. Once these inequalities are satisfied, the boosted decay products $\ell_{L}H/\bar{\ell}_{L}H^{*}$ thermalize very quickly, and the washout effect is completely negligible. ### 2.4 Reheating after the FOPT completes The latent heat released from a FOPT will reheat the Universe to a new temperature $T_{\rm rh}=(1+\alpha)^{1/4}T_{p}$, where $\alpha$ is the ratio of latent heat to the radiation energy density of the Universe, whose detailed definition will be given in Section 3.3. Since our scenario needs a strong FOPT to provide fast moving bubble walls, typically $\alpha\gg 1$, the reheating temperature could be very high, such that the $B-L$ violating interactions are active again, erasing the generated $B-L$ asymmetry as in conventional thermal leptogenesis. The first type of dangerous process is the thermal production of RHNs. For the inverse decay, i.e. $\ell_{L}H/\bar{\ell}_{L}H^{*}\to\nu_{R}^{i}$, the simplified Boltzmann equation gives $\frac{dY_{\nu_{R}^{i}}}{dt}=\frac{1}{s}\int\frac{d^{3}p}{(2\pi)^{3}2E_{i}}e^{-E_{i}/T}M_{i}\Gamma_{D,i}=\frac{\Gamma_{D,i}M_{i}^{2}T}{4\pi^{2}s}K_{1}\left(\frac{M_{i}}{T}\right),$ (20) where $Y_{\nu_{R}^{i}}=n_{\nu_{R}^{i}}/s$ is the yield of the $i$-th generation of RHN, $E_{i}\equiv\sqrt{|\textbf{p}|^{2}+M_{i}^{2}}$ is the on-shell energy, $\Gamma_{D,i}$ is the decay width of $\nu_{R}^{i}$, and $K_{1}$ is the modified Bessel function of the first kind.
From this, the inverse decay rate can be estimated as $\Gamma_{\rm ID}^{i}=\frac{\Gamma_{D,i}M_{i}^{2}T_{\rm rh}}{4\pi^{2}s_{\rm rh}}K_{1}\left(\frac{M_{i}}{T_{\rm rh}}\right),$ (21) where $s_{\rm rh}=s|_{T_{\rm rh}}$. We should have $\Gamma_{\rm ID}^{i}<H_{\rm rh}$ such that the RHNs are not thermally produced after the FOPT, where the Hubble constant is $H_{\rm rh}=2\pi\sqrt{\pi g_{*}/45}T_{\rm rh}^{2}/M_{\rm Pl}$ with $M_{\rm Pl}=1.22\times 10^{19}$ GeV, as the Universe is in a radiation era after the FOPT. The rate of RHNs being produced in pairs in the plasma can be estimated as $\Gamma_{\rm pr}^{i}=\sum_{X}n_{\nu_{R}^{i}}^{\rm eq}\left\langle\sigma_{\nu_{R}^{i}\nu_{R}^{i}\to X}\right\rangle=\sum_{X}2\frac{M_{i}^{2}T_{\rm rh}}{2\pi^{2}}K_{2}\left(\frac{M_{i}}{T_{\rm rh}}\right)\left\langle\sigma_{\nu_{R}^{i}\nu_{R}^{i}\to X}\right\rangle,$ (22) where $n_{\nu_{R}^{i}}^{\rm eq}$ is the equilibrium distribution of $\nu_{R}^{i}$ in the plasma, whose concrete expression is given in the second equality, with $K_{2}$ being the modified Bessel function of the second kind. Depending on the model, the pair production channels could include $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime*}\to f\bar{f}$, $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime}Z^{\prime}$, $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime}\phi$, $\nu_{R}^{i}\nu_{R}^{i}\to\phi\phi$, etc. We require $\Gamma_{\rm ID}^{i}<H_{\rm rh},\quad\Gamma_{\rm pr}^{i}<H_{\rm rh},$ (23) to avoid thermal bath washout after the FOPT reheating. Both $\Gamma_{\rm ID}^{i}$ and $\Gamma_{\rm pr}^{i}$ are suppressed by the Bessel functions, which behave as $K_{j}(z)\sim e^{-z}\sqrt{\pi/(2z)}$ for $z\gg 1$. Therefore, $M_{i}/T_{\rm rh}\gg 1$ could exponentially suppress those washout effects. In other words, we need $T_{\rm rh}=(1+\alpha)^{1/4}T_{p}$ to remain small compared with the RHN masses; this is, however, in tension with the requirement of a strong supercooling FOPT, which generally leads to $\alpha\gg 1$. Provided that Eqs. (13), (19) and (23) are satisfied, the FOPT leptogenesis scenario is realized. Namely, the $\nu_{R}^{1}$’s that have entered the new vacuum bubble during the FOPT will decay and generate the lepton asymmetry, which is not washed out by the plasma. The BAU surviving today would be $Y_{B}=-c_{s}\epsilon_{1}\frac{135\zeta_{3}}{4\pi^{4}g_{*}}\left(\frac{T_{p}}{T_{\rm rh}}\right)^{3},$ (24) which is diluted by a factor of $(T_{p}/T_{\rm rh})^{3}$ compared to Eq. (14), due to the entropy production of the FOPT reheating. For a successful FOPT leptogenesis, $Y_{B}$ should reach the observed BAU, i.e. $Y_{B}^{\rm obs}\approx 0.9\times 10^{-10}$ ParticleDataGroup:2020ssz . In summary, in the FOPT leptogenesis scenario, the FOPT should be strong to provide fast expanding bubbles, which sweep the RHNs into the new vacuum. In this way, the abundant massless $\nu_{R}^{1}$ density in the old vacuum is directly transferred into the new vacuum, where the $\nu_{R}^{1}$’s are so massive that their out-of-equilibrium decay can generate the BAU without the washout effects. However, the reheating from the strong FOPT might cause additional washout and dilution effects, and hence the application of this mechanism requires a highly non-trivial tradeoff between the strength of the FOPT and the amount of reheating. A concrete model that succeeds in realizing the FOPT leptogenesis scenario is given in the next section.
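Putting the pieces together, the bookkeeping of Eqs. (5), (14) and (24) can be summarized in a short numeric sketch (our own illustration; $\epsilon_{1}$ is set to its Davidson-Ibarra maximum), which shows how much reheating dilution the mechanism can tolerate:

```python
import numpy as np
from scipy.special import zeta

g_star, c_s = 100.0, 28.0 / 79.0
Y_B_obs = 0.9e-10

def Y_B(eps1, Tp_over_Trh):
    # Eq. (24): the BAU after the FOPT, including reheating dilution;
    # the sign convention of Eq. (24) is absorbed by taking |eps1|.
    return (c_s * abs(eps1) * 135.0 * zeta(3)
            / (4.0 * np.pi**4 * g_star) * Tp_over_Trh**3)

# Davidson-Ibarra maximum of Eq. (5) for M1 = 2.5e11 GeV, m_nu = 0.05 eV:
eps1_max = 1.0e-5 * 2.5
for r in (1.0, 2.0, 4.0, 8.0):            # r = T_rh / T_p
    print(r, Y_B(eps1_max, 1.0 / r) / Y_B_obs)
```

For this benchmark the observed BAU can be reproduced as long as $T_{\rm rh}/T_{p}\lesssim 7$, which quantifies how much reheating the mechanism can tolerate.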
## 3 An extended classically conformal $B-L$ model ### 3.1 The model and particle spectrum The conventional (or, say, minimal) $B-L$ model Davidson:1978pm ; Marshak:1979fm ; Mohapatra:1980qe ; Davidson:1987mh is defined by gauging the $U(1)_{B-L}$ group and introducing three generations of RHNs (with $B-L$ quantum number $X=-1$) for gauge anomaly cancellation, and one complex scalar field $\Phi=(\phi+i\eta)/\sqrt{2}$ charged as $X=2$ to break the $U(1)_{B-L}$ spontaneously. In this work, we extend the model with one more complex scalar $S$, which has the same quantum number as $\Phi$. The relevant Lagrangian can be written as $\begin{split}\mathcal{L}_{B-L}=&~{}\sum_{i}\bar{\nu}_{R}^{i}i\not{D}\nu_{R}^{i}-\frac{1}{2}\sum_{i,j}\left(\lambda_{R}^{ij}\bar{\nu}_{R}^{i,c}\Phi\nu_{R}^{j}+\text{h.c.}\right)-\sum_{i,j}\left(\lambda_{D}^{ij}\bar{\ell}_{L}^{i}\tilde{H}\nu_{R}^{j}+\text{h.c.}\right)\\\ &~{}+D_{\mu}\Phi^{\dagger}D^{\mu}\Phi+D_{\mu}S^{\dagger}D^{\mu}S-V(\Phi,S)-\frac{1}{4}Z^{\prime}_{\mu\nu}Z^{\prime\mu\nu},\end{split}$ (25) where $D_{\mu}=\partial_{\mu}-ig_{B-L}XZ_{\mu}^{\prime}$ is the $U(1)_{B-L}$ gauge covariant derivative. For simplicity, we take $\lambda_{R}^{ij}={\rm diag}\\{\lambda_{R,1},\lambda_{R,2},\lambda_{R,3}\\}$. Note that the SM fermions are also charged under the $U(1)_{B-L}$ group, with the quarks having $X=1/3$ and the leptons having $X=-1$. The reason why we have to extend the minimal $B-L$ model will be given in Section 3.3. In principle, $S$ can also couple to the RHNs via $\bar{\nu}_{R}^{i,c}S\nu_{R}^{j}$; however, as we will see, $S$ never gets a VEV, thus it does not contribute to the RHN mass. On the other hand, $S$ can provide an extra CP violating phase to $\nu_{R}^{1}$ decay LeDall:2014too ; Alanne:2018brf . We do not consider such CP asymmetry enhancement effects here, as they are irrelevant to the core of our FOPT leptogenesis mechanism. As for the scalar potential $V(\Phi,S)$, we adopt the classically conformal assumption Iso:2009ss ; Iso:2009nw ; Das:2015nwk , as it is known that this kind of potential favors a strong supercooling FOPT Jinno:2016knw ; Iso:2017uuu ; Marzo:2018nov ; Ellis:2019oqb ; Bian:2019szo ; Ellis:2020nnr ; Jung:2021vap . At tree level, the potential is $V_{\rm tree}(\Phi,S)=\lambda_{\phi}|\Phi|^{4}+\lambda_{s}|S|^{4}+\lambda_{\phi s}|\Phi|^{2}|S|^{2},$ (26) where only dimensionless quartic couplings are involved. The one-loop contributions from $Z^{\prime}$ and $\nu_{R}^{i}$ Iso:2009ss ; Iso:2009nw ; Das:2015nwk and $S$ Haruna:2019zeu ; Hamada:2020wjh ; Hamada:2021jls ; Kawana:2022fum induce a Coleman-Weinberg potential for $\Phi$, which in the unitary gauge can be written as $V(\phi)=V_{0}+\frac{B}{4}\phi^{4}\left(\ln\frac{\phi}{v_{\phi}}-\frac{1}{4}\right),$ (27) where $B=\frac{6}{\pi^{2}}\left(\frac{\lambda_{\phi s}^{2}}{96}+g_{B-L}^{4}-\sum_{i}\frac{\lambda_{R,i}^{4}}{96}\right),$ (28) is a positive constant. This potential has a minimum at $\left\langle\phi\right\rangle=v_{\phi}\neq 0$, which breaks the $U(1)_{B-L}$ symmetry spontaneously and provides masses for the particles in Eq. (25) as follows $M_{Z^{\prime}}=2g_{B-L}v_{\phi},\quad M_{i}=\lambda_{R,i}\frac{v_{\phi}}{\sqrt{2}},\quad M_{\phi}=\sqrt{B}v_{\phi},\quad M_{S}=\frac{1}{\sqrt{2}}\sqrt{\lambda_{\phi s}}v_{\phi}.$ (29) The vacuum energy is adopted as $V_{0}=Bv_{\phi}^{4}/16$ to have $V(v_{\phi})=0$.
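As a numerical illustration of Eqs. (28) and (29), the sketch below evaluates the spectrum at the benchmark couplings used later in Section 3.3 (the green star in Fig. 2); the value of $v_{\phi}$ is not quoted directly in the text and is reconstructed here from $M_{1}=\lambda_{R,1}v_{\phi}/\sqrt{2}$, which should be regarded as our own assumption:

```python
import numpy as np

# Benchmark couplings of Section 3.3 (the green star in Fig. 2):
g_BL, lam_phis = 0.05, 3.5
lam_R = np.array([0.3, 1.2, 1.2])    # lambda_R2 = lambda_R3 = 4*lambda_R1

# v_phi is reconstructed from M1 = lam_R1*v_phi/sqrt(2) with M1 = 2.5e11 GeV
# (an assumption on our part; v_phi itself is not quoted in the text):
v_phi = np.sqrt(2.0) * 2.5e11 / lam_R[0]

B = 6.0 / np.pi**2 * (lam_phis**2 / 96.0 + g_BL**4
                      - np.sum(lam_R**4) / 96.0)     # Eq. (28)
M_Zp  = 2.0 * g_BL * v_phi                           # Eq. (29)
M_i   = lam_R * v_phi / np.sqrt(2.0)
M_phi = np.sqrt(B) * v_phi
M_S   = np.sqrt(lam_phis / 2.0) * v_phi
print(B, M_Zp, M_i, M_phi, M_S)   # B > 0 and M_1 > M_Z' are both realized
```

This makes concrete the point argued below: with the extra scalar, $B$ is dominated by $\lambda_{\phi s}$, so $M_{1}>M_{Z^{\prime}}$ can be realized while keeping $B>0$.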
### 3.2 FOPT and ultra-relativistic bubble walls At finite temperature, the potential receives corrections from the one-loop thermal integrals and daisy resummation terms $\Delta V_{T}(\phi,T)=2\frac{T^{4}}{2\pi^{2}}J_{B}\left(\frac{\lambda_{\phi s}\phi^{2}}{2T^{2}}\right)+3\frac{T^{4}}{2\pi^{2}}J_{B}\left(\frac{4g_{B-L}^{2}\phi^{2}}{T^{2}}\right)+2\sum_{i}\frac{T^{4}}{2\pi^{2}}J_{F}\left(\frac{\lambda_{R,i}^{2}\phi^{2}}{2T^{2}}\right)\\\ -2\frac{T}{12\pi}\frac{\lambda_{\phi s}^{3/2}}{2^{3/2}}\left[\left(\phi^{2}+\frac{T^{2}}{12}\right)^{3/2}-\phi^{3}\right]-\frac{2g_{B-L}^{3}}{3\pi}T\left[\left(\phi^{2}+T^{2}\right)^{3/2}-\phi^{3}\right],$ (30) where the thermal integral functions are defined as $J_{B/F}(y)=\pm\int_{0}^{\infty}dxx^{2}\ln\left(1\mp e^{-\sqrt{x^{2}+y}}\right).$ (31) Therefore, the complete one-loop thermal potential is $V_{T}(\phi,T)=V(\phi)+\Delta V_{T}(\phi,T),$ (32) which can trigger a FOPT in the early Universe. The FOPT of the minimal classically conformal $B-L$ model has already been extensively studied Jinno:2016knw ; Iso:2017uuu ; Marzo:2018nov ; Ellis:2019oqb ; Bian:2019szo ; Ellis:2020nnr , and in this work we use our own codes to derive the FOPT dynamics of our extended $B-L$ model. When $T$ is sufficiently high, the Universe stays in the $U(1)_{B-L}$ preserving vacuum $\phi=0$. At the critical temperature $T_{c}$, the potential $V_{T}(\phi,T)$ develops another degenerate vacuum at $\phi=v_{c}$. When $T$ falls below $T_{c}$, the $U(1)_{B-L}$ breaking vacuum is energetically preferred, i.e. $\left\langle\phi\right\rangle=v(T)$ becomes the true vacuum, and we have $v(T_{c})=v_{c}$ and $v(0)=v_{\phi}$. The Universe then acquires a vacuum decay rate Linde:1981zj $\Gamma(T)\sim T^{4}\left(\frac{S_{3}(T)}{2\pi T}\right)^{3/2}e^{-S_{3}(T)/T},$ (33) to the true vacuum, where $S_{3}(T)$ is the action of the bounce solution, which we solve numerically from $V_{T}(\phi,T)$ using the shooting algorithm. When the decay probability in a Hubble volume and a Hubble time scale reaches $\mathcal{O}(1)$, new vacuum bubbles start to nucleate. Given $\Gamma(T)$, the volume fraction of the old vacuum in the Universe is Guth:1979bh ; Guth:1981uk $p(T)\equiv e^{-I(T)}=\exp\left\\{-\frac{4\pi}{3}\int_{T}^{T_{c}}dT^{\prime}\frac{\Gamma(T^{\prime})}{T^{\prime 4}H(T^{\prime})}\left[\int_{T}^{T^{\prime}}d\tilde{T}\frac{1}{H(\tilde{T})}\right]^{3}\right\\},$ (34) where we have taken the bubble velocity $v_{w}\to 1$, and the Hubble constant is given by the Friedmann equation $H^{2}(T)=\frac{8\pi}{3M_{\rm Pl}^{2}}\left(\frac{\pi^{2}}{30}g_{*}T^{4}+V_{0}\right).$ (35) By definition $p(T_{c})=1$. When $T$ decreases, $p(T)\to 0$, and the Universe transitions entirely to the new vacuum, completing the FOPT. The milestone at which the new vacuum bubbles form an infinite connected cluster is called percolation; it happens at $p(T_{p})=0.71$ rintoul1997precise , which defines the percolation temperature $T_{p}$ and the VEV $v_{p}=v(T_{p})$. We will calculate the leptogenesis at this temperature. (Note that the large vacuum energy released from a supercooled FOPT can lead to a short vacuum domination era. We have checked that the FOPT can complete by verifying Ellis:2018mja $H(T)\left(3+T\frac{dI(T)}{dT}\right)\Big{|}_{T_{p}}<0,$ (36) where $I(T)$ is defined in Eq. (34).) Define $\Delta V(T)=V_{T}(0,T)-V_{T}(v(T),T),$ (37) as the positive free energy difference between the false and true vacua, and let $\Delta V_{p}\equiv\Delta V(T_{p})$.
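The thermal integrals (31) have no closed form; a minimal quadrature sketch (our own numerics, assuming non-negative $y=m^{2}/T^{2}$) is:

```python
import numpy as np
from scipy.integrate import quad

def J_B(y):
    # Bosonic thermal integral of Eq. (31); J_B(0) = -pi^4/45.
    g = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + y)))
    return quad(g, 0.0, 40.0)[0]

def J_F(y):
    # Fermionic thermal integral of Eq. (31); J_F(0) = -7*pi^4/360.
    g = lambda x: x**2 * np.log(1.0 + np.exp(-np.sqrt(x**2 + y)))
    return -quad(g, 0.0, 40.0)[0]

print(J_B(0.0), -np.pi**4 / 45.0)          # sanity checks of the limits
print(J_F(0.0), -7.0 * np.pi**4 / 360.0)
```

Such routines can be tabulated once and interpolated when scanning $V_{T}(\phi,T)$ over temperature.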
The behavior of the wall velocity is determined by the vacuum pressure $\Delta V_{p}$ and the leading-order (LO) friction Bodeker:2009qy $\mathcal{P}_{1\to 1}=\left(\lambda_{\phi s}+12g_{B-L}^{2}+\frac{1}{2}\sum_{i}\lambda_{R,i}^{2}\right)\frac{v_{p}^{2}T_{p}^{2}}{24},$ (38) which comes from the mass differences of $S$, $Z^{\prime}$ and the RHNs between the two sides of the bubble wall. If $\Delta V_{p}>\mathcal{P}_{1\to 1},$ (39) then the wall will be accelerated up to a high velocity that is very close to the speed of light, providing the necessary condition for our FOPT leptogenesis scenario. When $\gamma_{w}\gg 1$, the beyond-LO contributions to the friction force become important. Ref. Bodeker:2017cim performs the first next-to-leading-order (NLO) calculation, showing that the emission of gauge bosons when particles cross the wall can induce a friction force scaling as $\mathcal{P}_{1\to 2}\propto\gamma_{w}$, preventing the bubble walls from running away. Recently, the friction force on the wall has been studied in many works Hoche:2020ysm ; Gouttenoire:2021kjv ; Ai:2021kak ; Cai:2020djd ; Wang:2022txy ; Laurent:2022jrs , and we take the results of Refs. Hoche:2020ysm ; Gouttenoire:2021kjv to calculate the evolution of the bubble walls. While both works consider the resummation effect of the $1\to N$ emission of gauge bosons, they obtain different friction pressures $\mathcal{P}_{1\to N}$: Ref. Hoche:2020ysm finds $\mathcal{P}_{1\to N}\propto\gamma_{w}^{2}$, whereas Ref. Gouttenoire:2021kjv gives $\mathcal{P}_{1\to N}\propto\gamma_{w}$. More concretely, applying these results to our model we find $\mathcal{P}_{1\to N}^{\text{Hoche:2020ysm}}\approx\gamma_{w}^{2}\left[\left(4\times 2\times 3\times\frac{1}{9}+4+2\right)\times 3\right]\frac{3\zeta_{3}(2\ln 2-1)}{32\pi^{4}}g_{B-L}^{2}T_{p}^{4},$ (40) and $\mathcal{P}_{1\to N}^{\text{Gouttenoire:2021kjv}}\approx\gamma_{w}\left[\left(4\times 2\times 3\times\frac{1}{9}+4+2\right)\times 3\right]\frac{3\kappa\zeta_{3}}{8\pi^{4}}g_{B-L}^{3}v_{p}T_{p}^{3}\ln\frac{v_{p}}{T_{p}},$ (41) where only the dominant SM fermion contributions are included, and $\kappa\approx 4$. The wall stops accelerating when the friction force balances the vacuum pressure, i.e. $\Delta V_{p}=\mathcal{P}_{1\to 1}+\mathcal{P}_{1\to N}$. Therefore, given the resummed friction force $\mathcal{P}_{1\to N}$, we are able to derive the terminal wall velocity $\gamma_{\rm eq}^{\text{Hoche:2020ysm}}=\sqrt{\frac{\Delta V_{p}-\mathcal{P}_{1\to 1}}{\mathcal{P}_{1\to N}^{\text{Hoche:2020ysm}}/\gamma_{w}^{2}}};\quad\gamma_{\rm eq}^{\text{Gouttenoire:2021kjv}}=\frac{\Delta V_{p}-\mathcal{P}_{1\to 1}}{\mathcal{P}_{1\to N}^{\text{Gouttenoire:2021kjv}}/\gamma_{w}}.$ (42) However, the wall might not have reached the terminal velocity at $T_{p}$. We use the method from Refs.
Ellis:2019oqb ; Ellis:2020nnr to evaluate the evolution of the wall velocity and confirm that for the parameter space of interest $\gamma_{w}\big{|}_{T_{p}}\gg M_{1}/T_{p}$ is indeed satisfied for either choice of $\mathcal{P}_{1\to N}$, and hence all the $\nu_{R}^{1}$’s can penetrate the wall, which is the necessary condition for the FOPT leptogenesis. ### 3.3 FOPT leptogenesis and the GW signals Given the FOPT environment with ultra-relativistic bubble walls that can sweep all the RHNs into the new vacuum, we apply Eqs. (13), (19) and (23) of Section 2 to our model Eq. (25) to ensure that the penetrated $\nu_{R}^{1}$’s indeed decay before annihilating or scattering, that the boosted decay products thermalize instead of erasing the $B-L$ asymmetry, and that the reheating temperature remains significantly below the RHN masses so that the thermal washout processes are Boltzmann suppressed. As for the $\nu_{R}^{1}$’s right after penetration, the possible annihilation channels include $\nu_{R}^{1}\nu_{R}^{1}\to\ell_{L}\bar{\ell}_{L}/HH^{*}$, $\nu_{R}^{1}\nu_{R}^{1}\to Z^{\prime*}\to f\bar{f}$ with $f$ being the SM fermions, and $\nu_{R}^{1}\nu_{R}^{1}\to Z^{\prime}Z^{\prime}/Z^{\prime}\phi/\phi\phi$. We calculate the corresponding annihilation rates using the FeynCalc package Mertig:1990an ; Shtabovenko:2016sxi ; Shtabovenko:2020gxv and check that they are consistent with those in Refs. Blanchet:2009bu ; Heeck:2016oda ; Duerr:2016tmh . The $\nu_{R}^{1}$ scattering processes include $\nu_{R}^{1}\ell_{L}\to q_{L}\bar{t}_{R}$, $\nu_{R}^{1}t_{R}\to q_{L}\bar{\ell}_{L}$ and their charge conjugations and crossing diagrams. $\Gamma_{D}>\Gamma_{\rm ann}$ and $\Gamma_{D}>\Gamma_{\rm sca}$ are required for the fast decay of $\nu_{R}^{1}$; see Eq. (13). After decay, $\Gamma_{\rm th}>H_{p}$ and $\Gamma_{\rm th}>\Gamma_{\rm on}$ are needed for the decay products to thermalize quickly and not reduce the generated $B-L$ asymmetry; see Eq. (19). The reheating temperature is $T_{\rm rh}=(1+\alpha)^{1/4}T_{p}$, where $\alpha=\frac{1}{g_{*}\pi^{2}T_{p}^{4}/30}\left(\Delta V(T)-\frac{T}{4}\frac{\partial\Delta V(T)}{\partial T}\right)\Big{|}_{T_{p}},$ (43) is the ratio of the FOPT latent heat to the radiation energy density. After confirming that the washout effects are suppressed even after reheating, i.e. Eq. (23), the eventual generated BAU is given by Eq. (24). As stated in the Introduction, it is challenging to strike a balance between a strong FOPT and a not-so-strong reheating. A supercooling FOPT can provide fast-moving bubble walls, but the resultant large latent heat will push the Universe to such a high $T_{\rm rh}$ that the thermal washout processes become active again, reducing any $B-L$ asymmetry generated during the FOPT. In particular, we find that it is in general difficult for the minimal classically conformal $B-L$ model Iso:2009ss ; Iso:2009nw to realize the mechanism. In that model, the coefficient $B$ in Eq. (28) is determined only by $g_{B-L}$ and $\lambda_{R,i}$, such that $B\xrightarrow[B-L]{\rm Minimal}\frac{6}{\pi^{2}}\left(g_{B-L}^{4}-\sum_{i}\frac{\lambda_{R,i}^{4}}{96}\right)=\frac{3}{8\pi^{2}v_{\phi}^{4}}\left(M_{Z^{\prime}}^{4}-\sum_{i}\frac{2M_{i}^{4}}{3}\right).$ (44) Therefore, $M_{Z^{\prime}}\gtrsim M_{i}$ is required for a positive $B$ to ensure the vacuum stability. In addition, the FOPT requires a sizable $B$ to generate the potential barrier, and this implies a sizable $g_{B-L}$, and hence $M_{Z^{\prime}}$ dominates Eq. (44).
On the other hand, for a supercooling FOPT, the cosmic energy density is dominated by the vacuum energy and hence $T_{\rm rh}\sim V_{0}^{1/4}\sim B^{1/4}v_{\phi}\sim g_{B-L}v_{\phi}\sim M_{Z^{\prime}}$. To have $M_{1}/T_{\rm rh}\gg 1$ after reheating, we must have $M_{1}/M_{Z^{\prime}}\gg 1$, which conflicts with the vacuum stability and FOPT conditions. We confirm this qualitative argument by a detailed numerical scan. Therefore, we extend the model with one extra scalar $S$, as we did in Eq. (25). In this new model, the contribution to $B$ can be dominated by the scalar portal coupling $\lambda_{\phi s}$, and the reheating temperature is no longer directly related to $M_{Z^{\prime}}$. Figure 2: The allowed parameter space of the FOPT leptogenesis scenario is shown in the white region, for $M_{1}=2.5\times 10^{11}$ GeV, $\lambda_{R,1}=0.3$ and $\lambda_{R,2}=\lambda_{R,3}=4\lambda_{R,1}$. The blue and orange shaded regions are excluded by thermal washout and dilution effects after the FOPT reheating, respectively. The $M_{1}/T_{p}$ and $\alpha$ contours are shown in the left and right panels, respectively. The green star is the benchmark adopted for the GW calculation; see Fig. 3 for details. For our extended $B-L$ model, we start from $M_{1}=10^{9}$ GeV and gradually increase it to seek viable parameter space for the FOPT leptogenesis. The most stringent constraints for the scenario come from the washout effects after reheating, especially $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime}\phi$ and $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime*}\to f\bar{f}$. Even in cases where the reheating washout effects are suppressed, the BAU is usually diluted by the large $\alpha$ to below the experimentally observed value. Therefore, we have to increase $M_{1}$ to $M_{1}\gtrsim 10^{11}$ GeV to generate a large enough BAU. An example is shown in Fig. 2 with $M_{1}=2.5\times 10^{11}~{}{\rm GeV},\quad\lambda_{R,1}=0.3,\quad\lambda_{R,2}=\lambda_{R,3}=4\lambda_{R,1},$ (45) fixed, scanning over $\lambda_{\phi s}$ and $g_{B-L}$. The parameter space with successful FOPT leptogenesis, i.e. that can provide $Y_{B}\geqslant Y_{B}^{\rm obs}$ for an $\epsilon_{1}$ within the Davidson-Ibarra bound, is plotted as the white region covered by the $M_{1}/T_{p}$ (left panel) and $\alpha$ (right panel) contours. We see $\alpha\gg 1$ for most of the parameter space, implying a strong FOPT with vacuum energy dominance. The blue shaded region cannot realize FOPT leptogenesis because the thermal washout processes are active after reheating: the $g_{B-L}\gtrsim 0.1$ region is ruled out by the $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime*}\to f\bar{f}$ annihilation, while the $\lambda_{\phi s}\gtrsim 3.9$ region is excluded by the $\nu_{R}^{i}\nu_{R}^{i}\to Z^{\prime}\phi$ annihilation. If $\lambda_{\phi s}$ is too small, the FOPT strength is so strong that the entropy production during reheating dilutes the BAU to an unacceptably low value, as covered by the orange shaded region in the figure. We have checked that, without the FOPT, the same parameter space in Fig. 2 cannot realize conventional thermal leptogenesis in the $B-L$ model, which typically requires a CP asymmetry a factor of $\mathcal{O}(30)$ larger than the Davidson-Ibarra bound due to the large thermal washout effects from processes involving $Z^{\prime}$ and $\phi$. Therefore, our model has opened up new parameter space for a novel kind of leptogenesis. In this scenario, the relevant energy scale is about $10^{11}$ GeV, which is not accessible at any current or near-future colliders.
However, the GWs produced as byproducts of the $U(1)_{B-L}$ breaking may help to probe the scenario, although those signals could not serve as smoking guns for this specific mechanism. Thus, we briefly comment on the possible signals. In our scenario, there are two sources of GWs: first, the $U(1)_{B-L}$ FOPT itself generates GWs via vacuum bubble collisions, sound waves and magnetohydrodynamic (MHD) turbulence in the plasma Jinno:2016knw ; Iso:2017uuu ; Marzo:2018nov ; Ellis:2019oqb ; Bian:2019szo ; Ellis:2020nnr ; second, the cosmic strings forming after the $U(1)_{B-L}$ breaking keep emitting GWs during the evolution of the Universe Buchmuller:2013lra ; Dror:2019syi ; Fornal:2020esl ; Samanta:2020cdk ; Masoud:2021prr ; Bian:2021vmi ; Buchmuller:2021mbb . As an illustration, we adopt $\lambda_{\phi s}=3.5$ and $g_{B-L}=0.05$ as a benchmark (shown as the green star in Fig. 2) to calculate the GW spectrum today after the cosmological redshift. For the FOPT GWs, $T_{p}=6.1\times 10^{10}$ GeV, and the energy budget depends on the evolution of the wall velocity, thus we try both schemes from Ref. Hoche:2020ysm (with $\mathcal{P}_{1\to N}\propto\gamma_{w}^{2}$) and Ref. Gouttenoire:2021kjv (with $\mathcal{P}_{1\to N}\propto\gamma_{w}$). For the former case, as the friction increases rapidly with $\gamma_{w}$, the bubble walls have reached the terminal velocity at $T_{p}$, thus the sound wave and MHD contributions dominate Ellis:2018mja , and we make use of the efficiency factor $\kappa_{V}$ derived in Ref. Espinosa:2010hh ; while for the latter case, the walls are still accelerating at $T_{p}$, hence the bubble collision contribution dominates, and we adopt the method in Refs. Ellis:2019oqb ; Ellis:2020nnr to obtain the efficiency factor $\kappa_{\rm col}$. With the efficiency coefficients in hand, the FOPT GW spectra are evaluated using the numerical formulae in Refs. Caprini:2015zlo ; Caprini:2019egz (in the sound wave dominant case, the extra suppression factor from the finite duration of the sound wave period is taken into account Ellis:2018mja ; Ellis:2019oqb ; Guo:2020grp ). For the cosmic string GWs, the spectrum is determined by the dimensionless combination $G\mu$, where $G=1/M_{\rm Pl}^{2}$ is Newton’s constant of gravitation, and $\mu\sim v_{\phi}^{2}$ is the tension of the strings. For our benchmark, $G\mu\approx 10^{-14}$, and we use the numerical results in Refs. Auclair:2019wcv ; Blanco-Pillado:2017oxo ; Binetruy:2012ze ; Blanco-Pillado:2013qja to derive the GW spectrum (see Refs. Bian:2022tju ; Zhao:2022cnn for recent research on cosmic string GW simulations and experimental constraints). Figure 3: The GW spectra for the benchmark $\lambda_{\phi s}=3.5$ and $g_{B-L}=0.05$ (marked as the green star in Fig. 2). Both the spectra from cosmic strings and from the FOPT are shown, and in the latter case both possibilities of $\mathcal{P}_{1\to N}\propto\gamma_{w}^{2}$ (sound wave dominant) and $\mathcal{P}_{1\to N}\propto\gamma_{w}$ (bubble collision dominant) are considered. The GW spectra for our benchmark point are given in Fig. 3, where the expected sensitivity curves for the space-based laser interferometers LISA LISA:2017pwj , TianQin TianQin:2015yph ; Hu:2017yoc ; TianQin:2020hid , Taiji Hu:2017mde ; Ruan:2018tsw , BBO Crowder:2005nr and DECIGO Kawamura:2011zz , and the ground-based interferometers LIGO LIGOScientific:2014qfs ; LIGOScientific:2019vic , CE Reitze:2019iox and ET Punturo:2010zz ; Hild:2010id ; Sathyaprakash:2012jk are also shown. 
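As an illustrative cross-check of the quoted string tension (a back-of-envelope estimate on our part, not a number taken from the references), $G\mu\sim(v_{\phi}/M_{\rm Pl})^{2}$ implies $$v_{\phi}\sim\sqrt{G\mu}\,M_{\rm Pl}\approx 10^{-7}\times 1.2\times 10^{19}~{\rm GeV}\approx 10^{12}~{\rm GeV}$$ for $G\mu\approx 10^{-14}$, consistent with the $\gtrsim 10^{11}$ GeV scale required by the FOPT leptogenesis; this also makes transparent why heavier RHNs, which require a larger $v_{\phi}$, lead to a larger $G\mu$ and hence a stronger cosmic string signal.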
We first see that the FOPT GW spectra peak at $\sim 10^{5}$ Hz, which is too high to be detected by the near-future instruments. For heavier RHNs and hence higher FOPT scales, the typical peak frequency is even higher and hence more difficult to probe. However, the cosmic string GW spectrum is rather flat and could be reached by quite a few future detectors such as BBO, DECIGO, CE and ET. For heavier RHNs, $G\mu$ is larger, and the signal strength becomes stronger, such that LISA, TianQin and Taiji can also probe the scenario. Therefore, we conclude that the cosmic string induced GWs may well be seen at future detectors, although this is a general feature of all high-scale $U(1)$ breaking new physics models, not specific to our extended $B-L$ model. ## 4 Conclusion In this article, we apply the mechanism of baryogenesis induced by ultra-relativistic bubble walls to the leptogenesis case. After giving a general discussion on the dynamics of such a scenario, we build an extended $B-L$ model to demonstrate the viable parameter space realizing the mechanism. We have shown that the mechanism requires a trade-off between the strength of the FOPT and the level of reheating, and that successful FOPT leptogenesis requires an RHN mass $\gtrsim 10^{11}$ GeV assuming the Davidson-Ibarra bound. Meanwhile, we verify that the same parameter space cannot generate a sufficient BAU within the conventional thermal leptogenesis mechanism. Therefore, our research provides a novel approach to realizing leptogenesis. While the frequency of the GW signals from the FOPT is too high to be probed at current or near-future detectors, the GWs emitted by the cosmic strings from $U(1)_{B-L}$ breaking might be seen at near-future detectors such as LISA, TianQin, Taiji, CE and ET. Note added. Soon after the completion of this manuscript, Ref. Dasgupta:2022isg appeared, which applies the same mechanism to the minimal classically conformal $B-L$ model in the resonant leptogenesis regime. ###### Acknowledgements. We would like to thank Iason Baldes, Ville Vaskonen and Shao-Jiang Wang for the very useful and inspiring discussions. This work is supported by the National Science Foundation under grant numbers PHY-1820891 and PHY-2112680, and by the University of Nebraska Foundation. ## References * (1) M. Fukugita and T. Yanagida, “Baryogenesis Without Grand Unification,” _Phys. Lett. B_ 174 (1986) 45–47. * (2) M. A. Luty, “Baryogenesis via leptogenesis,” _Phys. Rev. D_ 45 (1992) 455–465. * (3) S. Davidson, E. Nardi and Y. Nir, “Leptogenesis,” _Phys. Rept._ 466 (2008) 105–177, [0802.2962]. * (4) R. Davis, “A review of the Homestake solar neutrino experiment,” _Prog. Part. Nucl. Phys._ 32 (1994) 13–32. * (5) Super-Kamiokande collaboration, Y. Fukuda et al., “Evidence for oscillation of atmospheric neutrinos,” _Phys. Rev. Lett._ 81 (1998) 1562–1567, [hep-ex/9807003]. * (6) KamLAND collaboration, K. Eguchi et al., “First results from KamLAND: Evidence for reactor anti-neutrino disappearance,” _Phys. Rev. Lett._ 90 (2003) 021802, [hep-ex/0212021]. * (7) P. Minkowski, “$\mu\to e\gamma$ at a Rate of One Out of $10^{9}$ Muon Decays?,” _Phys. Lett. B_ 67 (1977) 421–428. * (8) W. Buchmuller, R. D. Peccei and T. Yanagida, “Leptogenesis as the origin of matter,” _Ann. Rev. Nucl. Part. Sci._ 55 (2005) 311–355, [hep-ph/0502169]. * (9) S. Davidson and A. Ibarra, “A Lower bound on the right-handed neutrino mass from leptogenesis,” _Phys. Lett. B_ 535 (2002) 25–32, [hep-ph/0202239]. * (10) M. Flanz, E. A. Paschos, U. Sarkar and J. 
Weiss, “Baryogenesis through mixing of heavy Majorana neutrinos,” _Phys. Lett. B_ 389 (1996) 693–699, [hep-ph/9607310]. * (11) A. Pilaftsis, “CP violation and baryogenesis due to heavy Majorana neutrinos,” _Phys. Rev. D_ 56 (1997) 5431–5451, [hep-ph/9707235]. * (12) A. Pilaftsis and T. E. J. Underwood, “Resonant leptogenesis,” _Nucl. Phys. B_ 692 (2004) 303–345, [hep-ph/0309342]. * (13) B. Dev, M. Garny, J. Klaric, P. Millington and D. Teresi, “Resonant enhancement in leptogenesis,” _Int. J. Mod. Phys. A_ 33 (2018) 1842003, [1711.02863]. * (14) E. K. Akhmedov, V. A. Rubakov and A. Y. Smirnov, “Baryogenesis via neutrino oscillations,” _Phys. Rev. Lett._ 81 (1998) 1359–1362, [hep-ph/9803255]. * (15) M. Drewes, B. Garbrecht, P. Hernandez, M. Kekic, J. Lopez-Pavon, J. Racker et al., “ARS Leptogenesis,” _Int. J. Mod. Phys. A_ 33 (2018) 1842002, [1711.02862]. * (16) I. Baldes, S. Blasi, A. Mariotti, A. Sevrin and K. Turbang, “Baryogenesis via relativistic bubble expansion,” _Phys. Rev. D_ 104 (2021) 115029, [2106.15602]. * (17) D. J. H. Chung, B. Garbrecht, M. J. Ramsey-Musolf and S. Tulin, “Lepton-mediated electroweak baryogenesis,” _Phys. Rev. D_ 81 (2010) 063506, [0905.4509]. * (18) C.-W. Chiang, K. Fuyuto and E. Senaha, “Electroweak Baryogenesis with Lepton Flavor Violation,” _Phys. Lett. B_ 762 (2016) 315–320, [1607.07316]. * (19) H.-K. Guo, Y.-Y. Li, T. Liu, M. Ramsey-Musolf and J. Shu, “Lepton-Flavored Electroweak Baryogenesis,” _Phys. Rev. D_ 96 (2017) 115034, [1609.09849]. * (20) J. De Vries, M. Postma and J. van de Vis, “The role of leptons in electroweak baryogenesis,” _JHEP_ 04 (2019) 024, [1811.11104]. * (21) K.-P. Xie, “Lepton-mediated electroweak baryogenesis, gravitational waves and the $4\tau$ final state at the collider,” _JHEP_ 02 (2021) 090, [2011.04821]. * (22) J. M. Cline and B. Laurent, “Electroweak baryogenesis from light fermion sources: A critical study,” _Phys. Rev. D_ 104 (2021) 083507, [2108.04249]. * (23) S. Pascoli, J. Turner and Y.-L. Zhou, “Baryogenesis via leptonic CP-violating phase transition,” _Phys. Lett. B_ 780 (2018) 313–318, [1609.07969]. * (24) A. J. Long, A. Tesi and L.-T. Wang, “Baryogenesis at a Lepton-Number-Breaking Phase Transition,” _JHEP_ 10 (2017) 095, [1703.04902]. * (25) A. Pilaftsis, “Electroweak Resonant Leptogenesis in the Singlet Majoron Model,” _Phys. Rev. D_ 78 (2008) 013008, [0805.1677]. * (26) B. Shuve and C. Tamarit, “Phase Transitions and Baryogenesis From Decays,” _JHEP_ 10 (2017) 122, [1704.01979]. * (27) J.-P. Hong, S. Jung and K.-P. Xie, “Fermi-ball dark matter from a first-order phase transition,” _Phys. Rev. D_ 102 (2020) 075028, [2008.04430]. * (28) M. J. Baker, M. Breitbach, J. Kopp and L. Mittnacht, “Primordial Black Holes from First-Order Cosmological Phase Transitions,” 2105.07481. * (29) K. Kawana and K.-P. Xie, “Primordial black holes from a cosmic phase transition: The collapse of Fermi-balls,” _Phys. Lett. B_ 824 (2022) 136791, [2106.00111]. * (30) M. J. Baker, M. Breitbach, J. Kopp and L. Mittnacht, “Detailed Calculation of Primordial Black Hole Formation During First-Order Cosmological Phase Transitions,” 2110.00005. * (31) J. Arakawa, A. Rajaraman and T. M. P. Tait, “Annihilogenesis,” _JHEP_ 08 (2022) 078, [2109.13941]. * (32) M. J. Baker, J. Kopp and A. J. Long, “Filtered Dark Matter at a First Order Phase Transition,” _Phys. Rev. Lett._ 125 (2020) 151102, [1912.02830]. * (33) D. Chway, T. H. Jung and C. S. Shin, “Dark matter filtering-out effect during a first-order phase transition,” _Phys. Rev. 
D_ 101 (2020) 095019, [1912.04238]. * (34) W. Chao, X.-F. Li and L. Wang, “Filtered pseudo-scalar dark matter and gravitational waves from first order phase transition,” _JCAP_ 06 (2021) 038, [2012.15113]. * (35) M. J. Baker, M. Breitbach, J. Kopp, L. Mittnacht and Y. Soreq, “Filtered Baryogenesis,” 2112.08987. * (36) M. Ahmadvand, “Filtered asymmetric dark matter during the Peccei-Quinn phase transition,” _JHEP_ 10 (2021) 109, [2108.00958]. * (37) A. Katz and A. Riotto, “Baryogenesis and Gravitational Waves from Runaway Bubble Collisions,” _JCAP_ 11 (2016) 011, [1608.00583]. * (38) A. Azatov and M. Vanvlasselaer, “Bubble wall velocity: heavy physics effects,” _JCAP_ 01 (2021) 058, [2010.02590]. * (39) A. Azatov, M. Vanvlasselaer and W. Yin, “Dark Matter production from relativistic bubble walls,” _JHEP_ 03 (2021) 288, [2101.05721]. * (40) A. Azatov, M. Vanvlasselaer and W. Yin, “Baryogenesis via relativistic bubble walls,” _JHEP_ 10 (2021) 043, [2106.14913]. * (41) H. A. Weldon, “Covariant Calculations at Finite Temperature: The Relativistic Plasma,” _Phys. Rev. D_ 26 (1982) 1394. * (42) Particle Data Group collaboration, P. A. Zyla et al., “Review of Particle Physics,” _PTEP_ 2020 (2020) 083C01. * (43) A. Davidson, “$B-L$ as the fourth color within an $\mathrm{SU}(2)_{L}\times\mathrm{U}(1)_{R}\times\mathrm{U}(1)$ model,” _Phys. Rev. D_ 20 (1979) 776. * (44) R. E. Marshak and R. N. Mohapatra, “Quark-Lepton Symmetry and B-L as the U(1) Generator of the Electroweak Symmetry Group,” _Phys. Lett. B_ 91 (1980) 222–224. * (45) R. N. Mohapatra and R. E. Marshak, “Local B-L Symmetry of Electroweak Interactions, Majorana Neutrinos and Neutron Oscillations,” _Phys. Rev. Lett._ 44 (1980) 1316–1319. * (46) A. Davidson and K. C. Wali, “Universal Seesaw Mechanism?,” _Phys. Rev. Lett._ 59 (1987) 393. * (47) M. Le Dall and A. Ritz, “Leptogenesis and the Higgs Portal,” _Phys. Rev. D_ 90 (2014) 096002, [1408.2498]. * (48) T. Alanne, T. Hugle, M. Platscher and K. Schmitz, “Low-scale leptogenesis assisted by a real scalar singlet,” _JCAP_ 03 (2019) 037, [1812.04421]. * (49) S. Iso, N. Okada and Y. Orikasa, “Classically conformal $B-L$ extended Standard Model,” _Phys. Lett. B_ 676 (2009) 81–87, [0902.4050]. * (50) S. Iso, N. Okada and Y. Orikasa, “The minimal B-L model naturally realized at TeV scale,” _Phys. Rev. D_ 80 (2009) 115007, [0909.0128]. * (51) A. Das, N. Okada and N. Papapietro, “Electroweak vacuum stability in classically conformal B-L extension of the Standard Model,” _Eur. Phys. J. C_ 77 (2017) 122, [1509.01466]. * (52) R. Jinno and M. Takimoto, “Probing a classically conformal B-L model with gravitational waves,” _Phys. Rev. D_ 95 (2017) 015020, [1604.05035]. * (53) S. Iso, P. D. Serpico and K. Shimada, “QCD-Electroweak First-Order Phase Transition in a Supercooled Universe,” _Phys. Rev. Lett._ 119 (2017) 141301, [1704.04955]. * (54) C. Marzo, L. Marzola and V. Vaskonen, “Phase transition and vacuum stability in the classically conformal B–L model,” _Eur. Phys. J. C_ 79 (2019) 601, [1811.11169]. * (55) J. Ellis, M. Lewicki, J. M. No and V. Vaskonen, “Gravitational wave energy budget in strongly supercooled phase transitions,” _JCAP_ 06 (2019) 024, [1903.09642]. * (56) L. Bian, W. Cheng, H.-K. Guo and Y. Zhang, “Cosmological implications of a $B-L$ charged hidden scalar: leptogenesis and gravitational waves,” _Chin. Phys. C_ 45 (2021) 113104, [1907.13589]. * (57) J. Ellis, M. Lewicki and V. 
Vaskonen, “Updated predictions for gravitational waves produced in a strongly supercooled phase transition,” _JCAP_ 11 (2020) 020, [2007.15586]. * (58) S. Jung and K. Kawana, “Low-energy probes of the small cosmic microwave background amplitude in models of the radiative Higgs mechanism,” _PTEP_ 2022 (2022) 033B11, [2105.01217]. * (59) J. Haruna and H. Kawai, “Weak scale from Planck scale: Mass scale generation in a classically conformal two-scalar system,” _PTEP_ 2020 (2020) 033B01, [1905.05656]. * (60) Y. Hamada, H. Kawai, K.-y. Oda and K. Yagyu, “Dark matter in minimal dimensional transmutation with multicritical-point principle,” _JHEP_ 01 (2021) 087, [2008.08700]. * (61) Y. Hamada, H. Kawai, K. Kawana, K.-y. Oda and K. Yagyu, “Minimal scenario of criticality for electroweak scale, neutrino masses, dark matter, and inflation,” _Eur. Phys. J. C_ 81 (2021) 962, [2102.04617]. * (62) K. Kawana, “Cosmology of a supercooled universe,” _Phys. Rev. D_ 105 (2022) 103515, [2201.00560]. * (63) A. D. Linde, “Decay of the False Vacuum at Finite Temperature,” _Nucl. Phys. B_ 216 (1983) 421. * (64) A. H. Guth and S. H. H. Tye, “Phase Transitions and Magnetic Monopole Production in the Very Early Universe,” _Phys. Rev. Lett._ 44 (1980) 631. * (65) A. H. Guth and E. J. Weinberg, “Cosmological Consequences of a First Order Phase Transition in the SU(5) Grand Unified Model,” _Phys. Rev. D_ 23 (1981) 876. * (66) M. D. Rintoul and S. Torquato, “Precise determination of the critical threshold and exponents in a three-dimensional continuum percolation model,” _J. Phys. A_ 30 (1997) L585. * (67) J. Ellis, M. Lewicki and J. M. No, “On the Maximal Strength of a First-Order Electroweak Phase Transition and its Gravitational Wave Signal,” _JCAP_ 04 (2019) 003, [1809.08242]. * (68) D. Bodeker and G. D. Moore, “Can electroweak bubble walls run away?,” _JCAP_ 05 (2009) 009, [0903.4099]. * (69) D. Bodeker and G. D. Moore, “Electroweak Bubble Wall Speed Limit,” _JCAP_ 05 (2017) 025, [1703.08215]. * (70) S. Höche, J. Kozaczuk, A. J. Long, J. Turner and Y. Wang, “Towards an all-orders calculation of the electroweak bubble wall velocity,” _JCAP_ 03 (2021) 009, [2007.10343]. * (71) Y. Gouttenoire, R. Jinno and F. Sala, “Friction pressure on relativistic bubble walls,” _JHEP_ 05 (2022) 004, [2112.07686]. * (72) W.-Y. Ai, B. Garbrecht and C. Tamarit, “Bubble wall velocities in local equilibrium,” 2109.13710. * (73) R.-G. Cai and S.-J. Wang, “Effective picture of bubble expansion,” _JCAP_ 03 (2021) 096, [2011.11451]. * (74) S.-J. Wang and Z.-Y. Yuwen, “Hydrodynamic backreaction force of cosmological bubble expansion,” 2205.02492. * (75) B. Laurent and J. M. Cline, “First principles determination of bubble wall velocity,” 2204.13120. * (76) R. Mertig, M. Bohm and A. Denner, “FEYN CALC: Computer algebraic calculation of Feynman amplitudes,” _Comput. Phys. Commun._ 64 (1991) 345–359. * (77) V. Shtabovenko, R. Mertig and F. Orellana, “New Developments in FeynCalc 9.0,” _Comput. Phys. Commun._ 207 (2016) 432–444, [1601.01167]. * (78) V. Shtabovenko, R. Mertig and F. Orellana, “FeynCalc 9.3: New features and improvements,” _Comput. Phys. Commun._ 256 (2020) 107478, [2001.04407]. * (79) S. Blanchet, Z. Chacko, S. S. Granor and R. N. Mohapatra, “Probing Resonant Leptogenesis at the LHC,” _Phys. Rev. D_ 82 (2010) 076008, [0904.2174]. * (80) J. Heeck and D. Teresi, “Leptogenesis and neutral gauge bosons,” _Phys. Rev. D_ 94 (2016) 095024, [1609.03594]. * (81) M. Duerr, F. Kahlhoefer, K. 
Schmidt-Hoberg, T. Schwetz and S. Vogl, “How to save the WIMP: global analysis of a dark matter model with two s-channel mediators,” _JHEP_ 09 (2016) 042, [1606.07609]. * (82) W. Buchmüller, V. Domcke, K. Kamada and K. Schmitz, “The Gravitational Wave Spectrum from Cosmological $B-L$ Breaking,” _JCAP_ 10 (2013) 003, [1305.3392]. * (83) J. A. Dror, T. Hiramatsu, K. Kohri, H. Murayama and G. White, “Testing the Seesaw Mechanism and Leptogenesis with Gravitational Waves,” _Phys. Rev. Lett._ 124 (2020) 041804, [1908.03227]. * (84) B. Fornal and B. Shams Es Haghi, “Baryon and Lepton Number Violation from Gravitational Waves,” _Phys. Rev. D_ 102 (2020) 115037, [2008.05111]. * (85) R. Samanta and S. Datta, “Gravitational wave complementarity and impact of NANOGrav data on gravitational leptogenesis,” _JHEP_ 05 (2021) 211, [2009.13452]. * (86) M. A. Masoud, M. U. Rehman and Q. Shafi, “Sneutrino tribrid inflation, metastable cosmic strings and gravitational waves,” _JCAP_ 11 (2021) 022, [2107.09689]. * (87) L. Bian, X. Liu and K.-P. Xie, “Probing superheavy dark matter with gravitational waves,” _JHEP_ 11 (2021) 175, [2107.13112]. * (88) W. Buchmuller, V. Domcke and K. Schmitz, “Stochastic gravitational-wave background from metastable cosmic strings,” _JCAP_ 12 (2021) 006, [2107.04578]. * (89) J. R. Espinosa, T. Konstandin, J. M. No and G. Servant, “Energy Budget of Cosmological First-order Phase Transitions,” _JCAP_ 06 (2010) 028, [1004.4187]. * (90) C. Caprini et al., “Science with the space-based interferometer eLISA. II: Gravitational waves from cosmological phase transitions,” _JCAP_ 04 (2016) 001, [1512.06239]. * (91) C. Caprini et al., “Detecting gravitational waves from cosmological phase transitions with LISA: an update,” _JCAP_ 03 (2020) 024, [1910.13125]. * (92) H.-K. Guo, K. Sinha, D. Vagie and G. White, “Phase Transitions in an Expanding Universe: Stochastic Gravitational Waves in Standard and Non-Standard Histories,” _JCAP_ 01 (2021) 001, [2007.08537]. * (93) P. Auclair et al., “Probing the gravitational wave background from cosmic strings with LISA,” _JCAP_ 04 (2020) 034, [1909.00819]. * (94) J. J. Blanco-Pillado and K. D. Olum, “Stochastic gravitational wave background from smoothed cosmic string loops,” _Phys. Rev. D_ 96 (2017) 104046, [1709.02693]. * (95) P. Binetruy, A. Bohe, C. Caprini and J.-F. Dufaux, “Cosmological Backgrounds of Gravitational Waves and eLISA/NGO: Phase Transitions, Cosmic Strings and Other Sources,” _JCAP_ 06 (2012) 027, [1201.0983]. * (96) J. J. Blanco-Pillado, K. D. Olum and B. Shlaer, “The number of cosmic string loops,” _Phys. Rev. D_ 89 (2014) 023512, [1309.6637]. * (97) L. Bian, J. Shu, B. Wang, Q. Yuan and J. Zong, “Searching for cosmic string induced stochastic gravitational wave background with the Parkes Pulsar Timing Array,” 2205.07293. * (98) Z. Zhao, Y. Di, L. Bian and R.-G. Cai, “Probing the electroweak symmetry breaking history with Gravitational waves,” 2204.04427. * (99) LISA collaboration, P. Amaro-Seoane et al., “Laser Interferometer Space Antenna,” 1702.00786. * (100) TianQin collaboration, J. Luo et al., “TianQin: a space-borne gravitational wave detector,” _Class. Quant. Grav._ 33 (2016) 035010, [1512.02076]. * (101) Y.-M. Hu, J. Mei and J. Luo, “Science prospects for space-borne gravitational-wave missions,” _Natl. Sci. Rev._ 4 (2017) 683–684. * (102) TianQin collaboration, J. Mei et al., “The TianQin project: current progress on science and technology,” 2008.10332. * (103) W.-R. Hu and Y.-L. 
Wu, “The Taiji Program in Space for gravitational wave physics and the nature of gravity,” _Natl. Sci. Rev._ 4 (2017) 685–686. * (104) W.-H. Ruan, Z.-K. Guo, R.-G. Cai and Y.-Z. Zhang, “Taiji program: Gravitational-wave sources,” _Int. J. Mod. Phys. A_ 35 (2020) 2050075, [1807.09495]. * (105) J. Crowder and N. J. Cornish, “Beyond LISA: Exploring future gravitational wave missions,” _Phys. Rev. D_ 72 (2005) 083005, [gr-qc/0506015]. * (106) S. Kawamura et al., “The Japanese space gravitational wave antenna: DECIGO,” _Class. Quant. Grav._ 28 (2011) 094011. * (107) LIGO Scientific, VIRGO collaboration, J. Aasi et al., “Characterization of the LIGO detectors during their sixth science run,” _Class. Quant. Grav._ 32 (2015) 115012, [1410.7764]. * (108) LIGO Scientific, Virgo collaboration, B. P. Abbott et al., “Search for the isotropic stochastic background using data from Advanced LIGO’s second observing run,” _Phys. Rev. D_ 100 (2019) 061101, [1903.02886]. * (109) D. Reitze et al., “Cosmic Explorer: The U.S. Contribution to Gravitational-Wave Astronomy beyond LIGO,” _Bull. Am. Astron. Soc._ 51 (2019) 035, [1907.04833]. * (110) M. Punturo et al., “The Einstein Telescope: A third-generation gravitational wave observatory,” _Class. Quant. Grav._ 27 (2010) 194002. * (111) S. Hild et al., “Sensitivity Studies for Third-Generation Gravitational Wave Observatories,” _Class. Quant. Grav._ 28 (2011) 094013, [1012.0908]. * (112) B. Sathyaprakash et al., “Scientific Objectives of Einstein Telescope,” _Class. Quant. Grav._ 29 (2012) 124013, [1206.0331]. * (113) A. Dasgupta, P. S. B. Dev, A. Ghoshal and A. Mazumdar, “Gravitational Wave Pathway to Testable Leptogenesis,” 2206.07032.
We define and study coherent cochain complexes in arbitrary stable $\infty$-categories, following Joyal. Our main result is that the $\infty$-category of coherent cochain complexes in a stable $\infty$-category $\scrC$ is equivalent to the $\infty$-category of complete filtered objects in $\scrC$. We then show how the Beilinson t-structure can be interpreted in light of this equivalence, and analyze its behavior in the presence of symmetric monoidal structures. We also examine the relationship between the notion of (higher) Toda brackets and coherent cochain complexes. Finally, we prove how every coherent cochain complex gives rise to a spectral sequence and illustrate some examples. § INTRODUCTION Recall that a filtered chain complex (resp. a filtered spectrum) consists of a $\bbZ$-indexed sequence $$\cdots\to F^{n} \to F^{n-1} \to F^{n-2} \to\cdots$$ where each $F^i$ is a chain complex (resp. a spectrum) and the morphisms between them are chain maps[in the case of chain complexes it is common to allow only monomorphisms, but as one can always replace a chain map by a monomorphic one up to quasi-isomorphism, this extra condition will not be relevant to us] (resp. morphisms of spectra). Two of the motivating reasons for the study of filtered derived categories and filtered spectra are the construction of spectral sequences, and the existence of the Beilinson t-structure (first introduced in [4]; see Definition <ref>). The objective of this paper is to present a new perspective on such objects (and more generally, on filtered objects in stable $\infty$-categories) that allows one to generalize both constructions at once in a homotopy coherent fashion, and to gain some insight into other related constructions, like the obstruction theory for the realizability of spectra with prescribed homotopy groups and k-invariants (see Section <ref>). This perspective will be realized by using coherent cochain complexes (originally introduced in their bounded flavor in <cit.>, in relation to the $\infty$-categorical Dold-Kan correspondence). A coherent cochain complex in a stable $\infty$-category $\scrC$ is a homotopy coherent version of an ordinary cochain complex, and consists of a $\bbZ$-indexed sequence of objects $C^i\in\scrC$ and differentials $$\cdots\xto{\partial} C^{n} \xto{\partial} C^{n+1} \xto{\partial} C^{n+2} \xto{\partial}\cdots$$ together with nullhomotopies $\partial^2 \simeq 0$, and further coherence data making all the nullhomotopies mutually compatible. Concretely, a coherent cochain complex will be defined as a pointed functor from (the nerve of) a $1$-category having as objects the integers together with an extra base point, and being generated by morphisms $\partial\colon n\to n+1$ such that $\partial^2 = 0$ (see Definition <ref> for a rigorous definition). For the sake of simplicity, in this introduction we will restrict to the case of spectra, and the corresponding $\infty$-category of coherent cochain complexes $\coCh(\Sp)$, although the results in the rest of the paper are discussed in the wider generality of stable $\infty$-categories equipped with some t-structure. At their core, both the Beilinson t-structure and the spectral sequence associated to a filtered spectrum are in a sense “blind” to the information stored at the limit of the relevant filtered object. As is customary, we say that a filtered object $F$ is complete if $\varprojlim F \simeq 0$, and we call the cofiber of the canonical map $\varprojlim F \to \varinjlim F$ the completion of $\varinjlim F$. If $F$ is a filtered spectrum, the spectral sequence it generates only abuts to the completion of its colimit. 
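A degenerate example may help fix ideas (ours, purely for illustration): for the constant filtered spectrum on a spectrum $X$ (denoted $\const_X$ below), both the limit and the colimit are $X$ itself, so $$\cofib\left(\varprojlim \const_X \to \varinjlim \const_X\right)\simeq\cofib\left(X\xto{\id}X\right)\simeq 0;$$ constant filtered objects thus have vanishing completion, and their associated graded (and hence their spectral sequence) vanishes as well. This is the precise sense in which the constructions above are blind to constant data.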
Something similar is true for the Beilinson t-structure, although saying precisely what this means requires more work; its failure to be left complete is a good starting point: all and only the constant objects are the ones that are $\infty$-connected with respect to the Beilinson t-structure, and constant objects are in a precise sense orthogonal to complete ones (see Proposition <ref>). In Theorem <ref>, we prove that the Beilinson t-structure can be glued out of its restriction to the full subcategory of complete objects and the trivial t-structure on the full subcategory of constant objects. As it turns out, the $\infty$-category of complete filtered spectra is equivalent to that of coherent cochain complexes of spectra: (see Theorems <ref>, <ref> and <ref>) There exists a symmetric monoidal adjunction \begin{tikzcd}[column sep=huge] \Fun(\bbZ\op,\Sp) \ar[r, shift left=1.1ex, "\aush"] & \coCh(\Sp) \ar[l, shift left=1.1ex, "\imp"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} that restricts to an equivalence on complete filtered spectra. Moreover, this adjunction sends the Day convolution symmetric monoidal structure on filtered spectra to a symmetric monoidal structure given componentwise by $$(C \tensor D)^n \simeq \tplus_{s+t=n} C^s \tensor D^t .$$ As already noted in <cit.>, given a filtered spectrum $F$ one can construct a sequence of morphisms on suitable shifts of the graded pieces \begin{equation}\label{eq-intro-gr} \gr^nF[n]\to\gr^{n+1}F[n+1] \end{equation} such that all pairwise compositions are nullhomotopic, and the homotopies are suitably compatible (this is in fact the first step for the construction of the spectral sequence of $F$). The left adjoint defined in the theorem sends $F$ to the coherent cochain complex $$\cdots \to \gr^{-2}F[-2] \to \gr^{-1}F[-1] \to \gr^0 F \to \gr^1 F [1] \to \gr^2 F [2] \to \cdots$$ where the differentials are precisely the maps of (<ref>). Although above we only represented the components and the differentials of the object $\aush F$, it also encodes much more data: $\coCh(\Sp)$ is defined using a pointed diagram $1$-category and pointed functors, hence its objects keep track also of the nullhomotopies for the two-fold compositions of their differentials, and of all the recursive coherences between them. The compatibilities between all the nullhomotopies can be made explicit by means of higher Toda brackets (see Section <ref>); in order to do so, we use an explicit $\bbE_1$ presentation of exterior algebras over $\bbZ$ due to Achim Krause, presented in Appendix <ref>. The left adjoint above is really a homotopy coherent version of the construction of homotopy objects in Beilinson's t-structure: this t-structure corresponds to the levelwise one along the left adjoint functor discussed above; that is, for a filtered spectrum $F$, $$\pi_n^B F \cong \pi_n^\text{lvl}\aush F,$$ where $\pi_n^\text{lvl} C$ denotes the functor applying $\pi_n$ to all the components of $C$. The right adjoint can be thought of as the functor iteratively solving all the extension problems needed to reconstruct the filtered object from its graded pieces and the differentials. Notice that, in order for such a reconstruction to be possible in general, one really needs to use all the information stored in the higher morphisms. In fact, the differentials appearing in a coherent cochain complex encode precisely the differentials of the $E_1$ page of a spectral sequence abutting to the total homology of the complex (that is, the object underlying its associated filtered object). 
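A familiar special case makes this concrete (our illustration, not an example worked out in the text; we write $\pi_nX$ also for the corresponding Eilenberg–MacLane spectrum): for the decreasing filtration of a spectrum $X$ by its Whitehead tower, $F^n=\tau_{\geq n}X$, one has $\gr^nF\simeq \pi_nX[n]$, so the associated shelled complex has components $$\left(\aush F\right)^n\simeq \gr^nF[n]\simeq \pi_nX[2n],$$ with differentials given, up to shift, by the k-invariants of $X$; this is the starting point of the obstruction-theoretic interpretation alluded to in the Introduction.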
The higher pages together with their differentials can be recovered from the complex itself, by means of an incarnation of Deligne's décalage construction, as explained in Section <ref>. §.§ Outline of the paper In Section <ref>, we lay the foundations for the rest of the paper, giving the main definitions and some structural results about the $\infty$-categories at hand. In particular, we show that for a stable $\infty$-category $\scrC$, both $\Fun(\bbZ\op,\scrC)$ and $\coCh\scrC$ sit in suitable recollements of stable $\infty$-categories. Section <ref> is the technical core of the paper, in which we construct the adjunction between filtered spectra and cochain complexes of spectra, proving the equivalence of the latter with the $\infty$-category of complete filtered spectra. Then, in Section <ref>, we generalize the result to all stable $\infty$-categories with sequential (inverse) limits. In Section <ref>, we analyze the interplay between the Beilinson t-structure and the levelwise t-structure on the relevant categories, when the base $\infty$-category $\scrC$ comes equipped with a t-structure. We continue the analysis of $\infty$-categories with extra structure in Section <ref>, where we study the interplay of the equivalence of Theorem <ref> with symmetric monoidal structures, and the compatibility of the relevant t-structures in the case where the base $\infty$-category $\scrC$ is also equipped with a t-structure. In Section <ref>, we use an explicit $\bbE_1$ presentation of exterior algebras over $\bbZ$, proven by Achim Krause in Appendix <ref>, to show that $\coCh\scrC$ is the universal $\infty$-category of $\bbZ\op$-indexed sequences of morphisms in $\scrC$ such that any pairwise composition is trivial, as are all the possible Toda brackets. In Section <ref>, we show how coherent cochain complexes encode a form of obstruction theory, and we use the formalism developed there to recover a spectral sequence from a coherent cochain complex in Section <ref>. Finally, in Section <ref>, we have a look at a few examples from the recent literature that we believe benefit from the perspective of coherent cochain complexes. One of the main tools for constructing the adjunction and proving the equivalence of Theorem <ref> is the stable nerve-realization paradigm. We collect and prove many results about the $\infty$-categorical incarnation of this topic in Appendix <ref>. §.§ Notational conventions Throughout, we will use the following notations and terminology: * We will denote by $\Pre\scrC$ the $\infty$-category of presheaves on $\scrC$. * We will denote by $\Pst\scrC$ the $\infty$-category of spectra-valued presheaves on $\scrC$, and sometimes refer to it as the $\infty$-category of stable presheaves on $\scrC$. * We will denote by $\Yo$ the Yoneda embedding, and by $\Yo_{\kern-0.2emC}$ the functor represented by an object $C\in\scrC$. * Given any functor category $\Fun(\scrC,\scrD)$, we will denote by $\const$ the (fully faithful) functor obtained by precomposition with the terminal functor $\scrC\to\Delta^0$, and by $\const_C$ its value at an object $C$. * When considering $\bbZ$ or $\bbN$ as categories, we will always implicitly assume that they are given the structure of a poset category, with their usual ordering $\leq$. We will use the notations $\bbZ^\delta$ and $\bbN^\delta$ to denote their underlying discrete categories. * Unless otherwise stated, whenever we refer to $\bbZ$, $\bbZ\op$ or $\bbZ^\delta$ as symmetric monoidal categories, we consider them endowed with the symmetric monoidal structure induced by addition. * We refer to a presentable $\infty$-category endowed with a symmetric monoidal structure whose tensor product preserves colimits in each variable as a presentably symmetric monoidal $\infty$-category. 
* We refer to a stable $\infty$-category endowed with a symmetric monoidal structure whose tensor product is exact in each variable as a stably symmetric monoidal $\infty$-category. * We refer to a presentable stable $\infty$-category endowed with a symmetric monoidal structure whose tensor product preserves colimits in each variable as a stable presentably symmetric monoidal $\infty$-category. * We will most of the time omit the Eilenberg–MacLane functor from the notation when considering an Abelian group as a spectrum. Throughout, when dealing with t-structures, we will use the homological grading convention. When dealing with spectral sequences, we will use the cohomological Serre grading convention. For the sake of clarity, we decided to stick with the choice of working exclusively with decreasing filtered objects and coherent cochain complexes. The results of this paper translate immediately to the case of increasing filtrations and coherent chain complexes, with the caveat that the equivalence of Theorem <ref> always inverts the direction of the $\bbZ^\delta$-indexed arrows, hence complete increasing filtered objects are equivalent to coherent chain complexes. Throughout, we (mostly implicitly) work in ZFC+U, where “U” is Tarski's axiom: “For each set $x$, there exists a Grothendieck universe $U$ such that $x\in U$”. Equivalently, we assume the existence of infinitely many strongly inaccessible cardinals, and we fix one as the cardinality of our universe of small sets. As we are agnostic about the size of the universe fixed, all the arguments that do not rely on dealing with a bigger universe hold regardless of the actual size of the objects referred to as “small sets”. In those few cases where we need to deal with more than one universe at a time, we are going to be explicit about relative sizes. §.§ Related works Some of the structural results about filtered objects in stable $\infty$-categories contained in Section <ref> have been presented for the case of filtered spectra in <cit.>. The $\infty$-category of filtered objects of a stable $\infty$-category has also been studied in <cit.>, and some results in Section <ref> and Section <ref> overlap with loc. cit.[the reader should note that in [17] the authors refer to what we call filtered objects as “sequences”, and reserve the term “filtered objects” for what we call complete filtered objects]. In [27], Raksit considers an alternative formulation for coherent chain complexes; although we do not delve into a detailed comparison, the results of Section <ref> should give a clear idea about the relation between the formulation in op. cit. and the one in this paper. In [31], Walde proves the equivalence between $\Fun(\bbN,\scrC)$ and $\Ch_{*\geq 0}(\scrC)$ through their equivalence with simplicial objects in $\scrC$. A similar approach to the construction of the spectral sequence of a filtered spectrum through the décalage functor that does not use the language of coherent cochain complexes appeared recently in Hedenlund's Ph.D. thesis [18]. The décalage functor introduced in Section <ref> is further analyzed and discussed in forthcoming work. §.§ Prerequisites We assume the reader is familiar with the theory of $\infty$-categories as developed in [22] and with the contents of [24]. In particular, we assume the reader is thoroughly acquainted with the theory of t-structures as developed in <cit.> (see also <cit.>). Some of the terminology and notations in this paper differ from the ones used in [22] and [24], but we use the same notations and terminology of op. cit. for the concepts we do not explicitly recall or introduce. §.§ Acknowledgements This paper is part of my Ph.D. project at the University of Münster. 
I am very grateful to my advisor, Thomas Nikolaus, for suggesting this topic and for his support and advice. I would like to thank Benjamin Antieau, from whom I learned about the relation between the Beilinson t-structure and Deligne's décalage, and Fosco Loregian, for having introduced me to co/end calculus way before I could appreciate its value. I am also grateful to Achim Krause, Edoardo Lanari, Jonas McCandless and Tashi Walde for the insightful discussions I shared with them while writing this paper. § DEFINITIONS AND PRELIMINARIES In this section we introduce and discuss a few key facts about coherent cochain complexes and filtered objects in $\infty$-categories. In particular, in Proposition <ref> and Proposition <ref> we prove that both constructions figure in suitable recollements (see Definition <ref>) of stable $\infty$-categories. Let $\catCh$ be the pointed (ordinary) category given by $$\operatorname{ob}\catCh = \bbZ \cup \{\pt\},\qquad \operatorname{Hom}_{\catCh}(m,n) = \begin{cases} \{\id,\zero\} \quad &\text{if } m=n; \\ \{\de_n,\zero\} \quad &\text{if } m=n-1; \\ \{\zero\} \quad &\text{otherwise}, \end{cases}$$ where $\pt\in\catCh$ is a zero object, and $\zero$ is the zero map. Given any pointed $\infty$-category $\scrC$: * the $\infty$-category of coherent chain complexes in $\scrC$ is the full subcategory of $\Fun(\catCh,\scrC)$ spanned by pointed functors. * The $\infty$-category of coherent cochain complexes is the full subcategory $$\coCh(\scrC)\subseteq\Fun(\catCh\op,\scrC)$$ spanned by pointed functors. * An object $C\in\coCh(\scrC)$ is bounded above if there exists an $n\in\bbZ$ such that $$C^k \simeq 0 \text{ for all } k>n.$$ * An object $C\in\coCh(\scrC)$ is bounded below if there exists an $n\in\bbZ$ such that $$C^k \simeq 0 \text{ for all } k<n.$$ * An object $C\in\coCh(\scrC)$ is bounded if it is bounded above and bounded below. Notice that $0\in\bbZ$ is an object of $\catCh$, but it is not its zero object. Let $\scrA$ be an Abelian category; then $\coCh(\scrA)$ is the usual category of cochain complexes of $\scrA$. In what follows, we will exclusively focus our attention on coherent cochain complexes. Given a coherent cochain complex $C\in\coCh{\scrC}$, we will denote $C(\de_n)$ by $\de^n_{C}$, or just by $\de^n$ if there is no risk of confusion. Let $\scrC$ be any $\infty$-category. * The $\infty$-category $\Fild\scrC\coloneqq\Fun(\bbZ\op, \scrC)$ is called the $\infty$-category of (decreasing) filtered objects of $\scrC$. * The $\infty$-category $\Fili\scrC\coloneqq\Fun(\bbZ,\scrC)$ is called the $\infty$-category of (increasing) filtered objects of $\scrC$ [the position of the arrows in the notation is meant to recall the common convention of using upper indices for decreasing filtrations, and lower indices for increasing filtrations]. * Given $F\in\Fild\scrC$, we call $F^{-\infty}\coloneqq\colim_i F^i$ the underlying object of $F$. * Given $F\in\Fild\scrC$, we say that $F$ is complete if $F^{+\infty}\coloneqq\lim_i F^i\simeq 0$. We will denote by $\cFild\scrC$ the full subcategory of $\Fild\scrC$ spanned by complete objects. Throughout, we will concentrate on the case of decreasing filtrations. Let $\scrC$ be an $\infty$-category with cofibers and countable coproducts. Precomposition with the inclusion map $\iota_n\colon\Delta^{\{n+1, n\}} \to \bbZ\op$ induces functors $$(\iota_n)^*\colon\Fild\scrC\to\Fun(\Delta^1,\scrC),\qquad F \mapsto \left(F^{n+1}\to F^{n}\right).$$ * We will define the $n$-th graded functor $\gr^n$ as the composite functor $$\gr^n\colon\Fild\scrC\xto{(\iota_n)^*}\Fun(\Delta^1,\scrC)\xto{\cofib}\scrC.$$ * We define the associated graded functor $\gr$ as the composite functor $$\gr\colon\Fild\scrC\xto{(\gr^n)_{n\in\bbZ}}\prod_{n\in\bbZ}\scrC\xto{\bigoplus}\scrC.$$ We say that a map $\alpha\colon F\to G$ in $\Fild\scrC$ is a graded equivalence if $\gr(\alpha)$ is an equivalence. Equivalently, 
if for all $n\in\bbZ$, the dotted map \begin{tikzcd} F^{n+1} \ar[d] \ar[r] & F^n \ar[d] \ar[r] & \gr^n F \ar[d, dashed]\\ G^{n+1} \ar[r] & G^n \ar[r] & \gr^n G \end{tikzcd} induced by the universality of cofibers is an equivalence. Let $\scrC$ be an $\infty$-category. For $n,m\in\bbZ\cup\{+\infty,-\infty\}$, with $n\leq m$, we use the notation $$F^n/F^m\coloneqq\cofib\left(F^m\to F^n\right)$$ to denote the cofiber of the evident map. One of the nice features of ordinary cochain complexes is the possibility of writing them as limits of bounded above (or below) ones. This feature is still present in the coherent setting, as we now show. Given any integer $n$, let $\catCh_{(-\infty,n]}$ denote the full subcategory of $\catCh$ spanned by $\pt$ and the integers $m\leq n$. Let now $\scrC$ denote a pointed complete[or, more generally, such that all the relevant Kan extensions exist and are pointwise] $\infty$-category. Then, each inclusion $\iota^n\colon\catCh_{(-\infty,n]}\to\catCh$ induces, by right Kan extension along it, an adjunction \begin{tikzcd}[column sep=huge] \coCh(\scrC) \ar[r, shift left=1.1ex, "(\iota^n)^*"] & \Fun^0\left(\catCh\op_{(-\infty,n]},\scrC\right). \ar[l, shift left=1.1ex,"\iota^n_*"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} For any given $C\in\coCh(\scrC)$, we can compute explicitly the values of $\iota^n_*(\iota^n)^*C$. As $\iota^n$ is fully faithful, the only values we need to determine are the ones for $m>n$, where the right Kan extension formula exhibits $\iota^n_*(\iota^n)^*C(m)$ as a limit over a pointed indexing category; this limit is $0$, as $C$ preserves the zero object. If we now set $(-)^{\leq n} \coloneqq \iota^n_*(\iota^n)^*$, the unit of the adjunction $(\iota^n)^*\dashv\iota^n_*$ induces a natural transformation $\id\Rightarrow(-)^{\leq n}$ of endofunctors of $\coCh(\scrC)$; the functor $(-)^{\leq n}$ sends a complex $C$ to the complex $C^{\leq n}$, which in degree $m$ is given by $$ \begin{cases} C^m &\text{ if } m\leq n\\ 0 &\text{ else.} \end{cases} $$ Similarly, the inclusions $\iota^{n,n+1}\colon\catCh_{(-\infty,n]}\to \catCh_{(-\infty,n+1]}$ induce adjunctions \begin{tikzcd}[column sep=huge] \Fun^0\left(\catCh\op_{(-\infty,n+1]},\scrC\right) \ar[r, shift left=1.1ex, "(\iota^{n,n+1})^*"] & \Fun^0\left(\catCh\op_{(-\infty,n]},\scrC\right). \ar[l, shift left=1.1ex,"\iota^{n,n+1}_*"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} Notice that we have a natural equivalence $$(\iota^{n+1}\circ\iota^{n,n+1})^* \simeq (\iota^{n,n+1})^*(\iota^{n+1})^* \simeq (\iota^n)^*;$$ by passing to adjoints, we get a natural transformation \begin{equation}\label{eq-compose} (\iota^{n+1})^* \Rightarrow \iota^{n,n+1}_*(\iota^n)^*; \end{equation} if we now precompose (<ref>) with $\iota^{n+1}_*$ (using that adjunctions compose), we get a natural transformation $$\upsilon^{n+1}\colon(-)^{\leq n+1}\Rightarrow(-)^{\leq n}.$$ By inspection, the above is given pointwise by $$\begin{cases} \id\colon C^m\to C^m &\text{ if } m\leq n \\ 0\colon C^{n+1} \to 0 &\text{ if } m = n+1 \\ 0\colon 0 \to 0 &\text{ if } m \geq n+2. \end{cases}$$ Similarly to what we did in Construction <ref>, one can truncate below a certain integer. It is also possible to consider left Kan extensions along the $\iota^n$'s, and the induced counits, to obtain different truncations with $\cofib \partial^n$ in degree $n+1$, for truncations above $n$, or $\fib \partial^n$ in degree $n-1$, for truncations below $n$. In what follows, we won't need any such construction. Let $\scrC$ be a pointed complete $\infty$-category. Then, for any $C\in\coCh\scrC$, $$C\simeq\lim \left(\cdots \to C^{\leq n+1}\xto{\upsilon^{n+1}_C} C^{\leq n} \to \cdots\right)$$ (where the $\upsilon^n$'s are the natural transformations defined in Construction <ref>). 
It follows from Proposition <ref> that limits in $\coCh\scrC$ can be computed objectwise. As, for any $m\in\bbZ$, the tower $$\cdots \to \left(C^{\leq n+1}\right)^m\xto{\upsilon^{n+1}_C} \left(C^{\leq n}\right)^m \to \cdots$$ is eventually constant on the left, the result follows. Given any pointed $\infty$-category $\scrC$, the functor $\undch\colon\coCh(\scrC) \to \prod_{\bbZ} \scrC$ induced by precomposition with $\bbZ^\delta\to\catCh\op$ is conservative. As equivalences in $\infty$-categories are detected at the level of homotopy categories, this is clear. Let $\scrC$ be a complete and cocomplete semiadditive $\infty$-category, and let $n^*$ denote precomposition with $n\colon\Delta^0\to\catCh\op$. The “evaluation at $n$” functor $$\ev_n\colon\coCh(\scrC)\subseteq\Fun(\catCh\op,\scrC)\xto{n^*}\scrC$$ admits both a left and a right adjoint. The right adjoint $(\ev_n)_*$ is given objectwise by $$(\ev_n)_*X^m\simeq\begin{cases} X \quad &\text{if } m=n-1, n\\ 0 &\text{else,} \end{cases}$$ with the identity as the only nontrivial differential, and the left adjoint $(\ev_n)_!$ by $$(\ev_n)_!X^m\simeq\begin{cases} X \quad &\text{if } m=n, n+1\\ 0 &\text{else,} \end{cases}$$ with the identity as the only nontrivial differential. The functor $n^*$ admits both adjoints $n_!$ and $n_*$, given respectively by left and right Kan extension. As adjoint functors compose, it follows from Proposition <ref> that the right Kan extension $(\ev_n)_*$ is given by the composite $\ptize\circ n_*$, and the left Kan extension $(\ev_n)_!$ is given by $\ptize\circ n_!$. By Remark <ref>, $n_*$ is given by $$n_*X^m= \left[\Map_{\catCh\op}(m,n),X\right] = \begin{cases} X \oplus X \quad &\text{if } m=n-1, n\\ X &\text{else,} \end{cases}$$ with the differentials $n_*X^m\to n_*X^{m+1}$ determined by $$\Map_{\catCh\op}(m+1,n) \xto{(\partial_{m}\op)^*} \Map_{\catCh\op}(m,n)$$ and thus given by \begin{equation}\label{eq-complex-ext} \begin{tikzcd}[ampersand replacement=\&] \cdots \ar[r] \& X \ar[r,"{\begin{pmatrix} \id \\ \id \end{pmatrix}}"] \& X\oplus X \ar[r,"{\begin{pmatrix} \id & 0 \\ 0 & \id \end{pmatrix}}"] \& X\oplus X \ar[r,"{\begin{pmatrix} \id & 0 \end{pmatrix}}"] \& X \ar[r] \& \cdots \end{tikzcd} \end{equation} in degrees $n-2$ to $n+1$, and by identities elsewhere. By Lemma <ref>, $n_!$ is given by $$n_!X^m= \Map_{\catCh\op}(n,m) \tensor X = \begin{cases} X \oplus X \quad &\text{if } m=n,n+1\\ X &\text{else,} \end{cases}$$ with the differentials $n_!X^m\to n_!X^{m+1}$ determined by $$\Map_{\catCh\op}(n,m) \xto{(\partial_{m}\op)_*} \Map_{\catCh\op}(n,m+1)$$ and thus given by \begin{equation} \begin{tikzcd}[ampersand replacement=\&] \cdots \ar[r] \& X \ar[r,"{\begin{pmatrix} \id \\ 0 \end{pmatrix}}"] \& X\oplus X \ar[r,"{\begin{pmatrix} \id & 0 \\ 0 & \id \end{pmatrix}}"] \& X\oplus X \ar[r,"{\begin{pmatrix} \id & \id \end{pmatrix}}"] \& X \ar[r] \& \cdots \end{tikzcd} \end{equation} in degrees $n-1$ to $n+2$, and by identities elsewhere. By the explicit description of $\ptize$ given in Proposition <ref>, we get the formulas for $(\ev_n)_*$ and $(\ev_n)_!$. We now recall some definitions and facts about recollements, which we will use extensively in the following sections; we consider only recollements in the case of stable $\infty$-categories, but the theory holds in greater generality; see also [12], [5] and <cit.>. Let $\scrC$ be a stable $\infty$-category, and let $i\colon\scrC_0\hookrightarrow\scrC$ and $j\colon\scrC_1\hookrightarrow\scrC$ be full subcategories. 
We say that $\scrC$ is a recollement of the essential image of $i$ and the essential image of $j$ if: * Both $i$ and $j$ admit left adjoints: \begin{tikzcd} \scrC_0 \ar[r, hook, "i"'] & \scrC \ar[l, shift right=0.6ex, bend right, "i_L"'] \ar[r, "j_L"] \ar[from=r, hook', shift left=0.6ex, bend left, "j"] \ar[l,phantom, shift right=1.2ex, "\text{\rotatebox{-90}{$\dashv$}}"] & \scrC_1 \ar[l,phantom, shift left=1ex, "\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} * The functor $j_L$, left adjoint to $j$, carries every object of $\scrC_0$ to zero; * If $\alpha$ is a morphism of $\scrC$ such that $i_L(\alpha)$ and $j_L(\alpha)$ are equivalences, then $\alpha$ is an equivalence. It follows from <cit.> that if $\scrC$ is a recollement of $\scrC_0$ and $\scrC_1$, then we actually have the following adjunctions \begin{tikzcd}[column sep=huge] \scrC_0 \ar[r, hook, "i"' description] & \scrC \ar[l, shift right=0.6ex, bend right, "i_L"'] \ar[from=r, hook, shift right=0.6ex, bend right, "(j_L)_!"'] \ar[r, "j_L" description] \ar[from=r, hook', shift left=0.6ex, bend left, "j"] \ar[l, shift left=0.6ex, bend left, "i_R"] \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] & \scrC_1 \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \end{tikzcd} where $(j_L)_!$ is fully faithful, and $i_R$ is such that $$ii_R \to \id_\scrC \to jj_L$$ is a co/fiber sequence. Let $\scrC$ be a stable $\infty$-category, and let $i\colon\scrC_0\to\scrC$ be a fully faithful functor. The following are equivalent: * The functor $i$ admits a left adjoint and a right adjoint; * There exists a full subcategory $j\colon\scrC_1\hookrightarrow\scrC$, closed under equivalences, such that $\scrC$ is the recollement of the essential images of $i$ and $j$. Moreover, if the conditions above hold, we can identify $\scrC_1$ with the full subcategory $\scrC_0^\perp\subseteq\scrC$ spanned by those objects $X\in\scrC$ such that, for all $C\in \scrC_0$, the mapping space $\Map_{\scrC}(iC,X)$ is contractible. Recollements are closely related to semiorthogonal decompositions. Let $\scrC$ be a stable $\infty$-category. A semiorthogonal decomposition of $\scrC$ is the datum of two full subcategories $\scrC_0$ and $\scrC_1$ of $\scrC$ such that * $\scrC_1 \simeq \scrC_0^\perp$; * Every object $C\in\scrC$ sits in a cofiber sequence $$C_0\to C\to C_1$$ where $C_0\in\scrC_0$ and $C_1\in\scrC_1$. It follows from Remark <ref> that every recollement determines a semiorthogonal decomposition. We learned the following argument from <cit.>, and we present it here almost verbatim for the reader's convenience. The commutative square \begin{tikzcd} \id_\scrC \ar[r, "\eta^i"] \ar[d, "\eta^j"'] & i i_L \ar[d, "i i_L \eta^j"] \\ j j_L \ar[r, "\eta^j j j_L"] & i i_L j j_L \end{tikzcd} is Cartesian in $\Fun(\scrC,\scrC)$. It follows from Remark <ref> that the fiber of the vertical maps is given by $\eta^i i i_R \colon i i_R \to i i_L i i_R$, which, as $i_L i \simeq \id$, is an equivalence. In particular, every recollement determines a “fracture square” \begin{tikzcd} C \ar[r] \ar[d] & i_L C \ar[d] \\ j_L C \ar[r] & i_L j_L C \end{tikzcd} for any object $C\in\scrC$. We are now ready to prove the main results of this section. Let $\scrC$ be a stable $\infty$-category that is both complete and cocomplete. 
The $\infty$-category $\Fun(\catCh\op,\scrC)$ is the recollement of the essential images of $\const\colon\scrC\to\Fun(\catCh\op,\scrC)$ and $\coCh(\scrC)$: \begin{tikzcd}[column sep=huge] \scrC \ar[r, hook, "\const"' description, pos=.7] & \Fun(\catCh\op,\scrC) \ar[l, shift right=0.6ex, bend right, "\colim" description] \ar[from=r, hook, shift right=0.6ex, bend right, "i" description, pos=.4] \ar[r, "\ptize" description] \ar[from=r, hook', shift left=0.6ex, bend left, "i" description, pos=.4] \ar[l, shift left=0.6ex, bend left, "\lim" description] \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}", pos=.3] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}", pos=.3] & \coCh(\scrC) \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \end{tikzcd} Moreover, the inclusion $i\colon\coCh(\scrC)\to\Fun(\catCh\op,\scrC)$ is both left and right adjoint to a functor $\ptize$, given by $$\ptize F(n)\simeq F(n)/F(\pt)$$ on objects. Finally, the $\infty$-category $\coCh(\scrC)$ is stable, complete and cocomplete; if $\scrC$ is presentable, $\coCh(\scrC)$ is presentable as well. The functor $\const$ admits a left and a right adjoint, given respectively by left and right Kan extension along the terminal morphism (i.e. the $\colim$ and $\lim$ functors). Hence, by Proposition <ref>, $\Fun(\catCh\op,\scrC)$ is the recollement of (the essential image of) $\const$ and the full subcategory $\const(\scrC)^\perp$ orthogonal to it. We claim that \begin{equation}\label{eq-orth-pstch} \const(\scrC)^\perp\simeq\coCh(\scrC). \end{equation} To see this, recall that $\const(\scrC)^\perp$ is spanned by those objects $F\in\Fun(\catCh\op,\scrC)$ such that for all $X\in\scrC$ the mapping space $\Map_{\Fun(\catCh\op,\scrC)}(\const_X,F)$ is contractible. But, as $\const$ is left adjoint to $\lim$, and $\lim F\simeq F(\pt)$ (the base point $\pt$ being a zero object, and hence initial, in $\catCh\op$), we have $$\Map_{\Fun(\catCh\op,\scrC)}(\const_X,F)\simeq\Map_{\scrC}\left(X,F(\pt)\right),$$ and we see that (<ref>) holds. We denote by $\ptize$ the left adjoint to the fully faithful inclusion $i\colon\coCh(\scrC)\to\Fun(\catCh\op,\scrC)$. Now, $\coCh(\scrC)$ is stable by <cit.> (or just because of the fact that (co)limits in functor categories are computed pointwise) and, if $\scrC$ is presentable (as $\Fun(\catCh\op,\scrC)$ is presentable by <cit.>), $\coCh(\scrC)$ is presentable by <cit.>. By <cit.>, we see that \begin{equation}\label{eq-pt-cof-cns} \ptize F\simeq \cofib\left(\const_{F(\pt)}\to F\right) \end{equation} and in particular $\ptize F(n)\simeq F(n)/F(\pt)$. The existence of a left adjoint for $\ptize$ follows from Remark <ref>. To give a description, let us first note that, by (<ref>) and the fully faithfulness of the inclusion $i$, for any $C\in\coCh(\scrC)$ we can write $$\Map_{\coCh(\scrC)}\left(C,\ptize F\right)\simeq\Map_{\Fun(\catCh\op,\scrC)}\left(C,\cofib\left(\const_{F(\pt)}\to F\right)\right)$$ and, rotating the co/fiber sequence and using that corepresentable functors commute with limits, the latter is equivalent to $$\fib\left(\Map_{\Fun(\catCh\op,\scrC)}\left(C[-1],\const_{F(\pt)}\right) \to \Map_{\Fun(\catCh\op,\scrC)}\left(C[-1],F\right)\right)$$ which in turn, as $\const$ is right adjoint to $\colim$, can be computed as \begin{equation}\label{eq-in-lemma-ptize} \fib\left(\Map_{\scrC}\big(\colim C[-1],F(\pt)\big) \to \Map_{\Fun(\catCh\op,\scrC)}\left(C[-1],F\right)\right). \end{equation} But, as the colimit of a coherent cochain complex is always $0$, we have $$\Map_{\scrC}\big(\colim C[-1],F(\pt)\big)\simeq\pt$$ and thus (<ref>) is equivalent to $$\Omega\,\Map_{\Fun(\catCh\op,\scrC)}\left(C[-1],F\right) \simeq \Map_{\Fun(\catCh\op,\scrC)}\left(C,F\right),$$ so that $i$ is indeed left adjoint to $\ptize$. In particular, as $\ptize$ is both left and right adjoint to the inclusion, we have that $\coCh(\scrC)$ is closed under all limits and colimits in $\Fun(\catCh\op,\scrC)$, which is by hypothesis complete and cocomplete. Let $\scrC$ be a stable $\infty$-category that is both complete and cocomplete. 
The $\infty$-category $\Fild\scrC$ is the recollement of the essential images of $\const\colon\scrC\to\Fild\scrC$ and $i\colon\cFild\scrC\to\Fild\scrC$: \begin{tikzcd}[column sep=huge] \scrC \ar[r, hook, "\const"' description] & \Fild\scrC \ar[l, shift right=0.6ex, bend right, "\colim"' description, pos=.45] \ar[from=r, hook, shift right=0.6ex, bend right, "L_!" description] \ar[r, "L" description] \ar[from=r, hook', shift left=0.6ex, bend left, "i" description] \ar[l, shift left=0.6ex, bend left, "\lim" description, pos=.55] \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] & \cFild\scrC. \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \end{tikzcd} The left adjoint to the inclusion $i$ is given by Bousfield localization at the class of graded equivalences, and computed as $$LF^n\simeq F^n/F^{+\infty}.$$ Moreover, the $\infty$-category $\cFild\scrC$ is stable, complete and cocomplete; if $\scrC$ is presentable, $\cFild\scrC$ is presentable as well. The proof is completely analogous to the proof of Proposition <ref>. The only new element is the identification of the local maps for the Bousfield localization determined by $L$. In order to prove it, let us notice that $L\alpha$ is an equivalence if and only if $\cofib L\alpha \simeq L (\cofib \alpha) \simeq 0$, which in turn, by <cit.>, is the case if and only if $\cofib \alpha$ is essentially constant; by inspection of the following diagram \begin{tikzcd} F^{n+1} \ar[r] \ar[d] & F^n \ar[r] \ar[d] & \gr^n F \ar[d] \\ G^{n+1} \ar[r] \ar[d] & G^n \ar[r] \ar[d] & \gr^n G \\ (\cofib \alpha)^{n+1} \ar[r] & (\cofib \alpha)^n & \end{tikzcd} where all the rows and columns are co/fiber sequences, we see that this is the case if and only if $\alpha$ is a graded equivalence. § FILTERED SPECTRA AND COCHAIN COMPLEXES OF SPECTRA Our goal in this section is to construct an equivalence between the $\infty$-categories $\cFild\Sp$ and $\coCh(\Sp)$. In order to do so, we will first construct a pair of adjoint functors between $\Fild\Sp$ and $\coCh(\Sp)$ using the machinery of Appendix <ref>, and then prove that the right adjoint is a fully faithful functor having $\cFild\Sp$ as its essential image. Along the way, we compute explicitly the values of the pair of adjoints constructed abstractly. In order to construct the left adjoint, let us start with the following lemma. Restriction along $\susinftyp\Yo\colon\bbZ\to\Pst\bbZ$ induces an equivalence between colimit-preserving functors $\Pst\bbZ\to\coCh(\Sp)$ and functors $\bbZ\to\coCh(\Sp)$, whose inverse is given by associating to every functor $F\colon\bbZ\to\coCh(\Sp)$ its stable realization functor (see Definition <ref>). As, by Proposition <ref>, $\coCh(\Sp)$ is a stable $\infty$-category, this is just an application of Lemma <ref>, together with the observation that $\Fild\Sp \simeq \Pst{\bbZ}$. Let $\preaush\colon\bbZ\to\coCh(\Sp)$ be the functor defined on objects as $\preaush(n)=\spch{n}{n}$, where $$\spch{n}{n}\colon\catCh\op\to\Sp, \qquad m \mapsto \begin{cases} \bbS[n] &\text{if } m=n, \\ 0 &\text{if } m\ne n. \end{cases}$$ We define $\preaush$ on morphisms as follows. First of all, let us note that it is equivalent to determine a map in $\coCh(\Sp)$ or a functor $\Delta^1 \times\catCh\op\to\Sp$ whose restrictions to $\{0\}\times\catCh\op$ and $\{1\}\times\catCh\op$ both preserve zero objects (recall that $\Fun^0(\catCh\op,\Sp)$ is a full subcategory of $\Fun(\catCh\op,\Sp)$). 
Let $\iota_{m}\colon\catCh\op_{[m,m+1]}\to\catCh\op$ be the inclusion functor. Then, we define the value of $\preaush$ on the morphism $m\to m+1$ as the morphism corresponding to the following left Kan extension \begin{tikzcd} \Delta^1\times\catCh\op_{[m,m+1]} \ar[d, "\id\times\iota_m"'] \ar[r, "\alpha"] & \Sp \\ \Delta^1\times\catCh\op \ar[ur, dashed, "\beta"'] & \end{tikzcd} where $\alpha$ is obtained from the defining square for the suspension \begin{tikzcd} \bbS[m] \ar[r] \ar[d] & 0 \ar[d] \\ 0 \ar[r] & \bbS[m+1] \end{tikzcd} by extending this map $\Delta^1\times\Delta^1 \to \Sp$ to $\Delta^1\times\catCh\op_{[m,m+1]}$ in the only possible way that sends $(0,\pt)$ and $(1,\pt)$ to the zero spectrum. As $\id\times \iota_m$ is fully faithful, the values of $\beta(0,\pt)$ and $\beta(1,\pt)$ are determined by $\alpha$, and are zero by construction. By Corollary <ref>, this completes the construction of $\preaush$. Let $\aush\coloneqq\lvert - \rvert_{\preaush}^{\rm{st}} \colon\Fild(\Sp)\to\coCh(\Sp)$ (where $\aush$ stands for “Aushülen”, German for shelling a hull) denote the stable realization functor associated, by the equivalence of Lemma <ref>, to the functor $\preaush$ given in Construction <ref>, and let us denote by $\imp\coloneqq\nerve_{\preaush}^{\rm{st}}$ (where $\imp$ stands for “Impilare”, Italian for piling up) its right adjoint: \begin{tikzcd}[column sep=huge] \Fild\Sp \ar[r, shift left=1.1ex, "\aush"] & \coCh(\Sp) \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} (the reason for the hat in the notation will be clarified in Proposition <ref>). We will refer to $\aush$ as the shelling functor and to $\imp$ as the piling-up functor; we will use the terms associated shelled complex and piled-up filtered object for objects of the form $\aush F$ and $\imp C$, respectively. It follows from Proposition <ref> that, for any coherent cochain complex $C$, the filtered object $\imp C$ is given by $$\imp C^{\istar} \simeq\map_{\coCh(\Sp)}(\spch{\istar}{\istar},C).$$ Our next goal is to show that $\aush$ factors through the localization $L\colon\Fild\Sp\to\cFild\Sp$. In order to do so, it will be useful to identify the graded pieces of $\aush$. There is a natural equivalence $\aush(-)^n\simeq\gr^n(-)[n]$ of functors $\Fild(\Sp)\to\Sp$. By virtue of Lemma <ref>, it suffices to prove that both functors preserve colimits and agree on elements of the form $\susinftyp \Yo_{-}$. As both $\aush$ and $\ev_n$ admit right adjoints (see Proposition <ref>), $\aush(-)^n\simeq\ev_n\circ \ \aush$ preserves colimits. By Definition <ref>, $\gr^n$ is given by $\cofib\circ(\iota_n)^*$; by <cit.>, $\cofib$ admits a right adjoint, and since $\Sp$ is complete, $(\iota_n)^*$ admits a right adjoint, given by right Kan extension along $\iota_n$; hence $\gr^n(-)[n]$ preserves colimits as well. It follows from Remark <ref> that for all $m\in\bbZ$ $$\aush(\susinftyp \Yo_m)\simeq\lvert \Yo_m \rvert_{\preaush} \simeq \preaush(m) \simeq\spch m m$$ (see Appendix <ref> for a detailed discussion) and thus that $$\aush\left(\susinftyp \Yo_m\right)^n \simeq \begin{cases} \bbS[n] \quad &\text{if } m=n, \\ 0 \quad &\text{if } m\ne n, \end{cases}$$ whereas a direct check shows that $$\gr^n\left(\susinftyp \Yo_m\right) \simeq \begin{cases} \bbS \quad &\text{if } m=n, \\ 0 \quad &\text{if } m\ne n, \end{cases}$$ concluding the proof. The adjunction $\aush\dashv\imp$ given in Definition <ref> factors as \begin{tikzcd}[column sep=huge] \Fild\Sp \ar[r, shift left=1.1ex, "L"] & \cFild\Sp \ar[r, shift left=1.1ex, "\caush"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \ar[l, shift left=1.1ex, "i"] & \coCh(\Sp). \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} through the localization of Proposition <ref>. Let us first show that $\aush$ factors through the localization. By virtue of Proposition <ref>, it is enough to show that it sends maps inducing equivalences on associated gradeds to equivalences.
But, as per Lemma <ref>, the functor $\undch\colon\coCh(\Sp)\to\Gr(\Sp)$ is conservative. It is thus enough to know that on each component, $\aush(-)^n$ sends local maps to equivalences, which is true by virtue of Lemma <ref>. As $\aush$ preserves colimits, and $\cFild\Sp$ is a full subcategory of $\Fild\Sp$ closed under colimits (see Proposition <ref>), we get that $\caush$ preserves colimits, and hence admits a right adjoint. Since adjoints compose, this just means that $\imp$ takes values in the full subcategory of complete objects. Our next goal is to prove that the induced adjunction $\caush\dashv\imp$ is an equivalence of $\infty$-categories. To this end, we first need to prove a few key lemmata and get a better understanding of the functor $\imp$. The functor $\caush$ is conservative. Let $\alpha\colon F\to G$ be a map in $\cFild\Sp$ such that $\caush(\alpha)\colon\caush F \to \caush G$ is an equivalence. As $\coCh(\Sp)$ is a full subcategory of a functor category, equivalences are given pointwise, hence we have that, for all $n\in\bbZ$, $\caush(\alpha)$ induces an equivalence $$\caush F^n\xto{\ \sim\ } \caush G^n.$$ But, by Lemma <ref>, this just means that $\alpha$ induces an equivalence on associated gradeds, hence, by Proposition <ref>, it is an equivalence in $\cFild\Sp$. There is a natural equivalence $(\ev_n)_!\simeq(\ev_{n+1})_*$ (with notation as in Proposition <ref>) of functors $\Sp\to\coCh(\Sp)$. It follows from the pointwise description of Proposition <ref> that $(\ev_{n+1})_*$ preserves all colimits. As $\Sp\simeq\Pst{\Delta^0}\simeq\St(\Pre{\Delta^0})$, Lemma <ref> implies that it is enough to check that they take the same value on $\susinftyp\Yo_{\pt}\simeq\susinftyp\pt\simeq\bbS$. But this follows immediately, again from the description given in Proposition <ref>. We have equivalences $$\ptize \left(\susinftyp \Yo_n\right)\simeq(\ev_n)_!\bbS\simeq(\ev_{n+1})_*\bbS$$ of objects in $\coCh(\Sp)$. The second equivalence is an instance of Lemma <ref>. Regarding the first one, we will show that they represent the same functor. In fact: $$\Map_{\coCh(\Sp)}\left(\ptize\left(\susinftyp\Yo_n\right),C\right)\simeq\Map_{\Fun(\catCh\op,\Sp)}\left(\susinftyp\Yo_n,iC\right)\simeq\Omega^{\infty}C^n$$ and, using Proposition <ref>: $$\Map_{\coCh(\Sp)}\left((\ev_n)_!\bbS,C\right)\simeq\Map_{\Sp}\left(\bbS,C^n\right)\simeq\Omega^{\infty}C^n,$$ concluding the proof. Motivated by Lemma <ref>, we introduce the notation $\evalfun_n\bbS\coloneqq(\ev_n)_!\bbS$ to emphasize the connection with the pointed stabilization of the Yoneda embedding. We have the following “density” result for elements of the form $\evalfun_n\bbS$ in $\coCh(\Sp)$. Given any $C\in\coCh(\Sp)$, there is a natural equivalence $$C\simeq\colim_{n\in\bbZ} C^n\tensor\evalfun_n\bbS$$ where $\tensor$ denotes the canonical tensoring of stable $\infty$-categories over spectra (see <cit.>). Using Proposition <ref>, for any $D\in\coCh(\Sp)$ we can compute $$\Map_{\coCh(\Sp)}\left(\colim_{n} C^n \tensor\evalfun_n\bbS,\ D\right)\simeq\lim_{n}\Map_{\Sp}\left(C^n,D^n\right).$$ Now, using the end formula for the space of natural transformations (see <cit.>, <cit.>), we get that the latter is equivalent to $\Map_{\coCh(\Sp)}(C,D)$, hence both objects corepresent the same functor. There is a co/fiber sequence in $\coCh(\Sp)$ given by $$\spch{n-1}{n-1} \to \spch n n \to \evalfun_{n-1} \bbS[n]$$ where the first map is the structure map defined in Construction <ref>. By Proposition <ref>, the inclusion of $\coCh(\Sp)$ into $\Pst\catCh$ preserves colimits, therefore we can compute the cofiber in the latter $\infty$-category, where it can be computed pointwise (see <cit.>). The result then follows by inspection of the following diagram \begin{tikzcd} \spch{n-1}{n-1} \ar[d] &[-25pt] = &[-25pt] \cdots \ar[r] & 0 \ar[r] \ar[d] &\bbS[n-1] \ar[r] \ar[d] & 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & \cdots \\ \spch n n \ar[d] &[-25pt] = &[-25pt] \cdots \ar[r] & 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & \bbS[n] \ar[r] \ar[d, equal] & 0 \ar[r] \ar[d] & \cdots \\ \evalfun_{n-1} \bbS[n] &[-25pt] = &[-25pt] \cdots \ar[r] & 0 \ar[r] & \bbS[n] \ar[r, equal] & \bbS[n] \ar[r] & 0 \ar[r] & \cdots .\\ \end{tikzcd} The previous result lets us get an explicit description of the graded pieces of $\imp$.
Let $C\in\coCh(\Sp)$; then $$\gr^n(\imp C) \simeq C^n [-n].$$ By Lemma <ref>, we have a co/fiber sequence $$\spch{n}{n} \to \spch{n+1}{n+1} \to \evalfun_{n} \bbS[n+1].$$ By applying $\map_{\coCh(\Sp)}(-,C)$ to it, we get a fiber sequence $$\map_{\coCh(\Sp)}(\evalfun_{n}\bbS[n+1],C)\to\imp C^{n+1}\to\imp C^{n}$$ (see Remark <ref>) and, by Lemma <ref>, $$\gr^n(\imp C)\simeq\map_{\coCh(\Sp)}\left(\evalfun_{n}\bbS[n+1],C\right)[1]\simeq\map_{\Sp}\left(\bbS[n+1],C^n\right)[1]\simeq C^n[-n]$$ (where the last equivalence follows from Proposition <ref>). Let $C\in\coCh(\Sp)$. Then * If $C^m \simeq 0$, then $\imp C^m \simeq \imp C^{m+1}$. * If there exists an $N$ such that $C^m \simeq 0$ for all $m < N$, then $\imp C^m \simeq \imp C^N$ for all $m \leq N$. * If there exists an $N$ such that $C^m \simeq 0$ for all $m\geq N$, then $\imp C^m \simeq \imp C^N$ for all $m \geq N$. * If there exists an $N$ such that $C^m \simeq 0$ for all $m\geq N$, then $\imp C^m \simeq 0$ for all $m \geq N$ and $\imp C^{N-1} \simeq C^{N-1}[-N+1]$. * For all $n\in\bbZ$, we have that $\imp \spch n n \simeq \susinftyp\Yo_n$. The first three points are straightforward consequences of Proposition <ref>, whereas (<ref>) follows immediately from (<ref>), together with the completeness of $\imp C$ (see Proposition <ref>). Point (<ref>) follows from the definition of $\spch n n$ together with (<ref>-<ref>). The composite $\aush \imp$ is equivalent to $\id_{\coCh(\Sp)}$. It follows from Remark <ref> that $\imp$ commutes with arbitrary coproducts, and is thus cocontinuous. Hence, $\aush \imp$ is cocontinuous as well. By Lemma <ref>, we have that every coherent cochain complex is canonically a colimit of objects of the form $C^n\tensor\evalfun_n\bbS$. In particular, it is enough to check that the two functors agree on elements of the form $\evalfun_{n}\bbS$. Using Lemma <ref> and Corollary <ref>.<ref>, we have $$\aush\imp\left(\evalfun_n\bbS[n+1]\right) \simeq \aush\imp\,\cofib\left(\spch n n \to\spch{n+1}{n+1}\right) \simeq \aush\,\cofib\left(\susinftyp\Yo_n \to\susinftyp\Yo_{n+1}\right)$$ $$\simeq \cofib\left(\spch n n \to\spch{n+1}{n+1}\right) \simeq \evalfun_n\bbS[n+1],$$ concluding the proof. We can now prove the following theorem. The adjunction \begin{tikzcd}[column sep=huge] \cFild\Sp \ar[r, shift left=1.1ex, "\caush"] & \coCh(\Sp). \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} is an equivalence. Putting together Proposition <ref> and Proposition <ref>, we get that $\caush\imp\simeq\id_{\coCh(\Sp)}$. This, together with (the dual of) Proposition <ref>, implies that the counit $\eps\colon\caush\imp\Longrightarrow \id_{\coCh(\Sp)}$ of the adjunction $\caush\dashv\imp$ is an equivalence. If we now consider the triangular identity \begin{tikzcd} \caush \ar[r, Rightarrow, "\caush \eta"] \ar[dr, Rightarrow, "\id_{\caush}"'] & \caush\imp\caush \ar[d, Rightarrow, "\eps_{\caush}"] \\ & \caush \end{tikzcd} we see that, as both $\id_\caush$ and $\eps_\caush$ are equivalences, $\caush\eta$ is an equivalence as well; but, as $\caush$ is conservative by Lemma <ref>, $\eta$ has to be an equivalence too, concluding the proof. We conclude the section by giving more information about the functor $\imp$: leveraging Lemma <ref>, we obtain a recursive description of its components, which in turn gives a complete description of its values on bounded above cochain complexes. In the rest of the section, we will make free use of some results about cubic diagrams in stable $\infty$-categories, as presented in [10], and refer to op. cit. for all the related concepts, notations and terminology we use and do not introduce here. The following fact is somewhat implicit in [10], but as we will use it crucially, we report it here for convenience. Let $\scrC$ be a stable $\infty$-category, let $a\in\bbN$ and let $C:(\Delta^a)\op\to\scrC$ be a functor.
If $(\Delta^1)^a$ denotes the $a$-fold product of $\Delta^1$ with itself, and we denote by $\overrightharp{v} = (\overrightharp{v}_{a}, \overrightharp{v}_{a-1}, \ldots, \overrightharp{v}_1)$ the objects of $(\Delta^1)^a$ (where each $\overrightharp{v}_i$ can either be $0$ or $1$), let $F$ be the $a$-cube having for vertices: $$F (\overrightharp{v}) = \begin{cases} C(a) \quad &\text{if } \overrightharp{v} = (0,\ldots,0) \\[10pt] C(a-b) \quad &\text{if } 0 < b \leq a \text{, } \overrightharp{v}_i = 0 \text{ for } i > b \text{ and } \overrightharp{v}_i = 1 \text{ for } i \leq b \\[10pt] 0 &\text{else} \end{cases}$$ where all the nonzero maps are determined by $C$ (that is, $F$ is an $a$-cube having the $C(i)$'s on its “spine” and zero objects elsewhere). Then $$\totcof F\simeq\cofib^a (C) \simeq \underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{a \text{ times}} \kern-.2em\Big(C(a) \to C(a-1) \Big) \to C(a-2) \Big) \cdots \to C(0) \Big).$$ By <cit.>, one can extend $F$ to a coCartesian $(a+1)$-cube $\widetilde F$ such that $\widetilde{F}|_{\{0\}\times\Delta^a} \simeq F$, and with $\widetilde{F}|_{\{1\}\times\Delta^a}$ having $\totcof F$ as its terminal vertex, and $0$ elsewhere (see op. cit. for a precise statement). As $\widetilde F$ is coCartesian, by iteratively applying <cit.>, $\cofib^a(\widetilde{F})$ is a coCartesian $1$-cube, i.e. an equivalence. We conclude by observing that its source is given by $\cofib^a(F)$, and its target is just $\totcof F$. By virtue of Lemma <ref> and Lemma <ref>, we see that $$\imp C^n \simeq \map_{\coCh(\Sp)}\left(\spch n n , C\right) \simeq \map_{\coCh(\Sp)}\left(\cofib\big(\evalfun_{n-1}\bbS[n-1]\to\spch{n-1}{n-1}\big), C\right) \simeq \fib\left(\imp C^{n-1}\to C^{n-1}[-n+1]\right)$$ hence, by iteratively applying the above, we get that, for all $a\in\bbN$, $\imp C^n$ is naturally equivalent to $$\underbrace{ \fib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\fib \kern-.2em\Big( \kern-.2em\fib }_{a \text{ times}} \kern-.2em\Big(\imp C^{n-a} \to C^{n-a}[-n+a] \Big) \to C^{n-a+1}[-n+a-1] \Big) \cdots \to C^{n-1}[-n+1] \Big).$$ By stability, the above is equivalent to $$\underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{a \text{ times}} \kern-.2em\Big(\imp C^{n-a} \to C^{n-a}[-n+a] \Big)[-1] \to C^{n-a+1}[-n+a-1] \Big)[-1] \cdots \to C^{n-1}[-n+1] \Big)[-1]$$ which, in turn, is equivalent to $$\underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{a \text{ times}} \kern-.2em\Big(\imp C^{n-a}[-a] \to C^{n-a}[-n] \Big) \to C^{n-a+1}[-n] \Big) \cdots \to C^{n-1}[-n] \Big).$$ By <ref>, the latter is equivalent to the total cofiber of a suitable cube $F_a$, that is $$\imp C^n \simeq \totcof F_a$$ and hence, by <cit.>, $$\imp C^n \simeq \left(\totfib F_a\right)[a].$$ We can describe $F_a$ explicitly as follows; if $(\Delta^1)^a$ denotes the $a$-fold product of $\Delta^1$ with itself, and we denote by $\overrightharp{v} = (\overrightharp{v}_{a-1}, \overrightharp{v}_{a-2}, \ldots, \overrightharp{v}_0)$ the objects of $(\Delta^1)^a$ (where each $\overrightharp{v}_i$ can either be $0$ or $1$), $F_a$ is the $a$-cube having for vertices: $$F_a (\overrightharp{v}) = \begin{cases} \imp C^{n-a}[-a] \quad &\text{if } \overrightharp{v} = (0,\ldots,0) \\[10pt] C^{n-a+b}[-n] \quad &\text{if } 0 \leq b < a \text{, } \overrightharp{v}_i = 0 \text{ for } i > b \text{ and } \overrightharp{v}_i = 1 \text{ for } i \leq b \\[10pt] 0 &\text{else.} \end{cases}$$ As an example, in the case $a=3$, the formula above looks as follows \imp C^n \simeq \totcof \left( \begin{tikzcd} \imp C^{n-3}[-3] \ar[rr]\ar[dd]\ar[dr]&& C^{n-3}[-n] \ar[dr] \ar[dd] & \\ & 0 \ar[rr, crossing over] && C^{n-2}[-n] \ar[dd] \\ 0 \ar[rr]\ar[dr]&& 0 \ar[dr]& \\ & 0 \ar[rr] \ar[from=uu, crossing over]&& C^{n-1}[-n] \\ \end{tikzcd} \right). The previous remark takes a particularly pleasant form when the complex is bounded above. Let $C\in\coCh(\Sp)$, and assume there exists an $N$ such that $C^m \simeq 0$ for all $m>N$; then for all $a>0$ $$\imp C^{N-a} \simeq \underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{a \text{ times}} \kern-.2em\Big(C^{N-a}[-N] \to C^{N-a+1}[-N] \Big) \to C^{N-a+2}[-N] \Big) \cdots \to C^{N}[-N] \Big).$$
It follows from Remark <ref>, Corollary <ref> and <cit.> that the $(a+1)$-cube defined by $$F (\overrightharp{v}) = \begin{cases} \imp C^{N-a}[-a] \quad &\text{if } \overrightharp{v} = (0,\ldots,0) \\[10pt] C^{N-a+b}[-N] \quad &\text{if } 0 \leq b < a \text{, } \overrightharp{v}_i = 0 \text{ for } i > b \text{ and } \overrightharp{v}_i = 1 \text{ for } i \leq b \\[10pt] \imp C^N \simeq C^N[-N] \quad &\text{if } \overrightharp{v} = (1,\ldots,1) \\[10pt] 0 &\text{else} \end{cases}$$ (where $\overrightharp{v} = (\overrightharp{v}_{a}, \overrightharp{v}_{a-1}, \ldots, \overrightharp{v}_0)$) is coCartesian. In particular, again by <cit.>, $\imp C^{N-a}[-a]$ is the total fiber of $F|_{\Delta^a \times \{1\}}$; the claim then follows from <cit.>. In the cases $a=2$ and $a=3$, the previous Corollary specializes to $$\imp C^{N-2} \simeq \totcof \left( \begin{tikzcd} C^{N-2} \ar[r] \ar[d] & C^{N-1} \ar[d] \\ 0 \ar[r] & C^{N} \\ \end{tikzcd} \right)[-N]$$ and $$\imp C^{N-3} \simeq \totcof \left( \begin{tikzcd} C^{N-3} \ar[rr]\ar[dd]\ar[dr]&& C^{N-2} \ar[dr] \ar[dd] & \\ & 0 \ar[rr, crossing over] && C^{N-1} \ar[dd] \\ 0 \ar[rr]\ar[dr]&& 0 \ar[dr]& \\ & 0 \ar[rr] \ar[from=uu, crossing over]&& C^{N}\\ \end{tikzcd} \right)[-N]$$ The following remark, inspired by the “Gap objects” considered in <cit.>, will be useful later, and sheds some light on the relation between coherent cochain complexes and the objects called $\bbZ$-complexes in op. cit. Proposition <ref> and the calculus of total cofibers of cubic diagrams also allow us to understand all the intermediate subquotients of $\imp C$. We can consider the diagram consisting of co/Cartesian squares \begin{tikzcd}[column sep = 0.7em] \imp C^n \ar[r] \ar[d] & \imp C^{n-1} \ar[r] \ar[d] & \imp C^{n-2} \ar[r] \ar[d] & \imp C^{n-3} \ar[r] \ar[d] & \imp C^{n-4} \ar[d] \ar[r] & \cdots\\ 0 \ar[r] & C^{n-1}[-n+1] \ar[r] \ar[d] & \imp C^{n-2}/\imp C^{n} \ar[r] \ar[d] & \imp C^{n-3}/\imp C^{n} \ar[r] \ar[d] & \imp C^{n-4}/\imp C^{n} \ar[d] \ar[r] & \cdots\\ & 0 \ar[r] & C^{n-2}[-n+2] \ar[r] \ar[d] & \imp C^{n-3}/\imp C^{n-1} \ar[r] \ar[d] & \imp C^{n-4}/\imp C^{n-1} \ar[d] \ar[r] & \cdots\\ && 0 \ar[r] & C^{n-3}[-n+3] \ar[r]\ar[d] & \imp C^{n-4}/\imp C^{n-2} \ar[r]\ar[d] & \cdots\\ &&&\vdots & \vdots \end{tikzcd} from which we can deduce that, for any $n\in\bbZ$, $$\imp C^{n-2}/\imp C^{n}\simeq \cofib(\partial^{n-2})[-n+1];$$ moreover, we can proceed inductively and apply <cit.> to the cofiber sequences $$\imp C^{n-k+1}/\imp C^n \to \imp C^{n-k}/\imp C^n \to C^{n-k}[-n+k]$$ to identify $\imp C^{n-k}/\imp C^n$ with the total cofiber of a cube having the truncation $$\left(C^{n-k}\to C^{n-k+1}\to \cdots \to C^{n-1}\right)[-n+1]$$ of $C[-n+1]$ on its “spine” and zeroes elsewhere; thus, by Proposition <ref> we have $$\imp C^{n-k}/\imp C^n \simeq \underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{k-1 \text{ times}} \kern-.2em\Big(C^{n-k} \to C^{n-k+1} \Big) \to C^{n-k+2} \Big) \cdots \to C^{n-1} \Big)[-n+1]$$ for any $k\geq 2$. § THE GENERAL EQUIVALENCE In this section, we extend the result of Theorem <ref> to all stable $\infty$-categories with sequential limits. Along the way, we prove that the explicit formulas given for $\caush$ and $\imp$ in the previous section hold also in the general case. Let $\scrC$ be a small stable $\infty$-category.
Then, the following equivalences hold: $$\Fun\left(\scrC,\coCh\left(\Sp\right)\right)\simeq\coCh\left(\Fun\left(\scrC,\Sp\right)\right)$$ $$\Fun\left(\scrC,\cFild\left(\Sp\right)\right)\simeq\cFild\left(\Fun\left(\scrC,\Sp\right)\right).$$ We prove this for $\coCh\left(\Fun\left(\scrC,\Sp\right)\right)$, the case of $\cFild\left(\Fun\left(\scrC,\Sp\right)\right)$ being entirely analogous. We have that, by definition of $\coCh\left(\Sp\right)$, \begin{equation}\label{eq-funcats} \Fun\left(\scrC,\coCh\left(\Sp\right)\right) \simeq\Fun\left(\scrC,\Fun^0\left(\catCh\op,\Sp\right)\right). \end{equation} Now, as $$\Fun\left(\scrC,\Fun\left(\catCh\op,\Sp\right)\right) \simeq\Fun\left(\scrC\times\catCh\op,\Sp\right)$$ we have that the two sides of (<ref>) are equivalent to the full subcategory of $\Fun(\scrC\times\catCh\op,\Sp)$ spanned by functors that are pointed in the second variable (that is, sending any pair $(C,0)\in\scrC\times\catCh\op$ to $0$). Thus, as $$\Fun\left(\scrC\times\catCh\op,\Sp\right) \simeq\Fun\left(\catCh\op,\Fun\left(\scrC,\Sp\right)\right)$$ the two sides of (<ref>) are in turn equivalent to the full subcategory $$\Fun^0\left(\catCh\op,\Fun\left(\scrC,\Sp\right)\right) \subset\Fun\left(\catCh\op,\Fun\left(\scrC,\Sp\right)\right)$$ spanned by pointed functors, which, by definition, is $\coCh\left(\Fun\left(\scrC, \Sp\right)\right)$. Let $\scrC$ be a small stable $\infty$-category. Then, the equivalence of Theorem <ref> extends to an equivalence between $\coCh(\Fun(\scrC,\Sp))$ and $\cFild(\Fun(\scrC,\Sp))$, which we will again denote by \begin{tikzcd}[column sep=huge] \cFild(\Fun(\scrC,\Sp)) \ar[r, shift left=1.1ex, "\caush"] & \coCh(\Fun(\scrC,\Sp)). \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\sim" description] \end{tikzcd} One way to re-phrase Lemma <ref> and Corollary <ref> is to say that objects in $\coCh(\Fun(\scrC,\Sp))$ can be described as bifunctors $(C,n)\mapsto G_C(n)$ such that $G_C(\pt)\simeq 0$ for any $C\in\scrC$, and similarly objects in $\cFild(\Fun(\scrC,\Sp))$ can be described as bifunctors $(C,n)\mapsto F_C(n)$ such that $\lim_n F_C(n) \simeq 0$ for any $C\in\scrC$. The equivalence of Corollary <ref> is then given on objects by pointwise (in $\scrC$) postcomposition with $\caush$ and $\imp$; that is, we have $$\caush\Big((C,n)\mapsto F_C^n\Big) \simeq \Big((C,n) \mapsto \caush\big(F_C^\bullet\big)^n\Big)$$ $$\imp\Big((C,n)\mapsto G_C^n\Big) \simeq \Big((C,n) \mapsto \imp\big(G_C^\bullet\big)^n\Big).$$ Let $\scrC$ be a small stable $\infty$-category, and let $\scrA\subseteq\Fun(\scrC,\Sp)$ be a full stable subcategory closed under sequential limits. Postcomposition with the inclusion $\iota\colon\scrA\to\Fun(\scrC,\Sp)$ induces fully faithful functors $$\iota_{\mathrm{Fil}}\colon\cFild(\scrA)\to\cFild(\Fun(\scrC,\Sp)) \qquad\text{and}\qquad \iota_{\mathrm{Ch}}\colon\coCh(\scrA)\to\coCh(\Fun(\scrC,\Sp)).$$ By inspection, an object $F$ lies in the essential image of $\iota_{\mathrm{Fil}}$ if and only if $F^n$ lies in $\scrA$ for all integers $n$, and an object $C$ lies in the essential image of $\iota_{\mathrm{Ch}}$ if and only if $C^n$ lies in $\scrA$ for all integers $n$. Let $\scrA\subseteq\Fun(\scrC,\Sp)$ be a full, stable subcategory closed under sequential limits. * If $F$ is a complete filtered object in $\cFild(\Fun(\scrC,\Sp))$ that lies in the essential image of $\cFild(\scrA)$, then $\caush F$ lies in the essential image of $\coCh(\scrA)$. * If $C$ is a coherent cochain complex in $\coCh(\Fun(\scrC,\Sp))$ that lies in the essential image of $\coCh(\scrA)$, then $\imp C$ lies in the essential image of $\cFild(\scrA)$. Let $F\colon (C,n)\mapsto F_C^n$ lie in (the essential image of) $\cFild(\scrA)$. By Lemma <ref> together with Remark <ref>, we have that, for any integer $m$, $(\caush F)^m$ is given by $\cofib\left(F_\bullet^{m+1}\to F_\bullet^m\right)[m]$.
By Remark <ref>, each $F_\bullet^n$ lies in $\scrA$; hence, as $\scrA$ is a stable subcategory, each $(\caush F)^m$ lies in $\scrA$, and thus $\caush F$ lies in the essential image of $\coCh(\scrA)$. Let $G\colon (C,n)\mapsto G_C^n$ lie in (the essential image of) $\coCh(\scrA)$. By Lemma <ref>, $G$ can be expressed as the limit of bounded above complexes: $$G\simeq\lim_{j\in\bbZ\op}G^{\leq j};$$ notice that, by Remark <ref> and by the definition of the $G^{\leq j}$'s given in Construction <ref>, all the objects of the form $(G^{\leq j})^k_\bullet$ belong to $\scrA$. By Remark <ref> together with Remark <ref>, we have that, for any choice of integers $j$ and $m$, $(\imp G^{\leq j})^m_\bullet$ is either zero or can be expressed as a suitable total fiber of a finite diagram $K^{j,m}$ whose entries are either zeroes or of the form $(G^{\leq j})^k_\bullet[j]$ for different values of $k$; in particular, $(\imp G^{\leq j})^m_\bullet$ is a finite limit of a diagram with values in $\scrA$. Putting everything together (and keeping in mind that by Proposition <ref> evaluation commutes with limits), we have that $$(\imp G)^n \simeq \left(\imp\left(\lim_j G^{\leq j}\right)\right)^n \simeq \left(\lim_j \imp\left(G^{\leq j}\right)\right)^n \simeq \lim_j \left(\imp G^{\leq j}\right)^n \simeq \lim_j \totfib\left(K^{j,n}\right)$$ is given by a sequential limit of finite limits of elements of $\scrA$, and thus lies in $\scrA$ for any given $n$. This in turn proves that $\imp G$ lies in the essential image of $\cFild(\scrA)$. Let $\scrA\subseteq\Fun(\scrC,\Sp)$ be a full, stable subcategory closed under sequential limits. Then the equivalence between $\cFild(\Fun(\scrC,\Sp))$ and $\coCh(\Fun(\scrC,\Sp))$ restricts to an equivalence $$\cFild(\scrA) \simeq \coCh(\scrA).$$ It follows from Lemma <ref> that (in the notation of Remark <ref>) $\caush\circ\iota_{\mathrm{Fil}}$ factors through $\iota_{\mathrm{Ch}}$ and $\imp\circ\iota_{\mathrm{Ch}}$ factors through $\iota_{\mathrm{Fil}}$, i.e. there exist functors (which we'll temporarily denote $A$ and $I$) such that $\caush\circ\ifil\simeq\ich\circ A$ and $\imp\circ\ich\simeq\ifil\circ I$. In particular, as $$\ifil \simeq \imp\circ\caush\circ\ifil \simeq \ifil\circ I\circ A \qquad\text{and}\qquad \ich \simeq \caush\circ\imp\circ\ich \simeq \ich\circ A\circ I,$$ the full faithfulness of $\ifil$ and $\ich$ implies that $A$ and $I$ are mutually inverse. Let $\scrC$ be a stable $\infty$-category having sequential limits. Then there exists an equivalence of stable $\infty$-categories \begin{tikzcd}[column sep=huge] \cFild(\scrC) \ar[r, shift left=1.1ex, "\caush"] & \coCh(\scrC). \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\sim" description] \end{tikzcd} Let $U_0$ be our universe of small sets, and let $U_1$ denote a universe containing $U_0$ as an element. Let us denote by $\Sp_{U_1}$ the stabilization of the $\infty$-category of $U_1$-small spaces. The stable Yoneda embedding (see Definition <ref>) provides a fully faithful (see <cit.>) exact functor $$\stYo\colon\scrC\to\Fun\left(\scrC\op,\Sp_{U_1}\right).$$ The result now follows from Lemma <ref> applied to $\stYo$. In particular, by composing the equivalence of Theorem <ref> with the adjunction $L\dashv i$ of Proposition <ref>, we get an induced adjunction \begin{tikzcd}[column sep=huge] \Fild\scrC \ar[r, shift left=1.1ex, "\aush"] & \coCh\scrC \ar[l, shift left=1.1ex,"\imp"] \ar[l,phantom,"\text{\rotatebox{-90}{$\dashv$}}"] \end{tikzcd} for any stable $\infty$-category $\scrC$ with sequential limits.
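For a concrete illustration of these functors (a worked instance of Corollary <ref>, not needed in what follows), let $C\in\coCh(\Sp)$ be concentrated in a single degree, say $C^m\simeq 0$ for all $m\neq 0$. Then Corollary <ref> gives $$\imp C^{n} \simeq \begin{cases} C^0 \quad &\text{if } n\leq 0, \\ 0 \quad &\text{if } n> 0, \end{cases}$$ i.e. $\imp C$ is the complete filtered spectrum which is constantly $C^0$ in filtration degrees $n\leq 0$. Its only nontrivial graded piece is $\gr^0(\imp C)\simeq C^0$, so that $\caush\imp C\simeq C$, as predicted by Theorem <ref>.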
In the proof of Lemma <ref> we also showed that the pointwise descriptions already given for $\caush$ and $\imp$ in the case $\scrC=\Sp$ hold also in the general case of Theorem <ref>; that is, for $F\in\cFild\scrC$ $$\caush F^n\simeq\gr^n F[n]$$ and, for $C\in\coCh\scrC$, $$\imp C^n \simeq \totfib F_a [a]$$ for a suitable cube $F_a$, given explicitly by $$F_a (\overrightharp{v}) = \begin{cases} \imp C^{n-a}[-a] \quad &\text{if } \overrightharp{v} = (0,\ldots,0) \\[10pt] C^{n-a+b}[-n] \quad &\text{if } 0 \leq b < a \text{, } \overrightharp{v}_i = 0 \text{ for } i > b \text{ and } \overrightharp{v}_i = 1 \text{ for } i \leq b \\[10pt] 0 &\text{else.} \end{cases}$$ (see <ref> for details about the notation). In particular, Remark <ref> generalizes as well, yielding $$\imp C^{n-k}/\imp C^n \simeq \underbrace{ \cofib \kern-.2em\Big(\kern-.4em\cdots \kern-.2em\cofib \kern-.2em\Big( \kern-.2em\cofib }_{k-1 \text{ times}} \kern-.2em\Big(C^{n-k} \to C^{n-k+1} \Big) \to C^{n-k+2} \Big) \cdots \to C^{n-1} \Big)[-n+1]$$ for any $n\in\bbZ$ and $k\geq 2$. § COHERENT COCHAIN COMPLEXES AND BEILINSON T-STRUCTURES In this section, we study the connection between the pointwise t-structure on coherent cochain complexes and the Beilinson t-structure on filtered objects, and show how the former can in some sense be interpreted as an easier-to-understand version of the latter. In fact, if $\Fild\scrC$ is equipped with the Beilinson t-structure, then the full subcategory of complete filtered objects inherits a t-structure from it[this fact can easily be proved directly, but will be an immediate consequence of Theorem <ref>]. It turns out that such an inherited t-structure is equivalent to the one obtained by carrying the pointwise t-structure on $\coCh\scrC$ along the equivalence of Theorem <ref>; moreover, the Beilinson t-structure on $\Fild\scrC$ is in some sense characterized by this property and by carrying “trivial information” on essentially constant objects (see Theorem <ref> for a precise statement). In particular, $\Fild\scrC$ and $\cFild\scrC$ have the same heart (Remark <ref>). Let $\scrC$ be a stable $\infty$-category equipped with a t-structure. We define the pointwise t-structure on $\coCh\scrC$ to be the one defined by $$(\coCh\scrC)_{\geq 0} = \left\{C\in\coCh\scrC \ \middle|\ \forall n\in\bbZ \quad C^n \in\scrC_{\geq 0}\right\}$$ $$(\coCh\scrC)_{\leq 0} = \left\{C\in\coCh\scrC \ \middle|\ \forall n\in\bbZ \quad C^n \in\scrC_{\leq 0}\right\}.$$ We will denote the truncation functors for this t-structure by $\tau^\text{lvl}_{\geq n}$ and $\tau^\text{lvl}_{\leq n}$, and the homotopy objects by $\pi^\text{lvl}_n$ for all $n\in\bbZ$. It follows immediately from the definitions that $\coCh\scrC$ has precisely the same separatedness and completeness properties that $\scrC$ has. Let $\scrC$ be a stable $\infty$-category with sequential limits. The transferred t-structure on $\cFild\scrC$ is the t-structure $(\imp(\coCh\scrC)_{\geq 0},\imp(\coCh\scrC)_{\leq 0})$ obtained by transferring the pointwise t-structure along the equivalence of Theorem <ref>. As a direct consequence of the definitions, we have that $$\left(\cFild\scrC\right)^\heartsuit\simeq\left(\coCh\scrC\right)^\heartsuit\simeq\coCh\left(\scrC^\heartsuit\right).$$ The following definition is a straightforward generalization of the t-structure first introduced by Beilinson in [4]. Let $\scrC$ be a stable $\infty$-category having all sequential limits[this assumption is likely superfluous; see Remark <ref>], equipped with a right separated[unlike the previous one, this assumption is crucial; see Remark <ref>] t-structure $(\scrC_{\geq 0}, \scrC_{\leq 0})$.
The Beilinson t-structure $(\Fild_{\geq 0}\scrC, \Fild_{\leq 0}\scrC)$ on $\Fild\scrC$ is defined as follows: * $\Fild_{\geq 0}\scrC$ is the full subcategory spanned by the objects $F\in\Fild\scrC$ such that $$\gr^i(F)\in \scrC_{\geq -i} \text{ for all } i.$$ * $\Fild_{\leq 0}\scrC$ is the full subcategory spanned by the objects $F\in\Fild\scrC$ such that $$F^i\in \scrC_{\leq -i} \text{ for all } i.$$ (Note the asymmetry in the definition.) This t-structure appeared first, in a slightly different setting, in [4]. Its existence in this generality will be a consequence of <cit.> together with Theorem <ref>. We will denote the truncation functors for this t-structure by $\tau^B_{\geq n}$ and $\tau^B_{\leq n}$, and the homotopy objects by $\pi^B_n$ for all $n\in\bbZ$. The hypothesis of $\scrC$ having all sequential limits in Definition <ref> is there just because we will use Theorem <ref> to prove its existence. We believe that the t-structure can exist even without this assumption on $\scrC$, but as all the examples that arise in practice satisfy this extra hypothesis, we have not pursued a proof that avoids it. We can infer a few properties of the Beilinson t-structure directly from its definition: * Even if we are assuming $\scrC$ to be right separated, $\Fild\scrC$ need not be so; in fact, the full subcategory of $\infty$-coconnective objects $\cap_n (\Fild\scrC)_{\leq n}$ consists of all filtered objects whose associated graded is trivial, hence of all the essentially constant objects. * Since the full subcategory of $\infty$-connective objects consists of the levelwise $\infty$-connective ones, $\Fild\scrC$ is left separated if and only if $\scrC$ is so. We learned the following fact from <cit.>; although the result in loc. cit. is stated in less generality, the proof carries over verbatim to the general case. We report the argument here for the reader's convenience. Let $\scrC$ be a stable $\infty$-category having all sequential limits, equipped with a right separated t-structure $(\scrC_{\geq 0}, \scrC_{\leq 0})$, and let $\wtrunc{n}$ denote its Whitehead truncation functors. Let $\Fild(\scrC)$ be equipped with the Beilinson t-structure, and let $\wtrunc{n}^B$ denote its Whitehead truncation functors. Then, there is a natural equivalence $$\gr^i\circ\wtrunc{n}^B \simeq \wtrunc{n-i}\circ\gr^i$$ of functors $\Fild\scrC\to\scrC$, for all $i\in\bbZ$. Notice that for any $i$, the exact functor $\gr^i\colon\Fild\scrC\to\scrC$ carries $(\Fild\scrC)_{\geq 0}$ to $\scrC_{\geq -i}$. Moreover, as by <cit.> each $\scrC_{\leq -i}$ is closed under extensions, for $F\in(\Fild\scrC)_{\leq 0}$ the fiber sequence $$F^i\to\gr^i F\to F^{i+1}[1]$$ proves that $\gr^i F\in\scrC_{\leq -i}$, and thus $\gr^i$ carries also $(\Fild\scrC)_{\leq 0}$ to $\scrC_{\leq -i}$. That is, each $\gr^i$ is t-exact with respect to the Beilinson t-structure on $\Fild\scrC$ and the shifted t-structure $(\scrC_{\geq -i},\scrC_{\leq -i})$ on $\scrC$. As any exact and t-exact functor between stable $\infty$-categories equipped with t-structures commutes with the truncation functors associated to the t-structures, the result follows. In particular, in the hypotheses of Proposition <ref>, if we denote by $\pi_n$ the t-structure homotopy object functors of $\scrC$ and by $\pi_n^B$ the t-structure homotopy object functors of $\Fild\scrC$, we have the equivalence (natural in $F$) $$\gr^i\pi_n^B F \simeq \left(\pi_{n-i}\left(\gr^i F\right)\right)[-i]$$ for all $i,n\in\bbZ$. We recall the following theorem from [12] (which is an $\infty$-categorical generalization of <cit.>).
Given a recollement $\scrC_0\hookrightarrow\scrC\hookleftarrow\scrC_1$, suppose both $\scrC_0$ and $\scrC_1$ are equipped with t-structures; then there exists a t-structure on $\scrC$, called the glued t-structure, given by $$\scrC_{\geq 0} = \left\{X\in\scrC \ \middle|\ j_L X \in(\scrC_1)_{\geq 0} \text{ and } i_L X \in(\scrC_0)_{\geq 0}\right\}$$ $$\scrC_{\leq 0} = \left\{X\in\scrC \ \middle|\ j_L X \in(\scrC_1)_{\leq 0} \text{ and } i_R X \in(\scrC_0)_{\leq 0}\right\}$$ (with notations as in Remark <ref>). Let $\scrC$ be a stable $\infty$-category with all sequential limits equipped with a right separated t-structure $(\scrC_{\geq 0},\scrC_{\leq 0})$. Then the glued t-structure on $\Fild\scrC$ (via the recollement of Remark <ref>) obtained by considering * the trivial t-structure[that is, the one given by $(\scrC,\{0\})$, where all objects are connective, and only the zero object is coconnective] on $\scrC$, * the transferred t-structure on $\cFild\scrC$ is the Beilinson t-structure (with respect to $(\scrC_{\geq 0},\scrC_{\leq 0})$) on $\Fild\scrC$, as per Definition <ref>. Before proving Theorem <ref>, we state one immediate consequence of it (and of the definition of glued t-structure given in Theorem <ref>). The Beilinson t-structure on $\Fild\scrC$ induces a t-structure on the full subcategory of complete filtered objects $\cFild\scrC$ that is equivalent to the transferred t-structure of Definition <ref>. Let us start by identifying the subcategory of connective objects for the transferred t-structure on $\cFild\scrC$. We have that $F\in(\cFild\scrC)_{\geq 0}$ if and only if $\caush F \in \left(\coCh\scrC\right)_{\geq 0}$. By Definition <ref>, this is the case if and only if $$(\caush F)^n \simeq \gr^n F[n] \in \scrC_{\geq 0} \text{ for all } n,$$ that is, $$(\cFild\scrC)_{\geq 0} = \left\{F\in\cFild\scrC \ \middle|\ \forall n \ \gr^n F \in \scrC_{\geq -n}\right\}.$$ Now, according to Theorem <ref>, the connective objects in the glued t-structure on $\Fild\scrC$ are given by all the $G\in\Fild\scrC$ such that $$LG\in\left(\cFild\scrC\right)_{\geq 0}$$ (the condition on $G^{-\infty}$ being empty, as we are considering the trivial t-structure on $\scrC$); but, as for all $n$ we have $\gr^n LG \simeq \gr^n G$, the above is equivalent to the condition $$\gr^n G \in \scrC_{\geq -n} \text{ for all } n$$ which in turn determines exactly the class of connective objects for the Beilinson t-structure of Definition <ref>. As, by <cit.>, the class of connective objects completely determines the t-structure, provided its existence, the only thing left to check is that the description of coconnective objects given in Theorem <ref> coincides with the one given in Definition <ref>; that is, we have to prove that $$\left(\forall n \ F^n \in \scrC_{\leq -n}\right) \Longleftrightarrow \left(\forall n \ \gr^n F \in \scrC_{\leq -n} \text{ and } F^{+\infty} \simeq 0\right).$$ For the “only if” direction: <cit.> implies that $\scrC_{\leq -n}$ is closed under extensions, hence if $F^{n+1} \in \scrC_{\leq -n-1}$ and $F^n \in \scrC_{\leq -n}$, the fiber sequence $$F^n\to\gr^{n}F\to F^{n+1}[1]$$ proves that $\gr^n F\in\scrC_{\leq -n}$.
To check that $F^{+\infty}\simeq 0$, observe that any subset of the form $\bbZ\op_{\geq n}$ is an initial subcategory of $\bbZ\op$, hence for any $n\in\bbZ$ we have $$F^{+\infty}\coloneqq\lim\left(\cdots\to F^{n+1}\to F^n \to \cdots\right) \simeq\lim\left(\cdots\to F^{n+1}\to F^n\right);$$ as (by <cit.>) each $\scrC_{\leq n}$ is closed under all limits existing in $\scrC$, we get $$F^{+\infty}\in\scrC_{\leq n} \ \forall n\in\bbZ$$ (recall that $\scrC_{\leq m}\subseteq\scrC_{\leq n}$ for all pairs of integers $m\leq n$) and thus $F^{+\infty}\simeq 0$ by the right separatedness of the t-structure. For the “if” direction, let us start by noticing that, as $\gr^{n+1}F[1]\in\scrC_{\leq -n}$, the fiber sequence $$F^n/F^{n+2}\to\gr^n F\to\gr^{n+1}F[1]$$ proves $F^n/F^{n+2}\in\scrC_{\leq -n}$ (again, as the latter is closed under limits in $\scrC$). We can now proceed inductively for $m\geq 2$ to show that (as $\gr^{n+m}F[1]\in\scrC_{\leq -n-m+1}\subseteq\scrC_{\leq -n}$) the fiber sequence $$F^n/F^{n+m+1}\to F^n/F^{n+m}\to \gr^{n+m}F[1]$$ implies that all the objects $F^n/F^k$ for $k>n$ lie in $\scrC_{\leq -n}$. Since (again, by <cit.>) we know we can compute colimits in $\scrC_{\leq -n}$ just by computing them in $\scrC$ and then reflecting along the left adjoint to the inclusion (in particular, the colimit is the same in both categories if the object already happened to land in $\scrC_{\leq -n}$ when computed in $\scrC$), we have that $$\lim_k F^n/F^k \simeq\lim_k \cofib\left(F^k\to F^n\right) \simeq\cofib\left(\lim_k F^k\to F^n\right)$$ lies in $\scrC_{\leq -n}$. But, as by hypothesis $F^{+\infty}\simeq 0$, we have that $F^n\in\scrC_{\leq -n}$, as desired. Motivated by the previous results, we will refer to the transferred t-structure on $\cFild\scrC$ also as the Beilinson t-structure. In the situation of Theorem <ref>, passing to hearts one gets a “recollement” of Abelian categories: \begin{tikzcd}[column sep=huge] \scrC_0^\heartsuit \ar[r, hook, "i"' description] & \scrC^\heartsuit \ar[l, shift right=0.6ex, bend right, "i_L"'] \ar[from=r, hook, shift right=0.6ex, bend right, "(j_L)_!"'] \ar[r, "j_L" description] \ar[from=r, hook', shift left=0.6ex, bend left, "j"] \ar[l, shift left=0.6ex, bend left, "i_R"] \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] & \scrC_1^\heartsuit \ar[l,phantom, shift left=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \ar[l,phantom, shift right=2ex, "\text{\scalebox{1}{\rotatebox{-90}{$\dashv$}}}"] \end{tikzcd} where $i$, $j$ and $(j_L)_!$ are fully faithful. As shown already in <cit.>, in this situation $j_L$ exhibits $\scrC_1^\heartsuit$ as the quotient category $\scrC^\heartsuit/\scrC_0^\heartsuit$. In particular, in the hypotheses of Theorem <ref>, we have that (as $\scrC$ is endowed with the trivial t-structure) $\scrC^\heartsuit\simeq 0$ and thus (using Remark <ref>) $$\left(\Fild\scrC\right)^\heartsuit\simeq\left(\cFild\scrC\right)^\heartsuit\simeq\coCh\left(\scrC^\heartsuit\right)$$ (where, in the last expression, $\scrC^\heartsuit$ denotes the heart of the original t-structure on $\scrC$). It follows from Definition <ref> together with Corollary <ref> that for any $F\in\cFild\scrC$ and $C\in \coCh\scrC$ we have $$\caush \left(\tau^B_{\leq n} F\right) \simeq \tau^\text{lvl}_{\leq n} (\caush F) \quad \text{and} \quad \imp \left(\tau^\text{lvl}_{\leq n} C \right)\simeq \tau^B_{\leq n} (\imp C)$$ for all $n\in\bbZ$, and similar formulae for the truncations above $n$. Moreover, by Remark <ref>, we also have $$\pi^B_n F \simeq \pi^\text{lvl}_n \caush F \quad \text{and} \quad \pi^\text{lvl}_n C \simeq \pi^B_n \imp C$$ for all $n\in\bbZ$.
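To see what these formulas encode in practice, let us record a well-known example (we only sketch the identification of the differential). Take $\scrC=\Sp$ with its standard t-structure and $F\in\cFild\Sp$. Under the identification $(\cFild\Sp)^\heartsuit\simeq\coCh\left(\Sp^\heartsuit\right)$ of Remark <ref>, the formula $\gr^i\pi^B_n F\simeq\left(\pi_{n-i}(\gr^i F)\right)[-i]$ says that $\pi^B_n F$ corresponds to the cochain complex of abelian groups having $\pi_{n-i}\left(\gr^i F\right)$ in cochain degree $i$, $$\cdots\to\pi_{n-i}\left(\gr^i F\right)\to\pi_{n-i-1}\left(\gr^{i+1} F\right)\to\cdots,$$ where one can check that the differential is induced by the boundary maps of the co/fiber sequences $\gr^{i+1}F\to F^i/F^{i+2}\to\gr^i F$; this recovers the familiar complexes forming the $E_1$-page of the spectral sequence associated with the filtration $F$.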
One can construct on $\Fild\scrC$ the glued t-structure as in Theorem <ref> regardless of the right separatedness of the t-structure on $\scrC$; if $\scrC$ is not right separated, the class of coconnective objects will be the full subcategory $$\left\{F\in\Fild\scrC \ \middle|\ \aush F \in (\coCh\scrC)_{\leq 0} \text{ and } F^{+\infty} \simeq 0\right\}$$ but this class will in general not coincide with the one described in Definition <ref> (and, as by <cit.> the class of connective objects completely determines the t-structure, there cannot exist a t-structure exactly as in Definition <ref> if the two classes of “candidate coconnective objects” do not coincide). We are not aware of any application for this t-structure. § SYMMETRIC MONOIDAL STRUCTURES The $\infty$-categorical Day convolution (introduced in [15] and further developed in <cit.>) provides a way to equip any functor category $\Fun(\scrC,\scrD)$ with a symmetric monoidal structure, provided that both $\scrC$ and $\scrD$ are symmetric monoidal, and $\scrD$ is presentably so. In particular (see Remark <ref>), we can endow $\Fild\scrC$ with a symmetric monoidal structure whenever $\scrC$ is presentably symmetric monoidal. As it turns out, such a monoidal structure induces one on $\cFild\scrC$, and thus on $\coCh\scrC$, whenever Theorem <ref> applies. In this section, we analyze these induced symmetric monoidal structures, and their interaction with the t-structures introduced in the previous section. In particular, we prove that the Day convolution structure on both $\Fild\scrC$ and $\cFild\scrC$ is compatible with the Beilinson t-structures, and that the symmetric monoidal structure on $\coCh\scrC$ provides a homotopy coherent refinement of the usual tensor product of cochain complexes. Let $\scrC$ be a presentably symmetric monoidal $\infty$-category. By <cit.> (see also <cit.>, with $\kappa$ chosen to be the strongly inaccessible cardinal determining the size of our universe of small sets) we can endow $\Fild(\scrC)$ with the structure of a symmetric monoidal $\infty$-category, given by Day convolution (where the symmetric monoidal structure on $\bbZ\op$ is given by addition). Again, by <cit.>, if $F$ and $G$ are filtered objects in $\scrC$, their Day convolution product is given by Kan extension of $\tensor\circ(F,G)$ along $+$, hence by <cit.> and <cit.> it is pointwise given by \begin{equation}\label{eq-day-formula} (F\tensor_{\text{Day}}G)^n \simeq \colim_{ \substack{(s,t)\in\bbZ\op\times\bbZ\op \\ s+t\geq n}} F^s \tensor G^t \end{equation} where the shape of the colimit follows from inspection of the comma category $(+\downarrow n)$ (see Definition <ref>). In particular, $\Fild(\scrC)$ is presentably symmetric monoidal. Similarly, we can endow $\prod_\bbZ\scrC\simeq\Fun(\bbZ^\delta,\scrC)$ with a presentably symmetric monoidal structure given by Day convolution, whose product is pointwise given by $$\left((X_u)_{u\in\bbZ}\tensor_{\text{Day}}(Y_v)_{v\in\bbZ}\right)_n \simeq \bigoplus_{s+t=n} X_s \tensor Y_t$$ where once again the shape of the colimit follows from inspection of the comma category $(+\downarrow n)$. Let $\scrC$ be a presentably symmetric monoidal $\infty$-category, whose unit we'll denote by $\mathbbm{1}$. Then, the unit object for $(\Fild\scrC, \tensor_{\text{Day}})$ is given by the filtered object $$\mathbbm{1}_{\langle\leq 0\rangle}\coloneqq \cdots \to 0 \to 0 \to \mathbbm{1}\xto{\id}\mathbbm{1}\xto{\id}\cdots$$ consisting of copies of $\mathbbm{1}$ and identity morphisms for $n\leq 0$, and of copies of $0$ for $n>0$.
First, let us note that it is enough to prove the following claim: ($\spadesuit$) For any $F\in\Fild\scrC$ and any $n\in\bbZ$ we have $$(F\tensor_{\text{Day}}\mathbbm{1}_{\langle\leq 0\rangle})^n\simeq F^n.$$ In fact, the existence of the unitor equivalence $\mathbbm{1}_{\text{Day}}\tensor\mathbbm{1}_{\langle\leq 0\rangle}\xto{\sim} \mathbbm{1}_{\langle\leq 0\rangle}$ together with the claim implies the existence of equivalences $$\mathbbm{1}_{\text{Day}}^n\simeq \left(\mathbbm{1}_{\text{Day}}\tensor \mathbbm{1}_{\langle\leq 0\rangle}\right)^n\simeq \mathbbm{1}_{\langle\leq 0\rangle}^n$$ for all $n\in\bbZ$. As equivalences in $\Fild\scrC$ can be checked pointwise, this is enough to conclude $\mathbbm{1}_{\text{Day}}\simeq \mathbbm{1}_{\langle\leq 0\rangle}$. We now turn to the proof of claim ($\spadesuit$). By (<ref>), this boils down to proving that the colimit of the following diagram \begin{tikzcd} &\cdots \ar[d] & \cdots \ar[d] & \cdots \ar[d] & \cdots \ar[d] & \cdots \ar[d] & \cdots \ar[d]\\ \cdots \ar[r] & 0 \ar[r]\ar[d] & 0 \ar[r]\ar[d]& 0 \ar[r]\ar[d]& 0 \ar[r]\ar[d]& 0 \ar[r]\ar[d]& 0 \\ \cdots \ar[r] & 0 \ar[r]\ar[d] & 0 \ar[r]\ar[d]& 0 \ar[r]\ar[d]& 0 \ar[r]\ar[d]& 0 & \\ \cdots \ar[r] & F^{n+3} \ar[r]\ar[d, equal] & F^{n+2} \ar[r]\ar[d, equal]& F^{n+1} \ar[r]\ar[d, equal]& F^n && \\ \cdots \ar[r] & F^{n+3} \ar[r]\ar[d, equal] & F^{n+2} \ar[r]\ar[d, equal]& F^{n+1} &&& \\ \cdots \ar[r] & F^{n+3} \ar[r]\ar[d, equal] & F^{n+2} &&&& \\ &\cdots &&&&& \\ \end{tikzcd} is equivalent to $F^n$. By finality, it is enough to check that the colimit of the following diagram \begin{equation}\label{eq-zigzag} \begin{tikzcd} &&&&& \cdots \ar[d]\\ &&&& 0 \ar[r]\ar[d]& 0 & \\ &&& 0 \ar[r]\ar[d]& 0 && \\ && F^{n+1} \ar[r]\ar[d, equal]& F^{n} &&& \\ & F^{n+2} \ar[r]\ar[d, equal]& F^{n+1} &&&& \\ F^{n+3} \ar[r]\ar[d, equal] & F^{n+2} &&&&& \\ \cdots &&&&& \\ \end{tikzcd} \end{equation} is equivalent to $F^n$. If we denote by $A$ the colimit of the diagram \begin{equation}\label{eq-zero-zig} \begin{tikzcd} &&&&& \cdots \ar[d]\\ &&&& 0 \ar[r]\ar[d]& 0 & \\ &&& 0 \ar[r]& 0 && \\ \end{tikzcd} \end{equation} and by $B$ the colimit of the diagram \begin{equation}\label{eq-zag} \begin{tikzcd} &&& 0 \ar[d]&&& \\ && F^{n+1} \ar[r]\ar[d, equal]& F^{n} &&& \\ & F^{n+2} \ar[r]\ar[d, equal]& F^{n+1} &&&& \\ F^{n+3} \ar[r]\ar[d, equal] & F^{n+2} &&&&& \\ \cdots &&&&& \\ \end{tikzcd} \end{equation} we have that, by <cit.>, we can decompose the colimit of (<ref>) as the coproduct $A\oplus B$. As (<ref>) consists only of zero objects, its colimit is zero. Thus, the colimit of (<ref>) is equivalent to the colimit of (<ref>). By finality, we can omit the top right arrow $0\to F^n$ from the diagram in order to compute its colimit. By applying inductively <cit.>, we see that $B$ can be computed as the iterated pushout $$F^n\amalg_{F^{n+1}}F^{n+1}\amalg_{F^{n+2}}F^{n+2}\amalg_{F^{n+3}}\cdots.$$ As each of the pushouts is taken along an equivalence, we have $B\simeq F^n$, as desired.
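More generally, for $a\in\bbZ$, let $\mathbbm{1}_{\langle\leq a\rangle}$ denote the filtered object consisting of copies of $\mathbbm{1}$ in filtration degrees $n\leq a$ and of $0$ in degrees $n>a$ (so that $\mathbbm{1}_{\langle\leq 0\rangle}$ is the unit above). Running the same finality argument, one can check the expected multiplicativity $$\mathbbm{1}_{\langle\leq a\rangle}\tensor_{\text{Day}}\mathbbm{1}_{\langle\leq b\rangle}\simeq\mathbbm{1}_{\langle\leq a+b\rangle};$$ this is an instance of the general fact that Day convolution makes the (unit-tensored) Yoneda embedding monoidal, $\mathbbm{1}_{\langle\leq a\rangle}$ being the image of $a\in\bbZ\op$ and the monoidal structure on $\bbZ\op$ being addition.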
Recall the following definition. Let $\scrC$ be a symmetric monoidal $\infty$-category, and let $L\colon\scrC\to\scrC$ be a localization functor. The functor $L$ is compatible with the symmetric monoidal structure if for every $L$-equivalence $X\to Y$, and every $Z\in\scrC$, the induced map $X\tensor Z\to Y\tensor Z$ is an $L$-equivalence. The following proposition already appeared as <cit.>; we present an alternative proof. Let $\scrC$ be a presentably symmetric monoidal $\infty$-category. Then, the localization functor $L\colon\Fild\scrC\to\cFild\scrC$ is compatible with Day convolution. Note that, by <cit.>, $\Fild\scrC$ admits an internal mapping object $\map_{\Fild\scrC}(F,G)$ (where $\map_{\scrC}$ denotes the internal mapping object of $\scrC$), pointwise given by the end $$\map_{\Fild\scrC}(F,G)^n\simeq\int_{m\in\bbZ\op}\map_{\scrC}\left(F^m,G^{m+n}\right).$$ By <cit.>, it suffices to prove that, given any $F\in\Fild\scrC$ and any $G\in\cFild\scrC$, the internal mapping object $\map_{\Fild\scrC}(F,G)$ is complete; by the following computation $$\lim_{n}\map_{\Fild\scrC}(F,G)^n \simeq \int_{m\in\bbZ\op}\map_{\scrC}\left(F^m,\lim_n G^{m+n}\right) \simeq \int_{m\in\bbZ\op}\map_{\scrC}\left(F^m,0\right) \simeq 0$$ this is in fact the case. It follows from Proposition <ref> and <cit.> that if $\scrC$ is presentably symmetric monoidal, we have an induced presentably symmetric monoidal structure on $\cFild(\scrC)$, which we'll refer to as the completed Day convolution monoidal structure and will denote by $\widehat{\tensor}$. In particular, we have that $$F \ \widehat{\tensor} \ G \simeq L \Big(F\tensor_{\text{Day}}G\Big).$$ From Proposition <ref> and Remark <ref>, we have that the unit for $\widehat\tensor$ is given by $$L\mathbbm{1}_{\langle\leq 0\rangle}\simeq\mathbbm{1}_{\langle\leq 0\rangle}.$$ Let $\scrC$ be a stable presentably symmetric monoidal $\infty$-category. We refer to the symmetric monoidal structure induced on $\coCh(\scrC)$ by the equivalence of Theorem <ref> as the coherent cochains tensor product, and we will denote it just by $\tensor$. It follows from Theorem <ref> together with Remark <ref> that the functor $\undch$ defined in Lemma <ref> fits into the following commutative diagram \begin{tikzcd} \Fild\scrC\ar[r,"\aush"] \ar[d, "\gr"'] &[15pt] \coCh(\scrC) \ar[d,"\undch"] \\ \prod_\bbZ\scrC\ar[r,"(\Sigma^n)_{n\in\bbZ}"] &[15pt] \prod_\bbZ\scrC; \end{tikzcd} that is, $\undch \circ \aush F \simeq \left(\gr^n F [n]\right)_{n\in\bbZ}$ naturally in $F\in\Fild\scrC$. The symmetric monoidal structure given in Definition <ref> is really a homotopy coherent generalization of the usual tensor product of cochain complexes, in a sense made precise by the following results. The functor $\undch \circ \aush$ is symmetric monoidal. This is a direct consequence of <cit.> together with Remark <ref> and $(\Sigma^n)_{n\in\bbZ}$ being an equivalence. Let $\scrC$ be a stable presentably symmetric monoidal $\infty$-category, and let $C$ and $D$ be elements of $\coCh(\scrC)$. Then, we have that $$(C \tensor D)^n \simeq \bigoplus_{s+t=n} C^s \tensor_\scrC D^t .$$ By definition of $\undch$ (see Lemma <ref>), $(C\tensor D)^n \simeq \undch(C\tensor D)^n$. The result follows from the following computation: $$\undch(C\tensor D)^n \simeq \undch\left(\caush\, L\left(\imp C \tensor_{\text{Day}} \imp D\right)\right)^n \simeq \undch\left(\aush\left(\imp C \tensor_{\text{Day}} \imp D\right)\right)^n$$ $$\simeq \left(\undch\aush\imp C \tensor_{\text{Day}} \undch\aush\imp D\right)^n \simeq \left(\undch C \tensor_{\text{Day}} \undch D\right)^n \simeq \bigoplus_{s+t=n} C^s \tensor_\scrC D^t.$$ We now analyze the interaction between the symmetric monoidal structures introduced in this section, and the t-structures introduced in Section <ref>. We start by recalling the following definition (see <cit.> and <cit.> for more details about the general theory of the interaction between t-structures and symmetric monoidal structures). Let $\scrC$ be a stably symmetric monoidal $\infty$-category equipped with a t-structure $(\scrC_{\leq 0}, \scrC_{\geq 0})$. The t-structure is said to be compatible with the symmetric monoidal structure if the following conditions hold: * The unit object for $\tensor$ lies in $\scrC_{\geq 0}$; * Given any pair of connective objects $X,Y\in\scrC_{\geq 0}$, their product $X\tensor Y$ lies in $\scrC_{\geq 0}$ as well. Conditions (<ref>) and (<ref>) guarantee that $\scrC_{\geq 0}$ inherits a symmetric monoidal structure from $\scrC$ such that the fully faithful inclusion $\scrC_{\geq 0}\hookrightarrow\scrC$ is a (strong) symmetric monoidal functor. Let $\scrC$ be a presentably symmetric monoidal $\infty$-category equipped with a t-structure that is compatible with the monoidal structure.
Then: * The Beilinson t-structure on $\Fild\scrC$ is compatible with the Day convolution product $\tensor_{\text{Day}}$; * The Beilinson t-structure on $\cFild\scrC$ is compatible with the completed Day convolution product $\widehat\tensor$; * If $\scrC$ is moreover stable, the pointwise t-structure on $\coCh\scrC$ is compatible with the coherent cochains tensor product $\tensor$. By Proposition <ref>, we have that $$\gr^i\mathbbm{1}_{\langle\leq 0\rangle}\simeq \begin{cases} 0 &\text{ for } i\ne 0;\\ \mathbbm{1} &\text{ for } i=0, \end{cases}$$ and thus the unit is Beilinson connective. Let us now consider $F, G\in\Fild\scrC_{\geq 0}$. By <cit.> we have that $$\gr^i(F \tensor_{\text{Day}} G)\simeq \bigoplus_{s+t=i} \gr^s F\tensor\gr^t G$$ from which we immediately have that $F\tensor_{\text{Day}}G$ lies in $(\Fild\scrC)_{\geq 0}$. This completes the proof of (<ref>). By Proposition <ref>, applying the localization functor $L$ has no effect on associated gradeds, hence (<ref>) immediately implies (<ref>). Finally, (<ref>) is a trivial consequence of (<ref>) and the definition of $\tensor$ on $\coCh\scrC$. It follows from <cit.> that Day convolution, and hence also the tensor product of coherent chain complexes, induce a symmetric monoidal structure on the hearts of the respective Beilinson t-structures. From Corollary <ref>, we have that the induced symmetric monoidal structure on $\coCh\left(\scrC^\heartsuit\right)$ is the usual one. § COHERENT COCHAIN COMPLEXES AND TODA BRACKETS In this section, we will have a closer look at the relation between Toda brackets and coherent cochain complexes. Our main result will be characterizing coherent cochain complexes in suitable pointed $\infty$-categories as being precisely the sequences of objects $$\cdots\to X^n \xto{f^n} X^{n+1}\xto{f^{n+1}} X^{n+2}\to\cdots$$ such that all pairs of composable morphisms compose to zero (i.e. for all $n\in\bbZ$ we have $f^{n+1}\circ f^n\simeq 0$) and all possible Toda brackets are compatibly trivial (see Remark <ref>). Let us first start with a short recollection about Toda brackets. Given a pointed $\infty$-category $\scrC$, let $f^0\colon X^0 \to X^1$, $f^1\colon X^1 \to X^2$ and $f^2\colon X^2 \to X^3$ be morphisms in $\scrC$, such that $f^1 f^0\simeq f^2 f^1 \simeq 0$, and let $\alpha\colon 0\Rightarrow f^1 f^0$ and $\beta\colon 0 \Rightarrow f^2 f^1$ be two choices of nullhomotopies: \begin{tikzcd} X^0 \ar[r, "f^0"] \ar[rr, bend right=40, "0"'] \ar[rr, phantom, bend right=20, "\rotatebox{90}{$\Rightarrow$} \alpha"] & X^1 \ar[r, "f^1"] & X^2 \ar[r, "f^2"] & X^3. \ar[from=ll, bend left=40, "0"] \ar[from=ll, phantom, bend left=20, "\rotatebox{90}{$\Leftarrow$} \beta"] \end{tikzcd} In such a situation, the two whiskerings $f^2 \alpha$ and $\beta f^0$ determine two paths $0\Rightarrow f^2 f^1 f^0$ in $\Map_*(X^0,X^3)$. By gluing them along their endpoints, we obtain a map (pointed at zero) $T\colon S^{1}\to\Map_*(X^0,X^3)$, which in turn determines a class $\langle f^2,f^1,f^0 \rangle_{(\alpha, \beta)} \in \pi_1(\Map(X^0,X^3))$. The homotopy class $\langle f^2,f^1,f^0 \rangle_{(\alpha, \beta)}$ is known as the Toda bracket determined by $\alpha$ and $\beta$. The customary approach to Toda brackets would be to not fix $\alpha$ and $\beta$ as part of the datum, and to instead define the Toda bracket to be a subset of $\pi_1(\Map(X^0,X^3))$, whose elements are given by all the possible choices of homotopy classes of paths $([\alpha], [\beta])$. According to this definition, the object referred to above as $\langle f^2,f^1,f^0 \rangle_{(\alpha, \beta)}$ is a specified element of this subset.
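For illustration, let us recall a classical example in $\scrC=\Sp$ (due to Toda; we record it only as a sanity check of the definitions, and omit the verification): consider the sequence $$\bbS[1]\xto{\ 2\ }\bbS[1]\xto{\ \eta\ }\bbS\xto{\ 2\ }\bbS$$ where $\eta$ denotes the stable Hopf map. Both consecutive composites vanish, as $2\eta\simeq 0$ in $\pi_1\bbS\simeq\bbZ/2$, so any choice of nullhomotopies $(\alpha,\beta)$ determines a class $$\langle 2,\eta,2 \rangle_{(\alpha,\beta)}\in\pi_1\left(\Map_*\left(\bbS[1],\bbS\right)\right)\simeq\pi_2\bbS\simeq\bbZ/2\{\eta^2\};$$ one can check that this class is $\eta^2$ for every such choice (in the customary approach, the bracket is the singleton $\{\eta^2\}$, the indeterminacy $2\cdot\pi_2\bbS$ being trivial).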
The notion can be generalized to longer sequences of maps, provided that the Toda brackets of the shorter sub-sequences are nullhomotopic; to illustrate how the generalization works, let us consider the case of a sequence of length 4 \begin{tikzcd} X^0 \ar[r, "f^0"] & X^1 \ar[r, "f^1"] & X^2 \ar[r, "f^2"] & X^3 \ar[r, "f^3"] & X^4. \end{tikzcd} and choices of nullhomotopies $\alpha$, $\beta$ and $\gamma$ for all consecutive pairs of maps, for which $\langle f^2,f^1,f^0 \rangle_{(\alpha, \beta)}=0$ and $\langle f^3,f^2,f^1 \rangle_{(\beta, \gamma)}=0$; in such a situation, we obtain a pair of nullhomotopies for a pair of maps $X^0\to\Sigma X^4$, one for the composite \begin{tikzcd}[column sep=6em] X^0 \ar[r, "f^0"] & X^1 \ar[r, "{\langle f^3,f^2,f^1 \rangle_{(\beta, \gamma)}}"] & \Sigma X^4 \end{tikzcd} and the other one for the composite \begin{tikzcd}[column sep=6em] X^0 \ar[r, "{\langle f^2,f^1,f^0 \rangle_{(\alpha, \beta)}}"] & \Sigma X^3 \ar[r, "\Sigma f^3"] & \Sigma X^4; \end{tikzcd} putting together these two as before, we obtain a pointed map $S^1\to\Map_*(X^0,\Sigma X^4)$, and thus a class $\langle f^3,f^2,f^1,f^0 \rangle\in\pi_2(\Map_*(X^0,X^4))$, the 4-fold Toda bracket of the sequence[again, we are working with a coherent notion, depending also on the choices of all the five nullhomotopies involved]. This idea clearly generalizes to longer sequences of maps, provided that the shorter subsequences have nullhomotopic brackets. Although it is possible to work with these notions using the approach sketched above, in the rest of the section we will use a slightly different perspective to define and generalize Toda brackets. Our approach will be to define these classes by means of the actions of certain algebra objects in graded pointed spaces; the main advantage for us will be the ease of working at once with all the possible $n$-fold Toda brackets in a $\bbZ$-indexed sequence (provided that all the possible $(n-1)$-fold Toda brackets are compatibly trivial); of course, one can recover the case of finite sequences by considering a $\bbZ$-indexed sequence where only finitely many objects are non-zero (see Definition <ref> for the details). One other pleasant feature of our approach will be to obtain an inherently coherent notion of Toda brackets (that is, encoding also choices for all the involved homotopies, rather than defining the notion up to a coset of “indeterminacies”). Our strategy for proving that coherent cochain complexes are precisely those $\bbZ$-indexed sequences of morphisms where consecutive maps compose to zero and all Toda brackets are coherently nullhomotopic requires a few steps: in <ref>, we will prove that for a pointed $\infty$-category $\scrC$ (which we will for simplicity assume to be also presentable) both $\Fun(\bbZ\op,\scrC)$[in the present section we will use the notation $\Fun(\bbZ\op,\scrC)$ instead of the previously introduced $\Fild(\scrC)$ to stress that here we want to think not of structured objects of independent interest, but of sequences of morphisms upon which we intend to put relations; in particular, the reader should keep in mind that the appearance of this $\infty$-category here has nothing to do with the equivalence of Theorem <ref>] and $\coCh(\scrC)$ are $\infty$-categories of modules for suitable $\bbE_1$-algebra objects in $\Gr\spaces_*$.
Then, in <ref> we will construct a sequence of $\bbE_1$-algebras in $\Gr\spaces_*$ \begin{equation}\label{eq-sequence-of-algebras} R_1\to R_2\to R_3 \to \cdots \end{equation} such that the $\infty$-category of modules over $R_1$ will be equivalent to $\Fun(\bbZ\op,\scrC)$, the $\infty$-category of modules over $\colim_n R_n$ will be equivalent to $\coCh(\scrC)$, and such that the sequence induced by (<ref>) on homology will be equivalent to the resolution of $\Lambda(e)$ described in Theorem <ref>, with $e$ of degree 1. Modules over $R_n$ for a fixed $n\geq 2$ will be sequences where all pairwise composable morphisms are nullhomotopic, all possible $m$-fold Toda brackets for $m\leq n$ are defined and trivial, and all the relevant nullhomotopies are encoded in the module structure (see Remark <ref> for the details). In order to prove the existence of the sequence (<ref>), we will use the results of Appendix <ref>. As it turns out, the construction of the above sequence up to $R_3$ is slightly more subtle than its extension to all the other $R_n$'s, hence we will first construct the sequence inductively for $n\geq 4$, and defer the construction of the beginning of the sequence to <ref>. §.§ Algebras in graded pointed spaces The main goal of this section is to prove that, given a pointed $\infty$-category $\scrC$, both $\Fun(\bbZ\op,\scrC)$ and $\coCh(\scrC)$ can be expressed as $\infty$-categories of modules over suitably defined $\bbE_1$-algebras in graded pointed spaces. For simplicity, we will assume that $\scrC$ is also presentable, but with some extra effort the results can be generalized further, using methods along the lines of those employed in Section <ref>. The stable analogue of the following proposition appeared as <cit.>. Let $\scrC$ be a pointed $\infty$-category. The following data are equivalent: * self-equivalences $\scrC\to\scrC$; * (left or right) actions of $\bbZ^\delta$ on $\scrC$; * monoidal functors $\bbZ^\delta\to\Fun(\scrC,\scrC)$; * monoidal functors $\bbZ^\delta\to\Fun^{\text{Rex}}(\scrC,\scrC)$; * right exact monoidal functors $\Prefinpt{\bbZ^\delta}\to\Fun^{\text{Rex}}(\scrC,\scrC)$; * (left or right) actions of $\Prefinpt{\bbZ^\delta}$ on $\scrC$ such that the action map commutes with finite colimits in each variable. If $\scrC$ is pointed presentable, then the above are equivalent to: * left adjoint monoidal functors $\Prept{\bbZ^\delta}\to\Fun^{\mathrm{L}}(\scrC,\scrC)$; * actions of $\Prept{\bbZ^\delta}$ on $\scrC$ such that the action map commutes with small colimits in each variable. Here $\Prept{\bbZ^\delta}$ and $\Prefinpt{\bbZ^\delta}$ are defined as in <ref>. The equivalence (<ref>) $\Leftrightarrow$ (<ref>) follows from <cit.>: loc. cit. gives an equivalence between monoidal functors $\bbZ^\delta\to\Fun(\scrC,\scrC)$ and $\Aut(\scrC)$, where the monoidal structure on $\Fun(\scrC,\scrC)$ is given by composition, and $\Aut(\scrC)\subset\Fun(\scrC,\scrC)$ denotes the full subcategory spanned by self-equivalences. (<ref>)$\Leftrightarrow$(<ref>) and (<ref>)$\Leftrightarrow$(<ref>) are just reformulations of the definition of action of a symmetric monoidal $\infty$-category. (<ref>)$\Leftrightarrow$(<ref>) holds trivially, as $\bbZ^\delta$ is a discrete category. (<ref>)$\Leftrightarrow$(<ref>) follows from Proposition <ref> and the monoidality of the pointed Yoneda embedding (see <cit.>). (<ref>)$\Leftrightarrow$(<ref>) and (<ref>)$\Leftrightarrow$(<ref>) follow at once from Proposition <ref>. In what follows, unless otherwise specified, we will implicitly work with left actions whenever we apply Proposition <ref>. Recall that, in the terminology of [24], a left action of a monoidal $\infty$-category $\scrC$ on some $\infty$-category $\scrM$ is equivalent to exhibiting $\scrM$ as left-tensored over $\scrC$, and to exhibiting $\scrM$ as a left $\scrC$-module in (a suitably sized) $\infty$-category of $\infty$-categories (see <cit.>).
Let $\scrC$ be a pointed $\infty$-category. Then, precomposition with $-1\colon\bbZ\to\bbZ$ induces an automorphism $$(-)\{1\}\colon\coCh\scrC\to\coCh\scrC \ \colon C^\bullet\mapsto C\{1\}^\bullet \simeq C^{\bullet -1};$$ by virtue of Proposition <ref>, this endows $\coCh\scrC$ with the structure of an $\infty$-category left-tensored over $\Prefinpt{\bbZ^\delta}\simeq\Gr(\spaces_*^{\text{fin}})$ (and when $\scrC$ is also presentable, the left-tensoring extends over $\Prept{\bbZ^\delta}\simeq\Gr\spaces_*$). We will use the notation: $$(-)\{n\}\colon\coCh\scrC\to\coCh\scrC \quad \forall n\in\bbZ$$ to denote the (right exact) endofunctor induced by $n$.

Let $\scrC$ be a pointed $\infty$-category. Similarly to the previous example, precomposition with $-1\colon\bbZ\op\to\bbZ\op$ induces an automorphism $$(-)\{1\}\colon\Fild\scrC\to\Fild\scrC;$$ hence, $\Fild\scrC$ is also left-tensored over $\Gr(\spaces_*^{\text{fin}})$ (and when $\scrC$ is also presentable, the left-tensoring extends over $\Gr\spaces_*$). Also in this case, we will use the notation: $$(-)\{n\}\colon\Fild\scrC\to\Fild\scrC \quad \forall n\in\bbZ$$ to denote the automorphism induced by $n$.

Given any $n\in\bbN$ and any $t\in\bbZ$, we will denote by $S^{n,t}\in\Gr\spaces_*$ the graded pointed space consisting of a copy of $S^n$ in degree $t$, and copies of $\pt$ in all other degrees.

Let $C\in\coCh\scrC$ be a coherent cochain complex in a complete[or, more generally, in a pointed $\infty$-category where all the relevant right Kan extensions exist and are pointwise], pointed $\infty$-category. The functor $-\tensor C \colon \Gr\spaces_* \to \coCh\scrC$ induced by the left-tensoring of Example <ref> admits a right adjoint, denoted $\innmap{}_*(C,-)$, whose value on any $D\in\coCh\scrC$ is pointwise given by $$\innmap{}_*(C,D)^n \simeq\ptMap\left(C\{n\},D\right).$$ By unraveling the definitions, the $\Gr\spaces_*$-action on $\Fun^{\text{Rex}}(\coCh\scrC,\coCh\scrC)$ is obtained by Kan extension of the functor $\{\}\colon\bbZ^\delta\to\Fun^{\text{Rex}}(\coCh\scrC,\coCh\scrC)\colon n\mapsto\{n\}$ along the functor $y\colon\bbZ^\delta\to\Gr\spaces_*\colon n\mapsto S^{0,n}$. As the Kan extension is pointwise, $$-\tensor C\simeq \Lan_y(\ev_C \circ\{\})$$ where $\ev_C\colon\Fun^{\text{Rex}}(\coCh\scrC,\coCh\scrC)\to\coCh\scrC$ denotes the evaluation at $C$ functor. In particular, if we put $F\coloneqq\ev_C\circ\{\}$, we have that $-\tensor C$ is, in the language of Appendix <ref>, the pointed $F$-realization functor. Thus, it admits a right adjoint, given by the pointed $F$-nerve functor $\nerve_F^{\text{pt}}$, which by Proposition <ref> is given by $$\nerve_F^{\text{pt}}(D)\simeq \ptMap(F(-),D)\simeq \ptMap\left(C\{-\},D\right)$$ as desired.

We will refer to the right adjoint functor defined in Proposition <ref> as the graded pointed mapping space functor.

Let $\scrC$ be a pointed presentable $\infty$-category. Then, there exists an $\bbE_1$-algebra $A$ in graded pointed spaces such that the $\infty$-category of coherent cochain complexes in $\scrC$ is equivalent to the $\infty$-category of right $A$-modules in graded objects of $\scrC$: $$\coCh(\scrC)\simeq\operatorname{RMod}_A(\Gr\scrC).$$ Moreover, the underlying graded pointed space of $A$ is given by $S^{0,0}\amalg S^{0,1}$.

Let us first notice that the claim for a generic pointed presentable $\scrC$ can be deduced from the case $\scrC\simeq\spaces_*$, by means of Lurie's tensor product (see <cit.>): by <cit.>, since preserving the zero object is a property, the equivalence above induces an equivalence $\coCh(\scrC)\simeq\coCh(\spaces_*)\tensor\scrC$. Now, by <cit.>, $$\operatorname{RMod}_A(\Gr\spaces_*)\tensor_{\Gr\spaces_*}\scrC \simeq \operatorname{RMod}_A(\Gr\scrC);$$ it thus suffices to prove our claim for the case of pointed spaces. By Example <ref>, together with Proposition <ref>, we have that $\coCh\spaces_*$ is left-tensored over $\Gr\spaces_*$ (see Remark <ref>).
By <cit.>, it is thus enough to prove that:
* $\coCh(\spaces_*)$ admits geometric realizations of simplicial objects.
* The action map $\Gr\spaces_* \times \coCh(\spaces_*) \to \coCh(\spaces_*)$ preserves geometric realizations of simplicial objects.
* There exists an $M\in\coCh(\spaces_*)$ such that the functor $-\tensor M \colon \Gr\spaces_* \to \coCh(\spaces_*)$ admits a right adjoint, denoted $\underline{\Map}{}_*(M,-)$.
* The functor $\underline{\Map}{}_*(M,-)$ preserves geometric realizations of simplicial objects.
* The functor $\underline{\Map}{}_*(M,-)$ is conservative.
* For every coherent cochain complex $C$ and every graded pointed space $X$, the map $$X\tensor \underline{\Map}{}_*(M,C)\tensor M \xto{X\tensor\eps_C} X\tensor C$$ is adjoint to an equivalence $$X\tensor \underline{\Map}{}_*(M,C)\to\underline{\Map}{}_*(M,X\tensor C).$$

We take $M$ to be the coherent cochain complex consisting of a copy of $S^0$ sitting in degree $0$, a copy of $S^0$ sitting in degree $1$, copies of $\pt$ elsewhere, and the identity as its only possibly nontrivial differential. (<ref>) follows from the cocompleteness of $\coCh(\spaces_*)$. (<ref>) follows from the presentability of $\spaces_*$ and Proposition <ref>. (<ref>) and (<ref>) hold for any choice of $M$, and follow from Proposition <ref>. Using the explicit description for the right adjoint given in Proposition <ref>, we get that for our choice of $M$, $\underline{\Map}{}_*(M,C)^n\simeq\ptMap(M\{n\},C)$ is equivalent to the space of choices of $f$ and $g$ making the square
\begin{tikzcd} S^0 \ar[r, "\id"]\ar[d, "f"'] & S^0 \ar[d, "g"] \\ C^n \ar[r, "\partial^n"]& C^{n+1} \end{tikzcd}
commute; this space is in turn equivalent to $\ptMap(S^0,C^n)\simeq C^n$, and thus the functor $\underline{\Map}{}_*(M,-)$ is equivalent to the forgetful functor $\undch\colon\coCh\spaces_*\to\Gr\spaces_*$ introduced in Lemma <ref>, implying (<ref>). For (<ref>), by the definition of the adjoint map, the adjoint map of interest can be factored as $$ X\tensor\underline{\Map}{}_*(M,C) \xto{\eta_{X\tensor\underline{\Map}{}_*(M,C)}} \underline{\Map}{}_*(M,X\tensor \underline{\Map}{}_*(M,C)\tensor M) \xto{\underline{\Map}{}_*(M,X\tensor\eps_C)} \underline{\Map}{}_*(M,X\tensor C).$$ By a pointwise check, we have that $\eta_{X\tensor\underline{\Map}{}_*(M,C)}\simeq \underline{\Map}{}_*(M,\eta_{X\tensor C})$ and $X\tensor\eps_C\simeq\eps_{X\tensor C}$; by the conservativity of $\underline{\Map}{}_*(M,-)$, it is sufficient to show that the composite $$X\tensor C\xto{\eta_{X\tensor C}} \underline{\Map}{}_*(M,X\tensor C)\tensor M \xto{\eps_{X\tensor C}} X\tensor C$$ is an equivalence; but, as the latter is one of the triangle identities, the desired condition holds. Finally, in the proof of <cit.> it is shown that $A$ can be identified with the internal endomorphism object $\underline{\Map}{}_*(M,M)$; thence we get the desired characterization for the graded pointed space underlying $A$.

As, by <cit.>, the stabilization functor is symmetric monoidal on presentable $\infty$-categories, we can stabilize Theorem <ref> to obtain an analogous equivalence for any stable presentable $\scrD$.

Let $\scrC$ be a pointed presentable $\infty$-category. Then, the $\infty$-category of filtered objects of $\scrC$ is equivalent to the $\infty$-category of right $\rr{Free}_{\bbE_1}(S^{0,1})$-modules in graded objects: $$\Fild(\scrC)\simeq\operatorname{RMod}_{\rr{Free}_{\bbE_1}(S^{0,1})}(\Gr\scrC).$$

Let us first prove the claim for the case $\scrC\simeq\spaces_*$. This is basically an adaptation of <cit.> to the case of pointed spaces.
First of all, notice that, as the functor $u\colon\Fun(\bbZ\op,\spaces_*)\to\Gr\spaces_*$ given by precomposition with $\bbZ^\delta\to\bbZ\op\colon n\mapsto -n$ is symmetric monoidal, it induces a functor $$\theta\colon \Fun(\bbZ\op,\spaces_*)\simeq\Mod_{\mathbbm{1}_{\text{Day}}}\left(\Fun(\bbZ\op,\spaces_*)\right)\to\Mod_{u\mathbbm{1}_{\text{Day}}}\left(\Gr\spaces_*\right);$$ by Proposition <ref> together with <cit.>, we have that $u\mathbbm{1}_{\langle\leq 0\rangle}$ is an $\bbE_\infty$-algebra whose underlying $\bbE_1$-algebra is equivalent to $\rr{Free}_{\bbE_1}(S^{0,1})$. It is thus sufficient to show that the functor $\theta$ is an equivalence.

Let us start with full faithfulness. That is, we want to prove that the maps $$\phi_{X,Y}\colon\ptMap(X,Y)\to\ptMap(\theta X, \theta Y)$$ are all equivalences. As the functor $\phi_{-,Y}\colon\Fun(\bbZ\op,\spaces_*)\to \Fun(\Delta^1,\spaces_*)$ sending a filtered pointed space $X$ to the map $\phi_{X,Y}$ sends colimits to limits, the full subcategory spanned by those $X$ such that $\phi_{X,Y}$ is an equivalence is a pointed subcategory of $\Fun(\bbZ\op,\spaces_*)$ closed under colimits. Thus, as $\Fun(\bbZ\op,\spaces_*)$ is generated under colimits by objects of the form $\ptYo_n$ for $n\in\bbZ$ (see Proposition <ref>), it suffices to show that $\phi_{X,Y}$ is an equivalence for such generators; this is indeed the case, as for $X\simeq\ptYo_n$ both mapping spaces are identified by the pointed Yoneda lemma. Essential surjectivity now follows from the observation that the essential image of $\theta$ is closed under colimits, as $\theta$ is fully faithful and commutes with colimits.

Similarly to the proof of Theorem <ref>, the claim for general pointed presentable $\infty$-categories follows from the properties of Lurie's tensor product. By <cit.> and by <cit.>, $$\operatorname{RMod}_{\rr{Free}_{\bbE_1}(S^{0,1})}(\Gr\spaces_*)\tensor_{\Gr\spaces_*}\scrC \simeq \operatorname{RMod}_{\rr{Free}_{\bbE_1}(S^{0,1})}(\Gr\scrC);$$ we thence have the desired claim.

We can describe more explicitly the equivalence of Theorem <ref>. Let us first observe that the action of the free generator of $R_1 \coloneqq \rr{Free}_{\bbE_1}(S^{0,1})$ determines the action of all the other spaces of $R_1$. By unraveling the definitions, specifying a map $t\colon S^{0,1}\tensor X\to X$ on a graded object $X$ is equivalent to specifying maps $X^n\to X^{n+1}$ for all $n\in\bbZ$. Hence, given an $R_1$-module, the associated filtered object is the one having as maps between consecutive objects the maps determined by $t$, and the actions of the $s$-th powers of the generator, $t^s\colon S^{0,s}\tensor X\to X$, correspond to choices for all the possible $s$-fold composites of the structure maps of the resulting filtered object (i.e. maps $X^n\to X^{n+s}$ for all $n\in\bbZ$). This description also makes clear the behavior of the functor going in the opposite direction.

§.§ The resolution sequence

We will now move to the construction of the sequence (<ref>). By Theorem <ref>, we can take $R_1 \coloneqq \rr{Free}_{\bbE_1}(S^{0,1})$. We will define the first map of the sequence to be the one obtained by killing the square of the free generator of $R_1$. By <cit.>, we have $\left(\rr{Free}_{\bbE_1}(S^{0,1})\right)^n\simeq S^0$ for all $n\geq 0$.
In particular, the inclusion of pointed spaces $S^{0,2}\to\rr{Free}_{\bbE_1}(S^{0,1})$ induces a map of $\bbE_1$-algebras $$t^2\colon\rr{Free}_{\bbE_1}(S^{0,2})\to\rr{Free}_{\bbE_1}(S^{0,1}).$$ We define $R_2$ to be the $\bbE_1$-algebra in graded pointed spaces obtained as the pushout of $t^2$ along the augmentation map for the free algebra $\rr{Free}_{\bbE_1}(S^{0,2})$:
\begin{equation}\label{eq-r2} \begin{tikzcd} \rr{Free}_{\bbE_1}(S^{0,2})\ar[r]\ar[d,"t^2"']& S^{0,0} \ar[d] \\ \rr{Free}_{\bbE_1}(S^{0,1})\ar[r]& R_2 \end{tikzcd} \end{equation}
We pick the bottom horizontal map $\rr{Free}_{\bbE_1}(S^{0,1}) \simeq R_1 \to R_2$ to be the first map in the sequence (<ref>). Notice that, as passing to homology preserves colimits, the pushout square (<ref>) induces on homology the square of Theorem <ref> for the case $n=2$.

In Lemma <ref>, we will show that there exists a map $S^{1,3}\to R_2$, inducing on homology the map $\mathrm{Free}_{\bbE_1}(r_3)\to \Lambda^{(2)}$ of Theorem <ref>. Given such a map, we can consider the pushout diagram of $\bbE_1$-algebras
\begin{tikzcd} \rr{Free}_{\bbE_1}(S^{1,3})\ar[r]\ar[d]& S^{0,0} \ar[d] \\ R_2\ar[r]& R_3 \end{tikzcd}
and pick the bottom horizontal map to be the second map in the sequence (<ref>). Similarly, in Lemma <ref>, we will show that there exists a map $S^{2,4}\to R_3$, inducing on homology the map $\mathrm{Free}_{\bbE_1}(r_4)\to \Lambda^{(3)}$ of Theorem <ref>.

Given such maps, we can construct the rest of the sequence by induction. Let us assume we are given a sequence $R_1\to R_2\to\cdots\to R_{n}$ with $n\geq 3$, such that every $R_j$ comes equipped with a map $S^{j-1,j+1}\to R_j$ inducing on homology the map $\mathrm{Free}_{\bbE_1}(r_{j+1})\to \Lambda^{(j)}$ of Theorem <ref>, and that each map $R_{j-1}\to R_{j}$ for $2\leq j\leq n$ fits in a commutative square of $\bbE_1$-algebras
\begin{tikzcd} \rr{Free}_{\bbE_1}(S^{j-2,j})\ar[r]\ar[d]& S^{0,0} \ar[d] \\ R_{j-1}\ar[r]& R_{j}. \end{tikzcd}
Our goal is to produce inductively an $\bbE_1$-algebra $R_{n+1}$ and a map $S^{n,n+2}\to R_{n+1}$ inducing on homology the map $\mathrm{Free}_{\bbE_1}(r_{n+2})\to \Lambda^{(n+1)}$ of Theorem <ref>. Let us denote by $R_{n+1}$ the object obtained by pushing out the map $\rr{Free}_{\bbE_1}(S^{n-1,n+1})\to R_n$ we have by inductive hypothesis along the augmentation map of the free algebra:
\begin{tikzcd} \rr{Free}_{\bbE_1}(S^{n-1,n+1})\ar[r]\ar[d]& S^{0,0} \ar[d] \\ R_n\ar[r]& R_{n+1}. \end{tikzcd}
We pick the bottom map of the above square to be the one for the sequence (<ref>). By passing to homology, we get precisely the pushout square of Theorem <ref>. By Lemma <ref>, we have an element $r_{n+2}$ of bidegree $(n+2,n)$ in $H(R_{n+1})$. By Hurewicz's theorem, the homology class $r_{n+2}$ determines in an essentially unique way a map $S^{n,n+2}\to R_{n+1}$, which we will denote $\langle t\rangle^{n+2}$ and call the $(n+2)$-fold Toda power of $t$. This completes the induction step needed to extend the sequence for $n\geq 4$.

We can now understand the relations between $R_n$-modules for varying $n$:
* It follows from Construction <ref> together with Remark <ref> that the $R_1$-module associated to a filtered space $X$ can be given the structure of an $R_2$-module if and only if all the composites of pairs of consecutive maps of $X$ are nullhomotopic, and specifying the structure of an $R_2$-module is equivalent to choosing a nullhomotopy for each such pair.
* It follows from Construction <ref> together with Remark <ref> that given any $R_{n-1}$-module $C$, it can be given the structure of an $R_n$-module if and only if the map $\langle t\rangle^n\colon S^{n-2,n}\to\innmap{}_*(C,C)$ of Construction <ref> [or, in the case of $n=3$ (resp. $n=4$), the map $\langle t \rangle^3$ defined in Construction <ref> (resp. the map $\langle t\rangle^4$ defined in Construction <ref>)] is trivial. By unraveling the definitions, we see that if we denote by $X$ the filtered space associated to $C$ (obtained by restriction of scalars along $R_1\to R_{n-1}$), the components of $\langle t\rangle^n$ are maps $\Sigma^{n-2}X^m\to X^{m+n}$ for all $m\in\bbZ$; specifying the structure of an $R_n$-module on $C$ is equivalent to choosing nullhomotopies for each such map.

Let $X^0\to X^1\to\cdots\to X^{n}$ be a sequence of pointed spaces and maps between them, and assume that the filtered pointed space $X$ obtained by extending the given sequence by zeroes can be given the structure of an $R_{n-1}$-module. Let us moreover fix one such structure and denote the resulting object by $C_X$. We define the $n$-fold Toda bracket of $C_X$ to be the class given by the only nontrivial component of the map $\langle t\rangle^n\colon S^{n-2,n}\to\innmap{}_*(C_X,C_X)$ of Construction <ref>, or, in the case of $n=3$ (resp. $n=4$), of the map $\langle t \rangle^3$ defined in Construction <ref> (resp. the map $\langle t\rangle^4$ defined in Construction <ref>).

Given any filtered pointed space $X\in\Fun(\bbZ\op,\spaces_*)$, we say it is a naive cochain complex if its corresponding $R_1$-module can be given a structure of $R_2$-module. Given a naive cochain complex $X$, we say that it has uniformly trivial $n$-fold Toda brackets if it can be recursively given the structure of an $R_n$-module after choosing $R_m$-module structures for all $m<n$. As all pointed $\infty$-categories are canonically enriched over pointed spaces, the definition of Toda brackets generalizes in an obvious way to sequences of maps in an arbitrary pointed $\infty$-category. In particular, it extends to sequences of maps in any stable $\infty$-category.

The $\bbE_1$-algebra $A$ of Theorem <ref> is equivalent to the colimit $\colim_n R_n$ of the sequence (<ref>); i.e. for $\scrC$ pointed presentable with a special extremal separator, $$\coCh(\scrC)\simeq\operatorname{RMod}_{\colim_{n} R_n}(\Gr\scrC).$$ It follows from the resolution sequence (<ref>), together with the associated sequence in homology, that the underlying graded space of $\colim_n R_n$ is equivalent to $S^{0,0}\amalg S^{0,1}$, as is also the underlying space of the $\bbE_1$-algebra $A$ of Theorem <ref>. As $S^{0,0}\amalg S^{0,1}$ is 0-truncated, the $\bbE_1$-algebra structures on it are determined by the associative ring structures on the discrete object $S^{0,0} \amalg S^{0,1}$ (living in the 1-category $\Gr(\mathbf{Top}_*)$). But, by a direct check, there exists only one nontrivial associative ring structure on $S^{0,0} \amalg S^{0,1}$. As $\colim_n R_n$ is augmented over $S^{0,0}$, it cannot have trivial multiplication; likewise, $A$ cannot be trivial by Theorem <ref>. Therefore $A$ and $\colim_n R_n$ must be equivalent as $\bbE_1$-algebras.

The above proposition can be informally rephrased by saying that the datum of a coherent cochain complex is equivalent to the datum of a naive cochain complex together with choices of all the nullhomotopies of pairs of consecutive maps, and recursively defined choices of nullhomotopies for all possible $n$-fold Toda brackets, for all $n\geq 3$.

§.§ The resolution sequence for $n\leq 3$

Let $\scrC$ be a pointed $\infty$-category.
Let $X=(X^n)_{n\in\bbZ}\in\Gr\scrC$. Given any $s\in\bbZ$ and any $t\in\bbN$, we will denote by $\Omega^{t,s}$ the composite functor $\Omega_\scrC^t\circ\{-s\} \simeq \{-s\}\circ\Omega_\scrC^t$. That is, $\left(\Omega^{t,s}X\right)^n\simeq\Omega^t\left(X^{n+s}\right)$. Notice that, for $S^{0,0}\in\Gr\spaces_*$, we have $\Omega^{0,s}\left(S^{0,0}\right)\simeq S^{0,-s}$.

Let $X\in\coCh\spaces_*$; we have that:
* For all $t\in\bbN$, $s\in\bbZ$, $$\innmap{}_*(S^{t,s},X)\simeq(\Omega^t X)\{-s\}\simeq \Omega^{t,s}X;$$
* For all $t\in\bbN$, $s\in\bbZ$, $$S^{t,s}\tensor_{\text{Day}}X \simeq (\Sigma^t X)\{s\}.$$

The structure map $\rr{Free}_{\bbE_1}(S^{0,1})\to R_2$ defined in Construction <ref> determines a degree $1$ square-zero element $t\in(\pi_0(R_2))^1$, whose action induces the following diagram of right $R_2$-modules:
\begin{tikzcd} R_2 \ar[r, "t"] \ar[rr, bend right=40, "0"'] \ar[rr, phantom, bend right=20, "\rotatebox{90}{$\Rightarrow$}"] & \Omega^{0,1}R_2 \ar[r, "t"] & \Omega^{0,2}R_2 \ar[r, "t"] & \Omega^{0,3}R_2. \ar[from=ll, bend left=40, "0"] \ar[from=ll, phantom, bend left=20, "\rotatebox{90}{$\Leftarrow$}"] \end{tikzcd}

In the situation of Remark <ref>, let us denote by $F_2$ the fiber (in $R_2$-modules) of multiplication by $t$, considered as a map $R_2\to\Omega^{0,1}R_2$. Choosing a nullhomotopy $\alpha\colon 0\Rightarrow t^2$ is equivalent to choosing a factorization of $t$ through $\Omega^{0,1}F_2$. By the following diagram (where all squares are Cartesian)
\begin{tikzcd} R_2 \ar[r] \ar[rd, dotted, "\langle t\rangle^3"'] & \Omega^{0,1}F_2 \ar[r]\ar[d] & \Omega^{0,1}R_2 \ar[d]\ar[from=ll, bend left=40, "t"]& \\ & \Omega^{1,3} R_2\ar[r]\ar[d] & \Omega^{0,2}F_2\ar[d]\ar[r] & \pt\ar[d] \\ & \pt \ar[r]& \Omega^{0,2}R_2\ar[r]& \Omega^{0,3}R_2 \\ \end{tikzcd}
we see that there exists a map $R_2\to\Omega^{1,3}R_2$ induced by such a factorization. We call this map the 3-fold Toda power of $t$, and denote it by $\langle t \rangle^3$. With a little abuse of terminology, we will reserve the same name for the map $S^{1,3}\to\innmap{}_*(R_2,R_2)$ obtained by adjoining over the composite adjunctions
$$\cfrac{R_2\to\innmap{}_*(S^{1,3},R_2)} {\cfrac{R_2\tensor S^{1,3}\to R_2} {S^{1,3}\to\innmap{}_*(R_2,R_2)}}$$
As $R_2$ is a free $R_2$-module of rank $1$, we have that $\innmap{}_*(R_2,R_2)\simeq R_2$, hence the triple Toda power can be equivalently expressed as a map $S^{1,3}\to R_2$.

The map $S^{1,3}\to R_2$ constructed above induces on homology the map $\rr{Free}_{\bbE_1}(r_3)\to \Lambda^{(2)}$ defined in Theorem <ref>. Upon passing to homology degree-wise, using that $H(R_2)\simeq \Lambda^{(2)}$ in the notation of Theorem <ref>, and that, by Remark <ref>, we can use $\wt{P}_2 \coloneqq \bbZ\langle e_1,e_2\rangle/(\partial e_2 = e_1^2)\simeq \Lambda^{(2)}$ as a representative for the homology in the $1$-category of dg-modules, the defining diagram can be represented by the strictly commuting diagram of dg-modules
\begin{tikzcd}[ampersand replacement =\&] \wt P_2 \ar[r, "\binom{e_1}{- e_2}"] \ar[rd] \& \Sigma^{0,-1}\wt P_2 \oplus \Sigma^{-1,-2}\wt P_2 \ar{r}{\begin{pmatrix} 1 & 0 \end{pmatrix}} \ar{d}{\begin{pmatrix}e_2 & e_1\end{pmatrix}} \& \Sigma^{0,-1}\wt P_2 \ar[d, "\binom{e_1}{- e_2}"]\& \\[3.5em] \& \Sigma^{-1,-3}\wt P_2\ar[r,"\binom{0}{1}"]\ar[d] \& \Sigma^{0,-2}\wt P_2\oplus\Sigma^{-1,-3}\wt P_2 \ar{d}{\begin{pmatrix}1 & 0\end{pmatrix}}\ar[r] \& 0\ar[d] \\[3.5em] \& 0 \ar[r]\& \Sigma^{0,-2}\wt P_2\ar[r,"e_1"]\& \Sigma^{0,-3}\wt P_2 \\ \end{tikzcd}
(where the negative signs are due to the direction of the homotopy, going from $0$ to $e_2$).
Thus, the diagonal map is given by the action of $e_1e_2-e_2e_1$, which by Remark <ref> represents exactly the homology class picked out by $r_3$ in the defining diagram for $\Lambda^{(3)}$.

Let us now consider the pushout square
\begin{tikzcd} \rr{Free}_{\bbE_1}(S^{1,3})\ar[r]\ar[d]& S^{0,0} \ar[d] \\ R_2\ar[r]& R_3 \end{tikzcd}
where the vertical map is the one induced by the 3-fold Toda power constructed above. We pick the bottom horizontal map to be the second map in the sequence (<ref>). By the commutativity of the following diagram of $R_3$-modules (where we know that $\langle t\rangle^3\simeq 0$ by the defining pushout for $R_3$)
\begin{tikzcd} R_3 \ar[r] \ar[rd, "t"'] & \Omega^{0,1}F_3 \ar[r]\ar[d] & \Omega^{1,3}R_3 \ar[d]\ar[from=ll, bend left=40, "0"] \\ & \Omega^{0,1}R_3\ar[r] & \Omega^{0,2}F_3 \\ \end{tikzcd}
we see that $t$ factors through $\Omega^{0,1}F_3$. If we set $G_3\coloneqq\fib(\Omega^{0,1}F_3\to \Omega^{1,3} R_3)$, we have the following diagram (where all squares are Cartesian)
\begin{tikzcd} R_3 \ar[r] \ar[rd, dotted, "\langle t\rangle^4"'] & L_3 \ar[r]\ar[d] & \Omega^{0,1}R_3 \ar[d]\ar[from=ll, bend left=40, "t"]& \\ & \Omega^{2,4} R_3\ar[r]\ar[d] & \Omega^{0,1} G_3\ar[d]\ar[r] & \pt\ar[d] \\ & \pt \ar[r]& \Omega^{0,2}F_3\ar[r]& \Omega^{1,4} R_3 \\ \end{tikzcd}
and we define the composite $$R_3\to L_3\to \Omega^{2,4}R_3$$ to be the 4-fold Toda power of $t$, denoted $\langle t\rangle^4$. As for the 3-fold case, the datum of a 4-fold Toda bracket uniquely determines a map $S^{2,4}\to R_3$ that we will refer to by the same name.

The map $S^{2,4}\to R_3$ constructed above induces on homology the map $\rr{Free}_{\bbE_1}(r_4)\to \Lambda^{(3)}$ defined in Theorem <ref>. As in Lemma <ref>, we can consider the diagram of dg-modules obtained from the defining diagram of the $4$-fold Toda power by passing to homology:
\begin{tikzcd}[ampersand replacement =\&, row sep=4em, column sep=1.3em] \wt{P}_3 \ar{d}{\begin{pmatrix}e_1 \\ - e_2 \\ e_3\end{pmatrix}} \ar[dd, bend right=80] \&\& \\ \Sigma^{0,-1}\wt P_3 \oplus \Sigma^{0,-2}\wt P_2\oplus\Sigma^{-1,-3}\wt P_2 \ar{r}{\begin{pmatrix}1 & 0 & 0\end{pmatrix}} \ar{d}{\begin{pmatrix}e_3 & e_2 & e_1\end{pmatrix}} \& \Sigma^{0,-1}\wt P_3 \ar{d}{\begin{pmatrix}e_1 \\ - e_2 \\ -e_3\end{pmatrix}}\& \\ \Sigma^{-2,-4} \wt{P}_3\ar{r}[swap]{\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}}\ar[d] \& \Sigma^{0,-2}\wt P_2\oplus\Sigma^{-1,-3}\wt P_2 \oplus \Sigma^{-2,-4} \wt P_3 \ar{d}{\begin{pmatrix}1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}}\ar[r] \& 0\ar[d] \\ 0 \ar[r] \&\Sigma^{0,-2}\wt P_2\oplus\Sigma^{-1,-3}\wt P_2 \ar{r}{\begin{pmatrix}e_2 & e_1\end{pmatrix}}\& \Sigma^{-1,-4} \wt P_3 \\ \end{tikzcd}
(again, the signs are consistent with the choice of having the nullhomotopies starting at $0$), and we see that the $4$-fold Toda power induces on homology the map given by the action of $$e_1e_3 - e_2^2 + e_3e_1$$ which by Remark <ref> represents exactly the homology class picked out by $r_4$ in the defining diagram for $\Lambda^{(4)}$.

§ TOTAL HOMOLOGY AND K-DECOMPOSITIONS

The Postnikov tower construction makes clear how a spectrum is uniquely determined by its homotopy groups together with its k-invariants. In particular, any spectrum $X$ uniquely determines co/fiber sequences of the form $$\tau_{\geq n+1}X\to\tau_{\geq n}X\to\pi_nX[n],$$ together with “truncated k-invariants” $k_n\colon\pi_nX[n]\to\pi_{n+1}X[n+2]$, for each $n\in\bbZ$, and it is easy to verify that the $k_{n}$'s are such that $$k_{n}\circ k_{n-1}[-1]\simeq 0$$ for all $n\in\bbZ$.
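Spelling out the shifts (this unwinding is our own bookkeeping, included only for the reader's convenience), the composite in question reads
\[
  \pi_{n-1}X[n-2] \xrightarrow{\;k_{n-1}[-1]\;} \pi_{n}X[n] \xrightarrow{\;k_{n}\;} \pi_{n+1}X[n+2],
\]
so each truncated k-invariant raises the shift by $2$, matching the condition $C^n\in\scrC^\heartsuit[2n]$ imposed on the degenerate cochain complexes introduced below.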
It is widely known among the experts that the above observation admits a converse precisely when the nullhomotopies for such pairwise compositions of “truncated k-invariants” are suitably compatible (see e.g. <cit.>, where an instance of this fact is discussed for objects with finite Postnikov filtrations in the setting of triangulated categories); to be precise, a spectrum is uniquely determined by the datum of its homotopy groups together with a collection of maps $\{k_{n}\colon\pi_{n}X[n]\to\pi_{n+1}X[n+2]\}_{n\in\bbZ}$ such that all pairs of composable maps compose to a nullhomotopic one, and all the possible higher Toda brackets are trivial.

In this section we provide a rigorous formulation of the idea discussed above using the language of coherent cochain complexes developed in the previous sections, generalizing it to the context of stable $\infty$-categories satisfying some mild hypotheses[that is, admitting sequential limits, sequential colimits and equipped with a right complete t-structure] and general homotopy objects in (ordinary) Abelian categories. The process of reconstructing an object from its decomposition into homotopy objects and maps between their shifts will be given by the total homology construction (see <ref>), a concept further investigated in the next section, in relation to the spectral sequence generated by a coherent cochain complex.

Let us begin by constructing an $\infty$-category that encapsulates the data needed to reconstruct the objects from their homotopy groups and their k-invariants. Let $\scrC$ be a stable $\infty$-category equipped with a t-structure. We will denote by $\Ch_{\scrC}^{+}(\scrC^\heartsuit)$ the full subcategory of $\coCh\scrC$ spanned by those objects $C$ such that $C^n\in\scrC^\heartsuit[2n]$ for all $n\in\bbZ$, and refer to it as the $\infty$-category of degenerate cochain complexes of $\scrC$. The notation $\Ch_\scrC^+(\scrC^\heartsuit)$ is slightly abusive, as the object depends also on the t-structure $\scrC$ is equipped with.

We can now construct a functor that incarnates the idea discussed in the introduction of this section. Let $\scrC$ be a stable $\infty$-category equipped with a right complete t-structure. By the definition of right completeness, there exists an equivalence $$\scrC\simeq\lim\left(\cdots\to \left(\scrC\right)_{\geq n}\xto{\tau_{\geq n+1}} \left(\scrC\right)_{\geq n+1}\to\cdots\right).$$ By the right completeness of $\cFild\scrC$ (see Remark <ref>, together with Theorem <ref>), the fully faithful inclusion $\scrC\hookrightarrow\Fild(\scrC)$ factors through the subcategory of complete objects: $$\wtrunc{\istar}\colon\scrC\hookrightarrow\cFild(\scrC).$$

Let $\scrC$ be a stable presentable $\infty$-category equipped with a right complete t-structure. Notice that, by Remark <ref>, the composite $\caush\circ\wtrunc\istar\colon\scrC\to\coCh(\scrC)$ factors through the inclusion $\Ch_\scrC^+(\scrC^\heartsuit)\subseteq\coCh(\scrC)$. We define the k-decomposition functor $$\kinv\colon\scrC\to\Ch_\scrC^+(\scrC^\heartsuit)\ \colon\ C\mapsto\left(\cdots\to\pi_{n-1}C[2n-2]\to\pi_{n}C[2n]\to\pi_{n+1}C[2n+2]\to\cdots\right)$$ to be such factorization of $\caush\circ\wtrunc{\istar}$ through $\Ch_\scrC^+(\scrC^\heartsuit)$. We refer to the differentials of the complex $\kinv C$ as the k-invariants of $C$.

Let $R$ be an ordinary commutative ring, and let $\rrD(R)$ denote its derived $\infty$-category (considered as a stable $\infty$-category equipped with the usual t-structure). Given any object $X\in\rrD(R)$, its k-decomposition $\kinv X$ is given by the coherent cochain complex $$\cdots\to (H^{-n+1}X)[2n-2]\to(H^{-n} X)[2n]\to(H^{-n-1}X)[2n+2]\to\cdots$$ with $H^{-n}X \in \Ab$. Given any $X\in\Sp$, its k-decomposition $\kinv X$ is given by the coherent cochain complex $$\cdots\to(\pi_{n-1}X)[2n-2]\to(\pi_{n}X)[2n]\to(\pi_{n+1}X)[2n+2]\to\cdots$$ with $\pi_n X \in \Ab$.
The equivalence $\Ch_\Sp^+(\Ab)\simeq\Sp$ can be interpreted as saying that any spectrum can be expressed as a coherent cochain complex of (suitably shifted) Abelian groups, and, vice versa, one can construct a spectrum from the datum of objects $A_n[2n]$ with $A_n\in\Ab$, for $n\in\bbZ$, and maps between consecutive objects $A_n[2n]\to A_{n+1}[2n+2]$ such that all possible Toda brackets between them are trivial. The previous two examples make clear how, from the point of view of coherent cochain complexes, the only difference between an object in $\rrD(\bbZ)$ and a spectrum lies in the $\bbZ$-linearity of the k-invariants. Of course, this is just a consequence of a theorem of Shipley's (see [30]) showing that the derived $\infty$-category $\rrD(R)$ of a discrete commutative ring $R$ is equivalent to the $\infty$-category of $HR$-modules.

We now turn our attention to the functor inverse to $\kinv$. It turns out that the process of reconstructing an object of $\scrC$ from its k-decomposition is a particular case of the more general construction assigning to every coherent cochain complex $C$ the object underlying its piled-up filtration $\imp C$. Let $\scrC$ be a stable $\infty$-category with sequential limits and sequential colimits. We define the total homology functor to be the composite functor $$\tothom\colon\coCh\scrC\to\scrC\ \colon\ C\mapsto\left(\imp C\right)^{-\infty}.$$

Let $\scrC$ be a stable $\infty$-category with sequential limits and sequential colimits, equipped with a right complete t-structure. The restriction of $\tothom$ to $\Ch_\scrC^+(\scrC^\heartsuit)$ induces an equivalence $\Ch_\scrC^+(\scrC^\heartsuit)\simeq\scrC$, whose inverse is given by the k-decomposition functor of Definition <ref>.

By the definition of right completeness, there exists an equivalence $$\scrC\simeq\lim\left(\cdots\to \left(\scrC\right)_{\geq n}\xto{\tau_{\geq n+1}} \left(\scrC\right)_{\geq n+1}\to\cdots\right).$$ By the discussion in [24] right before Proposition 1.2.1.17, we can give an alternative description of the right hand side as the full subcategory of $\Fild(\scrC)$ spanned by those filtered objects for which:
* For each $n\in\bbZ$, $F^n\in\scrC_{\geq n}$;
* For each $m\geq n$, the associated map $F^m\to F^n$ induces an equivalence $F^m\xto{\sim}\wtrunc{m}F^n$.
We first notice that, as long as (<ref>) holds, we can replace (<ref>) with the weaker assumption that

($\spadesuit$) for each $n\in\bbZ$, the associated map $F^{n+1}\to F^n$ induces an equivalence $F^{n+1}\xto{\sim}\wtrunc{n+1}F^n$.

By induction, assume that for a fixed $n$ and some $m\geq n$, the map $F^m\to F^n$ induces an equivalence $F^m\to \wtrunc{m}F^n$. We want to prove that $F^{m+1}\to F^n$ induces an equivalence $F^{m+1}\to\wtrunc{m+1}F^n$. By ($\spadesuit$), $F^{m+1}\to F^m$ induces an equivalence $F^{m+1}\to\wtrunc{m+1}F^m$, and, applying $\wtrunc{m+1}$ to our inductive hypothesis, $\wtrunc{m+1}F^m\to\wtrunc{m+1}F^n$ is an equivalence; hence the composite map $F^{m+1}\to\wtrunc{m+1}F^m\to \wtrunc{m+1}F^n$ (which, by (<ref>), is precisely the image under $\wtrunc{m+1}$ of the composite $F^{m+1}\to F^m\to F^n$) is an equivalence, as desired.

As $\wtrunc{\istar}$ is fully faithful, it induces an equivalence with its essential image in $\cFild(\scrC)$ (see Remark <ref>). As $\caush$ is an equivalence, it is sufficient to check that $\imp\left(\Ch_\scrC^+\scrC^\heartsuit\right)$ and $\wtrunc{\istar}\scrC$ coincide as full subcategories of $\cFild\scrC$. By the definition of $\Ch_\scrC^+\scrC^\heartsuit$ and Lemma <ref>, all objects in $\imp\left(\Ch_\scrC^+\scrC^\heartsuit\right)$ satisfy (<ref>) and ($\spadesuit$). On the other hand, again by Lemma <ref>, any filtered object satisfying conditions (<ref>) and ($\spadesuit$) is such that its shelled complex lies in $\Ch_\scrC^+\scrC^\heartsuit$. Hence, $\imp\left(\Ch_\scrC^+\scrC^\heartsuit\right)$ and $\wtrunc{\istar}\scrC$ are two full subcategories of $\cFild\scrC$ spanned by the same objects.
To identify its inverse, it now suffices to notice that $$\tothom\circ\kinv\simeq\left(\imp\circ\caush\circ\wtrunc{\istar}(-)\right)^{-\infty}\simeq\left(\wtrunc{\istar}(-)\right)^{-\infty}\simeq\id_\scrC,$$ as desired.

We now turn our attention to the behavior of $\tothom$ on objects coming from the (ordinary) category of chain complexes in the heart. Let $\scrC$ be as in Theorem <ref>. Then the t-structure homotopy objects of the total homology of an ordinary cochain complex $\oC \in \coCh(\scrC^\heartsuit)$ (considered as an element of $(\coCh\scrC)^\heartsuit$) are given by the cohomology of $\oC$: $$\pi_{-n}\tothom\oC\simeq \rrH^n\oC.$$

By Proposition <ref>, together with the hypothesis that $\oC$ lies in $\coCh(\scrC^\heartsuit)$, we have that $\gr^n\imp\oC\simeq \oC^n[-n]$; in particular, (as $\imp\oC$ is complete) we see that for any integer $n$, $\imp\oC^n$ is $(-n)$-coconnective, and that
\begin{equation}\label{eq-pi-tothom} \pi_m\left(\imp\oC^n\right) \cong \pi_m\left(\imp\oC^{n-1}\right) \text{ for all } m\leq-n. \end{equation}
By Remark <ref>, we have a co/fiber sequence $$\oC^{n+1}[-n-1]\to \imp \oC^n/\imp \oC^{n+2} \to \oC^n[-n]$$ whose associated long exact sequence on homotopy objects lets us identify $$\pi_m\left(\imp \oC^n/\imp \oC^{n+2}\right)\cong \begin{cases} \ker(d^n_{\oC}) \quad &\text{ for } m=-n;\\ \coker(d^n_{\oC}) \quad &\text{ for } m=-n-1;\\ 0 \quad &\text{ else.} \end{cases}$$ Again by Remark <ref>, we have a co/fiber sequence $$\imp \oC^n/\imp \oC^{n+2}\to\imp \oC^{n-1}/\imp \oC^{n+2}\to \oC^{n-1}[-n+1]$$ whose associated long exact sequence, starting at degree $-n+1$, looks as follows: $$\cdots\ 0 \to \pi_{-n+1}\left(\imp \oC^{n-1}/\imp \oC^{n+2}\right) \to \oC^{n-1} \to \ker(d^n) \to \pi_{-n}\left(\imp \oC^{n-1}/\imp \oC^{n+2}\right) \to 0\ \cdots$$ letting us identify $\pi_{-n}\left(\imp \oC^{n-1}/\imp \oC^{n+2}\right)$ as $$\coker \left(\oC^{n-1}\xto{d^{n-1}} \ker(d^n)\right) \cong \rrH^n\oC.$$ As $\imp \oC^{n+2}$ is $(-n-2)$-coconnective, the long exact sequence associated to the co/fiber sequence $$\imp \oC^{n+2}\to\imp\oC^{n-1}\to\imp \oC^{n-1}/\imp \oC^{n+2}$$ shows that $$\pi_m\left(\imp\oC^{n-1}\right)\cong\pi_m\left(\imp \oC^{n-1}/\imp \oC^{n+2}\right)\quad\text{ for }m\geq-n-1,$$ and thus in particular that $\pi_{-n}\left(\imp\oC^{n-1}\right) \cong\rrH^n\oC$. Finally, (<ref>) implies the desired result.

In the special case of $\scrC=\Sp$, the proof of Proposition <ref> shows that $\imp\oC$ for an ordinary cochain complex of Abelian groups $\oC$ gives precisely the tower obtained through the “brutal truncations”[sometimes referred to also as the “stupid truncation” in the literature] of the complex $\oC$. That is, $$\imp\oC^n\simeq H(\tau^{\geq n}\oC),$$ where $\tau^{\geq n}$ here denotes the brutal truncation (1-)functor and $H$ denotes the Eilenberg–MacLane ($\infty$-)functor $\coCh(\Ab)\to\Sp$. In particular, we have that in the case of spectra the total homology functor $\tothom$ restricted to $\coCh(\Ab)\subset\coCh(\Sp)$ coincides with the Eilenberg–MacLane functor.

We conclude the section with the following variant of Definition <ref>. Let $\scrC$ be a stable presentable $\infty$-category equipped with a right separated t-structure, and let $q\in\bbN$ be fixed. Let $\Ch_{\scrC}^{+q}(\scrC^\heartsuit)$ denote the full subcategory of $\coCh\scrC$ spanned by those objects $C$ such that $C^n\in\scrC^\heartsuit[n(q+1)]$ for all $n\in\bbZ$. Consider the full subcategory $\scrC_{q\text{-periodic}} \subset\scrC$ spanned by the objects $X$ such that the t-structure homotopy objects $\pi_n X$ are isomorphic to $0$ for $n$ not a multiple of $q$: $$\scrC_{q\text{-periodic}}\coloneqq\{X \ | \ \pi_n X \cong 0 \text { for } n\not\equiv 0 \mod q\}.$$
We define $\kinv_q$ to be the factorization of $\caush\circ\wtrunc{q\istar}$ (where $\wtrunc{q\istar}$ denotes the sub-filtration of $\wtrunc{\istar}$ obtained by skipping all the stages of the Whitehead tower that are not multiples of $q$) through the inclusion $\Ch_\scrC^{+q}(\scrC^\heartsuit)\subseteq\coCh\scrC$, and refer to it as the $q$-periodic k-decomposition functor. Similarly to what happens for $\kinv$, the functor $\kinv_q$ induces an equivalence $\scrC_{q\text{-periodic}}\simeq\Ch_\scrC^{+q}(\scrC^\heartsuit)$.

We learned about the following example from Achim Krause. As an instance of Variant <ref>, we can consider the $2(p^n-1)$-periodic k-decomposition of the $n$-th Morava K-theory spectrum $K(n)$ ($n\geq1$), for some fixed prime $p$. The coherent cochain complex $\kinv_{2(p^n-1)}K(n)$ looks as follows: $$\cdots\to\bbF_p[m(p^n-1)]\to \bbF_p[(m+1)(p^n-1)]\to\cdots$$ (where $\bbF_p[m(p^n-1)]$ sits in degree $m$), and the differentials are given by suitable shifts of the $n$-th Milnor primitive for the mod $p$ Steenrod algebra: $$\partial^m\simeq Q_n[m]\in\calA_p.$$

§ THE SPECTRAL SEQUENCE ASSOCIATED TO A COHERENT COCHAIN COMPLEX

In this section, we discuss how coherent cochain complexes give rise to spectral sequences. We have the following result. Let $\scrC$ be a stable $\infty$-category with sequential limits and sequential colimits, equipped with a right complete t-structure. Then, every coherent cochain complex $C\in\coCh\scrC$ generates a spectral sequence whose $E_1$ page is given by the homotopy groups of the components of $C$ and whose $E_1$ differentials are obtained from the coherent differentials of $C$ by passing to homotopy. When the spectral sequence collapses at a finite stage, it converges strongly to the homotopy groups of the total homology of $C$: $$E_1^{i,j}\cong\pi_{-j}C^i \Longrightarrow \pi_{-i-j}\tothom C.$$ We defer the proof to later in this section. Of course, the above theorem follows at once from Theorem <ref> together with Theorem <ref>.
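In coordinates, and writing $\partial^i$ for the coherent differentials of $C$, the statement amounts to the following identifications (this unwinding is ours, but it only restates the indexing of the theorem):
\[
  E_1^{i,j}\cong\pi_{-j}C^{i},
  \qquad
  d_1^{i,j}\simeq\pi_{-j}\left(\partial^{i}\right)\colon \pi_{-j}C^{i}\to\pi_{-j}C^{i+1},
\]
with abutment $\pi_{-i-j}\tothom C$ whenever the spectral sequence collapses at a finite page.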
# On Impact of Semantically Similar Apps in Android Malware Datasets

Roopak Surendran, Kerala, India

###### Abstract

Malware authors reuse program segments found in other applications to perform similar kinds of malicious activities, such as information stealing, sending SMS and so on. Hence, there may exist several semantically similar malware samples in a family/dataset. Many researchers are unaware of these semantically similar apps and use their features in their ML models for evaluation. Hence, the performance measures might be seriously affected by these similar kinds of apps. In this paper, we study the impact of semantically similar applications on the performance measures of ML based Android malware detectors. For this, we propose a novel opcode subsequence based malware clustering algorithm to identify the semantically similar malware and goodware apps. For studying the impact of semantically similar apps on the performance measures, we tested the performance of distinct ML models based on API call and permission features of malware and goodware applications with/without semantically similar apps. In our experimentation with the Drebin dataset, we found that, after removing the exact duplicate apps from the dataset ($\epsilon=0$), the malware detection rate (TPR) of the API call based ML model dropped from 0.95 to 0.91 and that of the permission based model dropped from 0.94 to 0.90. In order to overcome this issue, we advise the research community to use our clustering algorithm to remove semantically similar apps before evaluating their malware detection mechanisms.

###### Index Terms: Code reuse, Android malware, Opcodes

## I Introduction

It is known that malware apps frequently reuse the program segments of previously detected malware apps [1]. Also, they can add junk code or remove redundant code to change their signatures. However, these malicious apps tend to preserve the malicious program segments intended for specific functionalities such as information stealing, sending SMS and so on. Hence, it is clear that there may exist common malicious program segments shared by Android malware families.

Android is an open source operating system which provides specific APIs (Application Programming Interfaces) to perform sensitive operations such as sending SMS, making phone calls and so on [2]. For example, the sendTextMessage() API call can be used for sending SMS to others. Initially, a malware author constructs a malicious program segment which is intended to perform a particular kind of malicious activity. This is done by invoking some specific API calls in a particular manner. For convenience, evolving malware apps tend to reuse these existing malicious program segments to perform the same kind of behavior. Furthermore, there exist several frameworks, such as kwetza, for injecting existing malicious program segments into benign applications.

Most of the existing works use machine learning algorithms for malware classification [3]. These approaches randomly select malware and goodware samples from the dataset for training and testing the classifiers. Some of the malware or goodware apps are semantically similar and may contain similar features. These semantically similar apps can result in overrated performance of the machine learning classifier, especially in holdout evaluation. So, the reported accuracies in these papers may be biased.
However, the performance of the models is not highly affected in k-fold cross validation based evaluation. In this paper, we make a study of this problem and propose a clustering algorithm to filter out the semantically similar apps from malware and goodware datasets.

In recent years, many research papers have been published in the area of Android malware detection. These works are classified into static, dynamic and hybrid analysis. In static analysis, source code level features such as API calls, permissions etc. are used for malware detection. However, in the case of dynamic analysis, runtime features such as system calls, network packets etc. are used. In hybrid analysis, both static and dynamic features are used. Most of the existing works use the Drebin dataset for evaluating their mechanisms. Drebin is a public malware dataset which contains 5560 malware apps from 179 malware families [4]. Because of the popularity of the Drebin dataset, we have selected it for studying the impact of semantically similar apps on ML models.

The usage of applications with similar program segments in experimental evaluations can give biased results. So it is necessary to identify applications with similar program segments. For this, we propose a novel malware clustering algorithm based on opcode subsequences to filter out semantically similar apps. Researchers can use the filtered datasets in their experiments to eliminate this bias in their results.

In this work, we study the impact of semantically similar apps on machine learning models for malware detection. A clustering algorithm is proposed to filter out similar applications from both the goodware and malware datasets. Then, we tested the performance of ML models on various features of malware and goodware samples with and without the semantically similar apps. We found that the performance of the ML models dropped only very slightly after removing the semantically similar apps from the dataset when the k-fold cross validation technique is used. Hence, it is advised to use k-fold cross validation for evaluating the models, or to filter out the semantically similar apps from the malware and goodware datasets, for a fair evaluation.

Figure 1: Clustering of Android Apps in the Dataset

The rest of the paper is organized as follows. In Section 2, we review the related literature. In Section 3, we discuss the procedure for extracting the opcode subsequences of an application. Our malware clustering algorithm is discussed in Section 4. In Section 5, we discuss the performance of ML models on the Drebin dataset with and without the semantically similar apps. In Section 6, we discuss the limitations of and future directions for our work.

## II Literature Review

In this section, we discuss the existing research works on code reuse and Android malware detection.

### II-A Detection of Code Reuse

Many researchers have discussed the impact of code reuse in Android applications. By using an existing program segment dedicated to a particular functionality, an application developer can significantly save time and effort. Moreover, it is very helpful for reducing errors or bugs in the application. However, nowadays, this feature is increasingly misused by hackers. They generate several versions of a particular kind of malware app by injecting its payload (malicious code segments) into other legitimate apps. Because of the dissimilarity in hash values, anti-malware solutions are easily evaded by these repackaged malware apps.
In this section, we discuss the main works related to code reuse detection in malware apps. In GroupDroid [5], static control flow graphs were used for clustering 4211 Android malware apps into different groups. In DroidSim [1], component-based control flow graphs were used for identifying similarities among malware apps, and code reuse was found in a dataset of 706 malware applications. Hanna et al. [6] proposed a framework called JuxtApp for detecting code reuse among Android applications. In JuxtApp, feature matrices of applications were constructed for measuring the similarities. JuxtApp identified malicious code reuse in 463 vulnerable apps and 34 malware apps. In DNADroid [7], program dependency graphs were used for measuring the similarities among applications, and code cloning was found in at least 141 apps in their dataset.

In the area of Android malware detection, many researchers have included the features of cloned (semantically similar) apps in their ML models. In this work, we study the impact of these semantically similar apps on ML models. For this, we propose a simple and lightweight algorithm to detect semantically similar applications in a malware dataset. Here, we use opcode subsequences as features for measuring the similarities. This is because the program segments (the program statements in a function or method) of an application can be conveniently represented with opcode subsequences. Hence, similarity/dissimilarity values can be easily computed by comparing different sets of opcode subsequences.

### II-B Review on Malware Detection Mechanisms

In existing works, machine learning algorithms are used for malware analysis because of their ability to predict malicious behavior on unseen data points [8]. Most of the popular works use the Drebin dataset for evaluation. Drebin is a public malware dataset which contains 5560 malware apps from 179 malware families [4]. Also, the Drebin dataset contains the malicious applications from the MalGenome dataset. Hence, we have selected the Drebin dataset for studying the impact of semantically similar apps on ML models. The existing Android malware detection mechanisms use either static features such as API calls, permissions etc., or dynamic features such as system calls, network packets etc. (or a combination of both) for malware analysis. The popular static and dynamic malware analysis mechanisms evaluated on the Drebin/MalGenome dataset are discussed below.

In static analysis, the features associated with the source code of an application are used for malware detection. In [9], the authors used probabilistic machine learning classifiers trained with API call based features for malware detection. In [10], the app permissions are used as input features of a machine learning classifier for malware detection. In [11], the data flows are extracted from an application for finding malicious behavior. In [12], intent based features are used in a machine learning classifier for malware detection. In [13], n-gram frequencies of opcode level features are used in a machine learning classifier for malware detection.

In dynamic analysis, an application is executed in an emulator or on a real device, and features such as system calls and network packets are collected using third party utilities. In [14], the runtime API calls are used for malware detection. In [15], the authors used system metric level features such as CPU and memory usage for malware detection. In [16], the authors used system calls as features of supervised binary classifiers for malware detection.
In [17], the authors used network packets as features for malware detection. In all of the above mechanisms, the authors used all the samples in their dataset (Drebin/MalGenome) for experimental purposes. It is known that a malware author reuses existing malicious code to generate new variants. Hence, these malware datasets may contain several semantically similar apps. In this paper, we study the impact of semantically similar apps on machine learning models for malware detection. We propose a clustering algorithm to filter out the semantically similar apps from datasets. We then tested the performance of ML models on datasets with and without semantically similar apps. From our experimental evaluations, we conclude that the presence of semantically similar apps results in overrated performance of ML models in malware detection.

## III Opcode Based Clustering Algorithm

In this section, we investigate the impact of semantically similar apps in Android malware datasets. Our mechanism has three phases. In the first phase, we extract opcode subsequences from a set of malware and goodware applications. In the next phase, we filter out the semantically similar applications from the opcode subsequence dataset using our novel clustering algorithm. In the final phase, we evaluate the performance of ML models on the datasets with and without semantically similar apps. On the basis of this performance evaluation, we draw our conclusions. The clustering procedure is given in Figure 1.

Figure 2: Method Based Opcode Subsequence of an Application

### III-A Extraction of Opcode Subsequences from an Android Application

In this section, we discuss the procedure for extracting opcode subsequences from Android applications. Opcodes (Operation Codes) specify the kinds of operations to be performed by the device hardware; they are part of a machine language program. The details of the opcodes in the Android operating system are given in Table I.
TABLE I: List of Opcodes in the Android Operating System

Hex Value | Opcode | Hex Value | Opcode | Hex Value | Opcode | Hex Value | Opcode
---|---|---|---|---|---|---|---
00 | nop | 01 | move | 02 | move/from16 | 03 | move/16
04 | move-wide/from16 | 05 | move-wide/16 | 07 | move-object | 08 | move-object/from16
09 | move-object/16 | 0A | move-result | 0B | move-result-wide | 0C | move-result-object
0D | move-exception | 0E | return-void | 0F | return | 10 | return-wide
11 | return-object | 12 | const/4 | 13 | const/16 | 14 | const
15 | const/high16 | 16 | const-wide/16 | 17 | const-wide/32 | 18 | const-wide
19 | const-wide/high16 | 1A | const-string | 1B | const-string/jumbo | 1C | const-class
1D | monitor-enter | 1E | monitor-exit | 1F | check-cast | 20 | instance-of
21 | array-length | 22 | new-instance | 23 | new-array | 24 | filled-new-array
25 | filled-new-array/range | 26 | fill-array-data | 27 | throw | 28 | goto
29 | goto/16 | 2A | goto/32 | 2B | packed-switch | 2C | sparse-switch
2D | cmpl-float | 2E | cmpg-float | 2F | cmpl-double | 30 | cmpg-double
31 | cmp-long | 32 | if-eq | 33 | if-ne | 34 | if-lt
35 | if-ge | 36 | if-gt | 37 | if-le | 38 | if-eqz
39 | if-nez | 3A | if-ltz | 3B | if-gez | 3C | if-gtz
3D | if-lez | 3E | unused_3E | 3F | unused_3F | 40 | unused_40
41 | unused_41 | 42 | unused_42 | 43 | unused_43 | 44 | aget
45 | aget-wide | 46 | aget-object | 47 | aget-boolean | 48 | aget-byte
49 | aget-char | 4A | aget-short | 4B | aput | 4C | aput-wide
4D | aput-object | 4E | aput-boolean | 4F | aput-byte | 50 | aput-char
51 | aput-short | 52 | iget | 53 | iget-wide | 54 | iget-object
55 | iget-boolean | 56 | iget-byte | 57 | iget-char | 58 | iget-short
59 | iput | 5A | iput-wide | 5B | iput-object | 5C | iput-boolean
5D | iput-byte | 5E | iput-char | 5F | iput-short | 60 | sget
61 | sget-wide | 62 | sget-object | 63 | sget-boolean | 64 | sget-byte
65 | sget-char | 66 | sget-short | 67 | sput | 68 | sput-wide
69 | sput-object | 6A | sput-boolean | 6B | sput-byte | 6C | sput-char
6D | sput-short | 6E | invoke-virtual | 6F | invoke-super | 70 | invoke-direct
71 | invoke-static | 72 | invoke-interface | 73 | unused_73 | 74 | invoke-virtual/range
75 | invoke-super/range | 76 | invoke-direct/range | 77 | invoke-static/range | 78 | invoke-interface/range
79 | unused_79 | 7A | unused_7A | 7B | neg-int | 7C | not-int
7D | neg-long | 7E | not-long | 7F | neg-float | 80 | neg-double
81 | int-to-long | 82 | int-to-float | 83 | int-to-double | 84 | long-to-int
85 | long-to-float | 86 | long-to-double | 87 | float-to-int | 88 | float-to-long
89 | float-to-double | 8A | double-to-int | 8B | double-to-long | 8C | double-to-float
8D | int-to-byte | 8E | int-to-char | 8F | int-to-short | 90 | add-int
91 | sub-int | 92 | mul-int | 93 | div-int | 94 | rem-int
95 | and-int | 96 | or-int | 97 | xor-int | 98 | shl-int
99 | shr-int | 9A | ushr-int | 9B | add-long | 9C | sub-long
9D | mul-long | 9E | div-long | 9F | rem-long | A0 | and-long
A1 | or-long | A2 | xor-long | A3 | shl-long | A4 | shr-long
A5 | ushr-long | A6 | add-float | A7 | sub-float | A8 | mul-float
A9 | div-float | AA | rem-float | AB | add-double | AC | sub-double
AD | mul-double | AE | div-double | AF | rem-double | B0 | add-int/2addr
B1 | sub-int/2addr | B2 | mul-int/2addr | B3 | div-int/2addr | B4 | rem-int/2addr
B5 | and-int/2addr | B6 | or-int/2addr | B7 | xor-int/2addr | B8 | shl-int/2addr
B9 | shr-int/2addr | BA | ushr-int/2addr | BB | add-long/2addr | BC | sub-long/2addr
BD | mul-long/2addr | BE | div-long/2addr | BF | rem-long/2addr | C0 | and-long/2addr
C1 | or-long/2addr | C2 | xor-long/2addr | C3 | shl-long/2addr | C4 | shr-long/2addr
C5 | ushr-long/2addr | C6 | add-float/2addr | C7 | sub-float/2addr | C8 | mul-float/2addr
C9 | div-float/2addr | CA | rem-float/2addr | CB | add-double/2addr | CC | sub-double/2addr
CD | mul-double/2addr | CE | div-double/2addr | CF | rem-double/2addr | D0 | add-int/lit16
D1 | rsub-int | D2 | mul-int/lit16 | D3 | div-int/lit16 | D4 | rem-int/lit16
D5 | and-int/lit16 | D6 | or-int/lit16 | D7 | xor-int/lit16 | D8 | add-int/lit8
D9 | rsub-int/lit8 | DA | mul-int/lit8 | DB | div-int/lit8 | DC | rem-int/lit8
DD | and-int/lit8 | DE | or-int/lit8 | DF | xor-int/lit8 | E0 | shl-int/lit8
E1 | shr-int/lit8 | E2 | ushr-int/lit8 | E3 | unused_E3 | E4 | unused_E4
E5 | unused_E5 | E6 | unused_E6 | E7 | unused_E7 | E8 | unused_E8
E9 | unused_E9 | EA | unused_EA | EB | unused_EB | EC | unused_EC
ED | unused_ED | EE | execute-inline | EF | unused_EF | F0 | invoke-direct-empty
F1 | unused_F1 | F2 | iget-quick | F3 | iget-wide-quick | F4 | iget-object-quick
F5 | iput-quick | F6 | iput-wide-quick | F7 | iput-object-quick | F8 | invoke-virtual-quick
F9 | invoke-virtual-quick/range | FA | invoke-super-quick | FB | invoke-super-quick/range | FC | unused_FC
FD | unused_FD | FE | unused_FE | FF | unused_FF | |

In the Android operating system, ART (Android Runtime) or the Dalvik Virtual Machine (DVM) is responsible for handling the opcodes, which are packaged in the dex (Dalvik executable) file format [18][19]. Android programs are written in the Java or Kotlin language and then compiled to a ‘dex’ file [19].

Figure 3: Process of Extracting Opcode Subsequences of an Application

In an Android application, the program segments are written in the form of various functions or methods. Hence, there exists an opcode subsequence corresponding to each program segment in the application. We extract the set of opcode subsequences from an application and use this set to represent it. Reverse engineering tools such as apktool can be used to extract the opcode sequences from the ‘dex’ file [20]: apktool decompiles the application, and the opcode subsequences are extracted from the resulting smali code. Here, we consider an opcode subsequence to be the sequence of opcodes in a method segment. A sample opcode subsequence of an application is given in Figure 2. The procedure for extracting opcode subsequences is given in Figure 3.

### III-B Clustering Opcode Subsequences

In this section, we propose a novel algorithm for clustering the malware apps in a dataset. Here, we use the Ochiai coefficient (the set-based analogue of cosine similarity) [21, 22, 23] for measuring the similarity between two applications. Cosine-style similarity works well even if the opcode subsequence sets of the apps differ in size. Assume that $A$ and $B$ are the two sets of opcode subsequences of applications $P$ and $Q$. The Ochiai distance $S$ is calculated as: $S=1-\frac{|A\cap B|}{\sqrt{|A|\times|B|}},$ (1) where $|A\cap B|$ is the number of opcode subsequences that are found in both $A$ and $B$, $|A|$ is the number of opcode subsequences in $A$ and $|B|$ is the number of opcode subsequences in $B$.

The proposed malware clustering algorithm is based on the DBSCAN algorithm [24][25]. The algorithm accepts a malware family dataset $X=\{X_{1},X_{2},X_{3},\ldots,X_{n}\}$ as input and gives the cluster centers $C=\{C_{1},C_{2},\ldots\}$ as output. Let $\epsilon$ be a distance threshold ranging from 0 to 1.
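As a concrete illustration of the distance function in Eq. (1) and of the DBSCAN-style loop enumerated step-by-step below, a minimal Python sketch follows. The function and variable names are ours, and the snippet is a simplified sketch, not the implementation used in our experiments.

```python
import random

def ochiai_distance(a, b):
    """Ochiai distance between two sets of opcode subsequences (Eq. 1)."""
    if not a or not b:
        return 1.0
    return 1.0 - len(a & b) / ((len(a) * len(b)) ** 0.5)

def cluster_apps(apps, eps):
    """DBSCAN-style clustering. `apps` maps an app id to its set of
    opcode subsequences; `eps` is the distance threshold in [0, 1].
    Returns a list of (centroid_id, member_ids) clusters."""
    unvisited = set(apps)
    clusters = []
    while unvisited:
        center = random.choice(sorted(unvisited))   # step 2: pick a random app
        members = {aid for aid in unvisited          # step 3: eps-neighbourhood
                   if ochiai_distance(apps[center], apps[aid]) <= eps}
        clusters.append((center, members))           # step 4: form a cluster
        unvisited -= members                          # step 4a: remove clustered apps
    return clusters

# With eps = 0, each cluster contains only exact duplicates, so keeping one
# representative per cluster removes all duplicate apps from the dataset.
```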
The steps in our algorithm are given below.

1. Initialize $j=1$.
2. Select a random malware app $X_{i}$ from $X$ and mark $X_{i}$ as visited.
3. Find the neighbours of $X_{i}$ using $\epsilon$ (all malware apps which are within the distance value $\epsilon$ are considered neighbours).
4. Form a cluster having centroid $C_{j}=X_{i}$ and update $j=j+1$.
   (a) Remove all the clustered apps from $X$.
5. Go to step 2 and repeat the process until all apps in $X$ are visited.

## IV Illustration of Our Clustering Algorithm on the Drebin Dataset

In this section, we discuss the performance of our clustering algorithm on the Drebin dataset [4], chosen because of its wide acceptance and popularity in research works. The Drebin dataset consists of 5560 malware applications selected from 179 malware families over a period ranging from 2010 to 2012. Our clustering algorithm was developed and tested on an Ubuntu PC having 32 GB of memory. We reused an existing Python program [26] to extract the opcode subsequences of the applications in our malware family dataset. In that Python code, apktool [27] is used to decompile an application and extract the smali code from it. From the smali code, the opcode subsequences are extracted and saved in a file. Then, we cluster all these files using our algorithm to identify the semantically similar applications.

We executed our clustering algorithm with different values of $\epsilon$. With the value $\epsilon=0$, we can remove all duplicate applications in the dataset; the obtained clusters are also more reliable. The number of clusters is reduced by increasing the value of $\epsilon$. Hence, by increasing the value of $\epsilon$, we can identify the highly dissimilar apps in the dataset. The number of clusters for different $\epsilon$ values is given in Figure 4. Here, we found that almost 50% of the apps in the Drebin dataset are exact copies of others. These duplicate apps might affect the actual performance of machine learning based malware detection mechanisms. In the next section, we investigate this with the help of API call and permission based classifiers.

Figure 4: Number of Clusters for Different Values of $\epsilon$

TABLE II: Distribution of Semantically Dissimilar Apps

Dataset | $\epsilon$ | Number of Malware Samples | Number of Goodware Samples
---|---|---|---
Dataset 1 | 0 | 2642 | 4655
Dataset 2 | 0.1 | 1650 | 3989
Dataset 3 | 0.2 | 1305 | 3610

## V Evaluation of Drebin Malware Samples with and without Semantically Similar Apps

In this section, we make a study of the impact of semantically similar applications on machine learning based malware detection mechanisms. For analyzing false positives, we collected 5500 goodware samples from the Androzoo dataset [28]. In order to avoid bias in the API levels, we selected goodware samples ranging from 2010 to 2012 (the same period/API level as that of the Drebin dataset). The overall evaluation dataset consists of these malware and goodware samples. From this dataset, we filtered out the semantically similar goodware and malware apps and constructed datasets with semantically dissimilar samples. The statistics of the apps in the datasets are given in Table II. All the datasets are evaluated with machine learning algorithms trained with different kinds of features. The features used are the following:

1. API calls;
2. Permissions.
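Before presenting the results, the following sketch illustrates how the binary feature matrix is assembled into the CSV file that we later feed to Weka. The feature subsets and file layout shown here are illustrative placeholders, not our exact pipeline.

```python
import csv

# Illustrative subsets of the features listed in Tables III and VI.
PERMISSIONS = ["READ_PHONE_STATE", "SEND_SMS", "INTERNET"]
API_CALLS = ["getDeviceId", "sendTextMessage", "getLine1Number"]

def feature_row(manifest_perms, invoked_apis, label):
    """Binary presence/absence vector for one app, plus its class label."""
    row = [1 if p in manifest_perms else 0 for p in PERMISSIONS]
    row += [1 if a in invoked_apis else 0 for a in API_CALLS]
    row.append(label)  # 'malware' or 'goodware'
    return row

def write_dataset(apps, path):
    """apps: iterable of (permissions, api_calls, label) triples."""
    header = PERMISSIONS + API_CALLS + ["class"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for perms, apis, label in apps:
            writer.writerow(feature_row(perms, apis, label))
```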
TABLE III: Selected Permissions for Malware Detection

SI.No | Permissions | SI.No | Permissions
---|---|---|---
1 | READ_PHONE_STATE | 2 | WRITE_CONTACTS
3 | CALL_PHONE | 4 | READ_CONTACTS
5 | INTERNET | 6 | SEND_SMS
7 | DISABLE_KEYGUARD | 8 | PROCESS_OUTGOING_CALLS
9 | RECEIVE_BOOT_COMPLETED | 10 | READ_SMS
11 | FACTORY_TEST | 12 | DEVICE_POWER
13 | HARDWARE_TEST | 14 | CHANGE_WIFI_STATE
15 | GET_ACCOUNTS | 16 | READ_HISTORY_BOOKMARKS
17 | WRITE_APN_SETTINGS | 18 | MODIFY_PHONE_STATE
19 | WRITE_HISTORY_BOOKMARKS | 20 | ACCESS_LOCATION
21 | EXPAND_STATUS_BAR | 22 | WRITE_EXTERNAL_STORAGE
23 | RECEIVE_SMS | 24 | WRITE_SMS
25 | ACCESS_WIFI_STATE | 26 | MODIFY_AUDIO_SETTINGS
27 | ACCESS_NETWORK_STATE | 28 | WRITE_SETTINGS
29 | READ_EXTERNAL_STORAGE | 30 | ACCESS_MOCK_LOCATION
31 | USE_CREDENTIALS | 32 | HARDWARE_TEST
33 | VIBRATE | 34 | READ_LOGS
35 | CHANGE_NETWORK_STATE | 36 | ACCESS_GPS
37 | WAKE_LOCK | 38 | ACCESS_COURSE_UPDATES
39 | ACCESS_LOCATION_EXTRA_COMMANDS | 40 | ACCESS_FINE_LOCATION
41 | GET_TASKS | 42 | RESTART_PACKAGES
43 | MOUNT_UNMOUNT_FILESYSTEMS | 44 | INSTALL_PACKAGES
45 | KILL_BACKGROUND_PROCESS | |

TABLE IV: K-Fold Cross Validation Results in the Permission Classifier

Dataset | TPR | FPR | Accuracy | Precision | F1Score
---|---|---|---|---|---
Overall Dataset | 0.941 | 0.050 | 0.945 | 0.945 | 0.945
Dataset 1 | 0.900 | 0.051 | 0.931 | 0.931 | 0.931
Dataset 2 | 0.855 | 0.054 | 0.920 | 0.920 | 0.920
Dataset 3 | 0.826 | 0.051 | 0.917 | 0.917 | 0.917

### V-A Permission Analysis

In this section, we re-implement the permission based malware detection mechanism on our datasets (with and without semantically similar apps) to analyze the impact of semantically similar apps in the dataset. Here, we used the key permission based features mentioned in our previous work [29]; the list of permission based features is given in Table III. We extract the permission based features of the malware and goodware apps in the dataset and construct a Comma Separated Value (CSV) file. This CSV file is supplied to the Weka framework [30] and tested with machine learning classifiers employing the 10-fold cross-validation technique. We obtained high accuracy with the random forest classifier [31]. From Table IV, we can see that the performance of the classifiers dropped slightly on the datasets of semantically dissimilar apps. The random forest algorithm works on the basis of information gain values [32]; the information gain values of the permission based features in our datasets are given in Table V. From Table V, we can see that the information gain values can be affected by the semantically similar apps in the datasets.

TABLE V: Changes in Information Gain Values of Permissions in the Datasets

Permissions | Overall Dataset | Dataset1 | Dataset2 | Dataset3
---|---|---|---|---
READ_PHONE_STATE | 0.379 | 0.334 | 0.283 | 0.264
SEND_SMS | 0.257 | 0.112 | 0.121 | 0.126
RECEIVE_BOOT_COMPLETED | 0.189 | 0.193 | 0.143 | 0.122
READ_SMS | 0.175 | 0.127 | 0.115 | 0.109
RECEIVE_SMS | 0.160 | 0.068 | 0.088 | 0.096
ACCESS_WIFI_STATE | 0.138 | 0.209 | 0.169 | 0.142
WRITE_EXTERNAL_STORAGE | 0.120 | 0.114 | 0.108 | 0.101
WRITE_SMS | 0.096 | 0.083 | 0.074 | 0.070
WAKE_LOCK | 0.081 | 0.073 | 0.057 | 0.049
INTERNET | 0.079 | 0.060 | 0.052 | 0.053
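Since random forest splits are driven by information gain, the shifts visible in Table V explain the small accuracy changes. Below is a minimal sketch, written by us, of the information gain computation behind Table V, assuming binary (0/1) feature columns and labels:

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a binary label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(feature, y):
    """IG(y; feature) for a single 0/1 feature column."""
    gain = entropy(y)
    for v in (0, 1):
        mask = feature == v
        if mask.any():
            gain -= mask.mean() * entropy(y[mask])
    return gain

# Toy example: IG of one permission over a small label vector.
y = np.array([1, 1, 0, 0, 1, 0])                 # 1 = malware
read_phone_state = np.array([1, 1, 0, 1, 1, 0])  # permission present?
print(information_gain(read_phone_state, y))
```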
### V-B API Call Analysis

In this section, we re-implement the API call based malware detection mechanism on our datasets to analyze the impact of semantically similar apps in the dataset. Here, we reused the key API call based features mentioned in our previous work [29]; the list of API call based features is given in Table VI. We extract the API call based features of the malware and goodware apps in the dataset and construct a Comma Separated Value (CSV) file. This CSV file is supplied to the Weka framework and tested with machine learning classifiers employing the 10-fold cross-validation technique. We obtained high accuracy with the random forest classifier. From Table VII, we can see that the performance of the classifiers dropped slightly on the datasets of semantically dissimilar apps. The random forest algorithm works on the basis of information gain values; the information gain values of the API call based features in our datasets are given in Table VIII. From Table VIII, we can see that the information gain values can be affected by the semantically similar apps in the datasets.

TABLE VI: Selected API Calls for Malware Detection

SI.No | API Calls | SI.No | API Calls
---|---|---|---
1 | getNetworkType | 18 | getDisplayMessageBody
2 | getNetworkOperator | 19 | getPackageInfo
3 | loadClass | 20 | getLastKnownLocation
4 | getMessage | 21 | getAppPackageName
5 | getMethod | 22 | getCookies
6 | getClassLoader | 23 | isProviderEnabled
7 | GetLongitude | 24 | getSimOperatorName
8 | GetLatitude | 25 | getDeviceId
9 | createFromPdu | 26 | getCertStatus
10 | getInputStream | 27 | getSimSerialNumber
11 | getOutputStream | 28 | getLine1Number
12 | getWifiState | 29 | killProcess
13 | abortBroadCast | 30 | exec
14 | RequestFocus | 31 | getAppPackageName
15 | getSubscriberId | 32 | setSerialNumber
16 | getDisplayOriginatingAddress | 33 | getSessions
17 | sendTextMessage | 34 | getCredential

TABLE VII: K-Fold Cross Validation Results in the API Call Classifier

Dataset | TPR | FPR | Accuracy | Precision | F1Score
---|---|---|---|---|---
Overall Dataset | 0.954 | 0.046 | 0.957 | 0.957 | 0.957
Dataset 1 | 0.91 | 0.037 | 0.944 | 0.944 | 0.944
Dataset 2 | 0.856 | 0.044 | 0.929 | 0.929 | 0.928
Dataset 3 | 0.831 | 0.044 | 0.925 | 0.925 | 0.924

TABLE VIII: Changes in Information Gain Values of API Calls in the Datasets

API Call | Overall Dataset | Dataset1 | Dataset2 | Dataset3
---|---|---|---|---
getDeviceId | 0.205 | 0.301 | 0.249 | 0.229
sendTextMessage | 0.175 | 0.110 | 0.106 | 0.108
getLine1Number | 0.164 | 0.224 | 0.194 | 0.180
getNetworkOperator | 0.157 | 0.128 | 0.087 | 0.069
getSubscriberId | 0.154 | 0.199 | 0.228 | 0.237
createFromPdu | 0.082 | 0.157 | 0.062 | 0.065
abortBroadcast | 0.081 | 0.047 | 0.055 | 0.059
getSimOperatorName | 0.068 | 0.093 | 0.055 | 0.045
getSimSerialNumber | 0.065 | 0.114 | 0.128 | 0.122
getCellLocation | 0.049 | 0.076 | 0.064 | 0.059

Figure 5: Accuracy of Balanced Datasets without Semantically Similar Apps

## VI Testing the Performance on Balanced Datasets

In this section, we evaluate the performance of the API call and permission based classifiers on balanced datasets. From Table II, we can see that the number of unique apps in the goodware dataset is higher than that in the malware dataset. That is, the class distribution of apps presented to the classifier is not uniform, and a class imbalance problem may occur in evaluation. Hence, it is necessary to evaluate the performance on balanced datasets before confirming our findings. From the goodware datasets (Dataset 1, Dataset 2 and Dataset 3), we removed randomly selected goodware apps to balance the datasets, as sketched below.
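A short sketch of this balancing step, assuming the goodware and malware features are held in pandas DataFrames (file names are assumptions):

```python
import pandas as pd

# One row per app (assumed file names and layout).
goodware_df = pd.read_csv("goodware_features.csv")
malware_df = pd.read_csv("malware_features.csv")

# Randomly undersample goodware down to the malware count.
balanced_goodware = goodware_df.sample(n=len(malware_df), random_state=0)
balanced = pd.concat([balanced_goodware, malware_df], ignore_index=True)
```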
After balancing the datasets, we evaluated the performance of the API call and permission based classifiers. The performance of the classifiers on both the balanced and unbalanced datasets is given in Figure 5. From Figure 5, we can see that the performance of the classifiers drops after balancing the datasets.

## VII Overrated Performance in Holdout Evaluation

In this section, we illustrate the performance bias caused by semantically similar apps in the test dataset. In ML based Android malware detection, a malware researcher randomly divides the dataset samples into train and test sets for evaluation. Most of the time, the researcher is unaware of the duplicate copies in the dataset. The rate of duplicate samples in the test dataset may significantly inflate the measured performance of the model, so it is very difficult to tell whether the model generalizes to accurately detecting diverse malware apps. Here, we illustrate this phenomenon with the API call and permission based classifiers. We trained the classifiers with the features of diverse malware and goodware samples and tested them with duplicate-heavy samples. Specifically, the test dataset is constructed with semantically similar copies of malware samples that have many malicious features and semantically similar copies of goodware samples that have very few malicious features. We followed the 80:20 rule of thumb for the train-test split, and we shuffled the train and test set samples until obtaining an accuracy of 1. The performance metrics are given in Table IX. From Table IX, it is clear that a researcher can report any desired performance by shuffling the datasets in holdout evaluation. We therefore advise using our clustering algorithm to remove semantically similar apps from the dataset before holdout evaluation.

TABLE IX: Holdout Evaluation Results in the API Call and Permission Classifiers

Classifier | TPR | FPR | Accuracy | Precision | F1Score
---|---|---|---|---|---
API Call Classifier | 1 | 0 | 1 | 1 | 1
Permission Classifier | 1 | 0 | 1 | 1 | 1

## VIII Discussion and Conclusions

In this work, we proposed a clustering mechanism to assess the impact of semantically similar apps in Android malware datasets. We found that the presence of semantically similar apps, especially duplicate apps, inflates the performance of ML models in holdout evaluation. We therefore advise filtering out all semantically similar apps before performing holdout evaluation. Our clustering algorithm has some limitations which affect the clustering process. In an opcode injection attack, it is possible for an adversary to inject irrelevant opcodes in between the opcode subsequences of an application [33]; in such cases, the application may not become part of any cluster. In future work, we will explore additional features such as API call and permission sequences for efficiently clustering these apps. In our experiments, decompilation errors occurred in some applications; due to these errors, we cannot cluster those apps. In future work, we will investigate the reasons behind these decompilation errors and design new tools to decompile such apps.

## References

* [1] X. Sun, Y. Zhongyang, Z. Xin, B. Mao, and L. Xie, “Detecting code reuse in android applications using component-based control flow graph,” in IFIP International Information Security Conference, pp. 142–155, Springer, 2014.
* [2] T. McDonnell, B. Ray, and M.
Kim, “An empirical study of api stability and adoption in the android ecosystem,” in 2013 IEEE International Conference on Software Maintenance, pp. 70–79, IEEE, 2013.
* [3] J. Sahs and L. Khan, “A machine learning approach to android malware detection,” in 2012 European Intelligence and Security Informatics Conference, pp. 141–147, IEEE, 2012.
* [4] D. Arp, M. Spreitzenbarth, M. Hubner, H. Gascon, K. Rieck, and C. Siemens, “Drebin: Effective and explainable detection of android malware in your pocket,” in Ndss, vol. 14, pp. 23–26, 2014.
* [5] N. Marastoni, A. Continella, D. Quarta, S. Zanero, and M. D. Preda, “Groupdroid: Automatically grouping mobile malware by extracting code similarities,” in Proceedings of the 7th Software Security, Protection, and Reverse Engineering/Software Security and Protection Workshop, pp. 1–12, 2017.
* [6] S. Hanna, L. Huang, E. Wu, S. Li, C. Chen, and D. Song, “Juxtapp: A scalable system for detecting code reuse among android applications,” in International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, pp. 62–81, Springer, 2012.
* [7] J. Crussell, C. Gibler, and H. Chen, “Attack of the clones: Detecting cloned applications on android markets,” in European Symposium on Research in Computer Security, pp. 37–54, Springer, 2012.
* [8] K. Liu, S. Xu, G. Xu, M. Zhang, D. Sun, and H. Liu, “A review of android malware detection approaches based on machine learning,” IEEE Access, vol. 8, pp. 124579–124607, 2020.
* [9] L. Cen, C. S. Gates, L. Si, and N. Li, “A probabilistic discriminative model for android malware detection with decompiled source code,” IEEE Transactions on Dependable and Secure Computing, vol. 12, no. 4, pp. 400–412, 2014.
* [10] P. Rovelli and Ý. Vigfússon, “Pmds: Permission-based malware detection system,” in International Conference on Information Systems Security, pp. 338–357, Springer, 2014.
* [11] S. Arzt, S. Rasthofer, C. Fritz, E. Bodden, A. Bartel, J. Klein, Y. Le Traon, D. Octeau, and P. McDaniel, “Flowdroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for android apps,” ACM Sigplan Notices, vol. 49, no. 6, pp. 259–269, 2014.
* [12] K. Xu, Y. Li, and R. H. Deng, “Iccdetector: Icc-based malware detection on android,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 6, pp. 1252–1264, 2016.
* [13] G. Canfora, A. De Lorenzo, E. Medvet, F. Mercaldo, and C. A. Visaggio, “Effectiveness of opcode ngrams for detection of multi family android malware,” in 2015 10th International Conference on Availability, Reliability and Security, pp. 333–340, IEEE, 2015.
* [14] T. Kim, B. Kang, and E. G. Im, “Runtime detection framework for android malware,” Mobile Information Systems, vol. 2018, 2018.
* [15] J. Milosevic, M. Malek, and A. Ferrante, “A friend or a foe? detecting malware using memory and cpu features,” in SECRYPT, pp. 73–84, 2016.
* [16] G. Canfora, E. Medvet, F. Mercaldo, and C. A. Visaggio, “Detecting android malware using sequences of system calls,” in Proceedings of the 3rd International Workshop on Software Development Lifecycle for Mobile, pp. 13–20, 2015.
* [17] M. Zaman, T. Siddiqui, M. R. Amin, and M. S. Hossain, “Malware detection in android by network traffic analysis,” in 2015 International Conference on Networking Systems and Security (NSysS), pp. 1–5, IEEE, 2015.
* [18] S. Brahler, “Analysis of the android architecture,” Karlsruhe Institute for Technology, vol. 7, no. 8, 2010.
* [19] H.-S. Oh, B.-J. Kim, H.-K. Choi, and S.-M.
Moon, “Evaluation of android dalvik virtual machine,” in Proceedings of the 10th International Workshop on Java Technologies for Real-Time and Embedded Systems, pp. 115–124, 2012.
* [20] P. O. Fora, “Beginners guide to reverse engineering android apps,” in RSA Conference, pp. 21–22, 2014.
* [21] T. Oguchi, “Geomorphological debates in japan related to surface processes, tectonics, climate, research principles, and international geomorphology,” Geomorphology, vol. 366, p. 106805, 2020.
* [22] Y. Otsuka, “The faunal character of the japanese pleistocene marine mollusca, as evidence of climate having become colder during the pleistocene in japan,” Biogeograph Soc Japan, vol. 6, no. 16, pp. 165–170, 1936.
* [23] O. Akira, “Zoogeographical studies on the soleoid fishes found in japan and its neighbouring regions–ii,” Bull Jap Soc Sci Fish, vol. 22, no. 9, pp. 526–530, 1957.
* [24] R. Xu and D. Wunsch, “Survey of clustering algorithms,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 645–678, 2005.
* [25] S. Vassilvitskii and D. Arthur, “k-means++: The advantages of careful seeding,” in Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035, 2006.
* [26] N. McLaughlin, J. Martinez del Rincon, B. Kang, S. Yerima, P. Miller, S. Sezer, Y. Safaei, E. Trickel, Z. Zhao, A. Doupé, et al., “Deep android malware detection,” in Proceedings of the Seventh ACM Conference on Data and Application Security and Privacy, pp. 301–308, 2017.
* [27] H. Rawal and C. Parekh, “Android internal analysis of apk by droid_safe & apk tool,” International Journal of Advanced Research in Computer Science, vol. 8, no. 5, 2017.
* [28] K. Allix, T. F. Bissyandé, J. Klein, and Y. Le Traon, “Androzoo: Collecting millions of android apps for the research community,” in 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories (MSR), pp. 468–471, IEEE, 2016.
* [29] R. Surendran, T. Thomas, and S. Emmanuel, “A tan based hybrid model for android malware detection,” Journal of Information Security and Applications, vol. 54, p. 102483, 2020.
* [30] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The weka data mining software: an update,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
* [31] A. Liaw, M. Wiener, et al., “Classification and regression by randomforest,” R News, vol. 2, no. 3, pp. 18–22, 2002.
* [32] J. T. Kent, “Information gain and a general measure of correlation,” Biometrika, vol. 70, no. 1, pp. 163–173, 1983.
* [33] X. Zhang, J. Wang, M. Sun, and Y. Feng, “Andropgan: An opcode gan for android malware obfuscations,” in International Conference on Machine Learning for Cyber Security, pp. 12–25, Springer, 2020.
# On Metric Learning for Audio-Text Cross-Modal Retrieval

###### Abstract

Audio-text retrieval aims at retrieving a target audio clip or caption from a pool of candidates given a query in the other modality. Solving such a cross-modal retrieval task is challenging because it not only requires learning robust feature representations for both modalities, but also requires capturing the fine-grained alignment between these two modalities. Existing cross-modal retrieval models are mostly optimized by metric learning objectives, as both attempt to map data to an embedding space where similar data are close together and dissimilar data are far apart. Unlike other cross-modal retrieval tasks such as image-text and video-text retrieval, audio-text retrieval is still a largely unexplored task. In this work, we aim to study the impact of different metric learning objectives on the audio-text retrieval task. We present an extensive evaluation of popular metric learning objectives on the AudioCaps and Clotho datasets. We demonstrate that the NT-Xent loss, adapted from self-supervised learning, shows stable performance across different datasets and training settings, and outperforms the popular triplet-based losses. Our code is available at https://github.com/XinhaoMei/audio-text_retrieval.

Index Terms: metric learning, audio retrieval, text-based retrieval, cross-modal task

## 1 Introduction

Given an audio clip or a caption as a query, audio-text retrieval aims at retrieving a paired item from a pool of candidates in the other modality. This cross-modal retrieval task is challenging, as it requires not only learning robust feature representations for both the acoustic and textual modalities, but also capturing the fine-grained interaction between the learned acoustic and textual features and aligning them in a shared embedding space. Audio-text retrieval can potentially be applied in areas such as film and audiobook production and web search.

Cross-modal retrieval tasks (e.g., image-text retrieval and video-text retrieval) have received extensive attention in recent years and have made great progress [1, 2, 3, 4, 5, 6]. However, little attention has been paid to audio-text retrieval in the literature. One reason might be the lack of appropriate datasets; thus, early works focused on tag-based audio retrieval, where the queries were words rather than sentences. Chechik et al. [7] proposed a tag-based audio retrieval system using traditional machine learning techniques (e.g., support vector machines and Gaussian mixture models). Ikawa et al. [8] investigated searching sounds using onomatopoeic words. Elizalde et al. [9] employed a siamese network to align audio and textual features in a joint embedding space. Although these tag-based audio retrieval works show reasonable performance, they are constrained in the query format. Retrieving audio clips using free-form language (sentences) is more natural for humans. With the fast development of audio captioning over the recent three years [10, 11, 12, 13], publicly available audio captioning datasets have been released [14, 15], which are naturally suited for the free-form language-based audio-text retrieval task. Koepke et al. [16] first established benchmarks for free-form language-based audio retrieval, where they adapted models from video retrieval and made use of pre-trained models to alleviate the data scarcity problem.
Since both the audio and the text (captions) are sequence data, free-form language-based audio-text retrieval is more challenging than tag-based audio retrieval and is the focus of this paper. We use the terms audio-text and audio-caption interchangeably in this paper.

Similar to other cross-modal retrieval models [1], audio-text retrieval models can be built with two sub-networks, namely, an audio encoder and a text encoder. The objective of these two encoders is to map the audio clips and texts into a joint embedding space, where semantically similar embeddings are close to each other and dissimilar items are far away. We refer to the embeddings as Acoustic Semantic Embeddings (ASE), as they are learned via jointly modeling the audio and language modalities. The training objective of cross-modal retrieval models is consistent with that of metric learning [17]. To this end, metric learning has been a popular choice for the optimization of cross-modal retrieval models. Numerous metric learning objectives have been introduced for various tasks such as face identification [18], speaker recognition [19], and retrieval [1, 2]; however, there is no clear argument about which one is the most suited, since an objective may work well on specific tasks or data but may not generalize well to other tasks [20].

In this work, we aim to study and compare the impact of different metric learning objectives on the free-form language-based audio-text retrieval task in a constant training setting. We focus on the triplet loss and its variants, as they have demonstrated promising performance and are popularly employed [18]. In a triplet setting for audio-text retrieval, an audio clip and its corresponding caption are regarded as an anchor and a positive example, respectively, while other unpaired captions are regarded as negatives. The hinge-based triplet ranking loss sums over all negative samples within a mini-batch (thus we refer to it as triplet-sum). Faghri et al. [1] argued that hard negatives should be emphasised, as otherwise easy negatives may dominate the loss and create local minima; thus, they proposed a triplet ranking loss with hard negative mining (we refer to it as triplet-max) which focuses only on the hardest negative within a mini-batch. Wei et al. [21] further proposed a universal weighting framework for cross-modal retrieval, where the pairs are weighted based on their similarity scores (we refer to it as triplet-weighted). In addition to the triplet-based losses, we further adapt a contrastive loss used in self-supervised learning for supervised cross-modal retrieval, namely the normalized temperature-scaled cross entropy loss (NT-Xent) [22]. The NT-Xent loss is based on softmax and aims to identify the positive pairs within a mini-batch.

In summary, we first establish a baseline using pre-trained models to learn the acoustic semantic embeddings; then we present an extensive evaluation of the popular metric learning objectives described above on our baseline in a constant training setting. Contrary to popular belief [1, 20], we demonstrate that triplet losses with hard negative mining are sensitive to the training settings and may be hard to converge, while the NT-Xent loss shows stable performance with respect to different datasets and training settings and outperforms the triplet-based losses.
## 2 Audio-Text Retrieval with Metric Learning

In this section, we first formulate the audio-text retrieval problem and introduce the baseline model; then the metric learning objectives (loss functions) we evaluated are introduced.

### 2.1 Problem formulation

Let $D=\\{(a_{i},t_{i})\\}_{i=1}^{N}$ be an audio captioning dataset of $N$ examples, where $a_{i}$ is an audio clip and $t_{i}$ is the paired caption. Therefore, $(a_{i},t_{i})$ is regarded as a positive pair, while $(a_{i},t_{j})$ with $j\neq i$ is a negative pair. One audio clip could have multiple paired captions; we consider just one here for simplicity. Audio-text retrieval models usually consist of an audio encoder $f$ and a text encoder $g$, which project the audio clip and the text into a shared embedding space, respectively. For an audio-caption pair $(a_{i},t_{j})$, the similarity of the audio and caption can be measured by the cosine similarity of their embeddings:

$s_{ij}=\frac{f(a_{i})\cdot g(t_{j})}{||f(a_{i})||_{2}||g(t_{j})||_{2}}.$ (1)

The two encoders are trained to make the similarity scores of positive pairs $s_{ii}$ higher than those of negative pairs $s_{ij}$.

### 2.2 Model

The data available in audio captioning datasets is limited, and this data scarcity problem usually limits the model's ability to learn robust feature representations. Transfer learning has been adopted as a standard recipe to alleviate the data scarcity problem and has shown promising performance on the audio captioning task [23]. Therefore, pre-trained models are employed here.

Audio Encoder Pre-trained audio neural networks (PANNs) [24] are pre-trained on AudioSet with an audio tagging task and have been shown to provide robust audio representations that perform well on a wide range of audio-related tasks. The ResNet-38 in PANNs is employed as the audio encoder, with the last two linear layers discarded. Average and max pooling are applied to aggregate along the frequency dimension of the feature map output by the last convolutional block. A multilayer perceptron (MLP) block is used to project the audio features into a shared embedding space; it consists of two linear layers with a ReLU [25] activation layer between them.

Text Encoder Numerous large-scale pre-trained language models have been published in recent years, which show a powerful capability to model language and produce context-aware embeddings. BERT [26], which stands for Bidirectional Encoder Representations from Transformers, obtains state-of-the-art results on a wide variety of Natural Language Processing (NLP) tasks. The pre-trained BERT is employed as the text encoder here. A “<CLS>” token is added at the start of each sentence and used as the final sentence representation. An MLP block is also applied to project the sentence representation into the shared embedding space.

### 2.3 Loss functions

During training, we sample a mini-batch of audio-caption pairs $\\{a_{i},t_{i}\\}_{i=1}^{B}$, where $B$ is the batch size. Triplet-based loss functions are based on the concept of a triplet, which is made up of an anchor, a positive (the paired candidate in the other modality) and a negative (an unpaired candidate in the other modality). The anchor with its positive is a positive pair and the anchor with its negative is a negative pair, as defined above.
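As a concrete reference for the losses below, here is a minimal PyTorch sketch, written by us for illustration, of the batched similarity matrix $s_{ij}$ of Eq. (1); the encoder outputs are placeholders for the PANNs and BERT embeddings described above:

```python
import torch
import torch.nn.functional as F

def similarity_matrix(audio_emb, text_emb):
    """Cosine similarities s_ij (Eq. 1) for a mini-batch.
    audio_emb, text_emb: (B, D) encoder outputs. Row i / column j
    pairs audio a_i with caption t_j, so the diagonal holds the
    positive-pair scores s_ii."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    return a @ t.T  # (B, B)
```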
Triplet-sum For each query, the triplet-sum loss aims to maximize the similarity score of its positive pair while minimizing the similarity scores to all other negatives within a mini-batch; thus it can be formulated as

$\mathcal{L}=\frac{1}{B}\sum_{i=1}^{B}\sum_{j\neq i}\Big{(}[m+s_{ij}-s_{ii}]_{+}+[m+s_{ji}-s_{ii}]_{+}\Big{)},$ (2)

where $[x]_{+}=\max(0,x)$ and $m$ is a distance margin. Since audio-text retrieval is a bidirectional retrieval task (audio-to-text and text-to-audio), the loss has two terms: the first sums over all negative captions given a query audio clip, while the second sums over all negative audio clips given a query caption. If the similarity of the positive pair is larger than that of every negative in the mini-batch by the margin $m$, the loss will be zero.

Triplet-max The triplet-sum loss sums over all negatives for each query within a mini-batch. Faghri et al. [1] argued that the easy negatives may dominate the loss and make it get stuck in local minima; thus, hard negatives should be emphasized. They proposed the triplet-max loss, which focuses on the hardest negatives during training and can be formulated as:

$\mathcal{L}=\frac{1}{B}\sum_{i=1}^{B}\max_{j\neq i}{[m+s_{ij}-s_{ii}]_{+}}+\max_{j\neq i}{[m+s_{ji}-s_{ii}]_{+}}.$ (3)

For each query, it aims to maximize the similarity score of its positive pair while minimizing only the similarity score to its hardest negative within a mini-batch, that is, the negative closest to the query in the embedding space. In this way, the easy negatives do not contribute to the loss and the hardest negative receives all of the gradient.

Triplet-weighted Both the positive and negative pairs are treated equally in the triplet-sum and triplet-max losses. Wei et al. [21] further introduced a universal function $G(\cdot)$ to weight the pairs based on their similarity scores. Specifically, the weighting function is defined as a polynomial function. For a positive pair $(a_{i},t_{i})$, the positive weight function $G_{\rm{pos}}$ is defined as:

$G_{\rm{pos}}=a_{p}s_{ii}^{p}+a_{p-1}s_{ii}^{p-1}+\cdots+a_{1}s_{ii}^{1}+a_{0},$ (4)

where $p$ is the order of the polynomial function and $\\{a_{i}\\}_{i=0}^{p}$ are hyper-parameters. The negative weight function $G_{\rm{neg}}$ can be formulated as:

$G_{\rm{neg}}=b_{q}s_{ij}^{q}+b_{q-1}s_{ij}^{q-1}+\cdots+b_{1}s_{ij}^{1}+b_{0},$ (5)

where $\\{b_{i}\\}_{i=0}^{q}$ are hyper-parameters and $q$ is the order. If the similarity score of the positive pair $(a_{i},t_{i})$ increases, the positive weight value decreases. In contrast, for a negative pair $(a_{i},t_{j})$, the negative weight value increases if the similarity score of the negative pair increases.
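The following are minimal PyTorch sketches, written by us, of the triplet-sum (Eq. 2) and triplet-max (Eq. 3) objectives; they operate on the $(B,B)$ similarity matrix `s` from the sketch in Section 2.1, with positives on the diagonal. The triplet-weighted loss additionally scales such hinge terms by the polynomial weights of Eqs. (4)–(6); we omit it here for brevity.

```python
import torch

def triplet_sum_loss(s, margin=0.2):
    """Eq. (2): hinge over every negative in the batch, both directions."""
    B = s.size(0)
    pos = s.diag().view(B, 1)                   # s_ii, broadcast over rows
    off_diag = ~torch.eye(B, dtype=torch.bool, device=s.device)
    cost_t = (margin + s - pos).clamp(min=0)    # audio i -> negative captions j
    cost_a = (margin + s.T - pos).clamp(min=0)  # caption i -> negative audios j
    return (cost_t[off_diag].sum() + cost_a[off_diag].sum()) / B

def triplet_max_loss(s, margin=0.2):
    """Eq. (3): keep only the hardest negative per query."""
    B = s.size(0)
    pos = s.diag()
    eye = torch.eye(B, dtype=torch.bool, device=s.device)
    masked = s.masked_fill(eye, torch.finfo(s.dtype).min)  # exclude positives
    cost_t = (margin + masked.max(dim=1).values - pos).clamp(min=0)
    cost_a = (margin + masked.T.max(dim=1).values - pos).clamp(min=0)
    return (cost_t + cost_a).mean()
```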
Table 1: Results of the experiments. * indicates the learning rate is $5\text{\times}{10}^{-5}$, otherwise $1\text{\times}{10}^{-4}$. (T2A: text-to-audio; A2T: audio-to-text.)

Dataset | Fine-tune | Objective | T2A R$@1$ | T2A R$@5$ | T2A R$@10$ | A2T R$@1$ | A2T R$@5$ | A2T R$@10$
---|---|---|---|---|---|---|---|---
AudioCaps | No | Triplet-sum | $17.2_{\pm 0.5}$ | $47.6_{\pm 0.2}$ | $64.3_{\pm 0.1}$ | $19.3_{\pm 1.5}$ | $51.4_{\pm 0.8}$ | $67.2_{\pm 0.7}$
AudioCaps | No | Triplet-max | $19.9_{\pm 0.2}$ | $51.6_{\pm 0.3}$ | $67.4_{\pm 0.3}$ | $21.2_{\pm 0.9}$ | $54.9_{\pm 1.4}$ | $70.2_{\pm 1.0}$
AudioCaps | No | Triplet-weighted | $19.9_{\pm 0.2}$ | $52.1_{\pm 0.7}$ | $67.6_{\pm 0.6}$ | $21.9_{\pm 0.8}$ | $56.3_{\pm 0.3}$ | $71.5_{\pm 0.4}$
AudioCaps | No | NT-Xent | $19.2_{\pm 0.4}$ | $51.1_{\pm 0.2}$ | $66.6_{\pm 0.2}$ | $21.3_{\pm 0.5}$ | $53.6_{\pm 0.7}$ | $69.8_{\pm 0.5}$
AudioCaps | Yes | Triplet-sum | $32.2_{\pm 0.3}$ | $68.2_{\pm 0.6}$ | $81.6_{\pm 0.5}$ | $36.1_{\pm 1.2}$ | $69.2_{\pm 1.3}$ | $81.4_{\pm 1.7}$
AudioCaps | Yes | Triplet-max | $32.7_{\pm 0.3}$ | $68.3_{\pm 0.8}$ | $81.6_{\pm 0.5}$ | $38.7_{\pm 1.0}$ | $70.6_{\pm 0.7}$ | $82.2_{\pm 0.8}$
AudioCaps | Yes | Triplet-weighted | $32.6_{\pm 0.6}$ | $67.7_{\pm 0.7}$ | $81.0_{\pm 0.8}$ | $39.6_{\pm 1.1}$ | $72.0_{\pm 2.2}$ | $82.2_{\pm 1.4}$
AudioCaps | Yes | NT-Xent | $33.9_{\pm 0.4}$ | $69.7_{\pm 0.2}$ | $82.6_{\pm 0.3}$ | $39.4_{\pm 1.0}$ | $72.0_{\pm 1.0}$ | $83.9_{\pm 0.6}$
Clotho | No | Triplet-sum | $7.0_{\pm 0.2}$ | $23.0_{\pm 0.3}$ | $34.8_{\pm 0.5}$ | $8.3_{\pm 0.3}$ | $25.4_{\pm 0.8}$ | $36.7_{\pm 0.5}$
Clotho | No | Triplet-max | $7.9_{\pm 0.2}$ | $23.6_{\pm 0.4}$ | $34.2_{\pm 0.5}$ | $8.8_{\pm 1.0}$ | $25.7_{\pm 1.0}$ | $36.3_{\pm 0.8}$
Clotho | No | Triplet-weighted* | $7.2_{\pm 0.2}$ | $22.1_{\pm 0.3}$ | $33.0_{\pm 0.3}$ | $7.5_{\pm 0.4}$ | $23.2_{\pm 0.1}$ | $33.3_{\pm 0.3}$
Clotho | No | NT-Xent | $8.0_{\pm 0.2}$ | $25.3_{\pm 0.1}$ | $36.9_{\pm 0.3}$ | $9.2_{\pm 0.7}$ | $27.9_{\pm 0.5}$ | $39.0_{\pm 0.7}$
Clotho | Yes | Triplet-sum | $14.2_{\pm 0.5}$ | $36.6_{\pm 0.5}$ | $49.3_{\pm 0.7}$ | $16.1_{\pm 0.7}$ | $37.5_{\pm 1.2}$ | $50.7_{\pm 1.0}$
Clotho | Yes | Triplet-max* | $14.2_{\pm 0.9}$ | $36.5_{\pm 1.2}$ | $49.0_{\pm 0.4}$ | $15.8_{\pm 0.4}$ | $36.4_{\pm 1.6}$ | $49.6_{\pm 2.0}$
Clotho | Yes | Triplet-weighted* | $14.2_{\pm 0.4}$ | $36.6_{\pm 0.5}$ | $49.7_{\pm 0.3}$ | $16.9_{\pm 0.4}$ | $38.1_{\pm 0.2}$ | $51.4_{\pm 0.2}$
Clotho | Yes | NT-Xent | $14.4_{\pm 0.4}$ | $36.6_{\pm 0.2}$ | $49.9_{\pm 0.2}$ | $16.2_{\pm 0.7}$ | $37.5_{\pm 0.9}$ | $50.2_{\pm 0.7}$

Let $N_{a_{i}}=\\{s_{ij}:j\neq i\\}$ and $N_{t_{i}}=\\{s_{ji}:j\neq i\\}$ be the similarity scores of all negative pairs of an audio sample $a_{i}$ and a text sample $t_{i}$, respectively. The loss can be formulated as:

$\begin{split}\mathcal{L}&=\frac{1}{B}\sum_{i=1}^{B}\Big{[}\sum_{p=0}^{P}a_{p}s_{ii}^{p}+\sum_{q=0}^{Q}b_{q}\max\\{N_{a_{i}}^{q}\\}\Big{]}_{+}\\\ &+\frac{1}{B}\sum_{i=1}^{B}\Big{[}\sum_{p=0}^{P}a_{p}s_{ii}^{p}+\sum_{q=0}^{Q}b_{q}\max\\{N_{t_{i}}^{q}\\}\Big{]}_{+},\end{split}$ (6)

where $P$ and $Q$ are the highest powers for the positive and negative pairs, respectively. The $\max$ term in the equation selects the hardest negative pair; thus, the loss is referred to as the maximum polynomial loss. The authors also proposed an average polynomial loss, which first selects informative negative pairs based on a mining policy and averages the similarity scores of the selected pairs. We employ only the maximum polynomial loss here, as it performs better than the average one in [21].

NT-Xent NT-Xent is a contrastive loss based on softmax, proposed by Chen et al. [22] to learn visual representations via self-supervised learning. We adapt it here for the supervised cross-modal retrieval task, as follows:

$\begin{split}\mathcal{L}=-\frac{1}{B}&\left(\sum_{i=1}^{B}\log\frac{\exp({s_{ii}/\tau})}{\sum_{j=1}^{B}\exp{({s_{ij}/\tau)}}}+\right.\\\ &\left.\sum_{i=1}^{B}\log\frac{\exp({s_{ii}/\tau})}{\sum_{j=1}^{B}\exp{({s_{ji}/\tau)}}}\right),\end{split}$ (7)

where $\tau$ is a temperature hyper-parameter. It aims to maximize the similarity of the positive pair with respect to all negative pairs within a mini-batch, and the final loss is computed in a bidirectional manner.
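A matching PyTorch sketch of the adapted NT-Xent objective in Eq. (7); it is equivalent to a bidirectional cross-entropy over the similarity matrix with the diagonal as targets (the function name is ours):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(s, tau=0.07):
    """Eq. (7): softmax over each row (audio-to-text) and each
    column (text-to-audio), with the positive pair on the diagonal.
    The two cross-entropy terms are each averaged over B, then summed,
    matching the bidirectional form of Eq. (7)."""
    logits = s / tau
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)
```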
## 3 Experiments

### 3.1 Datasets

We focus on free-form language-based audio-text retrieval; thus, audio captioning datasets, in which each audio clip is annotated by humans with natural sentences, are naturally employed. Extensive experiments are carried out on the AudioCaps [14] and Clotho [15] datasets.

AudioCaps AudioCaps is the largest audio captioning dataset, with around $50$k audio clips. All the audio clips are $10$ seconds long and are sourced from AudioSet [27], the largest dataset for audio tagging. In our downloaded version, the training set contains $49\,274$ audio clips, each with one human-annotated caption; the validation and test sets contain $494$ and $957$ audio clips, respectively, each with five human-annotated captions.

Clotho Clotho is an audio captioning dataset whose audio clips are collected from the Freesound archive (https://freesound.org/). The length of the audio clips ranges uniformly from $15$ to $30$ seconds. Clotho v2 is used here. There are $3839$ audio clips in the training set and $1045$ audio clips in each of the validation and test sets. All the audio clips have five diverse human-annotated captions of eight to $20$ words in length.

Table 2: Experimental results with different batch sizes on the AudioCaps dataset. The pre-trained encoders are not fine-tuned. (T2A: text-to-audio; A2T: audio-to-text.)

Batch size | Objective | T2A R$@1$ | T2A R$@5$ | T2A R$@10$ | A2T R$@1$ | A2T R$@5$ | A2T R$@10$
---|---|---|---|---|---|---|---
32 | Triplet-sum | $17.2_{\pm 0.5}$ | $47.6_{\pm 0.2}$ | $64.3_{\pm 0.1}$ | $19.3_{\pm 1.5}$ | $51.4_{\pm 0.8}$ | $67.2_{\pm 0.7}$
32 | Triplet-max | $19.9_{\pm 0.2}$ | $51.6_{\pm 0.3}$ | $67.4_{\pm 0.3}$ | $21.2_{\pm 0.9}$ | $54.9_{\pm 1.4}$ | $70.2_{\pm 1.0}$
32 | Triplet-weighted | $19.9_{\pm 0.2}$ | $52.1_{\pm 0.7}$ | $67.6_{\pm 0.6}$ | $21.9_{\pm 0.8}$ | $56.3_{\pm 0.3}$ | $71.5_{\pm 0.4}$
32 | NT-Xent | $19.2_{\pm 0.4}$ | $51.1_{\pm 0.2}$ | $66.6_{\pm 0.2}$ | $21.3_{\pm 0.5}$ | $53.6_{\pm 0.7}$ | $69.8_{\pm 0.5}$
128 | Triplet-sum | $17.0_{\pm 0.7}$ | $47.2_{\pm 0.2}$ | $62.7_{\pm 0.2}$ | $20.0_{\pm 1.2}$ | $49.9_{\pm 1.5}$ | $66.7_{\pm 1.0}$
128 | Triplet-max | $11.8_{\pm 0.3}$ | $38.1_{\pm 0.9}$ | $53.7_{\pm 0.5}$ | $15.0_{\pm 0.5}$ | $43.4_{\pm 0.6}$ | $59.6_{\pm 0.7}$
128 | Triplet-weighted | $10.1_{\pm 0.6}$ | $31.7_{\pm 0.9}$ | $46.5_{\pm 0.9}$ | $10.7_{\pm 0.9}$ | $33.4_{\pm 1.7}$ | $48.7_{\pm 2.6}$
128 | NT-Xent | $19.5_{\pm 0.1}$ | $50.4_{\pm 0.3}$ | $65.6_{\pm 0.8}$ | $22.2_{\pm 0.7}$ | $53.7_{\pm 2.0}$ | $69.5_{\pm 0.4}$

### 3.2 Experimental setups

Log mel-spectrograms are used as the audio features; they are extracted using a $1024$-point Hanning window with a $320$-point hop size and $64$ mel bins. All the models are trained for $50$ epochs using Adam [28]. The learning rate is set to $1\text{\times}{10}^{-4}$ or $5\text{\times}{10}^{-5}$ and is decayed to $1$/$10$ of itself every $20$ epochs. The batch size is set to $32$ for the AudioCaps dataset and $24$ for the Clotho dataset.
We perform experiments both freezing and fine-tuning the pre-trained models. The best model is selected based on the sum of recalls on the validation set. For all triplet-based losses, the distance margin $m$ is set to $0.2$, and the temperature hyper-parameter $\tau$ in the NT-Xent loss is set to $0.07$. For the triplet-weighted loss, we follow the hyper-parameter settings in [21], that is, $P=2,\\{a_{0}=0.5,a_{1}=-0.7,a_{2}=0.2\\}$, and $Q=2,\\{b_{0}=0.03,b_{1}=-0.4,b_{2}=0.9\\}$. The dimension of the shared embedding space is $1024$. The learned acoustic semantic embeddings are normalized. All experiments are carried out on a single RTX3090 GPU.

### 3.3 Evaluation protocol

Recall at rank $k$ (R$@k$), the standard cross-modal retrieval evaluation protocol, is used as the evaluation metric. R$@k$ measures the percentage of targets retrieved within the top $k$ ranked results; thus, the higher the score, the better the performance. We report R$@1$, R$@5$, and R$@10$. All the experiments are repeated three times with different training seeds, and we report the mean and standard deviation of the metrics.
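For reference, a small sketch of how R$@k$ can be computed for text-to-audio retrieval from the similarity matrix, under the simplifying assumption of a one-to-one, index-aligned pairing between captions and audio clips:

```python
import torch

def recall_at_k(s, k):
    """R@k for text-to-audio retrieval. For caption i, rank all audio
    clips by column i of the (audio x text) similarity matrix and check
    whether the paired clip (index i) appears in the top k."""
    B = s.size(0)
    ranks = s.T.argsort(dim=1, descending=True)          # caption i -> sorted audio indices
    targets = torch.arange(B, device=s.device).view(B, 1)
    hits = (ranks[:, :k] == targets).any(dim=1)
    return 100.0 * hits.float().mean().item()
```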
## 4 Results

### 4.1 Model performance

The experimental results are shown in Table 1. The baseline model is simple but effective. It can be observed that fine-tuning substantially improves the model performance and leads to state-of-the-art results on both datasets. In addition, the scores on the Clotho dataset are relatively lower than those on the AudioCaps dataset, which is consistent with the results in [16]. The reasons might be twofold: (1) the size of the training data is limited in the Clotho dataset, and (2) the captions for each audio clip in the Clotho dataset are more diverse, making it challenging to map the diverse captions close to each other in the shared acoustic semantic embedding space.

### 4.2 Metric learning objectives

As can be seen in Table 1, the NT-Xent loss shows stable performance on both the AudioCaps and Clotho datasets regardless of whether the pre-trained encoders are fine-tuned or frozen. It outperforms the three triplet-based losses in most cases on both text-to-audio and audio-to-text retrieval. Among the triplet-based losses, the models trained via triplet-sum are not as good as the others, especially when the encoders are frozen; however, they achieve similar performance on the Clotho dataset when the encoders are fine-tuned. The triplet-max and triplet-weighted losses achieve similar results on the AudioCaps dataset, but the models trained via the triplet-weighted loss perform less well than the others on the Clotho dataset when the encoders are frozen. The weighting method does not bring substantial improvements. The reason might be that the weighting function has many hyper-parameters to be tuned, while we have used the values from the original work [21]; tuning the hyper-parameters for the datasets we evaluated may lead to better performance. Overall, the NT-Xent loss outperforms the three triplet-based losses on both text-to-audio and audio-to-text retrieval in most situations.

In our experiments, we also found that the two losses based on hardest negative mining, triplet-max and triplet-weighted, are sensitive to the training settings, while the other two, triplet-sum and NT-Xent, are more robust to different training settings. For example, triplet-max and triplet-weighted need a well-chosen learning rate; otherwise, the models are difficult to converge. On the AudioCaps dataset, all the models are trained with a learning rate of $1\text{\times}{10}^{-4}$ and all converge well. However, the models trained via triplet-max and triplet-weighted with such a learning rate do not converge on the Clotho dataset. Furthermore, triplet-max and triplet-weighted are sensitive to the initialization of model parameters, and we found that some of the models with different training seeds may not converge under the same training settings.

In addition, we study the impact of the batch size on the AudioCaps dataset without fine-tuning the encoders. The results are shown in Table 2. We can observe that the performance of models trained via the triplet-max and triplet-weighted losses degrades considerably when the batch size is increased to $128$, while models trained via the other two losses (triplet-sum and NT-Xent) show stable performance with respect to the change of the batch size. This is somewhat inconsistent with results in the literature, where it is generally believed that larger batch sizes lead to better performance for the triplet-based losses [18]. We found that triplet-max and triplet-weighted do not converge with a batch size of 128; the reason might be that the learning rate should also be tuned to match the larger batch size. Overall, the triplet-sum and NT-Xent losses are more robust to different training settings and datasets, while triplet-max and triplet-weighted are more difficult to train.

## 5 Conclusions

We have presented a simple but effective model to learn Acoustic Semantic Embeddings for the free-form language-based audio-text retrieval task, and we have studied the impact of metric learning objectives on our model in a constant training setting. We empirically demonstrated that the metric learning objectives have a significant impact on the model performance: the NT-Xent loss outperformed the popular triplet-based losses and showed stable performance with respect to different training settings and datasets. The triplet losses with hard negative mining need careful tuning of hyper-parameters and are sensitive to the initialization of model parameters.

## 6 Acknowledgements

This work is partly supported by a Newton Institutional Links Award from the British Council, titled “Automated Captioning of Image and Audio for Visually and Hearing Impaired” (Grant number 623805725) and grants EP/T019751/1 and EP/V002856/1 from the Engineering and Physical Sciences Research Council (EPSRC). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.

## References

* [1] F. Faghri, D. J. Fleet, J. R. Kiros, and S. Fidler, “VSE++: Improving visual-semantic embeddings with hard negatives,” in _Proceedings of the British Machine Vision Conference (BMVC)_, 2018. [Online]. Available: https://github.com/fartashf/vsepp
* [2] K. Li, Y. Zhang, K. Li, Y. Li, and Y. Fu, “Visual semantic reasoning for image-text matching,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 4654–4662.
* [3] X. Li, X. Yin, C. Li, P. Zhang, X. Hu, L. Zhang, L. Wang, H. Hu, L. Dong, F. Wei _et al._, “Oscar: Object-semantics aligned pre-training for vision-language tasks,” in _European Conference on Computer Vision_. Springer, 2020, pp. 121–137.
* [4] Y. Liu, S. Albanie, A. Nagrani, and A. Zisserman, “Use what you have: Video retrieval using representations from collaborative experts,” in _British Machine Vision Conference_, 2019.
* [5] A.
Miech, I. Laptev, and J. Sivic, “Learning a text-video embedding from incomplete and heterogeneous data,” _arXiv preprint arXiv:1804.02516_, 2018.
* [6] V. Gabeur, C. Sun, K. Alahari, and C. Schmid, “Multi-modal Transformer for Video Retrieval,” in _European Conference on Computer Vision (ECCV)_, 2020.
* [7] G. Chechik, E. Ie, M. Rehn, S. Bengio, and D. Lyon, “Large-scale content-based audio retrieval from text queries,” in _Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval_, 2008, pp. 105–112.
* [8] S. Ikawa and K. Kashino, “Acoustic event search with an onomatopoeic query: measuring distance between onomatopoeic words and sounds,” in _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE)_, 2018, pp. 59–63.
* [9] B. Elizalde, S. Zarar, and B. Raj, “Cross modal audio search and retrieval with joint embeddings based on text and audio,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2019, pp. 4095–4099.
* [10] X. Mei, X. Liu, Q. Huang, M. D. Plumbley, and W. Wang, “Audio captioning transformer,” in _Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)_, Barcelona, Spain, November 2021, pp. 211–215.
* [11] X. Mei, X. Liu, J. Sun, M. D. Plumbley, and W. Wang, “Diverse audio captioning via adversarial training,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2022.
* [12] X. Liu, Q. Huang, X. Mei, T. Ko, H. Tang, M. D. Plumbley, and W. Wang, “CL4AC: A contrastive loss for audio captioning,” in _Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)_, Barcelona, Spain, November 2021, pp. 196–200.
* [13] X. Liu, X. Mei, Q. Huang, J. Sun, J. Zhao, H. Liu, M. D. Plumbley, V. Kılıç, and W. Wang, “Leveraging pre-trained bert for audio captioning,” _arXiv preprint arXiv:2203.02838_, 2022.
* [14] C. D. Kim, B. Kim, H. Lee, and G. Kim, “AudioCaps: Generating captions for audios in the wild,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, 2019, pp. 119–132.
* [15] K. Drossos, S. Lipping, and T. Virtanen, “Clotho: An audio captioning dataset,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2020, pp. 736–740.
* [16] A. S. Koepke, A.-M. Oncescu, J. Henriques, Z. Akata, and S. Albanie, “Audio retrieval with natural language queries: A benchmark study,” _IEEE Transactions on Multimedia_, 2022.
* [17] K. Musgrave, S. Belongie, and S.-N. Lim, “A metric learning reality check,” in _European Conference on Computer Vision_. Springer, 2020, pp. 681–699.
* [18] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2015, pp. 815–823.
* [19] J. S. Chung, J. Huh, S. Mun, M. Lee, H.-S. Heo, S. Choe, C. Ham, S. Jung, B.-J. Lee, and I. Han, “In defence of metric learning for speaker recognition,” _Proc. Interspeech 2020_, pp. 2977–2981, 2020.
* [20] M. Bleeker and M. de Rijke, “Do lessons from metric learning generalize to image-caption retrieval?” _arXiv preprint arXiv:2202.07474_, 2022.
* [21] J. Wei, X. Xu, Y. Yang, Y. Ji, Z. Wang, and H. T.
Shen, “Universal weighting metric learning for cross-modal matching,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020.
* [22] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in _International Conference on Machine Learning_. PMLR, 2020, pp. 1597–1607.
* [23] X. Mei, Q. Huang, X. Liu, G. Chen, J. Wu, Y. Wu, J. Zhao, S. Li, T. Ko, H. Tang, X. Shao, M. D. Plumbley, and W. Wang, “An encoder-decoder based audio captioning system with transfer and reinforcement learning,” in _Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)_, Barcelona, Spain, November 2021, pp. 206–210.
* [24] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley, “PANNs: Large-scale pretrained audio neural networks for audio pattern recognition,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 28, pp. 2880–2894, 2020.
* [25] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_. JMLR Workshop and Conference Proceedings, 2011, pp. 315–323.
* [26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, 2019, pp. 4171–4186.
* [27] J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio Set: An ontology and human-labeled dataset for audio events,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, New Orleans, LA, 2017.
* [28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _ICLR (Poster)_, 2015. [Online]. Available: http://arxiv.org/abs/1412.6980
# A short note on model selection by LASSO methods in a change-point model

Fuqi Chen University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. Email<EMAIL_ADDRESS>and Sévérien Nkurunziza University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. Email<EMAIL_ADDRESS>

###### Abstract

In Ciuperca (2012) (Ciuperca. Model selection by LASSO methods in a change-point model, Stat. Papers, 2012; (in press)), the author considered a linear regression model with multiple change-points occurring at unknown times. In particular, the author studied the asymptotic properties of the LASSO-type and of the adaptive LASSO estimators. While the established results seem interesting, we point out a major error in the proof of the most important result of the quoted paper. Further, we present a corrected result and proof.

Keywords: Asymptotic properties; Change-points; Model selection; LASSO; Regression.

## 1 Introduction

In Ciuperca (2012), the author considered a linear regression model with multiple change-points occurring at unknown times. In particular, the author studied the asymptotic properties of the LASSO-type and of the adaptive LASSO estimators. While the established results seem interesting, we point out a major error in the proof of one of the important results. In particular, the proof of Part (ii) of Lemma 3 in Ciuperca (2012) is based on the inequality $|a^{2}-b^{2}|\leqslant(a-b)^{2}$, which is wrong. Indeed, taking $a=2$ and $b=1$, we get $|a^{2}-b^{2}|=3>(2-1)^{2}=1$, which contradicts the inequality used in the quoted paper. For the sake of clarity, we use the same notation and we suppose that the main assumptions in Ciuperca (2012) hold. Below, we recall these assumptions for the convenience of the reader. Namely, we consider the following model: $Y_{i}=f_{\theta}(X_{i})+\varepsilon_{i}$, where

$f_{\theta}(X_{i})=X^{\prime}_{i}\phi_{1}\mathbb{I}_{\left\\{i<l_{1}\right\\}}+X^{\prime}_{i}\phi_{2}\mathbb{I}_{\left\\{l_{1}\leq i<l_{2}\right\\}}+\dots+X^{\prime}_{i}\phi_{K+1}\mathbb{I}_{\left\\{i\geq l_{K}\right\\}},\quad i=1,\dots,n,$

$\mathbb{I}_{A}$ denotes the indicator function of the event $A$, $Y_{i}$ denotes the response variable, $X_{i}$ is a $p$-vector of regressors, $(\varepsilon_{i})_{1\leqslant i\leqslant n}$ are the errors, which are assumed to be independent and identically distributed (i.i.d.) random variables, and $\phi_{r}\in\Gamma\subset\mathbb{R}^{p}$, where $\Gamma$ is compact, $r=1,2,\dots,K+1$. The model parameters are given by $\theta=(\theta_{1},\theta_{2})$, with the regression parameters $\theta_{1}=(\phi_{1},\dots,\phi_{K+1})$ and the change-points $\theta_{2}=(l_{1},\dots,l_{K})$. In addition, we set $\theta_{1}^{0}=(\phi_{1}^{0},\dots,\phi_{K+1}^{0})$ and $\theta_{2}^{0}=(l_{1}^{0},\dots,l_{K}^{0})$ to be the true values of $\theta_{1}$ and $\theta_{2}$, respectively. As in Ciuperca (2012), we impose the following conditions.

### Main Assumptions

$\bm{(H_{1})}$ There exist two positive constants $u$ and $c_{0}(>0)$ such that $l_{r+1}-l_{r}\geqslant c_{0}[n^{u}]$ for every $r\in\\{1,\dots,K\\}$, with $l_{0}=1$ and $l_{K+1}=n$. Without loss of generality, we consider $3/4\leqslant u\leqslant 1$ and $c_{0}=1$.

$(\bm{H_{2}})$ $n^{-1}\,\displaystyle{\max_{1\leqslant i\leqslant n}}(X^{\prime}_{i}X_{i})\xrightarrow[n\rightarrow\infty]{}0$ and, for any $r=1,\dots,K+1$, the matrix $C_{n,r}\equiv(l_{r}-l_{r-1})^{-1}\displaystyle{\sum_{i=l_{r-1}+1}^{l_{r}}}X_{i}X^{\prime}_{i}\xrightarrow[n\rightarrow\infty]{}C_{r}$, where $C_{r}$ is a non-negative definite matrix.
$(\bm{H_{3}})$ The errors $\varepsilon_{i}$ are absolutely continuous random variables with $\textrm{E}(\varepsilon_{i})=0$ and $\textrm{E}(\varepsilon_{i}^{2})=\sigma^{2}$, $i=1,2,\dots,n$.

We assume that $\phi_{r}\neq\phi_{r+1}$, $r=1,\dots,K$, and consider the following penalized sum:

$S(l_{1},\dots,l_{K})=\sum_{r=1}^{K+1}\Big{[}\inf_{\phi_{r}}\sum_{i=l_{r-1}+1}^{l_{r}}\Big{(}(Y_{i}-X^{\prime}_{i}\phi_{r})^{2}+\frac{\lambda_{n,(l_{r-1},l_{r})}}{l_{r}-l_{r-1}}\sum_{u=1}^{p}|\phi_{r,u}|^{\gamma}\Big{)}\Big{]},$

where $\lambda_{n,(l_{r-1},l_{r})}=O\big((l_{r}-l_{r-1})^{1/2}\big)$ is the tuning parameter and $\gamma>0$. We define the LASSO-type estimator of $(\theta_{1}^{0},\theta_{2}^{0})$, say $(\hat{\theta}_{1}^{s},\hat{\theta}_{2}^{s})$, where $\hat{\theta}_{1}^{s}=(\hat{\phi}_{1}^{s},\dots,\hat{\phi}_{K+1}^{s})$ and $\hat{\theta}_{2}^{s}=(\hat{l}_{1}^{s},\dots,\hat{l}_{K}^{s})$, by

$\hat{\phi}_{r}^{s}=\displaystyle{\arg\min_{\phi_{r}}}\sum_{i=l_{r-1}+1}^{l_{r}}\Big{(}(Y_{i}-X^{\prime}_{i}\phi_{r})^{2}+\frac{\lambda_{n,(l_{r-1},l_{r})}}{l_{r}-l_{r-1}}\sum_{u=1}^{p}|\phi_{r,u}|^{\gamma}\Big{)},\quad\forall r=1,\dots,K+1,$

and

$\hat{\theta}_{2}^{s}=\displaystyle{\arg\min_{\theta_{2}}}S(l_{1},\dots,l_{K}).$

Note that, for $\gamma=1$ and $\gamma=2$, we obtain the LASSO and ridge estimators, respectively. The rest of this paper is organized as follows. Section 2 gives the main result of this paper, Section 3 contains a concluding remark, and the proof of the main result is given in the Appendix.

## 2 Main result

###### Lemma 2.1.

Under Assumptions $(\bm{H_{2}})$ and $(\bm{H_{3}})$, for all $n_{1}$, $n_{2}\in\mathbb{N}$ such that $n_{1}\geqslant n^{u}$, with $3/4\leq u\leq 1$, and $n_{2}\leq n^{v}$, $v<1/4$, consider the model:

$\displaystyle Y_{i}$ $\displaystyle=$ $\displaystyle X_{i}^{\prime}\phi_{1}^{0}+\epsilon_{i},\;i=1,\dots,n_{1},$ $\displaystyle Y_{i}$ $\displaystyle=$ $\displaystyle X_{i}^{\prime}\phi_{2}^{0}+\epsilon_{i},\;i=n_{1}+1,\dots,n_{1}+n_{2},$

with $\phi_{1}^{0}\neq\phi_{2}^{0}$. We set $A_{n_{1}+n_{2}}^{s}(\phi)=\displaystyle{\sum_{i=1}^{n_{1}}}\eta_{i;(0,n_{1})}^{s}(\phi,\phi_{1}^{0})+\displaystyle{\sum_{i=n_{1}+1}^{n_{1}+n_{2}}}\eta_{i;(n_{1},n_{1}+n_{2})}^{s}(\phi,\phi_{2}^{0})$ and $\hat{\phi}_{n_{1}+n_{2}}^{s}=\displaystyle{\arg\min_{\phi}}A_{n_{1}+n_{2}}^{s}(\phi)$. Let $\delta\in(0,u-3v)$. Then,

1. (i) $||\hat{\phi}_{n_{1}+n_{2}}^{s}-\phi_{1}^{0}||\leq n^{-(u-v-\delta)/2}$.

2. (ii) If $\phi_{2}^{0}=\phi_{1}^{0}+\phi_{3}^{0}\,n^{-1/4}$ for some $\phi_{3}^{0}$, then $\displaystyle{\sum_{i=1}^{n_{1}}}\eta_{i;(0,n_{1})}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s},\phi_{1}^{0})=O_{p}(1).$

###### Remark 2.1.

It should be noted that, although Part (i) of the above lemma is the same as that of Lemma 3 of Ciuperca (2012), Part (ii) is slightly different. The established result holds if $\phi_{2}^{0}=\phi_{1}^{0}+\phi_{3}^{0}\,n^{-1/4}$, while the result stated in Ciuperca (2012) is supposed to hold for all $\phi_{2}^{0}\neq\phi_{1}^{0}$, but with an incorrect proof. So far, we are neither able to correct the proof for all $\phi_{2}^{0}\neq\phi_{1}^{0}$ nor to prove that the statement itself is wrong. Similarly, Part (ii) of Lemmas 4 and 8 holds under the condition that $\phi_{2}^{0}=\phi_{1}^{0}+\phi_{3}^{0}\,n^{-1/4}$.

## 3 Concluding Remark

In this paper, we proposed a modification of Part (ii) of Lemma 3 given in Ciuperca (2012), for which the proof is wrong. Further, we provided a correct proof. It should be noted that there are several important results in the quoted paper which were established by using Lemma 3.
In particular, the quoted author used this lemma in establishing Lemmas 4 and 8, as well as Theorems 1, 2 and 4.

## Appendix A Appendix

###### Proof of Lemma 2.1.

1. (i) The proof of Part (i) is similar to that in Ciuperca (2012).

2. (ii) Let $Z_{n}(\phi)=\displaystyle{\sum_{i=1}^{n_{1}}}\eta_{i}(\phi,\phi_{1}^{0})$ and $t_{n}(\phi)=\displaystyle{\sum_{i=n_{1}+1}^{n_{1}+n_{2}}}[(\epsilon_{i}-X_{i}^{\prime}(\phi-\phi_{2}^{0}))^{2}-(\epsilon_{i}-X_{i}^{\prime}(\phi_{1}^{0}-\phi_{2}^{0}))^{2}]$. Then,

$\displaystyle|t_{n}(\hat{\phi}_{n_{1}+n_{2}})|$ $\displaystyle=$ $\displaystyle|-2\sum_{i=n_{1}+1}^{n_{1}+n_{2}}\varepsilon_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0})+(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})^{\prime}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})$ $\displaystyle-(\phi_{1}^{0}-\phi_{2}^{0})^{\prime}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\phi_{1}^{0}-\phi_{2}^{0})|.$

Since $||\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0}||\leq n^{-(u-v-\delta)/2}$, by the Cauchy-Schwarz inequality, we have

$|\sum_{i=n_{1}+1}^{n_{1}+n_{2}}\varepsilon_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0})|\leqslant O(n^{v/2}n^{-(u-v-\delta)/2})=o(1).$

Further, let $\lambda_{\max}$ be the largest eigenvalue of $\frac{1}{n_{2}}\displaystyle{\sum_{i=n_{1}+1}^{n_{1}+n_{2}}}X_{i}X_{i}^{\prime}$. Then, using the fact that $\phi_{2}^{0}=\phi_{1}^{0}+\phi_{3}^{0}n^{-1/4}$ and the Cauchy-Schwarz inequality, we have

$\displaystyle(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})^{\prime}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})$ $\displaystyle=$ $\displaystyle(\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0})^{\prime}n_{2}\frac{1}{n_{2}}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0})-2\phi_{3}^{0^{\prime}}n^{-1/4}n_{2}\frac{1}{n_{2}}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0})$ $\displaystyle+n^{-1/2}\phi_{3}^{0^{\prime}}n_{2}\frac{1}{n_{2}}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}\phi_{3}^{0}$ $\displaystyle\leqslant$ $\displaystyle n_{2}\lambda_{\max}||\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0}||^{2}+2||\phi_{3}^{0}||n^{-1/4}n_{2}\lambda_{\max}||\hat{\phi}_{n_{1}+n_{2}}-\phi_{1}^{0}||+n^{-1/2}n_{2}\lambda_{\max}||\phi_{3}^{0}||^{2},$

and then,

$\displaystyle(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})^{\prime}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}(\hat{\phi}_{n_{1}+n_{2}}-\phi_{2}^{0})=O(n^{-(u-v-\delta)}n^{v})+o(1)+O(n^{v-1/2})=o(1).$

Also, we have

$\displaystyle({\phi}_{1}^{0}-\phi_{2}^{0})^{\prime}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X^{\prime}_{i}(\phi_{1}^{0}-\phi_{2}^{0})=n^{-1/2}\phi_{3}^{0^{\prime}}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}X_{i}X_{i}^{\prime}\phi_{3}^{0}=O(n^{v-1/2})=o(1).$

Therefore, $|t_{n}(\hat{\phi}_{n_{1}+n_{2}})|=o_{p}(1)$.
Further, since $Z_{n}(\phi_{1}^{0})=t_{n}(\phi_{1}^{0})=0$ and $Z_{n}(\hat{\phi}_{n_{1}+n_{2}})+t_{n}(\hat{\phi}_{n_{1}+n_{2}})\leqslant Z_{n}(\phi_{1}^{0})+t_{n}(\phi_{1}^{0})$, we have $0\geqslant Z_{n}(\hat{\phi}_{n_{1}+n_{2}})+t_{n}(\hat{\phi}_{n_{1}+n_{2}})\geqslant\inf_{\phi}Z_{n}(\phi)-|t_{n}(\hat{\phi}_{n_{1}+n_{2}})|=\inf_{\phi}Z_{n}(\phi)-|o_{p}(1)|.$ Hence $|Z_{n}(\hat{\phi}_{n_{1}+n_{2}})|-|t_{n}(\hat{\phi}_{n_{1}+n_{2}})|\leqslant|Z_{n}(\hat{\phi}_{n_{1}+n_{2}})+t_{n}(\hat{\phi}_{n_{1}+n_{2}})|\leqslant|\inf_{\phi}Z_{n}(\phi)|+o_{p}(1),$ which implies that $|Z_{n}(\hat{\phi}_{n_{1}+n_{2}})|\leqslant|\inf_{\phi}Z_{n}(\phi)|+o_{p}(1)+|t_{n}(\hat{\phi}_{n_{1}+n_{2}})|=|\inf_{\phi}Z_{n}(\phi)|+o_{p}(1).$

Let $\hat{\phi}_{n_{1}}=\arg\min_{\phi}Z_{n}(\phi)$ and let $\lambda_{\max}$ be the largest eigenvalue of $n_{1}^{-1}\sum_{i=1}^{n_{1}}X_{i}X^{\prime}_{i}$. Then, by the Cauchy–Schwarz inequality, $\displaystyle|\inf_{\phi}Z_{n}(\phi)|=|Z_{n}(\hat{\phi}_{n_{1}})|\leqslant\left(\sqrt{n_{1}}\left\|\hat{\phi}_{n_{1}}-\phi_{1}^{0}\right\|\right)^{2}\lambda_{\max}+2\sqrt{n_{1}}\left|(\hat{\phi}_{n_{1}}-\phi_{1}^{0})^{\prime}n_{1}^{-1/2}\sum_{i=1}^{n_{1}}\varepsilon_{i}X_{i}\right|,$ and then, $\displaystyle|\inf_{\phi}Z_{n}(\phi)|=O_{p}(1)+O_{p}(1)O_{p}(1)=O_{p}(1),\quad{}\mbox{ and }\quad{}|Z_{n}(\hat{\phi}_{n_{1}+n_{2}})|=O_{p}(1).$

Now, let $Z_{n}^{s}(\phi)=\sum_{i=1}^{n_{1}}\eta_{i}(\phi,\phi_{1}^{0})+\lambda_{n;(0,n_{1})}\Big{[}\sum_{k=1}^{P}(|\phi_{,k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{]}$ and $t_{n}^{s}(\phi)=\sum_{i=n_{1}+1}^{n_{1}+n_{2}}[(\epsilon_{i}-X_{i}^{\prime}(\phi-\phi_{2}^{0}))^{2}-(\epsilon_{i}-X_{i}^{\prime}(\phi_{1}^{0}-\phi_{2}^{0}))^{2}]+\lambda_{n;(n_{1},n_{1}+n_{2})}\Big{[}\sum_{k=1}^{P}(|\phi_{,k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{]}.$ Then $A_{n_{1}+n_{2}}^{s}(\phi)=Z_{n}^{s}(\phi)+t_{n}^{s}(\phi)+\sum_{i=n_{1}+1}^{n_{1}+n_{2}}\big{[}(\epsilon_{i}-X_{i}^{\prime}(\phi_{1}^{0}-\phi_{2}^{0}))^{2}-\epsilon_{i}^{2}\big{]}+\lambda_{n;(n_{1},n_{1}+n_{2})}\Big{[}\sum_{k=1}^{P}(|\phi_{1,k}^{0}|^{\gamma}-|\phi_{2,k}^{0}|^{\gamma})\Big{]},$ where the last two terms do not depend on $\phi$. Then $\hat{\phi}_{n_{1}+n_{2}}^{s}=\arg\min_{\phi}(Z_{n}^{s}(\phi)+t_{n}^{s}(\phi))=\arg\min_{\phi}A_{n_{1}+n_{2}}^{s}(\phi)$.
In addition, using an argument similar to the one above, together with the fact that $||\hat{\phi}_{n_{1}+n_{2}}^{s}-\phi_{1}^{0}||\leqslant n^{-(u-v-\delta)/2}$ and $\phi_{2}^{0}=\phi_{1}^{0}+\phi_{3}^{0}n^{-1/4}$, we have $|t_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})|\leqslant o_{p}(1)+\lambda_{n;(n_{1},n_{1}+n_{2})}\Big{|}\sum_{k=1}^{P}(|\hat{\phi}_{n_{1}+n_{2},k}^{s}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{|}=o_{p}(1)+O(n^{v/2})O_{p}(||\hat{\phi}_{n_{1}+n_{2}}^{s}-\phi_{1}^{0}||)=O_{p}(n^{-(u-2v-\delta)/2})=o_{p}(1).$

Besides, $Z_{n}^{s}(\phi_{1}^{0})=t_{n}^{s}(\phi_{1}^{0})=0$, thus $\displaystyle 0\geqslant\inf_{\phi}(Z_{n}^{s}(\phi)+t_{n}^{s}(\phi))=Z_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})+t_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})\geqslant Z_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})-|o_{p}(1)|\geqslant\inf_{\phi}Z_{n}^{s}(\phi)-|o_{p}(1)|.$ Hence $|Z_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})|\leqslant|\inf_{\phi}Z_{n}^{s}(\phi)|+o_{p}(1).$

On the other hand, since $0\geqslant\inf_{\phi}Z_{n}^{s}(\phi)\geqslant\inf_{\phi}Z_{n}(\phi)+\lambda_{n;(0,n_{1})}\inf_{\phi}\Big{[}\sum_{k=1}^{P}(|\phi_{,k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{]},$ we get $|\inf_{\phi}Z_{n}^{s}(\phi)|\leqslant|\inf_{\phi}Z_{n}(\phi)|+\Big{|}\lambda_{n;(0,n_{1})}\inf_{\phi}\Big{[}\sum_{k=1}^{P}(|\phi_{,k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{]}\Big{|}.$ Further, $\inf_{\phi}\Big{[}\sum_{k=1}^{P}(|\phi_{,k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})\Big{]}\leqslant\sum_{k=1}^{P}(|\hat{\phi}_{n_{1},k}|^{\gamma}-|\phi_{1,k}^{0}|^{\gamma})=O_{p}(||\hat{\phi}_{n_{1}}-\phi_{1}^{0}||)=O_{p}(n_{1}^{-1/2}),$ and $\inf_{\phi}Z_{n}(\phi)=O_{p}(1)$. It follows that $|\inf_{\phi}Z_{n}^{s}(\phi)|=O_{p}(1)$. Hence, $|Z_{n}^{s}(\hat{\phi}_{n_{1}+n_{2}}^{s})|\leqslant|\inf_{\phi}Z_{n}^{s}(\phi)|+o_{p}(1)=O_{p}(1).$ ∎

## References

* [1] Ciuperca, G. (2012). Model selection by LASSO methods in a change-point model. Stat Papers, DOI: 10.1007/s00362-012-0482-x (in press).
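To make the penalized criterion of Section 1 concrete, the following is a minimal numerical sketch, not part of the original paper: it treats a single change point ($k=1$) with $\gamma=1$ on simulated data, and it delegates the segment-wise minimization to scikit-learn's `Lasso`. The simulated design, the choice $\lambda=m^{1/2}$ for a segment of length $m$, and the rescaling $\alpha=\lambda/(2m^{2})$ (needed because `Lasso` minimizes $(1/2m)\|y-Xw\|^{2}+\alpha\|w\|_{1}$ rather than $\|y-Xw\|^{2}+(\lambda/m)\|w\|_{1}$) are all illustrative assumptions.

```python
# Minimal sketch of S(l_1) from Section 1, for k = 1 and gamma = 1 (assumptions:
# simulated data; scikit-learn's Lasso as the inner solver).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, P, l_true = 200, 3, 120
X = rng.normal(size=(n, P))
phi1_0 = np.array([1.0, -2.0, 0.0])   # phi_1^0
phi2_0 = np.array([0.5, -2.0, 1.5])   # phi_2^0, with phi_1^0 != phi_2^0
y = np.where(np.arange(n) < l_true, X @ phi1_0, X @ phi2_0)
y = y + 0.3 * rng.normal(size=n)      # absolutely continuous, mean-zero errors

def segment_cost(Xs, ys):
    """inf over phi of sum (Y_i - X_i phi)^2 + (lambda/m) * ||phi||_1."""
    m = len(ys)
    lam = np.sqrt(m)                  # lambda_{n,(l',l)} = O((l - l')^{1/2})
    fit = Lasso(alpha=lam / (2 * m**2), fit_intercept=False).fit(Xs, ys)
    resid = ys - Xs @ fit.coef_
    return np.sum(resid**2) + (lam / m) * np.sum(np.abs(fit.coef_))

def S(l):
    # Penalized sum over the two segments [1, l] and [l+1, n].
    return segment_cost(X[:l], y[:l]) + segment_cost(X[l:], y[l:])

l_hat = min(range(P + 1, n - P), key=S)  # plays the role of hat{l}_1^s
print(l_hat)                             # expected to be close to l_true = 120
```

The minimizer `l_hat` of `S` over the candidate split points plays the role of $\hat{l}_{1}^{s}$ in the estimator defined above.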
# Smallness of Faltings heights of CM abelian varieties

Xunjing Wei

###### Abstract

We prove that, assuming the Colmez conjecture and the “no Siegel zeros” conjecture, the stable Faltings height of a CM abelian variety over a number field is less than or equal to the logarithm of the root discriminant of the field of definition of the abelian variety times an effective constant depending only on the dimension of the abelian variety. In view of the fact that the Colmez conjecture for abelian CM fields, the averaged Colmez conjecture, and the “no Siegel zeros” conjecture for CM fields with no complex quadratic subfields are already proved, we prove unconditional analogues of the result above. In addition, we also prove that the logarithm of the root discriminant of the field of everywhere good reduction of CM abelian varieties can be “small”.

###### Contents

1. 1 Introduction
2. 2 The Faltings height
3. 3 The Colmez conjecture revisited
4. 4 The zero of the Artin $L$-function near $1$
   1. 4.1 Relation between the zero near $1$ and the logarithmic derivative at $0$ of the Artin $L$-function
   2. 4.2 Sufficient conditions for the nonexistence of the zero near $1$ of the Artin $L$-function
   3. 4.3 Proofs of the main theorems
5. 5 The (proved) averaged Colmez conjecture
6. 6 Field of everywhere good reduction of CM abelian varieties

## 1 Introduction

Let $E$ be a CM-field, and let $\Phi$ be a CM-type of $E$. Let $A$ be an abelian variety over a number field $K$ equipped with an embedding $i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K}(A)$ such that $(A,i)$ has CM-type $\Phi$. It is proved by Colmez in [Col93] that the stable Faltings height $h_{\mathrm{Falt}}^{\mathrm{st}}(A)$ of the abelian variety $A$ depends only on the CM-field $E$ and the CM-type $\Phi$ and not on the abelian variety $A$. We denote it as $h_{(E,\Phi)}^{\mathrm{Falt}}$. In [Col93] Colmez proposed a conjecture relating $h_{(E,\Phi)}^{\mathrm{Falt}}$ to the logarithmic derivatives at $s=0$ of certain Artin $L$-functions defined by $(E,\Phi)$. We will refer to this conjecture as the Colmez conjecture. The precise statement is as follows: We define a function $A_{(E,\Phi)}^{0}$ from $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ to $\mathbb{C}$ by $A_{(E,\Phi)}^{0}(\sigma)=\frac{1}{[\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}):\mathrm{Stab}(\Phi)]}\sum_{\nu\in\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})/\mathrm{Stab}(\Phi)}|\nu\Phi\cap\sigma\nu\Phi|,\forall\sigma\in\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}),$ where $\mathrm{Stab}(\Phi)$ is the stabilizer of $\Phi$ in $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. This function is locally constant and constant on conjugacy classes. Therefore, there is a unique decomposition of $A_{(E,\Phi)}^{0}$ into $\mathbb{C}$-linear combinations of irreducible Artin characters $\chi$ (i.e. characters $\chi$ of irreducible continuous representations of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on finite-dimensional $\mathbb{C}$-vector spaces) ${A_{(E,\Phi)}^{0}}=\sum_{\chi}m_{(E,\Phi)}(\chi)\chi,\ \ \ \ \ \ \ m_{(E,\Phi)}(\chi)\in\mathbb{C}.$ It can be shown that for any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, the Artin $L$-function $L(s,\chi,\mathbb{Q})$ is defined and nonzero at $s=0$.
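As a quick sanity check of these definitions (an illustration added here, not part of the original text), take $E$ to be a complex quadratic field, so that $g=1$ and $\Phi=\{\varphi\}$ consists of a single embedding. Since $E/\mathbb{Q}$ is Galois, $\nu\Phi$ is always a single embedding of $E$ with image $E$, and $\sigma\nu\varphi=\nu\varphi$ if and only if $\sigma\in\mathrm{Gal}(\overline{\mathbb{Q}}/E)$. Hence $A_{(E,\Phi)}^{0}(\sigma)=\begin{cases}1&\mbox{if }\sigma\in\mathrm{Gal}(\overline{\mathbb{Q}}/E),\\\ 0&\mbox{otherwise},\end{cases}$ that is, $A_{(E,\Phi)}^{0}=\frac{1}{2}(\mathbf{1}+\chi_{E})$, where $\chi_{E}$ is the quadratic character of $E/\mathbb{Q}$. Thus $m_{(E,\Phi)}(\mathbf{1})=m_{(E,\Phi)}(\chi_{E})=\frac{1}{2}$, consistent with the value $m_{(E,\Phi)}(\mathbf{1})=\frac{1}{2}g$ computed in section 3.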
We define $Z_{(E,\Phi)}\coloneqq-\frac{1}{2}g\log(2\pi)+\sum_{\chi}m_{(E,\Phi)}(\chi)\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})}$ and $\mu_{(E,\Phi)}\coloneqq\sum_{\chi}m_{(E,\Phi)}(\chi)\log(\mathfrak{f}(\chi,\mathbb{Q})),$ where $\mathfrak{f}(\chi,\mathbb{Q})$ is the Artin conductor of the Artin character $\chi$ (a positive integer). The Colmez conjecture says that we have $h_{(E,\Phi)}^{\mathrm{Falt}}=-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}.$ When $E$ is a complex quadratic field, the Colmez conjecture is the same as the classical Chowla–Selberg formula (see for example pages 91 and 92 of [Wei76]), and so it is in fact a theorem. Colmez [Col93] and Obus [Obu13] proved that the Colmez conjecture is true if the extension $E/\mathbb{Q}$ is Galois with abelian Galois group (see the theorem in subsection 4.3). Yuan–Zhang [YZ18] and Andreatta–Goren–Howard–Madapusi-Pera [AGHMP18] independently proved that the Colmez conjecture is true when one averages over all CM-types of a given CM-field (see section 5).

Let $-d\in\mathbb{Z}_{\leq-2}$ be a fundamental discriminant, so $\mathrm{disc}(\mathbb{Q}(\sqrt{-d}))=-d$. Let $\chi_{d}$ be the quadratic character associated to the quadratic field extension $\mathbb{Q}(\sqrt{-d})/\mathbb{Q}$, so $\chi_{d}(p)=(-d|p)$ for any prime $p$. Let $L(s,\chi_{d})$ be the Dirichlet $L$-function of the character $\chi_{d}$, so $L(s,\chi_{d})=\frac{\zeta_{\mathbb{Q}(\sqrt{-d})}(s)}{\zeta_{\mathbb{Q}}(s)}$. It is known that there is at most one zero of $L(s,\chi_{d})$ in the region $1-\frac{1}{4\log(d)}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log(d)},$ and if such a zero exists it is real and simple. For any $0<c\leq\frac{1}{4}$, we define the $c$-Siegel zero of $L(s,\chi_{d})$ to be the zero of $L(s,\chi_{d})$ in the region $1-\frac{c}{\log(d)}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log(d)}$ (if it exists). We define the Siegel zero of $L(s,\chi_{d})$ to be the $\frac{1}{4}$-Siegel zero of $L(s,\chi_{d})$. The “no Siegel zeros” conjecture for complex quadratic fields is then the following:

###### Conjecture (No $\frac{1}{O(1)}$-Siegel zero of $L(s,\chi_{d})$).

There exists some effectively computable absolute constant $C_{\mathrm{zero}}\in\mathbb{R}_{\geq 4}$ such that for any fundamental discriminant $-d\in\mathbb{Z}_{\leq-2}$, the Dirichlet $L$-function $L(s,\chi_{d})$ has no zeros in the region $1-\frac{1}{C_{\mathrm{zero}}\log(d)}\leq\mathrm{Re}(s)<1$, $|\operatorname{Im}(s)|\leq\frac{1}{4\log(d)}$.

Now let $E$ be an arbitrary CM-field. Let $F$ be the maximal totally real subfield of $E$, and so $E$ is a totally complex quadratic field extension of $F$. Let $\chi_{E/F}$ be the quadratic character associated to the quadratic field extension $E/F$, so for any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{F}$, $\chi_{E/F}(\mathfrak{p})=\begin{cases}1&\mbox{if }\mathfrak{p}\mbox{ splits completely in }\mathcal{O}_{E},\\\ -1&\mbox{if }\mathfrak{p}\mbox{ is unramified but does not split completely in }\mathcal{O}_{E},\\\ 0&\mbox{if }\mathfrak{p}\mbox{ is ramified in }\mathcal{O}_{E}.\end{cases}$ Let $L(s,\chi_{E/F})$ be the $L$-function of the character $\chi_{E/F}$, and so $L(s,\chi_{E/F})=\frac{\zeta_{E}(s)}{\zeta_{F}(s)}$. Similarly to the case where $E$ is a complex quadratic field, by Lemma 3 of [Sta74], for any CM-field $E$ with maximal totally real subfield $F$, $L(s,\chi_{E/F})$ has at most one zero in the region $1-\frac{1}{4\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|}.$ If such a zero exists, it is real and simple.
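The identity $L(s,\chi_{E/F})=\zeta_{E}(s)/\zeta_{F}(s)$ can be checked Euler factor by Euler factor (a standard verification, recorded here for convenience). Let $\mathfrak{p}$ be a prime of $\mathcal{O}_{F}$ with $q=N\mathfrak{p}$. If $\mathfrak{p}$ splits completely in $\mathcal{O}_{E}$, then $\zeta_{E}$ contributes $(1-q^{-s})^{-2}$ and $\zeta_{F}$ contributes $(1-q^{-s})^{-1}$, leaving $(1-q^{-s})^{-1}=(1-\chi_{E/F}(\mathfrak{p})q^{-s})^{-1}$; if $\mathfrak{p}$ is inert, $\zeta_{E}$ contributes $(1-q^{-2s})^{-1}$, leaving $(1+q^{-s})^{-1}=(1-\chi_{E/F}(\mathfrak{p})q^{-s})^{-1}$; and if $\mathfrak{p}$ is ramified, $\zeta_{E}$ contributes $(1-q^{-s})^{-1}$, leaving $1=(1-\chi_{E/F}(\mathfrak{p})q^{-s})^{-1}$, in accordance with the three cases in the definition of $\chi_{E/F}$.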
For any $0<c\leq\frac{1}{4}$, we define the generalized $c$-Siegel zero of $L(s,\chi_{E/F})$ to be the zero of $L(s,\chi_{E/F})$ in the region $1-\frac{c}{\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|}$ (if it exists). We define the generalized Siegel zero of $L(s,\chi_{E/F})$ to be the generalized $\frac{1}{4}$-Siegel zero of $L(s,\chi_{E/F})$. The generalized conjecture for CM-fields is the following:

###### Conjecture (No generalized $\frac{1}{O_{g}(1)}$-Siegel zero of $L(s,\chi_{E/F})$).

For any $g\in\mathbb{Z}_{\geq 1}$, there exists some effectively computable constant $C_{\mathrm{zero}}(g)\in\mathbb{R}_{\geq 4}$ depending only on $g$ such that for any CM-field $E$ with maximal totally real subfield $F$ such that $[F:\mathbb{Q}]=g$, the function $L(s,\chi_{E/F})$ has no zeros in the region $1-\frac{1}{C_{\mathrm{zero}}(g)\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|}.$

It is proved by Stark (Lemma 9 of [Sta74]) that the conjecture for complex quadratic fields implies the generalized conjecture. He also proved that the generalized conjecture is true whenever the CM-field $E$ contains no complex quadratic subfields. We show that, assuming the Colmez conjecture, the nonexistence of the generalized Siegel zero of $L$-functions of quadratic characters associated to CM extensions is closely related to the stable Faltings height of CM abelian varieties being bounded by the logarithm of the root discriminant of the field of definition. More precisely, we prove the following theorem:

###### Theorem .

Suppose that the Colmez conjecture holds. Suppose further that the generalized “no Siegel zeros” conjecture above holds. Then for any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{1}(g)>0$, $C_{2}(g)\in\mathbb{R}$ depending only on $g$ such that $\begin{split}h_{\mathrm{Falt}}^{\mathrm{st}}(A)\leq C_{1}(g)\frac{1}{[K:\mathbb{Q}]}\log|\mathrm{disc}(K)|+C_{2}(g),\end{split}$ for any dimension-$g$ abelian variety $A$ defined over a number field $K$ with complex multiplication by $\mathcal{O}_{E}$ for some CM-field $E$.

Since the Colmez conjecture for abelian CM-fields is already proved, and since the generalized conjecture is true when the CM-field $E$ contains no complex quadratic subfields, we can also prove an unconditional version of the theorem above:

###### Theorem .

For any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{3}(g)>0$, $C_{4}(g)\in\mathbb{R}$ depending only on $g$ such that $\begin{split}h_{\mathrm{Falt}}^{\mathrm{st}}(A)\leq C_{3}(g)\frac{1}{[K:\mathbb{Q}]}\log|\mathrm{disc}(K)|+C_{4}(g),\end{split}$ for any dimension-$g$ abelian variety $A$ over a number field $K$ with complex multiplication by $\mathcal{O}_{E}$ for some CM-field $E$ such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group and $E$ does not contain any complex quadratic subfields.

###### Remark .

To show that the condition “$E$ does not contain any complex quadratic subfields” in the hypotheses of the theorem above can actually be satisfied, we give examples of CM fields $E$ containing no complex quadratic subfields such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group. Let $n$ be an integer greater than or equal to $3$ such that the group $(\mathbb{Z}/n\mathbb{Z})^{\times}$ is a cyclic group and such that $4$ divides $\#(\mathbb{Z}/n\mathbb{Z})^{\times}$. (Equivalently, $n=p^{k}$ or $n=2p^{k}$ for some odd prime $p$ such that $p\equiv 1\mod 4$.)
Let $E$ be the $n$-th cyclotomic field $\mathbb{Q}(\mu_{n})$, where $\mu_{n}$ denotes a primitive $n$-th root of unity. Then $E$ is a CM-field with maximal totally real subfield $F=\mathbb{Q}(\mu_{n}+\mu_{n}^{-1})$. The extension $E/\mathbb{Q}$ is Galois and $\mathrm{Gal}(E/\mathbb{Q})$ is isomorphic to $(\mathbb{Z}/n\mathbb{Z})^{\times}$. Since $\mathrm{Gal}(E/\mathbb{Q})$ is cyclic and of even order, there is a unique subgroup $H$ of $\mathrm{Gal}(E/\mathbb{Q})$ of index $2$, and so there is a unique quadratic subfield $K$ of $E$. Let $\iota$ be the nontrivial element of $\mathrm{Gal}(E/F)\subset\mathrm{Gal}(E/\mathbb{Q})$. Then $\iota$ is the unique element in $\mathrm{Gal}(E/\mathbb{Q})$ of order $2$. Since $4$ divides $\#\mathrm{Gal}(E/\mathbb{Q})=\#(\mathbb{Z}/n\mathbb{Z})^{\times}$, we have $\iota\in H$. Thus, $K$ is fixed by the element $\iota$. Therefore, $K$ is a real quadratic field and so $E$ contains no complex quadratic subfields. (For instance, $n=5$ gives $E=\mathbb{Q}(\mu_{5})$, whose unique quadratic subfield is the real field $\mathbb{Q}(\sqrt{5})$.)

More generally, let $E$ be any totally imaginary number field such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group. Then $E$ is a CM-field (any abelian extension over $\mathbb{Q}$ is either totally real or a CM field). We know that $\mathrm{Gal}(E/\mathbb{Q})\cong\mathbb{Z}/q_{1}\mathbb{Z}\times\mathbb{Z}/q_{2}\mathbb{Z}\times\cdots\times\mathbb{Z}/q_{m}\mathbb{Z},$ where $q_{1},q_{2},\cdots,q_{m}$ are powers of prime numbers ($q_{1},q_{2},\cdots,q_{m}$ are not necessarily distinct). Suppose further that the number “$2$” does not appear in $q_{1},q_{2},\cdots,q_{m}$, i.e. each $q_{i}$ is either $2^{k_{i}}$ for some $k_{i}\geq 2$ or a power of an odd prime. Then each component $\mathbb{Z}/q_{i}\mathbb{Z}$ such that $q_{i}=2^{k_{i}}$ ($k_{i}\geq 2$) contains a unique subgroup $H_{i}$ of index $2$ and a unique element $\sigma_{i}$ of order $2$, and $\sigma_{i}\in H_{i}$. Let $\iota$ be the nontrivial element of $\mathrm{Gal}(E/F)\subset\mathrm{Gal}(E/\mathbb{Q})$, where $F$ is the maximal real subfield of $E$. Then $\iota\in H$ for any subgroup $H$ of $\mathrm{Gal}(E/\mathbb{Q})$ of index $2$: indeed, any such $H$ is the kernel of a character $\chi$ of order $2$; on each component of odd order $\chi$ is trivial, and on each component $\mathbb{Z}/2^{k_{i}}\mathbb{Z}$ the restriction of $\chi$ is either trivial or has kernel $H_{i}\ni\sigma_{i}$, while every component of the order-$2$ element $\iota$ is either $0$ or $\sigma_{i}$, so $\chi(\iota)=1$. Therefore, $E$ contains no complex quadratic subfields.

Since the averaged Colmez conjecture is already proved, we can also prove averaged analogues of the theorems above.

###### Theorem .

For any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{5}(g)>0$, $C_{6}(g)\in\mathbb{R}$ depending only on $g$ such that $\begin{split}&\frac{1}{2}\biggl{(}h_{\mathrm{Falt}}^{\mathrm{st}}(A_{1})+h_{\mathrm{Falt}}^{\mathrm{st}}(A_{2})\biggr{)}\\\ \leq&C_{5}(g)\cdot\frac{1}{2}\biggl{(}\frac{1}{[K_{1}:\mathbb{Q}]}\log|\mathrm{disc}(K_{1})|+\frac{1}{[K_{2}:\mathbb{Q}]}\log|\mathrm{disc}(K_{2})|\biggr{)}+C_{6}(g),\end{split}$ for any pair $A_{1},A_{2}$ of dimension-$g$ abelian varieties defined over number fields $K_{1},K_{2}$ respectively, such that the following holds:

* • There exists a CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and embeddings $i_{1}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{1}}(A_{1})$, $i_{2}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{2}}(A_{2})$ such that $E$ does not contain any complex quadratic subfields and the CM-type $\Phi_{1}$ of $(A_{1},i_{1})$ and the CM-type $\Phi_{2}$ of $(A_{2},i_{2})$ satisfy: $|\Phi_{1}\cap\Phi_{2}|=g-1.$

###### Theorem .

Let $g$ be a positive integer.
Suppose that there exists some effectively computable constant $C_{\mathrm{zero}}(g)\in\mathbb{R}_{\geq 4}$ depending only on $g$ such that for any CM-field $E$ with maximal totally real subfield $F$ such that $[F:\mathbb{Q}]=g$, the function $L(s,\chi_{E/F})$ has no zeros in the region $1-\frac{1}{C_{\mathrm{zero}}(g)\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|},$ i.e. the generalized “no Siegel zeros” conjecture above holds for $g$. Then there exist effectively computable constants $C_{7}(g)>0$, $C_{8}(g)\in\mathbb{R}$ depending only on $g$ such that $\begin{split}&\frac{1}{2}\biggl{(}h_{\mathrm{Falt}}^{\mathrm{st}}(A_{1})+h_{\mathrm{Falt}}^{\mathrm{st}}(A_{2})\biggr{)}\\\ \leq&C_{7}(g)\cdot\frac{1}{2}\biggl{(}\frac{1}{[K_{1}:\mathbb{Q}]}\log|\mathrm{disc}(K_{1})|+\frac{1}{[K_{2}:\mathbb{Q}]}\log|\mathrm{disc}(K_{2})|\biggr{)}+C_{8}(g),\end{split}$ for any pair $A_{1},A_{2}$ of dimension-$g$ abelian varieties defined over number fields $K_{1},K_{2}$ respectively, such that the following holds:

* • There exists a CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and embeddings $i_{1}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{1}}(A_{1})$, $i_{2}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{2}}(A_{2})$ such that the CM-type $\Phi_{1}$ of $(A_{1},i_{1})$ and the CM-type $\Phi_{2}$ of $(A_{2},i_{2})$ satisfy: $|\Phi_{1}\cap\Phi_{2}|=g-1.$

It might be interesting to note that if we only make use of the (proved) averaged Colmez conjecture, then we cannot obtain results stronger than the two averaged theorems above (i.e. the “average” condition in these theorems cannot be dropped), even if we further assume that the abelian variety over the number field has everywhere good reduction. In particular, we prove the following theorems, which show that the logarithm of the root discriminant of the field of everywhere good reduction of CM abelian varieties can be “small”.

###### Theorem .

Assume the Generalized Riemann Hypothesis. For any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{13}(g)>0$, $C_{14}(g)\in\mathbb{R}$, such that for any CM-field $E$ with $[E:\mathbb{Q}]=2g$, for any CM-type $\Phi$ of $E$, there exists a number field $K^{\prime}$ and a CM abelian variety $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K^{\prime}}(A))$ over $K^{\prime}$ of CM-type $\Phi$ such that the abelian variety $A$ over $K^{\prime}$ has everywhere good reduction and $\begin{split}&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log|\mathrm{disc}(K^{\prime})|\\\ \leq&C_{13}(g)\log\log|\mathrm{disc}(E)|+\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|+C_{14}(g).\end{split}$ (1)

###### Theorem .

For any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{15}(g)>0$, $C_{16}(g)\in\mathbb{R}$, such that for any CM-field $E$ with $[E:\mathbb{Q}]=2g$, for any CM-type $\Phi$ of $E$, there exists a number field $K^{\prime}$ and a CM abelian variety $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K^{\prime}}(A))$ over $K^{\prime}$ of CM-type $\Phi$ such that the abelian variety $A$ over $K^{\prime}$ has everywhere good reduction and $\begin{split}\frac{1}{[K^{\prime}:\mathbb{Q}]}\log|\mathrm{disc}(K^{\prime})|\leq C_{15}(g)\log|\mathrm{disc}(E)|+C_{16}(g).\end{split}$ (2)

This will be discussed in detail in section 6.
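For orientation (a remark added here, not in the original text), consider the simplest case $g=1$ of the first theorem: $E$ is a complex quadratic field of discriminant $-d$, the reflex field of $(E,\Phi)$ is $E$ itself, so $K\supseteq E$ and $\frac{1}{[K:\mathbb{Q}]}\log|\mathrm{disc}(K)|\geq\frac{1}{2}\log d$, since the root discriminant does not decrease in extensions. On the other hand, the Chowla–Selberg formula gives $h_{(E,\Phi)}^{\mathrm{Falt}}=-\frac{1}{2}\frac{L^{\prime}(0,\chi_{d})}{L(0,\chi_{d})}-\frac{1}{4}\log d$, and the “no Siegel zeros” conjecture is precisely what is needed to bound $-\frac{L^{\prime}(0,\chi_{d})}{L(0,\chi_{d})}$ by $O(\log d)$ (compare subsection 4.1), which yields the asserted bound in this case.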
In Theorem 6(ii) of [Col98], Colmez has proved that there exist effectively computable absolute constants $C_{\mathrm{Col},1}>0$, $C_{\mathrm{Col},2}\in\mathbb{R}$ such that for any CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and any CM-type $\Phi$ of $E$ such that the following hold:

1. 1. $(E,\Phi)$ satisfies the Colmez conjecture,
2. 2. For any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, the Artin conjecture for $\chi$ holds (i.e. the Artin $L$-function $L(s,\chi,\mathbb{Q})$ is holomorphic everywhere except possibly for a simple pole at $s=1$),
3. 3. For any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, the Artin $L$-function $L(s,\chi,\mathbb{Q})$ has no zeros on the ball of radius $\frac{1}{4}$ centered at $0$,

we have $\begin{split}h_{(E,\Phi)}^{\mathrm{Falt}}\leq C_{\mathrm{Col},1}\cdot\mu_{(E,\Phi)}+gC_{\mathrm{Col},2}.\end{split}$

The proofs of the first two theorems of section 1 show that we can actually remove the second hypothesis that the Artin $L$-functions involved satisfy the Artin conjecture. Moreover, in the remark after Theorem 6 of [Col98], Colmez asked whether it is possible to remove the third hypothesis that the Artin $L$-functions involved have no zeros on the ball of radius $\frac{1}{4}$ centered at $0$ and to make use of “no Siegel zeros” instead. The proofs of the first two theorems of section 1 are more or less a positive answer to this question.

### Acknowledgements

The author is deeply grateful to Professor Wei Zhang for suggesting this problem to the author, supervising the author on this project, and teaching, mentoring and guiding the author all along. For many times the author would have been in distress had it not been for the kind and generous help of Professor Zhang. The author also thanks the Undergraduate Research Opportunities Program of MIT for providing this opportunity for the author to do undergraduate research.

## 2 The Faltings height

Let $A$ be a dimension-$g$ abelian variety defined over a number field $K$. Let $\pi\colon\mathcal{A}\to\mathrm{Spec}(\mathcal{O}_{K})$ be the Néron model of $A$, and take $\omega$ to be any nonzero global section of $\mathcal{L}\coloneqq\pi_{*}\Omega^{g}_{\mathcal{A}/\mathrm{Spec}\mathcal{O}_{K}}$. We define the unstable Faltings height of $A$ as follows: $\begin{split}h_{\mathrm{Falt}}^{\mathrm{unst}}(A/K)\coloneqq\frac{1}{[K:\mathbb{Q}]}\Bigg{(}&\log\#\biggl{(}H^{0}(\mathrm{Spec}\,\mathcal{O}_{K},\mathcal{L})/(\mathcal{O}_{K}\cdot\omega)\biggr{)}\\\ &-\frac{1}{2}\sum_{\sigma\colon K\to\mathbb{C}}\log\Biggl{(}\frac{1}{(2\pi)^{g}}\biggl{|}\int_{A(\mathbb{C})}\sigma(\omega\wedge\overline{\omega})\biggr{|}\Biggr{)}\Bigg{)}.\end{split}$ This definition is independent of the choice of the nonzero section $\omega\in H^{0}(\mathrm{Spec}\,\mathcal{O}_{K},\mathcal{L})$. We define the stable Faltings height of $A$ to be $h_{\mathrm{Falt}}^{\mathrm{st}}(A)\coloneqq h_{\mathrm{Falt}}^{\mathrm{unst}}(A_{K^{\prime}}/K^{\prime}),$ where $K^{\prime}$ is a finite extension of $K$ such that $A_{K^{\prime}}/K^{\prime}$ has everywhere semistable reduction. This definition does not depend on the choice of the finite extension $K^{\prime}/K$. Unlike the unstable Faltings height, the stable Faltings height does not depend on the field of definition of the abelian variety. The following is a theorem of Bost ([Bos96]).

###### Theorem .
There exists an effectively computable absolute constant $C_{\mathrm{lower}}>0$ such that for any dimension-$g$ abelian variety $A$ over a number field, we have $h_{\mathrm{Falt}}^{\mathrm{st}}(A)\geq-gC_{\mathrm{lower}}.$

As we have mentioned in section 1, it is proved by Colmez in [Col93] that for any CM-field $E$ and any CM-type $\Phi$ of $E$, if $(A_{1},i_{1}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{1}}(A_{1}))$ and $(A_{2},i_{2}\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K_{2}}(A_{2}))$ are CM abelian varieties over number fields $K_{1}$ and $K_{2}$, both with CM-type $\Phi$, then $h_{\mathrm{Falt}}^{\mathrm{st}}(A_{1})=h_{\mathrm{Falt}}^{\mathrm{st}}(A_{2}).$ We denote this stable Faltings height as $h_{(E,\Phi)}^{\mathrm{Falt}}$.

## 3 The Colmez conjecture revisited

Throughout this section $g$ is an arbitrary positive integer, $E$ is an arbitrary CM-field of degree $[E:\mathbb{Q}]=2g$ and $\Phi$ is an arbitrary CM-type of $E$. We denote as $E^{*}_{\Phi}$ the reflex field of $(E,\Phi)$. Let $A_{(E,\Phi)}^{0}$ be the function from $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ to $\mathbb{C}$ defined as in section 1. Then since $\mathrm{Stab}(\Phi)\subset\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ is equal to $\mathrm{Gal}(\overline{\mathbb{Q}}/E^{*}_{\Phi})$, the function $A_{(E,\Phi)}^{0}$ factors as $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\twoheadrightarrow\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})\to\mathbb{C},$ where we denote as $\widetilde{E^{*}_{\Phi}}$ the Galois closure of the extension $E^{*}_{\Phi}/\mathbb{Q}$. For the following we view the function $A_{(E,\Phi)}^{0}$ as a (class) function from $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ to $\mathbb{C}$. For any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, $\chi$ also factors as $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\twoheadrightarrow\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})\to\mathbb{C},$ and so we also view $\chi$ as a character of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$. We fix an embedding $\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}$ and let $\iota$ be the element of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ induced by complex conjugation. Let $\chi$ be an irreducible Artin character. We say that $\chi$ is odd if $\chi(\iota)=-\chi(1)$. Some computations show that for the trivial character $\chi=\mathbf{1}$, we have $m_{(E,\Phi)}(\mathbf{1})=\frac{1}{2}g$; and for any nontrivial irreducible Artin character $\chi$, $m_{(E,\Phi)}(\chi)=0$ unless $\chi$ is odd. Therefore, we have $A_{(E,\Phi)}^{0}=\frac{1}{2}g\cdot\mathbf{1}+\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ \rm{odd}}\end{subarray}}m_{(E,\Phi)}(\chi)\chi.$ (3) This implies that for any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, the Artin $L$-function $L(s,\chi,\mathbb{Q})$ is defined and nonzero at $s=0$. Let $Z_{(E,\Phi)}$ and $\mu_{(E,\Phi)}$ be as in section 1.
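For the computation below, recall the classical special values (standard facts, recorded here for convenience): from $\zeta_{\mathbb{Q}}(0)=-\frac{1}{2}$ and $\zeta_{\mathbb{Q}}^{\prime}(0)=-\frac{1}{2}\log(2\pi)$ one gets $\frac{\zeta^{\prime}_{\mathbb{Q}}(0)}{\zeta_{\mathbb{Q}}(0)}=\log(2\pi)$, while the trivial character has Artin conductor $\mathfrak{f}(\mathbf{1},\mathbb{Q})=1$, so that $\log\mathfrak{f}(\mathbf{1},\mathbb{Q})=0$.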
Since $\frac{\zeta^{\prime}_{\mathbb{Q}}(0)}{\zeta_{\mathbb{Q}}(0)}=\log(2\pi)$ and $\mathfrak{f}(\mathbf{1},\mathbb{Q})=1,$ we can deduce that $Z_{(E,\Phi)}=\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ \rm{odd}}\end{subarray}}m_{(E,\Phi)}(\chi)\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})},$ (4) and $\mu_{(E,\Phi)}=\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ \rm{odd}}\end{subarray}}m_{(E,\Phi)}(\chi)\log\mathfrak{f}(\chi,\mathbb{Q}).$ (5)

## 4 The zero of the Artin $L$-function near $1$

### 4.1 Relation between the zero near $1$ and the logarithmic derivative at $0$ of the Artin $L$-function

Throughout this subsection $g$ is an arbitrary positive integer, $E$ is an arbitrary CM-field of degree $[E:\mathbb{Q}]=2g$ and $\Phi$ is an arbitrary CM-type of $E$. We denote as $E^{*}_{\Phi}$ the reflex field of $(E,\Phi)$. We denote as $\widetilde{E^{*}_{\Phi}}$ the Galois closure of the extension $E^{*}_{\Phi}/\mathbb{Q}$. By Chapter 2, Section 5 of [MM97], for any nontrivial irreducible character $\chi$ of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$, the functions $\frac{\zeta_{\widetilde{E^{*}_{\Phi}}}(s)}{L(s,\chi,\mathbb{Q})}$ and $\zeta_{\widetilde{E^{*}_{\Phi}}}(s)L(s,\chi,\mathbb{Q})$ are both holomorphic except for a simple pole at $s=1$. By Lemma 3 of [Sta74], for any number field $K$ such that $K\neq\mathbb{Q}$, the function $\zeta_{K}(s)$ has at most one zero in the region $1-\frac{1}{4\log|\mathrm{disc}(K)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(K)|}.$ If such a zero exists, it is real and simple. Therefore, the function $L(s,\chi,\mathbb{Q})$ has at most one zero in the region $1-\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}.$ If such a zero exists, it is real and simple.

###### Proposition .

Let $\chi$ be a nontrivial odd irreducible character of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$. Denote as $\beta_{0}$ the (necessarily real and simple) zero of $L(s,\chi,\mathbb{Q})$ in the region $1-\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}$ (if it exists). Let $\delta_{\chi}$ be $1$ if $\beta_{0}$ exists, and let $\delta_{\chi}$ be $0$ otherwise. We have $\begin{split}&-\biggl{(}\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})}+\frac{L^{\prime}(0,\overline{\chi},\mathbb{Q})}{L(0,\overline{\chi},\mathbb{Q})}+\frac{2\delta_{\chi}}{1-\beta_{0}}\biggr{)}\\\ >&-75\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|+\Biggl{(}\log(\frac{\mathfrak{f}(\chi,\mathbb{Q})}{\pi^{\chi(1)}})+\chi(1)\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}\Biggr{)},\end{split}$ and $\begin{split}&-\biggl{(}\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})}+\frac{L^{\prime}(0,\overline{\chi},\mathbb{Q})}{L(0,\overline{\chi},\mathbb{Q})}+\frac{2\delta_{\chi}}{1-\beta_{0}}\biggr{)}\\\ <&75\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|+\Biggl{(}\log(\frac{\mathfrak{f}(\chi,\mathbb{Q})}{\pi^{\chi(1)}})+\chi(1)\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}\Biggr{)}.\end{split}$ (6)

###### Proof.
We define the function $\Lambda(s,\chi,\mathbb{Q})$ to be $\Lambda(s,\chi,\mathbb{Q})\coloneqq(\mathfrak{f}(\chi,\mathbb{Q}))^{s/2}\Biggl{(}\pi^{-(s+1)/2}\Gamma\bigl{(}\frac{s+1}{2}\bigr{)}\Biggr{)}^{\chi(1)}L(s,\chi,\mathbb{Q}).$ We have the functional equation $\Lambda(s,\chi,\mathbb{Q})=W(\chi)\Lambda(1-s,\overline{\chi},\mathbb{Q})$ for some $W(\chi)\in\mathbb{C}$ with absolute value $1$. We define the function $\xi_{\widetilde{E^{*}_{\Phi}}}$ to be $\xi_{\widetilde{E^{*}_{\Phi}}}(s)\coloneqq s(s-1)|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|^{s/2}\Biggl{(}2(2\pi)^{-s}\Gamma(s)\Biggr{)}^{[\widetilde{E^{*}_{\Phi}}:\mathbb{Q}]/2}\zeta_{\widetilde{E^{*}_{\Phi}}}(s).$ We have the functional equation $\xi_{\widetilde{E^{*}_{\Phi}}}(s)=\xi_{\widetilde{E^{*}_{\Phi}}}(1-s).$ First consider the function $f_{1}(s)\coloneqq(\xi_{\widetilde{E^{*}_{\Phi}}}(s))^{2}\Lambda(s,\chi,\mathbb{Q})\Lambda(s,\overline{\chi},\mathbb{Q})$. It is entire and satisfies the functional equation $f_{1}(s)=f_{1}(1-s).$ Since $f_{1}(s)$ is real for $s$ real, for any $\rho\in\mathbb{C}$ the order of the zero of $f_{1}(s)$ at $s=\rho$ is equal to that at $s=\overline{\rho}$. Moreover, all zeros of $f_{1}(s)$ lie in the critical strip $0<\mathrm{Re}(s)<1$. Therefore, by logarithmically differentiating the Hadamard product formula for $f_{1}(s)$ at $s=1$ we get $\sum_{\rho\colon f_{1}(\rho)=0}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}=\frac{f^{\prime}_{1}(1)}{f_{1}(1)},$ i.e. $\begin{split}&\sum_{\rho\colon f_{1}(\rho)=0}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\\\ =&2\frac{\xi^{\prime}_{\widetilde{E^{*}_{\Phi}}}(1)}{\xi_{\widetilde{E^{*}_{\Phi}}}(1)}+\frac{\Lambda^{\prime}(1,\chi,\mathbb{Q})}{\Lambda(1,\chi,\mathbb{Q})}+\frac{\Lambda^{\prime}(1,\overline{\chi},\mathbb{Q})}{\Lambda(1,\overline{\chi},\mathbb{Q})}.\end{split}$ (7) Let $\delta_{\widetilde{E^{*}_{\Phi}}}$ be $1$ if $\beta_{0}$ is a zero of $\zeta_{\widetilde{E^{*}_{\Phi}}}(s)$, and let $\delta_{\widetilde{E^{*}_{\Phi}}}$ be $0$ otherwise. Then $\delta_{\widetilde{E^{*}_{\Phi}}}-\delta_{\chi}$ is equal to $0$ or $1$ and the order of the zero of $f_{1}(s)$ at $s=\beta_{0}$ is equal to $2\delta_{\widetilde{E^{*}_{\Phi}}}+2\delta_{\chi}$. Thus, we have $\begin{split}&\sum_{\begin{subarray}{c}\rho\colon f_{1}(\rho)=0\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\\\ =&\Biggl{(}2\frac{\xi^{\prime}_{\widetilde{E^{*}_{\Phi}}}(1)}{\xi_{\widetilde{E^{*}_{\Phi}}}(1)}-\frac{2\delta_{\widetilde{E^{*}_{\Phi}}}}{1-\beta_{0}}\Biggr{)}+\Biggl{(}\frac{\Lambda^{\prime}(1,\chi,\mathbb{Q})}{\Lambda(1,\chi,\mathbb{Q})}+\frac{\Lambda^{\prime}(1,\overline{\chi},\mathbb{Q})}{\Lambda(1,\overline{\chi},\mathbb{Q})}-\frac{2\delta_{\chi}}{1-\beta_{0}}\Biggr{)}.\end{split}$ (8) Since the function $\frac{(\zeta_{\widetilde{E^{*}_{\Phi}}}(s))^{2}}{L(s,\chi,\mathbb{Q})L(s,\overline{\chi},\mathbb{Q})}$ is holomorphic on $0<\mathrm{Re}(s)<1$, for any $\rho\in\mathbb{C}$ such that $0<\mathrm{Re}(\rho)<1$, the order of the zero at $s=\rho$ of the function $f_{1}(s)$ is less than or equal to $4$ times the order of the zero at $s=\rho$ of the function $\zeta_{\widetilde{E^{*}_{\Phi}}}(s)$. 
In view of the fact that all zeros of $f_{1}(s)$ lie in the critical strip $0<\mathrm{Re}(s)<1$, we have $0\leq\sum_{\begin{subarray}{c}\rho\colon f_{1}(\rho)=0\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\leq 4\sum_{\begin{subarray}{c}\rho\colon\zeta_{\widetilde{E^{*}_{\Phi}}}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}.$ (9)

Then consider the function $f_{2}(s)\coloneqq\frac{(\xi_{\widetilde{E^{*}_{\Phi}}}(s))^{2}}{\Lambda(s,\chi,\mathbb{Q})\Lambda(s,\overline{\chi},\mathbb{Q})}$. Similarly to the case of $f_{1}$, we have $\begin{split}&\sum_{\begin{subarray}{c}\rho\colon f_{2}(\rho)=0\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\\\ =&\Biggl{(}2\frac{\xi^{\prime}_{\widetilde{E^{*}_{\Phi}}}(1)}{\xi_{\widetilde{E^{*}_{\Phi}}}(1)}-\frac{2\delta_{\widetilde{E^{*}_{\Phi}}}}{1-\beta_{0}}\Biggr{)}-\Biggl{(}\frac{\Lambda^{\prime}(1,\chi,\mathbb{Q})}{\Lambda(1,\chi,\mathbb{Q})}+\frac{\Lambda^{\prime}(1,\overline{\chi},\mathbb{Q})}{\Lambda(1,\overline{\chi},\mathbb{Q})}-\frac{2\delta_{\chi}}{1-\beta_{0}}\Biggr{)},\end{split}$ (10) and $0\leq\sum_{\begin{subarray}{c}\rho\colon f_{2}(\rho)=0\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\leq 4\sum_{\begin{subarray}{c}\rho\colon\zeta_{\widetilde{E^{*}_{\Phi}}}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}.$ (11)

By logarithmically differentiating the functional equation of $\Lambda(s,\chi,\mathbb{Q})\Lambda(s,\overline{\chi},\mathbb{Q})$ at $s=0$, we have $\frac{\Lambda^{\prime}(0,\chi,\mathbb{Q})}{\Lambda(0,\chi,\mathbb{Q})}+\frac{\Lambda^{\prime}(0,\overline{\chi},\mathbb{Q})}{\Lambda(0,\overline{\chi},\mathbb{Q})}=-\Biggl{(}\frac{\Lambda^{\prime}(1,\chi,\mathbb{Q})}{\Lambda(1,\chi,\mathbb{Q})}+\frac{\Lambda^{\prime}(1,\overline{\chi},\mathbb{Q})}{\Lambda(1,\overline{\chi},\mathbb{Q})}\Biggr{)}.$ The result then follows by subtracting Equation (10) from Equation (8) and applying the following lemma. ∎

###### Lemma .

Let $K$ be a number field such that $K\neq\mathbb{Q}$. Denote as $\beta_{0}$ the (necessarily real and simple) zero of $\zeta_{K}(s)$ in the region $1-\frac{1}{4\log|\mathrm{disc}(K)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(K)|}$ (if it exists). Then we have $\begin{split}0\leq{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}<\frac{75}{2}\log|\mathrm{disc}(K)|.\end{split}$

###### Proof.
For any $\rho\in\mathbb{C}$ such that $\mathrm{Re}(\rho)<1-\frac{1}{4\log|\mathrm{disc}(K)|}$, we have $\begin{split}0<&\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}<25\cdot\biggl{(}\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\rho}+\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\overline{\rho}}\biggr{)}.\end{split}$ For any $\rho\in\mathbb{C}$ such that $1-\frac{1}{4\log|\mathrm{disc}(K)|}\leq\mathrm{Re}(\rho)<1$ and $|\operatorname{Im}(\rho)|>\frac{1}{4\log|\mathrm{disc}(K)|}$, we have $\begin{split}0<\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}<5\cdot\biggl{(}\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\rho}+\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\overline{\rho}}\biggr{)}.\end{split}$ Therefore, we have $\begin{split}0\leq&{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}\\\ <&25{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\rho}+\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\overline{\rho}}\biggr{)}\\\ \leq&25{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\rho}+\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\overline{\rho}}\biggr{)}.\end{split}$

By the proof of Lemma 3 of [Sta74], we have $0\leq{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{s-\rho}+\frac{1}{s-\overline{\rho}}\biggr{)}<\frac{1}{s-1}+\frac{1}{2}\log|\mathrm{disc}(K)|,$ (12) for any $s$ real with $1<s<2$. Taking $s=1+\frac{1}{\log|\mathrm{disc}(K)|}$ in Equation (12), we get: $\begin{split}0\leq&{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\rho}+\frac{1}{1+\frac{1}{\log|\mathrm{disc}(K)|}-\overline{\rho}}\biggr{)}\\\ <&\frac{3}{2}\log|\mathrm{disc}(K)|.\end{split}$ Therefore, we have $\begin{split}0\leq{\sum_{\begin{subarray}{c}\rho\colon\zeta_{K}(\rho)=0\\\ 0<\mathrm{Re}(\rho)<1\\\ \rho\neq\beta_{0}\end{subarray}}}\frac{1}{2}\biggl{(}\frac{1}{1-\rho}+\frac{1}{1-\overline{\rho}}\biggr{)}<\frac{75}{2}\log|\mathrm{disc}(K)|.\end{split}$ ∎

###### Corollary .

Let $c$ be a real number such that $0<c\leq\frac{1}{4}$. Suppose that for any nontrivial odd irreducible character $\chi$ of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ such that $m_{(E,\Phi)}(\chi)\neq 0$, there is no zero of $L(s,\chi,\mathbb{Q})$ in the region $1-\frac{c}{\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}.$ Then we have $\begin{split}-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}<&\frac{1}{4}g\cdot(75+2c^{\prime})(2g)!\log|\mathrm{disc}(E^{*}_{\Phi})|+\frac{1}{4}g\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)},\end{split}$ (13) where $c^{\prime}$ is defined to be $\frac{1}{c}$ if $c<\frac{1}{4}$, and $0$ if $c=\frac{1}{4}$.

###### Proof.

By Lemma 2 and Section 2 of [Col98], for any $\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))$, $m_{(E,\Phi)}(\chi)$ is a non-negative real number and $m_{(E,\Phi)}(\chi)=m_{(E,\Phi)}(\overline{\chi})$.
Hence, we have $\begin{split}&-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}\\\ =&\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ odd}\end{subarray}}m_{(E,\Phi)}(\chi)\biggl{(}-\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})}-\frac{1}{2}\log(\mathfrak{f}(\chi,\mathbb{Q}))\biggr{)}\\\ =&\frac{1}{2}\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ odd}\end{subarray}}m_{(E,\Phi)}(\chi)\Biggl{(}-\biggl{(}\frac{L^{\prime}(0,\chi,\mathbb{Q})}{L(0,\chi,\mathbb{Q})}+\frac{L^{\prime}(0,\overline{\chi},\mathbb{Q})}{L(0,\overline{\chi},\mathbb{Q})}\biggr{)}-\log(\mathfrak{f}(\chi,\mathbb{Q}))\Biggr{)}\\\ <&\frac{1}{2}\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ odd}\end{subarray}}m_{(E,\Phi)}(\chi)\Biggl{(}(75+2c^{\prime})\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|+\chi(1)\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)}\Biggr{)},\end{split}$ (14) by Equation (6). By the definition of ${A_{(E,\Phi)}^{0}}$ we have $\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ odd}\end{subarray}}m_{(E,\Phi)}(\chi)\chi(1)={A_{(E,\Phi)}^{0}}(1)-\frac{1}{2}g=\frac{1}{2}g.$ Since for any $\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))$, $m_{(E,\Phi)}(\chi)$ is a non-negative real number and $\chi(1)\geq 1$, we have $\sum_{\begin{subarray}{c}\chi\in\mathrm{Irr}(\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q}))\\\ \chi\neq\mathbf{1}\\\ \chi\mbox{ odd}\end{subarray}}m_{(E,\Phi)}(\chi)\leq\frac{1}{2}g.$ By the following Equation (17) we have $\begin{split}\frac{1}{[\widetilde{E^{*}_{\Phi}}:\mathbb{Q}]}\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|\leq\log|\mathrm{disc}(E^{*}_{\Phi})|.\end{split}$ The reflex field $E^{*}_{\Phi}$ is contained in the Galois closure $\widetilde{E}$ of the extension $E/\mathbb{Q}$, and so $\widetilde{E^{*}_{\Phi}}$ is also contained in $\widetilde{E}$. Thus, we have $[\widetilde{E^{*}_{\Phi}}:\mathbb{Q}]\leq(2g)!$. Hence, we get our claim. ∎ ###### Lemma . Let $K_{1}$ and $K_{2}$ be number fields. Let $K_{1}K_{2}$ be the compositum of $K_{1}$ and $K_{2}$. Then we have $|\mathrm{disc}(K_{1}K_{2})|^{1/[K_{1}K_{2}:\mathbb{Q}]}\leq|\mathrm{disc}(K_{1})|^{1/[K_{1}:\mathbb{Q}]}|\mathrm{disc}(K_{2})|^{1/[K_{2}:\mathbb{Q}]},$ (15) and $|\mathrm{disc}(K_{1}K_{2})|\leq|\mathrm{disc}(K_{1})|^{[K_{2}:\mathbb{Q}]}|\mathrm{disc}(K_{2})|^{[K_{1}:\mathbb{Q}]}.$ (16) In particular, let $K$ be a number field and let $\widetilde{K}$ be the Galois closure of the extension $K/\mathbb{Q}$. Then $|\mathrm{disc}(\widetilde{K})|^{1/[\widetilde{K}:\mathbb{Q}]}\leq|\mathrm{disc}(K)|,$ (17) and $|\mathrm{disc}(\widetilde{K})|\leq|\mathrm{disc}(K)|^{[K:\mathbb{Q}]!}.$ (18) ###### Proof. This is Lemma 7 of [Sta74]. ∎ ### 4.2 Sufficient conditions for the nonexistence of the zero near $1$ of the Artin $L$-function By Theorem 3 of [Sta74], we have the following theorem. ###### Theorem . Let $L/K$ be a finite Galois extension of number fields. Let $s_{0}\in\mathbb{C}$ be a simple zero of $\zeta_{L}(s)$. (1) For any irreducible character $\chi$ of $\mathrm{Gal}(L/K)$, $L(s,\chi,K)$ is defined at $s=s_{0}$. 
There is a (unique) irreducible character $\mathcal{X}_{s_{0},L/K}$ of $\mathrm{Gal}(L/K)$ such that for any irreducible character $\chi$ of $\mathrm{Gal}(L/K)$, $L(s_{0},\chi,K)=0$ if and only if $\chi=\mathcal{X}_{s_{0},L/K}$. $\mathcal{X}_{s_{0},L/K}$ is a linear character of $\mathrm{Gal}(L/K)$ (so $\mathcal{X}_{s_{0},L/K}$ is a group homomorphism from $\mathrm{Gal}(L/K)$ to $\mathbb{C}^{\times}$).

(2) There is a (unique) subfield $\mathcal{K}_{s_{0},L/K}$ of $L$ containing $K$ such that for any field $K^{\prime}$ containing $K$ and contained in $L$, $\zeta_{K^{\prime}}(s_{0})=0$ if and only if $K^{\prime}$ contains $\mathcal{K}_{s_{0},L/K}$. The extension $\mathcal{K}_{s_{0},L/K}/K$ is cyclic.

(3) $\mathcal{K}_{s_{0},L/K}$ is the fixed field of the kernel of $\mathcal{X}_{s_{0},L/K}$.

(4) Suppose further that $s_{0}$ is real. Then exactly one of the following holds:

1. 1. $\mathcal{K}_{s_{0},L/K}$ is equal to $K$ and $\mathcal{X}_{s_{0},L/K}$ is the trivial character.
2. 2. $\mathcal{K}_{s_{0},L/K}$ is quadratic over $K$ and $\mathcal{X}_{s_{0},L/K}$ is the group homomorphism from $\mathrm{Gal}(L/K)$ to $\mathbb{C}^{\times}$ with kernel $\mathrm{Gal}(L/\mathcal{K}_{s_{0},L/K})$ and image $\\{\pm 1\\}$. In particular, $\mathcal{X}_{s_{0},L/K}$ is a nontrivial real linear character.

For the rest of this subsection $E$ is an arbitrary CM-field and $\Phi$ is an arbitrary CM-type of $E$. We denote as $E^{*}_{\Phi}$ the reflex field of $(E,\Phi)$. We denote as $\widetilde{E^{*}_{\Phi}}$ the Galois closure of the extension $E^{*}_{\Phi}/\mathbb{Q}$.

###### Corollary .

Suppose that at least one of the following conditions holds:

1. 1. The Galois closure $\widetilde{E}$ of the extension $E/\mathbb{Q}$ does not contain any complex quadratic subfields.
2. 2. $\widetilde{E^{*}_{\Phi}}$ does not contain any complex quadratic subfields.
3. 3. There does not exist a nontrivial irreducible real linear character $\chi$ of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ such that $m_{(E,\Phi)}(\chi)\neq 0$ and the homomorphism $\chi$ from $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ to $\mathbb{C}^{\times}$ has image $\\{\pm 1\\}$ and kernel $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/K)$ for some complex quadratic subfield $K$ of $\widetilde{E^{*}_{\Phi}}$.

(Note that Condition 1 implies Condition 2 since $E^{*}_{\Phi}\subset\widetilde{E}$, and Condition 2 implies Condition 3.) Then for any nontrivial odd irreducible character $\chi$ of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$, there is no zero of $L(s,\chi,\mathbb{Q})$ in the region $1-\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}.$

###### Proof.

Let $\chi$ be a nontrivial odd irreducible character of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ for which such a zero exists, and denote this zero as $\beta_{0}$. Then $\beta_{0}$ must be real and $\beta_{0}$ is also a simple zero of $\zeta_{\widetilde{E^{*}_{\Phi}}}(s)$. Therefore, by the theorem above, $\chi$ is a real linear character of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$, and the homomorphism $\chi$ from $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ to $\mathbb{C}^{\times}$ has image $\\{\pm 1\\}$ and kernel $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/K)$ for some quadratic subfield $K$ of $\widetilde{E^{*}_{\Phi}}$.
Since $\chi$ is an odd character, we have $\chi(\iota)=-\chi(1)$, where $\iota$ is the element in $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ induced by complex conjugation, and so $K/\mathbb{Q}$ must be a complex quadratic extension. Therefore, our claim follows. ∎

Since the compositum of two CM-fields is also a CM-field, the Galois closure of a CM-field (viewed as an extension over $\mathbb{Q}$) is also a CM-field. We know that the reflex field $E^{*}_{\Phi}$ of $(E,\Phi)$ is a CM-field. Therefore, $\widetilde{E^{*}_{\Phi}}$ is also a CM-field. We denote as $(\widetilde{E^{*}_{\Phi}})_{+}$ the maximal totally real subfield of $\widetilde{E^{*}_{\Phi}}$.

###### Proposition .

Let $c$ be a real number such that $0<c\leq\frac{1}{4}$. Suppose that the function $L(s,\chi_{\widetilde{E^{*}_{\Phi}}/(\widetilde{E^{*}_{\Phi}})_{+}})=\frac{\zeta_{\widetilde{E^{*}_{\Phi}}}}{\zeta_{(\widetilde{E^{*}_{\Phi}})_{+}}}$ has no zero in the region $1-\frac{c}{\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}.$ Then for any nontrivial odd irreducible character $\chi$ of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$, there is no zero of $L(s,\chi,\mathbb{Q})$ in the above region either.

###### Proof.

Let $\chi$ be a nontrivial odd irreducible character of $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ for which such a zero exists, and denote this zero as $\beta_{0}$. Then $\beta_{0}$ must be real and $\beta_{0}$ is also a simple zero of $\zeta_{\widetilde{E^{*}_{\Phi}}}(s)$. By our assumption on $L(s,\chi_{\widetilde{E^{*}_{\Phi}}/(\widetilde{E^{*}_{\Phi}})_{+}})$, $\beta_{0}$ cannot be a zero of $L(s,\chi_{\widetilde{E^{*}_{\Phi}}/(\widetilde{E^{*}_{\Phi}})_{+}})$. Therefore, $\beta_{0}$ is a zero of $\zeta_{(\widetilde{E^{*}_{\Phi}})_{+}}(s)$. Therefore, the field $\mathcal{K}_{\beta_{0},\widetilde{E^{*}_{\Phi}}/\mathbb{Q}}$ of the theorem in subsection 4.2 must be contained in the field $(\widetilde{E^{*}_{\Phi}})_{+}$, and so $\mathcal{K}_{\beta_{0},\widetilde{E^{*}_{\Phi}}/\mathbb{Q}}$ is a real quadratic field. By that theorem, since $L(\beta_{0},\chi,\mathbb{Q})=0$, $\chi$ is a group homomorphism from $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ to $\mathbb{C}^{\times}$ with kernel $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathcal{K}_{\beta_{0},\widetilde{E^{*}_{\Phi}}/\mathbb{Q}})$, and so $\chi(\iota)=\chi(1)=1$, where $\iota$ is the element in $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ induced by complex conjugation. This is a contradiction since the character $\chi$ is assumed to be odd. ∎

### 4.3 Proofs of the main theorems

###### Proof of the first theorem of section 1.

Let $g$ be a positive integer. Let $E$ be a CM-field with maximal totally real subfield $F$ of degree $[F:\mathbb{Q}]=g$. Let $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K}(A))$ be a CM abelian variety over a number field $K$ and let $\Phi$ be the CM-type of $(A,i)$. Then the field $K$ contains the reflex field $E^{*}_{\Phi}$. Thus, we have $\begin{split}\frac{1}{[K:\mathbb{Q}]}\log|\mathrm{disc}(K)|&\geq\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|\\\ &\geq\frac{1}{(2g)!}\log|\mathrm{disc}(E^{*}_{\Phi})|,\end{split}$ where the last inequality follows from the fact that the reflex field $E^{*}_{\Phi}$ is contained in the Galois closure $\widetilde{E}$ of the extension $E/\mathbb{Q}$.
By Lemma 8 and Lemma 9 of [Sta74], if there is a (necessarily real and simple) zero $\beta_{0}$ of $L(s,\chi_{\widetilde{E^{*}_{\Phi}}/(\widetilde{E^{*}_{\Phi}})_{+}})$ in the range $1-\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|},$ then there exists a complex quadratic subfield $K$ of $\widetilde{E^{*}_{\Phi}}$ such that $\zeta_{K}(\beta_{0})=0$ as well. Since the Riemann zeta function $\zeta_{\mathbb{Q}}(s)$ has no real zeros in the range $0<s<1$, this means that $\beta_{0}$ is a zero of the function $L(s,\chi_{K/\mathbb{Q}})=\frac{\zeta_{K}(s)}{\zeta_{\mathbb{Q}}(s)}$. Since $K$ is contained in $\widetilde{E^{*}_{\Phi}}$, we have $|\mathrm{disc}(\widetilde{E^{*}_{\Phi}})|\geq|\mathrm{disc}(K)|$. Therefore, $\beta_{0}$ is a Siegel zero of $L(s,\chi_{K/\mathbb{Q}})$. The result then follows from the results of subsections 4.1 and 4.2. ∎

It is proved by Colmez ([Col93]) and Obus ([Obu13]) that the Colmez conjecture is true when the CM-field is abelian:

###### Theorem .

Let $E$ be a CM-field such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group. Then we have $h_{(E,\Phi)}^{\mathrm{Falt}}=-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}$ for any CM-type $\Phi$ of $E$.

As a corollary, we can prove an unconditional analogue of the first theorem of section 1.

###### Proof of the second theorem of section 1.

Similarly to the proof above, the statement follows from the above-mentioned Lemma 8 and Lemma 9 of [Sta74], the results of subsections 4.1 and 4.2, and the theorem of subsection 4.3. ∎

## 5 The (proved) averaged Colmez conjecture

Although the formula $-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}$ in the Colmez conjecture appears very complicated, the average over all CM-types $\Phi$ of a CM-field $E$ is much simpler: As is conjectured on page 634 of [Col93] and proved in [YZ18] and [AGHMP18], we have the following proposition.

###### Proposition .

Let $E$ be a CM-field with maximal totally real subfield $F$. Then we have $\frac{1}{2^{[F:\mathbb{Q}]}}\sum_{\Phi}\biggl{(}-Z_{(E,\Phi)}-\frac{1}{2}\mu_{(E,\Phi)}\biggr{)}=-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|),$ where the sum on the left-hand-side is over all CM-types $\Phi$ of $E$. In other words, the Colmez conjecture implies the averaged Colmez conjecture stated below.

###### Theorem ((Proved) averaged Colmez conjecture).

Let $E$ be a CM-field with maximal totally real subfield $F$. Then we have $\frac{1}{2^{[F:\mathbb{Q}]}}\sum_{\Phi}h_{(E,\Phi)}^{\mathrm{Falt}}=-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|),$ (19) where the sum on the left-hand-side is over all CM-types $\Phi$ of $E$. This is proved independently by Yuan–Zhang [YZ18] and Andreatta–Goren–Howard–Madapusi-Pera [AGHMP18].

In the following, we use the proved averaged Colmez conjecture to prove averaged analogues of the first two theorems of section 1.

###### Proposition .

Let $g$ be a positive integer.
Suppose that there exists some effectively computable constant $C_{\mathrm{zero}}(g)\in\mathbb{R}_{\geq 4}$ depending only on $g$ such that for any CM-field $E$ with maximal totally real subfield $F$ such that $[F:\mathbb{Q}]=g$, the function $L(s,\chi_{E/F})$ has no zeros in the region $1-\frac{1}{C_{\mathrm{zero}}(g)\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|}.$ Then there exist effectively computable constants $C_{9}(g)>0$, $C_{10}(g)\in\mathbb{R}$ depending only on $g$ such that $h_{\mathrm{Falt}}^{\mathrm{st}}(A)\leq C_{9}(g)\log|\mathrm{disc}(E)|+C_{10}(g)$ for any CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and for any abelian variety $A$ over a number field with complex multiplication by $\mathcal{O}_{E}$.

###### Proof.

Let $E$ be any CM-field with maximal totally real subfield $F$ such that $[F:\mathbb{Q}]=g$. Denote as $\beta_{0}$ the (necessarily real and simple) zero of $L(s,\chi_{E/F})$ in the region $1-\frac{1}{4\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|}$ (if it exists). We define $\delta_{\chi_{E/F}}$ to be $1$ if $\beta_{0}$ exists, and we define $\delta_{\chi_{E/F}}$ to be $0$ otherwise. By an argument similar to the proof of the proposition in subsection 4.1, we have $\begin{split}&-\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{\delta_{\chi_{E/F}}}{1-\beta_{0}}\\\ \geq&\frac{1}{2}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)+\frac{g}{2}\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)},\end{split}$ and $\begin{split}&-\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{\delta_{\chi_{E/F}}}{1-\beta_{0}}\\\ <&\frac{75}{2}\log|\mathrm{disc}(E)|+\frac{1}{2}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)+\frac{g}{2}\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)}.\end{split}$

By our assumption, we then have $\begin{split}&-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\\\ <&\frac{1}{2}\Biggl{(}\frac{75}{2}\log|\mathrm{disc}(E)|+C_{\mathrm{zero}}(g)\log|\mathrm{disc}(E)|+\frac{g}{2}\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)}\Biggr{)}.\end{split}$

Let $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K}(A))$ be any CM abelian variety over a number field $K$. Let $\Phi_{0}$ be the CM-type of $(A,i)$. By the averaged Colmez conjecture (section 5), we have $h_{\mathrm{Falt}}^{\mathrm{st}}(A)=-\sum_{\Phi\neq\Phi_{0}}h_{(E,\Phi)}^{\mathrm{Falt}}+2^{g}\Biggl{(}-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\Biggr{)}.$ Let $C_{\mathrm{lower}}>0$ be as in the theorem of section 2. Then we have $\begin{split}h_{\mathrm{Falt}}^{\mathrm{st}}(A)&=-\sum_{\Phi\neq\Phi_{0}}h_{(E,\Phi)}^{\mathrm{Falt}}+2^{g}\Biggl{(}-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\Biggr{)}\\\ &\leq(2^{g}-1)gC_{\mathrm{lower}}\\\ &+2^{g}\cdot\frac{1}{2}\Biggl{(}\frac{75}{2}\log|\mathrm{disc}(E)|+C_{\mathrm{zero}}(g)\log|\mathrm{disc}(E)|+\frac{g}{2}\biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}-\log(\pi)\biggr{)}\Biggr{)}.\end{split}$ ∎

###### Proposition .
For any $g\in\mathbb{Z}_{\geq 1}$, there exist constants $C_{11}(g)>0$, $C_{12}(g)\in\mathbb{R}$ depending only on $g$ such that $h_{\mathrm{Falt}}^{\mathrm{st}}(A)\leq C_{11}(g)\log|\mathrm{disc}(E)|+C_{12}(g)$ for any CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ such that $E$ has no complex quadratic subfields and for any abelian variety $A$ over a number field with complex multiplication by $\mathcal{O}_{E}$.

###### Proof. Let $g$ be a positive integer. Let $E$ be a CM-field with maximal totally real subfield $F$ of degree $[F:\mathbb{Q}]=g$. By Lemma 9 of [Sta74], if there exists a (necessarily real and simple) zero $\beta_{0}$ of $L(s,\chi_{E/F})$ in the range $1-\frac{1}{16g!\log|\mathrm{disc}(E)|}\leq\mathrm{Re}(s)<1,|\operatorname{Im}(s)|\leq\frac{1}{4\log|\mathrm{disc}(E)|},$ then there exists a complex quadratic subfield $K$ of $E$ such that $\zeta_{K}(\beta_{0})=0$ as well. So if $E$ does not contain any complex quadratic subfields, then there is no such zero. The rest of the proof is similar to that of section 5. ∎

###### Lemma . Let $E$ be a CM-field with maximal totally real subfield $F$ of degree $[F:\mathbb{Q}]=g$. Let $\Phi_{1},\Phi_{2}$ be CM-types of $E$ such that $|\Phi_{1}\cap\Phi_{2}|=g-1$. Let $\varphi_{0}$ be the unique element in $\mathrm{Hom}_{\mathbb{Q}}(F,\mathbb{R})$ such that the element $\phi_{1}$ in $\Phi_{1}$ lying above $\varphi_{0}$ is not equal to the element $\phi_{2}$ in $\Phi_{2}$ lying above $\varphi_{0}$. We have $\phi_{1}=\phi_{2}\circ\iota$, where $\iota$ is the nontrivial element of $\mathrm{Gal}(E/F)$. It is easy to see that the subfield $\phi_{1}(E)$ of $\mathbb{C}$ is equal to the subfield $\phi_{2}(E)$ of $\mathbb{C}$. Let $E^{*}_{\Phi_{1}},E^{*}_{\Phi_{2}}$ be the reflex fields of $(E,\Phi_{1}),(E,\Phi_{2})$, respectively. Then the compositum of fields $E^{*}_{\Phi_{1}}E^{*}_{\Phi_{2}}$ contains the field $\phi_{1}(E)=\phi_{2}(E)$.

###### Proof. Since $E$ is a totally complex quadratic extension of the totally real field $F$, we can write $E=F[\sqrt{-\alpha_{E}}]$ for some totally positive $\alpha_{E}\in F$, where $\sqrt{-\alpha_{E}}$ is any square root of $-\alpha_{E}$ in $\overline{\mathbb{Q}}$. Thus, $\sum_{\phi\in\Phi_{1}}\phi(\sqrt{-\alpha_{E}})\in E^{*}_{\Phi_{1}}$ and $\sum_{\phi\in\Phi_{2}}\phi(\sqrt{-\alpha_{E}})\in E^{*}_{\Phi_{2}}$. By our assumption on $\Phi_{1},\Phi_{2}$ and $\varphi_{0}$, we have $\begin{split}\sum_{\phi\in\Phi_{1}}\phi(\sqrt{-\alpha_{E}})-\sum_{\phi\in\Phi_{2}}\phi(\sqrt{-\alpha_{E}})&=\phi_{1}(\sqrt{-\alpha_{E}})-\phi_{2}(\sqrt{-\alpha_{E}})\\\ &=2\phi_{1}(\sqrt{-\alpha_{E}})\\\ &=-2\phi_{2}(\sqrt{-\alpha_{E}}).\end{split}$ Therefore, the compositum of fields $E^{*}_{\Phi_{1}}E^{*}_{\Phi_{2}}$ contains the element $\phi_{1}(\sqrt{-\alpha_{E}})=-\phi_{2}(\sqrt{-\alpha_{E}})$. Let $\alpha_{F}$ be an element of $F$ such that $F=\mathbb{Q}[\alpha_{F}]$. Then, similarly to the above, since $\begin{split}\sum_{\phi\in\Phi_{1}}\phi(\alpha_{F}\sqrt{-\alpha_{E}})-\sum_{\phi\in\Phi_{2}}\phi(\alpha_{F}\sqrt{-\alpha_{E}})&=\phi_{1}(\alpha_{F}\sqrt{-\alpha_{E}})-\phi_{2}(\alpha_{F}\sqrt{-\alpha_{E}})\\\ &=\varphi_{0}(\alpha_{F})\phi_{1}(\sqrt{-\alpha_{E}})-\varphi_{0}(\alpha_{F})\phi_{2}(\sqrt{-\alpha_{E}})\\\ &=2\varphi_{0}(\alpha_{F})\phi_{1}(\sqrt{-\alpha_{E}})\\\ &=-2\varphi_{0}(\alpha_{F})\phi_{2}(\sqrt{-\alpha_{E}}),\end{split}$ the compositum of fields $E^{*}_{\Phi_{1}}E^{*}_{\Phi_{2}}$ contains the element $\varphi_{0}(\alpha_{F})\phi_{1}(\sqrt{-\alpha_{E}})=-\varphi_{0}(\alpha_{F})\phi_{2}(\sqrt{-\alpha_{E}})$.
Combined with the above, we see that the compositum of fields $E^{*}_{\Phi_{1}}E^{*}_{\Phi_{2}}$ contains the element $\varphi_{0}(\alpha_{F})$ and the element $\phi_{1}(\sqrt{-\alpha_{E}})=-\phi_{2}(\sqrt{-\alpha_{E}})$, and so it contains the field $\phi_{1}(E)=\phi_{2}(E)$. ∎

###### Remark . The CM-types $\Phi_{1},\Phi_{2}$ in section 5 form a pair of “nearby” CM-types considered in [YZ18].

###### Corollary . Let $E$ be a CM-field with maximal totally real subfield $F$ of degree $[F:\mathbb{Q}]=g$. Let $\Phi_{1},\Phi_{2}$ be CM-types of $E$ such that $|\Phi_{1}\cap\Phi_{2}|=g-1$. Then we have $\begin{split}|\mathrm{disc}(E^{*}_{\Phi_{1}})|^{1/[E^{*}_{\Phi_{1}}:\mathbb{Q}]}|\mathrm{disc}(E^{*}_{\Phi_{2}})|^{1/[E^{*}_{\Phi_{2}}:\mathbb{Q}]}\geq|\mathrm{disc}(E)|^{1/[E:\mathbb{Q}]},\end{split}$ where $E^{*}_{\Phi_{1}},E^{*}_{\Phi_{2}}$ are the reflex fields of $(E,\Phi_{1}),(E,\Phi_{2})$, respectively.

###### Proof. This follows from section 5 and Equation (15). ∎

###### Proof of section 1. This follows from section 5, section 5, and the fact that the field of definition of any CM abelian variety contains the reflex field. ∎

###### Proof of section 1. This follows from section 5, section 5, and the fact that the field of definition of any CM abelian variety contains the reflex field. ∎

## 6 Field of everywhere good reduction of CM abelian varieties

We know that any abelian variety over a number field with complex multiplication by a CM-field has potential good reduction everywhere. In this section, we show that the logarithm of the root discriminant of the field of everywhere good reduction can be small compared with the logarithm of the discriminant of the CM-field.

###### Lemma . Let $A$ be an abelian variety over a number field $K$. Let $L_{1},L_{2}$ be number fields containing $K$. If the abelian variety $A_{L_{1}}/L_{1}$ and the abelian variety $A_{L_{2}}/L_{2}$ both have everywhere good reduction, then the abelian variety $A_{L_{1}\cap L_{2}}/L_{1}\cap L_{2}$ has everywhere good reduction.

###### Proof. This follows from the Néron–Ogg–Shafarevich criterion. ∎

By part (b) of Corollary 2 to Theorem 2 of [ST68], we have the following theorem:

###### Theorem . Let $A$ be an abelian variety over a number field $K$. Let $\mathfrak{p}$ be a prime ideal of $\mathcal{O}_{K}$. Let $p$ be the characteristic of the residue field $\mathcal{O}_{K}/\mathfrak{p}$. Suppose that $A/K$ has potential good reduction at $\mathfrak{p}$. Let $m$ be any integer $\geq 3$ and prime to $p$. Let $K(A[m])$ be the minimal field of definition of the set of $m$-torsion points $A[m]$ of $A$. The following are equivalent: (a) The extension $K(A[m])/K$ is unramified at $\mathfrak{p}$. (b) The abelian variety $A/K$ has good reduction at $\mathfrak{p}$.

###### Corollary . Let $K$ be a number field. Let $A$ be an abelian variety over $K$ with potential good reduction everywhere. Let $S_{A/K}$ be the set of all prime ideals of $\mathcal{O}_{K}$ where the abelian variety $A$ over $K$ does not have good reduction. There exists a finite Galois extension $L/K$, unramified at all primes $\mathfrak{p}$ of $\mathcal{O}_{K}$ with $\mathfrak{p}\notin S_{A/K}$, such that the abelian variety $A_{L}/L$ has good reduction everywhere.

###### Proof. We first fix a prime $p_{1}$ such that the abelian variety $A/K$ has good reduction at every prime ideal $\mathfrak{p}_{1}$ of $\mathcal{O}_{K}$ above $p_{1}$. Let $L_{1}\coloneqq K(A[p_{1}])$ be the minimal field of definition of the set of $p_{1}$-torsion points $A[p_{1}]$ of $A$.
By section 6, we can show that $L_{1}/K$ is a finite Galois extension unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$ and the characteristic of the residue field $\mathcal{O}_{K}/\mathfrak{p}$ is not equal to $p_{1}$, and the abelian variety $A_{L_{1}}/L_{1}$ has everywhere good reduction. Next, we fix a prime $p_{2}$ not equal to $p_{1}$ such that the abelian variety $A/K$ has good reduction at every prime ideal $\mathfrak{p}_{2}$ of $\mathcal{O}_{K}$ above $p_{2}$. Let $L_{2}\coloneqq K(A[p_{2}])$ be the minimal field of definition of the set of $p_{2}$-torsion points $A[p_{2}]$ of $A$. Again by section 6, we can show that $L_{2}/K$ is a finite Galois extension unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$ and the characteristic of the residue field $\mathcal{O}_{K}/\mathfrak{p}$ is not equal to $p_{2}$, and the abelian variety $A_{L_{2}}/L_{2}$ has everywhere good reduction. Now consider the extension $L_{1}\cap L_{2}$ of $K$. It is a finite Galois extension unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$ (since $p_{1}\neq p_{2}$). Since the abelian varieties $A_{L_{1}}/L_{1}$ and $A_{L_{2}}/L_{2}$ both have everywhere good reduction, by section 6, the abelian variety $A_{L_{1}\cap L_{2}}/L_{1}\cap L_{2}$ also has everywhere good reduction. Taking $L=L_{1}\cap L_{2}$, we get our claim. ∎

The following lemma shows that, in terms of ramification, the extension $L/K$ in section 6 is the “best possible”.

###### Lemma . Let $K$ be a number field. Let $K^{\prime}/K$ be a finite extension. Let $\mathfrak{p}^{\prime}$ be a prime ideal of $\mathcal{O}_{K^{\prime}}$, lying above a prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$. Let $A$ be an abelian variety over $K$. If the extension $K^{\prime}/K$ is unramified at $\mathfrak{p}^{\prime}$ and the abelian variety $A_{K^{\prime}}/K^{\prime}$ has good reduction at $\mathfrak{p}^{\prime}$, then the abelian variety $A/K$ has good reduction at $\mathfrak{p}$.

###### Proof. This follows from the Néron–Ogg–Shafarevich criterion. ∎

By Theorem 7 and the remarks before Theorem 7 of [ST68], we have the following theorem:

###### Theorem . Let $K$ be a number field. Let $E$ be a CM-field. Let $A$ be an abelian variety over $K$ with complex multiplication by $E$. Let $\mu(E)$ be the group of all roots of unity in $E$. There exists a cyclic extension $C$ of $K$ of degree $[C:K]\leq 2\cdot\\#\mu(E)$, such that the abelian variety $A_{C}$ over $C$ has everywhere good reduction.

###### Corollary . Let $K$ be a number field. Let $E$ be a CM-field. Let $A$ be an abelian variety over $K$ with complex multiplication by $E$. Let $\mu(E)$ be the group of all roots of unity in $E$. Let $S_{A/K}$ be the set of all prime ideals of $\mathcal{O}_{K}$ where the abelian variety $A$ over $K$ does not have good reduction. There exists a cyclic extension $K^{\prime}$ of $K$ of degree $[K^{\prime}:K]\leq 2\cdot\\#\mu(E)$, with $K^{\prime}/K$ unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$, such that the abelian variety $A_{K^{\prime}}$ over $K^{\prime}$ has everywhere good reduction.

###### Proof. Let $C/K$ be the finite cyclic extension in section 6 and let $L/K$ be the finite Galois extension in section 6. Let $K^{\prime}=C\cap L$.
Then $K^{\prime}/K$ is a cyclic extension of degree $[K^{\prime}:K]\leq 2\cdot\\#\mu(E)$ and $K^{\prime}/K$ is unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$. By section 6, the abelian variety $A_{K^{\prime}}/K^{\prime}$ has everywhere good reduction. Hence we get our claim. ∎

In order to prove section 1 and section 1, we will also need the following theorem, which is a combination of Corollary A.4.6.5, Theorem A.4.5.1 and Remark A.4.5.2 of [CCO14].

###### Theorem . Let $E$ be a CM-field and let $\Phi$ be a CM-type of $E$. Let $E^{*}_{\Phi}$ be the reflex field of $(E,\Phi)$ and let $M$ be the field of moduli for the reflex norm of $(E,\Phi)$ ($M$ is an everywhere unramified finite abelian extension of $E^{*}_{\Phi}$). There exists a prime $p$ and a CM abelian variety $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{M}(A))$ over $M$ of CM-type $\Phi$ such that $A$ has good reduction at every prime ideal of $\mathcal{O}_{M}$ outside $p$. Moreover, we can choose $p$ such that $p\leq 2\cdot|\mathrm{disc}(E(\mu_{mp_{1}p_{2}\cdots p_{s}}))|^{C_{\mathrm{prime}}},$ (20) where $C_{\mathrm{prime}}$ is an effectively computable absolute constant in $\mathbb{R}_{>0}$; for any positive integer $n$, $\mu_{n}$ denotes a primitive $n$-th root of unity; $m$ is the order of the group $\mu(E)$ of all roots of unity in $E$; and $p_{1},p_{2},\cdots,p_{s}$ are the distinct prime divisors of $m$. Assuming the Generalized Riemann Hypothesis, the above bound on $p$ can be improved to $p\leq 70\cdot\biggl{(}\log|\mathrm{disc}(E(\mu_{mp_{1}p_{2}\cdots p_{s}}))|\biggr{)}^{2}.$ (21)

###### Proof of section 1. Assume the Generalized Riemann Hypothesis. Let $g$ be a positive integer. Let $E$ be a CM-field such that $[E:\mathbb{Q}]=2g$. Let $\Phi$ be a CM-type of $E$. Let $E^{*}_{\Phi}$ be the reflex field of $(E,\Phi)$ and let the field $M$, the abelian variety $A$ over $M$ and the prime $p$ be as in section 6, such that the upper bound on $p$ is given by Equation (21). Denote $K\coloneqq M$. As in section 6, let $S_{A/K}$ be the set of all prime ideals of $\mathcal{O}_{K}$ where the abelian variety $A$ over $K$ does not have good reduction. By our choice of $A$, for any $\mathfrak{p}\in S_{A/K}$, $\mathfrak{p}$ lies above the prime $p$. Let $K^{\prime}$ be the cyclic extension of $K$ in section 6 of degree $[K^{\prime}:K]\leq 2\cdot\\#\mu(E)$, with $K^{\prime}/K$ unramified at any prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$ such that $\mathfrak{p}\notin S_{A/K}$, and such that the abelian variety $A_{K^{\prime}}$ over $K^{\prime}$ has everywhere good reduction. Therefore, the extension $K^{\prime}/K$ is ramified only at the prime ideals $\mathfrak{q}$ of $\mathcal{O}_{K^{\prime}}$ such that $\mathfrak{q}$ lies above the prime $p$. Let $\mathcal{D}_{K^{\prime}/K}$ be the different of the extension $K^{\prime}/K$. By Chapter 3, Section 6 of [Ser79], we have $e_{\mathfrak{q}/\mathfrak{p}}-1\leq\mathrm{val}_{\mathfrak{q}}(\mathcal{D}_{K^{\prime}/K})\leq e_{\mathfrak{q}/\mathfrak{p}}-1+\mathrm{val}_{\mathfrak{q}}(e_{\mathfrak{q}/\mathfrak{p}}),$ for any prime ideal $\mathfrak{q}$ of $\mathcal{O}_{K^{\prime}}$ lying above a prime ideal $\mathfrak{p}$ of $\mathcal{O}_{K}$, where $e_{\mathfrak{q}/\mathfrak{p}}$ is the ramification index of $\mathfrak{q}|\mathfrak{p}$.
This means that we have $\mathrm{val}_{\mathfrak{q}}(\mathcal{D}_{K^{\prime}/K})\leq 2e_{\mathfrak{q}/p}e_{\mathfrak{q}/\mathfrak{p}}\leq 2e_{\mathfrak{q}/p}[K^{\prime}:K],$ where $e_{\mathfrak{q}/p}$ is the ramification index of the prime ideal $\mathfrak{q}$ of $\mathcal{O}_{K^{\prime}}$ lying above the prime ideal $(p)$ of $\mathbb{Z}$. Therefore, we have $\mathcal{D}_{K^{\prime}/K}\Biggl{|}\prod_{\begin{subarray}{c}\mathfrak{q}\subset\mathcal{O}_{K^{\prime}}\\\ \mathfrak{q}|p\end{subarray}}\mathfrak{q}^{e_{\mathfrak{q}/p}\cdot 2\cdot\\#\mu(E)},$ where the product is over the prime ideals $\mathfrak{q}$ of $\mathcal{O}_{K^{\prime}}$ above $p$. Thus, we have $\begin{split}\log\biggl{(}\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/K})\biggr{)}&\leq\log\Biggl{(}\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}\biggl{(}\prod_{\begin{subarray}{c}\mathfrak{q}\subset\mathcal{O}_{K^{\prime}}\\\ \mathfrak{q}|p\end{subarray}}\mathfrak{q}^{e_{\mathfrak{q}/p}\cdot 2\cdot\\#\mu(E)}\biggr{)}\Biggr{)}\\\ &\leq 2\cdot\\#\mu(E)\sum_{\begin{subarray}{c}\mathfrak{q}\subset\mathcal{O}_{K^{\prime}}\\\ \mathfrak{q}|p\end{subarray}}e_{\mathfrak{q}/p}\cdot\log\biggl{(}\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathfrak{q})\biggr{)}\\\ &=2\cdot\\#\mu(E)\sum_{\begin{subarray}{c}\mathfrak{q}\subset\mathcal{O}_{K^{\prime}}\\\ \mathfrak{q}|p\end{subarray}}e_{\mathfrak{q}/p}f_{\mathfrak{q}/p}\log(p)\\\ &=2\cdot\\#\mu(E)[K^{\prime}:\mathbb{Q}]\log(p),\end{split}$ (22) where $f_{\mathfrak{q}/p}$ is the residue degree of the prime ideal $\mathfrak{q}$ of $\mathcal{O}_{K^{\prime}}$ lying above the prime ideal $(p)$ of $\mathbb{Z}$. Since the extension $K/E^{*}_{\Phi}$ is unramified, the different $\mathcal{D}_{K/E^{*}_{\Phi}}$ of the extension $K/E^{*}_{\Phi}$ is equal to the unit ideal of $\mathcal{O}_{K}$. Thus, we have $\begin{split}&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log|\mathrm{disc}(K^{\prime})|\\\ =&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/\mathbb{Q}}))\\\ =&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/K}\mathcal{D}_{K/E^{*}_{\Phi}}\mathcal{D}_{E^{*}_{\Phi}/\mathbb{Q}}))\\\ =&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/K}\mathcal{D}_{E^{*}_{\Phi}/\mathbb{Q}}))\\\ =&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/K}))+\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{E^{*}_{\Phi}/\mathbb{Q}}))\\\ =&\frac{1}{[K^{\prime}:\mathbb{Q}]}\log(\mathrm{Norm}_{K^{\prime}/\mathbb{Q}}(\mathcal{D}_{K^{\prime}/K}))+\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|\\\ \leq&2\cdot\\#\mu(E)\log(p)+\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|,\end{split}$ (23) where the last inequality follows from Equation (22). By our assumption on $p$, we have $p\leq 70\cdot\biggl{(}\log|\mathrm{disc}(E(\mu_{mp_{1}p_{2}\cdots p_{s}}))|\biggr{)}^{2}.$ Thus, we have $\begin{split}\log(p)&\leq\log(70)+2\log\log|\mathrm{disc}(E(\mu_{mp_{1}p_{2}\cdots p_{s}}))|\\\ &\leq\log(70)+2\log\log\biggl{(}((4g)^{4})^{(4g)^{4}\cdot 2g}(|\mathrm{disc}(E)|)^{(4g)^{4}}\biggr{)},\end{split}$ (24) where the last inequality is by the following section 6. By the following section 6, we have $\begin{split}\\#\mu(E)\leq(4g)^{2}.\end{split}$ (25) Plugging Equation (24) and Equation (25) into Equation (23), we get our claim. ∎

###### Lemma .
Let $K$ be a number field of degree $[K:\mathbb{Q}]=n$. Let $\mu(K)$ be the group of all roots of unity in $K$. Then $\mu(K)$ is a finite cyclic group of order less than or equal to $(2n)^{2}$.

###### Proof. Since $[K:\mathbb{Q}]$ is finite, $\mu(K)$ is a finite group. It is easy to see that $\mu(K)$ is a cyclic group. Let $m$ be the order of the group $\mu(K)$, and let $p_{1},p_{2},\cdots,p_{s}$ be the distinct prime divisors of $m$. Denote as $\mu_{m}$ a primitive $m$-th root of unity. Then $K$ contains the $m$-th cyclotomic field $\mathbb{Q}(\mu_{m})$. Since $[\mathbb{Q}(\mu_{m}):\mathbb{Q}]=\\#(\mathbb{Z}/m\mathbb{Z})^{\times}=\frac{m(p_{1}-1)(p_{2}-1)\cdots(p_{s}-1)}{p_{1}p_{2}\cdots p_{s}}$ and $[K:\mathbb{Q}]=n$, this means that $\frac{m(p_{1}-1)(p_{2}-1)\cdots(p_{s}-1)}{p_{1}p_{2}\cdots p_{s}}\leq n.$ For any prime $p\neq 2$, we have $\sqrt{p}\leq p-1$. Thus, we have $\begin{split}\frac{m(p_{1}-1)(p_{2}-1)\cdots(p_{s}-1)}{p_{1}p_{2}\cdots p_{s}}&\geq\frac{m\sqrt{p_{1}}\sqrt{p_{2}}\cdots\sqrt{p_{s}}}{2p_{1}p_{2}\cdots p_{s}}\\\ &=\frac{m}{2\sqrt{p_{1}}\sqrt{p_{2}}\cdots\sqrt{p_{s}}}\\\ &\geq\frac{m}{2\sqrt{m}}\\\ &=\sqrt{m}/2.\end{split}$ Therefore, we have $m\leq(2n)^{2}.$ ∎

###### Lemma . Let $K$ be a number field of degree $[K:\mathbb{Q}]=n$. Then we have $|\mathrm{disc}(K(\mu_{mp_{1}p_{2}\cdots p_{s}}))|\leq((2n)^{4})^{(2n)^{4}\cdot n}(|\mathrm{disc}(K)|)^{(2n)^{4}},$ where, for any positive integer $k$, $\mu_{k}$ denotes a primitive $k$-th root of unity, $m$ is the order of the group $\mu(K)$ of all roots of unity in $K$, and $p_{1},p_{2},\cdots,p_{s}$ are the distinct prime divisors of $m$.

###### Proof. For any $k\in\mathbb{Z}_{\geq 3}$, the $k$-th cyclotomic field $\mathbb{Q}(\mu_{k})$ has degree $[\mathbb{Q}(\mu_{k}):\mathbb{Q}]=\\#(\mathbb{Z}/k\mathbb{Z})^{\times}$. By [MP05], we have: $\mathrm{disc}(\mathbb{Q}(\mu_{k}))=(-1)^{\varphi(k)/2}\frac{k^{\varphi(k)}}{\prod_{p|k}p^{\varphi(k)/(p-1)}},$ where $\varphi(k)=\\#(\mathbb{Z}/k\mathbb{Z})^{\times}$ is Euler’s totient function, and the product in the denominator on the right-hand side is over primes $p$ dividing $k$. Thus, we have $|\mathrm{disc}(\mathbb{Q}(\mu_{k}))|\leq k^{k}.$ By section 6, we have $m\leq(2n)^{2}.$ Thus, the $mp_{1}p_{2}\cdots p_{s}$-th cyclotomic field $\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}})$ has degree $\begin{split}[\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}}):\mathbb{Q}]&=\\#(\mathbb{Z}/mp_{1}p_{2}\cdots p_{s}\mathbb{Z})^{\times}\\\ &=m(p_{1}-1)(p_{2}-1)\cdots(p_{s}-1)\\\ &\leq m^{2}\\\ &\leq(2n)^{4}.\end{split}$ Moreover, $mp_{1}p_{2}\cdots p_{s}\leq m^{2}\leq(2n)^{4}$. Thus, we have $|\mathrm{disc}(\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}}))|\leq((2n)^{4})^{(2n)^{4}}.$ We know that $K(\mu_{mp_{1}p_{2}\cdots p_{s}})$ is equal to the compositum of $K$ and $\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}})$. Thus, by Equation (16), we have $\begin{split}|\mathrm{disc}(K(\mu_{mp_{1}p_{2}\cdots p_{s}}))|&\leq|\mathrm{disc}(K)|^{[\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}}):\mathbb{Q}]}|\mathrm{disc}(\mathbb{Q}(\mu_{mp_{1}p_{2}\cdots p_{s}}))|^{[K:\mathbb{Q}]}\\\ &\leq((2n)^{4})^{(2n)^{4}\cdot n}(|\mathrm{disc}(K)|)^{(2n)^{4}}.\end{split}$ ∎

###### Proof of section 1. The proof is similar to that of section 1 (using section 6 and section 6). The difference is that, since we do not assume the Generalized Riemann Hypothesis, we use Equation (20) in section 6 to bound the prime $p$ instead of Equation (21).
The term $\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|$ of Equation (1) is absorbed into the term $C_{15}(g)\log|\mathrm{disc}(E)|$ of Equation (2) by the fact that the reflex field $E^{*}_{\Phi}$ is contained in the Galois closure $\widetilde{E}$ of the extension $E/\mathbb{Q}$ (and so $\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|$ is less than or equal to $\frac{1}{[\widetilde{E}:\mathbb{Q}]}\log(|\mathrm{disc}(\widetilde{E})|)$) and the fact that $\frac{1}{[\widetilde{E}:\mathbb{Q}]}\log(|\mathrm{disc}(\widetilde{E})|)\leq\log|\mathrm{disc}(E)|$ by Equation (17). ∎

###### Remark . In comparison with section 1, one might ask how the right-hand side of the formula in the (proved) averaged Colmez conjecture behaves under the Generalized Riemann Hypothesis. The right-hand side of Equation (19) is equal to $\begin{split}&-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\\\ =&\frac{1}{2}\frac{L^{\prime}(1,\chi_{E/F})}{L(1,\chi_{E/F})}+\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\\\ &+\frac{1}{2}\cdot\frac{g}{2}\Biggl{(}\frac{\Gamma^{\prime}\bigl{(}\frac{1}{2}\bigr{)}}{\Gamma\bigl{(}\frac{1}{2}\bigr{)}}+\frac{\Gamma^{\prime}(1)}{\Gamma(1)}-2\log(\pi)\Biggr{)},\end{split}$ where $g\coloneqq[F:\mathbb{Q}]$. The equality follows from logarithmically differentiating the functional equation of $L(s,\chi_{E/F})$ at $s=0$. Assume the Generalized Riemann Hypothesis. Then for any $g\in\mathbb{Z}_{\geq 1}$, there exist constants $C_{\mathrm{GRH},1}(g)>0,C_{\mathrm{GRH},2}(g)\in\mathbb{R}$ depending only on $g$ such that $\biggl{|}\frac{L^{\prime}(1,\chi_{E/F})}{L(1,\chi_{E/F})}\biggr{|}\leq C_{\mathrm{GRH},1}(g)\log\log|\mathrm{disc}(E)|+C_{\mathrm{GRH},2}(g)$ for any CM-field $E$ with maximal totally real subfield $F$ such that $[F:\mathbb{Q}]=g$. Then it is easy to see that for any $g\in\mathbb{Z}_{\geq 1}$, for any $\epsilon>0$, there exists a constant $c(g,\epsilon)>0$ depending only on $g$ and $\epsilon$ such that $\begin{split}&\Biggl{|}\biggl{(}-\frac{1}{2}\frac{L^{\prime}(0,\chi_{E/F})}{L(0,\chi_{E/F})}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\biggr{)}-\frac{1}{4}\log(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)\Biggr{|}\\\ <&\epsilon\log|\mathrm{disc}(E)|,\end{split}$ for any CM-field $E$ with maximal totally real subfield $F$ such that $[E:\mathbb{Q}]=2g$ and $|\mathrm{disc}(E)|\geq c(g,\epsilon)$. Since $|\mathrm{disc}(E)|/|\mathrm{disc}(F)|\leq|\mathrm{disc}(E)|\leq(|\mathrm{disc}(E)|/|\mathrm{disc}(F)|)^{2}$, this means that assuming the Generalized Riemann Hypothesis, the right-hand side of Equation (19) is “approximately some constant times $\log|\mathrm{disc}(E)|$”.

###### Remark . One might wonder whether there is a lower bound for $|\mathrm{disc}(E^{*}_{\Phi})|$ in terms of $|\mathrm{disc}(E)|$ and $[E:\mathbb{Q}]$. The following example shows that the answer is no: Let $F$ be any totally real number field. Let $-d\in\mathbb{Z}_{\leq-2}$ be any fundamental discriminant (so $\mathrm{disc}(\mathbb{Q}(\sqrt{-d}))=-d$) such that $-d$ is prime to $\mathrm{disc}(F)$. (For any totally real number field $F$, there are infinitely many such $-d$.) Let $E$ be the compositum of the fields $F$ and $\mathbb{Q}(\sqrt{-d})$. Then $E$ is a CM-field with maximal totally real subfield $F$.
Let $\Phi$ be the CM-type defined as follows: For any $\varphi_{0}\in\mathrm{Hom}_{\mathbb{Q}}(F,\mathbb{R})$, the element $\phi\colon E\to\mathbb{C}$ in $\Phi$ lying above $\varphi_{0}$ always sends $\sqrt{-d}$ to $\sqrt{-d}$. Then it is easy to see that $E^{*}_{\Phi}=\mathbb{Q}(\sqrt{-d})$. Thus, $\mathrm{disc}(E^{*}_{\Phi})=\mathrm{disc}(\mathbb{Q}(\sqrt{-d}))=-d$. Since $\mathrm{disc}(\mathbb{Q}(\sqrt{-d}))=-d$ is coprime to $\mathrm{disc}(F)$, by Theorem 4.26 of [Nar90], for example, we have $\begin{split}|\mathrm{disc}(E)|=d^{[F:\mathbb{Q}]}|\mathrm{disc}(F)|^{2}.\end{split}$ Therefore, for any fixed $g\in\mathbb{Z}_{\geq 2}$, the quotient $\frac{\log|\mathrm{disc}(E^{*}_{\Phi})|}{\log|\mathrm{disc}(E)|},$ where $E$ is a CM-field of degree $[E:\mathbb{Q}]=2g$ and $\Phi$ is a CM-type of $E$, can be arbitrarily small.

Combining section 6 with section 1, we have shown the following:

###### Proposition . Assume the Generalized Riemann Hypothesis. For any $g\in\mathbb{Z}$ such that $g\geq 2$, for any $\epsilon>0$, there exists a CM-field $E$ with $[E:\mathbb{Q}]=2g$, a CM-type $\Phi$ of $E$, a number field $K^{\prime}$ and a CM abelian variety $(A,i\colon\mathcal{O}_{E}\hookrightarrow\mathrm{End}_{K^{\prime}}(A))$ over $K^{\prime}$ of CM-type $\Phi$ such that the abelian variety $A$ over $K^{\prime}$ has everywhere good reduction and $\begin{split}\frac{1}{[K^{\prime}:\mathbb{Q}]}\log|\mathrm{disc}(K^{\prime})|\leq\epsilon\log|\mathrm{disc}(E)|.\end{split}$

###### Remark . In view of section 6 and section 6, we cannot remove the “average” condition in section 1 and section 1: using only the (proved) averaged Colmez conjecture, we can only prove averaged analogues of section 1 and section 1.

###### Remark . In Theorem 6(i) of [Col98], Colmez proved that there exist effectively computable absolute constants $C_{\mathrm{Col},3}>0$, $C_{\mathrm{Col},4}\in\mathbb{R}$ such that for any CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and any CM-type $\Phi$ of $E$ for which the following hold:

1. $(E,\Phi)$ satisfies the Colmez conjecture,

2. for any irreducible Artin character $\chi$ such that $m_{(E,\Phi)}(\chi)\neq 0$, the Artin conjecture for $\chi$ holds (i.e. the Artin $L$-function $L(s,\chi,\mathbb{Q})$ is holomorphic everywhere except possibly for a simple pole at $s=1$),

we have $\begin{split}h_{(E,\Phi)}^{\mathrm{Falt}}\geq C_{\mathrm{Col},3}\cdot\mu_{(E,\Phi)}+gC_{\mathrm{Col},4}.\end{split}$ Let $E$ be a CM-field of degree $[E:\mathbb{Q}]=2g$ and let $\Phi$ be a CM-type of $E$. It is easy to see that for the function ${A_{(E,\Phi)}^{0}}$ from $\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$ to $\mathbb{C}$, for any $\sigma\in\mathrm{Gal}(\widetilde{E^{*}_{\Phi}}/\mathbb{Q})$, ${A_{(E,\Phi)}^{0}}(\sigma)=g$ if and only if $\sigma=1$. Therefore, some calculations using the definition of the Artin conductor of Artin characters show that for any $g\in\mathbb{Z}_{\geq 1}$, there exist effectively computable constants $C_{\mu,1}(g)>0$, $C_{\mu,2}(g)\in\mathbb{R}$ such that $\mu_{(E,\Phi)}\geq C_{\mu,1}(g)\frac{1}{[E^{*}_{\Phi}:\mathbb{Q}]}\log|\mathrm{disc}(E^{*}_{\Phi})|+C_{\mu,2}(g)$ for any CM-field $E$ of degree $[E:\mathbb{Q}]=2g$ and any CM-type $\Phi$ of $E$. We can compare this to section 1.

## References

* [AGHMP18] Fabrizio Andreatta, Eyal Z. Goren, Benjamin Howard, and Keerthi Madapusi Pera. Faltings heights of abelian varieties with complex multiplication. Ann. of Math. (2), 187(2):391–531, 2018.
* [Art86] M. Artin. Néron models. In Arithmetic geometry (Storrs, Conn., 1984), pages 213–230. Springer, New York, 1986.
* [BK94] Armand Brumer and Kenneth Kramer. The conductor of an abelian variety. Compositio Math., 92(2):227–248, 1994.
* [BLR90] Siegfried Bosch, Werner Lütkebohmert, and Michel Raynaud. Néron models, volume 21 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Springer-Verlag, Berlin, 1990.
* [Bos96] Jean-Benoît Bost. Arakelov geometry of abelian varieties. In Conference on Arithmetical Geometry, Berlin, March 21–26, 1996. Max-Planck-Institut für Mathematik Bonn. Preprint 96–51.
* [CCO14] Ching-Li Chai, Brian Conrad, and Frans Oort. Complex multiplication and lifting problems, volume 195 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2014.
* [Cha86] Ching-Li Chai. Siegel moduli schemes and their compactifications over ${\bf C}$. In Arithmetic geometry (Storrs, Conn., 1984), pages 231–251. Springer, New York, 1986.
* [Col93] Pierre Colmez. Périodes des variétés abéliennes à multiplication complexe. Ann. of Math. (2), 138(3):625–683, 1993.
* [Col98] Pierre Colmez. Sur la hauteur de Faltings des variétés abéliennes à multiplication complexe. Compositio Math., 111(3):359–368, 1998.
* [Dav80] Harold Davenport. Multiplicative number theory, volume 74 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, second edition, 1980. Revised by Hugh L. Montgomery.
* [Fal83] G. Faltings. Endlichkeitssätze für abelsche Varietäten über Zahlkörpern. Invent. Math., 73(3):349–366, 1983.
* [GS00] Andrew Granville and H. M. Stark. $abc$ implies no “Siegel zeros” for $L$-functions of characters with negative discriminant. Invent. Math., 139(3):509–523, 2000.
* [Hin07] Marc Hindry. Why is it difficult to compute the Mordell-Weil group? In Diophantine geometry, volume 4 of CRM Series, pages 197–219. Ed. Norm., Pisa, 2007.
* [How18] Benjamin Howard. On the averaged Colmez conjecture, 2018.
* [Lan59] Serge Lang. Abelian varieties. Interscience Tracts in Pure and Applied Mathematics, No. 7. Interscience Publishers, Inc., New York; Interscience Publishers Ltd., London, 1959.
* [Lev96] B. Ya. Levin. Lectures on entire functions, volume 150 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1996. In collaboration with and with a preface by Yu. Lyubarskii, M. Sodin and V. Tkachenko; translated from the Russian manuscript by Tkachenko.
* [Mil72] J. S. Milne. On the arithmetic of abelian varieties. Invent. Math., 17:177–190, 1972.
* [Mil05] J. S. Milne. Introduction to Shimura varieties. In Harmonic analysis, the trace formula, and Shimura varieties, volume 4 of Clay Math. Proc., pages 265–378. Amer. Math. Soc., Providence, RI, 2005.
* [Mil08] James S. Milne. Abelian varieties (v2.00), 2008. Available at www.jmilne.org/math/.
* [Mil17] J. S. Milne. Algebraic groups, volume 170 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2017. The theory of group schemes of finite type over a field.
* [Mil20] J. S. Milne. Class field theory (v4.03), 2020. Available at www.jmilne.org/math/.
* [MM97] M. Ram Murty and V. Kumar Murty. Non-vanishing of $L$-functions and applications. Modern Birkhäuser Classics. Birkhäuser/Springer Basel AG, Basel, 1997. 2011 reprint of the 1997 original.
* [MP05] Yuri Ivanovic Manin and Alexei A. Panchishkin. Introduction to modern number theory, volume 49 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin, second edition, 2005. Fundamental problems, ideas and theories, translated from the Russian.
* [Né64] André Néron. Modèles minimaux des variétés abéliennes sur les corps locaux et globaux. Inst. Hautes Études Sci. Publ. Math., (21):128, 1964.
* [Nar90] Władysław Narkiewicz. Elementary and analytic theory of algebraic numbers. Springer-Verlag, Berlin; PWN—Polish Scientific Publishers, Warsaw, second edition, 1990.
* [Obu13] Andrew Obus. On Colmez’s product formula for periods of CM-abelian varieties. Math. Ann., 356(2):401–418, 2013.
* [Ogg67] A. P. Ogg. Elliptic curves and wild ramification. Amer. J. Math., 89:1–21, 1967.
* [Ser79] Jean-Pierre Serre. Local fields, volume 67 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1979. Translated from the French by Marvin Jay Greenberg.
* [Shi98] Goro Shimura. Abelian varieties with complex multiplication and modular functions, volume 46 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1998.
* [ST68] Jean-Pierre Serre and John Tate. Good reduction of abelian varieties. Ann. of Math. (2), 88:492–517, 1968.
* [Sta74] H. M. Stark. Some effective cases of the Brauer-Siegel theorem. Invent. Math., 23:135–152, 1974.
* [Tat84] John Tate. Les conjectures de Stark sur les fonctions $L$ d’Artin en $s=0$, volume 47 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 1984. Lecture notes edited by Dominique Bernardi and Norbert Schappacher.
* [Wei76] André Weil. Elliptic functions according to Eisenstein and Kronecker. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 88. Springer-Verlag, Berlin-New York, 1976.
* [Yua19] Xinyi Yuan. On Faltings heights of abelian varieties with complex multiplication. In Proceedings of the Seventh International Congress of Chinese Mathematicians, Vol. I, volume 43 of Adv. Lect. Math. (ALM), pages 521–536. Int. Press, Somerville, MA, 2019.
* [YZ18] Xinyi Yuan and Shou-Wu Zhang. On the averaged Colmez conjecture. Ann. of Math. (2), 187(2):533–638, 2018.
galaxies are found to have high stellar masses in our SED-fitting, with a median stellar mass of $\log_{10}(M_{\star}/M_{\odot})=10.30$ in our fiducial Bagpipes results, with five of them forming the most massive galaxies in our fiducial sample. It is likely that at least some of these objects do contain AGN, which would lower the inferred stellar masses and hence lower the high-mass end of the GSMF. Further observations with NIRSpec or MIRI are essential to ascertain the true nature of these sources. In Appendix A we recompute the $z=7$ GSMF without any LRDs and show that these sources dominate the high mass end of the GSMF. D’Silva et al. (2023) has shown that accounting for the contribution of AGN lowers the cosmic star formation rate density by 0.4 dex at $z\geq 9.5$, which will also lower the inferred stellar mass density, and finds that a significant fraction of the LRDs are hidden AGN.

### 6.4 Impact of Modified IMF

Our implementation of a modified top-heavy IMF in Prospector has been shown to reduce masses by up to $\sim$0.5 dex for galaxies at $z\geq$12 (HOT 60K) when compared to a standard Kroupa (2001) IMF, as shown in Figure 5. The decrease in mass seen is also found to be dependent on the star formation history model, with a larger decrease in stellar mass found when assuming a parametric “delayed” SFH model (0.46 dex) than when assuming a non-parametric “continuity bursty” SFH model (0.35 dex). Comparison of the best-fitting models has also shown that the $\chi^{2}$ is almost unaffected by the change in IMF, suggesting the modified IMF models can match the observed photometry of the galaxies as well as the original models do. We see similar results, with smaller decreases in stellar mass ($\approx$0.3 dex), for the modified IMF model used at 8$\leq z\leq 12$ (HOT 45K). There are a number of comparable studies which also look at the impact of a top-heavy IMF on the masses of high-redshift galaxies. The most direct comparison can be made to the results of Steinhardt et al. (2023) as we have used the same top-heavy IMF model with a different SED fitting tool. Their results are based on some of the earliest JWST observations and they only fit a small number of galaxies. Direct comparison of their results to the default EAZY-py fsps templates is also impacted by other differences between the two template sets, including the re-parametrization of the IMF from Chabrier et al. (2000) to Kroupa (2001) and different assumptions for SFH and nebular emission. Our implementation changes only the IMF, with everything else modelled in the same way. They observe decreases of $0.5-1$ dex in stellar mass, whereas our implementation sees smaller decreases, typically $0.3-0.5$ dex. The reason for this discrepancy is unclear, but it may be possible that the other differences between the standard EAZY-py templates and their models have a larger effect on the stellar mass than expected, or that the change in IMF in our modelling resulted in different star formation histories that acted to somewhat decrease the impact of the modified IMF. Woodrum et al. (2023) also look at other top-heavy modifications to commonly used IMF parametrizations using Prospector. They look at a modification to the Chabrier et al. (2000) IMF, rather than the Kroupa (2001) IMF we use, but they find comparable reductions in stellar mass (between 0.38 and 0.5 dex). They also see no change in the goodness of fit when using a modified IMF.
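To make the size of such IMF-driven mass shifts concrete, the following minimal sketch compares the total stellar mass formed per UV-luminous star under a standard Kroupa (2001) IMF and under a variant with a flattened high-mass slope. The flattened slope of 1.8 and the 5 M⊙ luminosity threshold are illustrative placeholders, not the HOT 45K/60K models fitted in this work; only the standard Kroupa break masses and slopes are as published.

    import numpy as np
    from scipy.integrate import quad

    def kroupa(m, a_high=2.3):
        # Kroupa (2001) broken power law dN/dm, continuous at the
        # 0.08 and 0.5 Msun breaks; a_high is the slope above 0.5 Msun.
        if m < 0.08:
            return m ** -0.3
        if m < 0.5:
            return 0.08 * m ** -1.3
        return 0.08 * 0.5 ** (a_high - 1.3) * m ** -a_high

    def mass_per_uv_star(a_high, m_uv=5.0):
        # Total stellar mass formed per star above m_uv, a crude proxy
        # for the UV-luminous stars that dominate the observed photometry.
        total_mass = quad(lambda m: m * kroupa(m, a_high), 0.01, 100.0)[0]
        n_uv = quad(lambda m: kroupa(m, a_high), m_uv, 100.0)[0]
        return total_mass / n_uv

    # Dex shift in inferred mass at fixed UV output for a hypothetical
    # flattened high-mass slope of 1.8 (placeholder value).
    shift = np.log10(mass_per_uv_star(2.3) / mass_per_uv_star(1.8))
    print(f"mass reduction at fixed UV-luminous star count: {shift:.2f} dex")

The printed shift depends entirely on the placeholder slope and threshold, so it illustrates the mechanism (a flatter high-mass slope lowers the mass-to-light ratio at fixed photometry) rather than reproducing the $0.3-0.5$ dex values quoted above.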
Whilst we have shown that a modified top-heavy IMF can decrease the stellar masses of high-$z$ galaxies, with no change in the simulated photometry, our analysis of the $\Lambda$CDM limits on the growth of stellar mass in § 6.1 does not require a non-standard IMF in order to be compatible with $\Lambda$CDM. In Figure 1 we have shown that the stellar masses can vary significantly with other SED fitting assumptions (dust law, assumed SFH), before any IMF change is considered. For the galaxies with high stellar masses across all the models we consider, it is not possible to distinguish between a modified IMF and simply a high star formation efficiency based on photometry alone.

### 6.5 Comparing the measured GSMF with other observations and theory/simulations

In Figure 8, we compare our GSMF estimates to a wide range of observational and theoretical/simulation-based predictions of the GSMF. Here we briefly discuss our GSMF estimates for each of the redshift bins. In order to make direct comparisons, all results have been converted to use a Kroupa (2001) IMF where necessary. The overall evolution of the derived Schechter parameters and a comparison to the results derived by other studies can be seen in Figure 10. Whilst the Schechter parameters are highly covariant, and our results typically have large uncertainties, we observe an evolution in $\alpha$ and $\phi^{\star}$, with both parameters decreasing compared to the results at $z\sim 4$ of Caputi et al. (2015), Duncan et al. (2014) and Grazian et al. (2015). We see little evolution of M⋆ within the large range of uncertainties.

#### 6.5.1 Redshift z=7 GSMF

We derive a mass function at $z\sim 7$ primarily as a proof of concept of our method. We do not take advantage of galaxy lensing in this work, so any reasonable mass completeness limit is higher than previous studies, nor do we have the area of wide-field studies like Weaver et al. (2022) in order to detect rare, bright and high mass galaxies. However with JWST we have seen a surprising excess of UV-faint LRD-like objects with high inferred stellar masses, as discussed in § 6.3. The majority of these sources were previously undetected with HST due to relatively weak Lyman-breaks, and so do not appear in pre-JWST stellar mass estimates. Their inclusion in our GSMF has resulted in an excess at the high mass end of the GSMF when compared to other observational studies, and consequently a higher and poorly constrained estimate of M⋆, as we see little evidence for any exponential turnover. The highest mass GSMF data-points of Weaver et al. (2022) fall within our 1$\sigma$ uncertainty region, but our results are significantly above the measurements of Stefanon et al. (2021). At the low-mass end we fall below the results of Kikuchihara et al. (2020) and Navarro-Carrera et al. (2023), but agree within the uncertainties of Furtak et al. (2021) and Bhatawdekar et al. (2019). Stefanon et al. (2021), Kikuchihara et al. (2020), Bhatawdekar et al. (2019) and Furtak et al. (2021) are all based on HST+Spitzer observations of the Hubble Frontier Fields, and incorporate lensing, which means they probe the low mass end of the GSMF more accurately than this study. Our low mass slope $\alpha=-1.94^{+0.1}_{-0.1}$ is in good agreement with Furtak et al. (2021), but steeper than the results of Stefanon et al. (2021) and Kikuchihara et al. (2020). At the time of writing, Navarro-Carrera et al. (2023), Gottumukkala et al. (2023) and Wang et al.
(2024) are the only other studies to incorporate JWST observations into their GSMF estimates. We see reasonable agreement with the results of Navarro-Carrera et al. (2023) as our data is within their GSMF uncertainties. This work relies almost entirely on JWST observations, whereas they combine deep JWST observations of small volumes ($\leq$20 arcmin$^{2}$) with HST and ground-based catalogues. This ground-based data allows them to find more rare, high-mass galaxies than our study, but at lower masses the small volumes probed with their JWST data are potentially vulnerable to cosmic variance. Reliance on ground-based and HST data also limits the maximum redshift they can probe to $z\leq 8$. Wang et al. (2024) uses PRIMER observations with NIRCam and MIRI to measure the GSMF, notably finding that the use of MIRI observations systematically reduces stellar masses measured with SED fitting. We see good agreement in the measured GSMF within the uncertainties of both studies, despite this work not incorporating MIRI data or correcting for any systematic offset in mass arising from the lack of restframe $>1\mu$m observations. Our GSMF does extend to higher stellar mass than the result of Wang et al. (2024), resulting in a higher value for $M^{\star}$, as can be seen in Figure 10. Gottumukkala et al. (2023) examine the contribution of high-mass, dusty galaxies at $3<z<8$ to the GSMF using data from the CEERS survey. Given that our GSMF probes a wider galaxy population, we do not expect to see overlap at all stellar masses. We see good overlap at the highest stellar masses (M$_{\star}\sim 10^{10.5}$ M${}_{\odot}$), where our SMF estimate is dominated by dusty LRD galaxies (as discussed in § 6.3). When we compare to predictions from models and simulations, we see agreement with the majority of models at the low-mass end but a significant excess at higher masses that is not reproduced by any of the models. We find in particular that the JAGUAR model we use for our completeness simulations shows a more rapid decline at high stellar mass than the other models, but given that we are not reliant on our completeness correction in this mass regime this does not impact our estimate of the GSMF. We are closest to the prediction of _Universe Machine_ (Behroozi et al., 2019) in the highest stellar mass bin.

#### 6.5.2 Redshift z=8 GSMF

Our fiducial GSMF estimate at $z\sim 8$ shows reasonable agreement with most predictions. As we do not bootstrap in redshift when constructing the GSMF, we do not account for galaxies scattering between redshift bins, and for example a galaxy found to be at $z=7.49$ with Bagpipes would contribute only to the $z=7$ GSMF, even if a significant fraction of the redshift PDF lies at $z\geq 7.50$. This does not affect the majority of galaxies within our sample, but it does explain some of the discrepancy between our results and the implied results of Labbé et al. (2023), shown in purple in Figure 8 assuming 100% completeness. The LRD galaxies of Labbé et al. (2023) which we also select are all found to be at $z\leq 7.5$, meaning they do not contribute at all to our estimate of the $z=8$ GSMF. The best-fitting redshift for these objects in some cases is quite close to this boundary however, meaning that these objects could theoretically contribute to the $z=8$ GSMF instead, which would boost the high-mass end significantly. Our GSMF is also lower than the results of Kikuchihara et al. (2020), which incorporates strong gravitational lensing in order to probe to lower stellar mass.
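Returning briefly to the bin-assignment caveat above, a minimal sketch of the alternative, posterior-weighted assignment (which we do not use in this work) is given below; here `z_grid` and `pdf` stand for any galaxy's photometric-redshift posterior evaluated on a grid, and the bin edges mirror the redshift bins used in this paper.

    import numpy as np

    def bin_weights(z_grid, pdf, edges=(6.5, 7.5, 8.5, 9.5, 11.5, 13.5)):
        # Normalize the posterior, then integrate it over each redshift bin
        # to obtain the fractional contribution of the galaxy to each bin.
        pdf = pdf / np.trapz(pdf, z_grid)
        weights = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (z_grid >= lo) & (z_grid < hi)
            weights.append(np.trapz(pdf[sel], z_grid[sel]))
        return np.array(weights)

    # Toy posterior peaked at z = 7.49: a sizeable fraction spills past 7.5.
    z = np.linspace(6.0, 14.0, 2000)
    p = np.exp(-0.5 * ((z - 7.49) / 0.3) ** 2)
    print(bin_weights(z, p))

Under such a scheme, the $z=7.49$ example above would contribute fractionally to both the $z=7$ and $z=8$ bins instead of wholly to the former.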
Our GSMF agrees with the results of Song et al. (2016), Bhatawdekar et al. (2019) and Stefanon et al. (2021), and appears to validate the majority of pre-JWST GSMF estimates. We can also draw a comparison to the results of Wang et al. (2024), whose GSMF results at $z=8$ are systematically above our results, but within the derived uncertainties of both studies. We see good agreement with most theoretical predictions of the GSMF at this redshift, with _Universe Machine_ (Behroozi et al., 2019), _SC SAM GUREFT_ (Yung et al., 2023) and _FLARES_ (Lovell et al., 2021; Wilkins et al., 2023b) having the most similar results.

#### 6.5.3 Redshift z=9 GSMF

Our GSMF estimate at $z\sim 9$ is below the results of Bhatawdekar et al. (2019) and Kikuchihara et al. (2020), but within the uncertainties of Stefanon et al. (2021). We are below the implied result of Labbé et al. (2023), which is derived from two galaxies in their sample in this redshift bin, but assuming 100% completeness. We include one of these galaxies in our GSMF at this redshift; the other does not meet our selection criteria, since we do not detect the Lyman break at 5$\sigma$. We additionally include one other candidate from their sample in this GSMF, as our fiducial Bagpipes photo-$z$ places it within this redshift bin, rather than the $z\sim 8$ redshift bin based on their photo-$z$. For both of their galaxies that we do include in this redshift bin, we find $\sim 0.4$ dex lower stellar masses, meaning they contribute to a lower stellar mass bin. Our reliance on the rest-frame UV to robustly detect sources is one limitation of this work, although the increased depth of JWST observations when compared to HST has reduced this in some fields. We investigated less-stringent constraints on the Lyman-break, but found that this dramatically increased rates of contamination within our sample. When compared to simulations, _FLARES_ and _Universe Machine_ are close to our GSMF estimate, but almost all the predictions are within our posterior region. Interestingly, in this redshift bin we are below the predictions of two recent JWST-era studies: Mauerhofer & Dayal (2023) and Li et al. (2023), both of which incorporate higher star formation efficiencies than typical models.

#### 6.5.4 Redshift z=10.5 GSMF

At $z\sim 10.5$ observational comparisons can be made only to the pre-JWST results of Stefanon et al. (2021). In comparison to their results we find a significant excess of high mass galaxies in our observations. Our results show that at $z\geq 10$ JWST observations are essential to accurately sample the high-$z$ galaxy population. Our results are above the majority of theoretical and simulation-derived predictions, but do show good agreement with _Universe Machine_ (Behroozi et al., 2019) and _FLARES_ (Lovell et al., 2021; Wilkins et al., 2023b).

#### 6.5.5 Redshift z=12.5 GSMF

In our highest redshift bin, 11.5 $<z\leq 13.5$, which covers only $\approx$ 80 Myr, there are no published observationally-derived results and few theoretical or simulation-based GSMF comparisons at this redshift. Pre-JWST estimates of the GSMF were not possible at this redshift, and even with JWST our GSMF estimate remains uncertain due to the difficulty in obtaining accurate stellar mass estimates as well as the possible contribution of contaminants. At $z=12.5$ the longest wavelength NIRCam filter falls within the rest-frame UV, which is dominated by young stars, leading to highly uncertain star formation histories and stellar masses.
As we explored briefly in § 4.5, the increased likelihood of a top-heavy IMF or exotic stellar populations in these early galaxies further increases the systematic uncertainties in the stellar mass estimates. An example of three galaxies within this redshift bin is shown in the lower plot of Figure 3 and in Figure 4, and the range of stellar mass estimates ($\sim 0.8$–$1$ dex) between the different Bagpipes and Prospector fits, with very little difference in the fitted rest-UV spectra, shows the difficulty in estimating stellar mass at these redshifts. Previous studies at lower redshift with HST and Spitzer have found that stellar masses estimated from HST data alone, with no measurement of the rest-frame optical emission, are typically underestimated by 0.62 dex compared to measurements including HST and Spitzer NIR observations (Furtak et al., 2021). At this redshift range, our JWST NIRCam observations are probing comparable rest-frame UV wavelengths to HST observations at $z\sim 6-7$, and it is possible that our stellar masses are also underestimated unless there is a significant change in stellar populations or IMF. The possibility of more stochastic star formation histories at this redshift compared to $z\sim 6-7$ may also lead to outshining, which further increases the stellar mass discrepancy (Narayanan et al., 2023). We note that several galaxies in this bin have been excluded from the GSMF in this case due to our requirement that the contamination is less than 50%. The inclusion of these possible contaminants would result in an $\approx$ 0.3 dex increase in the lowest mass bin. The results of this are shown in Appendix C. Whilst we do not attempt to fit the GSMF, we can make approximate comparisons to the few available predictions. We see the closest agreement with _DELPHI_ (Mauerhofer & Dayal, 2023) and are within 1.5$\sigma$ of _FLARES_ (Lovell et al., 2021; Wilkins et al., 2023b) in the higher mass bin, but have an excess of $\approx 10^{8}$ M⊙ galaxies when comparing to _FLARES_ and _SC SAM GUREFT_ (Yung et al., 2023). Our results are significantly above the predictions of _BLUETIDES_ (Feng et al., 2016; Wilkins et al., 2017) at all stellar masses. Our fiducial GSMF prediction is slightly below the prediction of Li et al. (2023) with maximum star formation efficiency $\epsilon_{\rm max}$ = 0.2, and significantly below the upper limit of $\epsilon_{\rm max}$ = 1.

#### 6.5.6 Alternative GSMF Estimates

Following on from the comparison of galaxy stellar mass estimates with different choices of SFH and priors in § 4.3, it is possible to estimate the GSMF for any of the different SED-fitting models. A full comparison of the derived GSMF for every model, given the many possible combinations of IMF and SFH models, is beyond the scope of this work, but we give a representative example of the GSMF derived at z$\sim$10.5 for the models which show the most variation in stellar mass when compared to our fiducial Bagpipes fitting. Here we choose to investigate the GSMF dependence on the chosen SFH model and SED fitting tool, rather than the choice of parameter prior or dust law. This is because in § 4.3, there was larger variation in stellar mass with little variation in $\chi^{2}$ for these alternative models. Additionally, as discussed, the use of non-parametric SFHs is more common in the literature (e.g. Tacchella et al., 2022; Giménez-Arteaga et al., 2023; Jain et al., 2024; Giménez-Arteaga et al., 2024) for high-$z$ galaxies due to problems such as outshining.
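For reference, the logarithmic Schechter form in which the GSMFs discussed in this section are parametrized can be evaluated as in the sketch below; the $\phi^{\star}$ and $M^{\star}$ values shown are placeholders rather than our tabulated fits (only $\alpha=-1.94$ is taken from our $z=7$ result).

    import numpy as np

    def schechter_log10m(log_m, log_phi_star, log_m_star, alpha):
        # Schechter function per dex of stellar mass:
        # phi(logM) = ln(10) * phi* * 10^((logM - logM*)(alpha + 1))
        #             * exp(-10^(logM - logM*))
        x = 10.0 ** (log_m - log_m_star)
        return np.log(10.0) * 10.0 ** log_phi_star * x ** (alpha + 1.0) * np.exp(-x)

    # Illustrative evaluation over the stellar-mass range probed here.
    log_m = np.linspace(8.0, 11.5, 36)
    phi = schechter_log10m(log_m, log_phi_star=-4.5, log_m_star=10.0, alpha=-1.94)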
Figure 13: GSMF Schechter parametrization for the $z\sim$10.5 redshift bin derived for our fiducial Bagpipes SED-fitting compared to alternative GSMF estimates. We show the GSMF derived using Bagpipes with the non-parametric “continuity bursty” SFH (labelled ‘bursty’) as well as for two GSMFs derived using Prospector SED-fitting for a parametric and non-parametric SFH. We also show for comparison the GSMF inferred with the alternative top-heavy IMF model. The derived GSMF is clearly dependent on the choice of model and SED fitting tool, with the “continuity bursty” SFH model typically shifting the GSMF towards higher stellar mass. These Schechter functions are tabulated in Appendix B.

We derive these GSMF estimates using the same method as described in § 5 for the fiducial Bagpipes GSMF, replacing the stellar mass PDFs, best-fitting SEDs and redshift estimates with those of the chosen model. Figure 13 shows a comparison of the fiducial Bagpipes GSMF to a GSMF derived from the “continuity bursty” non-parametric SFH model, which increases the stellar mass estimates by 0.2 dex on average, but by $\geq 1$ dex in some cases. This results in the largest change in the overall GSMF when compared to the fiducial Bagpipes result. We also show mass functions derived from our Prospector SED-fitting, which are offset above our fiducial Bagpipes GSMF for both the parametric and non-parametric results. These results are somewhat comparable to the spread seen in the Bagpipes results, although the low-mass end of the GSMF in the “continuity bursty” SFH model produces a shallower slope than the other models. Crucially we can see that the derived GSMFs are in tension with each other, and do not typically fall within each other's confidence intervals across the majority of the stellar mass range. This is consistent with Wang et al. (2023b), who argue that the stellar mass uncertainties are typically underestimated by SED fitting procedures. The change in inferred stellar mass we observe with a modified IMF does not appear to vary strongly with stellar mass, so the impact on the GSMF can generally be seen as a shift towards lower stellar mass of $0.3-0.4$ dex. This is comparable in magnitude and opposite in direction to the shift seen when moving from Bagpipes to Prospector when using a parametric SFH, which results in little overall change in the resulting GSMF. Resolved SED fitting from Giménez-Arteaga et al. (2023) has found that the higher stellar masses inferred by the non-parametric SFHs can better account for the outshining of older stellar populations in bright, actively star forming galaxies. How widespread the issue of outshining is among our sample is unknown, but if it is common then the masses and resulting GSMF of the “continuity bursty” SFH model may more accurately recover the true stellar populations of these early galaxies. These results demonstrate the overall systematic uncertainty different assumptions cause in the GSMF, which is not represented by the uncertainty contours. Most GSMF estimates do not consider the overall uncertainty introduced by the assumptions of their modelling, which often dwarfs the statistical uncertainty in the fit itself. The variation in the derived GSMF can also significantly impact the implied SMD, as discussed in the next section.

### 6.6 Stellar Mass Density Evolution in the Early Universe

The growth of stellar mass density in the early Universe is highly uncertain. Some observational studies (e.g.
Oesch et al., 2014; Stefanon et al., 2021; Willott et al., 2023a) have found a sharp decline in stellar mass density at $z\geq$ 8, whereas others see a flatter evolution (Kikuchihara et al., 2020; Bhatawdekar et al., 2019). On the theoretical side, Oesch et al. (2018) uses dark matter halo evolutionary models to predict a deviation from the constant star formation efficiency (CSFE) model of Madau & Haardt (2015), which follows a significantly steeper slope at $z\geq$7. Our results from our fiducial Bagpipes model fall between the predictions of Madau & Haardt (2015) and Oesch et al. (2018). We see a flatter evolution with redshift than predicted by Oesch et al. (2018), but an overall lower stellar mass density than the CSFE model of Madau & Haardt (2015). For our other GSMF estimates at $z=10.5$, shown in Figure 13, we find that $\rho_{\star}$ increases by up to 0.75 dex, which would bring it closer to the constant star formation efficiency prediction of Madau & Haardt (2015). Whilst we do not show the SMD scatter measured in other redshift bins, we typically see the same behaviour at $z>7$, with our fiducial Bagpipes SMD result producing lower $\rho_{\star}$ estimates than our alternative models. With our fiducial Bagpipes results we see significant evolution of the GSMF between $z=7$ and $z=8$, with $\rho_{\star}$ decreasing by $\sim 0.85$ dex. However we see a significantly flatter evolution in the SMD derived from the “continuity bursty” model GSMF, with a decrease of only $\sim 0.4$ dex across the same redshift range. This is due partly to the overall increase in stellar mass estimates observed with this SFH model when compared to our fiducial model, as detailed in § 4.3, but is also due to the high-mass LRD galaxies scattering between the $z=7$ and $z=8$ redshift bins due to uncertain photo-$z$ estimates, which significantly impacts the GSMF at higher stellar masses. We see good agreement between the integration of the star formation rate density of Adams et al. (2023), which uses the same sample, and our fiducial SMD results. There are very few JWST-era GSMF estimates to directly compare against, and so we have computed the inferred stellar mass density based on the integral of the cosmic star formation rate density of other studies. We note however the numerous works showing the increased scatter in mass-to-light ratios observed due to bursty star formation (Santini et al., 2022; Asada et al., 2023), which will impact the assumptions made to convert these UV luminosity densities into stellar mass densities. In Figure 11 we show the overall stellar mass density range we find when we use a different SED fitting tool or star formation history model (dotted red uncertainty). This is significantly larger than the statistical uncertainty in the stellar mass density from our fiducial Bagpipes results. A change of up to 0.75 dex at $z\approx 10.5$ is possible when only the SED fitting tool or SFH model is varied and the overall sample is unchanged. More significant variations are possible between the results of independent studies, which also have to consider differences in reduction, source detection, photo-$z$ estimation, selection procedure, cosmic variance, and completeness corrections. Not accounting for the contribution of AGN to the observed photometry may cause overestimation of the stellar mass density at high-redshift (D’Silva et al., 2023).
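For completeness, a stellar mass density like that shown in Figure 11 follows from integrating a Schechter fit above a mass limit, $\rho_{\star}=\phi^{\star}M^{\star}\Gamma(\alpha+2,M_{\rm lim}/M^{\star})$, where $\Gamma(a,x)$ is the upper incomplete gamma function. A minimal sketch with placeholder Schechter parameters (only the $\sim 10^{8}$ M⊙ completeness limit is taken from this work) is given below.

    import numpy as np
    from scipy.special import gamma, gammaincc

    def rho_star(phi_star, m_star, alpha, m_lim):
        # Integral of M * phi(M) dM from m_lim to infinity for a Schechter
        # function equals phi* * M* * Gamma(alpha + 2, m_lim / M*).
        # gammaincc is the regularized upper incomplete gamma, so we
        # multiply back by gamma(a); requires a > 0, i.e. alpha > -2.
        a = alpha + 2.0
        return phi_star * m_star * gammaincc(a, m_lim / m_star) * gamma(a)

    # Placeholder parameters; Msun and Mpc^-3 units assumed throughout.
    print(rho_star(phi_star=10 ** -4.5, m_star=10 ** 10.0, alpha=-1.94, m_lim=1e8))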
The range of stellar mass densities possible with our alternative GSMF estimates at $z\sim 10.5$ is mostly above the 1$\sigma$ range of Adams et al. (2023). A discrepancy between the integrated star formation rate density and stellar mass density measured for the same sample could hint at a different IMF, since the assumed return fraction is strongly dependent on the chosen IMF, and the SMF and UVLF probe different stellar populations with different characteristic stellar masses. However, there are a number of other possible issues with the conversion of the UVLF into an SMD estimate: the conversion factor between UV flux and SFR ($\kappa_{\rm UV}$) is often assumed to be constant but is actually dependent on the age and metallicity (Madau & Dickinson, 2014), and the other assumptions used to calculate the return fraction (closed-box model, constant IMF and metal yield, and instantaneous recycling of metals) may not be valid at high redshift. As we show in § 4.3, in some cases discrimination between models or priors based on the goodness of fit may be possible, but in others (e.g. the assumed SFH model), significant scatter in stellar mass estimates is possible with no difference in $\chi^{2}$. Other studies which use only one method for measuring stellar mass estimates will underestimate the overall uncertainty in the derived GSMF and stellar mass density estimates.

## 7 Conclusions

In this paper we present an investigation into the properties of the EPOCHS v1 sample of 1120 high-redshift galaxies at 6.5 $\leq z\leq 13.5$, taken from a uniform reduction of 187 arcmin$^{2}$ of JWST data, including the GTO program PEARLS as well as other public ERS/GO JWST programs. We examine the consistency of galaxy properties, including stellar mass, under different assumptions and using different SED fitting tools, including Bagpipes and Prospector. In particular we examine the impact of different SFH parametrizations as well as switching between parametric and non-parametric SFH models. We also investigate the possible reduction in stellar mass when assuming a top-heavy IMF. We then use this sample and our range of stellar mass estimates to construct possible realisations of the galaxy stellar mass function. Lastly, we integrate our mass function estimates to probe the buildup of stellar mass in the early Universe via the stellar mass density. The major conclusions from this study are as follows:

1. We find that the stellar mass of high-redshift galaxies can depend strongly on assumed models, their priors and the SED fitting package used. In particular the estimated stellar mass can increase by $>$1 dex when a parametric SFH is exchanged for a non-parametric SFH, with no change in the goodness of fit. Higher stellar mass discrepancies are seen at $z>10$ due to a lack of rest-optical emission.

2. We find that the assumption of a modified top-heavy Kroupa (2001) IMF, which may more accurately model the hot star-forming regions within high-$z$ galaxies, can reduce stellar mass estimates by up to 0.5 dex with no impact on the goodness of fit.

3. Whilst some of the stellar mass estimates imply a high star formation efficiency, in our analysis of the most massive galaxies in our sample using the Extreme-Value Statistics methodology of Lovell et al. (2023) we do not find any galaxies which are incompatible with the existing $\Lambda$CDM cosmology.
The largest stellar mass estimates are typically found when fitting the non-parametric SFH models, and can often be significantly reduced with an alternative model. We do not require a top-heavy IMF to explain these results.

4. Across all of the fitted models, the highest mass galaxies in our sample are ‘Little Red Dots’, with inferred masses of $>10^{10}$ M$_{\odot}$ at $z\approx 7$. These galaxies dominate the highest mass bins of our galaxy stellar mass function (GSMF) estimates, so understanding their true stellar populations and accounting for the likely contribution of AGN (Greene et al., 2023; Furtak et al., 2023) will be essential to more accurately constrain further GSMF estimates.

5. With the GSMF derived from our fiducial Bagpipes results, we typically see good agreement with existing constraints on the GSMF at z$\leq$9.5. At the limits of HST+Spitzer (z$\geq$10) we see an excess of galaxies when compared to pre-JWST observations, but our GSMF results fall within predictions of simulations and theory.

6. The systematic variation in stellar mass estimates we find can dramatically impact the inferred galaxy stellar mass function and therefore the stellar mass density. We show that the choice of star formation history model or SED fitting tool can cause up to a 0.75 dex shift in the overall stellar mass density at z$\approx$10.5 with the same sample of galaxies. We predict larger offsets between independent samples, where different reductions, selection techniques and photo-$z$ estimates will increase the uncertainties.

7. We see a flatter evolution of the cumulative stellar mass density than predicted by dark matter halo evolution models, whilst the slope of our results is more consistent with constant star formation efficiency models. Our results suggest that significant stellar mass had already formed at $z\geq$11.5.

This is only the beginning of GSMF estimates at $z>10$, and the use of ultra-deep observations (the second NGDEEP epoch (Bagley et al., 2023a), the JADES Origins Field (Eisenstein et al., 2023b; Robertson et al., 2023a) and others) and magnification by lensing clusters (PEARLS, UNCOVER, CANUCS; Windhorst et al., 2023; Bezanson et al., 2022; Willott et al., 2023b) will help constrain the GSMF at stellar masses below our completeness limit of $\sim 10^{8}$ M$_{\odot}$, whilst widefield surveys (e.g. PRIMER, UNCOVER, COSMOS-Webb; Dunlop et al., 2021; Bezanson et al., 2022; Kartaltepe et al., 2021) will add area and rare higher-mass sources. Deep MIRI F560W or F770W observations (e.g. the MIRI HUDF survey, Norgaard-Nielsen & Perez-Gonzalez, 2017) will be crucial to provide better constraints on stellar mass estimates at these redshifts by extending the wavelength range further into the rest-frame optical, although the sensitivity of MIRI decreases rapidly with increasing wavelength (e.g. Wang et al., 2024). More complete NIRSpec coverage is also important to identify interlopers, confirm photometric redshifts and distinguish between AGN emission and star-forming galaxies.

All of the raw JWST data used in this work are the same as used in Adams et al. (2023) and can be accessed via MAST: DOI 10.17909/5h64-g193. All proprietary data from the PEARLS program will become accessible over 2024. Catalogues for all high-$z$ galaxies will be published with the EPOCHS I paper (Conselice et al., in prep). The fiducial GSMF and SMD results from this work are available on GitHub, and results for our alternative models will be made available upon request.
## Acknowledgements

TH, CC, NA, DA, QL, JT, LW acknowledge support from the ERC Advanced Investigator Grant EPOCHS (788113), as well as two studentships from the STFC. AZ acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF); by the Ministry of Science & Technology, Israel; and by the Israel Science Foundation Grant No. 864/23. RAW, SHC, and RAJ acknowledge support from NASA JWST Interdisciplinary Scientist grants NAG5-12460, NNX14AN10G and 80NSSC18K0200 from GSFC. CCL acknowledges support from the Royal Society under grant RGF/EA/181016. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. CNAW acknowledges funding from the JWST/NIRCam contract NASS-0215 to the University of Arizona. M.N. acknowledges INAF-Mainstreams 1.05.01.86.20. CNAW acknowledges support from the NIRCam Development Contract NAS5-02105 from NASA Goddard Space Flight Center to the University of Arizona. This work is based on observations made with the NASA/ESA Hubble Space Telescope (HST) and NASA/ESA/CSA James Webb Space Telescope (JWST) obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST, and NAS 5-26555 for HST. The authors thank all involved with the construction and operation of JWST, without whom this work would not be possible. We also thank the PIs and teams who designed and executed the ERS, GTO and GO programs used within this work, including PEARLS (1176, 2738), SMACS-0723 (2737), GLASS (1324), CEERS (1345), JADES (1180, 1210, 1895, 1963) and NGDEEP (2079). This work makes use of astropy (Astropy Collaboration et al., 2013, 2018, 2022b), matplotlib (Hunter, 2007), reproject, DrizzlePac (Hoffmann et al., 2021), SciPy (Virtanen et al., 2020) and photutils (Bradley et al., 2022).

## Appendix A z = 7 Stellar Mass Function without ‘Little Red Dots’

As discussed in § 6.3, the ‘Little Red Dots’ (LRDs) dominate the high-mass end of our GSMF at $z=7$. As the contribution of AGN to their photometry is still somewhat uncertain, and likely differs on an individual basis between galaxies, in the main results of this paper we do not remove LRDs from the GSMF estimates, or account for any possible AGN emission. In this Appendix we briefly present the alternative case, where we remove all objects which meet the color-color selection criteria of Kokorev et al. (2024) and reconstruct the $z=7$ GSMF. When we apply their ‘red2’ color selection, compactness criterion and SNR requirements to our robust sample, we find 34 galaxies which meet these cuts. There are 13 in the NEP-TDF, 17 in CEERS, 3 in the JADES DR1 field, and 1 in the NGDEEP field. The median redshift is 7.16, with all candidates falling between $z=6.5$ (our redshift cut) and $z=8.7$. The median fiducial Bagpipes stellar mass is $\log_{10}M_{\star}/M_{\odot}=8.90$, with a maximum stellar mass of $\log_{10}M_{\star}/M_{\odot}=10.70$.

Figure 14: Galaxy Stellar Mass Function at $6.5<z\leq 7.5$, excluding all ‘Little Red Dots’, compared to our fiducial Bagpipes results.

We exclude these 34 candidates from our sample and reconstruct the stellar mass function at $z=7$. No other changes are made to our GSMF construction or fitting procedures. Figure 14 shows the GSMF derived without including any ‘Little Red Dots’.
Compared to the fiducial GSMF, this removes the two highest mass bins entirely, which demonstrates our reliance on these galaxies in the high-mass regime. In terms of the derived Schechter parameters, the exponential mass cutoff M⋆, which is not well constrained, decreases from $11.57^{+0.63}_{-0.85}$ to $10.64^{+1.25}_{-0.98}$ when we exclude the LRDs. The median posterior $\log_{10}\phi^{\star}$ and $\alpha$ for the GSMF without LRDs are $-5.40^{+1.34}_{-1.43}$ and $-2.04^{+0.18}_{-0.13}$ respectively.

## Appendix B Tabulated Schechter parameters for alternative GSMF estimates at z = 10.5

We give the Schechter function parameters for our alternative GSMF fits at $z\sim 10.5$ in Table 6. These are the Schechter parameters representing the fits shown in Figure 13. These GSMF estimates are equivalent to the fits given in Table 3 for the fiducial GSMF, and are calculated using the same method, simply replacing the redshift and stellar mass PDF used in constructing the GSMF with those derived under the alternate SED fitting assumptions.

Table 6: Schechter function parameters for the GSMF at $9.5\leq z\leq 11.5$ for each of the alternative models shown in Figure 13. For $\alpha$, M⋆ and $\log_{10}\phi^{\star}$ we give both the median posterior and maximum likelihood values (in brackets). The details of the Bagpipes and Prospector configurations for each model are given in § 4.1 and § 4.2.

SED Fitting Tool | SFH Model | IMF | $\alpha$ | M⋆ | $\log_{10}\phi^{\star}$
---|---|---|---|---|---
Bagpipes | “continuity bursty” | Kroupa (2001) | $-1.93^{+0.21}_{-0.16}(-1.80)$ | $10.70^{+1.21}_{-1.06}(9.70)$ | $-5.94^{+1.33}_{-1.28}(-4.70)$
Prospector | “continuity bursty” | Kroupa (2001) | $-1.72^{+0.26}_{-0.18}(-1.51)$ | $10.51^{+1.34}_{-0.99}(9.54)$ | $-5.57^{+1.13}_{-1.19}(-4.46)$
Prospector | delayed-$\tau$ | Kroupa (2001) | $-1.98^{+0.22}_{-0.17}(-1.89)$ | $10.67^{+1.22}_{-1.07}(9.68)$ | $-6.22^{+1.37}_{-1.35}(-4.98)$
Prospector | “continuity bursty” | HOT 45K | $-1.93^{+0.25}_{-0.20}(-1.81)$ | $10.56^{+1.28}_{-1.10}(9.53)$ | $-6.09^{+1.39}_{-1.39}(-4.83)$
Prospector | delayed-$\tau$ | HOT 45K | $-2.19^{+0.25}_{-0.22}(-2.18)$ | $10.54^{+1.14}_{-1.10}(9.55)$ | $-6.67^{+1.63}_{-1.45}(-5.31)$

## Appendix C z = 12.5 GSMF with no contamination limit

Our fiducial GSMF applies a 50% contamination limit on all galaxies. Given the fields a galaxy is selected in and its stellar mass, our JAGUAR contamination simulation computes a likelihood of contamination based on simulated galaxies with the same stellar mass. The highest contamination is seen in the $z=12.5$ GSMF, for the $10^{7.5}<M_{\star}/M_{\odot}\leq 10^{8.5}$ bin, and results in several galaxies being removed from our fiducial GSMF estimate. As the predictions of JAGUAR are uncertain at these redshifts, it is hard to judge how accurate our contamination predictions are. In Figure 15 we have recomputed the $z=12.5$ GSMF with no contamination limit, which boosts the lower stellar mass bin by $\sim$0.3 dex. This brings it closer to the predictions of the Feedback Free Model (FFB) of Li et al. (2023), which has higher star formation efficiency than most models. The FFB model shown is for $\epsilon_{\rm max}=0.20$ specifically, and the models with higher SFE ($\epsilon_{\rm max}=0.5-1$) overpredict the GSMF at this redshift compared to our observations. If our contamination is over-estimated in this redshift bin, then the FFB model of Li et al.
(2023) or Mauerhofer & Dayal (2023)’s DELPHI model provide the closest predictions, suggesting high but not extreme star formation efficiency is required to produce the observed GSMF at this redshift. Figure 15: Galaxy Stellar Mass Function at $11.5<z\leq 13.5$ without a 50% contamination limit, compared to our fiducial Bagpipes results. ## References * Adams et al. (2023) Adams, N. J., Conselice, C. J., Ferreira, L., et al. 2023, MNRAS, 518, 4755, doi: 10.1093/mnras/stac3347 * Adams et al. (2023) Adams, N. J., Conselice, C. J., Austin, D., et al. 2023, arXiv preprint arXiv:2304.13721 * Akins et al. (2023) Akins, H. B., Casey, C. M., Allen, N., et al. 2023, arXiv preprint arXiv:2304.12347 * Arrabal Haro et al. (2023) Arrabal Haro, P., Dickinson, M., Finkelstein, S. L., et al. 2023, arXiv e-prints, arXiv:2304.05378, doi: 10.48550/arXiv.2304.05378 * Arrabal Haro et al. (2023) Arrabal Haro, P., Dickinson, M., Finkelstein, S. L., et al. 2023, Nature, 622, 707 * Arrabal Haro et al. (2023) Arrabal Haro, P., Dickinson, M., Finkelstein, S. L., et al. 2023, arXiv e-prints, arXiv:2303.15431, doi: 10.48550/arXiv.2303.15431 * Asada et al. (2023) Asada, Y., Sawicki, M., Abraham, R., et al. 2023, Monthly Notices of the Royal Astronomical Society, stad3902 * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f * Astropy Collaboration et al. (2022a) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022a, apj, 935, 167, doi: 10.3847/1538-4357/ac7c74 * Astropy Collaboration et al. (2022b) —. 2022b, ApJ, 935, 167, doi: 10.3847/1538-4357/ac7c74 * Atek et al. (2023) Atek, H., Shuntov, M., Furtak, L. J., et al. 2023, MNRAS, 519, 1201, doi: 10.1093/mnras/stac3144 * Austin et al. (2023) Austin, D., Adams, N., Conselice, C., et al. 2023, arXiv preprint arXiv:2302.04270 * Bagley et al. (2023a) Bagley, M. B., Pirzkal, N., Finkelstein, S. L., et al. 2023a, arXiv e-prints, arXiv:2302.05466, doi: 10.48550/arXiv.2302.05466 * Bagley et al. (2023b) Bagley, M. B., Finkelstein, S. L., Koekemoer, A. M., et al. 2023b, ApJ, 946, L12, doi: 10.3847/2041-8213/acbb08 * Barro et al. (2023) Barro, G., Perez-Gonzalez, P. G., Kocevski, D. D., et al. 2023, arXiv e-prints, arXiv:2305.14418, doi: 10.48550/arXiv.2305.14418 * Behroozi et al. (2019) Behroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143, doi: 10.1093/mnras/stz1182 * Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393 * Bezanson et al. (2022) Bezanson, R., Labbe, I., Whitaker, K. E., et al. 2022, arXiv e-prints, arXiv:2212.04026, doi: 10.48550/arXiv.2212.04026 * Bhatawdekar et al. (2019) Bhatawdekar, R., Conselice, C. J., Margalef-Bentabol, B., & Duncan, K. 2019, Monthly Notices of the Royal Astronomical Society, 486, 3805 * Bhowmick et al. (2018) Bhowmick, A. K., Di Matteo, T., Feng, Y., & Lanusse, F. 2018, Monthly Notices of the Royal Astronomical Society, 474, 5393 * Bhowmick et al. (2020) Bhowmick, A. K., Somerville, R. S., Di Matteo, T., et al. 2020, MNRAS, 496, 754, doi: 10.1093/mnras/staa1605 * Bouwens et al. (2023) Bouwens, R., Illingworth, G., Oesch, P., et al. 2023, Monthly Notices of the Royal Astronomical Society, 523, 1009 * Bouwens et al. (2016) Bouwens, R., Oesch, P., Labbé, I., et al. 2016, The Astrophysical Journal, 830, 67 * Bouwens et al. 
(2015) Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, ApJ, 803, 1, doi: 10.1088/0004-637X/803/1/34 * Bowler et al. (2023) Bowler, R. A. A., Inami, H., Sommovigo, L., et al. 2023, arXiv e-prints, arXiv:2309.17386, doi: 10.48550/arXiv.2309.17386 * Bowman et al. (2018) Bowman, J. D., Rogers, A. E., Monsalve, R. A., Mozdzen, T. J., & Mahesh, N. 2018, Nature, 555, 67 * Boylan-Kolchin (2023) Boylan-Kolchin, M. 2023, Nature Astronomy, 1 * Bradač et al. (2014) Bradač, M., Ryan, R., Casertano, S., et al. 2014, The Astrophysical Journal, 785, 108 * Bradley et al. (2022) Bradley, L., Sipőcz, B., Robitaille, T., et al. 2022, Zenodo, doi: 10.5281/zenodo.6825092 * Brammer et al. (2008) Brammer, G. B., van Dokkum, P. G., & Coppi, P. 2008, The Astrophysical Journal, 686, 1503 * Brinchmann & Ellis (2000) Brinchmann, J., & Ellis, R. S. 2000, ApJ, 536, L77, doi: 10.1086/312738 * Brown et al. (2021) Brown, A. G., Vallenari, A., Prusti, T., et al. 2021, Astronomy & Astrophysics, 649, A1 * Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, Monthly Notices of the Royal Astronomical Society, 344, 1000, doi: 10.1046/j.1365-8711.2003.06897.x * Buchner (2016) Buchner, J. 2016, Astrophysics Source Code Library, ascl * Bundy et al. (2006) Bundy, K., Ellis, R. S., Conselice, C. J., et al. 2006, ApJ, 651, 120, doi: 10.1086/507456 * Bunker et al. (2023a) Bunker, A. J., Cameron, A. J., Curtis-Lake, E., et al. 2023a, arXiv e-prints. https://arxiv.org/abs/2306.02467v1 * Bunker et al. (2023b) Bunker, A. J., Saxena, A., Cameron, A. J., et al. 2023b, arXiv preprint arXiv:2302.07256 * Bushouse et al. (2022) Bushouse, H., Eisenhamer, J., Dencheva, N., et al. 2022, JWST Calibration Pipeline, 1.8.2, Zenodo, doi: 10.5281/zenodo.7325378 * Calzetti et al. (2000) Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, The Astrophysical Journal, 533, 682 * Cameron et al. (2023) Cameron, A. J., Katz, H., Witten, C., et al. 2023, Nebular dominated galaxies in the early Universe with top-heavy stellar initial mass functions. https://arxiv.org/abs/2311.02051 * Caputi et al. (2015) Caputi, K., Ilbert, O., Laigle, C., et al. 2015, The Astrophysical Journal, 810, 73 * Carnall et al. (2019) Carnall, A. C., Leja, J., Johnson, B. D., et al. 2019, The Astrophysical Journal, 873, 44 * Carnall et al. (2018) Carnall, A. C., McLure, R. J., Dunlop, J. S., & Davé, R. 2018, Monthly Notices of the Royal Astronomical Society, 480, 4379, doi: 10.1093/mnras/sty2169 * Castellano et al. (2022) Castellano, M., Fontana, A., Treu, T., et al. 2022, ApJ, 938, L15, doi: 10.3847/2041-8213/ac94d0 * Chabrier et al. (2000) Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, ApJ, 542, 464, doi: 10.1086/309513 * Charlot & Fall (2000) Charlot, S., & Fall, S. M. 2000, The Astrophysical Journal, 539, 718 * Chevallard & Charlot (2016) Chevallard, J., & Charlot, S. 2016, Monthly Notices of the Royal Astronomical Society, 462, 1415 * Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, The Astrophysical Journal, 823, 102 * Clauwens et al. (2016) Clauwens, B., Schaye, J., & Franx, M. 2016, Monthly Notices of the Royal Astronomical Society, 462, 2832 * Cleveland (1979) Cleveland, W. S. 1979, Journal of the American statistical association, 74, 829 * Conroy & Gunn (2010) Conroy, C., & Gunn, J. E. 2010, Astrophysics Source Code Library, ascl * Cullen et al. (2023) Cullen, F., McLure, R., McLeod, D., et al. 2023, Monthly Notices of the Royal Astronomical Society, 520, 14 * Curti et al. (2023) Curti, M., d’Eugenio, F., Carniani, S., et al. 
2023, Monthly Notices of the Royal Astronomical Society, 518, 425 * Curtis-Lake et al. (2023) Curtis-Lake, E., Carniani, S., Cameron, A., et al. 2023, Nature Astronomy, doi: 10.1038/s41550-023-01918-w * Davidzon et al. (2017) Davidzon, I., Ilbert, O., Laigle, C., et al. 2017, Astronomy & Astrophysics, 605, A70 * Davis et al. (2007) Davis, M., Guhathakurta, P., Konidaris, N. P., et al. 2007, The Astrophysical Journal, 660, L1 * Dome et al. (2024) Dome, T., Tacchella, S., Fialkov, A., et al. 2024, Monthly Notices of the Royal Astronomical Society, 527, 2139 * Donnan et al. (2022) Donnan, C. T., McLeod, D. J., Dunlop, J. S., et al. 2022, arXiv, 000, arXiv:2207.12356. https://ui.adsabs.harvard.edu/abs/2022arXiv220712356D/abstract * Dotter (2016) Dotter, A. 2016, The Astrophysical Journal Supplement Series, 222, 8 * Drakos et al. (2022) Drakos, N. E., Villasenor, B., Robertson, B. E., et al. 2022, ApJ, 926, 194, doi: 10.3847/1538-4357/ac46fb * Dressler et al. (2023) Dressler, A., Rieke, M., Eisenstein, D., et al. 2023, arXiv e-prints. https://arxiv.org/abs/2306.02469v1 * Driver & Robotham (2010) Driver, S. P., & Robotham, A. S. G. 2010, MNRAS, 407, 2131, doi: 10.1111/j.1365-2966.2010.17028.x * Driver et al. (2018) Driver, S. P., Andrews, S. K., da Cunha, E., et al. 2018, MNRAS, 475, 2891, doi: 10.1093/mnras/stx2728 * Duncan et al. (2014) Duncan, K., Conselice, C. J., Mortlock, A., et al. 2014, Monthly Notices of the Royal Astronomical Society, 444, 2960 * Dunlop et al. (2021) Dunlop, J. S., Abraham, R. G., Ashby, M. L. N., et al. 2021, PRIMER: Public Release IMaging for Extragalactic Research, JWST Proposal. Cycle 1, ID. #1837 * D’Silva et al. (2023) D’Silva, J. C., Driver, S. P., Lagos, C. D., et al. 2023, The Astrophysical Journal Letters, 959, L18 * Eddington (1913) Eddington, A. 1913, Monthly Notices of the Royal Astronomical Society, 73, 359 * Eisenstein et al. (2023a) Eisenstein, D. J., Willott, C., Alberts, S., et al. 2023a, arXiv e-prints, arXiv:2306.02465, doi: 10.48550/arXiv.2306.02465 * Eisenstein et al. (2023b) Eisenstein, D. J., Johnson, B. D., Robertson, B., et al. 2023b, arXiv e-prints, arXiv:2310.12340, doi: 10.48550/arXiv.2310.12340 * Endsley et al. (2021) Endsley, R., Stark, D. P., Chevallard, J., & Charlot, S. 2021, Monthly Notices of the Royal Astronomical Society, 500, 5229 * Endsley et al. (2023a) Endsley, R., Stark, D. P., Whitler, L., et al. 2023a, MNRAS, 000, 1. https://arxiv.org/abs/2306.05295v1 * Endsley et al. (2023b) Endsley, R., Stark, D. P., Lyu, J., et al. 2023b, Monthly Notices of the Royal Astronomical Society, 520, 4609 * Faucher-Giguère (2018) Faucher-Giguère, C.-A. 2018, Monthly Notices of the Royal Astronomical Society, 473, 3717 * Feng et al. (2016) Feng, Y., Di-Matteo, T., Croft, R. A., et al. 2016, MNRAS, 455, 2778, doi: 10.1093/mnras/stv2484 * Ferland et al. (2017) Ferland, G. J., Chatzikos, M., Guzmán, F., et al. 2017, Rev. Mexicana Astron. Astrofis., 53, 385, doi: 10.48550/arXiv.1705.10877 * Feroz et al. (2009) Feroz, F., Hobson, M., & Bridges, M. 2009, Monthly Notices of the Royal Astronomical Society, 398, 1601 * Ferreira et al. (2022) Ferreira, L., Adams, N., Conselice, C. J., et al. 2022, arXiv e-prints, arXiv:2207.09428. https://arxiv.org/abs/2207.09428 * Ferreira et al. (2023) Ferreira, L., Conselice, C. J., Sazonova, E., et al. 2023, The Astrophysical Journal, 955, 94 * Finkelstein et al. (2022) Finkelstein, S. L., Bagley, M., Song, M., et al. 2022, The Astrophysical Journal, 928, 52, doi: 10.3847/1538-4357/ac3aed * Finkelstein et al. 
(2023a) Finkelstein, S. L., Bagley, M. B., Ferguson, H. C., et al. 2023a, ApJ, 946, L13, doi: 10.3847/2041-8213/acade4 * Finkelstein et al. (2023b) Finkelstein, S. L., Leung, G. C. K., Bagley, M. B., et al. 2023b, arXiv e-prints, arXiv:2311.04279, doi: 10.48550/arXiv.2311.04279 * Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067 * Fujimoto et al. (2022) Fujimoto, S., Finkelstein, S. L., Burgarella, D., et al. 2022, arXiv preprint arXiv:2211.03896 * Fujimoto et al. (2023) Fujimoto, S., Arrabal Haro, P., Dickinson, M., et al. 2023, arXiv e-prints, arXiv:2301.09482, doi: 10.48550/arXiv.2301.09482 * Furlong et al. (2015) Furlong, M., Bower, R. G., Theuns, T., et al. 2015, MNRAS, 450, 4486, doi: 10.1093/mnras/stv852 * Furtak et al. (2021) Furtak, L. J., Atek, H., Lehnert, M. D., Chevallard, J., & Charlot, S. 2021, MNRAS, 501, 1568, doi: 10.1093/mnras/staa3760 * Furtak et al. (2023) Furtak, L. J., Shuntov, M., Atek, H., et al. 2023, MNRAS, 519, 3064, doi: 10.1093/mnras/stac3717 * Furtak et al. (2023) Furtak, L. J., Zitrin, A., Plat, A., et al. 2023, The Astrophysical Journal, 952, 142 * Furtak et al. (2023) Furtak, L. J., Labbé, I., Zitrin, A., et al. 2023, arXiv e-prints, arXiv:2308.05735, doi: 10.48550/arXiv.2308.05735 * Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 * Genel et al. (2014) Genel, S., Vogelsberger, M., Springel, V., et al. 2014, MNRAS, 445, 175, doi: 10.1093/mnras/stu1654 * Giménez-Arteaga et al. (2023) Giménez-Arteaga, C., Oesch, P. A., Brammer, G. B., et al. 2023, ApJ, 948, 126, doi: 10.3847/1538-4357/acc5ea * Giménez-Arteaga et al. (2024) Giménez-Arteaga, C., Fujimoto, S., Valentino, F., et al. 2024, Outshining in the Spatially Resolved Analysis of a Strongly-Lensed Galaxy at z=6.072 with JWST NIRCam. https://arxiv.org/abs/2402.17875 * Gottumukkala et al. (2023) Gottumukkala, R., Barrufet, L., Oesch, P., et al. 2023, arXiv preprint arXiv:2310.03787 * Grazian et al. (2012) Grazian, A., Castellano, M., Fontana, A., et al. 2012, A&A, 547, A51, doi: 10.1051/0004-6361/201219669 * Grazian et al. (2015) Grazian, A., Fontana, A., Santini, P., et al. 2015, Astronomy & Astrophysics, 575, A96 * Greene et al. (2023) Greene, J. E., Labbe, I., Goulding, A. D., et al. 2023, arXiv preprint arXiv:2309.05714 * Griffiths et al. (2018) Griffiths, A., Conselice, C. J., Alpaslan, M., et al. 2018, Monthly Notices of the Royal Astronomical Society, 475, 2853 * Grogin et al. (2011) Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, ApJS, 197, 35, doi: 10.1088/0067-0049/197/2/35 * Gunawardhana et al. (2011) Gunawardhana, M. L., Hopkins, A. M., Sharp, R. G., et al. 2011, Monthly Notices of the Royal Astronomical Society, 415, 1647 * Hainline et al. (2023) Hainline, K. N., Johnson, B. D., Robertson, B., et al. 2023, arXiv e-prints, arXiv:2306.02468, doi: 10.48550/arXiv.2306.02468 * Hainline et al. (2023) Hainline, K. N., Helton, J. M., Johnson, B. D., et al. 2023, Brown Dwarf Candidates in the JADES and CEERS Extragalactic Surveys. https://arxiv.org/abs/2309.03250 * Harikane et al. (2016) Harikane, Y., Ouchi, M., Ono, Y., et al. 2016, The Astrophysical Journal, 821, 123 * Harikane et al. (2023) Harikane, Y., Ouchi, M., Oguri, M., et al. 2023, ApJS, 265, 5, doi: 10.3847/1538-4365/acaaa9 * Haslbauer et al. (2022) Haslbauer, M., Kroupa, P., Zonoozi, A. H., & Haghi, H. 
2022, Has JWST already falsified dark-matter-driven galaxy formation?, arXiv, doi: 10.48550/ARXIV.2210.14915 * Hoffmann et al. (2021) Hoffmann, S. L., Mack, J., Avila, R., et al. 2021, in American Astronomical Society Meeting Abstracts, Vol. 53, American Astronomical Society Meeting Abstracts, 216.02 * Hopkins et al. (2005) Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2005, ApJ, 630, 716, doi: 10.1086/432463 * Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55 * Illingworth et al. (2016) Illingworth, G., Magee, D., Bouwens, R., et al. 2016, arXiv e-prints, arXiv:1606.00841, doi: 10.48550/arXiv.1606.00841 * Jaacks et al. (2019) Jaacks, J., Finkelstein, S. L., & Bromm, V. 2019, Monthly Notices of the Royal Astronomical Society, 488, 2202 * Jain et al. (2024) Jain, S., Tacchella, S., & Mosleh, M. 2024, Monthly Notices of the Royal Astronomical Society, 527, 3291 * Jansen & Windhorst (2018) Jansen, R. A., & Windhorst, R. A. 2018, PASP, 130, 124001, doi: 10.1088/1538-3873/aae476 * Jermyn et al. (2018) Jermyn, A. S., Steinhardt, C. L., & Tout, C. A. 2018, Monthly Notices of the Royal Astronomical Society, 480, 4265, doi: 10.1093/MNRAS/STY2123 * Jespersen et al. (2024) Jespersen, C. K., Steinhardt, C. L., Somerville, R. S., & Lovell, C. C. 2024, On the Significance of Rare Objects at High Redshift: The Impact of Cosmic Variance. https://arxiv.org/abs/2403.00050 * Johnson et al. (2023) Johnson, B., Foreman-Mackey, D., Sick, J., et al. 2023, Zenodo * Johnson et al. (2021) Johnson, B. D., Leja, J., Conroy, C., & Speagle, J. S. 2021, ApJS, 254, 22, doi: 10.3847/1538-4365/abef67 * Kartaltepe et al. (2021) Kartaltepe, J., Casey, C. M., Bagley, M., et al. 2021, COSMOS-Webb: The Webb Cosmic Origins Survey, JWST Proposal. Cycle 1, ID. #1727 * Katz et al. (2022) Katz, H., Rosdahl, J., Kimm, T., et al. 2022, Monthly Notices of the Royal Astronomical Society, 510, 5603 * Kikuchihara et al. (2020) Kikuchihara, S., Ouchi, M., Ono, Y., et al. 2020, The Astrophysical Journal, 893, 60 * Kocevski et al. (2023) Kocevski, D. D., Onoue, M., Inayoshi, K., et al. 2023, arXiv preprint arXiv:2302.00012 * Koekemoer et al. (2011) Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, ApJS, 197, 36, doi: 10.1088/0067-0049/197/2/36 * Kokorev et al. (2024) Kokorev, V., Caputi, K. I., Greene, J. E., et al. 2024, arXiv e-prints, arXiv:2401.09981. https://arxiv.org/abs/2401.09981 * Kroupa (2001) Kroupa, P. 2001, Monthly Notices of the Royal Astronomical Society, 322, 231 * Kroupa (2002) Kroupa, P. 2002, Science, 295, 82, doi: 10.1126/science.1067524 * Labbé et al. (2023) Labbé, I., van Dokkum, P., Nelson, E., et al. 2023, Nature, 616, 266, doi: 10.1038/s41586-023-05786-2 * Labbe et al. (2023) Labbe, I., Greene, J. E., Bezanson, R., et al. 2023, arXiv preprint arXiv:2306.07320 * Laigle et al. (2016) Laigle, C., McCracken, H. J., Ilbert, O., et al. 2016, The Astrophysical Journal Supplement Series, 224, 24 * Langeroodi et al. (2023) Langeroodi, D., Hjorth, J., Chen, W., et al. 2023, The Astrophysical Journal, 957, 39 * Larson et al. (2022) Larson, R. L., Hutchison, T. A., Bagley, M., et al. 2022, arxiv, doi: 10.48550/arxiv.2211.10035 * Larson et al. (2023) Larson, R. L., Finkelstein, S. L., Kocevski, D. D., et al. 2023, The Astrophysical Journal Letters, 953, L29 * Laseter et al. (2023) Laseter, I. H., Maseda, M. V., Curti, M., et al. 2023, arXiv e-prints. https://arxiv.org/abs/2306.03120v1 * Leja et al. (2019) Leja, J., Carnall, A. C., Johnson, B. 
D., Conroy, C., & Speagle, J. S. 2019, The Astrophysical Journal, 876, 3 * Leja et al. (2018) Leja, J., Johnson, B. D., Conroy, C., & van Dokkum, P. 2018, ApJ, 854, 62, doi: 10.3847/1538-4357/aaa8db * Leung et al. (2023) Leung, G. C. K., Bagley, M. B., Finkelstein, S. L., et al. 2023, ApJ, 954, L46, doi: 10.3847/2041-8213/acf365 * Li et al. (2023) Li, Z., Dekel, A., Sarkar, K. C., et al. 2023, arXiv preprint arXiv:2311.14662 * Looser et al. (2023) Looser, T. J., D’eugenio, F., Maiolino, R., et al. 2023, arXiv e-prints. https://arxiv.org/abs/2306.02470v1 * Lovell et al. (2023) Lovell, C. C., Harrison, I., Harikane, Y., Tacchella, S., & Wilkins, S. M. 2023, Monthly Notices of the Royal Astronomical Society, 518, 2511 * Lovell et al. (2021) Lovell, C. C., Vijayan, A. P., Thomas, P. A., et al. 2021, MNRAS, 500, 2127, doi: 10.1093/mnras/staa3360 * Lower et al. (2020) Lower, S., Narayanan, D., Leja, J., et al. 2020, The Astrophysical Journal, 904, 33 * Ma et al. (2018) Ma, X., Hopkins, P. F., Garrison-Kimmel, S., et al. 2018, MNRAS, 478, 1694, doi: 10.1093/mnras/sty1024 * Madau (1995) Madau, P. 1995, The Astrophysical Journal, 441, 18 * Madau & Dickinson (2014) Madau, P., & Dickinson, M. 2014, Annual Review of Astronomy and Astrophysics, 52, 415, doi: 10.1146/annurev-astro-081811-125615 * Madau & Haardt (2015) Madau, P., & Haardt, F. 2015, ApJ, 813, L8, doi: 10.1088/2041-8205/813/1/L8 * Marley et al. (2021) Marley, M., Saumon, D., Morley, C., et al. 2021, Zenodo, doi: 10.5281/zenodo.5063476 * Mason et al. (2023) Mason, C. A., Trenti, M., & Treu, T. 2023, Monthly Notices of the Royal Astronomical Society, 521, 497 * Matthee et al. (2023) Matthee, J., Naidu, R. P., Brammer, G., et al. 2023, arXiv preprint arXiv:2306.05448 * Mauerhofer & Dayal (2023) Mauerhofer, V., & Dayal, P. 2023, Monthly Notices of the Royal Astronomical Society, 526, 2196 * McLeod et al. (2023) McLeod, D. J., Donnan, C. T., McLure, R. J., et al. 2023, MNRAS, doi: 10.1093/mnras/stad3471 * Menanteau et al. (2012) Menanteau, F., Hughes, J. P., Sifón, C., et al. 2012, ApJ, 748, 7, doi: 10.1088/0004-637X/748/1/7 * Mortlock et al. (2011) Mortlock, A., Conselice, C. J., Bluck, A. F. L., et al. 2011, MNRAS, 413, 2845, doi: 10.1111/j.1365-2966.2011.18357.x * Mortlock et al. (2015) Mortlock, A., Conselice, C. J., Hartley, W. G., et al. 2015, MNRAS, 447, 2, doi: 10.1093/mnras/stu2403 * Moster et al. (2010) Moster, B. P., Somerville, R. S., Maulbetsch, C., et al. 2010, ApJ, 710, 903, doi: 10.1088/0004-637X/710/2/903 * Moster et al. (2011) Moster, B. P., Somerville, R. S., Newman, J. A., & Rix, H.-W. 2011, ApJ, 731, 113, doi: 10.1088/0004-637X/731/2/113 * Mowla et al. (2024) Mowla, L., Iyer, K., Asada, Y., et al. 2024, arXiv e-prints, arXiv:2402.08696, doi: 10.48550/arXiv.2402.08696 * Mutch et al. (2016) Mutch, S. J., Geil, P. M., Poole, G. B., et al. 2016, Monthly Notices of the Royal Astronomical Society, 462, 250 * Naidu et al. (2022) Naidu, R. P., Oesch, P. A., van Dokkum, P., et al. 2022, The Astrophysical Journal Letters, 940, L14 * Naidu et al. (2022) Naidu, R. P., Oesch, P. A., Setton, D. J., et al. 2022, arXiv e-prints, arXiv:2208.02794. https://arxiv.org/abs/2208.02794 * Nanayakkara et al. (2022) Nanayakkara, T., Glazebrook, K., Jacobs, C., et al. 2022, 11, doi: 10.48550/arxiv.2207.13860 * Narayanan et al. (2023) Narayanan, D., Lower, S., Torrey, P., et al. 2023, Outshining by Recent Star Formation Prevents the Accurate Measurement of High-z Galaxy Stellar Masses. https://arxiv.org/abs/2306.10118 * Navarro-Carrera et al. 
(2023) Navarro-Carrera, R., Rinaldi, P., Caputi, K. I., et al. 2023, arXiv preprint arXiv:2305.16141 * Noboriguchi et al. (2023) Noboriguchi, A., Inoue, A. K., Nagao, T., Toba, Y., & Misawa, T. 2023, The Astrophysical Journal Letters, 959, L14 * Norgaard-Nielsen & Perez-Gonzalez (2017) Norgaard-Nielsen, H. U., & Perez-Gonzalez, P. G. 2017, The MIRI HUDF Deep Imaging Survey, JWST Proposal. Cycle 1, ID. #1283 * O’Brien et al. (2024) O’Brien, R., Jansen, R. A., Grogin, N. A., et al. 2024, arXiv preprint arXiv:2401.04944 * Oesch et al. (2014) Oesch, P., Bouwens, R., Illingworth, G., et al. 2014, The Astrophysical Journal, 786, 108 * Oesch et al. (2018) Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Labbé, I., & Stefanon, M. 2018, ApJ, 855, 105, doi: 10.3847/1538-4357/aab03f * Oke (1974) Oke, J. B. 1974, ApJS, 27, 21, doi: 10.1086/190287 * Oke & Gunn (1983) Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713, doi: 10.1086/160817 * Ormerod et al. (2024) Ormerod, K., Conselice, C., Adams, N., et al. 2024, Monthly Notices of the Royal Astronomical Society, 527, 6110 * Papadopoulos et al. (2011) Papadopoulos, P. P., Thi, W.-F., Miniati, F., & Viti, S. 2011, Monthly Notices of the Royal Astronomical Society, 414, 1705 * Papovich et al. (2022) Papovich, C., Cole, J., Yang, G., et al. 2022, arXiv preprint arXiv:2301.00027 * Pérez-González et al. (2023) Pérez-González, P. G., Costantin, L., Langeroodi, D., et al. 2023, arXiv preprint arXiv:2302.02429 * Pérez-González et al. (2023) Pérez-González, P. G., Barro, G., Annunziatella, M., et al. 2023, ApJ, 946, L16, doi: 10.3847/2041-8213/acb3a5 * Pérez-González et al. (2024) Pérez-González, P. G., Barro, G., Rieke, G. H., et al. 2024, arXiv preprint arXiv:2401.08782 * Perrin et al. (2014) Perrin, M. D., Sivaramakrishnan, A., Lajoie, C.-P., et al. 2014, in Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, Vol. 9143, SPIE, 1174–1184 * Perrin et al. (2012) Perrin, M. D., Soummer, R., Elliott, E. M., Lallo, M. D., & Sivaramakrishnan, A. 2012, in Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, Vol. 8442, SPIE, 1193–1203 * Planck Collaboration et al. (2016) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13, doi: 10.1051/0004-6361/201525830 * Pontoppidan et al. (2022) Pontoppidan, K. M., Barrientes, J., Blome, C., et al. 2022, The Astrophysical Journal Letters, 936, L14 * Popesso et al. (2023) Popesso, P., Concas, A., Cresci, G., et al. 2023, Monthly Notices of the Royal Astronomical Society, 519, 1526 * Retzlaff et al. (2010) Retzlaff, J., Rosati, P., Dickinson, M., et al. 2010, Astronomy & Astrophysics, 511, A50 * Rieke et al. (2022) Rieke, M. J., Kelly, D. M., Misselt, K., et al. 2022, arXiv preprint arXiv:2212.12069 * Rieke et al. (2023) Rieke, M. J., Robertson, B., Tacchella, S., et al. 2023, The Astrophysical Journal Supplement Series, 269, 16 * Roberts-Borsani et al. (2016) Roberts-Borsani, G., Bouwens, R., Oesch, P., et al. 2016, The Astrophysical Journal, 823, 143 * Robertson et al. (2023a) Robertson, B., Johnson, B. D., Tacchella, S., et al. 2023a, arXiv e-prints, arXiv:2312.10033, doi: 10.48550/arXiv.2312.10033 * Robertson et al. (2023b) Robertson, B. E., Tacchella, S., Johnson, B. D., et al. 2023b, Nature Astronomy, 7, 611, doi: 10.1038/s41550-023-01921-1 * Robotham et al. (2011) Robotham, A. S., Norberg, P., Driver, S. P., et al. 2011, Monthly Notices of the Royal Astronomical Society, 416, 2640 * Roper et al. (2022) Roper, W. J., Lovell, C. C., Vijayan, A. 
P., et al. 2022, Monthly Notices of the Royal Astronomical Society, 514, 1921 * Rowan-Robinson & McCrea (1968) Rowan-Robinson, M., & McCrea, W. 1968, Monthly Notices of the Royal Astronomical Society, 138, 445 * Salim et al. (2018) Salim, S., Boquien, M., & Lee, J. C. 2018, ApJ, 859, 11, doi: 10.3847/1538-4357/aabf3c * Salpeter (1955) Salpeter, E. E. 1955, Astrophysical Journal, vol. 121, p. 161, 121, 161 * Santini et al. (2022) Santini, P., Fontana, A., Castellano, M., et al. 2022, arXiv, arXiv:2207.11379. https://ui.adsabs.harvard.edu/abs/2022arXiv220711379S/abstract * Schaye et al. (2015) Schaye, J., Crain, R. A., Bower, R. G., et al. 2015, MNRAS, 446, 521, doi: 10.1093/mnras/stu2058 * Schechter (1976) Schechter, P. 1976, ApJ, 203, 297, doi: 10.1086/154079 * Schlawin et al. (2020) Schlawin, E., Leisenring, J., Misselt, K., et al. 2020, AJ, 160, 231, doi: 10.3847/1538-3881/abb811 * Schmidt (1968) Schmidt, M. 1968, The Astrophysical Journal, 151, 393 * Shen et al. (2023) Shen, X., Vogelsberger, M., Boylan-Kolchin, M., Tacchella, S., & Kannan, R. 2023, Monthly Notices of the Royal Astronomical Society, 525, 3254 * Shen et al. (2024) Shen, X., Vogelsberger, M., Borrow, J., et al. 2024, arXiv preprint arXiv:2402.08717 * Smail et al. (2023) Smail, I., Dudzevičiūtė, U., Gurwell, M., et al. 2023, The Astrophysical Journal, 958, 36 * Sneppen et al. (2022) Sneppen, A., Steinhardt, C. L., Hensley, H., et al. 2022, ApJ, 931, 57, doi: 10.3847/1538-4357/ac695e * Song et al. (2016) Song, M., Finkelstein, S. L., Ashby, M. L. N., et al. 2016, ApJ, 825, 5, doi: 10.3847/0004-637X/825/1/5 * Speagle (2020) Speagle, J. S. 2020, Monthly Notices of the Royal Astronomical Society, 493, 3132 * Stanway & Eldridge (2018) Stanway, E. R., & Eldridge, J. 2018, Monthly Notices of the Royal Astronomical Society, 479, 75 * Stark et al. (2013) Stark, D. P., Schenker, M. A., Ellis, R., et al. 2013, The Astrophysical Journal, 763, 129 * Stefanon et al. (2021) Stefanon, M., Bouwens, R. J., Labbé, I., et al. 2021, The Astrophysical Journal, 922, 29 * Stefanon et al. (2017) —. 2017, The Astrophysical Journal, 843, 36 * Steinhardt et al. (2021) Steinhardt, C. L., Jespersen, C. K., & Linzer, N. B. 2021, The Astrophysical Journal, 923, 8, doi: 10.3847/1538-4357/ac2a2f * Steinhardt et al. (2023) Steinhardt, C. L., Kokorev, V., Rusakov, V., Garcia, E., & Sneppen, A. 2023, The Astrophysical Journal Letters, 951, L40 * Strait et al. (2020) Strait, V., Bradač, M., Coe, D., et al. 2020, The Astrophysical Journal, 888, 124 * Tacchella et al. (2022) Tacchella, S., Finkelstein, S. L., Bagley, M., et al. 2022, The Astrophysical Journal, 927, 170 * Tang et al. (2023) Tang, M., Stark, D. P., Chen, Z., et al. 2023, arXiv e-prints, arXiv:2301.07072, doi: 10.48550/arXiv.2301.07072 * Tomczak et al. (2014) Tomczak, A. R., Quadri, R. F., Tran, K.-V. H., et al. 2014, ApJ, 783, 85, doi: 10.1088/0004-637X/783/2/85 * Treu et al. (2022) Treu, T., Calabro, A., Castellano, M., et al. 2022, arXiv, arXiv:2207.13527. https://ui.adsabs.harvard.edu/abs/2022arXiv220713527T/abstract * Trussler et al. (2023) Trussler, J. A., Adams, N. J., Conselice, C. J., et al. 2023, Monthly Notices of the Royal Astronomical Society, stad1629 * Ulm (1990) Ulm, K. 1990, American Journal of Epidemiology, 131, 373, doi: 10.1093/oxfordjournals.aje.a115507 * Vallenari et al. (2022) Vallenari, A., Brown, A., & Prusti, T. 2022, Astronomy & Astrophysics * van Mierlo et al. (2023) van Mierlo, S. E., Caputi, K. I., & Kokorev, V. 
2023, The Astrophysical Journal Letters, 945, L21 * Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2 * Wang et al. (2023a) Wang, B., Fujimoto, S., Labbé, I., et al. 2023a, The Astrophysical Journal Letters, 957, L34 * Wang et al. (2023b) Wang, B., Leja, J., Atek, H., et al. 2023b, arXiv preprint arXiv:2310.06781 * Wang et al. (2024) Wang, T., Sun, H., Zhou, L., et al. 2024, The true number density of massive galaxies in the early Universe revealed by JWST/MIRI. https://arxiv.org/abs/2403.02399 * Weaver et al. (2022) Weaver, J., Davidzon, I., Toft, S., et al. 2022, arXiv preprint arXiv:2212.02512 * Weidner et al. (2013) Weidner, C., Ferreras, I., Vazdekis, A., & La Barbera, F. 2013, Monthly Notices of the Royal Astronomical Society, 435, 2274 * Weigel et al. (2016) Weigel, A. K., Schawinski, K., & Bruderer, C. 2016, Monthly Notices of the Royal Astronomical Society, 459, 2150, doi: 10.1093/mnras/stw756 * Whitaker et al. (2019) Whitaker, K. E., Ashas, M., Illingworth, G., et al. 2019, ApJS, 244, 16, doi: 10.3847/1538-4365/ab3853 * Wilkins et al. (2017) Wilkins, S. M., Feng, Y., Di Matteo, T., et al. 2017, MNRAS, 469, 2517, doi: 10.1093/mnras/stx841 * Wilkins et al. (2023a) Wilkins, S. M., Turner, J. C., Bagley, M. B., et al. 2023a, arXiv e-prints, arXiv:2311.08065, doi: 10.48550/arXiv.2311.08065 * Wilkins et al. (2023b) Wilkins, S. M., Vijayan, A. P., Lovell, C. C., et al. 2023b, MNRAS, 519, 3118, doi: 10.1093/mnras/stac3280 * Williams et al. (2018) Williams, C. C., Curtis-Lake, E., Hainline, K. N., et al. 2018, ApJS, 236, 33, doi: 10.3847/1538-4365/aabcbb * Willott et al. (2023a) Willott, C. J., Desprez, G., Asada, Y., et al. 2023a, arXiv e-prints, arXiv:2311.12234, doi: 10.48550/arXiv.2311.12234 * Willott et al. (2023b) Willott, C. J., Abraham, R. G., Asada, Y., et al. 2023b, CANUCS: The CAnadian NIRISS Unbiased Cluster Survey, JWST Proposal. Cycle 3, ID. #4527 * Windhorst et al. (2023) Windhorst, R. A., Cohen, S. H., Jansen, R. A., et al. 2023, AJ, 165, 13, doi: 10.3847/1538-3881/aca163 * Withers et al. (2023) Withers, S., Muzzin, A., Ravindranath, S., et al. 2023, The Astrophysical Journal Letters, 958, L14 * Woodrum et al. (2023) Woodrum, C., Rieke, M., Ji, Z., et al. 2023, arXiv preprint arXiv:2310.18464 * Xiao et al. (2023) Xiao, M., Oesch, P., Elbaz, D., et al. 2023, arXiv preprint arXiv:2309.02492 * Yan et al. (2023) Yan, H., Ma, Z., Ling, C., Cheng, C., & Huang, J.-S. 2023, ApJ, 942, L9, doi: 10.3847/2041-8213/aca80c * Yan et al. (2022) Yan, H., Cohen, S. H., Windhorst, R. A., et al. 2022, arXiv, arXiv:2209.04092. https://ui.adsabs.harvard.edu/abs/2022arXiv220904092Y/abstract * Yung et al. (2023) Yung, L. Y. A., Somerville, R. S., Finkelstein, S. L., Wilkins, S. M., & Gardner, J. P. 2023, arXiv e-prints, arXiv:2304.04348, doi: 10.48550/arXiv.2304.04348 * Yung et al. (2019) Yung, L. Y. A., Somerville, R. S., Popping, G., et al. 2019, MNRAS, 490, 2855, doi: 10.1093/mnras/stz2755
# High-accuracy longitudinal position measurement using self-accelerating light

Shashi Prabhakar <EMAIL_ADDRESS> Stephen Plachta Marco Ornigotti Robert Fickler Photonics Laboratory, Physics Unit, Tampere University, Tampere, FI-33720, Finland

###### Abstract

Radially self-accelerating light exhibits an intensity pattern that describes a spiraling trajectory around the optical axis as the beam propagates. In this article, we show in simulation and experiment how such beams can be used to perform a high-accuracy distance measurement with respect to a reference using simple off-axis intensity detection. We demonstrate that generating beams whose intensity pattern simultaneously spirals with fast and slow rotation components enables a distance measurement with high accuracy over a broad range, using the high and low rotation frequency, respectively. In our experiment, we achieve an accuracy of around 2 $\mu$m over a longitudinal range of more than 2 mm using a single beam and only two quadrant detectors. As our method relies on single-beam interference and only requires a static generation and simple intensity measurements, it is intrinsically stable and might find applications in high-speed measurements of longitudinal position.

## I Introduction

Structuring the spatial shape of light fields has become a broad research field spanning areas from the foundations of optics to optical communication, materials processing, quantum optics, and microscopy, to name a few [rubinsztein2016roadmap]. Amongst many interesting features structured light may have, one in particular has attracted a lot of attention and might even be seen as the starting point of the field, namely the azimuthal phase structure connected to the orbital angular momentum (OAM) of light [padgett2017orbital]. Light fields carrying such OAM have a transverse phase of the form $\exp(-i\ell\varphi)$, where $\varphi$ is the azimuthal coordinate and $\ell$ defines the quanta of OAM each photon carries [allen1992orbital]. These transverse scalar modes are commonly known as vortex modes, or donut beams, as they have a phase singularity and, thus, an intensity null along the optical axis. Over the last decades, various techniques to imprint such twisted structures have been established, e.g., spiral phase plates [beijersbergen1994helical]; holographic generation using spatial light modulators [heckenberg1992generation; carpentier2008making; forbes2016creation]; cylindrical lenses [beijersbergen1993astigmatic]; $q$- and $J$-plates [marrucci2006optical; larocque2016arbitrary; devlin2017arbitrary]; or direct generation of the light field inside the cavity of a laser [forbes2019structured]. One particular family of modes, whose higher orders have an azimuthal phase ramp, is known as Bessel beams. Bessel beams are propagation-invariant light fields described by Bessel functions [gori1987bessel]. They have received significant attention due to being diffraction-less [zahid1989directionality; vetter2019realization] and self-healing [chu2012analytical; mcgloin2005bessel]. While zero-order Bessel beams have an intensity maximum along the optical axis and do not carry OAM, higher orders have a twisted phase structure leading to well-defined quanta of OAM per photon. In addition, Bessel beams are comparatively easy to generate using a ring aperture, with or without an azimuthally varying phase, at the back focal plane of a lens ($k$-space).
The light fields in the focal plane of the lens, which have undergone an optical Fourier transformation, then resemble the theoretical Bessel beams; however, they have a finite beam extent and are therefore called Bessel-Gauss beams. If not only one ring but multiple rings of different radii and different orders are put into the back focal plane of the lens, a superposition of multiple Bessel beams is generated. Interestingly, the obtained superposition structures show the peculiar feature of spiraling around the optical axis along the propagation direction if the constituents forming the superposition have different OAM values [chavez1996nondiffracting; schechner1996wave; abramochkin1997generation; paakkonen1998rotating]. This property of complex superpositions of higher-order Bessel beams has been the focus of various research efforts. Thorough theoretical and experimental studies of such spiraling beams have been performed, which have been enabled by the progress in experimental techniques of generating such beams with high precision and flexibility [tervo2001rotating; kotlyar2007rotation; vasilyeu2009generating; vetter2015optimization; schulze2015accelerated]. Contrary to Airy beams, whose self-accelerating character is essentially given by the fact that an observer in a reference frame co-moving with an Airy beam would experience a tangential fictitious force [greenberger1980comment] that results in their characteristic parabolic propagation profile, radially self-accelerating beams (RSABs) are characterised by a centrifugal fictitious force linked to their characteristic spiraling motion [vetter2014generalized]. As such, they are also distinct from another class of self-accelerating beams investigated recently in [vetter2017real; webster2017radially]. In this work, we demonstrate a novel application of spiraling beams as a means to determine the longitudinal distance with high accuracy by only measuring the intensity using a quadrant detector, i.e., in a limited number of off-axis locations. First, we briefly introduce the theory behind spiraling beams and describe how superpositions of three Bessel modes can lead to a complex rotating pattern having both quickly and slowly rotating parts at the same time. We then show in simulations and experiments that using such a spiraling beam enables the accurate determination of distance over a long range when the intensity is recorded using two quadrant detectors (or at a minimum of three off-axis positions). In the experimental implementation, we are able to achieve an accuracy of around 2 $\mu$m over a range of 2 mm. The obtained result is mainly limited by the aperture of our optical system, as well as the resolution of the generating and detecting devices. Hence, the proposed and demonstrated method of measuring a longitudinal distance using structured light might find promising applications, similar to self-accelerating Airy beams, which, for example, have recently been used to resolve depth in microscopy applications [jia2014isotropic; he2019depth]. Due to its simplicity and high accuracy, our method nicely complements other available techniques using light, e.g., time-of-flight measurements as used in LIDAR systems [jarvis1983laser], interferometric approaches [kubota1987interferometer], or schemes that rely on complex scattering of structured light fields [berg2020microsphere], to name a few.
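A minimal numerical sketch of this ring-aperture generation scheme is the following; the grid size, ring radius, ring width, and OAM value are illustrative assumptions, not the settings of our experiment. A thin ring carrying an azimuthal phase is Fourier transformed, mimicking the action of the lens, and the focal-plane intensity approximates a Bessel-Gauss mode.

```python
import numpy as np

# Modulation-plane (k-space) grid; all parameters are illustrative only
N, L = 1024, 10e-3                      # samples and physical width [m]
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

# Thin ring of radius r0 and width dr with an azimuthal phase exp(i*l*phi)
r0, dr, ell = 2e-3, 50e-6, 1
ring = np.exp(-((R - r0) / dr) ** 2) * np.exp(1j * ell * PHI)

# A lens optically Fourier transforms the modulation plane; the focal-plane
# field then approximates a Bessel-Gauss beam J_l(k_r r) exp(i*l*phi)
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(ring)))
intensity = np.abs(field) ** 2
```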
## II Theoretical background

### II.1 Spiraling light fields

Measuring the longitudinal position, i.e., a certain distance with respect to a fixed reference, using an intensity structure that changes over propagation requires a light field with well-defined propagation dynamics. The recently demonstrated radially self-accelerating, or spiraling, light fields, which show a constant rotation of the intensity pattern along the propagation direction, are a very convenient choice. While light with more complex propagation dynamics would require sophisticated evaluation procedures, rotating structures allow the determination of the longitudinal position through simple measurements of the rotation angle. Importantly, such light fields can be easily realized by superimposing (at least) two vortex beams, each having a different OAM value $\ell$ and a different longitudinal wave vector $k_{z}$ defining the propagation dynamics [vetter2014generalized]. The two-component solution, i.e., the so-called helicon beams, can then be written as

$u(r,\varphi,z)=A_{\ell_{1}}(r)\exp{[i(k_{1z}z+\ell_{1}\varphi)]}+A_{\ell_{2}}(r)\exp{[i(k_{2z}z+\ell_{2}\varphi)]},$ (1)

where the indices $1,2$ label the two constituents and $A_{\ell}(r)$ is a radially-dependent envelope function. The resulting intensity, $I=u(r,\varphi,z)u(r,\varphi,z)^{*}$, can then be obtained as

$I(r,\varphi,z)\propto\cos^{2}{[\Delta k\,z+\Delta\ell\,\varphi]},$ (2)

where $\Delta k=(k_{1z}-k_{2z})/2$ is the difference between the wave vectors of the two beams and $\Delta\ell=(\ell_{1}-\ell_{2})/2$ is the difference between their OAM values. Notice, moreover, that we have only kept the part of the intensity that is of importance, i.e., the one including the required angular and $z$-dependence. For more details we refer the interested reader to earlier works [vetter2014generalized]. From (2) we find that the angular orientation of the intensity profile $\phi(z)$ changes along the beam propagation according to the relation

$\phi(z)=\frac{z\Delta k}{\Delta\ell},$ (3)

from which the angular velocity can be calculated as

$\omega=\frac{\partial\phi(z)}{\partial z}=\frac{\Delta k}{\Delta\ell}.$ (4)

One such beam propagation is shown in Fig. 1, where the spiraling of the mode along the propagation axis is depicted.

Figure 1: Normalised three-dimensional representation of the propagation of the central lobe of a radially self-accelerating beam along the $z$ direction, as defined in (1), consisting of the superposition of $\ell_{1}=0$ and $\ell_{2}=1$ Bessel modes, which matches the form of the single-frequency beam used in the experiment. The transverse directions $\{k_{0}x,k_{0}y\}$ are normalised to the central $k$-vector of the beam, i.e., $k_{0}$, while the longitudinal direction (i.e., the propagation direction $z$) has been normalised to the rotation period $\Lambda=2\pi/\omega$, with $\omega$ being the rotation speed defined by (4). The plot shows propagation up to the first two periods. The colour scale in the picture represents different iso-intensity surfaces, with brighter colours indicating regions of higher intensity. As can be seen, the whole intensity distribution rigidly rotates around the $z$ axis with rotation speed $2\pi/\Lambda$. This peculiar propagation pattern is the result of interference between the various Bessel beam components constituting the radially self-accelerating beam, as described by (1).
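To make this rotation explicit, the following sketch (with arbitrary, assumed wave-vector values rather than experimental ones) evaluates the angular part of (2) at successive propagation distances and tracks the orientation of the intensity lobe, which advances linearly with $z$ at the rate given by (4).

```python
import numpy as np

# Two-component helicon beam, Eq. (1); wave vectors are assumed toy values
k1z, k2z = 1.0000, 0.9990
l1, l2 = 0, 1
dk, dl = (k1z - k2z) / 2, (l1 - l2) / 2   # conventions of Eq. (2)

phi = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
period = 2 * np.pi * abs(dl / dk)         # z needed for one full rotation

for z in np.linspace(0, period, 5):
    I = np.cos(dk * z + dl * phi) ** 2    # angular part of Eq. (2)
    lobe = phi[np.argmax(I)]              # orientation of the intensity lobe
    print(z, lobe)  # lobe advances linearly in z at rate |dk/dl|, cf. Eq. (4)
```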
In other words, if we measure the intensity over a certain angular region, the intensity along the beam propagation follows a periodic $\cos^{2}$-function and can be used to determine the longitudinal distance unambiguously within half a period. In principle, a simple measurement of the intensity at an off-axis transverse position therefore allows the determination of a distance with arbitrary precision. In practical situations, however, errors induced by the generation or measurement of the structure result in an uncertainty in the determination of the intensity's angular position, which limits the longitudinal accuracy. One direct way to improve the measurement accuracy despite these imperfections is to increase the rotation frequency of the structure with respect to its propagation. The faster the rotation, the better the accuracy in measuring the longitudinal distance $z$. Hence, one aim in high-accuracy distance measurement using self-accelerating light fields is to achieve the largest possible difference in the longitudinal wave vectors $\Delta k$ allowed by the optical system. We also see that the difference in the OAM values $\Delta\ell$ should be kept as small as possible, i.e., the $\ell$-values of the two constituent beams should only differ by 1. Obviously, improving the longitudinal accuracy by increasing the rotation frequency comes at the cost of a reduced longitudinal range over which an unambiguous determination of the angular position is possible by only examining the intensity pattern. To circumvent this limitation, it is possible to realize a more complex structure, which shows a rotating intensity pattern that includes two (or more) well-defined rotation frequencies. Ideally, the intensity structure should have one very high rotation frequency, used to obtain a locally high-accuracy measurement of the longitudinal position. The intensity dynamics should further include a rotating structure with very low frequency, from which it is possible to determine the global distance and discriminate between different fast-varying periods. Both frequencies need to be adjusted such that each period of the high-accuracy measurement can be distinguished from any other using the slow rotation pattern. In the theoretical description, this idea can be implemented by adding a third term to the equation introduced earlier (1), such that we obtain

$u(r,\varphi,z)=A_{\ell_{1}}(r)\exp{[i(k_{1z}z+\ell_{1}\varphi)]}+A_{\ell_{2}}(r)\exp{[i(k_{2z}z+\ell_{2}\varphi)]}+A_{\ell_{3}}(r)\exp{[i(k_{3z}z+\ell_{3}\varphi)]}.$ (5)

As can be seen, the electric field defined above contains three different contributions, each characterised by its spatial frequency $k_{i}$ and OAM $\ell_{i}$. If we now calculate the intensity distribution generated by such a field, we will have, together with the contributions of the single terms in (5) – i.e., terms proportional to $|A_{\ell_{k}}|^{2}$ – also all the possible interference terms between the three beams composing the field above.
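This two-frequency behaviour can be illustrated numerically as follows; the wave vectors below are assumed toy values, chosen only so that the fast and slow oscillations are clearly separated.

```python
import numpy as np

# Three Bessel components, Eq. (5); values are illustrative, not experimental
k1, k2, k3 = 1.000, 0.950, 0.998          # longitudinal wave vectors (arb. u.)
l1, l2, l3 = 0, 1, 1                      # l2 = l3, as required in the text

dk12, dl12 = (k1 - k2) / 2, (l1 - l2) / 2   # fast rotating term
dk13, dl13 = (k1 - k3) / 2, (l1 - l3) / 2   # slow rotating term
dk23, dl23 = (k2 - k3) / 2, (l2 - l3) / 2   # dl23 = 0: no angular dependence

def intensity(z, phi):
    """Interference terms of Eq. (6)."""
    return (np.cos(dk12 * z + dl12 * phi) ** 2
            + np.cos(dk13 * z + dl13 * phi) ** 2
            + np.cos(dk23 * z + dl23 * phi) ** 2)

# The signal at one fixed off-axis angle contains both a fast and a slow
# angle-dependent oscillation along z, set by 2*dk12 and 2*dk13
z = np.linspace(0, 10000, 100000)
signal = intensity(z, phi=0.3)
```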
We can therefore write, neglecting the $z$-independent terms, which amount only to an overall normalisation factor (supplemental material of vetter2014generalized ),

$I\propto\cos^{2}{[\Delta k_{1,2}z+\Delta\ell_{1,2}\varphi]}+\cos^{2}{[\Delta k_{1,3}z+\Delta\ell_{1,3}\varphi]}+\cos^{2}{[\Delta k_{2,3}z+\Delta\ell_{2,3}\varphi]},$ (6)

where $\Delta k_{i,j}=(k_{iz}-k_{jz})/2$ and $\Delta\ell_{i,j}=(\ell_{i}-\ell_{j})/2$ are half the pairwise differences between the wave vectors and OAM values of the three fields. By choosing $\Delta\ell_{2,3}=0$, i.e., $\ell_{2}=\ell_{3}$, the angular dependence of the propagation dynamics of the last term vanishes. Hence, we obtain the required light field, whose structure rotates with only two rotation frequencies at the same time.

### II.2 Experimental implementation

In an experiment, it is convenient to realize these radially self-accelerating beams in the framework of Bessel beams, i.e., $A_{\ell}(r)=J_{\ell}(rk_{r})$ ornigotti2018vector ; rop2012measuring . For Bessel beams the longitudinal wave vector $k_{z}$ can be straightforwardly controlled by adjusting its radial counterpart $k_{r}$. Both quantities are related through

$k_{z}=\sqrt{k^{2}-k_{r}^{2}},$ (7)

where $k=2\pi/\lambda$ denotes the wavenumber and $\lambda$ the wavelength of the utilized light field. As the angular spectrum of a Bessel beam forms a ring in $k$-space, such beams are relatively simple to generate in the laboratory. By modulating an incoming light field to have a ring-shaped amplitude at one plane ($k$-space), we can transform the light into a Bessel beam (real space) by implementing an optical Fourier transform using a properly placed lens. The lens is placed one focal distance $f$ behind the initial modulation plane, i.e., the modulation plane lies in the front focal plane of the lens, such that at around another focal distance behind the lens the optical Fourier transform leads to a Bessel beam with a well-defined longitudinal wave vector $k_{z}$. In the modulation plane is a phase-only spatial light modulator (SLM) whose screen displays a holographic pattern, generated by a MATLAB script, that modulates the amplitude and phase of the beam into the required ring shape rosales2017shape . Depending on the radius $r$ of the ring in the modulation plane, a Bessel beam with longitudinal wave vector

$k_{z}=\frac{2\pi}{\lambda}\cos{\left(\frac{r}{f}\right)}$ (8)

will be obtained, which follows from simple geometric arguments vasilyeu2009generating ; rop2012measuring . A radially self-accelerating beam, such as the one described above, follows from simply modulating the light field to have two (or more) rings with different radii $r_{i}$ and individual OAM values $\ell_{i}$. The spiraling around the optical axis can thus be tuned by changing the radii of the two rings, leading to a wave vector difference

$\Delta k_{i,j}=(k_{iz}-k_{jz})/2=\frac{\pi}{\lambda}\left[\cos{\left(\frac{r_{i}}{f}\right)}-\cos{\left(\frac{r_{j}}{f}\right)}\right].$ (9)

As discussed earlier, to achieve the best possible longitudinal accuracy, we aim to generate a structure that contains both a very high and a very low rotation frequency at the same time. To realise such a rotating structure, the light field generated by the SLM needs to contain three Bessel beam components, so that their mutual interference gives rise to a fast-rotating and a slow-rotating self-accelerating beam.
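Equations (8) and (9) make it easy to estimate the fringe period that a given modulation geometry yields. The sketch below uses the nominal system parameters quoted in the next section (wavelength 780 nm, focal length 50 mm, a ring at 3.5 mm and a small central disc); evaluating the ring at its mid radius and treating the disc as a thin ring at an effective radius of 0.3 mm are assumptions made only for this back-of-the-envelope estimate:

```python
import numpy as np

# Sketch: fringe-period estimate for the fast-rotating structure from
# Eqs. (7)-(9). The effective radii below are assumptions for this estimate,
# not exact values from the simulation.

lam = 780e-9   # wavelength (m)
f = 50e-3      # focal length of the Fourier-transforming lens (m)

def kz(r):
    """Longitudinal wave vector generated by a ring of radius r, Eq. (8)."""
    return (2 * np.pi / lam) * np.cos(r / f)

r_ring, r_disc = 3.55e-3, 0.3e-3          # ring mid radius, disc effective radius
dk = 0.5 * abs(kz(r_disc) - kz(r_ring))   # Eq. (9)
dl = 0.5                                  # (l1 - l2)/2 for l = 1 and l = 0

Lambda = 2 * np.pi * dl / dk   # rotation (and fringe) period along z
print(f"expected fringe period: {Lambda * 1e6:.0f} um")  # ~312 um
```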
The former, i.e., the fast-rotating structure, results from Bessel beam components 1 and 2, whose difference in radii should be as large as the optical system allows, while the OAM value differs by only a single quantum, i.e., $\Delta\ell=1$. As one of the rings, ring 2, necessarily has to have a large radius $r_{2}$ leading to a large difference in longitudinal wave vectors $\Delta k_{1,2}$, the interfering light field and, thus, the rotating structure will be strongly confined to a small transverse region around the optical axis. The slow-rotating part, on the other hand, results from Bessel components 1 and 3, so the radii of the two rings should be very similar to obtain a small difference in wave vectors $\Delta k_{1,3}$. However, the difference should also be large enough that the resulting rotation frequency allows us to discriminate between the repeating periods of the fast-oscillating signal. If these two rings are chosen to be similar in radius but much smaller than ring 2, the rotating light field in the Fourier plane of the lens, i.e., the slowly rotating structure, will cover not only the area in close proximity of the optical axis but also the outer region. Note that examples of the ring-shaped modulation patterns and the resulting propagation dynamics of the spiraling beams can be found in Figures 3 and 5 in later sections. This difference in the radial intensity compared to the fast-rotating structure enables us to discriminate the two differently varying patterns by observing the intensity in different radial regions. In the simplest case, these regions might be defined by a single transverse location where the intensity is evaluated. However, higher experimental accuracy can be obtained using two quadrant detectors, one for each rotating structure, which evaluate the differences between opposing quadrants to increase the signal-to-noise ratio. As such detectors can work with rise times of tens of nanoseconds, the proposed method might also find applications in high-speed longitudinal position measurements. Another important aspect is the overall distance over which the spiraling intensity can be observed. Here, the ultimate limit is given by the width of the ring in the modulation plane that generates the Bessel beam. The narrower the ring, the longer the spiraling beam survives, thus allowing measurements over longer distances. Conversely, the wider the ring, the shorter the self-accelerating beam persists in the focal region, restricting measurements to a very short distance. Obviously, in applications the ring width is determined by the resolution of the modulation device, which is in our case the SLM (see below), as well as the minimum amount of light required to detect the rotating intensity patterns. Using high-resolution generation methods, self-accelerating beams over a distance of 70 mm have already been demonstrated vetter2019realization . We further note that, in principle, one can also tune the rotation frequency by adjusting the difference in the OAM values $\Delta\ell$ of the two constituent beams, as can be seen from Eq. (4). However, by doing so, one should keep in mind that large differences in OAM result in beams of more complex angular structures, which might also require a detection system that is able to resolve those structures. Moreover, if the two constituents differ by many OAM quanta, they also show a significant difference in their OAM-induced divergence padgett2015divergence .
As this difference also leads to a fast decrease in their spatial overlap, the region over which the interference and, thus, the rotation can be observed is reduced. Hence, improving the accuracy as well as the distance over which it can be measured is preferably done by tuning the rotation frequency through adjustments to the longitudinal wave vector difference $\Delta k$.

## III Results

### III.1 High-accuracy measurements

As a first task, we investigated the largest possible rotation frequency and, thus, the highest possible accuracy. Before testing the theory in the laboratory, we performed the so-called split-step propagation method poon2017engineering to simulate the entire setup. A sketch of the setup can be seen in Fig. 2. During the first set of simulations, the main aim was to verify the idea and find the fastest rotation of the angular modes that was still within the limitations of our experimental system. These limitations were mainly the aperture of our optics (1 inch); the pixel resolution of our modulating device, a phase-only SLM (Holoeye, Pluto-2.1-NIR-011 LCOS, 1920$\times$1080 pixels, 8 $\mu$m pixel size); and the pixels of the detection system, a camera (ZWO ASI120MM Mini, 1280$\times$960 pixels, 3.75 $\mu$m pixel size).

Figure 2: Sketch of the experimental setup used in simulation and experiment. A laser beam with a wavelength of 780 nm is enlarged to a wide radius using a telescope (lenses $f_{1}$ and $f_{2}$). For maximum efficiency, the beam’s polarization is controlled by a half-wave plate $\lambda/2$ such that it aligns with the orientation of the diffraction grating on the SLM, which modulates the diffracted light to have a shape with a central circle and ring(s) and imparts angular momentum to the ring(s). The beam is filtered through a 4f optical system (lenses $f_{3}$) that extracts the first diffraction order, thereby removing the unmodulated light. Beyond the image plane, the structured beam is focused by another lens $f_{4}$. The resulting spiraling structure is imaged by a camera on a translation stage (TS).

In order to achieve the best matching between simulation and experiment, we included in the simulation the spatial resolution of our modulation and detection planes using the values of our laboratory system. In particular, we set the pixel size of the simulation to 1.875 $\mu$m, which is half the size of the physical pixels of the camera used in the experiment. We started by simulating a beam that only rotates with a single frequency, i.e., a beam consisting of two Bessel components, as described by equation (1). To achieve the largest difference in their longitudinal wave vectors, we generated two Bessel beam components from two ring-shaped patterns of very different radii in the Fourier domain. The larger ring size was mainly limited by the screen size of the SLM as well as the optical apertures in our system. The ring we utilized had an inner radius of $r_{1}$=3.5 mm, with a ring width of 0.1 mm, and imprinted an OAM value of $\ell=1$ onto the beam. As the length over which the rotating intensity pattern exists is determined by the ring width, we chose the smallest possible width allowed by the resolution of the modulation device, i.e., the pixel size of the SLM. Utilizing an SLM with a pixel size of 8 $\mu$m, we chose a ring width of 100 $\mu$m in order to have around 11-13 pixels at any given angular position, which allowed an efficient generation using holographic methods.
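For reference, the linear free-space step at the core of such a split-step simulation can be sketched in a few lines using the angular-spectrum method; the grid size and propagation step below are illustrative choices, not the exact simulation settings:

```python
import numpy as np

# Sketch of the free-space propagation step underlying the simulation:
# transform the field to its angular spectrum, apply the propagation phase
# exp(i*kz*dz) with kz from Eq. (7), and transform back.

lam = 780e-9                    # wavelength (m)
k0 = 2 * np.pi / lam
N, dx = 1024, 1.875e-6          # grid points and transverse sampling (m)

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt(np.maximum(k0**2 - KX**2 - KY**2, 0.0))  # evanescent part clipped

def propagate(field, dz):
    """Propagate a complex transverse field by a distance dz."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# usage: intensity of a toy Gaussian input after 1 mm of propagation
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
I = np.abs(propagate(np.exp(-(X**2 + Y**2) / (1e-3) ** 2), 1e-3)) ** 2
```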
For the second, inner component we chose a circular area (a filled disc) of radius $r_{0}=0.4$ mm with a flat phase, i.e., an OAM value of $\ell=0$ (see Fig. 3a). The circular area was optimized to have a similar intensity as the ring, taking the Gaussian shape of our input light field, with a beam waist of 4.1 mm, into account. This optimization was done because equal amplitudes of the two components result in a rotating pattern with improved visibility vetter2015optimization ; ornigotti2018vector , which in our task improves the accuracy.

Figure 3: Simulation for high-accuracy measurements. a) The inset shows the modulation pattern used to generate the spiraling structure. Its brightness corresponds to the amplitude of the light, and the color depicts the phase of the modulation. For clarity only the modulation is shown, not the holographic pattern required in the experiment. The green lines depict the region utilized to emulate a quadrant detector. b) The simulated angular intensity changes over propagation distance. Four exemplary intensity patterns are shown as insets to visualize the spiraling behaviour. Comparing the intensities in the four quadrants around the optical axis, shown in a), enables the determination of the longitudinal position. The intensity difference between quadrants 1,2 and 3,4 (2,3 and 1,4) leads to a sinusoidal curve shown in black (red) with a period of 311.2 $\mu$m. Using both fast-varying curves, the longitudinal position can be unambiguously determined within one half period.

Finally, the ring-shaped intensity patterns were transformed into a spiraling beam through an optical Fourier transform using a lens of 50 mm focal length. The focal length was mainly limited by the pixel size of the camera, as shorter focal lengths lead to beams of smaller extent in the focus. We note that a magnification system might be used to circumvent this constraint if shorter-focal-length lenses are required. In order to determine the propagation distance from the change in angular intensity, we simulated the intensity in a 300$\times$300-pixel region centered on the optical axis during propagation. We then registered the intensity within a circular region of radius 3.75 $\mu$m around the optical axis. To emulate a quadrant detector we summed up the registered intensity of each of the four quadrants and evaluated the difference between the intensities of quadrants 1,2 and 3,4, i.e., the upper and lower halves (see Fig. 3a). We obtained a sinusoidal curve with a period of 311.2 $\mu$m, as shown in Fig. 3b, which matches the value of 311 $\mu$m expected from theory. In addition, if one evaluates the difference in intensity between quadrants 2,3 and 1,4, i.e., the left and right halves, it is possible to shift the steepest slope of the sinusoidal curve by a quarter of a period (an effective phase shift of $\pi/2$). When using both signals, high longitudinal accuracy can be maintained even at positions where the shallow slope of one signal alone would cause a significant decrease in accuracy (see Fig. 3b). After having verified the method in simulations, we turned to the experimental implementation to determine the actual accuracy limits of our system due to experimental imperfections and errors. In the experiment, sketched in Fig. 2, we used a fiber-coupled single-frequency Toptica laser (DLpro) at a wavelength of 780 nm. We enlarged the laser beam using a telescope system to a beam waist radius of approximately 4.1 mm, such that it illuminated the whole screen of the SLM.
Via reflection off the SLM screen, we modulated the beam to have the multiple-ring-shaped intensity structure with the dimensions described above. As the SLM is a liquid-crystal phase-only modulation device zhang2014LCOS , we used holographic modulation techniques to perform a complex amplitude modulation. This was done by displaying the diffractive holographic pattern only in the ring-shaped regions where we wanted to obtain a light field rosales2017shape . By filtering out only the first diffraction order using an aperture in a Fourier plane of a 4f-system, we not only imprinted the required phase but also carved out the required intensity structure. At the image plane, we obtained the required ring-shaped amplitude that leads to a spiraling beam after a Fourier transformation, as described earlier. Analogous to the simulations, we performed this Fourier transformation using another lens with a focal distance of 50 mm. To record the spiraling structure, we placed the camera on a high-accuracy motorized translation stage, which was scanned over 1 mm in steps of 1 $\mu$m. The recording and translation of the camera were automated by interfacing the camera and translation stage with LabVIEW. At each position, we recorded 50 frames, from which we obtained the average and standard deviation of the angular intensity along the optical axis. Analogous to the simulation, we emulated a quadrant detector by registering the intensity difference between different quadrants illuminated by the beam. As the beam waist in the focal plane is very small, we only used 4 pixels of size 3.75 $\mu$m, each corresponding to one quadrant.

Figure 4: Experimental high-accuracy distance measurements. a) The scatter plot of intensity differences between quadrants 1,2 and 3,4 (2,3 and 1,4) is shown in black (red). The intensities are obtained by averaging 50 images for each position. The solid lines show the corresponding sinusoidal fits to the measurements. b) The displacement accuracy analysis is obtained from the fitted sine functions and error propagation of the standard deviation of the intensity fluctuations at each position. We find a best accuracy of 1-2 $\mu$m at the points of steepest slope. To maintain the minimum error, one must hop between the steep slopes of the red and black curves, which yields an accuracy of 2.2$\pm$0.9 $\mu$m over the full range (larger symbols).

The resulting variations in intensity when comparing quadrants 1,2 and 3,4 are shown in Fig. 4a, demonstrating a period of 332.6$\pm$0.3 $\mu$m. This result is in reasonable agreement with the period expected from theory and simulation, with the remaining discrepancy attributed to experimental imperfections such as finite resolution and misalignment. Using the standard deviation of the intensity at each position as the experimentally determined error and taking the known sinusoidal curve into account, we determined the longitudinal accuracy, or displacement accuracy, through error propagation. Hence, we define the displacement accuracy as the minimal displacement for which the errors of two data points do not overlap, i.e., the minimal displacement at which two data points can be discriminated at the 1-$\sigma$ confidence level. We found that at the steep slope of the curve, two different longitudinal positions that are 1 to 2 $\mu$m apart can still be resolved with one-standard-deviation significance (see Fig. 4b). As expected, the accuracy decreases dramatically around the extremal regions of the sinusoidal curve.
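The analysis chain just described — quadrant signals recorded along the scan, a sinusoidal fit, and error propagation of the intensity noise — can be sketched as follows. The frame shape, noise level and fit parameters are illustrative stand-ins, not the measured values:

```python
import numpy as np

# Sketch of the quadrant emulation and the error-propagation step: the
# normalised half-differences yield the two quadrature signals, and a fitted
# sinusoid I(z) = amp*sin(2*pi*z/period + phase) maps the intensity noise
# sigma_I onto a displacement accuracy sigma_z = sigma_I / |dI/dz|.
# (Normalising by the total intensity is our choice for this sketch.)

def quadrant_signals(frame):
    """Half-difference signals from a 2D frame centred on the optical axis."""
    cy, cx = frame.shape[0] // 2, frame.shape[1] // 2
    q1, q2 = frame[:cy, cx:].sum(), frame[:cy, :cx].sum()  # upper right, left
    q3, q4 = frame[cy:, :cx].sum(), frame[cy:, cx:].sum()  # lower left, right
    total = q1 + q2 + q3 + q4
    return (q1 + q2 - q3 - q4) / total, (q2 + q3 - q1 - q4) / total

def displacement_accuracy(z, amp, period, phase, sigma_I):
    """1-sigma accuracy via error propagation for the fitted sinusoid."""
    dIdz = amp * (2 * np.pi / period) * np.cos(2 * np.pi * z / period + phase)
    return sigma_I / np.abs(dIdz)

# usage: the accuracy diverges at the extrema and is best on the steep slopes
z = np.linspace(0, 1e-3, 1001)
acc = displacement_accuracy(z, amp=1.0, period=311.2e-6, phase=0.0, sigma_I=0.02)
print(f"best accuracy: {acc.min() * 1e6:.1f} um")
```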
However, as mentioned earlier, evaluating the difference in intensity between the left and right halves (quadrants 2,3 and 1,4) results in a periodic signal that is shifted by a quarter of a period relative to the signal of the upper and lower halves. We can therefore always refer to a fast-varying signal regardless of position, so the strong reduction of accuracy due to slow intensity variations at the extremal points of the sinusoidal curve is circumvented. When switching between the two signals such that the quarter period with the steepest slope is always used for a given longitudinal region, we obtain an average accuracy of 2.2$\pm$0.9 $\mu$m over the whole scanning range.

### III.2 Long-range measurements

In order to overcome the ambiguity between multiple fast-varying periods, we then studied a beam that spirals with two components simultaneously: a slow one and a fast one. The former can be used to determine the coarse position, while the latter can be used to obtain the longitudinal distance with high accuracy. Again, we first investigated the method in simulation before implementing the scheme in the laboratory.

Figure 5: Simulation for distance measurements over a longer range. a) An intensity structure spiraling simultaneously with two frequencies is generated using the modulation pattern shown in the inset. As in Fig. 3, only the complex modulation is shown; the brightness depicts the amplitude and the color depicts the phase. The simulated beam close to the focal region shows the small off-axis intensity, which spirals fast as before, as well as larger regions of increased intensity, which simultaneously spiral more slowly around the optical axis. The regions used to emulate the quadrant detectors are depicted with green lines. b) The angular intensity variation found in the inner and outer regions around the optical axis over propagation distance. Comparing the intensity differences found in four quadrants in the inner region (1,2-3,4 in black and 2,3-1,4 in red) around the optical axis leads to a fast-varying pattern used for a local high-accuracy measurement, analogous to before. Comparing the intensity difference in the outer region (ring-quad, 1,2-3,4), however, leads to a slow-varying function (blue line) with a periodicity of 12.63 mm, which gives information about the global position such that the high-accuracy distance measurement can be extended over multiple periods, i.e., a longer region. Small oscillations in the slow-varying curve are due to an inevitable cross-talk from the strong fast-oscillating signal.

As discussed in the theory section, such a dual-frequency beam can be generated by adding an additional ring to the modulation pattern. In our realization, we used an additional ring that was slightly bigger than the inner circular area, with an inner radius of 0.6 mm and a width of 100 $\mu$m. The modulation pattern to generate such a beam can be found in Fig. 5a. As before, the ring width was limited by the pixel size of the modulating device used in the subsequent experiment. The additional ring results in a beam having a second, much smaller rotation frequency in the intensity, as described by equations (5) and (6). To prevent a third rotation frequency from appearing in the intensity pattern, we imprinted the additional ring with an OAM value of $\ell=1$, equal to that of the large ring, such that an angle-dependent pairwise interference appears only between the circular central area and each of the rings.
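The composition of this three-component modulation pattern — a flat-phase central disc plus two rings carrying $\ell=1$ — can be sketched as below. The dimensions follow the values quoted in the text; the grid sampling is an illustrative choice, and both the conversion of the complex pattern into a phase-only hologram and the amplitude balancing against the Gaussian illumination are separate steps not shown here:

```python
import numpy as np

# Sketch of the dual-frequency modulation pattern: central disc (l = 0),
# small ring and large ring (both l = 1), as complex amplitude/phase values
# in the Fourier (modulation) plane.

N, dx = 1080, 8e-6                     # SLM-like grid, 8 um pixels
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

def ring(r_in, width, ell):
    """Unit amplitude on the ring, helical phase exp(i*ell*phi)."""
    return ((R >= r_in) & (R < r_in + width)) * np.exp(1j * ell * PHI)

pattern = (
    (R < 0.4e-3).astype(complex)    # central disc, flat phase (l = 0)
    + ring(0.6e-3, 0.1e-3, 1)       # additional small ring, l = 1
    + ring(3.5e-3, 0.1e-3, 1)       # large ring, l = 1
)
```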
Again, the width of the additional ring was chosen such that all three beam components were similar in amplitude, thus obtaining a high-visibility structure vetter2015optimization . The rest of the simulation remained the same as before.

Figure 6: High-accuracy distance measurements over a long range. a) The connected-scatter plot of intensity differences between the inner quadrants (red and black) and the ring quadrant (blue) as shown in Fig. 5a. At every position, 25 images were recorded. b) Displacement accuracy analysis for the slow-varying curve (blue), again obtained using the fitted function from a), the standard deviation of the intensity fluctuations at each position, and error propagation. To distinguish the different steep slopes of the fast-oscillating signals, an accuracy of around 80 $\mu$m is required (dashed line), which we achieved over a region of about 2 mm (orange shaded region).

Because the slower-rotating components of the light field stem from the inner parts of the generation hologram, the resulting interference is distributed over a larger region around the optical axis (see Fig. 5a and insets in b). In other words, the angular intensity found in regions of larger radii is strongly determined by the slow-varying pattern. This spatial separation enables a simultaneous measurement of both variations: the fast-spiraling part close to the optical axis and the slow-spiraling part further away from the beam center. As before, we used the small circular region with a radius of 3.75 $\mu$m around the optical axis to determine the fast variation. The slow-varying signal was obtained by recording the intensity in a circular, ring-shaped region with an inner radius of 7.5 $\mu$m and a width of 22.5 $\mu$m. Upon propagation, the intensity follows both a fast- and a slow-varying sinusoidal curve when the differences between the quadrants are evaluated (see Fig. 5b). As expected, the fast oscillation period is again 311.2 $\mu$m, while the slow-varying signal has a periodicity of around 12.63 mm. While the fast-oscillating period again matches nicely with the theoretical value of 311 $\mu$m, the slow-oscillating signal shows a bigger discrepancy from the theoretically expected periodicity of 10.2 mm. We attribute the latter to the fact that we obtained only one full fringe, which also shows an additional modulation. The additional slow-varying signal now allows a discrimination between the different fast-oscillating periods and, thus, a global determination of the longitudinal position. In other words, after first gauging the slope of the slow-varying curve against a coarse distance measure, the fast-oscillating signal can be used to determine the distance with high accuracy. However, a closer inspection of the slow-varying curve shows that it is also slightly modulated by the fast-oscillating signal, as the two differently spiraling fields are not completely separable. While this leads to an increase of the slope in some regions, it also flattens the curve whenever the fast-varying modulation counteracts the change of the slow-oscillating curve. Obviously, in these regions the accuracy of the slow-varying signal will be reduced.
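If the two signals were ideally separable, the coarse/fine decoding would reduce to a simple fringe-counting step, as in the sketch below; in practice each phase would be recovered from its pair of $\pi/2$-shifted quadrant signals, e.g. via arctan2. The periods are the simulated values from above; everything else is illustrative, and the cross-talk just discussed is neglected:

```python
import numpy as np

# Sketch of the coarse/fine position decoding enabled by the dual-frequency
# structure: the slow signal selects the fast fringe index, the fast signal
# refines the position within that fringe. Ideal, separable signals assumed.

L_fast, L_slow = 311.2e-6, 12.63e-3   # fringe periods (m), simulated values

def decode(fast_phase, slow_phase):
    """Wrapped phases (rad) of the fast and slow signals -> absolute position."""
    z_coarse = slow_phase / (2 * np.pi) * L_slow
    z_fine = fast_phase / (2 * np.pi) * L_fast
    n = np.round((z_coarse - z_fine) / L_fast)   # fringe index from coarse signal
    return n * L_fast + z_fine

# usage: a position of 1.0 mm is recovered from the two wrapped phases
z_true = 1.0e-3
fast = 2 * np.pi * (z_true % L_fast) / L_fast
slow = 2 * np.pi * (z_true % L_slow) / L_slow
print(f"decoded position: {decode(fast, slow) * 1e3:.4f} mm")
```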
Thus, an important experimental question is to determine whether the detrimental effect of this additional modulation is small enough to allow a discrimination between the different fringes using our quadrant measurement technique: in other words, over what range is the displacement accuracy of the slow-varying signal less than $1/4$ of the period of the fast-varying signal? Apart from changing the modulation pattern at the SLM, the rest of the experimental setup remained the same as before. However, this time we scanned over a range of 10 mm with a step size of 10 $\mu$m and recorded 25 image frames of the spiraling structure at each position. In the data analysis, we again used the four pixels around the optical axis (2$\times$2 pixel array) as a quadrant detector to obtain the fast-oscillating signal. Additionally, to obtain the slow-varying signal, we measured the intensity differences between the upper and lower quadrants (1,2 and 3,4) in a disk-shaped region with a radius of 30 $\mu$m around the optical axis, excluding the four inner pixels used to determine the fast-oscillating signal. As can be seen in Fig. 6a, we find a slow-varying curve with a periodicity of 11.29$\pm$0.03 mm and fast-oscillating fringes with periods of 312.9$\pm$0.4 $\mu$m, as expected. We also obtain, analogous to the simulations, the additional high-frequency modulation of the slow-varying signal. To evaluate the displacement accuracy, we first determine the function that describes the slow-varying curve, for which we use a sum of two sinusoidal functions whose amplitudes and periodicities we obtain from fits. Using this function, the experimentally obtained errors given by the standard deviation of the measured intensities, and error propagation, we find that we obtain the required accuracy of better than 80 $\mu$m ($\sim 1/4$ of the period of the fast curve) over a range of more than 7 fast-oscillating fringes, or around 2 mm. Most displacement accuracies for the slow-varying curve in this region are as low as 10-20 $\mu$m (see Fig. 6b). This accuracy is enough to distinguish the different fast-oscillating fringes and the corresponding regions having a steep slope, thereby demonstrating an accuracy of around 2 $\mu$m over the full 2 mm region, i.e., a measurement range three orders of magnitude larger than the accuracy. We note that in principle our method does not require an initial calibration, as the longitudinally varying angular intensities can be obtained theoretically from the beam dimensions as well as the focusing lens. However, in experiments the system might be initially characterized once, such that a camera (or quadrant detector) placed anywhere within the possible measurement range can be used to determine an absolute position without rescanning the entire translation region.

## IV Conclusion

We have demonstrated that radially self-accelerating beams can be used to determine the distance with respect to a reference with an accuracy of 2 $\mu$m over a range spanning three orders of magnitude. The main benefit of our technique is its simplicity, as only a single beam having the appropriate structure and two quadrant detectors are required. Apart from this strong benefit, there is also an important precaution worth mentioning. The technique requires a very accurate alignment of the beam’s optical axis with the center of the detector such that the recorded intensity does not move transversely while the detector is translated. Even a single pixel of transverse displacement can cause large discontinuous jumps in the measured intensity differences.
However, once alignment has been ensured, the measurement is stable and only requires minimal post-processing (after the system has been gauged), such that very high read-out speeds on the order of nanoseconds might be feasible. The obtained accuracy of the distance measurements can be further improved by using stronger focusing optics and custom-fabricated optics for beam generation, as well as an additional imaging system to lift the limitation of the finite resolution of the detection system. In addition, it might be interesting to consider more complex spiraling structures, such as an accelerated rotation schulze2015accelerated , which should further increase the accuracy over small regions at the cost of the accuracy elsewhere. Finally, we hope to stimulate further research into applications that benefit from the propagation-dependent intensity variations of radially self-accelerating beams.

## Acknowledgments

SP, SP, MO, and RF acknowledge the support from the Academy of Finland through the Competitive Funding to Strengthen University Research Profiles (decision 301820) and the Photonics Research and Innovation Flagship (PREIN - decision 320165). RF also acknowledges support from the Academy of Finland through the Academy Research Fellowship (decision 332399).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

* (1) H. Rubinsztein-Dunlop, A. Forbes, M. V. Berry, M. R. Dennis, D. L. Andrews, M. Mansuripur, C. Denz, C. Alpmann, P. Banzer, T. Bauer _et al._ , Journal of Optics 19, 013001 (2016).
* (2) M. J. Padgett, Optics Express 25, 11265 (2017).
* (3) L. Allen, M. W. Beijersbergen, R. Spreeuw, and J. Woerdman, Physical Review A 45, 8185 (1992).
* (4) M. Beijersbergen, R. Coerwinkel, M. Kristensen, and J. Woerdman, Optics Communications 112, 321 (1994).
* (5) N. Heckenberg, R. McDuff, C. Smith, and A. White, Optics Letters 17, 221 (1992).
* (6) A. V. Carpentier, H. Michinel, J. R. Salgueiro, and D. Olivieri, American Journal of Physics 76, 916 (2008).
* (7) A. Forbes, A. Dudley, and M. McLaren, Advances in Optics and Photonics 8, 200 (2016).
* (8) M. W. Beijersbergen, L. Allen, H. Van der Veen, and J. Woerdman, Optics Communications 96, 123 (1993).
* (9) L. Marrucci, C. Manzo, and D. Paparo, Physical Review Letters 96, 163905 (2006).
* (10) H. Larocque, J. Gagnon-Bischoff, F. Bouchard, R. Fickler, J. Upham, R. W. Boyd, and E. Karimi, Journal of Optics 18, 124002 (2016).
* (11) R. C. Devlin, A. Ambrosio, N. A. Rubin, J. B. Mueller, and F. Capasso, Science 358, 896 (2017).
* (12) A. Forbes, Laser & Photonics Reviews 13, 1900140 (2019).
* (13) F. Gori, G. Guattari, and C. Padovani, Optics Communications 64, 491 (1987).
* (14) M. Zahid and M. Zubairy, Optics Communications 70, 361 (1989).
* (15) C. Vetter, R. Steinkopf, K. Bergner, M. Ornigotti, S. Nolte, H. Gross, and A. Szameit, Laser & Photonics Reviews 13, 1900103 (2019).
* (16) X. Chu, The European Physical Journal D 66, 259 (2012).
* (17) D. McGloin and K. Dholakia, Contemporary Physics 46, 15 (2005).
* (18) S. Chávez-Cerda, G. McDonald, and G. New, Optics Communications 123, 225 (1996).
* (19) Y. Y. Schechner, R. Piestun, and J. Shamir, Physical Review E 54, R50 (1996).
* (20) E. Abramochkin, N. Losevsky, and V. Volostnikov, Optics Communications 141, 59 (1997).
* (21) P. Pääkkönen, J. Lautanen, M. Honkanen, M. Kuittinen, J. Turunen, S. Khonina, V. Kotlyar, V. Soifer, and A. Friberg, Journal of Modern Optics 45, 2355 (1998).
* (22) J. Tervo and J. Turunen, Optics Express 9, 9 (2001).
* (23) V. Kotlyar, S. N. Khonina, R. Skidanov, and V. Soifer, Optics Communications 274, 8 (2007).
* (24) R. Vasilyeu, A. Dudley, N. Khilo, and A. Forbes, Optics Express 17, 23389 (2009).
* (25) C. Vetter, T. Eichelkraut, M. Ornigotti, and A. Szameit, Applied Physics Letters 107, 211104 (2015).
* (26) C. Schulze, F. S. Roux, A. Dudley, R. Rop, M. Duparré, and A. Forbes, Physical Review A 91, 043821 (2015).
* (27) D. M. Greenberger, American Journal of Physics 48, 256 (1980).
* (28) C. Vetter, T. Eichelkraut, M. Ornigotti, and A. Szameit, Physical Review Letters 113, 183901 (2014).
* (29) C. Vetter, A. Dudley, A. Szameit, and A. Forbes, Optics Express 25, 20530 (2017).
* (30) J. Webster, C. Rosales-Guzmán, and A. Forbes, Optics Letters 42, 675 (2017).
* (31) S. Jia, J. C. Vaughan, and X. Zhuang, Nature Photonics 8, 302 (2014).
* (32) H. He, C. Kong, X.-J. Tan, K. Y. Chan, Y.-X. Ren, K. K. Tsia, and K. K. Wong, Optics Letters 44, 5238 (2019).
* (33) R. A. Jarvis, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-5, 505 (1983).
* (34) T. Kubota, M. Nara, and T. Yoshino, Optics Letters 12, 310 (1987).
* (35) S. Berg-Johansen, M. Neugebauer, A. Aiello, G. Leuchs, P. Banzer, and C. Marquardt, arXiv preprint arXiv:2010.16387 (2020).
* (36) M. Ornigotti and A. Szameit, Journal of Optics 20, 125601 (2018).
* (37) R. Rop, A. Dudley, C. López-Mariscal, and A. Forbes, Journal of Modern Optics 59, 259 (2012).
* (38) C. Rosales-Guzmán and A. Forbes, _How to shape light with spatial light modulators_ (SPIE Press, 2017).
* (39) M. J. Padgett, F. M. Miatto, M. P. Lavery, A. Zeilinger, and R. W. Boyd, New Journal of Physics 17, 023011 (2015).
* (40) T.-C. Poon and T. Kim, _Engineering Optics with MATLAB_ (World Scientific Publishing Company, 2017).
* (41) Z. Zhang, Z. You, and D. Chu, Light: Science & Applications 3, e213 (2014).
# Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021

Carole H. Sudre (corresponding author: <EMAIL_ADDRESS>), Kimberlin Van Wijnen, Florian Dubost, Hieab Adams, David Atkinson, Frederik Barkhof, Mahlet A. Birhanu, Esther E. Bron, Robin Camarasa, Nish Chaturvedi, Yuan Chen, Zihao Chen, Shuai Chen, Qi Dou, Tavia Evans, Ivan Ezhov, Haojun Gao, Marta Girones Sanguesa, Juan Domingo Gispert, Beatriz Gomez Anson, Alun D. Hughes, M. Arfan Ikram, Silvia Ingala, H. Rolf Jaeger, Florian Kofler, Hugo J. Kuijf, Denis Kutnar, Minho Lee, Bo Li, Luigi Lorenzini, Bjoern Menze, Jose Luis Molinuevo, Yiwei Pan, Elodie Puybareau, Rafael Rehwald, Ruisheng Su, Pengcheng Shi, Lorna Smith, Therese Tillin, Guillaume Tochon, Hélène Urien, Bas H.M. van der Velden, Isabelle F. van der Velpen, Benedikt Wiestler, Frank J. Wolters, Pinar Yilmaz, Marius de Groot, Meike W. Vernooij, Marleen de Bruijne, for the ALFA study

Affiliations: MRC Unit for Lifelong Health and Ageing at UCL, University College London, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands; Centre for Medical Imaging, University College London, London, United Kingdom; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Radiology, University of Massachusetts Medical School, Worcester, USA; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, China; Department of Informatics, Technische Universitat Munchen, Munich, Germany; Department of Radiology, Zhejiang University, Hangzhou, China; Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; Barcelona$\beta$ Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Department of Radiology, Hospital San Pau i Santa Creu, Barcelona, Spain; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands; Institute of Neurology, University College London, London, United Kingdom; Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China; LRDE, EPITA, Paris, France; ISEP - Institut Supérieur d’Électronique de Paris, Issy-les-Moulineaux, France; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland; GlaxoSmithKline Research, Stevenage, UK; H. Lundbeck A/S, Copenhagen, Denmark; Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain

###### Abstract

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability.
Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds and 6 for Task 3 - Lacunes). Multi-cohort data was used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - Microbleeds, and results that are not yet practically useful for Task 3 - Lacunes. The challenge also highlighted performance inconsistency across cases, which may deter use at an individual level while still proving useful at a population level.

###### keywords: CSVD, brain, MRI, microbleeds, enlarged perivascular spaces, lacunes, automated, segmentation, detection, challenge

(The complete list of collaborators for the ALFA study can be found in the acknowledgments.)

## 1 Introduction

Cerebral small vessel disease (CSVD), the deterioration of the smallest brain vessels, encompasses a large variety of etiologies including arteriolosclerosis [Alistair, 2002] and amyloid pathology [Kester et al., 2014] and may be further driven by genetic predisposition [Haffner et al., 2016, Giau et al., 2019]. It results in observable damage or changes to the brain. The most commonly observed MRI markers of CSVD include white matter hyperintensities (WMH), cerebral microbleeds, lacunes of presumed vascular origin, and enlarged perivascular spaces [Wardlaw et al., 2013]. CSVD-related damage has been associated with an increased risk of stroke and dementia, and with the acceleration of cognitive decline [Østergaard et al., 2016, Rensma et al., 2018]. These markers are also associated with one another [Zhang et al., 2014, Zhu et al., 2010, Yates et al., 2014]. WMH are the most visible marker of CSVD and have naturally taken centre stage in clinical research on CSVD. In addition, the development of WMH segmentation solutions has been particularly stimulated by impactful research showing the clinical importance of lesion volumetry [Van Straaten et al., 2006]. While the automated quantification of white matter hyperintensities has been heavily studied for the last decade, with very successful solutions [Sudre et al., 2015, Guerrero et al., 2018, Atlason et al., 2019, De Boer et al., 2009], automated detection and segmentation of the small, focal markers of CSVD have been investigated less frequently. However, as the interest of the clinical community in these markers grows, understanding their relevance in clinical research requires them to be adequately detected and quantified.
While these markers are currently typically assessed visually through binary dichotomization (presence vs absence) [Yates et al., 2014], counts [Adams et al., 2015], or visual scales [Potter, 2011], such visual assessment is time-consuming and subject to large inter- and intra-rater variability [Sudre et al., 2019]. Automated methods are therefore required to make quantification robust, reliable, and feasible in the context of large data sets. So far, the development of automated methods has been impeded by the methodological issues related to the very small size of these markers and the resulting extreme imbalance in the data, as well as the absence of a gold standard for annotation. Methodological developments towards automated solutions for the quantification of biomarkers have gained new momentum thanks to the annotated datasets made available through technical challenges on segmentation and detection in brain MRI, notably the popular BRATS challenge [Menze et al., 2014], ISLes [Maier et al., 2017], MRBrainS [Mendrik et al., 2015], the 2017 MICCAI WMH challenge [Kuijf et al., 2019] and the more recent ADAM challenge [Timmins et al., 2021] on intracranial aneurysms. Such challenges give insight into state-of-the-art methodology and remaining technical problems for a specific question. The VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge was organized with the aim of promoting the development of new solutions for the automated detection and segmentation of these sparse and small structural brain changes (enlarged perivascular spaces (Task 1), cerebral microbleeds (Task 2) and lacunes (Task 3)) while leveraging weak and noisy labels from manual annotation or visual assessment. Beyond a simple benchmarking exercise assessing the state of the solution space, this challenge was further intended to gain insight into current pitfalls and challenges, raise awareness, and contribute to building a community dedicated to developing solutions that facilitate the quantification of CSVD markers in brain MRI scans. This paper describes the design, results, and lessons learnt through the challenge according to the reporting guidelines detailed in [Maier-Hein et al., 2020].

## 2 Methods

### 2.1 Mission of the challenge

The Where is VALDO? challenge was organized to assess three tasks, each of them focusing on one focal marker of CSVD - Task 1 on enlarged perivascular spaces (EPVS), Task 2 on cerebral microbleeds and Task 3 on lacunes. Figure 1 illustrates each of these markers as annotated in the challenge training set.

Figure 1: Annotated example of the three types of markers targeted in the challenge.

Currently, the lack of accurate and reproducible automated methods for all three markers prohibits the identification of clinically relevant characteristics at both individual and population levels. Therefore, for each of the stated markers both detection and segmentation performance need to be assessed. Ultimately, the improved quantification of these small focal markers of CSVD may be used to better understand their relevance and derive biomarkers for diagnosis or prognosis in the context of healthy ageing and dementia, and as surrogate end points in clinical trials. In proposing tasks particularly subject to high data imbalance and limited and/or noisy annotations, this challenge further aimed to catalyse methodological research to address these common issues in the medical image analysis community.
Ultimately, the proposed methods should be applicable to different settings involving ageing populations such as population cohorts, clinical trials or memory clinics. The challenge dataset, however, consisted exclusively of population-based cohorts - two or three depending on the task - with differences in MRI acquisition protocol, image resolution and scanner characteristics across datasets. No additional information beyond the images was provided. Each of the datasets was enriched for lesion burden through stratified sampling of the skewed population distributions. For each task, a similar approach to assessment was adopted to ensure consistency across tasks and address both segmentation and detection aspects, although some may currently be considered more important in one task than another, with different paradigms used in clinical practice. For instance, the blooming effect observed in the presence of microbleeds is protocol-dependent, making the detection more relevant than the segmentation in that task [Buch et al., 2017].

### 2.2 Challenge organization

The Where is VALDO? challenge was run as a satellite event at MICCAI 2021 as a collaboration of University College London and Erasmus MC University Medical Center Rotterdam. Its three-task design was peer-reviewed prior to acceptance and made public at https://doi.org/10.5281/zenodo.4600654. Regarding prize eligibility, it was decided that organizers would not participate, and while members of the same institutions as the organizers were allowed to participate in the challenge, they would not be eligible for prizes. Prizes were given to each winner of individual tasks and the overall winner across all tasks. Results were publicly presented for all participating teams. All submitting teams were invited to propose two team members (per task) to participate as co-authors in the challenge overview paper. After publication of this overview paper of the challenge, the submission system will reopen to the community for anyone wanting to benchmark their methods against those previously submitted. Further information is available on the challenge website https://valdo.grand-challenge.org. The challenge was organized in 4 phases: 1) a training phase from the moment the annotated database was made downloadable (February 2021); 2&3) two optional validation steps on 5 new cases to provide individual (no public leaderboard) feedback on the performance (14th to 21st of June and 12th to 19th of July); and 4) the final evaluation stage on withheld cases (submission from 26th of July to 5th of August 2021). A grace period extending until the 10th of August was granted to all participants in case of technical difficulties. Participants had to provide a docker container for their fully automated method (one for each task) and were allowed to participate in any or all of the tasks. Use of additional training data was allowed under the condition that it would be made available at submission time. The methods did not have to be similar across all tasks. Details of the submission procedure are listed at https://valdo.grand-challenge.org/Submission/. Participating teams were also requested to provide a short technical note describing their solutions; these notes have been made available at https://openreview.net/group?id=MICCAI.org/2021/Challenge/VALDO. Figure 2 presents the timeline of the challenge.

Figure 2: Timeline of the challenge from inception in September 2019.

Submitted data were evaluated on the test set at a GPU facility at Erasmus MC.
In order to ensure that the proposed methods were running as expected, each docker container was run on one example of the training set and the result sent back to the participants for checking, allowing for submission of a new docker container if the output was not as expected. The evaluation code was made available prior to submission at https://github.com/WhereIsValdo/valdo-eval-2021. The participating teams were encouraged to make their source code publicly available, and all participants except one team agreed for their docker containers to be made public. These containers have been placed at https://hub.docker.com/r/whereisvaldo/challenge2021/tags. The challenge was sponsored by NVIDIA and Icometrix. Test data was available to CHS and KVW. The contributions of the authors listed in this manuscript can be found in the supplementary material.

### 2.3 Community survey

To better understand the interest within the community for such an initiative, we launched a survey in January 2021 targeting the community working in the field of automated detection of CSVD lesions. This survey was sent to a list of researchers having recently published automated methods for detection or segmentation of one of the three lesion types considered in the Where is VALDO? challenge, to the International Society of Vascular Behavioural and Cognitive Disorders (VasCog, https://www.vas-cog.com), and to the Medical Image Understanding and Analysis (MIUA) mailing list; the survey was also shared on social media by the challenge organizing team. Overall, 36 answers were recorded, with 25 individuals indicating they were very likely or likely to participate. Among the respondents, 39% indicated they were already actively working in the field of CSVD and a further 30% more generally in the neuroimaging field. Microbleed segmentation appeared to be the most popular task in the survey, with 15 respondents indicating they were highly likely to participate in this task against 10 for EPVS and 10 for lacunes. These answers helped shape the final challenge design, notably standardizing the evaluation of the different tasks and making the challenge overall more concise.

### 2.4 Challenge data sets

For each task, the training, validation, and test sets were drawn from the same cohorts, with a similar ratio between cohorts maintained across tasks and kept in the testing set.

#### 2.4.1 Datasets and image acquisition

Subsets of two population cohorts were used for all three tasks, and a subset of an additional cohort was further available for the microbleed detection/segmentation task, namely the SABRE and Rotterdam Scan Study (RSS) cohorts and the ALFA study, respectively. All cohorts were retrospective studies for which local ethical approval had already been obtained: from the National Research Ethics Service Committee, London-Fulham (14/LO/0108) for SABRE; under the Population Research Act from the Ministry of Health for RSS; and from the Independent Ethics Committee Parc de Salut Mar Barcelona, registered at Clinicaltrials.gov (NCT01835717), for ALFA. For all datasets, acquisition of the data was performed by a trained radiographer according to a predefined research protocol. The training data for the Where is VALDO? challenge was made available under a CC BY-NC-SA license.

##### SABRE

The Southall and Brent Revisited (SABRE) cohort is a population cohort of individuals residing in the two named boroughs of west London (UK) [Tillin et al., 2013]. This tri-ethnic cohort was initially recruited in 1988 with the purpose of investigating metabolic and cardiovascular diseases across ethnicities.
For their third clinical visit (2014-2018), life partners were also invited to take part, and study participants underwent a brain MRI session on a Philips 3T scanner. The mean age in this cohort at the time of acquisition was 72 years, ranging from 36 to 92.

##### RSS

The Rotterdam Scan Study (RSS) [Ikram et al., 2015] is part of the larger Rotterdam Study (RS) [Ikram et al., 2020], a population-based study that aims to investigate chronic illness in the elderly. Started in 1995, the Rotterdam Scan Study initially concerned a selection of the RS, but since 2005 brain MRI has been part of the core protocol of the study. Individuals aged 45 and over without dementia are eligible for MRI and are followed up every 3-4 years. Since 2005, scanning has been performed on a 1.5T GE MRI scanner dedicated to the study.

##### ALFA

The ALFA (Alzheimer’s and Families) cohort is based on the ALFA registry that gathers details of relatives (generally offspring) of patients with Alzheimer’s Disease, making up a cohort naturally enriched for genetic predisposition to AD. As described in the related protocol paper [Molinuevo et al., 2016], the ALFA cohort is composed of cognitively normal participants aged 45-74. Brain MRI sequences were acquired on a GE Discovery 3T scanner. Table 1 summarizes the acquisition parameters for the different sequences across the studied cohorts.

Cohort | Sequence | Type | TR | TE | TI | FA | Resolution (mm)
---|---|---|---|---|---|---|---
SABRE | T1w | Inversion prepared gradient echo | 6.9 | 3.1 | / | / | 1.09 x 1.09 x 1.0
SABRE | T2w | 3D sagittal turbo spin echo | 2500 | 222 | 836 | 8 | 1.09 x 1.09 x 1.0
SABRE | FLAIR | | 4800 | 125 | 1650 | | 1.09 x 1.09 x 1.0
SABRE | T2* | Gradient echo | 1288 | 21 | / | 18 | 0.45 x 0.45 x 3.0
RSS | T1w | Gradient recalled echo | 13.8 | 2.8 | 400 | 20 | 0.49 x 0.49 x 0.8
RSS | T2w | Fast spin echo | 12300 | 17.3 | / | / | 0.49 x 0.49 x 0.8
RSS | FLAIR | Fast spin echo | 8000 | 120 | 2000 | | 0.49 x 0.49 x 0.8
RSS | T2* | Gradient recalled echo | 45 | 31 | / | 13 | 0.49 x 0.49 x 0.8
ALFA | T1w | 3D | 8.0 | 3.7 | 450 | 8 | 1.0 x 1.0 x 1.0
ALFA | T2w | Fast spin echo | 5000 | 85 | / | 110 | 1.0 x 1.0 x 3.0
ALFA | T2* | Gradient recalled echo | 1300 | 23 | / | 15 | 1.0 x 1.0 x 3.0

Table 1: Acquisition details for the three cohorts. Acronyms: FA - flip angle; TE - echo time (ms); TI - inversion time (ms); TR - repetition time (ms).

#### 2.4.2 Training, validation and testing data

For Task 1 - EPVS and Task 3 - Lacunes, imaging data consists of T1-weighted, T2-weighted and FLAIR images, with the latter two modalities rigidly registered to the T1 image using NiftyReg [Modat et al., 2014]. For Task 2 - Microbleeds, imaging data is the combination of T2, T2* and T1-weighted images in T2* space. Table 2 presents the number of cases used for training and testing across the different tasks and the different cohorts. For each task, validation consisted of 5 cases from the RSS cohort. There was no overlap between the training, test or validation datasets.

Cohort | Task 1 - EPVS (Train) | Task 1 - EPVS (Test) | Task 2 - Microbleeds (Train) | Task 2 - Microbleeds (Test) | Task 3 - Lacunes (Train) | Task 3 - Lacunes (Test)
---|---|---|---|---|---|---
SABRE | 6 | 10 | 11 | 20 | 6 | 10
RSS | 34 (6/28) | 56 | 34 | 68 | 34 | 56
ALFA | / | / | 27 | 59 | / | /
Total | 40 | 66 | 72 | 147 | 40 | 66

Table 2: Number of cases in the train and test sets for each task and cohort origin.
For RSS in Task 1, the training count is split (6/28) between cases with full annotations and cases with only counts. The number of cases proposed for training was chosen based on annotation availability and the data policy for making a certain number of cases publicly available. For Task 1 - EPVS and Task 3 - Lacunes, the SABRE segmentation data was already available for a set of 16 cases with a high level of cerebrovascular damage. In comparison, for the RSS study, for which annotations were more widely available, data were selected to cover the variability in burden present in the study. The selected cases present a close to uniform distribution in burden, thereby limiting data skewness towards cases without any lesion. In all tasks, annotated cases were distributed across the training and testing sets to follow approximately similar burden distributions. A ratio of 6:10 between training and testing data was chosen across all cohorts and tasks.

#### 2.4.3 Annotation

Across the three cohorts, raters were all trained for their annotation task and had at least 3 years of professional experience in dealing with medical images. The segmentation was performed for all SABRE and ALFA cases using ITKSnap [Yushkevich et al., 2016]. For the RSS cases, a custom MeVisLab [Ritter et al., 2011] application was used. In all cases where two annotations were available, the average of the two annotations was used as the reference.

##### Task 1 - Enlarged Perivascular Spaces

For Task 1, the annotation strategy differed between the SABRE and RSS cohorts. For identifying EPVS, the STRIVE criteria [Wardlaw et al., 2013] for EPVS were used in the SABRE cohort, while in the RSS cohort, the UNIVRSE criteria [Adams et al., 2015] were used. These criteria are very similar, except for the fact that the UNIVRSE criteria only consider EPVS with a diameter between 1 and 3 mm, while the STRIVE criteria do not have a lower limit and consider any EPVS with a diameter up to 3 mm. In the SABRE cohort, EPVS over the whole brain image were annotated independently by two raters (CHS and LL), with a senior radiologist (BGA) confirming the segmentation of CHS. The three modalities were jointly used for the segmentation, which was assessed across the three axes. For this dataset the annotation was provided in either of two forms: over the full brain or on only 5 randomly selected slabs of 5 mm. A mask was provided per case indicating the slabs that were annotated. In the RSS cohort, EPVS were annotated with segmentations in limited axial slices for 6 cases of the training set and the full test set, while the remaining 28 cases of the training set were annotated with dots only by a team of trained annotators supervised by KVW, FD and MWV. EPVS were annotated in four brain regions: the mesencephalon, hippocampus, basal ganglia, and the centrum semi-ovale. The first two, smaller regions were annotated entirely. For the latter two regions, only one fixed slice was annotated. For the cases with EPVS segmentations, additional slices of the basal ganglia and of the white matter were annotated; the depth of these axial slices was randomly chosen per case. A mask indicating which parts of the brain had been annotated was computed using parcellation outputs for each case. For the training data made available to participants, the EPVS annotations were either presented just as counts (computed from the dots), per slice and per region, or as segmentations plus counts in the same areas. The masks indicating the annotated regions and slices per case were also provided.
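For the count-only cases, counts can be derived from the dot annotations by connected-component labelling within each region mask. The sketch below mirrors that description under the assumption of binary 3D arrays; it is not the organizers' actual annotation pipeline:

```python
import numpy as np
from scipy import ndimage

# Sketch: derive per-region EPVS counts from a dot-style annotation map by
# labelling 6-connected components and counting those inside each region mask.

def count_per_region(dots, region_masks):
    """dots: 3D binary array of dot annotations.
    region_masks: dict mapping region name -> 3D binary mask."""
    structure = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    labels, _ = ndimage.label(dots, structure=structure)
    return {
        name: len(np.unique(labels[mask.astype(bool) & (labels > 0)]))
        for name, mask in region_masks.items()
    }

# usage (with hypothetical masks):
# counts = count_per_region(dot_map, {"basal_ganglia": bg_mask, "cso": cso_mask})
```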
Figure 3 illustrates the types of annotation masks that were provided to the participants.

Figure 3: Example of annotations provided for Task 1 - EPVS with, left) for SABRE, slabs of 5 mm randomly selected or full segmentation over the image; middle) segmentation on 2 slices of the CSO, 2 slices of the basal ganglia, the hippocampi and the mesencephalon for 6 RSS cases; and right) counts of EPVS on 1 slice of the CSO, 1 slice of the basal ganglia, the hippocampi and the mesencephalon for 28 cases of RSS.

##### Task 2 - Microbleeds

Different raters annotated each of the cohorts but followed very similar protocols. The BOMBS criteria [Cordonnier et al., 2009] were applied for the SABRE (RR under the supervision of HRJ) and ALFA cohorts (consensus of SI and LL under the supervision of FB), as described in [Ingala et al., 2020]. A team of trained raters under the supervision of MWV applied the protocol described in [Vernooij et al., 2008] for RSS. Both identification protocols are in line with the STRIVE guidelines [Wardlaw et al., 2013], which indicate that microbleeds are areas of signal void, generally 2-5 mm in diameter but up to 10 mm.

##### Task 3 - Lacunes

Lacunes were identified using the STRIVE criteria [Wardlaw et al., 2013]. Cerebellar lacunes were excluded because of assumed differences in the underlying pathology in this brain region [Sigurdsson et al., 2022]. Any surrounding gliosis (the hyperintense rim visible on FLAIR sequences) was not included in the segmentation of the lacune. For the SABRE cohort, lacunes were identified at the same time as EPVS, simply being assigned another label in the segmentation, with the two raters (CHS, LL) performing the identification and segmentation independently. For the RSS cohort, lacunes were independently segmented for all cases by two raters, the pair of raters varying across cases. In RSS, all cases of the training, validation and test sets indicated by radiological reads as containing at least one lacune were consistently annotated by one rater (TE) using a custom MeVisLab [Ritter et al., 2011] application. The second set of annotations was performed using ITKSnap [Yushkevich et al., 2016]: PY annotated all cases of the training set, FW annotated the validation set as well as half of the test set, and the remaining half of the test set was annotated by IFV.

#### 2.4.4 Sources of annotation errors

In all tasks, possible errors in the annotations pertain to multiple distinct sources: the identification of a target element, either because these elements are very small and easy to miss or because they are difficult to distinguish from similarly appearing structures (mimics); the decision on the boundary of an object, notably more complex in a coarser resolution plane; and the use of the segmentation software (too large a brush, not checking all orientations for consistency, or not adequately using the zoom). In the case of EPVS, deciding whether a marker was “large enough” was also subjective, possibly leading to different detection levels.

#### 2.4.5 Preprocessing

For all tasks, the preprocessing consisted of a rigid alignment of the images as indicated in Section 2.4.2. A defacing mask derived from the T1-weighted image was applied to all registered modalities. While such a step would not be required in practice, it was mandated by the data sharing policies around the public release of the data.
The defacing mask was obtained as the inverse of a dilated version of the brain mask obtained from HD-BET [Isensee et al., 2019]. All RSS scans were corrected for intensity inhomogeneity with the default parameters of the MINC N3 package [Sled et al., 1998].

### 2.5 Assessment method

All three tasks were evaluated using similar metrics in order to assess both the detection and the segmentation performance of the proposed solutions. A combination of relative error (F1 score and Mean Dice score) and absolute error (absolute element difference (AED) and absolute volume difference (AVD)) metrics was chosen, since they provide complementary information. The F1 score and the AED on the number of detected lesions were chosen as detection metrics, while the Mean Dice score over the appropriately identified elements and the AVD were used for the evaluation of segmentation. Table 3 summarizes the purpose, formula and properties of the metrics used in the challenge across all tasks and calculated for each case, where $c$ refers to 6-neighbourhood connected components, TP to true positives, FP to false positives, FN to false negatives, Ref to the reference annotation and Seg to the predicted segmentation.

| Metric | Target | Formula | Range | Best |
|---|---|---|---|---|
| F1 Score | Detection | $\frac{2TP_{c}}{2TP_{c}+FP_{c}+FN_{c}}\times 100$ | 0 - 100 | 100 |
| AED | Detection | $|\#_{c}Ref-\#_{c}Seg|$ | 0 - $\infty$ | 0 |
| Mean Dice | Segmentation | $\frac{100}{\#TP_{c}}\sum_{t\in TP_{c}}\frac{2\sum(Ref_{t}\cdot Seg_{t})}{\sum Ref_{t}+\sum Seg_{t}}$ | 0 - 100 | 100 |
| AVD | Segmentation | $|Ref-Seg|$ | 0 - $\infty$ | 0 |

Table 3: Description of the detection and segmentation metrics used across all tasks for the evaluation.

One essential aspect of the evaluation, for the derivation of both the F1 and the Mean Dice score, was the definition of true positive elements. To determine which elements were true positives, for all three tasks, connected components with a 6-neighbourhood were established for both annotation and prediction, after thresholding the predicted probability map at 0.5. Each annotation element was matched to at most one element from the prediction. For Task 1 - EPVS, a matchable element had to have an Intersection over Union (IoU) of more than 10%. For Task 2 - Microbleeds and Task 3 - Lacunes, matching was possible when the centre of mass of the prediction element was less than 5 mm away from the centre of mass of the ground truth segmentation element. When multiple elements were found to be matchable, the one with the best association value (IoU or centre of mass distance) was attributed to the annotated label. For empty cases, the relative metrics were inapplicable, so only the absolute error metrics (number of elements and volume) were computed. In the event of an algorithmic failure for a specific case, worst metric values were attributed: for bounded metrics (F1 and Mean Dice score) a value of 0 was given, and for non-bounded error metrics (absolute element and absolute volume difference) an error of 100 000 was assigned as the worst possible error.
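To make this per-case evaluation concrete, the following is a minimal sketch of the computation described above (6-neighbourhood connected components, prediction thresholded at 0.5, IoU-based matching as used for Task 1 - EPVS). It is an illustration under these assumptions, not the challenge’s official evaluation code, and the AVD is computed here in voxels rather than mm3.

```python
import numpy as np
from scipy import ndimage

# 6-neighbourhood connectivity in 3D, as used for all three tasks.
STRUCT_6 = ndimage.generate_binary_structure(3, 1)

def per_case_metrics(ref, prob, iou_thresh=0.10):
    """ref: binary reference volume; prob: predicted probability volume."""
    seg = prob > 0.5                                  # threshold at 0.5
    ref_lab, n_ref = ndimage.label(ref > 0, structure=STRUCT_6)
    seg_lab, n_seg = ndimage.label(seg, structure=STRUCT_6)

    matched, dices = set(), []
    for r in range(1, n_ref + 1):                     # each reference element
        r_mask = ref_lab == r
        best_iou, best_s = 0.0, None
        for s in np.unique(seg_lab[r_mask]):          # overlapping predictions
            if s == 0 or s in matched:
                continue
            s_mask = seg_lab == s
            inter = np.logical_and(r_mask, s_mask).sum()
            union = np.logical_or(r_mask, s_mask).sum()
            if inter / union > max(iou_thresh, best_iou):  # IoU > 10%, keep best
                best_iou, best_s = inter / union, s
        if best_s is not None:                        # matched at most once
            matched.add(best_s)
            s_mask = seg_lab == best_s
            dices.append(100.0 * 2 * np.logical_and(r_mask, s_mask).sum()
                         / (r_mask.sum() + s_mask.sum()))

    tp = len(dices)
    fp, fn = n_seg - tp, n_ref - tp
    return {
        "F1": 100.0 * 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else np.nan,
        "AED": abs(n_ref - n_seg),
        "MeanDice": float(np.mean(dices)) if dices else np.nan,  # over TPs only
        "AVD": abs(int((ref > 0).sum()) - int(seg.sum())),       # in voxels here
    }
```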
| | Detection Error, RefUnc $\leq$ 0.5 | Detection Error, RefUnc $>$ 0.5 | True Positive, RefUnc $\leq$ 0.5 | True Positive, RefUnc $>$ 0.5 |
|---|---|---|---|---|
| PredUnc $\leq$ 0.5 | FC | FC | TC | FC |
| PredUnc $>$ 0.5 | TU | TU | FU | TC |

Table 4: Categorization for the calculation of the uncertainty measures; TU - Truly Uncertain; TC - Truly Certain; FU - Falsely Uncertain; FC - Falsely Certain

For Task 3 - Lacunes, two metrics related to the estimation of uncertainty were further included, one designed to capture detection uncertainty and the other segmentation uncertainty. In terms of uncertainty validity, elements are considered as either truly certain (TC), truly uncertain (TU), falsely certain (FC) or falsely uncertain (FU), as per Table 4. The detection uncertainty was calculated as $(TU+TC)/(TU+TC+FC+FU)$. The segmentation uncertainty was only assessed over true positive detected elements, measuring probabilistic uncertainty accuracy as

$$\frac{\sum_{TP}(1-Unc)+\sum_{FN+FP}Unc}{TP+FN+FP}.$$

All metrics were computed per image, and the distribution over all cases of the test set was used for the final ranking. For each task, the ranking of the methods was performed following the method described for the Medical Segmentation Decathlon challenge [Antonelli et al., 2021]. Pairwise comparisons were performed using the Mann-Whitney U-test for the Mean Dice (over cases with F1 $>$ 0) and the Wilcoxon paired test for the other metrics, due to their non-normal distributions. For each metric, the number of times a method was found significantly better (p-value $\leq$ 0.05) than another was used to rank the methods on that metric. The final rank was obtained as the average across the metric ranks (lower being better). The robustness of the ranking was further assessed using the distribution of Kendall’s tau correlation coefficients between the ranking over all cases and the rankings obtained for 1000 bootstrap samples, as described in [Wiesenfarth et al., 2021]. To identify the best overall team, the ranks were averaged across all common metrics of all tasks for the teams that provided a solution to all three tasks.
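The ranking scheme can be illustrated with the following sketch for a single metric; it is an assumed reconstruction, not the challenge’s exact implementation, and in particular the direction of each pairwise comparison is decided here from the medians, which is an assumption.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, rankdata

# Decathlon-style ranking sketch for one metric. `scores` maps each team to
# its per-case metric values, aligned across teams for the paired test.
def rank_one_metric(scores, higher_is_better=True, paired=True, alpha=0.05):
    teams = list(scores)
    wins = {t: 0 for t in teams}
    for a in teams:
        for b in teams:
            if a == b:
                continue
            x = np.asarray(scores[a], dtype=float)
            y = np.asarray(scores[b], dtype=float)
            if paired:            # Wilcoxon paired test (most metrics)
                _, p = wilcoxon(x, y)
            else:                 # Mann-Whitney U (Mean Dice, cases with F1 > 0)
                _, p = mannwhitneyu(x, y)
            if higher_is_better:  # direction taken from medians (assumption)
                a_better = np.median(x) > np.median(y)
            else:
                a_better = np.median(x) < np.median(y)
            if p <= alpha and a_better:
                wins[a] += 1
    # More significant wins -> better (lower) rank; ties share an average rank.
    return dict(zip(teams, rankdata([-wins[t] for t in teams])))

# The final rank per task is then the average of the per-metric ranks.
```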
### 2.6 Additional analyses

Further analyses were performed to inform on the following aspects: 1) clinical performance, 2) performance variability across datasets, 3) regional variability in performance (Task 1 - EPVS), 4) inter-rater variability (Task 3 - Lacunes and part of Task 1 - EPVS), and finally ensemble performance using either all methods (EnsembleAll) or the top 50% (EnsembleTop).

##### Clinical performance

For each task, the most clinically relevant metric was further defined and used to compare the different methods. For Task 1 - EPVS, to emphasize the notion of EPVS burden, the correlation between predicted and reference volumes across the population of test cases was used. For Task 2 - Microbleeds and Task 3 - Lacunes, where a binary statement of presence or absence is most clinically relevant, the balanced accuracy over cases, considered as a whole-image classification task, was chosen.

##### Cross-dataset performance

For each task, the performance of each method was additionally computed per dataset and then compared. The ranking was also computed per dataset to examine specific discrepancies between cohorts.

##### Regional performance

To assess whether the performance of the proposed methods differed depending on the region for Task 1 - EPVS, the evaluation was run for each region (centrum semi-ovale, basal ganglia, hippocampus and mesencephalon) separately. For each method, pairwise comparisons across regions were performed to assess whether a given method performed better on a given area. The overall ranking between methods was also computed per region.

##### Inter-rater variability

For Task 1 - EPVS and Task 3 - Lacunes, for which annotations by two raters were available, the evaluation was run considering alternately each rater as the reference. While the overall absolute differences (volume and number of identified components) between the two raters are independent of the reference chosen (rater 1 or rater 2), changing the reference will affect the F1 score and Mean Dice calculation due to differences in the definition of true positives.

##### Ensemble performance

Two ensemble solutions were created and evaluated: the average of all solutions (EnsembleAll) and the average of the predictions from the top 50% of methods in overall rank (EnsembleTop). EnsembleAll and EnsembleTop were compared to the individual methods for each task. Since only 4 teams participated in Task 1 - EPVS, EnsembleTop in this case consists of the two best-performing methods.

## 3 Results

### 3.1 Challenge submission and participating teams

Over the period of the challenge, the dataset was requested 353 times for download. Across the two validation periods, we received requests from 1 team at validation stage 1 and 4 teams at validation stage 2. The final submission of dockerized solutions, with documented descriptions, to be applied to the test sets comprised 4 teams for Task 1 - EPVS, 9 teams for Task 2 - Microbleeds and 6 teams for Task 3 - Lacunes. Only 2 teams participated in all 3 tasks. Table 5 summarizes in which tasks each team participated.

| Team Name | Task 1 EPVS | Task 2 Microbleeds | Task 3 Lacunes |
|---|---|---|---|
| BigrBrain | X | X | X |
| Dawai | | X | X |
| EMC_N | | | X |
| MixLacune | | | X |
| MixMicrobleed | | X | |
| MixMicrobleedNet | | X | |
| Neurophet | X | | X |
| TeamTea | X | X | X |
| Tfff | | X | |
| TheGPU | X | X | |
| ValdoNN | | X | |
| Zihao | | X | |

Table 5: Participation of the teams across the different tasks

Table 6 reports, for each task and team, the average time needed to evaluate one case, the GPU memory consumption, the docker memory requirements (CPU/GPU) and the methods’ characteristics. The memory details are presented both as requested by the participants based on their training settings and as measured on a single case allowing for memory flooding. All methods using Stochastic Gradient Descent (SGD) as optimizer applied Nesterov momentum with a value of 0.99. Poly learning rate scheduling is defined as multiplying the learning rate by $\left(1-\frac{epoch}{epoch_{max}}\right)^{0.9}$ (a sketch of these settings is given after Table 6). The following architectures were listed by the participating teams: 2D UNet [Ronneberger et al., 2015], 3D UNet [Çiçek et al., 2016], nnUNet [Isensee et al., 2021], MaskRCNN [He et al., 2017], Mask-RetinaNet [Farady et al., 2020] and ResNet [He et al., 2016]. Beyond the well-known Dice [Milletari et al., 2016] and binary cross-entropy losses, others such as the focal loss [Lin et al., 2017] and the blob loss [Kofler et al., 2022] were mentioned. Adam [Kingma and Ba, 2014], SGD [Gardner, 1984] and Ranger21 [Wright and Demeure, 2021] were the optimizers used.

Table 6: Details of the methods of the participating teams for each task. Abbreviations: Aug. - Augmentation; BCE - Binary Cross-Entropy; wBCE - weighted Binary Cross-Entropy; CSF - Cerebrospinal fluid; ES - Early Stopping; LR - Learning Rate; MAE - Mean Absolute Error; Mem - Memory; NM - Nesterov Momentum (value 0.99); Norm. - Normalization; Optim. - Optimizer; PLRS - Poly Learning Rate Schedule; Preproc. - Preprocessing; Pret. - Pretrained; Postproc. - Postprocessing; Req. - Requested; RF - Random Forest; SGD - Stochastic Gradient Descent; Val - Validation
| Task | Team | Time (min) | Mem (GB) | Req | Method | Loss | Dim. | Input | Patches | Preproc. | Optim. | LR | Stopping criterion | PostProc. | Aug. | Val% | Framework | Pret. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Task 1 - EPVS | BigrBrain | 0.87 | 1.9 | 32 10 | UNet | Dice | 2D | All | 225 x 225 | Min-max Norm., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 ES 10 | | Rotation, Zooms, Shifts, Flips | 20 | Pytorch Ignite | |
| | Neurophet | 1.92 | 1.7 | 10 20 | MaskRCNN | BCE Focal MAE | 2.5D | All | | Norm. | | | | | | | | |
| | TeamTea | 1.38 | 3.7 | 10 8 | nnUNet | Dice | 2D | All | 256 x 224 | Cropping, BF corr., Z-score Norm., Resampling | SGD NM | 0.01 PLRS | 1000 | | Zoom, Flip, Gaussian noise | 20 | | |
| | TheGPU | 8.2 | NA | 10 0 | RF | NA | 2D | T2 | | Cropping, min-max Norm. | | | | | | 33 | | |
| Task 2 - Microbleeds | BigrBrain | 0.9 | 2.7 | 32 10 | nnUNet | Dice | 2D | All | 512 x 512 | Min-max Norm., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 ES 10 | | Rotation, Zooms, Shifts, Flips | 20 | Pytorch Ignite | |
| | Dawai | 11.2 | 42.2 | 128 48 | UNet | Blob | 3D | All | 192 x 192 x 32 | Quantile Norm. | Ranger21 | | | | Flips, Gaussian Noise, Affine | | MONAI | X |
| | MixMicrobleed | 45.8 | 43.1 | 10 10 | MaskRCNN UNet | Dice BCE MAE | 2D 2.5D | | 64 x 64 Whole | Z-score Norm., Resampling | Adam | 0.000005 0.00005 | 15 50 | X | Affine, Flips | 20 | | X |
| | MixMicrobleedNet | 1.4 | 3.2 | 10 10 | nnUNet | Dice | 3D | All | | | SGD NM | 0.01 PLRS | | | | 0 | | |
| | TeamTea | 2 | 3.3 | 10 8 | UNet | Dice | 3D | All | 96 x 192 x 128 | Z-score Norm., BF corr., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 | | Zoom, Flips, Noise | 30 | nnUNet | |
| | Tfff | 1.5 | 6.5 | 10 8 | ResNet UNet | Dice | 2D | All | 320 x 320 | min-max Norm. | | | | | Flips, Rotation, Translation | | Pytorch Ignite | |
| | TheGPU | 5 | NA | 5 0 | RF | | 2D | T2* | | | | | | | | | | |
| | ValdoNN | 0.6 | 2.3 | 10 8 | nnUNet | Dice BCE | 2D | All | 512 x 512 | Z-score Norm., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 | | Rigid, Zoom, Gaussian noise | | nnUNet | |
| | Zihao | 1.6 | 4.6 | 8 6 | UNet FCN/AlexNet | wBCE Dice | 3D | All | 20 x 20 x 16 24 x 24 x 20 | Z-score Norm., Resampling, Cropping | SGD NM Adam | 0.01 PLRS | 150 80 100 | | Translation, Rotation, Flips | 20% (5) | Pytorch Ignite | |
| Task 3 - Lacunes | BigrBrain | 0.8 | 2 | 32 10 | UNet | Dice | 2D | All | 384 x 320 | Min-max Norm., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 ES 10 | | Rotation, Zooms, Shifts, Flips | | Pytorch Ignite | |
| | Dawai | 11.9 | 42.2 | 128 48 | UNet | Blob | 3D | All | 192 x 192 x 32 | Quantile Norm., Cropping | Ranger21 | | | | | | MONAI | |
| | EMC_N | 32.1 | 42.7 | 24 12 | UNet | wBCE | 3D | All | | Resampling, BF correction, Norm. (wrt CSF) | Adam | 0.0005 | ES (20) | X | Rotation, Flips | 10% | Keras | X |
| | MixLacune | 9.9 | 6.1 | 10 10 | MaskRCNN UNet | Dice | 2D | All | 64 x 64 32 x 32 | Z-score Norm. | Adam | 0.00005 0.0001 | 20 30 | X | Flips | | | X |
| | Neurophet | 1.8 | 1.7 | 10 20 | MaskRetinaNet | BCE Focal MAE | 2.5D | All | | Norm. | Adam | 0.0001 | ReduceLROnPlateau | | | | | |
| | TeamTea | 2 | 3.3 | 10 8 | UNet | Dice | 3D | All | 96 x 192 x 128 | Z-score Norm., BF corr., Resampling, Cropping | SGD NM | 0.01 PLRS | 1000 | | Zoom, Flips, Noise | | nnUNet | |
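As an illustration of the optimizer settings shared by several teams, the following minimal PyTorch sketch combines SGD with Nesterov momentum 0.99 and the poly learning rate schedule defined above; the one-layer model and the training loop are placeholders, not any team’s actual code.

```python
import torch

# SGD + Nesterov momentum (0.99) with a poly learning rate schedule.
model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # placeholder model
base_lr, max_epochs = 0.01, 1000
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.99, nesterov=True)

for epoch in range(max_epochs):
    lr = base_lr * (1 - epoch / max_epochs) ** 0.9  # poly schedule
    for group in optimizer.param_groups:
        group["lr"] = lr
    # ... one training epoch at this learning rate ...
```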
Among all the submissions, only one team (TheGPU) proposed an alternative to a deep learning solution. The majority of the proposed methods were trained as pure segmentation solutions, and a few teams submitted a detection+segmentation solution based on MaskRCNN [He et al., 2017] or Mask-RetinaNet [Farady et al., 2020]. Across all tasks, when a deep learning solution was proposed, the UNet architecture was the most common choice. For all three tasks, the time required to process a case and the GPU memory requirements varied greatly: for Task 2 - Microbleeds, for instance, the processing time ranged from less than 1 minute to 45.8 minutes per case, and the memory consumption from 2.4 to 43 GB (allowing for memory flooding). In terms of the methodology for the uncertainty assessment in Task 3 - Lacunes, the two teams submitting methods to all three tasks did not provide any uncertainty map. Among the 4 remaining teams, most directly used the probabilistic value of their output as a measure of uncertainty, while MixLacune defined an uncertainty zone at the border of their detected lacunes. For all teams, the key characteristics of the proposed methods are summarized in Table 6. Additional details can be found for each team on the OpenReview repository https://openreview.net/group?id=MICCAI.org/2021/Challenge/VALDO.

### 3.2 Metric values

For each task, the detection and segmentation metrics are reported across all teams.

##### Task 1 - Enlarged Perivascular Spaces (EPVS)

The summary statistics for each team and each metric are reported in Table 7.

Table 7: Metrics results for Task 1 - EPVS presented as median [1st quartile ; 3rd quartile] for all metrics. AED - Absolute Element Difference; AVD - Absolute Volume Difference (in mm3). In bold, the significantly best performance across the different teams (excluding the ensemble solutions), and in italic when there is no significant difference compared to the second best.

| | Detection | | Segmentation | |
|---|---|---|---|---|
| | F1 | AED | Mean Dice | AVD |
| BigrBrain | 35.81 [28.14 ; 40.42] | 14.50 [6.00 ; 34.50] | 61.09 [55.40 ; 66.57] | 45.30 [16.12 ; 89.12] |
| Neurophet | 0.00 [0.00 ; 3.34] | 29.00 [13.00 ; 47.00] | 28.23 [23.27 ; 29.76] | 390.15 [250.72 ; 636.58] |
| TeamTea | 17.12 [6.79 ; 25.90] | 41.00 [24.25 ; 69.25] | 55.07 [46.25 ; 64.23] | 106.05 [73.00 ; 175.86] |
| TheGPU | 38.92 [28.87 ; 49.44] | 16.00 [9.00 ; 35.75] | 72.38 [64.97 ; 77.12] | 45.20 [23.79 ; 82.21] |
| EnsembleAll | 38.62 [28.10 ; 44.82] | 24.00 [12.00 ; 46.00] | 64.33 [59.14 ; 68.40] | 96.15 [63.67 ; 151.69] |
| EnsembleTop | 38.86 [31.19 ; 45.13] | 29.00 [15.25 ; 50.25] | 67.38 [58.24 ; 72.23] | 36.10 [20.15 ; 66.33] |

Figure 4 presents the distribution of metrics values for detection (top row) and segmentation metrics (bottom row) for Task 1 - EPVS.
Figure 4: Distribution of metrics values across the different teams for detection metrics (top row) and segmentation metrics (bottom row) for Task 1 - EPVS

##### Task 2 - Microbleeds

| | Detection | | Segmentation | |
|---|---|---|---|---|
| | F1 | AED | Mean Dice | AVD |
| BigrBrain | 16.67 [0.00 ; 36.10] | 9.00 [5.00 ; 16.00] | 81.17 [71.86 ; 89.69] | 52.47 [15.45 ; 171.98] |
| Dawai | 0.00 [0.00 ; 40.00] | 1.00 [1.00 ; 3.00] | 68.35 [52.99 ; 77.71] | 12.40 [6.29 ; 33.05] |
| MixMicrobleed | 0.00 [0.00 ; 0.00] | 1e5 [499.5 ; 1e5] | 64.36 [55.79 ; 68.58] | 1e5 [4728 ; 1e5] |
| MixMicrobleedNet | 68.42 [36.67 ; 100.00] | 1.00 [0.00 ; 1.00] | 84.01 [79.48 ; 87.62] | 8.77 [2.48 ; 24.30] |
| TeamTea | 66.67 [0.00 ; 100.00] | 1.00 [0.00 ; 1.00] | 82.57 [74.65 ; 87.50] | 11.30 [1.81 ; 25.39] |
| Tfff | 40.00 [18.18 ; 66.67] | 3.00 [1.00 ; 6.00] | 77.65 [62.43 ; 89.13] | 15.27 [4.33 ; 49.33] |
| TheGPU | 0.00 [0.00 ; 0.00] | 4.00 [1.00 ; 10.00] | 49.46 [36.89 ; 78.14] | 602.89 [159 ; 1842.02] |
| ValdoNN | 50.00 [0.00 ; 68.15] | 1.00 [1.00 ; 2.00] | 80.00 [66.67 ; 87.68] | 12.00 [3.14 ; 24.91] |
| Zihao | 66.67 [20.83 ; 100.00] | 1.00 [0.00 ; 2.00] | 80.00 [73.34 ; 88.04] | 9.61 [3.20 ; 21.51] |
| EnsembleAll | 66.67 [0.00 ; 100.00] | 1.00 [0.00 ; 1.00] | 81.22 [71.35 ; 87.27] | 12.87 [4.93 ; 27.26] |
| EnsembleTop | 75.68 [38.18 ; 100.00] | 1.00 [0.00 ; 1.00] | 77.90 [29.91 ; 87.23] | 11.25 [2.81 ; 21.82] |

Table 8: Metrics results for Task 2 - Microbleeds presented as median [1st quartile ; 3rd quartile] for each metric. AED - Absolute Element Difference; AVD - Absolute Volume Difference (in mm3). In bold, the significantly best performance per metric across teams (excluding the ensemble solutions).

Figure 5 presents the distribution of metrics values for detection (top row) and segmentation metrics (bottom row) for Task 2 - Microbleeds, with Table 8 presenting the metrics values across all teams.

Figure 5: Distribution of metrics values across the different teams for detection metrics (top row) and segmentation metrics (bottom row) for Task 2 - Microbleeds; AED - Absolute Element Difference; AVD - Absolute Volume Difference

##### Task 3 - Lacunes

Table 9 presents the results obtained for Task 3 - Lacunes.

| | Detection | | Segmentation | |
|---|---|---|---|---|
| | F1 | AED | Mean Dice | AVD |
| BigrBrain | 7.69 [5.06 ; 16.49] | 27.50 [20.25 ; 33] | 40.84 [27.02 ; 50.27] | 123.93 [79.49 ; 182.64] |
| Dawai | 15.38 [0.00 ; 25.00] | 6.00 [3.00 ; 10.00] | 40.09 [26.20 ; 45.31] | 78.93 [26.94 ; 209.24] |
| EMC_N | 3.92 [0.00 ; 54.55] | 2.00 [1.00 ; 4.75] | 20.49 [12.21 ; 34.08] | 125.60 [45.08 ; 375.96] |
| MixLacune | 6.25 [0.00 ; 12.00] | 22.00 [13.50 ; 26.00] | 16.85 [10.31 ; 27.59] | 33.95 [16.88 ; 107.69] |
| Neurophet | 4.55 [0.00 ; 10.53] | 20.00 [11.50 ; 34.00] | 8.82 [3.73 ; 15.33] | 471.40 [244.16 ; 891.16] |
| TeamTea | 28.57 [0.00 ; 57.14] | 1.00 [0.00 ; 2.00] | 45.75 [36.74 ; 56.17] | 14.88 [0.00 ; 40.29] |
| EnsembleAll | 28.57 [0.00 ; 60.87] | 1.00 [0.00 ; 2.00] | 37.98 [22.13 ; 44.55] | 13.05 [0.07 ; 61.03] |
| EnsembleTop | 30.77 [0.00 ; 66.67] | 1.00 [0.00 ; 2.00] | 38.17 [25.48 ; 45.26] | 9.68 [1.05 ; 63.28] |

Table 9: Metrics results for Task 3 - Lacunes presented as median [1st quartile ; 3rd quartile]. AED - Absolute Element Difference; AVD - Absolute Volume Difference (in mm3). Bold font indicates the best performance across the teams (excluding ensemble solutions) when significantly better than all others.
Italic font indicates the best performance when not significantly better than the second-ranked method.

Table 10 shows the metrics for the uncertainty component of the task, excluding BigrBrain and TeamTea, who did not provide an uncertainty map.

Table 10: Metrics related to uncertainty for Task 3 - Lacunes presented as median [1st quartile ; 3rd quartile].

| | Detection Unc | Segmentation Unc |
|---|---|---|
| Dawai | 0.00 [0.00 ; 25.00] | 63.65 [0.00 ; 87.73] |
| EMC_N | 100.00 [86.81 ; 100.00] | 0.00 [0.00 ; 67.94] |
| MixLacune | 0.00 [0.00 ; 3.57] | 4.76 [0.00 ; 24.39] |
| Neurophet | 0.00 [0.00 ; 6.82] | 0.00 [0.00 ; 23.18] |

Figure 6 presents the distribution of metric values for detection (top row) and segmentation metrics (bottom row) for Task 3 - Lacunes.

Figure 6: Distribution of metric values across the different teams for detection metrics (top row) and segmentation metrics (bottom row) for Task 3 - Lacunes

Figure 7 shows the distribution of metric values for the assessment of uncertainty applied for Task 3 - Lacunes.

Figure 7: Distribution of metric values across the different teams for the assessment of uncertainty for Task 3 - Lacunes

### 3.3 Rankings

Table 11 presents the overall ranking, grouped according to the number of tasks undertaken, and for each individual task when relevant.

Table 11: Ranking across all tasks, grouped by the number of tasks in which each team participated. Across all metrics, D refers to detection and S to segmentation, R to relative, A to absolute and U to uncertainty. DR refers to the F1 score, DA to the absolute element difference, SR to the Mean Dice, SA to the absolute volume difference, DU to detection uncertainty and SU to segmentation uncertainty. Tot is the overall rank for a given task.

| | Task 1 - EPVS | | | | | Task 2 - Microbleeds | | | | | Task 3 - Lacunes | | | | | | |
| Team | DR | DA | SR | SA | Tot | DR | DA | SR | SA | Tot | DR | DA | SR | SA | DU | SU | Tot |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TeamTea | 3 | 3.5 | 3 | 2.5 | 3 | 1.5 | 2.5 | 2.5 | 3 | 2 | 1.5 | 1 | 1 | 1 | | | 2 |
| BigrBrain | 2 | 1.5 | 2 | 1 | 2 | 6 | 8 | 3.5 | 7 | 6 | 4 | 5.5 | 2.5 | 4.5 | | | 6 |
| Dawai | | | | | | 5 | 7 | 7.5 | 5.5 | 7 | 3 | 3 | 2.5 | 3 | 2 | 1 | 1 |
| TheGPU | 1 | 1.5 | 1 | 2.5 | 1 | 7 | 8 | 9 | 8 | 8 | | | | | | | |
| Neurophet | 4 | 3.5 | 4 | 4 | 4 | | | | | | 5.5 | 5.5 | 6 | 6 | 3 | 3.5 | 5 |
| MixMicrobleedNet | | | | | | 1.5 | 1 | 1 | 1 | 1 | | | | | | | |
| Zihao | | | | | | 3 | 2.5 | 2.5 | 3 | 3 | | | | | | | |
| ValdoNN | | | | | | 4 | 4.5 | 5 | 3 | 4 | | | | | | | |
| Tfff | | | | | | 6 | 4.5 | 5 | 5.5 | 5 | | | | | | | |
| MixMicrobleed | | | | | | 9 | 9 | 7.5 | 9 | 9 | | | | | | | |
| EMC_N | | | | | | | | | | | 1.5 | 2 | 4.5 | 4.5 | 1 | 2 | 2 |
| MixLacune | | | | | | | | | | | 5.5 | 4 | 4.5 | 2 | 4 | 3.5 | 4 |

Table 12 reflects the distribution of Kendall’s tau coefficients assessing the robustness of the ranking for each metric using 1000 bootstrap samples.

Table 12: Distribution characteristics (mean and standard deviation) of the Kendall’s tau correlation coefficient (in %) between the final ranking and the rankings from 1000 bootstrap samples. Across all metrics, D refers to detection and S to segmentation, R to relative, A to absolute and U to uncertainty. DR refers to the F1 score, DA to the absolute element difference, SR to the Mean Dice, SA to the absolute volume difference, DU to detection uncertainty and SU to segmentation uncertainty.
| | DR | DA | SR | SA | DU | SU |
|---|---|---|---|---|---|---|
| Task 1 - EPVS | 96.13 (4.33) | 93.55 (7.39) | 97.87 (4.45) | 97.33 (4.02) | | |
| Task 2 - Microbleeds | 98.11 (1.81) | 98.36 (1.70) | 98.19 (2.38) | 87.08 (6.62) | | |
| Task 3 - Lacunes | 95.88 (6.57) | 97.46 (3.98) | 94.68 (3.26) | 93.19 (5.02) | 99.85 (1.13) | 95.82 (8.37) |

### 3.4 Additional analyses

#### 3.4.1 Clinically relevant markers

##### Task 1 - EPVS

For Task 1, since the burden of EPVS is currently considered the most clinically valuable insight, the Spearman correlation coefficient between predicted and reference burden across all test cases was calculated for overall volume and element count. It is presented in Figure 8, along with the log-transformed relationship between reference and predicted burden in terms of volume (top) and count (bottom).

Figure 8: Association between reference and predicted EPVS burden across the participating teams for volume (top row) and count (bottom row). The Spearman rho (%) is indicated on each plot.

##### Task 2 - Microbleeds

For cerebral microbleeds, classifying the absence or presence of any microbleeds was deemed the most clinically relevant assessment. Balanced accuracy over the test set varied from 29.5% for team Dawai to 87.3% for team MixMicrobleed. Figure 9 presents the confusion matrices for each of the teams (a sketch of both clinical metrics is given below).

Figure 9: Confusion matrix regarding the classification of an image as containing at least one microbleed, based on the obtained prediction images.

##### Task 3 - Lacunes

Similarly, Figure 10 shows the confusion matrix for correctly identifying cases that have at least one lacune. For the 6 participating teams, balanced accuracy was close to 0.5 for almost all teams, as they predicted the presence of at least one lacune in almost all cases. Only TeamTea was able to recognize cases without lacunes, with a balanced accuracy of 78.3%.

Figure 10: Confusion matrix regarding the classification of an image as containing at least one lacune, based on the obtained prediction images.
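Both clinical metrics are standard and can be computed as in the following sketch; all values below are synthetic placeholders, not challenge results.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Task 1: Spearman correlation between reference and predicted EPVS burden.
ref_burden = rng.gamma(2.0, 50.0, size=66)             # toy per-case volumes
pred_burden = ref_burden + rng.normal(0.0, 30.0, 66)   # noisy toy predictions
rho, _ = spearmanr(ref_burden, pred_burden)

# Tasks 2/3: balanced accuracy of per-image presence/absence classification.
ref_counts = rng.poisson(1.0, size=147)                # toy lesions per case
pred_counts = np.clip(ref_counts + rng.integers(-1, 2, 147), 0, None)
bal_acc = balanced_accuracy_score(ref_counts > 0, pred_counts > 0)
print(f"Spearman rho: {100 * rho:.1f}%, balanced accuracy: {100 * bal_acc:.1f}%")
```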
#### 3.4.2 Cross-dataset variability

Performance varied greatly across datasets, being systematically better on the RSS dataset than on the others (SABRE or ALFA). For all three tasks, Figure 11 presents the variation of the F1 and Mean Dice scores across datasets for all teams, and Table 13 presents the median and interquartile range of the F1 score and Mean Dice for all tasks across datasets.

Figure 11: Distribution of results for F1 (left column) and Mean Dice (right column) across the different datasets for the three tasks (each row represents one task).

Table 13: F1 score and Mean Dice presented as median [1st quartile ; 3rd quartile] across the different datasets for all three tasks

| | | F1 Score | | | Mean Dice | | |
| Task | Team | ALFA | RSS | SABRE | ALFA | RSS | SABRE |
|---|---|---|---|---|---|---|---|
| Task 1 | BigrBrain | | 35.40 [27.89 ; 40.36] | 38.20 [31.19 ; 40.25] | | 63.56 [58.05 ; 67.29] | 43.10 [41.94 ; 46.92] |
| | Neurophet | | 1.95 [0.00 ; 3.71] | 0.00 [0.00 ; 0.00] | | 28.6 [23.90 ; 30.17] | NA |
| | TeamTea | | 13.50 [6.02 ; 21.76] | 34.80 [25.10 ; 40.93] | | 59.66 [50.08 ; 65.55] | 45.05 [42.88 ; 46.91] |
| | TheGPU | | 43.71 [34.55 ; 51.67] | 24.32 [2.61 ; 28.42] | | 73.41 [69.68 ; 78.6] | 34.96 [7.16 ; 39.25] |
| Task 2 | BigrBrain | 11.11 [0.00 ; 16.67] | 30.77 [13.81 ; 51.47] | 36.36 [21.81 ; 58.24] | 82.21 [73.3 ; 93.32] | 75.07 [69.41 ; 81.84] | 90.13 [84.63 ; 92.19] |
| | Dawai | 0.00 [0.00 ; 0.00] | 41.43 [25.00 ; 66.67] | 0.00 [0.00 ; 0.00] | 57.20 [43.49 ; 70.91] | 69.47 [53.16 ; 77.82] | 63.33 [56.31 ; 69.23] |
| | MixMicrobleed | 0.00 [0.00 ; 0.00] | 0.00 [0.00 ; 0.75] | 0.00 [0.00 ; 0.42] | 0.00 [0.00 ; 0.00] | 58.90 [54.56 ; 67.04] | 68.62 [66.67 ; 79.41] |
| | MixMicrobleedNet | 66.67 [0.00 ; 100] | 77.81 [66.67 ; 100.00] | 51.67 [50.00 ; 69.23] | 87.18 [74.71 ; 96.67] | 82.79 [79.82 ; 85.20] | 84.21 [79.35 ; 87.39] |
| | TeamTea | 50.00 [0.00 ; 100.00] | 80.00 [66.67 ; 100.00] | 50.00 [30.22 ; 68.75] | 85.16 [65.38 ; 100.00] | 82.08 [77.83 ; 85.24] | 84.62 [74.65 ; 87.66] |
| | Tfff | 20.00 [7.68 ; 40.00] | 65.15 [47.50 ; 76.41] | 40.00 [33.33 ; 55.91] | 80.00 [63.19 ; 99.46] | 68.58 [57.64 ; 80.72] | 86.19 [79.14 ; 89.46] |
| | TheGPU | 0.00 [0.00 ; 0.00] | 0.00 [0.00 ; 9.95] | 0.00 [0.00 ; 10.01] | 79.43 [56.53 ; 83.76] | 40.00 [32.63 ; 48.54] | 66.67 [53.28 ; 79.70] |
| | ValdoNN | 0.00 [0.00 ; 66.67] | 66.67 [38.82 ; 80.00] | 50.00 [32.14 ; 51.56] | 86.06 [70.24 ; 100.00] | 70.91 [62.08 ; 81.48] | 87.00 [81.51 ; 89.58] |
| | Zihao | 50.00 [0.00 ; 100.00] | 74.81 [66.67 ; 94.23] | 45.00 [22.92 ; 63.54] | 87.71 [80.00 ; 100.00] | 76.98 [71.69 ; 80.46] | 85.42 [74.53 ; 88.85] |
| Task 3 | BigrBrain | | 7.41 [5.48 ; 14.91] | 8.39 [3.80 ; 17.75] | | 42.81 [27.39 ; 51.80] | 30.17 [22.30 ; 37.37] |
| | Dawai | | 20.00 [0.00 ; 33.33] | 0.00 [0.00 ; 0.00] | | 39.88 [25.38 ; 44.82] | 57.14 [57.14 ; 57.14] |
| | EMC_N | | 44.44 [0.00 ; 66.67] | 0.00 [0.00 ; 0.00] | | 21.42 [13.09 ; 34.24] | 2.20 [2.20 ; 2.20] |
| | MixLacune | | 6.25 [0.00 ; 10.81] | 9.09 [0.00 ; 16.16] | | 16.85 [10.31 ; 28.04] | 15.45 [12.51 ; 19.3] |
| | Neurophet | | 6.25 [0.00 ; 14.29] | 0.00 [0.00 ; 0.46] | | 8.82 [4.98 ; 14.65] | 11.37 [6.86 ; 15.88] |
| | TeamTea | | 40.00 [0.00 ; 66.67] | 0.00 [0.00 ; 1.61] | | 45.75 [34.58 ; 55.75] | 64.66 [54.40 ; 74.92] |

The ranking also varied slightly across datasets, as indicated in Table 14.

Table 14: Ranking calculated for each dataset separately

| Task | Team | ALFA | RSS | SABRE |
|---|---|---|---|---|
| Task 1 - EPVS | BigrBrain | | 2 | 1 |
| | Neurophet | | 4 | 4 |
| | TeamTea | | 3 | 2 |
| | TheGPU | | 1 | 3 |
| Task 2 - Microbleeds | BigrBrain | 7 | 7 | 6 |
| | Dawai | 6 | 6 | 7 |
| | MixMicrobleed | 9 | 9 | 9 |
| | MixMicrobleedNet | 2 | 1 | 2 |
| | TeamTea | 3 | 2 | 1 |
| | Tfff | 5 | 5 | 5 |
| | TheGPU | 8 | 8 | 8 |
| | ValdoNN | 4 | 4 | 3 |
| | Zihao | 1 | 3 | 4 |
| Task 3 - Lacunes | BigrBrain | | 5 | 5 |
| | Dawai | | 2.5 | 1 |
| | EMC_N | | 1 | 3 |
| | MixLacune | | 4 | 2 |
| | Neurophet | | 6 | 6 |
| | TeamTea | | 2.5 | 4 |

#### 3.4.3 Regional variability

The variability of the metrics for Task 1 - EPVS across the different brain regions is illustrated for F1 and Mean Dice in Figure 12.
Figure 12: F1 and Mean Dice distribution across the different brain regions for Task 1 - EPVS

#### 3.4.4 Inter-rater variability

Inter-rater variability was investigated for the tasks and datasets for which two raters provided annotations for the same cases (Task 1 - EPVS, SABRE dataset; Task 3 - Lacunes, all datasets), and the results are presented in Table 15.

Table 15: Metrics values (median [1st quartile ; 3rd quartile]) presented for the cases where a double rating was available in the test set.

| | Detection | | | Segmentation | | |
| | F1 score R1 | F1 score R2 | AED | Mean Dice R1 | Mean Dice R2 | AVD |
|---|---|---|---|---|---|---|
| Task 1 - EPVS | 19.57 [13.48 ; 23.81] | 19.86 [13.58 ; 23.81] | 135.00 [96.25 ; 316.50] | 52.63 [52.07 ; 54.51] | 54.07 [51.43 ; 55.05] | 651.00 [371.75 ; 2819.75] |
| Task 3 - Lacunes | 48.45 [39.01 ; 61.88] | 55.84 [0.00 ; 86.36] | 0.50 [0.00 ; 1.00] | 59.03 [43.72 ; 64.47] | 59.49 [44.95 ; 65.88] | 21.51 [0.00 ; 43.63] |

For Task 1 - EPVS, inter-rater detection and segmentation performance were both lower than those of the best method. For Task 3 - Lacunes, inter-rater detection performance was notably higher than that of the best method, and inter-rater segmentation performance was better by quite a strong margin, reaching 59.49% in comparison to the best method at 45.75%.

#### 3.4.5 Ensembles

For the creation of EnsembleTop, Task 1 - EPVS used the predictions from teams TheGPU and BigrBrain, Task 2 - Microbleeds used the predictions from MixMicrobleedNet, TeamTea, Zihao and ValdoNN, and Task 3 - Lacunes used the predictions from Dawai, TeamTea and EMC_N (the averaging is sketched below, after Table 16). Table 16 presents the values of the metrics and the corresponding ranking obtained for each type of ensemble (EnsembleAll, the average of all solutions, and EnsembleTop, the average of the top 50%) across the three tasks. When considering the clinical metrics, performance was higher for both ensemble solutions in Task 1 - EPVS, reaching a correlation coefficient of 70.0% and 74.8% for EnsembleAll and EnsembleTop respectively for the count, and 69.5% and 80.0% for the volume. For Task 2 - Microbleeds, balanced accuracy was 77.0% for EnsembleAll and 79.6% for EnsembleTop, ranking fourth and third compared to all the teams. Finally, for Task 3 - Lacunes, balanced accuracy reached 75.0% for EnsembleAll and dropped to 65.3% for EnsembleTop, both lower than the 78.0% obtained by TeamTea.

Table 16: Metrics values presented as median [1st quartile ; 3rd quartile] for the 4 common metrics across the different ensemble types for the three tasks, along with the associated ranking (in parentheses).

| | | F1 | AED | Mean Dice | AVD |
|---|---|---|---|---|---|
| Task 1 - EPVS | EnsembleAll | 38.62 [28.10 ; 44.82] (3.5) | 24.00 [12.00 ; 46.00] (3.5) | 64.33 [59.14 ; 68.40] (4) | 96.15 [63.67 ; 151.69] (2) |
| | EnsembleTop | 38.86 [31.19 ; 45.13] (1.5) | 29.00 [15.25 ; 50.25] (3.5) | 67.38 [58.24 ; 72.23] (2) | 36.10 [20.15 ; 66.33] (2) |
| Task 2 - Microbleeds | EnsembleAll | 66.67 [0.00 ; 100.00] (4) | 1.00 [0.00 ; 1.00] (3) | 81.22 [71.35 ; 87.27] (6.5) | 12.87 [4.93 ; 27.26] (7) |
| | EnsembleTop | 75.68 [38.18 ; 100] (1) | 1.00 [0.00 ; 1.00] (1) | 77.90 [29.91 ; 87.23] (3) | 11.25 [2.81 ; 21.82] (3) |
| Task 3 - Lacunes | EnsembleAll | 28.57 [0.00 ; 60.87] (2.5) | 1.00 [0.00 ; 2.00] (2) | 37.98 [22.13 ; 44.55] (3.5) | 13.05 [0.07 ; 61.03] (2) |
| | EnsembleTop | 30.77 [0.00 ; 66.67] (2.5) | 1.00 [0.00 ; 2.00] (2) | 38.17 [25.48 ; 45.26] (3.5) | 9.68 [1.05 ; 63.28] (2) |
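The ensembling itself is a simple voxel-wise average of the probabilistic maps, as in the following minimal sketch; the dictionary of per-team maps is hypothetical, with the EnsembleTop selection shown for Task 2 - Microbleeds.

```python
import numpy as np

# Voxel-wise average of probabilistic prediction maps for one case.
def ensemble(prob_maps):
    return np.mean(np.stack(prob_maps), axis=0)

# Hypothetical per-team probability volumes for a single case.
maps = {team: np.random.rand(4, 4, 4) for team in
        ["MixMicrobleedNet", "TeamTea", "Zihao", "ValdoNN", "Tfff"]}
ensemble_all = ensemble(list(maps.values()))          # EnsembleAll
ensemble_top = ensemble([maps[t] for t in             # EnsembleTop (Task 2)
                         ["MixMicrobleedNet", "TeamTea", "Zihao", "ValdoNN"]])
binary_prediction = ensemble_top > 0.5                # thresholded as in 2.5
```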
## 4 Discussion

This manuscript reports the design and outcome of the “Where is VALDO?” challenge that took place as a satellite event of MICCAI 2021. The detection and segmentation of three types of markers of cerebral small vessel disease were evaluated as three distinct tasks, namely enlarged perivascular spaces (Task 1), cerebral microbleeds (Task 2) and lacunes (Task 3). Among the 12 distinct participating teams, 9 teams provided a solution for Task 2 and 2 teams competed across all three tasks.

Although the challenge was designed to address both detection and segmentation aspects, most of the proposed solutions were designed for segmentation only, with the detection performance considered as a by-product of the prediction. This choice may have been partially influenced by the guidelines, which requested only the probabilistic segmentation map, subsequently post-processed to identify the individual connected components, instead of requesting instance segmentations and predicted detections as outputs. However, this strategy appeared to generally work well, with segmentation performance on par with detection performance across all three tasks. Interestingly, there was no strong relationship between memory or time expenditure and overall performance, with some of the most resource-hungry methods performing worse than some of the most cost-effective solutions.

Across all tasks, one team proposed a solution not relying on deep learning, and their strategy had the best performance for Task 1 - EPVS, possibly because EPVS may be relatively easy to characterise in terms of signal and shape signature. However, none of the proposed methods for Task 1 - EPVS made use of the weak annotation data (counts on slices). Also, while some methods only used annotated slices, performance may have been lowered by not using the masks in situations where only specific parts of a given axial slice were annotated (RSS data).

Most deep learning solutions used a UNet-style architecture at some point of their pipeline, either as the main network for one-stage methods or as the segmentation component for multi-stage solutions. Interestingly, despite four teams describing the use of the nnUNet [Isensee et al., 2021] architecture for Task 2 - Microbleeds, performance varied greatly across these teams, with ranks 1, 2, 5 and 7 out of 9. This could potentially be explained by the choice of input data, the dimensionality, or the framework chosen. In the context of microbleeds, using 3D information may be particularly relevant to avoid mimics. This observation highlights the importance of all these design steps in building a relevant solution, the use of the whole extent of the training data being a key component of the winner’s method. Such considerations are particularly relevant when dealing with a modest number of training examples. When considering choices of augmentation, those involving local changes to the input images and/or the reference annotation (interpolation, intensity changes, spatial deformation) may cause inconsistencies in the case of very small objects of interest.

In terms of dataset origin, performance was generally higher for the dataset with the highest resolution, which, for Task 2 - Microbleeds and Task 3 - Lacunes, was also the dataset with the highest number of training cases. This is to be expected, both as a direct impact of resolution on the evaluation metrics and as a property related to overfitting.
The amount of training data (in terms of examples of lesions) also appeared relevant when comparing the performance of the Task 1 methods across the different regions of interest, the regions with the most EPVS (centrum semi-ovale and basal ganglia) being the ones with the highest performance across all methods. This may be due not only to the scarcer training data in the remaining regions (hippocampus and mesencephalon) but also to the characteristics of the imaging sequences in these regions and the higher likelihood of mimics (cysts) and of variability in presentation. Knowledge of the differences in performance across regions is particularly interesting clinically, as associations with risk factors and/or clinical function have been reported for specific anatomical regions in relation to Alzheimer’s Disease [Jiménez-Balado et al., 2018] and Parkinson’s disease [Duker and Espay, 2007].

For Task 1 - EPVS, even for the best teams, the performance presented a large variability, which would make adoption in clinical practice difficult. The overall good correlation between expected and predicted burden may, however, already be enough to make these tools valuable when investigating associations at the population level.

For Task 2 - Microbleeds, it appeared that, when correctly detected, the segmentation of lesions was very good. However, even for the best teams there were issues at the detection level, with both cases missed and cases wrongly considered as containing at least one microbleed. The best teams indicated very few lesions, which would be relatively practical to visually inspect and reject if necessary. Here, it is the absence of a systematic bias towards overcall or undercall that could make these methods difficult to integrate into clinical pipelines.

For Task 3 - Lacunes, performance appeared quite poor on both detection and segmentation metrics, with a generally large overcall of lacunes and, when lacunes were detected correctly, a lower segmentation performance than for Task 2 - Microbleeds. Such performance would require too much editing and checking time to be adopted in either clinical practice or research studies. When comparing the performance across all three tasks, it appeared that performance was higher on tasks for which the variability in element appearance was lower (EPVS with linear shapes and microbleeds with spherical shapes, compared to lacunes with more heterogeneous shapes).

The metrics investigated as closest to the current clinical measures of interest were generally in agreement with the overall ranking of the challenge but showed stark differences in terms of the clinical viability of the proposed solutions. While for Task 1 - EPVS and Task 2 - Microbleeds the proposed solutions achieved reasonable performance in terms of the “clinical” metric, only one team performed reasonably well for Task 3 - Lacunes, with all other solutions systematically finding many lacunes even when there were none. This may be due to the large variability in appearance (i.e. shape, location, intensity signature) as well as the lower number of examples of this type of lesion when compared to those of Task 1 - EPVS and Task 2 - Microbleeds. With all solutions generally producing many false positives, the time required to go through each case and reject the many wrongly detected lesion candidates would be prohibitive for clinical adoption. One must, however, keep in mind that none of these solutions were optimized for this metric and may have performed differently otherwise.
In this case, the addition of auxiliary tasks to the learning framework, to incorporate a priori knowledge of the burden distribution or to directly optimize such metrics, may yield interesting results.

In a field where adequate research biomarkers have yet to be properly defined and proven reliable [Smith et al., 2019], these observations regarding clinical metrics may lead to defining different tasks and solutions for the targeted markers according to their purpose: clinical practice or research. While location, individual volume and shape information may become of interest in the research context as potential new biomarkers, thereby highlighting segmentation as an interesting end-goal, these characteristics may not yet be relevant in the clinical context. In clinical practice, one could imagine a two-stage pipeline with 1) whole-image level classification favouring sensitivity, for the flagging of scans where an assessment is required for the presence or absence of a specific marker, and 2) specification of the lesion location (if needed) for the scans that have been flagged as containing a marker. This second step may be particularly relevant when supporting diagnosis (e.g., the distinction between amyloid angiopathy and hypertensive pathology according to microbleed location) or the explanation of the clinical presentation (e.g., a lacune on a crucial white matter tract).

A key aspect, not measured here, is the ability of the proposed methods to be used in clinical settings, with scans likely to be of lower resolution, to have more artefacts, and to simultaneously present other markers of pathology (e.g. stroke, tumours). With the continuous progress in acquisition protocols and the democratization of scanning abilities, research-grade scanning protocols such as those used in this challenge may become routinely available, thereby limiting issues of protocol-related generalizability. However, cohort-related bias may be more difficult to overcome. In fact, in the challenge, data came only from population cohorts and did not include patients with dementia, as would be frequent in memory clinics. While efforts were made to provide training examples covering the whole spectrum of lesion burden, specific pathological presentations may be missing, and the generalizability of the proposed solutions would need to be assessed in these contexts.

##### Conclusion

In this challenge assessing the current segmentation and detection performance for three markers of cerebral small vessel disease, namely EPVS, microbleeds and lacunes, methods directly targeting segmentation were often quite successful in detecting these small structures. The number of elements on which to train the solutions was strongly predictive of performance, both across tasks and regionally. In the case of EPVS, manually engineered features proved relevant enough to compete with deep learning strategies. Strikingly, all the presented methods proposed a training based on dense labelling, discarding the weak labelling available for Task 1 - EPVS. While for Task 1 - EPVS and Task 2 - Microbleeds some methods demonstrated that they could potentially be used for population-based research, the large variability in performance across cases may require lengthy visual censoring if they were to be used for individual cases. In this context, it could be relevant to further include the evaluation of performance variability in the assessed tasks.
In addition, a systematic assessment of prediction confidence (as proposed with the uncertainty metrics of Task 3 - Lacunes) would be of interest for the design of practical implementations.

##### Funding

The challenge prizes were provided by Nvidia and Icometrix. The SABRE study was funded at baseline by the Medical Research Council, Diabetes UK, and the British Heart Foundation, and at follow-up by the Wellcome Trust (082464/Z/07/Z), the British Heart Foundation (SP/07/001/23603, PG/08/103, PG/12/29/29497 and CS/13/1/30327) and Diabetes UK (13/0004774). The Rotterdam Scan Study is supported by the Erasmus MC University Medical Center, the Erasmus University Rotterdam, the Netherlands Organization for Scientific Research (NWO) Grant 918-46-615, the Netherlands Organization for Health Research and Development (ZonMW), the Research Institute for Disease in the Elderly (RIDE), the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement No. 601055, VPH-DARE@IT, and the Dutch Technology Foundation STW (Perspectief programme: Population Imaging Genetics). The ALFA study is supported by the La Caixa Foundation. CHS is funded by an Alzheimer’s Society Junior Fellowship (AS-JF-17-011). KVW and SC are supported by the Deep Learning for Medical Image Analysis (DLMedIA) programme (project no. P15-26), funded by the Dutch Technology Foundation STW, which is part of the Netherlands Organisation for Scientific Research (NWO) and which is partly funded by the Ministry of Economic Affairs, with co-financing by Quantib. FD was funded by the Netherlands Organisation for Health Research and Development, grant 104003005. BM, BW and FK are supported through the SFB 824, subproject B12, supported by the Deutsche Forschungsgemeinschaft (DFG) through the TUM International Graduate School of Science and Engineering (IGSSE), GSC 81. IE is funded by DComEX (Grant agreement ID: 956201). BM acknowledges support by the Helmut Horten Foundation. MdG is an employee of, and holds shares in, GSK; GSK had no role in the design of this challenge or the interpretation of the results. MdB is supported by the Netherlands Organisation for Scientific Research (NWO) project VI.C.182.042. SI and LL have received funding from the Innovative Medicines Initiative 2 Joint Undertaking under the Amyloid Imaging to Prevent Alzheimer’s Disease (AMYPAD) grant agreement No. 115952 and the European Prevention of Alzheimer’s Dementia (EPAD) grant No. 115736; this Joint Undertaking receives support from the European Union’s Horizon 2020 Research and Innovation Programme and EFPIA. HJK was supported by the Galen and Hilary Weston Foundation under the Novel Biomarkers 2019 scheme (№ UB190097). JLM is currently a full-time employee of Lundbeck and has previously served as a consultant or on advisory boards for the following for-profit companies, or has given lectures in symposia sponsored by them: Roche Diagnostics, Genentech, Novartis, Lundbeck, Oryzon, Biogen, Lilly, Janssen, Green Valley, MSD, Eisai, Alector, BioCross, GE Healthcare, and ProMIS Neurosciences. JDG is supported by the Spanish Ministry of Science and Innovation (RYC-2013-13054), has received research support from GE Healthcare, Roche Diagnostics and Hoffmann-La Roche, and has received speaker’s fees from Biogen and Philips.

##### Acknowledgements

We are particularly thankful to all participants of the ALFA, RSS and SABRE studies. We would also like to thank the team of GrandChallenge.org for their technical support and guidance.
The ALFA study group is composed of Müge Akinci, Eider M Arenaza-Urquijo, Annabella Beteta, Anna Brugulat-Serrat, Raffaele Cacciaglia, Alba Cañas, Irene Cumplido, Carme Deulofeu, Ruth Dominguez, Maria Emilio, Carles Falcon, Karine Fauria, Sherezade Fuentes, José Maria González de Echavarri-Gómez, Oriol Grau-Rivera, Laura Hernandez, Gema Huesa, Jordi Huguet, Paula Marne, Marta Milà-Alomà, Tania Menchón, Carolina Minguillon, Arcadi Navarro, Grégory Operto, Eva M Palacios, Eleni Palpatzis, Cleofé Peña-Gómez, Albina Polo, Sandra Pradas, Blanca Rodríguez-Fernández, Aleix Sala-Vila, Gonzalo Sánchez-Benavides, Gemma Salvadó, Mahnaz Shekari, Anna Soteras, Laura Stankeviciute, Marc Suárez-Calvet, Marc Vilanova, and Natalia Vilor-Tejedor.

## References

* Adams et al. [2015] Adams, H.H., Hilal, S., Schwingenschuh, P., Wittfeld, K., van der Lee, S.J., DeCarli, C., Vernooij, M.W., Katschnig-Winter, P., Habes, M., Chen, C., et al., 2015. A priori collaboration in population imaging: the uniform neuro-imaging of virchow-robin spaces enlargement consortium. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring 1, 513–520.
* Alistair [2002] Alistair, D.G., 2002. Hypertensive cerebral small vessel disease and stroke. Brain pathology 12, 358–370.
* Antonelli et al. [2021] Antonelli, M., Reinke, A., Bakas, S., Farahani, K., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., Ronneberger, O., Summers, R.M., van Ginneken, B., Bilello, M., Bilic, P., Christ, P.F., Do, R.K.G., Gollub, M.J., Heckers, S.H., Huisman, H., Jarnagin, W.R., McHugo, M.K., Napel, S., Pernicka, J.S.G., Rhode, K., Tobon-Gomez, C., Vorontsov, E., Huisman, H., Meakin, J.A., Ourselin, S., Wiesenfarth, M., Arbelaez, P., Bae, B., Chen, S., Daza, L., Feng, J., He, B., Isensee, F., Ji, Y., Jia, F., Kim, N., Kim, I., Merhof, D., Pai, A., Park, B., Perslev, M., Rezaiifar, R., Rippel, O., Sarasua, I., Shen, W., Son, J., Wachinger, C., Wang, L., Wang, Y., Xia, Y., Xu, D., Xu, Z., Zheng, Y., Simpson, A.L., Maier-Hein, L., Cardoso, M.J., 2021. The medical segmentation decathlon. arXiv:2106.05735.
* Atlason et al. [2019] Atlason, H.E., Love, A., Sigurdsson, S., Gudnason, V., Ellingsen, L.M., 2019. Segae: Unsupervised white matter lesion segmentation from brain mris using a cnn autoencoder. NeuroImage: Clinical 24, 102085.
* Buch et al. [2017] Buch, S., Cheng, Y.C.N., Hu, J., Liu, S., Beaver, J., Rajagovindan, R., Haacke, E.M., 2017. Determination of detection sensitivity for cerebral microbleeds using susceptibility-weighted imaging. NMR in biomedicine 30, e3551.
* Çiçek et al. [2016] Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O., 2016. 3d u-net: learning dense volumetric segmentation from sparse annotation, in: International conference on medical image computing and computer-assisted intervention, Springer. pp. 424–432.
* Cordonnier et al. [2009] Cordonnier, C., Potter, G.M., Jackson, C.A., Doubal, F., Keir, S., Sudlow, C.L., Wardlaw, J.M., Salman, R.A.S., 2009. Improving interrater agreement about brain microbleeds: development of the brain observer microbleed scale (bombs). Stroke 40, 94–99.
* De Boer et al. [2009] De Boer, R., Vrooman, H.A., Van Der Lijn, F., Vernooij, M.W., Ikram, M.A., Van Der Lugt, A., Breteler, M.M., Niessen, W.J., 2009. White matter lesion extension to automatic brain tissue segmentation on mri. Neuroimage 45, 1151–1161.
* Duker and Espay [2007] Duker, A.P., Espay, A.J., 2007. Parkinsonism associated with striatal perivascular space dilation. Neurology 68, 1540–1540.
* Farady et al. [2020] Farady, I., Lin, C.Y., Rojanasarit, A., Prompol, K., Akhyar, F., 2020. Mask classification and head temperature detection combined with deep learning networks, in: 2020 2nd International Conference on Broadband Communications, Wireless Sensors and Powering (BCWSP), IEEE. pp. 74–78.
* Gardner [1984] Gardner, W.A., 1984. Learning characteristics of stochastic-gradient-descent algorithms: A general study, analysis, and critique. Signal processing 6, 113–133.
* Giau et al. [2019] Giau, V.V., Bagyinszky, E., Youn, Y.C., An, S.S.A., Kim, S.Y., 2019. Genetic factors of cerebral small vessel disease and their potential clinical outcome. International journal of molecular sciences 20, 4298.
* Guerrero et al. [2018] Guerrero, R., Qin, C., Oktay, O., Bowles, C., Chen, L., Joules, R., Wolz, R., Valdés-Hernández, M.d.C., Dickie, D.A., Wardlaw, J., et al., 2018. White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks. NeuroImage: Clinical 17, 918–934.
* Haffner et al. [2016] Haffner, C., Malik, R., Dichgans, M., 2016. Genetic factors in cerebral small vessel disease and their impact on stroke and dementia. Journal of Cerebral Blood Flow & Metabolism 36, 158–171.
* He et al. [2017] He, K., Gkioxari, G., Dollár, P., Girshick, R., 2017. Mask r-cnn, in: Proceedings of the IEEE international conference on computer vision, pp. 2961–2969.
* He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778.
* Ikram et al. [2020] Ikram, M.A., Brusselle, G., Ghanbari, M., Goedegebure, A., Ikram, M.K., Kavousi, M., Kieboom, B.C., Klaver, C.C., de Knegt, R.J., Luik, A.I., et al., 2020. Objectives, design and main findings until 2020 from the rotterdam study. European journal of epidemiology 35, 483–517.
* Ikram et al. [2015] Ikram, M.A., van der Lugt, A., Niessen, W.J., Koudstaal, P.J., Krestin, G.P., Hofman, A., Bos, D., Vernooij, M.W., 2015. The rotterdam scan study: design update 2016 and main findings. European journal of epidemiology 30, 1299–1315.
* Ingala et al. [2020] Ingala, S., Mazzai, L., Sudre, C.H., Salvadó, G., Brugulat-Serrat, A., Wottschel, V., Falcon, C., Operto, G., Tijms, B., Gispert, J.D., et al., 2020. The relation between apoe genotype and cerebral microbleeds in cognitively unimpaired middle-and old-aged individuals. Neurobiology of Aging 95, 104–114.
* Isensee et al. [2021] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H., 2021. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods 18, 203–211.
* Isensee et al. [2019] Isensee, F., Schell, M., Pflueger, I., Brugnara, G., Bonekamp, D., Neuberger, U., Wick, A., Schlemmer, H.P., Heiland, S., Wick, W., et al., 2019. Automated brain extraction of multisequence mri using artificial neural networks. Human brain mapping 40, 4952–4964.
* Jiménez-Balado et al. [2018] Jiménez-Balado, J., Riba-Llena, I., Garde, E., Valor, M., Gutiérrez, B., Pujadas, F., Delgado, P., 2018. Prevalence of hippocampal enlarged perivascular spaces in a sample of patients with hypertension and their relation with vascular risk factors and cognitive function. Journal of Neurology, Neurosurgery & Psychiatry 89, 651–656.
* Kester et al. [2014] Kester, M.I., Goos, J.D., Teunissen, C.E., Benedictus, M.R., Bouwman, F.H., Wattjes, M.P., Barkhof, F., Scheltens, P., van der Flier, W.M., 2014. Associations between cerebral small-vessel disease and alzheimer disease pathology as measured by cerebrospinal fluid biomarkers. JAMA neurology 71, 855–862.
* Kingma and Ba [2014] Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Kofler et al. [2022] Kofler, F., Shit, S., Ezhov, I., Fidon, L., Horvath, I., Al-Maskari, R., Li, H., Bhatia, H., Loehr, T., Piraud, M., Erturk, A., Kirschke, J., Peeken, J., Vercauteren, T., Zimmer, C., Wiestler, B., Menze, B., 2022. blob loss: instance imbalance aware loss functions for semantic segmentation. URL: https://arxiv.org/abs/2205.08209, doi:10.48550/ARXIV.2205.08209.
* Kuijf et al. [2019] Kuijf, H.J., Biesbroek, J.M., De Bresser, J., Heinen, R., Andermatt, S., Bento, M., Berseth, M., Belyaev, M., Cardoso, M.J., Casamitjana, A., et al., 2019. Standardized assessment of automatic segmentation of white matter hyperintensities and results of the wmh segmentation challenge. IEEE transactions on medical imaging 38, 2556–2568.
* Lin et al. [2017] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P., 2017. Focal loss for dense object detection, in: Proceedings of the IEEE international conference on computer vision, pp. 2980–2988.
* Maier et al. [2017] Maier, O., Menze, B.H., von der Gablentz, J., Häni, L., Heinrich, M.P., Liebrand, M., Winzeck, S., Basit, A., Bentley, P., Chen, L., et al., 2017. Isles 2015-a public evaluation benchmark for ischemic stroke lesion segmentation from multispectral mri. Medical image analysis 35, 250–269.
* Maier-Hein et al. [2020] Maier-Hein, L., Reinke, A., Kozubek, M., Martel, A.L., Arbel, T., Eisenmann, M., Hanbury, A., Jannin, P., Müller, H., Onogur, S., et al., 2020. Bias: Transparent reporting of biomedical image analysis challenges. Medical image analysis 66, 101796.
* Mendrik et al. [2015] Mendrik, A.M., Vincken, K.L., Kuijf, H.J., Breeuwer, M., Bouvy, W.H., De Bresser, J., Alansary, A., De Bruijne, M., Carass, A., El-Baz, A., et al., 2015. Mrbrains challenge: online evaluation framework for brain image segmentation in 3t mri scans. Computational intelligence and neuroscience 2015.
* Menze et al. [2014] Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboom, J., Wiest, R., et al., 2014. The multimodal brain tumor image segmentation benchmark (brats). IEEE transactions on medical imaging 34, 1993–2024.
* Milletari et al. [2016] Milletari, F., Navab, N., Ahmadi, S.A., 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation, in: 2016 fourth international conference on 3D vision (3DV), IEEE. pp. 565–571.
* Modat et al. [2014] Modat, M., Cash, D.M., Daga, P., Winston, G.P., Duncan, J.S., Ourselin, S., 2014. Global image registration using a symmetric block-matching approach. Journal of medical imaging 1, 024003.
* Molinuevo et al. [2016] Molinuevo, J.L., Gramunt, N., Gispert, J.D., Fauria, K., Esteller, M., Minguillon, C., Sánchez-Benavides, G., Huesa, G., Morán, S., Dal-Ré, R., et al., 2016. The alfa project: a research platform to identify early pathophysiological features of alzheimer’s disease. Alzheimer’s & Dementia: Translational Research & Clinical Interventions 2, 82–92.
[2016] Østergaard, L., Engedal, T.S., Moreton, F., Hansen, M.B., Wardlaw, J.M., Dalkara, T., Markus, H.S., Muir, K.W., 2016\. Cerebral small vessel disease: capillary pathways to stroke and cognitive decline. Journal of Cerebral Blood Flow & Metabolism 36, 302--325. * Potter [2011] Potter, G.M., 2011. Neuroimaging of cerebral small vessel disease . * Rensma et al. [2018] Rensma, S.P., van Sloten, T.T., Launer, L.J., Stehouwer, C.D., 2018\. Cerebral small vessel disease and risk of incident stroke, dementia and depression, and all-cause mortality: a systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews 90, 164--173. * Ritter et al. [2011] Ritter, F., Boskamp, T., Homeyer, A., Laue, H., Schwier, M., Link, F., Peitgen, H.O., 2011. Medical image analysis. IEEE pulse 2, 60--70. * Ronneberger et al. [2015] Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer. pp. 234--241. * Sigurdsson et al. [2022] Sigurdsson, S., Aspelund, T., Kjartansson, O., Gudmundsson, E., Jonsson, P.V., van Buchem, M.A., Gudnason, V., Launer, L.J., 2022\. Cerebrovascular risk-factors of prevalent and incident brain infarcts in the general population: the ages-reykjavik study. Stroke 53, 1199--1206. * Sled et al. [1998] Sled, J.G., Zijdenbos, A.P., Evans, A.C., 1998. A nonparametric method for automatic correction of intensity nonuniformity in mri data. IEEE transactions on medical imaging 17, 87--97. * Smith et al. [2019] Smith, E.E., Biessels, G.J., De Guio, F., De Leeuw, F.E., Duchesne, S., Düring, M., Frayne, R., Ikram, M.A., Jouvent, E., MacIntosh, B.J., et al., 2019\. Harmonizing brain magnetic resonance imaging methods for vascular contributions to neurodegeneration. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring 11, 191--204. * Sudre et al. [2019] Sudre, C.H., Anson, B.G., Ingala, S., Lane, C.D., Jimenez, D., Haider, L., Varsavsky, T., Tanno, R., Smith, L., Ourselin, S., et al., 2019\. Let’s agree to disagree: Learning highly debatable multirater labelling, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 665--673. * Sudre et al. [2015] Sudre, C.H., Cardoso, M.J., Bouvy, W.H., Biessels, G.J., Barnes, J., Ourselin, S., 2015\. Bayesian model selection for pathological neuroimaging data applied to white matter lesion segmentation. IEEE transactions on medical imaging 34, 2079--2102. * Tillin et al. [2013] Tillin, T., Hughes, A.D., Mayet, J., Whincup, P., Sattar, N., Forouhi, N.G., McKeigue, P.M., Chaturvedi, N., 2013\. The relationship between metabolic risk factors and incident cardiovascular disease in europeans, south asians, and african caribbeans: Sabre (southall and brent revisited)—a prospective population-based study. Journal of the American College of Cardiology 61, 1777--1786. * Timmins et al. [2021] Timmins, K.M., van der Schaaf, I.C., Bennink, E., Ruigrok, Y.M., An, X., Baumgartner, M., Bourdon, P., De Feo, R., Noto, T.D., Dubost, F., Fava-Sanches, A., Feng, X., Giroud, C., Group, I., Hu, M., Jaeger, P.F., Kaiponen, J., Klimont, M., Li, Y., Li, H., Lin, Y., Loehr, T., Ma, J., Maier-Hein, K.H., Marie, G., Menze, B., Richiardi, J., Rjiba, S., Shah, D., Shit, S., Tohka, J., Urruty, T., Walińska, U., Yang, X., Yang, Y., Yin, Y., Velthuis, B.K., Kuijf, H.J., 2021\. 
Comparing methods of detecting and segmenting unruptured intracranial aneurysms on tof-mras: The adam challenge. NeuroImage 238, 118216\. URL: https://www.sciencedirect.com/science/article/pii/S1053811921004936, doi:https://doi.org/10.1016/j.neuroimage.2021.118216. * Van Straaten et al. [2006] Van Straaten, E.C., Fazekas, F., Rostrup, E., Scheltens, P., Schmidt, R., Pantoni, L., Inzitari, D., Waldemar, G., Erkinjuntti, T., Mäntylä, R., et al., 2006\. Impact of white matter hyperintensities scoring method on correlations with clinical data: the ladis study. Stroke 37, 836--840. * Vernooij et al. [2008] Vernooij, M., van der Lugt, A., Ikram, M.A., Wielopolski, P., Niessen, W., Hofman, A., Krestin, G., Breteler, M., 2008\. Prevalence and risk factors of cerebral microbleeds: the rotterdam scan study. Neurology 70, 1208--1214. * Wardlaw et al. [2013] Wardlaw, J.M., Smith, E.E., Biessels, G.J., Cordonnier, C., Fazekas, F., Frayne, R., Lindley, R.I., T O’Brien, J., Barkhof, F., Benavente, O.R., et al., 2013\. Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration. The Lancet Neurology 12, 822--838. * Wiesenfarth et al. [2021] Wiesenfarth, M., Reinke, A., Landman, B.A., Eisenmann, M., Saiz, L.A., Cardoso, M.J., Maier-Hein, L., Kopp-Schneider, A., 2021\. Methods and open-source toolkit for analyzing and visualizing challenge results. Scientific reports 11, 1--15. * Wright and Demeure [2021] Wright, L., Demeure, N., 2021\. Ranger21: a synergistic deep learning optimizer. arXiv preprint arXiv:2106.13731 . * Yates et al. [2014] Yates, P.A., Villemagne, V.L., Ellis, K.A., Desmond, P.M., Masters, C.L., Rowe, C.C., 2014\. Cerebral microbleeds: a review of clinical, genetic, and neuroimaging associations. Frontiers in neurology 4, 205\. * Yushkevich et al. [2016] Yushkevich, P.A., Gao, Y., Gerig, G., 2016. Itk-snap: An interactive tool for semi-automatic segmentation of multi-modality biomedical images, in: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE. pp. 3342--3345. * Zhang et al. [2014] Zhang, C., Chen, Q., Wang, Y., Zhao, X., Wang, C., Liu, L., Pu, Y., Zou, X., Du, W., Pan, Y., et al., 2014\. Risk factors of dilated virchow-robin spaces are different in various brain regions. PloS one 9, e105505. * Zhu et al. [2010] Zhu, Y.C., Tzourio, C., Soumaré, A., Mazoyer, B., Dufouil, C., Chabriat, H., 2010\. Severity of dilated virchow-robin spaces is associated with age, blood pressure, and mri markers of small vessel disease: a population-based study. Stroke 41, 2483--2490.
# PURE: Passive mUlti-peRson idEntification via Deep Footstep Separation and Recognition

Chao Cai, Ruinan Jin, Peng Wang, Liyuan Ye, Hongbo Jiang, and Jun Luo

C. Cai and J. Luo are with the School of Computer Engineering, Nanyang Technological University, Singapore 639798. E-mail: {chris.cai<EMAIL_ADDRESS>R. Jin is with the School of Information and Safety Engineering, Zhongnan University of Economics and Law, e-mail<EMAIL_ADDRESS>P. Wang and L. Ye are with the School of Electronic Information and Communications, Huazhong University of Science and Technology, China. E-mail<EMAIL_ADDRESS>H. Jiang (corresponding author) is with the College of Computer Science and Electronic Engineering, Hunan University. E-mail<EMAIL_ADDRESS>

###### Abstract

Recently, passive behavioral biometrics (e.g., gesture or footstep) have become promising complements to conventional user identification methods (e.g., face or fingerprint) under special situations, yet existing sensing technologies require lengthy measurement traces and cannot identify multiple users at the same time. To this end, we propose PURE as a passive multi-person identification system leveraging deep learning enabled footstep separation and recognition. PURE passively identifies a user by deciphering the unique “footprints” in his/her footsteps. Different from existing gait-enabled recognition systems that incur a long sensing delay to acquire many footsteps, PURE can recognize a person from as few as a single step, substantially cutting the identification latency. To make PURE adaptive to walking pace variations, environmental dynamics, and even unseen targets, we apply an adversarial learning technique to improve its domain generalizability and identification accuracy. Finally, PURE can defend itself against replay attacks, enabled by the richness of footsteps and its spatial awareness. We implement a PURE prototype using commodity hardware and evaluate it in typical indoor settings. Evaluation results demonstrate a cross-domain identification accuracy of over 90%.

###### Index Terms
User identification, source separation, footstep recognition, adversarial learning.

## 1 Introduction

User identification technology is an indispensable element for building smart indoor applications, such as elderly and child care, personalized customer service, and surveillance in sensitive zones. As one example, a smart home recognizing its elderly host (and his/her walking direction) may predict his/her intentions and switch on home appliances (e.g., lights or TVs) accordingly. Taking a supermarket as another example, identifying customers and hence retrieving their relevant shopping histories may enable a salesperson (or simply an advertising system) to make more appropriate recommendations and thus be more likely to close a sale.

Existing solutions for user identification, ranging from conventional computer vision [1, 2, 3, 4] and biometrics [5, 6, 7] to the emerging WiFi sensing [8, 9, 10, 11], have shown promising results for several important applications. However, the inherent limitations of existing solutions have prevented them from being widely applicable. In particular, computer vision techniques require good lighting conditions and line-of-sight (LoS), and they may raise critical privacy concerns. Biometrics often demand wearable instruments or users' active involvement, hence causing potential discomfort and inconvenience.
WiFi sensing approaches leveraging gait profiles or breathing patterns appear to be viable, but they often fail to work in practice due to severe interference from WiFi's main communication function and from other co-spectrum devices. Recent developments in passive behavioral biometrics [12, 13] (e.g., footstep-enabled identification [14, 15]) can be promising alternatives to the aforementioned solutions, as they incur no privacy issues and suffer less interference thanks to their limited sensing range. However, these approaches often require special sensing hardware [15, 12, 13] or multiple long-time footstep measurements [14]. Besides, they can only handle one user at a time, rendering them less appealing for practical applications where multiple users may appear simultaneously.

In this paper, we revisit footstep-enabled passive behavioral biometrics and propose PURE (Passive mUlti-peRson idEntification) to achieve lightweight user identification, exploiting both commodity sensing devices and up-to-date deep learning techniques. PURE is built on a key observation that footsteps carry “footprints” unique to individuals and can thus be leveraged for effective user identification. These footprints can be passively captured by commodity acoustic sensing hardware, totally removing the need for active user involvement. PURE aims to extract such footprint information from as few as a single step, thus enabling much faster identification than existing gait-oriented systems. In addition, PURE targets simultaneous multi-user identification, robustness against various background interference, as well as immunity to replay attacks.

Implementing the above ideas in a practical system entails several technical challenges. First of all, background noise and interference (especially voices) may overwhelm footsteps, significantly affecting system performance; both detecting and extracting footsteps under such strong interference can be a formidable task. Second, footstep variations, caused by factors such as different walking paces, can make the same person's footsteps exhibit distinct features and hence degrade identification performance. In addition, footsteps can carry environment-dependent information (e.g., floor material). Such domain conditions are largely irrelevant to individuals' unique footprints and can thus seriously degrade identification performance if not sufficiently removed. Last but not least, a replay attack, which records someone's footsteps and later cheats the identification system by playing back the recorded sounds, could be a major hurdle to practical adoption in certain applications.

Figure 1: Mel-Frequency Cepstral (MFC) features of footsteps generated by five persons clearly show distinctive profiles.

In PURE, we tackle these challenges via a series of careful designs. In the presence of continuous voice signals, we explore the rhythmic patterns in the time-frequency (TF) representation to detect footsteps and employ a blind source separation algorithm with a priori knowledge to extract them. To exclude feature variations caused by environment-dependent information and walking pace discrepancies, we train the user identifier (a predictor) via an adversarial domain adaptation scheme to improve its generalizability. Finally, we leverage the following fact to thwart replay attacks: replayed sounds exhibit static spatial characteristics (e.g., Angle-of-Arrival) and reveal an inconsistency between walking speed and step frequency.
To summarize, this paper makes the following contributions:

* We propose PURE, an acoustic passive multi-person identification system with little infrastructure cost.
* We innovatively employ an adversarial learning scheme to combat feature variations introduced by environment-dependent information or heterogeneous walking paces, thus improving the system generalizability and identification performance.
* We leverage the dynamic and smoothly changing spatial characteristics extracted from both structure-borne and air-borne footsteps to thwart the challenging replay attack.
* We implement a PURE prototype using commodity hardware and extensively evaluate its performance under various practical settings; the results demonstrate a cross-domain identification accuracy of up to 90%.

The rest of this paper is organized as follows: Sec. 2 discusses footstep basics. In Sec. 3, we elaborate on the details of the system design. Sec. 4 reports extensive performance evaluation results. We present a literature review in Sec. 5, and finally conclude the paper in Sec. 6.

## 2 Background and Motivation

We first explain the basic acoustics of footsteps, and then provide rationales for our later designs via a few preliminary measurement studies.

### 2.1 Basic Acoustics of Footstep

When a foot touches the ground, it causes minor vibrations at the impact point and radiates energy via both the air and the solid medium beneath the surface. The acoustic pressure radiated from this impact point can be characterized by Rayleigh's surface integral [16]:

$p\left(r,\theta,t\right)=\frac{-2\rho}{\pi md}\int_{R}\int_{\phi}R\sum_{j}\sum_{k}\left(-1\right)^{\frac{j-1}{2}}\sin\left[j\pi\left(\frac{1}{2}+\frac{R}{a}\right)\right]\times\left[\frac{\mathrm{d}F}{\mathrm{d}t}\left(t-d/c\right)*\cos\left(\omega_{jk}\left(t-d/c\right)\right)\right]\mathrm{d}R\,\mathrm{d}\phi,$ (1)

where $(r,\theta)$ describes the position relative to the impact point with respect to the floor plane, $\rho$ is the mass density that characterizes the medium properties, $c$ is the speed of acoustic signals in a certain medium, $\omega$ is the frequency, $a$ is a constant, $d$ is the length of the leg, $F$ is the impact force (zero at all times except during the impact period), and $*$ denotes convolution.

The acoustic pressure generated by the impact event mostly radiates through two common media. The component propagating through the piston-like and non-dispersive air channel (referred to as air-borne hereafter) has a constant speed (e.g., 340 m/s at a temperature of 25°C [16]), and the corresponding waveform remains identical along the propagation path as long as there are no multipath reverberations. The other component, traversing the solid medium as a bending wave (referred to as structure-borne hereafter), exhibits a dispersive phenomenon where the propagation speed $c_{f}$ (ranging from 2000 m/s to 3000 m/s) of a specific signal component is a function of its frequency $f$:

${c_{f}}=\sqrt[4]{Eh{f^{2}}\left[12\rho(1-v_{p}^{2})\right]^{-1}},$ (2)

where $E$, $\rho$, and $h$ are constants that characterize the properties of the medium: $E$ quantifies the elasticity, $\rho$ is the mass density, $h$ is the thickness, and $v_{p}$ is the phase velocity. Eqn. (2) implies that when detecting structure-borne footsteps from different distances relative to the impact point, the corresponding waveform can be different due to the frequency-dependent propagation speeds.
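To make the dispersion and speed gap concrete, the following minimal Python sketch evaluates Eqn. (2) and the arrival-time gap exploited in Sec. 2.3; the material constants are illustrative placeholders, not values from our measurements.

```python
import numpy as np

# Minimal sketch of Eqn. (2); E, h, rho, and v_p are illustrative
# placeholders, not measured values from the paper.
def bending_wave_speed(f, E=3e10, h=0.2, rho=2300.0, v_p=0.2):
    """Frequency-dependent speed c_f of the structure-borne (bending) wave."""
    return (E * h * f**2 / (12.0 * rho * (1.0 - v_p**2))) ** 0.25

# The air-borne component travels at a constant ~340 m/s, so at distance d
# the clean structure-borne window (from the earliest structure-borne
# arrival at ~3000 m/s until the air-borne arrival) is:
C_AIR, C_STRUCT, d = 340.0, 3000.0, 2.0
gap = d / C_AIR - d / C_STRUCT
print(f"clean window at {d} m: {gap * 1e3:.1f} ms "
      f"(~{int(gap * 192_000)} samples at 192 kHz)")
```

This is the same arithmetic used in Sec. 2.3 to argue that the two components can be separated in time.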
Figure 2: MFC feature embeddings (a) under different environments and (b) under different walking speeds: s1 = 0.2 m/s, s2 = 0.5 m/s, s3 = 1 m/s; “other” denotes other users at 0.2 m/s.

### 2.2 Richness of Footstep Acoustic Profile

Eqns. (1) and (2) together show that a footstep contains both air-borne and structure-borne signals involving a rich set of frequency components, which carry unique identity information. For example, both a person's weight (impact force) and walking style (duration of impact) are closely related to the generated acoustic pressures. To verify the above intuition, we record footsteps from five persons under the same circumstances and inspect the corresponding Mel-Frequency Cepstrals (MFC) [17]. The MFC features depicted in Fig. 1 clearly show distinctive visual clues and thus imply the possibility of using footsteps for person identification. We then carry out measurements under different environments, and Fig. 2(a) plots the embedded features. One may observe that the low-dimensional features of different users show clear boundaries, demonstrating the possibility of effective identification using footsteps. We also conduct a simple user study by playing several recordings of footsteps produced by different persons; all participants in the study indeed claim that they can tell perceivable differences among these recordings.

The distinctiveness of acoustic features in footsteps from different persons lays the foundation for our footstep-enabled user identification system. However, the footstep features of the same person can be affected by domain conditions such as varying walking paces, as demonstrated by Fig. 2(b). These feature distinctions, introduced by domain conditions (including, e.g., environment heterogeneity and walking pace variations), can be particularly detrimental to the identification accuracy and thus should be excluded.

### 2.3 Differences between Structure- and Air-borne Footsteps

The structure-borne and air-borne components of footsteps have a sharp difference in their propagation speeds, as already discussed in Sec. 2.1; this offers us an opportunity to physically separate them. For instance, when the sampling rate is 192 kHz and we detect the footsteps at a distance of 2 m, a clean set of structure-borne signals (due to the propagation difference) lasts for $\frac{2}{340}-\frac{2}{3000}\approx 5.2$ ms, an equivalent of about 1000 samples.

Figure 3: A footstep waveform contains both structure- and air-borne components that are temporally separated.

As clearly shown by the measurements illustrated in Fig. 3, the structure-borne component indeed arrives ahead of the air-borne one. Separating these two components allows us to examine their respective natures. As the air-borne component appears to have more complicated waveforms, we conjecture that it may be more suitable for the identification purpose than its structure-borne counterpart. To verify this hypothesis, we have conducted measurements on the five persons in several environments using commodity microphones. We then utilize a Gaussian Mixture Model (GMM) [18] to identify those persons based on either structure-borne or air-borne components. The identification accuracy comparison under different Signal-to-Noise Ratios (SNRs) is shown in Fig. 4(a), which confirms the advantage of adopting the air-borne component for achieving a higher identification accuracy.
Also, the experiments reveal that using the traditional GMM method for identification is practically infeasible, since the required SNR is over 60 dB.

Figure 4: Examining the properties of structure-borne and air-borne footstep components: (a) identification accuracy comparison when a GMM is applied to air- and structure-borne signals; (b) feature embeddings from different locations, showing that the structure-borne component carries distance information.

### 2.4 The Edge of Structure-borne Footstep

Although the structure-borne footstep is less representative than its air-borne counterpart for identifying persons, it has a unique property, namely acoustic dispersion, as shown by Eqn. (2). The dispersion phenomenon indicates that the structure-borne waveform is modulated by distance, suggesting the possibility of using it for ranging. Our measurements in Fig. 4(b), showcasing clear boundaries between feature embeddings of structure-borne footsteps from different locations, further demonstrate the feasibility of using structure-borne footsteps for ranging. This range information, together with the angle of signal arrival, may give us an edge over attackers who replay footstep recordings, because the smoothly changing spatial characteristics of human footsteps cannot be fully imitated.

Figure 5: The system architecture of PURE.

## 3 System Design

PURE consists of a pipeline of signal processing and deep learning modules, as shown in Fig. 5. It employs a microphone array [19] to capture raw audio streams and then denoises them via background spectral subtraction, removing most stationary noise. Footstep detection is then performed by inspecting energy changes, as well as by adopting an audio classifier. During this process, elaborated in Sec. 3.1, if there exists continuous interference such as voices, a source separation algorithm kicks in to decouple footsteps from the sound mixture, as detailed in Sec. 3.2. Finally, PURE applies a neural network to identify the user behind each footstep (Sec. 3.3), and it also extracts spatial information to counter replay attacks (Sec. 3.4).

### 3.1 Background Noise Suppression and Footstep Detection

Indoor places often contain common colored noise [20] that can affect the SNR of captured footsteps and thus should be removed. From the measurements in Fig. 2, we can see that the spectra of footsteps spread over almost the entire available bandwidth; therefore, using a common band-pass filter to trim out-of-band noise is infeasible. To this end, we utilize a multi-band spectral subtraction method [20] to suppress background noise and obtain about a 3 dB gain in SNR.

Figure 6: Low-dimensional outputs of the GMM classifier show clear boundaries between different acoustic noises.

After that, we apply abrupt energy detection, characterized by the Root Mean Square (RMS) [21], for candidate footstep detection. The RMS of a sequence $\mathbf{x}=\{x_{1},x_{2},\cdots,x_{L}\}$ is defined by $E_{\mathrm{RMS}}(\mathbf{x})=\sqrt{\frac{x^{2}_{1}+x^{2}_{2}+\cdots+x^{2}_{L}}{L}}$. If the detected energy $E_{\mathrm{RMS}}$ is above a certain threshold, a footstep may be captured (a minimal sketch of this screening step is given below). Since many transient noises can also exhibit high energy, a simple energy-based detection method would not be sufficient. To this end, we further utilize a Gaussian Mixture Model (GMM) [18] based audio classifier to recognize footsteps.
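The sketch below illustrates the RMS-based screening just described; the window length and threshold are illustrative assumptions, not the prototype's actual parameters.

```python
import numpy as np

# Minimal sketch of RMS-based candidate footstep detection, assuming a
# mono waveform `x` at sample rate `fs`; win_ms and thresh are
# illustrative, not the values used in the prototype.
def detect_candidates(x, fs, win_ms=30.0, thresh=0.02):
    """Return start indices of windows whose RMS exceeds the threshold."""
    win = int(fs * win_ms / 1e3)
    starts = []
    for i in range(0, len(x) - win, win):
        frame = x[i:i + win]
        rms = np.sqrt(np.mean(frame ** 2))  # E_RMS of the window
        if rms > thresh:
            starts.append(i)
    return starts
```

Windows flagged here are only candidates; the GMM classifier mentioned above then rejects high-energy transients that are not footsteps.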
We train this GMM classifier against common background noises, and it almost achieves oracle performance: as shown in Fig. 6, different acoustic signals can be clearly identified.

### 3.2 Footstep Extraction under Continuous Interference

To extract footsteps overwhelmed by continuous strong interference such as voice signals, we further utilize the Flexible Audio Source Separation Toolbox (FASST) [22]. This procedure is often suspended for the sake of efficiency; it is only invoked when footsteps suffer heavy interference. To leverage FASST, we use the rhythmic pattern of footsteps in the frequency domain to detect ongoing walking paces. More specifically, we utilize the auto-correlation of the STFT magnitude to detect the presence of footsteps. The reason for not directly using auto-correlation in the time domain is that the repetitive features in the time-domain waveform are likely to be below the noise floor due to their relatively low volume. Suppose $\mathbf{V}\in\mathbb{R}^{Q\times P}$ denotes the STFT power spectrum, where $Q$ and $P$ are the numbers of frequency bins and time frames, respectively. We first calculate the auto-correlation of $\mathbf{V}$ along the time frames to obtain $\mathbf{B}$, an operation that enhances the rhythmic pattern of footsteps:

$\mathbf{B}(i,j)=\frac{1}{P-j+1}\sum_{k=1}^{P-j+1}\mathbf{V}(i,k)\mathbf{V}(i,k+j-1).$ (3)

We then take the average of $\mathbf{B}$ over the frequency dimension and normalize the result by its first term: $b(j)=\frac{1}{Q}\sum_{i=1}^{Q}\mathbf{B}(i,j)$, followed by $b(j)\leftarrow b(j)/b(1)$. The walking rhythmic features can then be inspected in $\mathbf{b}=[b(j)]$, which we denote as the Averaged Spectrogram Auto-Correlation Coefficient (ASACC). We further check whether the periodicity exhibited in $\mathbf{b}$ lies in a reasonable range, as the frequency of a normal walking pace is usually within $[0.8,2]$ Hz [23].

Ideally, we could utilize $\mathbf{b}$ to obtain a soft binary mask for $\mathbf{V}$ and then extract the footsteps in the same vein as the proposal in [24]. However, as explained in [22], mask-based approaches, even with ground-truth labels, are inferior to local instantaneous Gaussian mixture models in source separation. Therefore, we still resort to FASST for better separation performance. Our measurements in Fig. 7 further verify the above intuition: the time-domain correlation in Fig. 7(a) exhibits no rhythmic features, while the rhythm is evident in the frequency domain, as shown in Fig. 7(b). To further remove residual noise in the separated signals, we apply the trained time-domain denoising network of [25] for footstep enhancement.

Figure 7: Time-domain correlation of a mixture of footsteps and voice fails to reveal rhythmic features (a), while in the frequency domain such features can be clearly observed (b).

Figure 8: Identification network (ID-Net) based on domain adversarial adaptation.

### 3.3 Domain-Adapted Identification

As mentioned in Sec. 2.2, domain conditions, including walking speed variations and environment dynamics, can be particularly detrimental to our identification system. To preclude such domain information and hence improve identification accuracy, we adopt domain adversarial adaptation.
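At the core of domain adversarial training [26] is a gradient reversal operation: an identity map in the forward pass whose gradient is negated in the backward pass. A minimal TensorFlow sketch is shown below (TensorFlow is what the prototype uses, cf. Sec. 4.1; the layer name is ours, not from the paper).

```python
import tensorflow as tf

# Minimal sketch of a gradient-reversal layer: identity in the forward
# pass, gradient scaled by -lam in the backward pass.
class GradientReversal(tf.keras.layers.Layer):
    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam

    def call(self, x):
        @tf.custom_gradient
        def _reverse(x):
            def grad(dy):
                return -self.lam * dy
            return tf.identity(x), grad
        return _reverse(x)
```

Placed between the feature extractor and the domain discriminator (described next), such a layer lets one optimizer minimize both losses while the extractor is effectively trained to maximize the discriminator's loss.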
We formulate the identification problem as a classification problem: given a specific footstep $\mathbf{x}\in{X}$, we aim to learn a maximum likelihood estimator $\mathcal{G}:{X}\rightarrow{I}$, where $X$ and $I$ represent the spaces of input (the STFT of a footstep waveform) and user identity, respectively. In reality, a footstep $\mathbf{x}$ is sampled from a joint distribution $P(\mathbf{x},u,d,v,e)$, where $u\in I$ characterizes footstep diversity, $d\in D$ denotes distance information, $v\in V$ represents walking pace, and $e\in E$ describes specific properties incurred by environment dynamics. We refer to $d,v,e$, which are potentially harmful to our identification purpose, as domain conditions. Apparently, only features characterizing the joint distribution $P(\mathbf{x},u)$ are desirable for our identification purpose, while features induced by the domain conditions $d,v,e$ should be eliminated. To this end, we use a deep neural network $G(\mathbf{x})$ to approximate $\mathcal{G}(\mathbf{x})$ and adopt adversarial learning [26] to train $G$ so as to preclude the impact of $d,v,e$.

Our ID-Net is made up of three parts, namely a feature extractor, an identity predictor, and a domain discriminator, as shown in Fig. 8. The feature extractor $\mathbf{f}=G_{f}(\mathbf{x},{\boldsymbol{\theta}}_{f})$, parameterized by ${\boldsymbol{\theta}}_{f}$, compresses the STFT of a footstep waveform $\mathbf{x}$ into $\mathbf{f}\in\mathbb{R}^{Q}$, a lower-dimensional feature vector. With $\mathbf{f}$, the identity predictor aims to recognize the user behind the footstep, while the goal of the domain discriminator is to identify different domains. The input to the domain discriminator is $\mathbf{f}$ weighted by a latent vector extracted from the identity predictor. The goal of ID-Net is to extract a domain-independent representation $\mathbf{f}$ so as to i) achieve high-accuracy user identification and ii) deceive the domain discriminator into misidentifying domains. It is this domain-independent $\mathbf{f}$ (involving only identity-specific features) that allows a cross-domain generalization of ID-Net.

The identity predictor $\hat{P}_{u}=G_{u}(\mathbf{f},{\boldsymbol{\theta}}_{u})$ outputs a probability matrix, whose element $\hat{p}^{(i,j)}_{u}$ represents the probability of the $i$-th footstep belonging to the $j$-th user. The loss function for training the parameters ${\boldsymbol{\theta}}_{u}$ is the categorical cross-entropy:

$\mathcal{L}_{u}=-|I|^{-1}\textstyle{\sum_{i=1}^{|I|}\sum_{j=1}^{N_{u}}}y^{(i,j)}_{u}\log\left(\hat{p}^{(i,j)}_{u}\right),$ (4)

where $N_{u}$ denotes the number of users and $y^{(i,j)}_{u}\in\{0,1\}$ indicates whether the $i$-th footstep truly belongs to the $j$-th user. To further force the identity predictor to learn features sufficiently discriminative and generalizable to identify unseen users, we introduce a center loss [3] in training the identity predictor:

$\mathcal{L}_{C}=0.5~{}\textstyle{\sum_{i=1}^{|I|}}\left\|\mathbf{f}_{i}-\mathbf{c}_{z_{i}}\right\|^{2}_{2},$ (5)

where $\mathbf{c}_{z_{i}}\in\mathbb{R}^{Q}$ is the center of the $z_{i}$-th class of deep features, $\mathbf{f}_{i}\in\mathbb{R}^{Q}$ is the $i$-th deep feature, and the summation is performed over the input set $I$. We update the centers in a mini-batch (of size $m$) manner, where the gradient of $\mathcal{L}_{C}$ with respect to $\mathbf{f}_{i}$ is calculated as $\mathbf{f}_{i}-\mathbf{c}_{z_{i}}$ and each center is updated as $\mathbf{c}_{j}\leftarrow\mathbf{c}_{j}-\frac{\sum^{m}_{i=1}\mathcal{I}(z_{i}=j)(\mathbf{c}_{j}-\mathbf{f}_{i})}{1+\sum^{m}_{i=1}\mathcal{I}(z_{i}=j)}$; here $\mathcal{I}(z_{i}=j)$ is an indicator function whose value is 1 if $z_{i}=j$ and 0 otherwise.
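A minimal TensorFlow sketch of Eqn. (5) together with the mini-batch center update above follows; the class count, feature dimension, and the damping rate `alpha` (as used in [3]) are illustrative assumptions.

```python
import tensorflow as tf

class CenterLoss(tf.keras.layers.Layer):
    """Sketch of Eqn. (5): squared distance of each feature to its class
    center, plus the mini-batch center update described above."""
    def __init__(self, num_classes, feat_dim, alpha=0.5, **kwargs):
        super().__init__(**kwargs)
        self.centers = tf.Variable(tf.zeros([num_classes, feat_dim]),
                                   trainable=False)
        self.alpha = alpha  # center update rate, following [3]

    def call(self, features, labels):
        # labels: int tensor of shape (batch,) with class indices z_i.
        centers_batch = tf.gather(self.centers, labels)      # c_{z_i}
        loss = 0.5 * tf.reduce_sum(tf.square(features - centers_batch))
        # Move each class center toward the mean of its batch features,
        # damped by the per-class count (+1), as in the update rule above.
        diff = centers_batch - features                      # c_{z_i} - f_i
        _, idx, counts = tf.unique_with_counts(labels)
        per_sample = tf.cast(tf.gather(counts, idx), tf.float32)[:, None]
        self.centers.scatter_sub(
            tf.IndexedSlices(self.alpha * diff / (per_sample + 1.0), labels))
        return loss
```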
We combine the center loss $\mathcal{L}_{C}$ with the categorical cross-entropy loss $\mathcal{L}_{u}$ to train the identity predictor as $\mathcal{L}_{\eta}=\mathcal{L}_{u}+\lambda\mathcal{L}_{C}$, where $\lambda$ is a scalar that balances the two losses. This loss function enables ID-Net's identity predictor to maximize inter-class margins and minimize intra-class distances, thereby improving its generalizability.

The loss for training the domain discriminator $G_{d}(\mathbf{f},{\boldsymbol{\theta}}_{d})$ is also a categorical cross-entropy:

$\mathcal{L}_{\delta}=-|I|^{-1}\textstyle{\sum_{i=1}^{|I|}\sum_{j=1}^{N_{d}}}y^{(i,j)}_{d}\log(\hat{\delta}^{(i,j)}),$ (6)

where $\hat{\delta}^{(i,j)}$ is the output probability for the $i$-th footstep originating from the $j$-th domain, $y^{(i,j)}_{d}\in\{0,1\}$ indicates the true domain, and $N_{d}$ denotes the number of domains. The overall training process aims to minimize both $\mathcal{L}_{\eta}$ and $\mathcal{L}_{\delta}$ by tuning their respective network parameters ${\boldsymbol{\theta}}_{u}$ and ${\boldsymbol{\theta}}_{d}$. In the meantime, ${\boldsymbol{\theta}}_{f}$ is tuned to minimize $\mathcal{L}_{\eta}$ but maximize $\mathcal{L}_{\delta}$ (via gradient reversal). This procedure forces $\mathbf{f}$ to retain only user-specific properties and discard those induced by domains, allowing ID-Net to handle footstep samples taken from unseen domains.

### 3.4 Thwarting Replay Attack via Spatial Clues

As a user identification system, PURE may be adopted in applications where security is a concern (e.g., assisting authentication and surveillance). Under these circumstances, PURE needs to be resilient to potential attacks, among which the replay attack is the most lethal. Similar to all other acoustics-based identification methods (e.g., speaker recognition [27]), footsteps can be overheard (and recorded) by a microphone and then replayed to attack the system by impersonating a certain user. Fortunately, footsteps contain dynamic and smoothly changing spatial clues induced by user movements. On the contrary, a recorded clip of footsteps often exhibits only static spatial characteristics or may otherwise suggest abnormal trajectories, so it should be readily recognizable upon fully extracting the spatial clues contained in footsteps.

To thwart replay attacks, we propose to leverage two pieces of spatial information, namely Time-of-Arrival (ToA) and Angle-of-Arrival (AoA), to filter out replayed footsteps. In particular, ToA is extracted from the structure-borne footstep and AoA is obtained from its air-borne counterpart. The key rationale is that footsteps from a live walking person exhibit two traits: i) the moving speed suggested by ToA is bounded, as one should not move too fast indoors (it is also suspicious for a real person to move too fast in a security-sensitive area), and ii) the moving trajectory implied by ToA should be naturally irregular, i.e., no person moves along a perfectly straight line. On the contrary, a replayed footstep clip always violates at least one of these two criteria, as explained next, regardless of whether the attacker is aware of our countermeasure or not. We formulate the defense against replay attacks as a hypothesis test [28], in which we define significance thresholds to judge whether a footstep can be accepted as authentic or rejected as replayed.
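A minimal sketch of this acceptance test is given below, using the two empirical thresholds defined in the next paragraph ($\bar{\pi}=0.8$, $\bar{\gamma}_{\mathrm{diff}}=10^{\circ}$); the per-step speed, step-frequency, and AoA estimates are assumed to come from R-Net and beamforming.

```python
import numpy as np
from scipy.stats import spearmanr

PI_BAR, GAMMA_BAR_DEG = 0.8, 10.0  # empirical thresholds (next paragraph)

def is_live(walk_speeds, step_freqs, aoas_deg):
    """Accept footsteps as authentic only if the speed/step-frequency
    correlation is high AND the observed AoAs vary enough over the trace."""
    pi, _ = spearmanr(walk_speeds, step_freqs)   # Spearman coefficient
    gamma_diff = np.max(aoas_deg) - np.min(aoas_deg)
    return (pi >= PI_BAR) and (gamma_diff >= GAMMA_BAR_DEG)
```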
In our implementation, two thresholds are defined: i) we use the Spearman coefficient [29] $\pi$ to quantify the correlation between walking speeds $\mathbf{v}$ and step frequencies $\mathbf{f}$, as replayed footsteps often exhibit rather low correlation while that of authentic footsteps is high; ii) we exploit the maximum difference between detected AoAs, $\gamma_{\mathrm{diff}}$, to check the motion state: the AoAs should exhibit variance instead of being static. To be more specific, if $\pi\geq\bar{\pi}$ and $\gamma_{\mathrm{diff}}\geq\bar{\gamma}_{\mathrm{diff}}$ are both satisfied, we accept the footsteps; otherwise, we reject them, where $\bar{\pi}=0.8$ and $\bar{\gamma}_{\mathrm{diff}}=10^{\circ}$ are empirically set based on measurements.

We apply beamforming techniques [30] to extract AoAs from footsteps, but obtaining ToA, or equivalently range, is non-trivial. Intuitively, a range-sensitive acoustic fingerprinting strategy based on Sec. 2.1 could be used, but such fingerprints can be affected by domain conditions. Therefore, we again resort to adversarial learning to preclude domain conditions with respect to ranging. Essentially, we train a neural network, R-Net, to infer range from a footstep $\mathbf{x}$. This R-Net follows the same design as ID-Net, except that the identity predictor is replaced by a range estimator (for regression) and the domain conditions are modified accordingly. We omit the training details given their similarity to those presented in Sec. 3.3.

We emulate a case to show how replayed footsteps exhibit abnormal properties and can thus be detected. We let a user (hence his/her footsteps) move from $[1,0]$ m to $[1,3.4]$ m, with a speed of 0.7 m/s (0.1 m/s variance) and a step frequency of 1 Hz (0.05 Hz variance). In the meantime, the microphone of PURE is located at $[1.5,2]$ m, while an attacker hides at the origin to record the footsteps. After a while, the attacker replays the recorded footsteps, trying to impersonate the legitimate user. As shown in Fig. 9(a), $\mathbf{v}_{\mathrm{replay}}$ has a significantly lower correlation with $\mathbf{f}$, compared with that of $\mathbf{v}_{\mathrm{live}}$: $\pi_{\mathrm{replay}}=0.32<\bar{\pi}=0.8<\pi_{\mathrm{live}}=0.87$. Meanwhile, Fig. 9(b) shows a fixed $\gamma_{\mathrm{diff}}=0$ for replayed footsteps, as opposed to the meaningful one for live footsteps.

Figure 9: Emulated scenario comparing (a) walking speed $\mathbf{v}$ vs. step frequency $\mathbf{f}$ (representing $\pi$) and (b) AoA vs. range changes (suggesting $\gamma_{\mathrm{diff}}$) between live and replayed footsteps.

## 4 Implementation and Performance Evaluation

Figure 10: Implementation with a microphone array (a) and the corresponding experiment setting for evaluations (b).

Figure 11: ASACC calculated from (a) clean footsteps, (b) voice, and (c) a mixture of clean footsteps and voice.

### 4.1 Implementation

We implement a PURE prototype using a circular microphone array backed by a Raspberry Pi 4, as shown in Fig. 10(a). We configure the sampling rate as 192 kHz, the highest configuration on this platform, to capture as much of the structure-borne signal as possible given its short duration. However, we downsample the air-borne footsteps to 16 kHz for identification purposes, in order to achieve computational efficiency.
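As a small illustration of this capture-then-decimate step (the function name is ours): since $192/16=12$, a polyphase resampler with a decimation factor of 12 suffices.

```python
from scipy.signal import resample_poly

# Keep the raw 192 kHz stream for structure-borne analysis; decimate the
# air-borne stream by 192/16 = 12 for identification.
def downsample_for_id(x_192k):
    return resample_poly(x_192k, up=1, down=12)
```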
The signal processing modules (including footstep detection, the GMM-based classifier, and FASST audio source separation) are implemented in C++. The deep learning modules, namely the denoising network, ID-Net, and R-Net, are implemented using TensorFlow [31]. For the denoising network, we follow the routines in [25], which allows PURE to achieve salient denoising performance and low runtime complexity.

ID-Net takes an STFT magnitude of dimension $32\times 16$ as its input, representing a footstep that lasts around 30 ms. The feature extractor has two convolution layers, each followed by a batch normalization layer and a ReLU activation layer. The first one has 32 filters with a $5\times 3$ kernel and the second one has 64 filters with a $3\times 2$ kernel. After that, we flatten the output of the last convolution layer, add a dropout layer with a drop probability of 0.65, and project it into a 16-dimensional feature vector. The identity predictor and domain discriminator both have only one fully connected layer with 16 neurons and adopt a sigmoid activation function. Their respective output sizes depend on the numbers of users and domains involved in the training set. For R-Net, the input is a footstep waveform with 500 samples. The feature extractor in R-Net has a 1D convolution layer, followed by a pooling layer and a fully connected layer; the corresponding sizes are 64, 16, and 16, respectively. A dropout layer with a probability of 0.5 is inserted before the fully connected layer. The range estimator and the domain discriminator each have only one hidden layer, both of width 16.

To gather training data for R-Net, we first deploy a centimeter-level localization system using Decawave UWB-based sensors [32]. We tie one sensor on a user's foot while he/she walks so as to obtain ground-truth locations, i.e., ToA and AoA labels. Also, we attach an IMU sensor to the user's leg to help trigger the microphone array, so that it correctly captures each footstep. We synthesize a training dataset for the denoising network with clean footsteps from [33, 34] and speech from TED talks [35]. The noise is extracted from the Diverse Environments Multichannel Acoustic Noise Database (DEMAND) [36]. We also collect footsteps on common floors (wood, stone) and in common circumstances (hall, indoor office, home with appliances). We configure the signal SNR at 5 dB, 10 dB, 20 dB, and 30 dB during training. We also utilize the footsteps produced by source separation to train the network to minimize residual interference signals.

### 4.2 Performance Evaluation

We present extensive experiments in this section. We start by evaluating the source separation algorithm, followed by the denoising network. Then we thoroughly verify the identification accuracy. Finally, we report the performance of defending against replay attacks. The experimental statistics, unless otherwise noted, are all obtained by repeating the same experiment 1,000 times.

#### 4.2.1 Source Separation Performance

As mentioned in Sec. 3.2, the source separation module is activated if we detect rhythmic features in the STFT spectrogram. Therefore, before source separation, we first evaluate the performance of this detection algorithm. Recall that we utilize the ASACC $\mathbf{b}$ calculated via Eqn. (3) for footstep detection; as the strongest energy of footsteps lies in the low-frequency range, we only use the first three frequency bins.
As shown in Fig. 11, only footsteps that exhibit rhythmic features generate periodic peaks in the ASACC of Fig. 11(a), whereas voice signals hold no such property in Fig. 11(b). When footsteps are mixed with voice, ASACC can still identify the rhythmic feature, as shown in Fig. 11(c). The detailed steps of this detection algorithm proceed as follows (a minimal sketch is given at the end of this subsection). After obtaining $\mathbf{b}$, we estimate the beating frequency $k$ in $\mathbf{b}$ via the Discrete Fourier Transform. If the magnitude $m_{k}$ of bin $k$ in the spectrum goes beyond the average of its subsequent 20 bins by a certain threshold (10 in our case), namely $m_{k}>{\frac{1}{20}\sum^{k+20}_{i=k+1}m_{i}}+10$, we accept that the current audio signal contains footsteps; otherwise, no footstep is detected and source separation is deactivated. This detection method allows us to achieve a 100% footstep detection rate even when the magnitude of the voice is higher than that of the footsteps.

To inspect the source separation performance, we first use clean footsteps blended with voice signals under different configurations as inputs and then check the quality of the separated signals. Specifically, we synthesize mixtures of footsteps and voice under different Source-to-Interference Ratios (SIRs) [37] as inputs and evaluate the performance using SIR and the Source-to-Distortion Ratio (SDR) [37]. These two evaluation metrics are widely used to quantify source separation performance, where $\mathrm{SIR}=10\log\frac{\|\mathbf{s}_{\mathrm{tgt}}\|^{2}_{2}}{\|\mathbf{e}_{\mathrm{itf}}\|^{2}_{2}}$ and $\mathrm{SDR}=10\log\frac{\|\mathbf{s}_{\mathrm{tgt}}\|^{2}_{2}}{\|\mathbf{e}_{\mathrm{itf}}+\mathbf{e}_{\mathrm{nie}}+\mathbf{e}_{\mathrm{atf}}\|^{2}_{2}}$, with $\mathbf{s}_{\mathrm{tgt}},\mathbf{e}_{\mathrm{itf}},\mathbf{e}_{\mathrm{nie}}$, and $\mathbf{e}_{\mathrm{atf}}$ being respectively the target signal, interference signal, noise signal, and signal artifacts. The higher the value of these metrics, the better the target signal quality. Both SIR and SDR in this experiment are calculated with footsteps as the primary signal, as opposed to common speech enhancement tasks where voices are the major concern.

Figure 12: Source separation performance: (a) SIR before separation vs. SDR after separation, showing that SDR remains almost constant; (b) SIR before separation vs. SIR after separation, showing that SIR is enhanced by source separation.

The results are shown in Fig. 12. From Fig. 12(a), we can see that the SDR after source separation remains almost constant under different SIRs. This implies that the source separation algorithm introduces little distortion to the original footsteps, which is notably important for the later identification performance. As a matter of fact, we can barely perceive any distortion when playing the separated footsteps, except for some residual voice. It is observable from Fig. 12(b) that after source separation, SIR is significantly boosted, indicating a successful removal of voice interference.

Figure 13: Time-domain waveform (a) and STFT spectrogram (b) of mixed, original, and separated footsteps.

We finally showcase the waveforms and STFT spectrograms of footsteps after source separation, compared with the mixed and originally recorded ones, in Fig. 13(a) and Fig. 13(b), respectively. In this experiment, the maximum voice magnitude is identical to that of the footsteps, in which case a listener mostly notices the voice but ignores the footsteps, hence the interference is strong. Even under such severe interference, the separated footstep waveform is already rather clean, as we can see from both Fig. 13(a) and Fig. 13(b): minor distortions and residuals may exist, but none of them introduce perceivable artifacts.
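To close this subsection, here is the promised minimal sketch of the ASACC-based presence test, following Eqn. (3); `V` is assumed to be the STFT power spectrogram restricted to the first three low-frequency bins, shaped (bins, frames).

```python
import numpy as np

# Minimal sketch of the ASACC-based footstep-presence test of Sec. 4.2.1;
# V: STFT power spectrogram of shape (bins, frames), restricted to the
# first three low-frequency bins. The margin of 10 follows the text.
def contains_footsteps(V, margin=10.0):
    Q, P = V.shape
    B = np.empty((Q, P))
    for j in range(P):                      # time-lag auto-correlation, Eqn. (3)
        B[:, j] = (V[:, :P - j] * V[:, j:]).mean(axis=1)
    b = B.mean(axis=0)                      # average over frequency bins
    b /= b[0]                               # normalize by the first term
    m = np.abs(np.fft.rfft(b))              # spectrum of the ASACC
    k = int(np.argmax(m[1:]) + 1)           # beating-frequency bin (skip DC)
    if k + 20 >= len(m):                    # not enough subsequent bins
        return False
    return m[k] > m[k + 1:k + 21].mean() + margin
```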
#### 4.2.2 Denoising Performance

The denoising network is used to filter out interference remaining after background subtraction and to remove residual signals from source separation. To evaluate the denoising performance, we synthesize mixtures of real-life recorded footsteps and voices under different SNRs and SDRs, generating a total of 1400 sound clips, each shorter than 20 s. We then check the respective SNR and SDR after denoising. Our measurements in Fig. 14(a) reveal a maximum SNR gain of around 30 dB (depending on the background noise type). The SNR gain is noticeable when the SNR of the input noisy signal is relatively low, e.g., below 20 dB; when the SNR is high, the network introduces only a little distortion to its inputs. The same goes for SDR, as shown in Fig. 14(b), and this little distortion introduces no perceivable difference to the inputs. Meanwhile, we can deactivate the denoising module when the SNR or SDR is high so as to prevent possible distortion, since the SNR or SDR can be roughly estimated. We finally showcase our denoising network for residual removal in Fig. 15. Both the time-domain waveform (Fig. 15(a)) and the frequency-domain spectrogram (Fig. 15(b)) indicate successful residual noise removal: the effect is visible in the smaller magnitude variations in the waveform and the less “blurred” spectrogram.

Figure 14: SNR (a) and SDR (b) before and after denoising.

Figure 15: Denoising performance for residual removal: time-domain waveform (a) and STFT spectrogram (b).

#### 4.2.3 Identification Performance

Figure 16: Identification accuracy at various SNRs (a) and SIRs (b).

We conduct extensive measurements to evaluate the identification performance utilizing real-life recorded footsteps from six users. In our first study, we utilize data samples from the same domain (the same environment and walking speed) and only vary the user identity. In this study, we deactivate the center loss and apply no adversarial learning to ID-Net. The results shown in Fig. 16(a) reveal that even under an SNR of -12.5 dB, ID-Net still achieves 87% accuracy, demonstrating the feasibility of applying footsteps to user identification.

Figure 17: (a) Identification accuracy improved after each processing step. (b) Increasing the sampling rate can boost the identification accuracy at low SNR.

It should be noted that we only utilize one step for identification; if we incorporate multiple footsteps and use a majority voting strategy, the accuracy can be boosted to $1-C^{2}_{3}\times\left(1-0.9\right)^{2}\times 0.9-C_{3}^{3}(1-0.9)^{3}=97.2\%$. We then check the identification accuracy under different levels of voice interference (SIR). The results in Fig. 16(b) show that even under severe interference (SIR = 0 dB), ID-Net can still achieve an accuracy of up to 87.13%. If we additionally add noise and thereby reduce the SNR of the footsteps, the accuracy drops to 60%, indicating ID-Net's vulnerability to strong background interference. This also emphasizes the need for source separation and denoising, as the former delivers an SIR gain and the latter provides an SNR gain, thereby promoting the identification accuracy.
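As a quick numeric check of the majority-voting arithmetic above (assuming independent per-step errors and a per-step accuracy of 0.9):

```python
from math import comb

# A majority vote over three independent steps is correct whenever at
# least two of the three steps are classified correctly.
p = 0.9
p_vote = sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))
print(f"{p_vote:.3f}")  # 0.972, i.e., 97.2%
```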
We next extensively explore the identification accuracy after source separation and denoising. The results shown in Fig. 17(a) reveal that source separation achieves a maximum accuracy gain of 59.9%, while the denoising network boosts the performance by an average of 5%. We then explore the impact of the sampling rate on the final identification performance; Fig. 17(b) shows the results. It can be observed that increasing the sampling rate contributes to better identification performance, yet the gain becomes marginal once the sampling rate hits 48 kHz, especially when the SNR is sufficiently high ($>20$ dB). Since a higher sampling rate requires more computational power but achieves little performance improvement, we adopt 16 kHz in our system.

Figure 18: Low-dimensional feature visualization without (a) and with (b) center loss, clearly showing the power of the center loss in maximizing inter-class boundaries while minimizing intra-class distances.

We next conduct an ablation study on the effectiveness of the center loss, whose impact on the classified features can be visualized in Fig. 18. It is observable that the center loss effectively maximizes inter-class boundaries and minimizes intra-class distances. This ability not only enhances the identification performance but also improves the generalizability of ID-Net. Applying the center loss can sometimes push the identification accuracy to almost 100%.

TABLE I: Accuracy without domain adaptation.

Accuracy (%) | Distance | Speed | Environment
---|---|---|---
Distance | 76.4 | 57.2 | 9.11
Speed | 57.2 | 62.01 | 14.31
Environment | 9.11 | 14.31 | 12.04

We test the identification performance under different domains, including speed, distance, and environment variations, in the following experiments. Specifically, our footsteps are captured under: 1) three levels of walking speed, namely 0.2 m/s, 0.5 m/s, and 1 m/s; 2) different distances ranging from 0 to 3 m; 3) heterogeneous environments, including a common indoor office, home (with appliances), hall, corridor, etc., that exhibit different ground materials and background interference. We first show the impact of domains on the identification performance when we deactivate the center loss and the domain discriminator. The results are displayed in Table I, and they show that domain conditions can have a notable impact on the identification accuracy. To read the statistics in Table I, each row and column indicates the domain variations involved in the training data. For instance, $(\text{row},\text{column})=\left(\text{Speed},\text{Distance}\right)=57.2\%$ means that when the training data involves speed and distance variations, the identification accuracy is 57.2%. According to Eqn. (1), distance should not impose any negative impact on the final results; but when the data only contains distance variations, the identification accuracy is only 76.4%. We believe that this is caused by 1) SNR degradation due to propagation loss and 2) structural differences from place to place that cause heterogeneous features. Speed variations, equivalently leading to different impact forces, degrade the identification accuracy to only 62.01%.
Environment dynamics, introducing different medium properties, undermine the identification accuracy the most.

TABLE II: Accuracy with domain adaptation.

Accuracy (%) | 0 Domains | 1 D. | 2 D. | 3 D.
---|---|---|---|---
One footstep | 1 | 88.6 | 84.92 | 81.75
2/3 | 1 | 95.92 | 90.33 | 88.16
3/5 | 1 | 96.53 | 94.47 | 90.73

We next explore the identification accuracy under domain adaptation. In particular, we evaluate the identification accuracy under different numbers of domains and footsteps. The average results from 100 trials are shown in Table II, where “2/3” means we incorporate three steps to identify each user and accept the result if the same identity appears at least twice. It is observable that domain adaptation can significantly improve the identification accuracy, pushing it from a minimum of 9% (without domain adaptation) to 81.75%. If incorporating multiple footsteps, the accuracy can be further improved to 90.73%.

Figure 19: (a) Identification accuracy when one footstep is mixed with components of another footstep; such a collision reduces the identification accuracy. (b) Identification accuracy drops when the number of users increases, as the collision rate grows with more users involved.

We finally explore the identification accuracy under multi-user scenarios. We first consider the case where each footstep is interfered with by only one other footstep. We check the identification accuracy when the footsteps overlap at different percentages; the results are displayed in Fig. 19(a). As the figure shows, the identification accuracy drops monotonically as the percentage of the overlapped region increases. However, the accuracy is still around 50% when two steps totally overlap, which implies that ID-Net can still recognize these two footsteps but is unable to distinguish them. We then evaluate the performance when multiple persons randomly walk in an indoor meeting room with the microphone array placed at the center. The results in Fig. 19(b) show that an identification accuracy of around 80% can still be achieved even when there are three users. As the number of users increases, collisions between different footsteps happen more frequently, hence the performance degrades.

#### 4.2.4 Defending Against Replay Attack

PURE leverages R-Net to extract spatial information, including range and AoA, from multiple consecutive footsteps to defend against replay attacks. In this section, we first present the performance of R-Net in spatial information extraction and then inspect the defense performance based on these signatures. We run over 1000 trials in a $6.8\times 4.2$ m$^2$ indoor office with the microphone array placed at the center. Fig. 20 shows the ranging and AoA estimation performance. In Fig. 20(a), we verify the ranging performance of R-Net under three cases to demonstrate its capability in domain adaptation. First, we utilize training data (70% of all data) from all identities and domains to train R-Net, and then verify the ranging performance using test data (the remaining 30%), which we refer to as Test with Domain Adaptation (Test w/ DA). Second, we randomly remove one identity from the training data and, after training, apply inference on this particular identity, referred to as Test new samples with Domain Adaptation (Test new samples w/ DA).
Third, we cut the domain discriminator from R-Net and apply inference using the same setting as the second case, denoted as Test new samples without Domain Adaptation (Test new samples w/o DA). The results in Fig. 20(a) exhibit a median error of around 0.3 m, even when testing on samples that come from other domains and never participate in the training process. Without domain adaptation, the median ranging error would reach up to 1 m. The comparison of the aforementioned results clearly demonstrates the salient performance of R-Net in domain adaptation as well as in ranging. Fig. 20(b) shows that the AoA estimation errors are below $10^{\circ}$. This salient performance lays the foundation of our defense mechanism against replay attacks.

Figure 20: Performance in spatial clue extraction: ranging (a) and AoA estimation (b).

We have checked our defense mechanism under several replay attack scenarios, including different walking trajectories and attacking positions. Our measurements reveal that if the trajectories contain complex shapes such as “L”- or circle-shaped segments, the detected $\pi$ easily violates the threshold $\bar{\pi}$, so we achieve 100% success in defending against these attacks. When only straight-line trajectories are involved, the variation of detected AoAs revealed by replay attacks never exceeds the preset $10^{\circ}$ threshold, while live footsteps reveal a minimum variation of around $32.8^{\circ}$ due to the location swing caused by the alternation between left and right legs. In conclusion, PURE can successfully defend against replay attacks.

## 5 Related Work

In this section, we survey the literature on user identification. While common identification techniques span broad categories, ranging from traditional computer vision to fingerprint sensing and iris scanning, they are rather irrelevant to our proposal. Therefore, we shall not review these common techniques but focus only on solutions that leverage emergent sensing techniques and adopt behavioral biometrics for user identification.

The proposals of [11, 10, 8, 9] exploit gait information to identify users while they walk. The basic idea behind these systems is that the particular walking cycle of each user can be sampled by WiFi signals. However, they may not be able to adapt to environment dynamics, and even walking direction variations can severely affect the identification accuracy. Meanwhile, they often fail to work in practice due to severe interference from WiFi's main communication function and from other co-spectrum devices. FootprintID [14] is a structural-vibration-based identification system. It employs a geophone [38] to sense the structural vibration caused by a footstep. For identification, it again relies on gait patterns extracted from multiple structural vibration measurements; the reported identification accuracy for 10 people may reach up to 96%. However, this promising solution still leaves many open issues, including sensor location variation, interference among multiple pedestrians, and footwear variation. Other similar behavioral-biometrics-enabled identification systems can be found in [39, 40, 41], as well as [13], where an accelerometer and a camera are used together as sensors. To summarize, existing technologies driven by behavioral biometrics often require multiple measurements and hence a long identification latency. On the contrary, PURE solves this problem elegantly by requiring as few as a single step.
While PURE can be viewed as a type of behavioral biometrics, it is closely related to voiceprint recognition [42, 43]; it can be regarded as a “footstep-print” identification system. PURE is similar to acoustic-fingerprint-based systems [42, 43], but it is totally passive and thus provides a better user experience. The work most similar to PURE is [15], where footstep patterns rather than gaits are used for identification. However, that proposal requires an excessive number of piezoelectric sensors to capture footstep signals, while PURE uses only a commodity microphone array, significantly reducing the deployment cost and making it widely applicable to indoor scenarios.

## 6 Conclusion

In this paper, we have explored the possibility of exploiting footsteps for passive user identification. We have proposed PURE, a multi-person identification system driven by a pipeline of signal processing and deep learning techniques. PURE demands as few as a single footstep to enable user identification and is immune to replay attacks. PURE even works under continuous voice interference, thanks to a novel source separation and denoising network. To make PURE work across different domains, we have exploited a domain-adversarial adaptation scheme with a center loss to further enhance its generalization ability across domains. We have implemented a prototype of PURE and extensively evaluated its performance; the results confirm that PURE achieves a cross-domain identification accuracy of up to 90%. Since PURE outperforms existing passive identification systems in both deployment cost and identification latency, we have reason to believe that PURE has the potential for wide adoption.

## References

* [1] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “SphereFace: Deep Hypersphere Embedding for Face Recognition,” in _Proc. of IEEE/CVF CVPR_, 2017, pp. 6738–6746.
* [2] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “ArcFace: Additive Angular Margin Loss for Deep Face Recognition,” in _Proc. of IEEE/CVF CVPR_, 2019, pp. 4685–4694.
* [3] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A Discriminative Feature Learning Approach for Deep Face Recognition,” in _Proc. of ECCV_, 2016, pp. 499–515.
* [4] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering,” in _Proc. of IEEE/CVF CVPR_, 2015, pp. 815–823.
* [5] B. Fang and Y. Y. Tang, “Elastic Registration for Retinal Images based on Reconstructed Vascular Trees,” _IEEE Transactions on Biomedical Engineering_, vol. 53, no. 6, pp. 1183–1187, 2006.
* [6] A. K. Jain, S. Prabhakar, and L. Hong, “A Multichannel Approach to Fingerprint Classification,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 21, no. 4, pp. 348–359, 1999.
* [7] A. Jain, L. Hong, and S. Pankanti, “Biometric Identification,” _Communications of the ACM_, vol. 43, no. 2, pp. 90–98, 2000.
* [8] W. Wang, A. X. Liu, and M. Shahzad, “Gait Recognition Using WiFi Signals,” in _Proc. of ACM UbiComp_, 2016, pp. 363–373.
* [9] C. Shi, J. Liu, H. Liu, and Y. Chen, “Smart User Authentication through Actuation of Daily Activities Leveraging WiFi-Enabled IoT,” in _Proc. of ACM Mobihoc_, 2017, pp. 5:1–10.
* [10] Y. Zeng, P. H. Pathak, and P. Mohapatra, “WiWho: WiFi-Based Person Identification in Smart Spaces,” in _Proc. of ACM/IEEE IPSN_, 2016, pp. 1–12.
* [11] J. Zhang, B. Wei, W. Hu, and S. S. Kanhere, “WiFi-ID: Human Identification using WiFi Signal,” in _Proc. of IEEE DCOSS_, 2016, pp. 75–82.
* [12] P. Connor and A. Ross, “Biometric Recognition by Gait: A Survey of Modalities and Features,” _Computer Vision and Image Understanding_, vol. 167, pp. 1–27, 2018.
* [13] Y. He, J. Zhang, H. Shan, and L. Wang, “Multi-Task GANs for View-Specific Feature Learning in Gait Recognition,” _IEEE Transactions on Information Forensics and Security_, vol. 14, pp. 102–113, 2019.
* [14] S. Pan, T. Yu, M. Mirshekari, J. Fagert, A. Bonde, O. J. Mengshoel, H. Y. Noh, and P. Zhang, “FootprintID: Indoor Pedestrian Identification through Ambient Structural Vibration Sensing,” in _Proc. of ACM UbiComp_, 2017, pp. 89:1–31.
* [15] R. Vera-Rodríguez, J. Mason, J. Fierrez, and J. Ortega-Garcia, “Comparative Analysis and Fusion of Spatiotemporal Information for Footstep Recognition,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 35, pp. 823–834, 2013.
* [16] A. Ross and G. Ostiguy, “Propagation of the Initial Transient Noise From an Impacted Plate,” _Journal of Sound and Vibration_, vol. 301, pp. 28–42, 2007.
* [17] S. Nakagawa, L. Wang, and S. Ohtsuka, “Speaker Identification and Verification by Combining MFCC and Phase Information,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 20, no. 4, pp. 1085–1095, 2012.
* [18] J. Portelo, M. Bugalho, I. Trancoso, J. Neto, A. Abad, and A. Serralheiro, “Non-speech Audio Event Detection,” in _Proc. of IEEE ICASSP_, 2009, pp. 1973–1976.
* [19] Seeed Studio, “ReSpeaker 6-Mic Circular Array Kit for Raspberry Pi,” https://wiki.seeedstudio.com/ReSpeaker_6-Mic_Circular_Array_kit_for_Raspberry_Pi/, accessed: 2020-08-08.
* [20] S. Kamath and P. Loizou, “A Multi-band Spectral Subtraction Method for Enhancing Speech Corrupted by Colored Noise,” in _Proc. of IEEE ICASSP_, 2002, pp. 4164–4172.
* [21] E. Yuce, S. Minaei, and S. Tokat, “Root-Mean-Square Measurement of Distinct Voltage Signals,” _IEEE Transactions on Instrumentation and Measurement_, vol. 56, no. 6, pp. 2782–2787, 2007.
* [22] A. Ozerov, E. Vincent, and F. Bimbot, “A General Flexible Framework for the Handling of Prior Information in Audio Source Separation,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 20, no. 4, pp. 1118–1133, 2012.
* [23] A. Rai, K. K. Chintalapudi, V. N. Padmanabhan, and R. Sen, “Zee: Zero-Effort Crowdsourcing for Indoor Localization,” in _Proc. of ACM MobiCom_, 2012, pp. 293–304.
* [24] Z. Rafii and B. Pardo, “REpeating Pattern Extraction Technique (REPET): A Simple Method for Music/Voice Separation,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 21, no. 1, pp. 73–84, 2013.
* [25] A. Defossez, G. Synnaeve, and Y. Adi, “Real Time Speech Enhancement in the Waveform Domain,” 2020.
* [26] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-Adversarial Training of Neural Networks,” _The Journal of Machine Learning Research (JMLR)_, vol. 17, no. 1, pp. 2096–2030, 2016.
* [27] A. Poddar, M. Sahidullah, and G. Saha, “Speaker Verification with Short Utterances: A Review of Challenges, Trends and Opportunities,” _IET Biometrics_, vol. 7, no. 2, pp. 91–101, 2018.
* [28] J. Neyman and E. S. Pearson, “On the Problem of the Most Efficient Tests of Statistical Hypotheses,” _Philosophical Transactions of the Royal Society of London_, 1933.
* [29] J. H. Zar, “Significance Testing of the Spearman Rank Correlation Coefficient,” _Journal of the American Statistical Association_, vol. 67, no. 339, pp. 578–580, 1972.
* [30] J. C. Chen, K. Yao, and R. E. Hudson, “Source Localization and Beamforming,” _IEEE Signal Processing Magazine_, vol. 19, no. 2, pp. 30–39, 2002.
* [31] TensorFlow, “An End-to-End Open Source Machine Learning Platform,” https://www.tensorflow.org/, accessed: 2020-08-08.
* [32] Decawave, “EVK1000 Evaluation Kit,” https://www.decawave.com/product/evk1000-evaluation-kit/, accessed: 2020-11-11.
* [33] Fesliyan Studios, “Footsteps Sound Effects Soundboard,” https://www.fesliyanstudios.com/royalty-free-sound-effects-download/footsteps-31?soundboard=1, 2020, accessed: 2020-11-6.
* [34] Epidemic Sound, “Footsteps Sound Effects,” https://www.epidemicsound.com/sound-effects/footsteps/, 2020, accessed: 2020-11-6.
* [35] TED, “TED: Ideas Worth Spreading,” https://www.ted.com, 2020, accessed: 2020-11-6.
* [36] J. Thiemann, N. Ito, and E. Vincent, “The Diverse Environments Multi-Channel Acoustic Noise Database (DEMAND): A Database of Multichannel Environmental Noise Recordings,” _The Journal of the Acoustical Society of America_, vol. 133, p. 3591, 2013.
* [37] E. Vincent, R. Gribonval, and C. Fevotte, “Performance Measurement in Blind Audio Source Separation,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 14, no. 4, pp. 1462–1469, 2006.
* [38] J. M. Reynolds, _An Introduction to Applied and Environmental Geophysics_, 2nd ed. Hoboken, NJ: Wiley, 2011.
* [39] D. Gafurov, E. Snekkenes, and P. Bours, “Gait Authentication and Identification Using Wearable Accelerometer Sensor,” in _2007 IEEE Workshop on Automatic Identification Advanced Technologies_, 2007, pp. 220–225.
* [40] H. J. Ailisto, M. Lindholm, J. Mantyjarvi, E. Vildjiounaite, and S.-M. Makela, “Identifying People from Gait Pattern with Accelerometers,” in _Biometric Technology for Human Identification II_, vol. 5779, 2005, pp. 7–14.
* [41] L. Rong, D. Zhiguo, Z. Jianzhong, and L. Ming, “Identification of Individual Walking Patterns Using Gait Acceleration,” in _2007 1st International Conference on Bioinformatics and Biomedical Engineering_, 2007, pp. 543–546.
* [42] N. Obin and A. Roebel, “Similarity Search of Acted Voices for Automatic Voice Casting,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 24, no. 9, pp. 1642–1651, 2016.
* [43] R. A. Rashid, N. H. Mahalin, M. A. Sarijari, and A. A. Abdul Aziz, “Security System Using Biometric Technology: Design and Implementation of Voice Recognition System (VRS),” in _2008 International Conference on Computer and Communication Engineering_, 2008, pp. 898–902.
# CoDrug: Conformal Drug Property Prediction with Density Estimation under Covariate Shift

Siddhartha Laghuvarapu
Department of Computer Science
University of Illinois Urbana-Champaign
Urbana, IL 61801
<EMAIL_ADDRESS>

Zhen Lin
Department of Computer Science
University of Illinois Urbana-Champaign
Urbana, IL 61801
<EMAIL_ADDRESS>

Jimeng Sun
Department of Computer Science
Carle Illinois College of Medicine
University of Illinois Urbana-Champaign
Urbana, IL 61801
<EMAIL_ADDRESS>

###### Abstract

In drug discovery, it is vital to confirm the predictions of pharmaceutical properties from computational models using costly wet-lab experiments. Hence, obtaining reliable uncertainty estimates is crucial for prioritizing drug molecules for subsequent experimental validation. Conformal Prediction (CP) is a promising tool for creating such prediction sets for molecular properties with a coverage guarantee. However, the exchangeability assumption of CP is often challenged by covariate shift in drug discovery tasks: most datasets contain limited labeled data, which may not be representative of the vast chemical space from which molecules are drawn. To address this limitation, we propose a method called CoDrug that employs an energy-based model leveraging both training data and unlabeled data, together with Kernel Density Estimation (KDE), to assess the densities of a molecule set. The estimated densities are then used to weigh the molecule samples while building prediction sets and rectifying for distribution shift. In extensive experiments involving realistic distribution drifts in various small-molecule drug discovery tasks, we demonstrate the ability of CoDrug to provide valid prediction sets and its utility in addressing the distribution shift arising from de novo drug design models. On average, using CoDrug can reduce the coverage gap by over 35% when compared to conformal prediction sets not adjusted for covariate shift.

## 1 Introduction

Drug discovery is a challenging and complex task, with a high failure rate and limited understanding of the chemical and biological processes involved. These factors make drug discovery an extremely costly and time-consuming endeavor. Recently, advances in deep learning have aimed to reduce the cost of drug discovery by proposing AI methods for developing accurate property prediction models and de novo drug design models:

* • Property prediction models aim to aid the laborious and expensive stages of drug discovery by building accurate supervised learning models that take a drug representation as input and output a target property [1, 2].
* • De novo drug design models, on the other hand, aim to discover new drug molecules that satisfy a set of pharmaceutical properties [3, 4, 5, 6].

With the high cost and significance of drug discovery, it is essential to have accurate and reliable uncertainty estimates in supervised learning models for property prediction. By providing set-valued or interval-valued estimates instead of relying solely on point estimates, uncertainty estimation enables more informed decision-making and reduces the risk of failures, making the drug discovery process more efficient. Conformal prediction (CP), pioneered by [7], offers a solution to uncertainty quantification for complex models like neural networks by constructing provably valid prediction sets (prediction intervals can be viewed as prediction sets, with each interval being a subset of $\mathbb{R}$) in supervised learning models.
Its application to drug property prediction has also been explored for various drug discovery tasks [8, 9, 10]. A crucial assumption in the CP framework is that the test samples are exchangeable with the holdout set used to calibrate the algorithm. In drug discovery, there is often a limited amount of available training or validation data. Furthermore, de novo drug design models or screening datasets sample molecules from a large chemical space, making the exchangeability assumption invalid. In this paper, we deal with a situation where the training data originates from a distribution $P(X)$, while the test data comes from a different distribution $P^{test}(X)$. However, in both cases, the molecular properties, determined by the conditional distribution $P(Y|X)$, remain the same, as they are governed by nature and unaffected by shifts in the input distribution, assuming testing parameters are stable. This is referred to as covariate shift. Although recent research in CP [11] suggests a method for correcting covariate shift, accurately estimating the precise level of covariate shift remains a practical challenge.

This paper proposes a novel and practical method for conformal drug property prediction, dubbed CoDrug, to improve coverage in the conformal prediction framework under covariate shift. We address the problem of non-exchangeability by quantifying the underlying covariate shift at test time and leverage recent advances in conformal prediction to obtain prediction sets. Further, we demonstrate how CoDrug can be applied to obtain valid uncertainty estimates w.r.t. a target property for molecules sampled from de novo drug design models. We summarize our main contributions below:

* • We propose a novel approach to create prediction sets for drug property prediction, dubbed CoDrug. Using kernel density estimates (KDE) and recent advances in CP, CoDrug corrects for covariate shift at test time and creates prediction sets whose coverage rate is closer to the target.
* • We show that the kernel density estimates are consistent, which means that, asymptotically, the covariate shift is precisely adjusted for and the coverage guarantee is recovered.
* • We demonstrate the loss of coverage in property prediction tasks induced by two forms of distribution shift: molecular scaffold splitting and molecular fingerprint splitting. Our experiments show that CoDrug effectively reduces the gap between actual and target coverage for prediction sets, with an average improvement of up to 35% compared to the conformal prediction method without covariate shift adjustment. Additionally, in our experiments on molecules generated by de novo drug design models, we observe a 60% reduction in the coverage gap on average.

## 2 Related Work

Recently, deep learning techniques have been extensively studied for their potential in drug discovery, specifically in developing accurate predictive and generative models. This has led to various architectures for predicting drug properties from SMILES/SELFIES strings [12], molecular graph representations [2, 1], and self-supervised learning [13]. Another line of research has focused on building generative models to discover novel molecules using variational autoencoders [5, 6] and reinforcement learning [3, 4, 14]. Furthermore, several methods have been proposed for addressing uncertainty quantification in molecule property prediction, utilizing various Bayesian techniques [15].
Recently, conformal prediction methods have gained increasing attention for drug property prediction [8, 9, 10, 16, 17]. However, these studies primarily focus on generating efficient conformal predictors without taking distribution shift into account. Although several benchmarking datasets [18, 19] and methods [20] have been developed for drug property prediction under distribution shift, the problem of uncertainty quantification under distribution shift remains open. A recent advancement in conformal prediction recovers the coverage guarantee under known covariate shift [11]. [21] built upon [11] and proposed the Feedback Covariate Shift (FCS) method for the task of protein design. In practice, one cannot know the exact densities needed to measure the covariate shift. Like [21], we also leverage [11], but a key difference is that the training density is well-defined in [21] but unknown in ours, requiring us to estimate it. Additionally, our focus diverges from [21] as we concentrate on molecule property prediction rather than protein design.

## 3 Preliminaries

Reliable estimation of drug properties is crucial for identifying potential drug candidates. Many essential drug properties, such as toxicity, efficacy, and drug-drug interactions, are formulated as classification problems. Consider a classification task with each data point $Z=(X,Y)\in\mathbb{R}^{d}\times[K]$ ($[K]=\{0,1,2,...,K-1\}$). For instance, in Fig. 1(a), we seek to construct prediction sets for the problem of solubility classification. (Note that in practice, most drug discovery tasks are formulated as binary classification problems with $K=2$, but we present the general form of the methodologies.) While building an accurate base classifier $f$ is important, we usually want more than a point estimate of the molecule's solubility; we also want some "confidence level". This can be encoded in the form of a prediction set denoted as $\hat{C}(X)\subseteq[K]$. The main goal we seek in such prediction sets is valid coverage: given a target (e.g., 90%), we would like to construct a set-valued prediction (Fig. 1(a)) such that, if a molecule is water soluble, this prediction set includes the label "water soluble" with at least 90% probability. Formally, given $1-\alpha\in(0,1)$ and a new test molecule $(X_{N+1},Y_{N+1})$, we would like $\hat{C}$ to be $1-\alpha$ valid:

$\mathbb{P}\{Y_{N+1}\in\hat{C}(X_{N+1})\}\geq 1-\alpha.$ (1)

The Conformal Prediction (CP) framework enables us to achieve the validity in Eq. 1; we expand on the details in Section 4.2. Remarkably, the only requirement of CP is a hold-out calibration set on which the base classifier $f$ is not trained. (Since the model $f$ was not trained on the calibration set, whatever over-fitting happens affects $P^{train}_{Y|X}$ but not $P^{cal}_{Y|X}$; roughly speaking, we assume the classifier's performance on the calibration set is similar to that on an unseen test set.) One critical assumption of typical CP methods is that the test and calibration data are i.i.d. (or exchangeable), which is rarely realistic in drug discovery tasks. On the other hand, although the distribution of molecules $X$ changes from calibration to test time, the conditional distribution $Y|X$ is unlikely to change, as the molecular properties are determined by nature and remain the same under similar experimental conditions.
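Before formalizing this, a toy numerical illustration may help. The following is a hedged sketch (synthetic one-dimensional data; all names and distributions are illustrative assumptions, not the paper's data) of an input distribution $P(X)$ that shifts while the labeling rule $P(Y|X)$ stays fixed:

```python
# Toy illustration of covariate shift: P(X) changes between calibration and
# test time while the labeling rule P(Y|X) stays fixed. All names and
# distributions here are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
label_rule = lambda x: (x > 0.5).astype(int)           # fixed P(Y|X), set by "nature"

x_cal = rng.normal(loc=0.0, scale=1.0, size=100_000)   # calibration inputs
x_test = rng.normal(loc=1.0, scale=1.0, size=100_000)  # shifted test inputs
y_cal, y_test = label_rule(x_cal), label_rule(x_test)

# The label frequencies differ purely because P(X) shifted, not P(Y|X):
print(y_cal.mean(), y_test.mean())   # roughly 0.31 vs 0.69
```

A calibration threshold tuned on `x_cal` no longer reflects the score distribution on `x_test`, which is exactly why the conformal quantile must be reweighted under covariate shift.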
Formally, if we denote our calibration set as $\{(X_{i},Y_{i})\}_{i=1}^{N}$ and the test point as $(X_{N+1},Y_{N+1})$, we have:

$\forall i\in[N],\;(X_{i},Y_{i})\stackrel{\text{i.i.d.}}{\sim}P^{cal}=P^{cal}_{X}\times P^{cal}_{Y|X}$ (2)

$(X_{N+1},Y_{N+1})\sim P^{test}=P^{test}_{X}\times P^{cal}_{Y|X}.$ (3)

It is important to note that the test distribution $P^{test}$ maintains the same conditional distribution $P^{cal}_{Y|X}$ as the calibration distribution, a phenomenon known as covariate shift. This shift is prevalent in de novo drug design models, which navigate a vast chemical space to pinpoint optimal molecules for a specific goal. However, in many drug discovery tasks, the datasets typically contain only a few thousand data points, representing a limited chemical space. Thus, when models trained on these smaller datasets are used on molecules drawn from the broader molecular space, they inevitably encounter covariate shift. Next, we lay out the exact details of constructing prediction sets in the presence of covariate shift for supporting drug discovery applications.

## 4 CoDrug Method

Figure 1: CoDrug overview: (a) A depiction of the conformal prediction (CP) framework. A valid prediction set includes the true label of the input molecule. (b) Standard procedure for computing quantiles from the calibration set when the test set is exchangeable. The calibration set's "non-conformity" scores are sorted, and the $(1-\alpha)$ quantile serves as the threshold for the conformal prediction set. (c), (d), (e) describe the CoDrug pipeline. (c) Training an energy-based model using labeled and unlabeled data. (d) Density estimation: the model from (c) is used to estimate the density of the calibration and test sets. (e) Calibration under covariate shift: first, likelihood ratios $w_{i}$ are computed from the densities in (d); then, the quantile is computed in a weighted fashion. Note how the quantile at $\alpha=0.3$ is shifted from 0.64 (in (b)) to 0.71 to account for the distribution shift.

### 4.1 Overview

In the subsequent subsections, we describe the three primary components of CoDrug. In Section 4.2, we first provide a brief overview of inductive conformal prediction, presenting a method for constructing valid prediction sets both without and with distribution shift, presuming oracle access to the unknown distributions $P_{X}^{test}$ and $P_{X}^{cal}$. Next, in Section 4.3, we present the training aspects of the base energy-based classifier, emphasizing additional regularization using unlabeled data to enhance its capability to model varying molecule distributions. Finally, in Section 4.4, we employ kernel density estimation (KDE) on the embeddings or logits of the energy model trained in Section 4.3 to estimate the unknown distributions $P_{X}^{test}$ and $P_{X}^{cal}$, and rectify covariate shift using the method of Section 4.2. As KDE is consistent, we regain the coverage guarantee asymptotically. Together, these elements constitute the pipeline depicted in Fig. 1.

### 4.2 Conformal Prediction Set

In this section, we explain how to use conformal prediction to construct valid prediction sets. We start with the case without covariate shift and then explain how to correct for it.

#### 4.2.1 Conformal Prediction without Covariate Shift

Conformal prediction, pioneered by [7], is a powerful framework to construct prediction sets with the guarantee in Eq. 1.
We assume that a base classifier $f$ is trained on a training set $\mathcal{D}_{train}$, and we have a hold-out calibration set $\mathcal{D}_{cal}$. To simplify notation, we denote the calibration set as $\{Z_{i}\}_{i=1}^{N}$ and the test point of interest as $Z_{N+1}$. We also abuse notation and use $\mathcal{D}$ to denote both the empirical calibration/test set and the underlying distribution. Note that we ignore the training samples because they are no longer used after the classifier $f$ is trained. We first introduce some useful definitions: the (empirical) CDF and the quantile function. The cumulative distribution function (CDF) $F$ of a set of values $\{v_{i}\}_{i=1}^{N}$ is defined as:

$F_{\{v_{i}\}_{i=1}^{N}}\vcentcolon=\nicefrac{1}{N}\sum_{i=1}^{N}\delta_{v_{i}},\text{ where }\delta_{v}(x)\vcentcolon=\mathbbm{1}\{x\geq v\}$ (4)

The quantile function with respect to a CDF $F$ is:

$Quantile(\beta;F)\vcentcolon=\inf\{x:F(x)\geq\beta\}$ (5)

Given a target coverage level $1-\alpha\in(0,1)$, the Mondrian inductive conformal prediction set (Mondrian ICP) is given by:

$\hat{C}(X_{N+1})\vcentcolon=\{y:1-p^{f}_{y}(X_{N+1})\leq t\}$ (6)

$\text{where }t\vcentcolon=Quantile(1-\alpha;F_{\{1-p^{f}_{Y_{i}}(X_{i})\}\cup\{\infty\}}).$ (7)

Here, $p^{f}_{y}(x)$ denotes the softmax output for class $y$ from model $f$. The values $\{v_{i}\}_{i=1}^{N}$ are defined by $v(x_{i},y_{i})=1-p^{f}_{y_{i}}(x_{i})$; they are called "nonconformity scores" [22] and measure how "anomalous" a point $z=(x,y)$ is with respect to other points from this distribution. Intuitively, we assign a score to each molecule using the same rule based on $f$, which is trained on a separate data split $\mathcal{D}_{train}$. We then choose a threshold $t$ that is larger than a $1-\alpha$ fraction (e.g., 90%) of the calibration scores. Because of our i.i.d. assumption, if we sample another molecule $Z_{N+1}$ from the same distribution, we expect its score to be lower than this threshold with probability $1-\alpha$. For example, in Fig. 1(b), notice how the threshold $t$ is computed as the value of the $Quantile$ function at $\alpha=0.3$, i.e., the 0.7 quantile ($t=0.64$ in this case). We formally state the coverage guarantee without covariate shift in the following theorem:

###### Theorem 4.1.

Assume i.i.d. $\{(X_{i},Y_{i})\}_{i=1}^{N+1}$. The $\hat{C}$ in Eq. 6 satisfies:

$\mathbb{P}\{Y_{N+1}\in\hat{C}(X_{N+1})\}\geq 1-\alpha.$ (8)

Remarks: Theorem 4.1 is a result of classical Mondrian inductive conformal prediction [7]. In fact, in the classification setting, instead of the i.i.d. assumption, one could make the slightly milder assumption that data are exchangeable within each class.

#### 4.2.2 Conformal Prediction with Covariate Shift

While Theorem 4.1 provides a nice first step, the i.i.d. assumption poses a significant limitation in drug discovery. As mentioned in Eq. 3, the distributions of $X$ on $\mathcal{D}^{test}$ and $\mathcal{D}^{cal}$ can differ. In Eq. 7, we used the empirical CDF $F$ (Eq. 4) to choose the threshold $t$. This relied on our i.i.d. assumption: a particular molecule type appears with equal probability/density in both the calibration and test sets. This is no longer the case with covariate shift, which means our $F$ needs to account for such differences in $P_{X}$. Formally, recall that $P_{X}^{cal}$ and $P_{X}^{test}$ represent the densities of the molecule $X$ for the calibration and test sets.
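As a concrete reference point before introducing weights, the unweighted construction of Eqs. 6-7 can be sketched in a few lines (a minimal sketch; array names such as `cal_probs` and `cal_labels` are illustrative assumptions, not the paper's code). The covariate-shift correction developed below modifies only the quantile step.

```python
import numpy as np

def mondrian_icp_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Per-class (Mondrian) split-conformal prediction sets, Eqs. 6-7."""
    n_test, n_classes = test_probs.shape
    prediction_sets = np.zeros((n_test, n_classes), dtype=bool)
    for k in range(n_classes):
        # Nonconformity scores 1 - p^f_{y_i}(x_i) on calibration points of class k.
        scores = np.sort(1.0 - cal_probs[cal_labels == k, k])
        n = scores.size
        # Quantile(1 - alpha) of the empirical CDF augmented with {+infinity}
        # (Eq. 7): the ceil((1 - alpha) * (n + 1))-th smallest score.
        rank = int(np.ceil((1.0 - alpha) * (n + 1)))
        t_k = scores[rank - 1] if rank <= n else np.inf
        # C_hat(x) = {y : 1 - p^f_y(x) <= t_y}  (Eq. 6)
        prediction_sets[:, k] = (1.0 - test_probs[:, k]) <= t_k
    return prediction_sets
```

The per-class loop implements the Mondrian (class-conditional) calibration, and the $+\infty$ score in Eq. 7 appears as the case where the rank exceeds the number of calibration points.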
We will assign a weight to each molecule $x$ that is proportional to the density/likelihood ratio $\nicefrac{dP^{test}_{X}}{dP^{cal}_{X}}$ in the empirical CDF, leading to:

$F_{x_{N+1}}^{w}\vcentcolon=\nicefrac{w(x_{N+1})\delta_{\infty}+\sum_{i\in[N]}w(x_{i})\delta_{1-p^{f}_{y_{i}}(x_{i})}}{W}$ (9)

$w(x^{\prime})\vcentcolon=\nicefrac{dP^{test}_{X}(x^{\prime})}{dP^{cal}_{X}(x^{\prime})},\;\forall x^{\prime}$ (10)

where $W=\sum_{i=1}^{N+1}w(x_{i})$ is just a normalizing factor. The subscript $x_{N+1}$ highlights that our updated CDF now depends on the test molecule $x_{N+1}$ through the weights. Here, $w(x^{\prime})$ can be viewed as a likelihood ratio and is crucial in adjusting for the covariate shift. For example, in Fig. 1(e), notice how the values of $w(x_{i})$ depend on the densities $P_{X}^{cal}$ and $P_{X}^{test}$. In the figure, the value of the weighted $Quantile$ at $\alpha=0.3$, i.e., the threshold $t$, is shifted from 0.64 to 0.71 to account for the shift. We formally state the modified theorem from [11] that recovers the coverage guarantee under covariate shift for Mondrian ICP:

###### Theorem 4.2.

[11] Assume that $P^{test}_{X}$ is absolutely continuous with respect to $P^{cal}_{X}$. For any $\alpha\in(0,1)$, let $F^{w}$ be defined as in Eq. 9, and

$\hat{C}(x)=\{y:1-p^{f}_{y}(x)\leq Quantile(1-\alpha;F_{x}^{w})\}$ (11)

Then,

$\mathbb{P}\{Y_{N+1}\in\hat{C}(X_{N+1})\}\geq 1-\alpha.$ (12)

However, in practice, both $P^{test}_{X}$ and $P^{cal}_{X}$ are unknown, rendering Theorem 4.2 impractical on its own. In Section 4.4, we provide a viable way to estimate these densities and recover the guarantee asymptotically (namely, with large calibration and test sets). In the following section, we delve into our training methodology, which harnesses unlabeled data to effectively model varying molecular distributions.

### 4.3 CoDrug Training Methodology

CoDrug handles distribution shift through an energy-based model formulation [23]. The core idea behind an energy-based model is to construct a function $E$ that maps an input $x$ to a scalar value, known as energy. A collection of energy values can be transformed into a probability distribution through the Gibbs distribution:

$p(y|x)=\nicefrac{e^{-E(x,y)/T}}{e^{-E(x)/T}}$ (13)

Consider a discriminative neural network $f$ used in a $K$-class classification setting. $f(x)$ maps an input $x$ to $K$ real-valued scalars, which are used to derive a conditional class-wise probability:

$p(y|x)=\nicefrac{e^{f_{y}(x)/T}}{\sum_{y^{\prime}\in[K]}e^{f_{y^{\prime}}(x)/T}}$ (14)

where $f_{y}(x)$ refers to the $y$-th logit of the classifier $f(x)$. In this setting, the energy function $E(x)$ can be expressed in terms of the denominator of the softmax probabilities in Eq. 14:

$E(x;f)=-T\cdot\log\sum_{y^{\prime}\in[K]}e^{f_{y^{\prime}}(x)/T}$ (15)

Directly using embeddings from a model trained on labeled data may not yield reliable density estimates, as the model lacks knowledge of data outside the training distribution. To overcome this, we co-train the model with unlabeled molecule data. This aids the model $f$ in effectively mapping molecules with distribution shifts to a distinct embedding space. We follow [24] and use an extra regularization term in the loss function to ensure energy separation between in-distribution and out-of-distribution data.
The objective function is defined as follows:

$\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{D}_{in}}[-\log(p^{f}_{y}(x))]+\lambda\cdot L_{energy}$ (16)

where $p^{f}_{y}(x)$ refers to the softmax output of the classification model $f$ for class $y$, and $\mathcal{D}_{in}$ is the in-distribution training data for which labels are available. The training objective is combined with an additional term $L_{energy}$ given by:

$L_{energy}=\mathbb{E}_{x_{in}\sim\mathcal{D}_{in}}\left[\max(0,E(x_{in})-m_{in})^{2}\right]+\mathbb{E}_{x_{out}\sim\mathcal{D}_{out}}\left[\max(0,m_{out}-E(x_{out}))^{2}\right]$ (17)

where $\mathcal{D}_{out}$ refers to the subset of the unlabeled data that is out-of-distribution (OOD). The objective of this term is to enforce a margin of separation between the training samples and the OOD data using the hyperparameters $m_{in}$ and $m_{out}$. In particular, one term penalizes the model if the energy values for in-distribution data are higher than a certain value, while the other penalizes the model if the OOD samples have an energy lower than a certain value. In the next section, we explain how to use either the latent embedding of $f$ or the logits to estimate the density and correct for covariate shift.

### 4.4 Density Estimation

As discussed earlier, we need to estimate $P^{test}_{X}$ and $P^{cal}_{X}$ to correct for the covariate shift. We resort to Kernel Density Estimation (KDE), a classical nonparametric method, to estimate the density of arbitrary distributions of molecules. For a set of data $X_{1},\ldots,X_{n}\overset{\text{i.i.d.}}{\sim}\mathcal{D}$, the KDE is given by:

$\hat{p}_{h}(x;\mathcal{D})=(nh)^{-1}\sum_{i=1}^{n}K(\nicefrac{x-X_{i}}{h}),$ (18)

where $K$ is a fixed non-negative kernel function and $h>0$ is a smoothing bandwidth. Such KDE estimates have nice asymptotic convergence properties, as formally stated in the following theorem:

###### Theorem 4.3.

[25] Assume that the true density $p$ is square-integrable and twice differentiable, and that its second-order partial derivatives are bounded, continuous, and square-integrable. If $K$ is spherically symmetric on $\mathbb{R}^{d}$ with a finite second moment, and we choose the bandwidth $h$ such that

$\lim_{m\to\infty}h^{d}m=\infty\text{ and }\lim_{m\to\infty}h=0,$ (19)

then as $m\to\infty$,

$\|\hat{p}_{h}(x)-p(x)\|_{2}\overset{P}{\to}0,$ (20)

where $\overset{P}{\to}$ denotes convergence in probability.

Note that commonly used kernels, such as the Gaussian kernel, satisfy these requirements. Since $x$ here refers to molecular entities (e.g., SMILES strings), we cannot use a Gaussian kernel directly. Instead, we use embeddings or prediction logits produced by the trained model $f$ as the input to the kernel. Under the assumption that KDE accurately reflects the true density of the underlying distribution, we can construct kernel density estimators for both the calibration and test sets (recall that we do not have access to the test labels but do have access to the inputs $X$), and use

$\hat{w}(x)\vcentcolon=\nicefrac{\hat{p}_{h_{test}}(x;\mathcal{D}_{test})}{\hat{p}_{h_{cal}}(x;\mathcal{D}_{cal})}$ (21)

to replace the unknown $w$ in Eq. 10.
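In practice, Eq. 21 amounts to fitting two kernel density estimators and taking their ratio. A minimal sketch follows (illustrative names; SciPy's Gaussian KDE with its default bandwidth rule stands in for the cross-validated bandwidths used in the paper):

```python
# Minimal sketch of Eq. 21: Gaussian KDEs fit on model embeddings/logits give
# likelihood-ratio weights w_hat. SciPy's default bandwidth (Scott's rule) is
# used here for brevity; the paper selects bandwidths via cross-validation.
import numpy as np
from scipy.stats import gaussian_kde

def kde_weights(emb_cal, emb_test, emb_query):
    """emb_*: (n_points, dim) arrays of embeddings or logits from f."""
    p_cal = gaussian_kde(emb_cal.T)    # gaussian_kde expects (dim, n_points)
    p_test = gaussian_kde(emb_test.T)
    return p_test(emb_query.T) / p_cal(emb_query.T)   # w_hat at the query points
```

The same function is evaluated at the calibration points and at the test molecule to form the weighted CDF below.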
This yields the final prediction set:

$\hat{C}^{\texttt{CoDrug}}(x)=\{y:1-p^{f}_{y}(x)\leq Quantile(1-\alpha;F_{x}^{\hat{w}})\}$ (22)

$F_{x_{N+1}}^{\hat{w}}\vcentcolon=\nicefrac{\hat{w}(x_{N+1})\delta_{\infty}+\sum_{i\in[N]}\hat{w}(x_{i})\delta_{1-p^{f}_{y_{i}}(x_{i})}}{\hat{W}}$ (23)

where $\hat{W}=\sum_{i=1}^{N+1}\hat{w}(x_{i})$ is a normalizing factor. Here, $\hat{p}_{h_{test}}(\cdot;\mathcal{D}_{test})$ is constructed using samples from the test data with an optimal bandwidth $h_{test}$ chosen on the test data via cross-validation, and $\hat{p}_{h_{cal}}(\cdot;\mathcal{D}_{cal})$ is constructed similarly on the calibration data. Clearly, as the number of samples from $\mathcal{D}_{cal}$ and $\mathcal{D}_{test}$ increases, $\hat{w}$ converges to $w$ in Eq. 10, and $\hat{C}^{\texttt{CoDrug}}$ recovers the coverage guarantee asymptotically. In practice, recovering asymptotic coverage on a finite amount of data is challenging; however, the coverage tends to approach the target value, as we observe in our experiments. The overall procedure for density estimation is depicted in Fig. 1(d). Algorithm 1 summarizes all the components of Section 4. In Section 5, we verify the efficacy of CoDrug in property prediction tasks and on molecules sampled from de novo drug design models.

Algorithm 1: Procedure for Property Prediction

Training:
1. Split the dataset into a training set $\mathcal{D}_{train}$ and a calibration set $\mathcal{D}_{cal}=\{z_{i}\}_{i=1}^{N}$.
2. Train a neural net classifier $f$ on $\mathcal{D}_{train}$ by minimizing Eq. 16.
3. Compute the KDE $\hat{p}_{h_{cal}}(\cdot;\mathcal{D}_{cal})$ at all points in $\mathcal{D}_{cal}$ using Eq. 18.

Test time, for a test set $\mathcal{D}_{test}$:
1. Compute the KDE $\hat{p}_{h_{test}}(\cdot;\mathcal{D}_{test})$ at all points in $\mathcal{D}_{cal}$ using Eq. 18.
2. For any $x_{N+1}\in\mathcal{D}_{test}$, compute $\hat{w}(x_{N+1})$ and $\hat{w}(x_{i})$ for $x_{i}\in\mathcal{D}_{cal}$.
3. Construct the prediction set $\hat{C}^{\texttt{CoDrug}}(x_{N+1})$ using Eq. 22.
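As a companion to Algorithm 1, the weighted quantile of Eqs. 22-23 can be sketched as follows (an illustrative helper, not the paper's code); it combines the calibration nonconformity scores with the KDE weights from above:

```python
# Sketch of Quantile(1 - alpha; F^w_hat_x) from Eqs. 22-23: a weighted quantile
# over calibration scores plus an infinite score carrying the test point's weight.
import numpy as np

def weighted_conformal_threshold(cal_scores, w_cal, w_test, alpha=0.1):
    scores = np.append(cal_scores, np.inf)   # the delta_infty term in Eq. 23
    weights = np.append(w_cal, w_test)
    weights = weights / weights.sum()        # normalize by W_hat
    order = np.argsort(scores)
    cdf = np.cumsum(weights[order])
    # Smallest score whose weighted CDF reaches 1 - alpha (Eq. 5 with F^w_hat).
    idx = min(np.searchsorted(cdf, 1.0 - alpha, side="left"), scores.size - 1)
    return scores[order][idx]
```

Note that the threshold is recomputed for each test molecule, since its weight $\hat{w}(x_{N+1})$ enters the CDF; this matches the subscript $x_{N+1}$ in Eq. 23.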
## 5 Experiments

In this section, we put our proposed method, CoDrug, to the test on various drug discovery tasks. Section 5.1 describes the datasets used and key implementation details. Section 5.2.1 empirically demonstrates the loss of validity of conformal prediction sets on different drug discovery datasets. Section 5.2.2 shows how CoDrug's covariate-shift correction improves the validity of the conformal prediction sets. Section 5.3 confirms the utility of CoDrug in de novo drug design. We include additional details on implementation, datasets, and hyperparameters in the appendix.

### 5.1 Data and Implementation Details

* • Splitting Strategies: To demonstrate the effectiveness of CoDrug under covariate shift, we use two different strategies when creating calibration/test splits. In both strategies, we try to create calibration and test splits that are dissimilar to each other, which is a challenging but realistic setting in drug discovery. We used the DeepChem [26] library for splitting. In scaffold splitting, the dataset is grouped based on chemical scaffolds, which represent the core structures of molecules; the test set and train set consist of different scaffolds. In fingerprint splitting, the dataset is partitioned based on the Tanimoto similarity of molecular fingerprints [27]; molecules with the highest dissimilarity in terms of Tanimoto similarity are included in the test set.
* • Datasets: We use four binary classification datasets for toxicity prediction (AMES, Tox21, ClinTox) and activity prediction (HIV activity), obtained from TDC [28]. To train the energy-based model, we obtained the unlabeled data from the ZINC-250k dataset [29], a subset of ZINC that covers a large chemical space. For each dataset and split type, we removed from the unlabeled dataset the molecules that are similar to the training (and calibration) set.
* • Classification Model: The architecture of our classifier $f$ is AttentiveFP [1], a graph neural network-based model. We chose AttentiveFP as it has state-of-the-art results in several drug property prediction tasks. It is trained using the objective function described in Eq. 16.
* • De Novo Drug Design Experiments: In Section 5.3, we perform experiments to construct conformal prediction sets on molecules sampled from de novo drug design models. As generative models, we use REINVENT [14] and GraphGA [30] (top-ranked methods in the MolOpt benchmark [31]). The models are optimized to sample molecules w.r.t. three popularly used computational oracles: QED (quantitative estimate of drug-likeness), JNK3 activity, and GSK3B activity. For building the conformal prediction sets, we chose logP as our target property, assigning molecules with values in the range $[1.0, 4.0]$ the class $Y=1$, and $Y=0$ otherwise (representing the drug-like range [32]). We obtain the computational oracles from TDC [33] and the generative models from the MolOpt package [31].

### 5.2 Property Prediction Results

#### 5.2.1 Unweighted conformal prediction (baseline)

In this section, we demonstrate the unpredictable behavior of the unweighted CP method under distribution shift. Table 1 shows the results of conformal prediction under various distribution shift conditions. "Random" refers to the ideal (and unrealistic) scenario where the test and calibration samples are split randomly, i.e., no distribution shift. "Scaffold" and "Fingerprint" denote scenarios in which there is a distribution shift between the test and training data, as outlined in Section 5.1. In all scenarios, 15% of the training set is held out for calibration, and prediction sets are calculated using Algorithm 1 without any correction. From Table 1, we observe that the Random configuration shows little loss in coverage, while coverage decreases under distribution shift (Scaffold and Fingerprint). For the fingerprint and scaffold splits, unweighted CP fails to provide the target coverage and exhibits unpredictable behavior. For instance, at $\alpha=0.2$ under the fingerprint split, Unweighted has a coverage of 0.34 against a target coverage of 0.8 for the AMES dataset, while achieving a very different coverage of 0.77 with the scaffold split on the same dataset.

| Dataset | Random ($\alpha$=0.1) | Fingerprint ($\alpha$=0.1) | Scaffold ($\alpha$=0.1) | Random ($\alpha$=0.2) | Fingerprint ($\alpha$=0.2) | Scaffold ($\alpha$=0.2) |
|---|---|---|---|---|---|---|
| AMES(Y=0) | 0.94 | 1.00 | 0.82 | 0.85 | 1.00 | 0.66 |
| AMES(Y=1) | 0.87 | 0.63 | 0.85 | 0.78 | 0.34 | 0.77 |
| ClinTox(Y=0) | 0.88 | 0.78 | 0.84 | 0.77 | 0.58 | 0.75 |
| ClinTox(Y=1) | 0.82 | 0.80 | 0.97 | 0.78 | 0.73 | 0.81 |
| HIV(Y=0) | 0.90 | 0.93 | 0.91 | 0.80 | 0.89 | 0.81 |
| HIV(Y=1) | 0.89 | 0.84 | 0.87 | 0.80 | 0.72 | 0.73 |
| Tox21(Y=0) | 0.90 | 0.77 | 0.89 | 0.80 | 0.65 | 0.75 |
| Tox21(Y=1) | 0.86 | 0.97 | 0.93 | 0.72 | 0.97 | 0.82 |

Table 1: Unweighted CP's (baseline) coverage under various distribution shifts (or the absence thereof) should ideally align closely with the target $1-\alpha$. However, in most datasets with fingerprint and scaffold splits, which reflect more realistic scenarios, the baseline method falls short. The often substantial deviations in coverage confirm the unpredictability of unweighted CP when exchangeability ceases to apply. Values not significantly deviating from $1-\alpha$ at a p-value of 0.05 are highlighted in bold, indicating desirable performance.
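The coverage numbers reported in Tables 1-4 are class-conditional: for each dataset and class $k$, they measure the fraction of true-class-$k$ test molecules whose prediction set contains $k$. A minimal sketch (illustrative names, reusing the boolean `pred_sets` layout from the earlier Mondrian sketch):

```python
import numpy as np

def classwise_coverage(pred_sets, y_true, k):
    """Fraction of class-k test molecules whose prediction set contains k."""
    mask = (y_true == k)                 # select true-class-k test points
    return pred_sets[mask, k].mean()     # share of sets that include label k
```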
#### 5.2.2 Weighted conformal prediction improves coverage

In Table 2, we present the benefits of using weighted CP via CoDrug. The table depicts results from conformal prediction using three different schemes:

* • CoDrug (Energy): This variant of CoDrug uses weights computed from KDE on the prediction logits of the trained EBM, as described in Section 4.3.
* • CoDrug (Feature): This variant of CoDrug instead builds the KDE on the features extracted from the penultimate layer of the trained EBM.
* • Unweighted: Unweighted conformal prediction (the baseline).

In both weighting schemes of CoDrug, we use KDE to estimate densities, and we find that weighting using energies improves the coverage toward the target coverage $1-\alpha$ in most cases. We notice the highest improvement in the fingerprint splitting scenario for the AMES(Y=1) category, where the coverage improved from 0.63 to 0.88 (target coverage 0.9). Note that the coverage is "improved" if it is closer to $1-\alpha$; improvement does not always mean higher coverage, because an unusually high coverage often indicates unpredictable behavior of the underlying model. While our energy-weighting approach generally improves coverage, there are rare instances where it underperforms compared to the baseline. A prominent example is the ClinTox dataset with $Y=1$, which sees limited improvement or even a reduction in coverage. This is due to the constraints of the density estimation procedure, which relies on the quantity of available data; notably, this dataset is the smallest and most imbalanced, with only 19 points in the calibration set for class $Y=1$. Additionally, our results show that using energy weighting leads to better overall coverage than directly weighting the features. This is likely because the energy values are two-dimensional, while the features form an eight-dimensional vector: as the dimension of the input to KDE increases, one typically requires more samples to obtain a high-quality density estimate.

| Dataset | CoDrug (Energy), Fingerprint | CoDrug (Feature), Fingerprint | Unweighted, Fingerprint | CoDrug (Energy), Scaffold | CoDrug (Feature), Scaffold | Unweighted, Scaffold |
|---|---|---|---|---|---|---|
| AMES(Y=0) | 0.93(0.03) | 0.87(0.03) | 1.00(0.00) | 0.85(0.02) | 0.89(0.02) | 0.82(0.01) |
| AMES(Y=1) | 0.88(0.03) | 0.90(0.03) | 0.63(0.05) | 0.83(0.01) | 0.79(0.01) | 0.85(0.03) |
| ClinTox(Y=0) | 0.86(0.04) | 0.76(0.02) | 0.78(0.02) | 0.90(0.03) | 0.83(0.01) | 0.84(0.00) |
| ClinTox(Y=1) | 0.73(0.00) | 0.69(0.08) | 0.80(0.00) | 0.85(0.03) | 0.83(0.00) | 0.97(0.04) |
| HIV(Y=0) | 0.89(0.06) | 0.87(0.07) | 0.93(0.04) | 0.82(0.08) | 0.82(0.04) | 0.91(0.01) |
| HIV(Y=1) | 0.92(0.05) | 0.95(0.03) | 0.84(0.07) | 0.90(0.01) | 0.90(0.05) | 0.87(0.03) |
| Tox21(Y=0) | 0.90(0.02) | 0.80(0.02) | 0.77(0.03) | 0.91(0.03) | 0.83(0.05) | 0.89(0.05) |
| Tox21(Y=1) | 0.97(0.00) | 0.96(0.01) | 0.97(0.00) | 0.86(0.05) | 0.91(0.05) | 0.93(0.03) |

Table 2: Coverage of CoDrug and the baseline unweighted CP under different datasets and distribution shifts at $\alpha=0.1$. The realized coverage rate closest to the target coverage $1-\alpha$ (best) is marked in bold. The second-best coverage (when better than unweighted) is marked in bold and gray. Results are averaged over 5 random runs. Results for other $\alpha$ values are available in the appendix.
#### 5.2.3 Ablation studies

In this section, we present an analysis of the importance of the various components in the CoDrug pipeline: KDE, the energy regularization term (Eq. 16), and covariate-shift correction. In addition to CoDrug (Energy) and Unweighted (baseline) reported in the previous section, we also compare with:

* • CoDrug (NoEnergy): We use the same protocol as CoDrug (Energy), but the models are trained without the energy regularization term $L_{energy}$ in Eq. 16.
* • Logistic (Energy): In this experiment, the features are the same as in CoDrug (Energy), but KDE is not used to estimate densities. Instead, the weights $w(x_{i})$ in Eq. 10 are given by $\nicefrac{\hat{p}(x_{i})}{1-\hat{p}(x_{i})}$, where $\hat{p}(x_{i})$ is obtained by fitting a classifier that discriminates between the calibration and test features (as suggested by [11]).

The results are depicted in Table 3. In the table, we compile the "mean absolute coverage deviation" across all datasets and random runs from the experiments reported in Table 2 at different values of $\alpha$ (i.e., the mean of $\lvert\text{observed coverage}-(1-\alpha)\rvert$ across the experimental runs). The results reveal that CoDrug (Energy), the proposed method, is closest to the target coverage for almost all values of $\alpha$. We pay close attention to the "tail 25%," which reports the metric over the worst-performing 25% of experiments; there, CoDrug (Energy) outperforms all the other variants by a substantial margin, indicating that all the components of the CoDrug pipeline are helpful. The mean absolute coverage deviation from the target at $\alpha=0.1$ for CoDrug (Energy) is 0.052, a relative improvement of about 35% over that of Unweighted (0.081).

| Method | Deviation ($\alpha$=0.3) | Deviation ($\alpha$=0.2) | Deviation ($\alpha$=0.1) | Tail 25% ($\alpha$=0.3) | Tail 25% ($\alpha$=0.2) | Tail 25% ($\alpha$=0.1) |
|---|---|---|---|---|---|---|
| Unweighted (baseline) | 0.157 (0.14) | 0.12 (0.12) | 0.081 (0.07) | 0.347 (0.14) | 0.276 (0.13) | 0.176 (0.08) |
| Logistic (Energy) | 0.123 (0.13) | 0.106 (0.13) | 0.083 (0.14) | 0.315 (0.11) | 0.263 (0.17) | 0.222 (0.22) |
| CoDrug (NoEnergy) | 0.112 (0.11) | 0.083 (0.09) | 0.047 (0.05) | 0.288 (0.05) | 0.215 (0.08) | 0.112 (0.03) |
| CoDrug (Energy) | 0.104 (0.09) | 0.079 (0.07) | 0.052 (0.05) | 0.233 (0.05) | 0.179 (0.07) | 0.11 (0.04) |

Table 3: Ablations: results comparing various versions of the proposed framework. At each $\alpha$, the mean absolute deviation from the target coverage across all experiments and random seeds is reported (smaller is better); the "Tail 25%" columns consider only the worst-performing 25% of experiments. CoDrug (Energy) has the least deviation from the target coverage, with a substantial margin on the tail 25%.

### 5.3 Application in de novo drug design

In this section, we examine CoDrug's application to de novo drug design models, which navigate a large chemical space to find optimized molecules using a computational oracle. After molecule sampling, validating the molecules' experimental properties, such as ADMET (Absorption, Distribution, Metabolism, Excretion, Toxicity), is crucial for safety and efficacy. When a machine learning model trained on such properties is available, assessing the uncertainty associated with its predictions before experimental validation is critical.
However, note that the distribution of sampled molecules may substantially deviate from the training data, affecting the target coverage of the CP prediction sets. In this section, we demonstrate the application of CoDrug to molecules generated by a de novo drug design model. We consider the de novo drug design model as a black box that has been optimized w.r.t. a certain objective. We experiment with two models: GraphGA [30] and REINVENT [14]. To predict properties, we compiled a dataset of logP values, as logP can be computed cheaply with a computational oracle. We note that in reality, this dataset could correspond to experimental properties like ADMET; however, since it is not feasible to validate such properties for molecules generated by de novo drug design models, we use logP to demonstrate the method. The results of our experiments are depicted in Table 4 at a target $\alpha$ value of 0.1. Our proposed method consistently enhances coverage and exhibits a substantial improvement over the unweighted version. For example, for the "GSK3b+QED" objective, the unweighted version has a coverage of 0.44 against a target of 0.9, whereas our proposed method improves coverage substantially. The mean absolute coverage deviation from the target at $\alpha=0.1$ for CoDrug (Energy) is 0.05, a relative improvement of over 60% on the Unweighted version (0.14).

| Objective | Y | CoDrug (Energy), REINVENT | Unweighted, REINVENT | CoDrug (Energy), GraphGA | Unweighted, GraphGA |
|---|---|---|---|---|---|
| JNK3+QED | 0 | 0.95 (0.01) | 0.62 (0.12) | 0.86 (0.0) | 0.84 (0.01) |
| JNK3+QED | 1 | 0.91 (0.01) | 0.99 (0.0) | 0.93 (0.01) | 0.89 (0.02) |
| GSK3b+QED | 0 | 0.81 (0.04) | 0.44 (0.16) | 0.87 (0.0) | 0.75 (0.01) |
| GSK3b+QED | 1 | 0.79 (0.08) | 1.0 (0.0) | 0.98 (0.0) | 1.0 (0.0) |
| QED | 0 | 0.96 (0.0) | 0.83 (0.04) | 0.96 (0.0) | 0.69 (0.1) |
| QED | 1 | 0.92 (0.01) | 0.84 (0.05) | 0.98 (0.0) | 0.99 (0.0) |

Table 4: Observed coverages on molecules sampled by generative models at $\alpha=0.1$. The realized coverage rates closest to the target coverage ($1-\alpha$) are marked in bold. For each experiment, a set of 200 molecules optimized w.r.t. the "Objective" is sampled using the generative models GraphGA and REINVENT. The target property for prediction is logP ($1.0<\text{logP}<4.0$ is considered $Y=1$; $Y=0$ otherwise [32]). Using the proposed method improves coverage in almost all scenarios.

## 6 Limitations

While we demonstrated that KDE can provide asymptotic coverage guarantees, this may not necessarily translate to improved performance in scenarios with limited sample sizes. Indeed, in our experiments there are a few instances where the improvement in coverage is limited because the available data is limited. As such, a direction for further research is to explore ways to obtain likelihood ratios that are more data-efficient, particularly in scenarios with smaller calibration sets. It is also worth noting that our current work focuses on addressing the coverage gap in classification tasks; regression tasks were not included in this study. However, we recognize the importance of uncertainty quantification in regression settings, especially for various critical drug properties represented as regression problems, where our proposed framework can be extended with modifications. Furthermore, our current work primarily focuses on small molecules, yet covariate shift is a common phenomenon in various chemical and biological contexts.
While this means that our framework could be applied more generally, obtaining high-quality feature vectors for computing likelihoods in different applications remains a challenge that warrants further research.

## 7 Conclusion

We present a new method for uncertainty quantification in drug discovery, CoDrug, that effectively addresses the problem of covariate shift in test data. The proposed method involves a combination of three key steps: training an energy-based model for feature extraction and base classification, performing density estimation using KDE, and using the KDE to correct for covariate shift in conformal prediction to recover valid coverage. The results obtained in this study demonstrate the effectiveness of CoDrug in producing valid conformal prediction sets and its utility in de novo drug design experiments. Our current work is limited to classification tasks on small molecules, but exploring its application to other chemical and biological tasks with covariate shift, including adapting the framework for regression tasks, is an interesting direction for future work.

## Acknowledgments and Disclosure of Funding

This work was supported by NSF awards SCH-2205289, SCH-2014438, and IIS-2034479. This project has been funded by the Jump ARCHES endowment through the Health Care Engineering Systems Center.

## References

* [1] Zhaoping Xiong, Dingyan Wang, Xiaohong Liu, Feisheng Zhong, Xiaozhe Wan, Xutong Li, Zhaojun Li, Xiaomin Luo, Kaixian Chen, Hualiang Jiang, et al. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. Journal of medicinal chemistry, 63(16):8749–8760, 2019.
* [2] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263–1272. PMLR, 2017.
* [3] Tianfan Fu, Wenhao Gao, Connor W Coley, and Jimeng Sun. Reinforced genetic algorithm for structure-based drug design. arXiv preprint arXiv:2211.16508, 2022.
* [4] Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. Advances in neural information processing systems, 31, 2018.
* [5] Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268–276, 2018.
* [6] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, pages 2323–2332. PMLR, 2018.
* [7] Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Springer US, 2005.
* [8] Jin Zhang, Ulf Norinder, and Fredrik Svensson. Deep learning-based conformal prediction of toxicity. Journal of chemical information and modeling, 61(6):2648–2657, 2021.
* [9] Isidro Cortés-Ciriano and Andreas Bender. Concepts and applications of conformal prediction in computational drug discovery. arXiv preprint arXiv:1908.03569, 2019.
* [10] Isidro Cortés-Ciriano and Andreas Bender. Deep confidence: a computationally efficient framework for calculating reliable prediction errors for deep neural networks. Journal of chemical information and modeling, 59(3):1269–1281, 2018.
* [11] Ryan J. Tibshirani, Rina Foygel Barber, Emmanuel J. Candes, and Aaditya Ramdas. Conformal prediction under covariate shift, 2020.
* [12] Mario Krenn, Florian Häse, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (selfies): A 100% robust molecular string representation. Machine Learning: Science and Technology, 1(4):045024, 2020.
* [13] Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559–12571, 2020.
* [14] Marcus Olivecrona, Thomas Blaschke, Ola Engkvist, and Hongming Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):1–14, 2017.
* [15] Lior Hirschfeld, Kyle Swanson, Kevin Yang, Regina Barzilay, and Connor W Coley. Uncertainty quantification using neural networks for molecular property prediction. Journal of Chemical Information and Modeling, 60(8):3770–3780, 2020.
* [16] Fredrik Svensson, Natalia Aniceto, Ulf Norinder, Isidro Cortes-Ciriano, Ola Spjuth, Lars Carlsson, and Andreas Bender. Conformal regression for quantitative structure–activity relationship modeling—quantifying prediction uncertainty. Journal of Chemical Information and Modeling, 58(5):1132–1140, 2018.
* [17] Jiangming Sun, Lars Carlsson, Ernst Ahlberg, Ulf Norinder, Ola Engkvist, and Hongming Chen. Applying mondrian cross-conformal prediction to estimate prediction confidence on large imbalanced bioactivity data sets. Journal of chemical information and modeling, 57(7):1591–1598, 2017.
* [18] Yuanfeng Ji, Lu Zhang, Jiaxiang Wu, Bingzhe Wu, Long-Kai Huang, Tingyang Xu, Yu Rong, Lanqing Li, Jie Ren, Ding Xue, et al. Drugood: Out-of-distribution (ood) dataset curator and benchmark for ai-aided drug discovery–a focus on affinity prediction problems with noise annotations. arXiv preprint arXiv:2201.09637, 2022.
* [19] Shurui Gui, Xiner Li, Limei Wang, and Shuiwang Ji. Good: A graph out-of-distribution benchmark. arXiv preprint arXiv:2206.08452, 2022.
* [20] Kehang Han, Balaji Lakshminarayanan, and Jeremiah Liu. Reliable graph neural networks for drug discovery under distributional shift. arXiv preprint arXiv:2111.12951, 2021.
* [21] Clara Fannjiang, Stephen Bates, Anastasios N Angelopoulos, Jennifer Listgarten, and Michael I Jordan. Conformal prediction under feedback covariate shift for biomolecular design. Proceedings of the National Academy of Sciences, 119(43):e2204569119, 2022.
* [22] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.
* [23] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and Fujie Huang. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006.
* [24] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. Advances in Neural Information Processing Systems, 33:21464–21475, 2020.
* [25] José E. Chacón and Tarn Duong. Multivariate Kernel Smoothing and its Applications. Chapman and Hall, 2018.
* [26] Bharath Ramsundar, Peter Eastman, Patrick Walters, Vijay Pande, Karl Leswing, and Zhenqin Wu. Deep Learning for the Life Sciences. O’Reilly Media, 2019. https://www.amazon.com/Deep-Learning-Life-Sciences-Microscopy/dp/1492039837.
* [27] David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of chemical information and modeling, 50(5):742–754, 2010.
* [28] Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Therapeutics data commons: Machine learning datasets and tasks for drug discovery and development. arXiv preprint arXiv:2102.09548, 2021.
* [29] John J Irwin and Brian K Shoichet. ZINC – a free database of commercially available compounds for virtual screening. Journal of chemical information and modeling, 45(1):177–182, 2005.
* [30] Jan H Jensen. A graph-based genetic algorithm and generative model/monte carlo tree search for the exploration of chemical space. Chemical science, 10(12):3567–3572, 2019.
* [31] Wenhao Gao, Tianfan Fu, Jimeng Sun, and Connor Coley. Sample efficiency matters: a benchmark for practical molecular optimization. Advances in Neural Information Processing Systems, 35:21342–21357, 2022.
* [32] Y Gao, C Gesenberg, and W Zheng. Oral formulations for preclinical studies: principle, design, and development considerations. In Developing solid oral dosage forms, pages 455–495. Elsevier, 2017.
* [33] Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology, 18(10):1033–1036, 2022.
* [34] Mufei Li, Jinjing Zhou, Jiajing Hu, Wenxuan Fan, Yangkang Zhang, Yaxin Gu, and George Karypis. DGL-LifeSci: An open-source toolkit for deep learning on graphs in life science. ACS omega, 6(41):27233–27238, 2021.
* [35] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [36] Guy W Bemis and Mark A Murcko. The properties of known drugs. 1. molecular frameworks. Journal of medicinal chemistry, 39(15):2887–2893, 1996.

## Appendix A Proofs

### A.1 Proofs for Theorem 4.1

Denote $T_{i}=1-p^{f}_{Y_{i}}(X_{i})$ as a new random variable. Because of our i.i.d. assumption (and the fact that $f$ is not trained on the calibration set), $T_{1},\ldots,T_{N+1}$ are also i.i.d., which means that for $m=0,1,\ldots,N$:

$\mathbb{P}\{|\{i\in[N]:T_{N+1}>T_{i}\}|=m\}\leq\frac{1}{N+1}$ (24)

$\implies\mathbb{P}\{|\{i\in[N]:T_{N+1}>T_{i}\}|\geq(N-m)\}\leq\frac{m+1}{N+1}$ (25)

The left-hand-side probability is $\leq$ (instead of $<$) the right-hand side due to the case when $p^{f}_{y}$ is not continuous. Note also that

$1-p^{f}_{Y_{N+1}}(X_{N+1})>t\implies|\{i\in[N]:T_{N+1}>T_{i}\}|\geq\lceil(1-\alpha)(N+1)\rceil$ (26)

which means

$\mathbb{P}\{1-p^{f}_{Y_{N+1}}(X_{N+1})>t\}\leq\frac{\lfloor\alpha(N+1)\rfloor}{N+1}\leq\alpha.$ (27)

Finally, note that

$Y_{N+1}\not\in\hat{C}(X_{N+1})\implies 1-p^{f}_{Y_{N+1}}(X_{N+1})>t,$ (28)

we thus have

$\mathbb{P}\{Y_{N+1}\in\hat{C}(X_{N+1})\}\geq 1-\alpha.$ (29)

∎

### A.2 Proofs for Theorem 4.2

This is a direct result of Corollary 1 in [11], with our nonconformity score $1-p^{f}_{Y}(X)$ plugged in.

## Appendix B Data and implementation details

### B.1 Training Details of the base classifier:

* • Deep Learning Frameworks: We use the PyTorch framework for the implementation of the models. The graph neural network backbone is obtained from the DGL-LifeSci library [34].
* • Training hyperparameters: We train the model using the PyTorch Lightning framework. We use the Adam optimizer [35]. The batch size is set to 64, and the learning rate is set to 0.001.
* • Architecture Details: The model architecture consists of a GNN layer (Attentive FP [1]), a readout layer, 2 hidden FCNN layers, and an output layer. The hidden state size in the GNN is set to 512 dimensions. The linear layers have 256 and 8 dimensions, respectively.
* • Cheminformatics Processing: We use the RDKit library for handling molecular entities in Python. We use the DeepChem library for generating dataset splits.
* • Energy Regularization hyperparameters: The parameters $m_{in}$ and $m_{out}$ in Eq. 15 are set to -5 and -35 respectively, and the parameter $\lambda$ in Eq. 16 is set to 0.01. All of these hyperparameters are obtained from the reference implementation in [24].
* • Splitting Ratio: The datasets are split in the ratio of 70:15:15 for training, calibrating, and testing the CP model.
* • Error bars: All the experiments reported are over 5 random runs. The mean and the standard deviation across the random runs are reported in the tables.

### B.2 Splitting strategies:

In this section, we discuss the two strategies employed for creating calibration and test sets to ensure their dissimilarity. The implementation of these strategies was based on the DeepChem library [26]. We provide detailed explanations of the algorithms used for splitting below.

* • Scaffold Splitting: The core structure of a molecule is represented by its scaffold [36]. Scaffold splitting aims to generate train and test sets that do not share any common scaffolds, thereby creating a challenging yet realistic scenario of distribution shift. This strategy is commonly used to evaluate out-of-distribution prediction algorithms [20, 18]. The scaffold splitting procedure is as follows:
  * – First, the scaffolds of all molecules in the datasets are identified.
  * – The scaffolds are sorted by their frequency in the dataset.
  * – The least frequent scaffolds are added to the test set until the desired number of test data points is reached.
  * – The remaining points are randomly divided into training and calibration sets.
* • Fingerprint Splitting: A molecular fingerprint [27] is a compact binary representation of a molecule's structural features, capturing important information about its chemical composition and spatial arrangement. Fingerprint splitting utilizes molecular fingerprints to create distinct train and test sets. The primary objective is to include in the test set the data points that exhibit the least maximum pairwise Jaccard similarity of fingerprints. The fingerprint-splitting procedure is as follows (a sketch of the underlying similarity computation is given after this list):
  * – First, the molecular fingerprints are computed for all the molecules in the dataset using Extended Connectivity Fingerprints (ECFP) [27].
  * – Pairwise Jaccard similarity is calculated for each data point in the dataset, considering its similarity with all other points. The Jaccard similarity between two fingerprints is determined by dividing the size of their intersection by the size of their union.
  * – The test set construction begins by selecting the data point with the least maximum Jaccard similarity to any other point in the dataset. This point is added to the test set.
  * – The iterative process continues until the desired number of test data points is reached. The remaining data points are assigned to the training/calibration set.
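To make the Jaccard-similarity criterion above concrete, the following is a minimal sketch using RDKit Morgan fingerprints (radius 2, 2048 bits, an ECFP4-style stand-in for the ECFP fingerprints referenced above); the molecule list and parameter choices are illustrative assumptions, not values taken from the paper.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Toy molecule list; in practice these are the dataset SMILES strings.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Morgan fingerprints (radius 2 ~ ECFP4; radius and bit size are assumptions).
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

def max_pairwise_similarity(i):
    """Maximum Tanimoto (Jaccard) similarity of molecule i to any other molecule."""
    return max(DataStructs.TanimotoSimilarity(fps[i], fps[j])
               for j in range(len(fps)) if j != i)

# Seed the test set with the most dissimilar molecule, mirroring the first
# step of the fingerprint-splitting procedure described above.
scores = {i: max_pairwise_similarity(i) for i in range(len(fps))}
first_test_idx = min(scores, key=scores.get)
print(first_test_idx, scores[first_test_idx])
```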
### B.3 Property prediction datasets

#### B.3.1 Labelled data

We used four commonly used classification benchmarking datasets of ADMET properties. The datasets are obtained from Therapeutics Data Commons (TDC) [28]. We include the statistics of all the datasets used for property prediction in Table 5.

* • AMES Mutagenicity: Mutagenicity is a vital toxicity measure that captures the ability of a drug to induce genetic mutations. The dataset consists of toxicity classes for over 7000 compounds.
* • Tox21: A data challenge that consists of qualitative toxicity measurements on 12 different targets. For the scope of this work, we picked the largest assay in the collection, with over 6000 compounds.
* • ClinTox: A collection of compounds that includes drugs that have failed in clinical trials for toxicity reasons and ones with successful outcomes. This dataset contains about 1500 drugs.
* • HIV Activity: A dataset of screening results published by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen. It measures the ability to inhibit HIV replication for over 40,000 compounds.

Dataset | #Positive | #Negative | #Total | Task
---|---|---|---|---
Tox21 | 309 | 6,956 | 7,265 | prediction
ClinTox | 112 | 1,366 | 1,478 | prediction
AMES | 3,974 | 3,304 | 7,278 | prediction
HIV | 1,443 | 39,684 | 41,127 | prediction
ZINC | 0 | 0 | 250,000 | pre-training

Table 5: Dataset statistics

#### B.3.2 Unlabelled data

To train the EBM, we obtained the unlabelled data from the ZINC-250k dataset [29]. This is a subset of the ZINC database, typically used for pre-training generative models, and covers a large chemical space. For each dataset and split type, we removed the molecules that are similar to the training (and calibration) set.

* • In the case of scaffold splitting, we remove the molecules containing scaffolds present in the training set.
* • In the case of fingerprint splitting, we compute the Tanimoto similarity with respect to the molecules in the training set and include only those molecules with similarities less than the minimum pairwise similarity in the training set.

### B.4 Hyperparameters for Kernel Density Estimation (KDE):

Determining the bandwidth $h$ for Kernel Density Estimation (KDE) is crucial for accurate density estimation. To find the optimal value of $h$, we employ K-fold cross-validation (CV) using the scikit-learn library, with $k=10$ folds. The following procedure is applied for each dataset (see the sketch after this list):

* • The dataset is divided into $k$ splits or folds.
* • We fit the KDE model using a range of $h$ values, specifically choosing 25 values uniformly spaced on a log scale from $10^{-1.3}$ to $10^{1}$.
* • The fitted KDE models are evaluated by computing the log probability on the holdout split for each $h$ value.
* • The $h$ value that yields the highest average log probability across the $k$ folds is selected as the optimal bandwidth for fitting the KDE model.
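The bandwidth search above maps directly onto scikit-learn, since `GridSearchCV` scores a `KernelDensity` model by the held-out log-likelihood. A minimal sketch, with placeholder features of our own:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

X = np.random.randn(500, 8)  # placeholder feature vectors (e.g. energies or embeddings)

# 25 bandwidths spaced uniformly on a log scale in [10^-1.3, 10^1], scored by
# held-out log-likelihood over k=10 folds (KernelDensity.score returns the
# total log probability of the evaluation data).
grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-1.3, 1.0, 25)},
    cv=10,
)
grid.fit(X)
best_h = grid.best_params_["bandwidth"]
kde = KernelDensity(kernel="gaussian", bandwidth=best_h).fit(X)
print(best_h, kde.score_samples(X[:3]))  # per-sample log densities
```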
### B.5 Ablation models:

As discussed in Section 5.2.3, we perform the following ablations:

* • CoDrug (NoEnergy): The main objective of this variant of the method is to understand the significance of training with the energy regularization described in Section 4.3. In this study, we follow the same protocol as CoDrug (Energy), but during training we do not use the regularization term $\mathcal{L}_{energy}$ in Eq. 16.
* • Logistic (Energy): In this variation, the training procedure for the model remains the same, but instead of employing Kernel Density Estimation (KDE) for estimating molecular density, we utilize a logistic classifier. This approach, initially utilized by Tibshirani et al. (2020) [11] in their experiments, involves training the logistic classifier using the same input features as those used in KDE. For training the classifier, the samples in the calibration set are labeled as 0, while the samples in the test set are labeled as 1. The weight assigned to a point $x$ in the calibration set is determined by the estimate $\hat{p}(x)$ of $\mathcal{P}(C=1|X=x)$ obtained from the classifier, and is calculated as $\hat{p}(x)/(1-\hat{p}(x))$. To implement the logistic classifier, we utilized the scikit-learn library, employing the default hyperparameters for the classifier.

### B.6 Details of de novo drug design experiments:

In this section, we present details of the experiments described in Section 5.3. We first describe the de novo drug design models used for the experiments in Section B.6.1. We sample 200 points from these generative models, optimized on the properties described in Section B.6.2. Our objective is to construct valid prediction sets on the sampled molecules pertaining to the target property discussed in Section B.6.3.

#### B.6.1 De novo drug design models

As our main objective is to show that CoDrug is effective in estimating uncertainties on molecules sampled from de novo drug design models, we experiment on molecules sampled from two different de novo drug design models, REINVENT and GraphGA. These two models are top-ranked on the MolOpt benchmark [31]. We used the implementation provided by MolOpt to run our experiments.

* • REINVENT: REINVENT [14] is a reinforcement learning-based de novo drug design model. The model uses an RNN to generate SMILES strings.
* • GraphGA: GraphGA [30] is a genetic algorithm-based de novo drug design model that generates molecular graphs.

#### B.6.2 Objective functions used for optimization

In our experiments, we used molecule sets obtained by optimizing the molecules on the following properties. All the oracles are obtained from TDC [28]. (A snippet illustrating such computational oracles follows this list.)

* • QED: QED stands for Quantitative Estimate of Drug-likeness. It is a computational metric used in drug discovery to assess the "drug-likeness" of a compound. QED provides a quantitative measure of how likely a molecule is to possess drug-like properties based on its chemical structure.
* • QED + JNK3 activity: JNK3 activity refers to the activity of a molecule against the c-Jun N-terminal Kinase-3 (JNK3) protein. This is a common oracle function used in benchmarking de novo drug design models. The oracle is a random forest classifier built on ECFP6 fingerprints using the ExCAPE-DB dataset. In addition, we also add the QED score to the oracle output to restrict the molecule search to a "drug-like" region.
* • QED + GSK3b activity: GSK3b, which stands for glycogen synthase kinase 3 beta, is an enzyme encoded by the GSK3b gene in humans. Dysregulation and abnormal expression of GSK3b have been linked to a heightened vulnerability to bipolar disorder. Similar to the previous case, the oracle is a random forest classifier that utilizes ECFP6 fingerprints from the ExCAPE-DB dataset, and we also add the QED score to the oracle output.
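To illustrate what such computational oracles return, the snippet below evaluates QED and a Crippen logP estimate with RDKit; this is a hedged stand-in for the TDC oracle wrappers used in the experiments (the logP value also underlies the labeling rule in Section B.6.3), and the example molecule is arbitrary.

```python
from rdkit import Chem
from rdkit.Chem import QED, Crippen

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, an arbitrary example

qed_score = QED.qed(mol)      # quantitative estimate of drug-likeness, in [0, 1]
logp = Crippen.MolLogP(mol)   # Crippen estimate of the partition coefficient

print(f"QED = {qed_score:.3f}, logP = {logp:.2f}")
```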
#### B.6.3 Details of the property prediction experiments

As discussed in the main paper, we choose logP as our target property, i.e. the property for which we wish to obtain uncertainty estimates. LogP, also known as the logarithm of the partition coefficient, is a property used to quantify the lipophilicity of a drug molecule. The lipophilicity of a drug molecule, as determined by logP, plays a crucial role in its pharmacokinetic properties. It is accepted that a drug-like molecule would have logP in the range of [1.0, 4.0] [32]; hence, in our experiments, we assign a label of Y=1 for logP in [1.0, 4.0] and Y=0 otherwise. logP can be computed cheaply using a computational oracle, which we obtained from TDC [28]. Note that in reality the target property would be experimentally determined (such as the ADMET properties), but the obvious challenge in validating our method on such properties is that it is not possible to obtain ground-truth values for novel molecules obtained from de novo drug design models. Nevertheless, since CP is agnostic to the underlying prediction model, we deem that the performance would remain robust across different properties and hence would potentially be beneficial in real-world drug discovery campaigns.

For the curation of the training set, we randomly pick a set of 20 scaffolds from the ZINC250k dataset [29], and pick 500 points from each scaffold (making a total of 10000 points). This is our training and calibration data. We assign labels to this set based on the above-mentioned criteria and train the model using the same procedure as for the other property predictors, as described in Section 5. Note that the test set for this exercise is the molecules sampled from de novo drug design models.

## Appendix C Results

In Section 5, we have provided results for experiments at $\alpha=0.1$. Here, we provide additional results for the experiments at $\alpha=0.05$ and $\alpha=0.2$.

### C.1 Property prediction results

Dataset | Fingerprint split ($\alpha=0.05$) | Fingerprint split ($\alpha=0.2$)
---|---|---
 | CoDrug(energies) | CoDrug(features) | Unweighted(baseline) | CoDrug(energies) | CoDrug(features) | Unweighted(baseline)
AMES(Y=0) | 0.96(0.07) | 0.93(0.09) | 1.00(0.01) | 0.85(0.06) | 0.78(0.06) | 1.00(0.00)
AMES(Y=1) | 0.94(0.08) | 0.95(0.08) | 0.78(0.19) | 0.79(0.05) | 0.82(0.06) | 0.34(0.14)
ClinTox(Y=0) | 0.94(0.08) | 0.83(0.04) | 0.91(0.07) | 0.68(0.07) | 0.67(0.04) | 0.58(0.04)
ClinTox(Y=1) | 0.88(0.14) | 0.73(0.03) | 0.93(0.00) | 0.73(0.00) | 0.45(0.03) | 0.73(0.00)
HIV(Y=0) | 0.94(0.10) | 0.94(0.14) | 0.96(0.08) | 0.81(0.08) | 0.74(0.13) | 0.89(0.07)
HIV(Y=1) | 0.95(0.08) | 0.97(0.07) | 0.90(0.14) | 0.85(0.06) | 0.90(0.04) | 0.72(0.11)
Tox21(Y=0) | 0.93(0.05) | 0.85(0.03) | 0.83(0.02) | 0.87(0.03) | 0.71(0.03) | 0.65(0.02)
Tox21(Y=1) | 0.97(0.01) | 0.97(0.02) | 0.97(0.01) | 0.95(0.01) | 0.93(0.02) | 0.97(0.00)
Dataset | Scaffold split ($\alpha=0.05$) | Scaffold split ($\alpha=0.2$)
AMES(Y=0) | 0.92(0.03) | 0.93(0.04) | 0.90(0.04) | 0.73(0.03) | 0.80(0.02) | 0.66(0.04)
AMES(Y=1) | 0.90(0.02) | 0.87(0.03) | 0.91(0.02) | 0.72(0.01) | 0.66(0.02) | 0.77(0.02)
ClinTox(Y=0) | 0.95(0.05) | 0.89(0.01) | 0.94(0.02) | 0.80(0.03) | 0.74(0.01) | 0.75(0.01)
ClinTox(Y=1) | 0.97(0.05) | 0.95(0.02) | 0.97(0.02) | 0.53(0.07) | 0.77(0.03) | 0.81(0.02)
HIV(Y=0) | 0.89(0.05) | 0.88(0.07) | 0.96(0.01) | 0.72(0.07) | 0.71(0.06) | 0.81(0.01)
HIV(Y=1) | 0.95(0.03) | 0.94(0.07) | 0.94(0.06) | 0.81(0.01) | 0.81(0.05) | 0.73(0.05)
Tox21(Y=0) | 0.95(0.07) | 0.88(0.04) | 0.94(0.13) | 0.81(0.05) | 0.73(0.05) | 0.75(0.09)
Tox21(Y=1) | 0.94(0.06) | 0.96(0.05) | 0.97(0.04) | 0.77(0.05) | 0.80(0.04) | 0.82(0.06)

Table 6: Coverage of CoDrug and baseline unweighted CP, under different datasets and distribution shifts at $\alpha=0.05$ and $\alpha=0.2$. The realized coverage rate closest to the target coverage $1-\alpha$ (best) is marked in bold. The second best coverage (in case better than unweighted) is marked in bold and gray. Results are averaged over 5 random runs.
### C.2 De novo drug design experiment results

 | | REINVENT ($\alpha=0.05$) | GraphGA ($\alpha=0.05$) | REINVENT ($\alpha=0.2$) | GraphGA ($\alpha=0.2$)
---|---|---|---|---|---
Objective | Y | CoDrug (Energy) | Unweighted | CoDrug (Energy) | Unweighted | CoDrug (Energy) | Unweighted | CoDrug (Energy) | Unweighted
QED | 0 | 0.97 (0.02) | 0.94 (0.06) | 0.97 (0.02) | 0.81 (0.23) | 0.87 (0.1) | 0.53 (0.25) | 0.75 (0.09) | 0.41 (0.26)
QED | 1 | 0.97 (0.02) | 0.96 (0.07) | 1.0 (0.01) | 1.0 (0.01) | 0.81 (0.08) | 0.78 (0.26) | 0.9 (0.08) | 0.88 (0.13)
JNK3+QED | 0 | 0.89 (0.15) | 0.83 (0.34) | 0.94 (0.02) | 0.93 (0.09) | 0.72 (0.27) | 0.2 (0.4) | 0.73 (0.06) | 0.73 (0.15)
JNK3+QED | 1 | 0.98 (0.01) | 1.0 (0.0) | 0.98 (0.05) | 0.92 (0.09) | 0.86 (0.1) | 0.94 (0.1) | 0.86 (0.04) | 0.64 (0.27)
GSK3b+QED | 0 | 0.91 (0.13) | 0.63 (0.33) | 0.91 (0.03) | 0.77 (0.17) | 0.71 (0.21) | 0.39 (0.42) | 0.74 (0.05) | 0.36 (0.18)
GSK3b+QED | 1 | 0.92 (0.15) | 1.0 (0.0) | 0.98 (0.02) | 1.0 (0.00) | 0.82 (0.37) | 0.95 (0.04) | 0.9 (0.07) | 0.97 (0.04)

Table 7: Observed coverages on molecules sampled by generative models at $\alpha=0.05$ and $\alpha=0.2$. The realized coverage rate closest to the target coverage ($1-\alpha$) is marked in bold. For each experiment, a set of 200 points optimized w.r.t. the "Objective" using the generative models GraphGA and REINVENT are sampled, similar to the procedure in Section B.6.1. Using the proposed method improves coverage in almost all scenarios.

## Appendix D List of commonly used notations and terms

Table 8: List of key notations used in the paper

Symbol | Description
---|---
$\alpha$ | Refers to the user-defined target coverage level in conformal prediction. It determines the confidence level of the prediction regions.
$\hat{C}(x_{i})$ | Corresponds to a prediction set of an input $x_{i}$ obtained from a conformal prediction method.
$X_{i}$ | Refers to the input features corresponding to a data point $i$ in a machine learning model.
$Y_{i}$ | Refers to the label corresponding to a data point $i$ in a machine learning model.
$f$ | Refers to the base classifier of the model. It is the underlying algorithm or model used to make predictions on the data.
$f_{y}(x)$ | Refers to the $y^{th}$ logit of the classifier $f$ on a data point $x$ without applying the softmax layer.
$p^{f}_{y}(x)$ | Denotes the classwise probability score of the data point $x$ with respect to the model $f$ after the application of the softmax layer on the logits $f_{y}(x)$. It represents the probability of the data point belonging to class $y$.
$E(x)$ | Refers to the energy of a data point $x$ in an energy-based model. The energy is computed based on the logits of the classifier.
$F_{\{v_{i}\}_{i=1}^{N}}$ | Refers to an (unweighted) cumulative distribution function computed from a set of values $\{v_{i}\}_{i=1}^{N}$.
$w(x_{i})$ | Refers to the likelihood ratio or weight assigned to a data point $i$ in weighted conformal prediction. It quantifies the importance of the data point during calibration.
$F^{w}_{x_{N+1}}$ | Weighted cumulative distribution function computed using the likelihood ratios $w(x_{i})$ from weighted conformal prediction. It represents the distribution of the weighted values.
$\mathcal{D}_{train}$ | Refers to the distribution of molecules from which the training set is sampled. It represents the underlying data distribution used to train a machine-learning model.
$\mathcal{D}_{cal}$ | Refers to the distribution of molecules from which the calibration set is sampled. It represents the underlying data distribution used to calibrate a conformal prediction model.
$P_{X}^{cal}$ | Refers to the true density of an input molecule $X$ in the calibration set distribution.
$P_{X}^{test}$ | Refers to the true density of an input molecule $X$ in the test set distribution.
$\hat{p}_{h_{test}}$ | Refers to the density of an input molecule $X$ computed from kernel density estimation (KDE) on the test set distribution. It is an estimate of the probability density function of the input molecule in the test set distribution.
$\hat{p}_{h_{cal}}$ | Refers to the density of an input molecule $X$ computed from KDE on the calibration set distribution. It is an estimate of the probability density function of the input molecule in the calibration set distribution.

Table 9: List of commonly used terms in the paper

Term | Description
---|---
Activity (property) | Refers to the ability of a drug to bind to a specific target molecule and produce a biological effect. It is an important property to consider in drug discovery.
ADMET properties | Refers to the absorption, distribution, metabolism, excretion, and toxicity of a drug candidate. These properties play a critical role in determining the safety and efficacy of a drug.
Alpha | In conformal prediction, alpha refers to the user-defined confidence level used to construct the prediction sets. The parameter determines the amount of error that the user is willing to tolerate in the predictions and is typically set to a small value, such as 0.1 or 0.2. A smaller alpha typically results in a wider prediction set.
Calibration (of conformal prediction) | In the Mondrian Inductive Conformal Prediction framework, calibration refers to the procedure of using the calibration set to determine the threshold for each class. The objective is that the proportion of true labels across prediction sets matches the desired confidence level, as specified by the alpha parameter.
Calibration set | A calibration set is a labeled subset of the dataset, held out from the training set, used to estimate the threshold for each class (in Mondrian ICP).
Conformal Prediction (CP) | A framework for constructing reliable prediction intervals or sets at a desired confidence level for a given machine learning model. The framework can be used with any machine learning model.
Coverage | Coverage of a conformal predictor is the proportion of times that the true label falls within the prediction sets produced by the predictor, over all the inputs.
Covariate shift | A phenomenon that occurs when the distribution of the input data changes between the training and testing phases of a machine learning model. It is assumed that the conditional distribution of the target variable given the input features (P(Y|X)) remains the same across the training and test sets.
Cumulative Distribution Function (CDF) | The CDF gives the cumulative probability of a random variable taking on a value less than or equal to a particular value.
De novo drug design model | A machine learning model used to generate novel drug candidates with desired properties. These models are based on generative techniques such as variational autoencoders, reinforcement learning, or genetic algorithms, and explore a large chemical space.
Energy-based model | A type of model that learns a function that assigns low energy scores to data points that are similar to the training data and high energy scores to data points that are dissimilar.
Exchangeability | Exchangeability refers to the property of a sequence of random variables such that the joint distribution of any permutation of the variables is the same as the joint distribution of the original sequence. Independent and identically distributed (IID) implies exchangeability. Exchangeability is an important consideration in the conformal prediction framework.
Fingerprint splitting | A method used to divide a dataset of molecules into training and testing sets based on the similarity of their molecular fingerprints, so that the test set is maximally dissimilar to the training set.
Generative model | A type of machine learning model that learns the distribution of a dataset and can be used to generate new data points (in this case, drug molecules) with similar properties.
Kernel Density Estimation | Kernel density estimation (KDE) is a non-parametric method for estimating the probability density function of a random variable based on a set of observations. It involves placing a kernel at each data point and summing the kernels to obtain a smoothed estimate of the density function.
Mondrian ICP | Mondrian Inductive Conformal Prediction (Mondrian ICP) is a variant of the conformal prediction framework that provides class-wise coverage guarantees for multi-class classification problems. The prediction sets are constructed to provide class-wise coverage guarantees, meaning that they are guaranteed to contain the true class label with a certain probability (determined by a user-defined confidence level) for each class.
Prediction set | A prediction set is a set of candidate labels (class values) for a given input. A prediction set is considered valid if it contains the true class label of an input.
Quantile Function | A function that maps a probability to a corresponding value in a distribution. It is the inverse of the cumulative distribution function.
Scaffold splitting | A method used to divide a dataset of molecules into training and testing sets while ensuring that the two sets do not share common scaffolds.
Toxicity | Refers to the potential of a drug to cause harm to living organisms. It is an important ADMET property to consider in drug discovery.
Validity | A conformal predictor is said to be valid if its coverage is at least the target level $1-\alpha$ implied by the user-defined significance level alpha used to construct the prediction sets.
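To connect the notation in Tables 8 and 9, below is a minimal, self-contained sketch of the weighted conformal prediction step at the core of CoDrug: likelihood-ratio weights $w(x_i)$ reweight the calibration nonconformity scores, and a class is kept whenever its score falls within the weighted quantile. The sketch uses random placeholder inputs, omits the Mondrian class-conditioning, and uses our own variable names; it illustrates the technique rather than reproducing the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_quantile(cal_scores, w_cal, w_test, alpha):
    """(1 - alpha) quantile of sum_i p_i * delta_{s_i} + p_test * delta_{+inf},
    where p_i = w_i / (sum_j w_j + w_test)."""
    order = np.argsort(cal_scores)
    s, w = cal_scores[order], w_cal[order]
    cum = np.cumsum(w) / (w.sum() + w_test)
    idx = np.searchsorted(cum, 1.0 - alpha, side="left")
    return np.inf if idx >= len(s) else s[idx]

def prediction_set(probs_test, cal_scores, w_cal, w_test, alpha=0.1):
    """Keep class y iff its nonconformity 1 - p_y(x) is within the quantile."""
    q = weighted_quantile(cal_scores, w_cal, w_test, alpha)
    return [y for y, p in enumerate(probs_test) if 1.0 - p <= q]

# Placeholder inputs standing in for the real pipeline:
p_true_cal = rng.uniform(0.3, 1.0, size=200)   # softmax prob of the true class
cal_scores = 1.0 - p_true_cal                  # nonconformity scores
log_p_test = rng.normal(size=200)              # KDE log density, test-set fit
log_p_cal = rng.normal(size=200)               # KDE log density, calibration fit
w_cal = np.exp(log_p_test - log_p_cal)         # likelihood ratios w(x_i)
w_test = 1.0                                   # likelihood ratio at the test point

print(prediction_set(np.array([0.7, 0.2, 0.1]), cal_scores, w_cal, w_test))
```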
# Semi-supervised Long-tailed Recognition using Alternate Sampling

Bo Liu UC, San Diego <EMAIL_ADDRESS>Haoxiang Li Wormpex AI Research <EMAIL_ADDRESS>Hao Kang Wormpex AI Research <EMAIL_ADDRESS>Nuno Vasconcelos UC, San Diego <EMAIL_ADDRESS>Gang Hua Wormpex AI Research <EMAIL_ADDRESS>

###### Abstract

The main challenges in long-tailed recognition come from the imbalanced data distribution and the sample scarcity in its tail classes. While techniques have been proposed to achieve a more balanced training loss and to improve tail-class data variation with synthesized samples, we resort to leveraging readily available unlabeled data to boost recognition accuracy. The idea leads to a new recognition setting, namely semi-supervised long-tailed recognition. We argue this setting better resembles the real-world data collection and annotation process and hence can help close the gap to real-world scenarios. To address the semi-supervised long-tailed recognition problem, we present an alternate sampling framework combining the intuitions from successful methods in these two research areas. The classifier and feature embedding are learned separately and updated iteratively. A class-balanced sampling strategy is used to train the classifier in a way that is not affected by the quality of the pseudo labels on the unlabeled data. A consistency loss is introduced to limit the impact of the unlabeled data while leveraging it to update the feature embedding. We demonstrate significant accuracy improvements over other competitive methods on two datasets.

## 1 Introduction

Large-scale datasets, which contain sufficient data in each class, have been a major factor in the success of modern deep learning models for computer vision tasks such as object recognition. These datasets are usually carefully curated and balanced to have a uniform data distribution over all classes. This balanced data distribution favors model training but can be impractical in many real-world applications, where the frequency of samples from different classes can be imbalanced, leading to a long-tailed data distribution. As shown in Figure 1(b), several highly populated classes take up most of the labeled samples, and some of the classes only have very few samples during training.

Figure 1: Comparison of different recognition paradigms: a) statistics of CIFAR-10 when used as a semi-supervised recognition benchmark; b) typical data distribution over classes in long-tailed recognition; c) the proposed semi-supervised long-tailed recognition setting, in which both labeled and unlabeled subsets follow the same underlying long-tailed data distribution.

The long-tailed recognition problem has been widely studied in the literature. One major challenge this setting [16, 10, 28] poses to deep learning models is the tendency to under-fit less-populated classes. The root causes of this under-fitting are the imbalanced training data distribution as well as the scarcity of data samples in the tail classes. More specifically, with an imbalanced training data distribution, where several head classes take up most of the training samples, tail classes contribute little to the training loss, and the model is thus biased towards head classes. Prior works [14, 2, 10, 28] tried to mitigate the issue by re-sampling the training data into a balanced distribution or by calibrating the sample weights in the loss. However, the scarcity of tail-class samples still limits intra-class variation and overall recognition accuracy.
Methods focusing on few-shot learning have been introduced to address this problem through data augmentation and data synthesis [23, 7, 15]. In this work, we take a different path and leverage massive unlabeled real data in training to help improve long-tailed recognition accuracy. Since data collection is much cheaper and more accessible than data annotation, additional unlabeled real data is readily available in many real-world scenarios. This semi-supervised learning setting has been intensively studied in the literature [12, 18, 21, 1, 20]. However, as shown in Figure 1(a), when we look carefully at the data distribution of the widely used benchmarks, we observe a well-balanced labeled subset and a well-balanced unlabeled subset. As discussed above, a manually curated balanced distribution can lead to a gap to real-world scenarios. This is especially true for unlabeled data: without labels, there is no way to balance the data among classes.

In this paper, we propose a more realistic and challenging setting, namely semi-supervised long-tailed recognition. As shown in Figure 1(c), we assume a long-tailed data distribution for the overall dataset, and both the labeled and unlabeled subsets of training data follow the same underlying long-tailed data distribution. This setting resembles a realistic data collection and annotation workflow. After collecting the raw data, one has no knowledge of its class distribution before annotation. As it is expensive to annotate the full corpus, a common practice is to randomly sample a subset for annotation under a given labeling budget. When the raw data follows a long-tailed class distribution, we should expect the same in the labeled subset.

While this new recognition paradigm shares the challenges of both semi-supervised learning and long-tailed recognition, no naive solution to it is readily available. Long-tailed recognition methods rely on class labels to achieve balanced training, but these labels are not available for the unlabeled portion of the data in semi-supervised long-tailed recognition. Prior semi-supervised methods that do not consider the long-tailed distribution can fail as well. Taking one competitive baseline as an example, Yang et al. [26] proposed to first train a recognition model with the labeled subset to generate pseudo labels for the unlabeled subset, and then fine-tune the model on the full training dataset. However, when the labeled subset follows a long-tailed distribution, the pseudo labels are much less accurate for tail classes than for head classes. As a result, the overall pseudo-label quality can be too poor to leverage (see Section 4.5 for results on CIFAR-10-SSLT and ImageNet-SSLT).

To address the semi-supervised long-tailed recognition problem, we present a method designed specifically for this setting. We bring in the successful class-balanced sampling strategy and combine it with model decoupling in an alternate learning framework to overcome the difficulty of balancing unlabeled training data. Inspired by [10], we decouple the recognition model into a feature embedding and a classifier, and train them with random sampling and class-balanced sampling, respectively. As we target a semi-supervised setting, the classifier is trained only on labeled data to get around the difficulty of correctly applying class-balanced sampling to unlabeled data, aligning with the intuition that the classifier needs more robust supervision than the feature embedding.
After that, with the proposed alternate learning framework, we improve the model by updating the feature embedding and the classifier iteratively. We assign pseudo labels with the up-to-date classifier and observe gradually improving pseudo-label accuracy over iterations. The pseudo labels are then incorporated in fine-tuning the feature embedding, with a regularization term to limit their potential negative impact. A similar iterative design has been proposed in the semi-supervised learning literature [12, 21], but important implementation details differ.

To summarize, in this paper, 1) we resort to semi-supervised learning to help improve long-tailed recognition accuracy and identify a practical gap in current semi-supervised recognition datasets due to their well-balanced unlabeled subsets; 2) we propose a new recognition paradigm named semi-supervised long-tailed recognition that better resembles the real-world data collection and annotation workflow; 3) we propose a new alternate sampling method to address semi-supervised long-tailed recognition and demonstrate significant improvements on several benchmarks.

## 2 Related Work

Long-tailed recognition has been widely studied recently [25, 17, 14, 27, 16, 24]. Several approaches have been proposed, including metric learning [17, 27], loss weighting [14], and meta-learning [24]. Some methods design dedicated loss functions to mitigate the data imbalance problem. For example, the lifted loss [17] introduces margins between training samples. The range loss [27] encourages data from the same class to be close and different classes to be far away in the embedding space. The focal loss [14] dynamically balances the weights of positive, hard negative, and easy negative samples. As reported by Liu et al. [16], when applied to long-tailed recognition, many of these methods improve the accuracy of the few-shot group, but at the cost of lower accuracy on the many-shot classes.

Other methods, e.g. LDAM-DRW [2], replace the cross-entropy loss with the LDAM loss, which adds a calibration factor to the original cross-entropy loss. When combined with loss re-weighting, it improves the accuracy in all splits in long-tailed recognition. However, it cannot easily be generalized to semi-supervised learning, because both the calibration factor and the loss weight are calculated from the number of samples in each class. In face recognition and person re-identification, the datasets mostly follow long-tailed distributions. LEAP [15] augmented data samples from tail (few-shot) classes by transferring intra-class variations from head (many-shot) classes. Instead of data augmentation, we introduce unsupervised data to improve the performance of long-tailed recognition. A recent work [26] rethinks the value of labels in imbalanced learning; semi-supervised learning is included as part of that discussion, but only the basic pseudo-label solution and simple datasets, such as CIFAR and SVHN, are considered.

More recent works [10, 28] with improved long-tailed recognition share the observation that the feature embedding and the classifier should be trained with different sampling strategies. In this work, we build our method on this observation, learning the feature embedding with random sampling and training the classifier with class-balanced sampling. This design is, moreover, closely compatible with semi-supervised learning under alternate learning.

Semi-supervised learning has been extensively discussed in the recognition literature [12, 18, 21].
A common approach is to optimize the traditional cross-entropy loss together with a regularization term that enforces perturbation consistency on unlabeled data. The ladder network [18] is introduced to minimize the reconstruction loss between the network outputs for a given sample and its perturbation. It is then simplified in [12] into two temporal modules: the $\Pi$-Model and Temporal Ensembling. Temporal Ensembling encourages the output of the network on unlabeled data to be similar to its counterpart from the previous training epoch. More recently, Mean Teacher [21] extends this idea by ensembling along the training trajectory: instead of storing previous predictions, it assembles a Teacher model as the moving average of the training network, i.e. the Student. The Teacher is then used to provide prediction-consistency targets to the Student. In addition, MA-DNN [4] introduces a memory module to maintain category prototypes and provide regularization for learning with unlabeled data. Label propagation [13] is also considered, with the help of a label graph. More recently, MixMatch [1] and FixMatch [20] improve performance by introducing powerful data augmentations and perturbation consistencies.

None of the semi-supervised methods above separates labeled from unlabeled data during semi-supervised training. In fact, it is beneficial to combine labeled and unlabeled data in a certain proportion [12, 21]. However, without further knowledge, we have no insight into how to handle this combination when the distribution is long-tailed. Furthermore, long-tailed learning methods require calibration or re-sampling based on the class distribution, and the combination of labeled and unlabeled data makes this distribution unstable. As a result, such a combination is not suitable for long-tailed recognition.

Recently, Salsa [19] proposes to decouple supervised learning from semi-supervised training. Our method follows its alternate training scheme, because it is surprisingly compatible with long-tailed learning. In practice, our method differs from Salsa in the following aspects. First, we adopt class-balanced sampling in supervised learning to deal with the long-tailed distribution. Second, we use supervised learning instead of self-supervised learning as initialization; we find that self-supervised learning results in inferior performance in the long-tailed scenario. Third, re-initialization is not needed: because our initialization already comes from supervised learning, there is no specific starting point to which the model must be re-initialized. In fact, this strengthens the soft constraint between the two stages of [19]. With the models continuously optimized along alternate learning, our method achieves superior performance while maintaining the same number of training epochs as simply fine-tuning on pseudo labels.

Figure 2: Initialization procedure. A recognition model is first trained with random sampling. After that, the feature embedding is used to train a new classifier with class-balanced sampling. In the diagram, CNN components that are updated during training are highlighted in red.

Figure 3: Diagram of alternate learning. CNN modules outlined in green are only used in forwarding; those in red are fine-tuned with the corresponding loss. In Stage 1, samples from ${\cal U}$ are forwarded through $f$ and $g$. ${\cal U^{\prime}}$ consists of samples from ${\cal U}$ and pseudo labels acquired from $g$. In Stage 2, $f$ and $g^{\prime}$ are trained on the combination of ${\cal D}$ and ${\cal U^{\prime}}$. In Stage 3, only the classifier $g$ is trained; $f$ is fixed and only used in forwarding.
## 3 Method

In this section, we introduce the proposed method for semi-supervised long-tailed recognition. The semi-supervised long-tailed recognition problem is first defined, and some notation is clarified. The decoupling strategy for long-tailed recognition is then discussed; this is also the initialization phase of our method. After that, the alternate learning scheme with 3 stages is fully discussed. The proposed method is outlined in Algorithm 1.

### 3.1 Semi-supervised Long-tailed Recognition

We start by defining the semi-supervised long-tailed recognition problem. Consider an image recognition problem with a labeled training set ${\cal D}=\{(\mathbf{x}_{i},\mathbf{y}_{i});i=1,\ldots,N\}$, where $\mathbf{x}_{i}$ is an example, $\mathbf{y}_{i}\in\{1,\ldots,C\}$ its label, and $C$ the number of classes. For semi-supervised learning, there is also an unsupervised training subset ${\cal U}=\{\mathbf{x}_{i};i=1,\ldots,M\}$. Although the labels of the data in ${\cal U}$ are not available, every sample has an underlying label in $\{1,\ldots,C\}$. For class $j$, we have $n_{j}$ samples from ${\cal D}$ and $m_{j}$ samples from ${\cal U}$. With the assumption that supervised and unsupervised data follow the same distribution, we have

$\frac{n_{j}}{N}=\frac{m_{j}}{M},\quad\forall j.$ (1)

The testing set, on the other hand, is sampled to be balanced over all classes in $\{1,\ldots,C\}$, in order to evaluate the performance on every class without bias.

### 3.2 Model Decoupling and Data Sampling

A CNN model combines a feature embedding $\mathbf{z}=f(\mathbf{x};\theta)\in\mathbb{R}^{d}$ and a classifier $g(\mathbf{z})\in[0,1]^{C}$. The embedding $f(\mathbf{x};\theta)$ is implemented by several convolutional layers with parameters $\theta$. The classifier operates on the embedding to produce a class prediction $\hat{y}=\arg\max_{i}g_{i}(\mathbf{z})$. In this work, we adopt the popular linear classifier $g(\mathbf{z})=\nu(\mathbf{W}\mathbf{z}+\mathbf{b})$, where $\nu$ is the softmax function. Standard (random sampling) training of the CNN relies on mini-batch SGD, where each batch is randomly sampled from the training data. A class $j$ with $n_{j}$ training examples then has probability $\frac{n_{j}}{N}$ of being represented at each draw. Without loss of generality, we assume classes sorted by decreasing cardinality, i.e. $n_{i}\leq n_{j}$, $\forall i>j$. In the long-tailed setting, where $n_{1}\gg n_{C}$, the model is not fully trained on classes of large index $j$ (tail classes) and under-fits. This can be avoided with recourse to non-uniform sampling strategies, the most popular of which is class-balanced sampling, which samples each class with probability $\frac{1}{C}$ and thereby over-samples tail classes. [10, 28] show that while the classifier benefits from class-balanced sampling, the feature embedding is more robust when trained with random sampling. Practically, [10] achieves this by decoupling the training into two stages: the feature embedding is trained with random sampling in the first stage, and the classifier with class-balanced sampling in the second.
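As an illustration, class-balanced sampling is easy to realize in PyTorch with a weighted sampler whose per-sample weight is $1/n_{y_i}$, which makes every class equally likely; the snippet below is a sketch with toy data of our own, not the authors' code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed labels: class 0 is the head, class 2 the tail.
labels = torch.tensor([0] * 100 + [1] * 20 + [2] * 5)
data = torch.randn(len(labels), 8)

# Per-sample weight 1 / n_{y_i} makes each class equally likely (prob 1/C).
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]

balanced_loader = DataLoader(
    TensorDataset(data, labels),
    batch_size=16,
    sampler=WeightedRandomSampler(sample_weights,
                                  num_samples=len(labels),
                                  replacement=True),
)
# Random sampling (used for the embedding) is simply shuffle=True instead.
x, y = next(iter(balanced_loader))
print(torch.bincount(y, minlength=3))  # roughly uniform class counts
```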
### 3.3 Initialization

The initialization of the proposed method follows the decoupling of [10]. The two-stage initialization is illustrated in Figure 2. A CNN model is first trained with random sampling, yielding a feature embedding $\mathbf{z}=f(\mathbf{x};\theta)\in\mathbb{R}^{d}$ and a classifier $g^{\prime}(\mathbf{z})\in[0,1]^{C}$. After convergence, the classifier is re-initialized and trained with class-balanced sampling, with the feature embedding fixed. This results in a class-balanced classifier $g(\mathbf{z})\in[0,1]^{C}$. Both the feature embedding and the classifiers are trained on the supervised training subset ${\cal D}$.

Algorithm 1 Alternate Learning for Semi-supervised Long-tailed Recognition

1: Initialization: Input: supervised training subset ${\cal D}$. Output: feature embedding $f$, randomly-trained classifier $g^{\prime}$, class-balanced classifier $g$. A CNN model is trained with random sampling; the feature embedding $f$ is then used to train a new classifier with class-balanced sampling.
2: Alternate Learning:
3: for $i=1,\ldots,N$ do
4:  Stage 1: Label assignment. $f$ and $g$ are used to assign labels to all samples in the unlabeled subset ${\cal U}$. The set ${\cal U}$ combined with the assigned labels is ${\cal U^{\prime}}$.
5:  Stage 2: Semi-supervised training. Fine-tune $f$ and $g^{\prime}$ on the set ${\cal D}\cup{\cal U^{\prime}}$. Random sampling is used to minimize the semi-supervised loss ${\cal L}_{semi}$.
6:  Stage 3: Supervised training. Fine-tune $g$ on the set ${\cal D}$. Class-balanced sampling is used to minimize the supervised loss ${\cal L}_{sup}$. $f$ is used to compute features but is not fine-tuned.

### 3.4 Alternate Learning

After obtaining an initialized model, most semi-supervised learning methods fine-tune the model on a combination of supervised and unsupervised samples. This is, however, incompatible with our long-tailed recognition model: on unsupervised data, we have no ground truth for class-balanced sampling. One can fall back on pseudo labels assigned by the initialized model, but the effectiveness then depends on the accuracy of those pseudo labels. This is even worse considering that long-tailed models usually perform better on highly populated classes and worse on few-shot classes, while class-balanced sampling over-samples few-shot classes and down-samples many-shot ones. This means that, in general, the worse part of the pseudo labels contributes more to the training loss than it should, while the better part contributes less.

Another difficulty is model compatibility when combining the long-tailed model with semi-supervised learning methods. Many semi-supervised learning methods evolve the model and the pseudo labels at the same time. For example, Mean Teacher [21] assembles the Teacher model by moving average and trains the Student with a consistency loss. For the long-tailed model, it is not clear when we should update the feature embedding or the classifier, and it is also difficult to incorporate both random and class-balanced sampling.

Inspired by [19], which separates supervised learning from semi-supervised learning, we propose an alternate learning scheme. The supervised training on data ${\cal D}$ and the semi-supervised training on data ${\cal D}\cup{\cal U}$ are carried out in an alternate fashion, together with model decoupling and different data sampling strategies. In practice, after initialization, we have a feature embedding $\mathbf{z}=f(\mathbf{x};\theta)$, a classifier $g^{\prime}(\mathbf{z})$ trained with random sampling, and a classifier $g(\mathbf{z})$ trained with class-balanced sampling. In [10], only $g(\mathbf{z})$ is used in testing; however, we keep the randomly-trained classifier $g^{\prime}(\mathbf{z})$ for further use. The training scheme iterates among $3$ stages for $N$ loops, as shown in Figure 3.
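Before detailing the three stages, the outer loop of Algorithm 1 can be summarized in code. The sketch below uses tiny linear stand-ins for $f$, $g^{\prime}$, and $g$ and a single gradient step per stage, purely to show how the three components alternate; the sampling strategies and the consistency term of Stage 2 are elided, and all names are schematic scaffolding of our own rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Tiny stand-ins for the real networks: f = feature embedding,
# g_prime = randomly-sampled classifier, g = class-balanced classifier.
f = torch.nn.Linear(8, 16)
g_prime = torch.nn.Linear(16, 3)
g = torch.nn.Linear(16, 3)

x_lab = torch.randn(64, 8)
y_lab = torch.randint(0, 3, (64,))
x_unlab = torch.randn(256, 8)

for loop in range(5):  # N loops over the three stages
    # Stage 1: assign pseudo labels to U with the class-balanced classifier g.
    with torch.no_grad():
        y_pseudo = g(f(x_unlab)).argmax(dim=1)

    # Stage 2: fine-tune f and g' on D ∪ U' (random sampling in practice).
    opt = torch.optim.SGD(list(f.parameters()) + list(g_prime.parameters()), lr=0.1)
    x_all = torch.cat([x_lab, x_unlab])
    y_all = torch.cat([y_lab, y_pseudo])
    opt.zero_grad()
    F.cross_entropy(g_prime(f(x_all)), y_all).backward()
    opt.step()

    # Stage 3: fine-tune only g on D (class-balanced batches in practice);
    # the embedding f is frozen and used only for forwarding.
    opt_g = torch.optim.SGD(g.parameters(), lr=0.1)
    with torch.no_grad():
        z_lab = f(x_lab)
    opt_g.zero_grad()
    F.cross_entropy(g(z_lab), y_lab).backward()
    opt_g.step()
```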
Stage 1: Label assignment. In this stage, pseudo labels are assigned to the unsupervised subset ${\cal U}$. The feature embedding $f(\mathbf{x};\theta)$ and the class-balanced classifier $g(\mathbf{z})$ are used; this choice of classifier matches the long-tailed model used at test time, which has the better overall accuracy. The unsupervised subset with pseudo labels is ${\cal U^{\prime}}=\{(\mathbf{x}_{i},\hat{\mathbf{y}}_{i});i=1,\ldots,M\}$, where $\hat{\mathbf{y}}_{i}$ are the pseudo labels.

Stage 2: Semi-supervised training. After label assignment, we have pseudo labels for all unsupervised data. The model is then fine-tuned on the combination of true and pseudo labels, i.e. on ${\cal D}\cup{\cal U^{\prime}}$. In this stage, random sampling is used to update the feature embedding $f(\mathbf{x};\theta)$ and the randomly-trained classifier $g^{\prime}(\mathbf{z})$. The classification is optimized by the cross-entropy loss:

${\cal L}_{CE}=\sum_{(\mathbf{x}_{i},y_{i})\in{\cal D}\cup{\cal U^{\prime}}}-\log g^{\prime}_{y_{i}}(f(\mathbf{x}_{i};\theta)),$ (2)

where $g^{\prime}_{y_{i}}$ is the $y_{i}$-th element of $g^{\prime}$. In the semi-supervised learning literature, a regularization loss is usually applied to maintain consistency on unlabeled data. This consistency loss captures the fact that data points in the same neighborhood usually share the same label. We adopt this idea and implement the temporal consistency of [12]. In practice, the class probabilities are acquired from $g^{\prime}$. Given the class probability $p^{e-1}$ from epoch $e-1$ and the class probability $p^{e}$ from epoch $e$, the loss is the KL-divergence between the two:

${\cal L}_{consist}=\sum_{(\mathbf{x}_{i},y_{i})\in{\cal D}\cup{\cal U^{\prime}}}\sum_{j}p^{e-1}_{j}\log\frac{p^{e-1}_{j}}{p^{e}_{j}},$ (3)

where $p^{e-1}_{j}$ and $p^{e}_{j}$ are the $j$-th elements of $p^{e-1}$ and $p^{e}$, respectively. Overall, the semi-supervised learning loss is the combination of the two:

${\cal L}_{semi}={\cal L}_{CE}+\lambda{\cal L}_{consist}.$ (4)

Stage 3: Supervised training. We update the class-balanced classifier $g(\mathbf{z})$ on top of the refined feature embedding fine-tuned with semi-supervised learning in Stage 2. Specifically, the fine-tuning uses class-balanced sampling and only the supervised subset ${\cal D}$. In this stage, only the classifier is updated; the feature embedding is fixed and only used in forwarding. Given the class-balanced version ${\cal D^{\prime}}$ of the supervised subset, the cross-entropy loss for classification is

${\cal L}_{sup}=\sum_{(\mathbf{x}_{i},y_{i})\in{\cal D^{\prime}}}-\log g_{y_{i}}(f(\mathbf{x}_{i};\theta)),$ (5)

where $g_{y_{i}}$ is the $y_{i}$-th element of $g$.
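The losses in Eqs. (2)-(4) translate to a few lines of PyTorch; the sketch below uses our own variable names, and `probs_prev` stands for the class probabilities cached from the previous epoch as in the temporal ensembling of [12]. Note that the `batchmean` reduction averages rather than sums over samples, which only rescales $\lambda$.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits, targets, probs_prev, lam=1.0):
    """L_semi = L_CE + lambda * L_consist, cf. Eqs. (2)-(4).

    logits:     g'(f(x)) for the current epoch e, over a batch of D ∪ U'
    targets:    true or pseudo labels
    probs_prev: softmax probabilities cached from epoch e-1
    """
    ce = F.cross_entropy(logits, targets)
    # KL(p^{e-1} || p^e): kl_div expects the current log-probabilities as input
    # and the (fixed) previous-epoch probabilities as the target.
    consist = F.kl_div(F.log_softmax(logits, dim=1), probs_prev,
                       reduction="batchmean")
    return ce + lam * consist

logits = torch.randn(32, 10)
targets = torch.randint(0, 10, (32,))
probs_prev = F.softmax(torch.randn(32, 10), dim=1)
print(semi_supervised_loss(logits, targets, probs_prev))
```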
### 3.5 Insight of the Design

The feature embedding is trained with random sampling and semi-supervised learning. This is consistent with the long-tailed model's sampling scheme, and it also reflects the fact that the feature embedding is less sensitive to noisy labels; indeed, in the self-supervised learning literature [6, 8, 3], the feature embedding can even be learned without labels.

The classifier is learned with class-balanced sampling and only supervised data, again matching the supervised version. By avoiding fitting the classifier on pseudo labels, we prevent wrong labels from propagating through the whole training process: since the pseudo labels are produced by the classifier, if the classifier were also optimized on them, wrong labels could easily persist in the fine-tuned classifier. Training the classifier only on labeled data also avoids the dilemma of class balancing on unlabeled data. Without ground-truth labels, class-balanced sampling can only rely on pseudo labels, which are not perfect; and the fact that pseudo labels have more errors on few-shot classes makes them especially unsuitable for class-balanced sampling, because when few-shot classes are over-sampled, those errors are also scaled up during training.

Table 1: Results (accuracy in $\%$) on CIFAR-10-SSLT. ResNet-18 is used for all methods.

 | Imbalance factor=100 | Imbalance factor=1000
---|---|---
Method | Overall | Many-Shot | Medium-Shot | Few-Shot | Overall | Many-Shot | Medium-Shot | Few-Shot
LDAM-DRW (L) [2] | 67.4 | 79.7 | 54.2 | 68.1 | 46.2 | 70.3 | 36.3 | 35.6
Pseudo-Label + L | 69.6 | 69.7 | 55.1 | 80.2 | 48.4 | 74.0 | 39.3 | 36.0
Mean Teacher [21] + L | 69.9 | 69.7 | 57.3 | 79.4 | 48.3 | 75.7 | 41.4 | 32.9
Decoupling (D) [10] | 64.0 | 91.1 | 63.0 | 44.4 | 45.8 | 86.5 | 47.2 | 14.4
Pseudo-Label + D | 68.9 | 92.7 | 70.8 | 49.8 | 46.5 | 89.0 | 47.0 | 14.2
Ours | 71.3 | 89.5 | 67.7 | 60.2 | 66.7 | 84.4 | 69.4 | 51.4

Table 2: Results (accuracy in $\%$) on ImageNet-SSLT. ResNet-18/50 are used for all methods. Classes are many-shot for $n>100$, medium-shot for $n\in(10,100]$, and few-shot for $n\leq 10$, where $n$ is the number of labeled samples.

 | ResNet-18 | ResNet-50
---|---|---
Method | Overall | Many-Shot | Medium-Shot | Few-Shot | Overall | Many-Shot | Medium-Shot | Few-Shot
LDAM-DRW (L) [2] | 21.3 | 42.6 | 27.0 | 8.6 | 24.9 | 51.2 | 31.1 | 9.9
Pseudo-Label + L | 17.6 | 22.4 | 20.9 | 12.6 | 23.9 | 44.0 | 30.0 | 11.1
Mean Teacher [21] + L | 21.3 | 41.8 | 28.1 | 7.6 | 25.6 | 49.1 | 31.8 | 11.7
Decoupling (D) [10] | 24.8 | 53.9 | 31.1 | 8.7 | 27.2 | 58.5 | 34.2 | 9.8
Pseudo-Label + D | 25.3 | 47.6 | 32.1 | 11.1 | 27.7 | 52.2 | 34.7 | 12.4
Ours | 26.5 | 52.0 | 33.9 | 10.7 | 29.0 | 57.1 | 36.5 | 12.3

## 4 Experiments

### 4.1 Datasets

We manually curate two semi-supervised long-tailed recognition benchmarks.

CIFAR-10-SSLT. For easy comparison and ablation, we compose a lightweight semi-supervised long-tailed dataset based on CIFAR-10 [11]. Following [2], we randomly sample the training set of CIFAR-10 under an exponential profile with imbalance ratios in $\{100,1000\}$ (the ratio of the most populated class to the least populated). The unsupervised subset is collected from Tiny Images [22] following the strategy introduced in [26]. The class distribution of the unlabeled data is always the same as that of the labeled data, but the subset is $5$ times larger. For better description and comparison, we assign the $10$ classes to $3$ splits: many-shot (the $3$ most populated classes), medium-shot (the middle $3$), and few-shot (the $4$ least populated).

ImageNet-SSLT. To evaluate the effectiveness of semi-supervised long-tailed recognition methods on large-scale datasets, we assemble a challenging dataset from ImageNet (ILSVRC-2012) [5]. The supervised subset is sampled with a Lomax distribution with shape parameter $\alpha=6$ and scale parameter $\lambda=1000$. It contains $41,134$ images from $1000$ classes, with a maximum of $250$ images per class and a minimum of $2$ samples. The unsupervised subset is sampled under the same distribution with an unsupervised factor of $4$, i.e. $|{\cal U}|=4|{\cal D}|$. The $1000$ classes are divided into 3 splits based on the number of labeled samples $n$: many-shot ($n>100$), medium-shot ($10<n\leq 100$), and few-shot ($n\leq 10$). As a result, the dataset has $140$ many-shot, $433$ medium-shot, and $427$ few-shot classes. Methods are evaluated on all classes and on each class split.
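For reference, the per-class labeled counts of both benchmarks follow simple closed forms, sketched below: an exponential profile for CIFAR-10-SSLT and Lomax draws for ImageNet-SSLT. The clipping and rounding details are our assumptions; the exact normalization that yields the reported $41,134$ total images is not specified above, so the Lomax sketch reproduces only the distributional shape.

```python
import numpy as np

def exp_longtail_counts(n_max=5000, num_classes=10, imbalance=100):
    """Exponential profile: n_j = n_max * imbalance^{-j/(C-1)}, so the head
    class keeps n_max samples and the tail class n_max / imbalance."""
    j = np.arange(num_classes)
    return np.floor(n_max * imbalance ** (-j / (num_classes - 1))).astype(int)

def lomax_counts(num_classes=1000, shape=6.0, scale=1000.0, lo=2, hi=250, seed=0):
    """Lomax(alpha, lambda) draws for labeled per-class counts; NumPy's pareto
    samples a Lomax with unit scale, so we multiply by lambda. Clipping to
    [2, 250] mirrors the reported min/max class sizes."""
    rng = np.random.default_rng(seed)
    draws = scale * rng.pareto(shape, size=num_classes)
    return np.clip(draws, lo, hi).astype(int)

print(exp_longtail_counts())   # head class 5000 down to tail class 50
print(lomax_counts()[:10])
```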
### 4.2 Network Architecture

ResNet-18 [9] is used on both CIFAR-10-SSLT and ImageNet-SSLT for fast experiments and comparison. ResNet-50 [9] is used on ImageNet-SSLT to show how methods scale up to larger networks.

### 4.3 Comparison Methods

To the best of our knowledge, there is no existing method designed for semi-supervised long-tailed recognition. We therefore take typical long-tailed recognition methods and semi-supervised recognition methods and combine them as baselines.

Long-tailed Recognition. We consider two long-tailed methods, one for loss calibration and the other for re-sampling. LDAM-DRW [2] converts the cross-entropy loss to the LDAM loss with calibration factors based on class counts; it further regulates the loss with a loss weight also derived from class counts. Decoupling [10] decouples the training of the embedding and the classifier with different sampling strategies; this is also the initialization of our method.

Semi-supervised Recognition. Pseudo-Label is a basic semi-supervised learning algorithm that can easily be combined with other models. It contains two phases. In the first, initialization phase, the recognition model is trained on labeled data, and the predictions of the initialized model are assigned to the unlabeled data as pseudo labels. The initialized model is then trained or fine-tuned on the combination of labeled and unlabeled data. In practice, we combine the Pseudo-Label method with the two long-tailed recognition models to create two semi-supervised long-tailed recognition baselines; Pseudo-Label combined with LDAM-DRW is the method used in [26]. Mean Teacher [21] is a well-known semi-supervised learning method. It contains a Student model that is trained with SGD and a Teacher model that is updated as the moving average of the Student. It is, however, unclear how to train it with Decoupling, so we only implement the LDAM loss with Student training.

### 4.4 Training Detail

In initialization, the feature embedding is trained for $200$ epochs, and the classifier is learned in $10$ epochs after that. Stage 2 consists of $40$ epochs of fine-tuning of the embedding on the whole dataset; over $5$ loops of the stages, this totals $200$ epochs of embedding fine-tuning. There are also $10$ epochs of classifier fine-tuning in Stage 3 per loop. In the semi-supervised learning loss (4), $\lambda=1$ is used. The SGD optimizer with a learning rate of $0.1$ and cosine annealing is used during training in all stages. The momentum is $0.9$, and the weight decay is $0.0005$. All comparison methods are implemented with the hyper-parameters from their papers; the authors' code is used when available.

### 4.5 Results

CIFAR-10-SSLT results are shown in Table 1 for imbalance ratios $100$ and $1000$. Our method outperforms all other methods in overall accuracy. Our initialized model is equivalent to Decoupling, which shows the worst performance among all methods. Alternate learning improves the overall performance by more than $7\%$ when the imbalance factor is $100$, and by $20\%$ when the imbalance factor is $1000$; most of the improvement comes from the medium- and few-shot classes. The larger improvement on the more imbalanced distribution shows that our method is more effective on more skewed datasets. When Pseudo-Label is added on top of Decoupling, around $5\%$ improvement is achieved with imbalance factor $100$, but this improvement diminishes when the data is more imbalanced.
This suggests that Pseudo-Label is more sensitive to poor labeling of the tail classes. While improving upon Pseudo-Label, our method uses the same number of training epochs on unsupervised data. The extra computation in our method relative to Pseudo-Label comes from Stages 1 and 3. However, the classifier training is only on supervised data and only the linear classifier is updated, and label assignment does not involve any back-propagation, so the extra time spent in these two stages is negligible compared to training the whole model on the whole dataset.

LDAM-DRW provides very competitive results without any semi-supervised learning when the imbalance factor is 100. However, it scales up poorly when combined with semi-supervised techniques: adding Pseudo-Label improves overall accuracy by only 2%. Looking at the split results, we find that it improves few-shot performance at the cost of many-shot performance. We believe this is because of the incorrect balancing factor introduced in the LDAM loss, which does not match the true distribution and skews the training process. Mean Teacher makes little difference from Pseudo-Label on LDAM-DRW.

ImageNet-SSLT results are shown in Table 2. Our method outperforms all baseline methods with both the ResNet-18 and ResNet-50 architectures. The ImageNet-SSLT setting is so challenging that all methods achieve below 30% overall accuracy. In fact, our method is the only one that improves few-shot performance while maintaining many-shot accuracy. On ImageNet-SSLT, Pseudo-Label-based methods lose efficacy, because they improve few-shot performance at a sacrifice in many-shot performance. This sacrifice is sometimes large, as for Pseudo-Label + LDAM-DRW with ResNet-18. It is not observed when Pseudo-Label is used on CIFAR-10-SSLT, where it improves many-shot performance. This may be due to the poor many-shot pseudo-label quality on ImageNet-SSLT: unlike CIFAR-10-SSLT, where the initialized model reaches 90% accuracy on many-shot classes, many-shot performance on ImageNet-SSLT is only around 50%. These wrong labels can mislead the training and lower the performance of Pseudo-Label methods. Our method, on the other hand, updates the pseudo labels iteratively and is less prone to this problem. In particular, adding Pseudo-Label to LDAM-DRW decreases the overall performance, which can again be explained by the fact that its balancing factor does not match the true distribution. Mean Teacher improves upon LDAM-DRW when ResNet-50 is used, but it is still not as good as ours.

### 4.6 Ablations

We further study the training choices of alternate learning. This consists of two parts: the sampling choices and the semi-supervised learning choices. Results on CIFAR-10-SSLT with imbalance factor 100 are listed in Table 3.

Sampling choice. During alternate learning we currently use random sampling in Stage 2 and class-balanced sampling in Stage 3, consistent with long-tailed recognition [10]. However, other combinations are possible. Results are listed in the first 3 lines of Table 3, with the naming format {sampling in Stage 2} + {sampling in Stage 3}, where "R" stands for random sampling and "C" for class-balanced sampling. None of the 3 alternatives beats the initialized model (Decoupling). This is expected. When the classifier is randomly trained ("R+R" and "C+R"), the model performs poorly on few-shot classes, which in turn harms the training of the embedding through the pseudo labels on the unsupervised subset. "C+C" trains the feature embedding with class-balanced sampling; however, it balances over pseudo labels, which can be wrong, and the results show that this balancing yields an inferior feature embedding. A minimal sketch contrasting the two sampling strategies is given below.
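To make the "R" and "C" variants concrete, the following sketch contrasts them using PyTorch samplers; the function name and defaults are ours, not the code used in our experiments. Random sampling draws each example with equal probability, so head classes dominate a batch; class-balanced sampling reweights examples by inverse class frequency so every class is drawn at (approximately) the same rate.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_loader(dataset, labels, batch_size=128, class_balanced=False):
    if not class_balanced:
        # Random sampling ("R"): uniform over examples, so the head dominates.
        return DataLoader(dataset, batch_size=batch_size, shuffle=True)
    # Class-balanced sampling ("C"): weight each example by 1 / n_class,
    # so every class contributes equally in expectation.
    labels = np.asarray(labels)
    class_counts = np.bincount(labels)
    weights = 1.0 / class_counts[labels]
    sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                    num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```

Note that when the "C" sampler is driven by pseudo labels rather than ground truth (the "C+C" variant), the weights inherit the pseudo-label errors on few-shot classes, which is exactly the failure mode discussed above.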
Semi-supervised learning choice. We train the feature embedding on the whole dataset, i.e. ${\cal D}\cup{\cal U^{\prime}}$, and the classifier on the labeled subset ${\cal D}$. Other combinations can also be investigated: the classifier can also be trained in a semi-supervised manner, i.e. on ${\cal D}\cup{\cal U^{\prime}}$, while the feature embedding is trained with or without ${\cal U^{\prime}}$. We show the results in the last 2 lines of Table 3. In these two experiments the classifier is always trained on ${\cal D}\cup{\cal U^{\prime}}$; the difference is whether ${\cal U^{\prime}}$ is used for embedding learning. Compared to the regular setting, where the classifier is trained on ${\cal D}$, training it on ${\cal D}\cup{\cal U^{\prime}}$ gives slightly lower performance. This can be explained by the fact that wrong pseudo labels in ${\cal U^{\prime}}$ can be propagated through the loops if the classifier is optimized on them. This is especially true for few-shot classes, where the accuracy is low; because of class-balanced sampling, the impact of few-shot classes is amplified. Indeed, compared to Table 1, the main performance drop is on few-shot classes, which confirms our assumption. When we further remove the unsupervised training of the embedding, the performance drops substantially, falling even below the initialized model (Decoupling). In this case the feature embedding should be equivalent to that of the initialization and the only difference is the classifier; this further shows that fine-tuning the classifier on pseudo labels harms performance.

Accuracy on the unsupervised training subset. In Stage 1, we assign pseudo labels to all samples in ${\cal U}$. Table 4 shows how the accuracy changes across loops for all splits. Few-shot performance improves much faster than the others. This demonstrates the effectiveness of our alternate learning scheme and explains why our method outperforms the baselines by a large margin on few-shot classes. The unsupervised subset has a long-tailed distribution, so the overall performance is dominated by many-shot classes; nevertheless, alternate learning still benefits from the improvement on the few-shot split. Accuracy on the different splits is more useful when analyzing how the model evolves during training.

Table 3: Ablation results (accuracy in $\%$) on CIFAR-10-SSLT with imbalance factor 100. Sampling methods are denoted R for random and C for class-balanced. The last two method names show the data on which the embedding is trained.

| Method | Overall | Many-Shot | Medium-Shot | Few-Shot |
|---|---|---|---|---|
| R + R | 50.9 | 93.0 | 57.8 | 14.1 |
| C + R | 61.2 | 91.3 | 62.6 | 37.6 |
| C + C | 63.3 | 91.2 | 64.4 | 41.6 |
| ${\cal D}\cup{\cal U^{\prime}}$ | 70.1 | 89.6 | 68.7 | 56.5 |
| ${\cal D}$ | 63.3 | 91.6 | 61.9 | 43.2 |

Table 4: Pseudo-label accuracy on the unlabeled training subset. CIFAR-10-SSLT with imbalance ratio 100 is used. Unlike the test set, the unsupervised subset is not balanced; as a result, the overall accuracy is higher than on the test set because of the domination of many-shot classes. The results on the many/medium/few-shot splits are more informative.
| Loop | Overall | Many-Shot | Medium-Shot | Few-Shot |
|---|---|---|---|---|
| 0 | 87.7 | 92.3 | 63.0 | 41.8 |
| 1 | 87.9 | 92.3 | 64.0 | 48.1 |
| 2 | 87.8 | 92.1 | 64.7 | 52.2 |
| 3 | 87.8 | 91.8 | 65.3 | 55.8 |
| 4 | 87.7 | 91.6 | 65.8 | 57.8 |

## 5 Conclusion

This work introduces the semi-supervised long-tailed recognition problem, which extends the long-tailed problem with unsupervised data. Since the labeled and unlabeled data obey the same distribution, this problem setting follows a realistic data collection and annotation workflow. A method based on alternate learning is proposed. By separating supervised from semi-supervised training and decoupling the sampling strategies, it incorporates the decoupled training scheme of long-tailed recognition into semi-supervised learning. Experiments show that the proposed method outperforms all current baselines. When results are split by class cardinality, the method proves robust to defective pseudo labels, especially for few-shot classes.

## References

* [1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.
* [2] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. arXiv preprint arXiv:1906.07413, 2019.
* [3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020.
* [4] Yanbei Chen, Xiatian Zhu, and Shaogang Gong. Semi-supervised deep learning with memory. In Proceedings of the European Conference on Computer Vision (ECCV), pages 268–283, 2018.
* [5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
* [6] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
* [7] Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In Proceedings of the IEEE International Conference on Computer Vision, pages 3018–3027, 2017.
* [8] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
* [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [10] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling representation and classifier for long-tailed recognition. In Eighth International Conference on Learning Representations (ICLR), 2020.
* [11] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [12] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
* [13] Qimai Li, Xiao-Ming Wu, and Zhichao Guan. Generalized label propagation methods for semi-supervised learning. 2018.
* [14] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017. * [15] Jialun Liu, Yifan Sun, Chuchu Han, Zhaopeng Dou, and Wenhui Li. Deep representation learning on long-tailed data: A learnable embedding augmentation perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2970–2979, 2020. * [16] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2537–2546, 2019. * [17] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4004–4012, 2016. * [18] Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder networks. arXiv preprint arXiv:1507.02672, 2015. * [19] Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Semi-supervised learning with scarce annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 762–763, 2020. * [20] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685, 2020. * [21] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780, 2017. * [22] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence, 30(11):1958–1970, 2008. * [23] Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7278–7286, 2018. * [24] Yu-Xiong Wang and Martial Hebert. Learning to learn: Model regression networks for easy small sample learning. In European Conference on Computer Vision, pages 616–634. Springer, 2016. * [25] Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Learning to model the tail. In Advances in Neural Information Processing Systems, pages 7029–7039, 2017. * [26] Yuzhe Yang and Zhi Xu. Rethinking the value of labels for improving class-imbalanced learning. arXiv preprint arXiv:2006.07529, 2020. * [27] Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. Range loss for deep face recognition with long-tailed training data. In Proceedings of the IEEE International Conference on Computer Vision, pages 5409–5418, 2017. * [28] Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9719–9728, 2020.
MS-TP-20-42

# Xenon-1T excess as a possible signal of a sub-GeV hidden sector dark matter

Amin Aboubrahim$^{a}$, Michael Klasen$^{a}$, and Pran Nath$^{b}$

$^{a}$Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster, Wilhelm-Klemm-Straße 9, 48149 Münster, Germany

$^{b}$Department of Physics, Northeastern University, Boston, MA 02115-5000, USA

###### Abstract

We present a particle physics model to explain the observed enhancement in the Xenon-1T data at an electron recoil energy of 2.5 keV. The model is based on a $U(1)$ extension of the Standard Model where the dark sector consists of two essentially mass-degenerate Dirac fermions in the sub-GeV region, with a small mass splitting, interacting with a dark photon. The dark photon is unstable and decays before big bang nucleosynthesis, which leaves a dark matter sector constituted of the two essentially mass-degenerate Dirac fermions. The Xenon-1T excess is computed via the inelastic exothermic scattering of the heavier dark fermion off a bound electron in xenon into the lighter dark fermion, producing the observed excess events in the electron recoil energy. The model can be tested with further data from Xenon-1T and in future experiments such as SuperCDMS.

## 1 Introduction

Recently the Xenon-1T experiment [1] has analyzed events in the low-energy region of 1–30 keV of electron recoil energy with an exposure of 0.65 tonne-years, while claiming a low background rate of $76\pm 2_{\,\mathrm{stat}}$ events/(tonne-year-keV). The experiment observed an excess of recoil electrons over the background in the 2$-$3 keV range. The collaboration analyzed the axion couplings to electrons, photons and nucleons, and the neutrino magnetic moment as possible sources for the signal. However, these models appear to be in strong tension with stellar constraints [2, 3, 4, 5]. Another possible source of this excess is traces of tritium in xenon at the level of $(6.2\pm 2.0)\times 10^{-25}$ mol/mol; the experiment currently can neither confirm nor exclude such a possibility. Since the publication of the Xenon-1T results, a variety of models have been proposed, including light sterile neutrinos [6, 7, 8, 9], a goldstino [10], an inflaton [11], string-motivated models [12, 13], boosted dark matter [14, 15, 16], and a variety of other models [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41].

In this work we discuss the possibility that the observed effect is a signal from dark matter in the hidden sector. While there are models in the literature where the hidden sector is used to explain the Xenon-1T excess, our analysis differs from them in several ways. We use the Stueckelberg extension of the Standard Model, where the Stueckelberg sector consists of an additional $U(1)$ gauge boson interacting via kinetic mixing with the visible sector. In many previous works a single Dirac fermion is used, which is then split into two Majorana fermions with different masses [32, 30]. In our analysis we instead consider two Dirac fermions in the hidden sector carrying $U(1)$ charges, with a small mass splitting, interacting with the dark photon generated by the Stueckelberg mechanism. In the analysis, both the freeze-in and freeze-out mechanisms operate to generate the desired relic density.
The analysis given here satisfies all the relevant constraints on the kinetic mixing between the hidden and visible sectors and on the dark photon mass: from the CRESST 2019 DM-nucleon scattering cross section, from the neutrino experiment CHARM [42], whose results have been reinterpreted as limits on the dark photon [43], and from the Planck relic density measurement.

The outline of the rest of the paper is as follows: In section 2 we discuss the model used to explain the Xenon-1T result. In section 3 an analysis of the dark matter relic density is given. A discussion of the inelastic dark matter-electron scattering is given in section 4. Event detection rates in the Xenon-1T detector are discussed in section 5. Constraints on the model and a fit to the Xenon-1T data are given in section 6. Our conclusions are given in section 7. Several details of the calculation are given in the Appendix.

## 2 Stueckelberg extension with hidden sector dark fermions

We extend the Standard Model (SM) gauge group by an extra $U(1)_{X}$ under which the SM is neutral. The extra gauge field $C^{\mu}$ mixes with the SM $U(1)_{Y}$ hypercharge field $B^{\mu}$ via kinetic mixing [44]. Further, we use the Stueckelberg mechanism [45] to generate a mass for the gauge boson of the hidden sector. The total Lagrangian is then given by

$\mathcal{L}=\mathcal{L}_{\rm SM}+\Delta\mathcal{L},$ (1)

with

$\Delta\mathcal{L}\supset-\frac{1}{4}C_{\mu\nu}C^{\mu\nu}-\frac{\delta}{2}C_{\mu\nu}B^{\mu\nu}+g_{X}J^{\mu}_{X}C_{\mu}-\frac{1}{2}(\partial_{\mu}\sigma+M_{1}C_{\mu}+M_{2}B_{\mu})^{2},$ (2)

where $g_{X}$ is the gauge coupling in the hidden sector, $J_{X}$ is the hidden sector current and $\sigma$ is a pseudoscalar field which is absorbed in a gauge-invariant way via the Stueckelberg mechanism to give mass to the extra neutral gauge boson, which we call $\gamma^{\prime}$ (the dark photon). Further, one may introduce matter in the hidden sector which is neutral under $U(1)_{Y}$ but charged under $U(1)_{X}$ [45, 46]. More generally, one may have both kinetic mixing and mass mixing [47]. The kinetic energy terms in Eqs. (1) and (2) can be diagonalized by a $GL(2,\mathbb{R})$ transformation

$\left(\begin{matrix}C^{\mu}\cr B^{\mu}\end{matrix}\right)=\left(\begin{matrix}c_{\delta}&0\cr -s_{\delta}&1\end{matrix}\right)\left(\begin{matrix}C^{\prime\mu}\cr B^{\prime\mu}\end{matrix}\right),$ (3)

where $c_{\delta}=1/(1-\delta^{2})^{1/2}$ and $s_{\delta}=\delta/(1-\delta^{2})^{1/2}$. In the Standard Model the neutral gauge boson sector arises from the hypercharge gauge boson $B^{\mu}$ and the third component of the $SU(2)_{L}$ gauge fields $A^{\mu}_{a}$ ($a=1$–$3$), which leads to a $2\times 2$ mass-squared matrix after spontaneous symmetry breaking. It contains one massless mode, the photon, and a massive mode, the $Z$ boson. Inclusion of the Stueckelberg gauge field $C_{\mu}$ enlarges the $2\times 2$ mass-squared matrix of the neutral gauge boson sector in the Standard Model to a $3\times 3$ mass-squared matrix in the Stueckelberg extended model.
Thus after spontaneous electroweak symmetry breaking and the Stueckelberg mass growth, and on including the $GL(2,\mathbb{R})$ transformation to obtain a diagonal and normalized kinetic energy for the gauge bosons, the $3\times 3$ mass-squared matrix of neutral vector bosons in the basis $(C^{\prime}_{\mu},B^{\prime}_{\mu},A^{3}_{\mu})$ is given by

$\mathcal{M}^{2}_{V}=\left(\begin{matrix}M_{1}^{2}\kappa^{2}+\frac{1}{4}g^{2}_{Y}v_{H}^{2}s^{2}_{\delta}&\kappa\epsilon M_{1}^{2}-\frac{1}{4}g^{2}_{Y}v_{H}^{2}s_{\delta}&\frac{1}{4}g_{Y}g_{2}v_{H}^{2}s_{\delta}\cr\kappa\epsilon M_{1}^{2}-\frac{1}{4}g^{2}_{Y}v_{H}^{2}s_{\delta}&\epsilon^{2}M_{1}^{2}+\frac{1}{4}g^{2}_{Y}v_{H}^{2}&-\frac{1}{4}g_{Y}g_{2}v_{H}^{2}\cr\frac{1}{4}g_{Y}g_{2}v_{H}^{2}s_{\delta}&-\frac{1}{4}g_{Y}g_{2}v_{H}^{2}&\frac{1}{4}g^{2}_{2}v_{H}^{2}\cr\end{matrix}\right),$ (4)

where $g_{2}$ is the $SU(2)_{L}$ gauge coupling, $\kappa=(c_{\delta}-\epsilon s_{\delta})$, $\epsilon=M_{2}/M_{1}$ and $v_{H}$ is the Higgs VEV. The mass-squared matrix of Eq. (4) has one zero eigenvalue, which is the photon, while the other two eigenvalues are

$M^{2}_{\pm}=\frac{1}{2}\left\{M^{2}_{0}\pm\sqrt{M_{0}^{4}-M_{1}^{2}v_{H}^{2}\Big{[}(\kappa^{2}+\epsilon^{2})g_{2}^{2}+g^{2}_{Y}c^{2}_{\delta}\Big{]}}~\right\},$ (5)

where $M^{2}_{0}=(\kappa^{2}+\epsilon^{2})M_{1}^{2}+\dfrac{1}{4}v_{H}^{2}(g_{Y}^{2}c^{2}_{\delta}+g_{2}^{2})$. Here $M_{-}$ is identified as the $\gamma^{\prime}$ boson mass and $M_{+}$ as the $Z$ boson mass. The diagonalization of the mass-squared matrix of Eq. (4) can be done via two successive transformations, where the first is given by [47]

$\mathcal{O}=\left(\begin{matrix}1/c_{\delta}&-s_{\delta}/c_{\delta}&0\cr s_{\delta}/c_{\delta}&1/c_{\delta}&0\cr 0&0&1\cr\end{matrix}\right),$ (6)

which transforms the mass matrix to $\mathcal{M^{\prime}}^{2}_{V}=\mathcal{O}^{T}\mathcal{M}^{2}_{V}\mathcal{O}$,

$\mathcal{M^{\prime}}^{2}_{V}=\left(\begin{matrix}M_{1}^{2}&M_{1}^{2}\bar{\epsilon}&0\cr M_{1}^{2}\bar{\epsilon}&M_{1}^{2}\bar{\epsilon}^{2}+\frac{1}{4}g^{2}_{Y}v_{H}^{2}c^{2}_{\delta}&-\frac{1}{4}g_{Y}g_{2}v_{H}^{2}c_{\delta}\cr 0&-\frac{1}{4}g_{Y}g_{2}v_{H}^{2}c_{\delta}&\frac{1}{4}g^{2}_{2}v_{H}^{2}\cr\end{matrix}\right),$ (7)

where $\bar{\epsilon}=\epsilon c_{\delta}-s_{\delta}$. The gauge eigenstates of $\mathcal{M^{\prime}}^{2}_{V}$ can be rotated into the corresponding mass eigenstates $(\gamma^{\prime},Z,\gamma)$ using the second transformation, such that $\mathcal{R}^{T}\mathcal{M^{\prime}}^{2}_{V}\mathcal{R}=\text{diag}(m^{2}_{\gamma^{\prime}},m^{2}_{Z},0)$, where the rotation matrix is given by

$\mathcal{R}=\left(\begin{matrix}\cos\psi\cos\phi-\sin\theta\sin\phi\sin\psi&\sin\psi\cos\phi+\sin\theta\sin\phi\cos\psi&-\cos\theta\sin\phi\cr\cos\psi\sin\phi+\sin\theta\cos\phi\sin\psi&\sin\psi\sin\phi-\sin\theta\cos\phi\cos\psi&\cos\theta\cos\phi\cr-\cos\theta\sin\psi&\cos\theta\cos\psi&\sin\theta\cr\end{matrix}\right).$ (8)

Here the mixing angles are given by

$\tan\phi=\bar{\epsilon},~~~\tan\theta=\frac{g_{Y}}{g_{2}}c_{\delta}\cos\phi,$ (9)

and

$\tan 2\psi\simeq\frac{2\bar{\epsilon}m^{2}_{Z}\sin\theta}{m^{2}_{\gamma^{\prime}}-m^{2}_{Z}+(m^{2}_{\gamma^{\prime}}+m^{2}_{Z}-m^{2}_{W})\bar{\epsilon}^{2}},$ (10)

where $m_{W}=g_{2}v_{H}/2$, $m_{\gamma^{\prime}}\equiv M_{-}$ and $m_{Z}\equiv M_{+}$.
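As a numerical cross-check of Eqs. (4) and (5), one can construct the mass-squared matrix for sample inputs and verify that it has exactly one vanishing eigenvalue (the photon) while the two massive eigenvalues give $m_{\gamma^{\prime}}$ and $m_{Z}$. The sketch below is purely illustrative: the values of $M_{1}$ and $\delta$ are placeholders, not the benchmarks used later in this paper.

```python
import numpy as np

# Illustrative inputs (placeholders): SM couplings, Higgs VEV, and
# Stueckelberg parameters with eps = M2/M1 = 0 (no mass mixing).
g2, gY, vH = 0.65, 0.357, 246.0     # GeV
M1, delta, eps = 0.1, 1e-4, 0.0     # GeV, kinetic mixing, mass-mixing ratio

cd = 1.0 / np.sqrt(1.0 - delta**2)
sd = delta / np.sqrt(1.0 - delta**2)
kappa = cd - eps * sd

# Mass-squared matrix of Eq. (4) in the basis (C', B', A^3).
M2V = np.array([
    [M1**2 * kappa**2 + 0.25 * gY**2 * vH**2 * sd**2,
     kappa * eps * M1**2 - 0.25 * gY**2 * vH**2 * sd,
     0.25 * gY * g2 * vH**2 * sd],
    [kappa * eps * M1**2 - 0.25 * gY**2 * vH**2 * sd,
     eps**2 * M1**2 + 0.25 * gY**2 * vH**2,
     -0.25 * gY * g2 * vH**2],
    [0.25 * gY * g2 * vH**2,
     -0.25 * gY * g2 * vH**2,
     0.25 * g2**2 * vH**2],
])

evals = np.linalg.eigvalsh(M2V)                # ascending: (0, M_-^2, M_+^2)
print("photon m^2 ~ 0:", evals[0])
print("m_gamma' =", np.sqrt(evals[1]), "GeV")  # ~ M1 for small mixing
print("m_Z      =", np.sqrt(evals[2]), "GeV")  # ~ 91 GeV for SM-like inputs
```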
Since the dark photon mixes with the SM gauge bosons, it couples to the SM fermions, so that

$\mathcal{L}_{\rm SM}=\frac{g_{2}}{2\cos\theta}\bar{\psi}_{f}\gamma^{\mu}\Big{[}(v_{f}-\gamma_{5}a_{f})Z_{\mu}+(v^{\prime}_{f}-\gamma_{5}a^{\prime}_{f})A^{\gamma^{\prime}}_{\mu}\Big{]}\psi_{f}+e\bar{\psi}_{f}\gamma^{\mu}Q_{f}A_{\mu}\psi_{f},$ (11)

where $f$ runs over all SM fermions and the vector and axial couplings are given by

$\displaystyle v_{f}$ $\displaystyle=\cos\psi[(1-\bar{\epsilon}\tan\psi\sin\theta)T_{3f}-2\sin^{2}\theta(1-\bar{\epsilon}\csc\theta\tan\psi)Q_{f}],$ (12) $\displaystyle a_{f}$ $\displaystyle=\cos\psi(1-\bar{\epsilon}\tan\psi\sin\theta)T_{3f},$ $\displaystyle v^{\prime}_{f}$ $\displaystyle=-\cos\psi[(\tan\psi+\bar{\epsilon}\sin\theta)T_{3f}-2\sin^{2}\theta(\bar{\epsilon}\csc\theta+\tan\psi)Q_{f}],$ $\displaystyle a^{\prime}_{f}$ $\displaystyle=-\cos\psi(\tan\psi+\bar{\epsilon}\sin\theta)T_{3f}.$

Here $T_{3f}$ is the third component of the isospin and $Q_{f}$ is the electric charge. We assume that the hidden sector where $C^{\mu}$ resides contains two mass-degenerate Dirac fermions $D_{1}$ and $D_{2}$ with common mass $\mu$ which, however, carry different charges $Q_{1}$ and $Q_{2}$ under the $U(1)_{X}$ gauge group. The interaction Lagrangian for the hidden sector is then given by

$\mathcal{L}^{\rm int}_{D}=-g^{\gamma^{\prime}}_{X}Q_{1}\bar{D}_{1}\gamma^{\mu}D_{1}A_{\mu}^{\gamma^{\prime}}-g^{\gamma^{\prime}}_{X}Q_{2}\bar{D}_{2}\gamma^{\mu}D_{2}A_{\mu}^{\gamma^{\prime}}-g^{Z}_{X}Q_{1}\bar{D}_{1}\gamma^{\mu}D_{1}Z_{\mu}-g^{Z}_{X}Q_{2}\bar{D}_{2}\gamma^{\mu}D_{2}Z_{\mu},$ (13)

with $g^{\gamma^{\prime}}_{X}=g_{X}(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})$ and $g^{Z}_{X}=g_{X}(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})$, where $\mathcal{R}_{ij}$ are elements of the matrix in Eq. (8). To generate inelastic scattering we need to split the masses of the $D$-fermions. To this end we add a $U(1)_{X}$ gauge-violating mass term $\Delta\mu(\bar{D}_{1}D_{2}+\text{h.c.})$, so that the Lagrangian for the $(D_{1},D_{2})$ mass terms is given by

$\mathcal{L}^{\rm mass}_{D}=-\mu(\bar{D}_{1}D_{1}+\bar{D}_{2}D_{2})-\Delta\mu(\bar{D}_{1}D_{2}+\bar{D}_{2}D_{1}).$ (14)

We can now go to the mass-diagonal basis with Dirac fermions $D_{1}^{\prime}$ of mass $m_{1}=\mu-\Delta\mu$ and $D_{2}^{\prime}$ of mass $m_{2}=\mu+\Delta\mu$, where we take $m_{2}>m_{1}$ so that $D_{2}^{\prime}$ is the heavier of the two dark fermions. In this basis, Eq. (13) takes the form

$\displaystyle-\mathcal{L}^{\rm int}_{D}$ $\displaystyle=\frac{1}{2}g^{\gamma^{\prime}}_{X}(Q_{1}+Q_{2})\left(\bar{D}_{1}^{\prime}\gamma^{\mu}D_{1}^{\prime}+\bar{D}_{2}^{\prime}\gamma^{\mu}D_{2}^{\prime}\right)A_{\mu}^{\gamma^{\prime}}$ (15) $\displaystyle+\frac{1}{2}g^{\gamma^{\prime}}_{X}(Q_{1}-Q_{2})(\bar{D}_{1}^{\prime}\gamma^{\mu}D_{2}^{\prime}+\bar{D}_{2}^{\prime}\gamma^{\mu}D_{1}^{\prime})A_{\mu}^{\gamma^{\prime}}$ $\displaystyle+\frac{1}{2}g^{Z}_{X}(Q_{1}+Q_{2})\left(\bar{D}_{1}^{\prime}\gamma^{\mu}D_{1}^{\prime}+\bar{D}_{2}^{\prime}\gamma^{\mu}D_{2}^{\prime}\right)Z_{\mu}$ $\displaystyle+\frac{1}{2}g^{Z}_{X}(Q_{1}-Q_{2})(\bar{D}_{1}^{\prime}\gamma^{\mu}D_{2}^{\prime}+\bar{D}_{2}^{\prime}\gamma^{\mu}D_{1}^{\prime})Z_{\mu}.$

From Eq. (11) and Eq. (15) we note that the dark photon has couplings with both the visible sector and the hidden sector: from Eq. (11) it couples to the quarks and leptons of the visible sector, while from Eq. (15) it couples to $D_{1}^{\prime},D_{2}^{\prime}$ in the dark sector.
These couplings allow for an inelastic scattering process in which a dark fermion hits a bound electron in a xenon atom, producing, in an exothermic process, a recoil electron with excess energy, i.e.,

$e+D_{2}^{\prime}\to e^{\prime}+D_{1}^{\prime},$ (16)

where the final electron receives an extra boost in energy from the mass difference $\Delta m=m_{2}-m_{1}=2\Delta\mu$.

## 3 Dark matter relic density

Since the hidden sector particles have feeble couplings to the Standard Model particles, they are never in thermal equilibrium with the visible sector and the usual freeze-out analysis for the computation of the relic density does not apply. However, the hidden sector particles can be produced via the annihilation of Standard Model particles into dark photons and dark fermions through these feeble interactions, and the relic density in this case is computed using the freeze-in mechanism [48]. Within the dark sector itself, the dark fermions and the dark photons interact with unsuppressed strength and are in thermal equilibrium down to a certain freeze-out temperature $T_{f}$ via the process $D\bar{D}\to\gamma^{\prime}\gamma^{\prime}$. Below the dark sector freeze-out temperature $T_{f}$, the dark fermions and the dark photons decouple, and the dark photons decay to the visible sector well before big bang nucleosynthesis, which leaves the dark fermions as the only DM candidates. In our model we assume that the visible and hidden sectors have the same temperature. One can consider sectors with different temperatures in the early universe, but it has been shown that the two sectors will eventually thermalize and that the effect on the relic density is minimal [49].

For calculating the DM relic density of $D^{\prime}_{1}$ and $D^{\prime}_{2}$, we assume $m_{1}\simeq m_{2}\simeq m_{D}$ and write a single Boltzmann equation for the dark fermion. In this limit, we have

$\frac{dY_{D}}{dx}\approx-1.32M_{\rm Pl}\frac{h_{\rm eff}(T)}{g^{1/2}_{\rm eff}(T)}\frac{m_{D}}{x^{2}}\left(-\langle\sigma v\rangle_{D\bar{D}\to i\bar{i}}Y_{D}^{\rm eq^{2}}+\langle\sigma v\rangle_{D\bar{D}\to\gamma^{\prime}\gamma^{\prime}}Y^{2}_{D}\right),$ (17)

where $Y_{D}=n/s$ is the comoving number density (or yield) of DM, $h_{\rm eff}$ and $g_{\rm eff}$ are the entropy and energy density numbers of degrees of freedom, $M_{\rm Pl}$ is the reduced Planck mass ($M_{\rm Pl}\simeq 2.4\times 10^{18}$ GeV) and $x=m_{D}/T$. The first term on the right-hand side of Eq. (17) describes the production of dark matter particles via the freeze-in mechanism, while the second term describes DM depletion. Here the thermally averaged cross-section is given by

$\langle\sigma v\rangle^{D\bar{D}\to ab}(x)=\frac{x}{8m^{5}_{D}K^{2}_{2}(x)}\int_{4m_{D}^{2}}^{\infty}ds~\sigma(s)\sqrt{s}\,(s-4m_{D}^{2})K_{1}\left(\frac{\sqrt{s}}{m_{D}}x\right),$ (18)

while the equilibrium yield is given by

$Y_{D}^{\rm eq}(x)=\frac{45}{4\pi^{4}}\frac{g_{D}}{h_{\rm eff}(T)}x^{2}K_{2}(x).$ (19)

Here $g_{D}$ is the number of degrees of freedom of the dark fermion, and $K_{1}$ and $K_{2}$ are the modified Bessel functions of the second kind of order one and two.
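Equations (17)–(19) translate directly into a one-dimensional ODE for the yield. The following sketch shows the structure of such a solver; the cross-sections are supplied as stub functions of $s$ and the degrees-of-freedom functions are held constant, so its output is illustrative only and does not reproduce the benchmark values quoted below. In practice $\sigma(s)$ would be taken from Eqs. (50)–(60) of the appendix.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.special import kve  # exponentially scaled Bessel: kve(n, x) = K_n(x) * e^x

M_PL = 2.4e18          # reduced Planck mass in GeV
m_D, g_D = 1.0, 4.0    # dark fermion mass (GeV) and internal d.o.f.
h_eff = g_eff = 75.0   # held constant for illustration

def thermal_avg(sigma, x):
    # Eq. (18), using scaled Bessels for numerical stability:
    # K_1(a)/K_2(x)^2 = kve(1,a)/kve(2,x)^2 * exp(2x - a), a = sqrt(s) x / m_D >= 2x.
    def integrand(s):
        a = np.sqrt(s) * x / m_D
        return (sigma(s) * np.sqrt(s) * (s - 4 * m_D**2)
                * kve(1, a) * np.exp(2 * x - a))
    val, _ = quad(integrand, 4 * m_D**2, (30 * m_D)**2, limit=200)
    return x * val / (8 * m_D**5 * kve(2, x)**2)

def Y_eq(x):
    # Eq. (19): equilibrium yield.
    return 45.0 / (4 * np.pi**4) * g_D / h_eff * x**2 * kve(2, x) * np.exp(-x)

def dY_dx(x, Y, sigma_prod, sigma_dep):
    # Eq. (17): freeze-in production minus depletion into dark photons.
    pref = -1.32 * M_PL * h_eff / np.sqrt(g_eff) * m_D / x**2
    return np.array([pref * (-thermal_avg(sigma_prod, x) * Y_eq(x)**2
                             + thermal_avg(sigma_dep, x) * Y[0]**2)])

# Stub cross-sections (GeV^-2), placeholders for the appendix formulas.
sigma_prod = lambda s: 1e-22 / s
sigma_dep = lambda s: 1e-8 / s

sol = solve_ivp(dY_dx, (1.0, 500.0), [0.0], args=(sigma_prod, sigma_dep),
                method="LSODA", rtol=1e-8, atol=1e-30)
Y_inf = sol.y[0, -1]
# Eq. (20): Omega h^2 = m_D * Y_inf * s0 * h^2 / rho_c, with s0 and rho_c
# today's entropy density and the critical density.
```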
The Boltzmann equation (17) is solved numerically to determine the yield at the present time, $Y_{\infty}$, which gives the relic density

$\Omega h^{2}=\frac{m_{D}Y_{\infty}s_{0}h^{2}}{\rho_{c}},$ (20)

where $s_{0}$ is today's entropy density, $\rho_{c}$ is the critical density and $h=0.678$ denotes the present Hubble expansion rate in units of 100 km s$^{-1}$ Mpc$^{-1}$. We select three benchmarks with different masses for the dark photon and the dark fermions. The benchmarks are listed in Table 1 along with the couplings, the relic density and the inelastic DM-electron scattering cross-section. In the analysis, we set $\epsilon=0$, i.e. we assume no mass mixing.

Table 1: Input parameters, the relic density $\Omega h^{2}$ and the inelastic DM-electron scattering cross section $\bar{\sigma}_{e}$ for the benchmarks used in this analysis.

| Model | $m_{D}$ (GeV) | $m_{\gamma^{\prime}}$ (MeV) | $g_{X}$ | $\delta$ | $\Omega h^{2}$ | $\bar{\sigma}_{e}$ (cm$^2$) |
|---|---|---|---|---|---|---|
| (a) | 1.00 | 55 | 0.040 | $4.0\times 10^{-5}$ | 0.125 | $2.80\times 10^{-44}$ |
| (b) | 0.50 | 60 | 0.025 | $6.0\times 10^{-5}$ | 0.121 | $1.75\times 10^{-44}$ |
| (c) | 0.30 | 300 | 0.055 | $7.5\times 10^{-4}$ | 0.116 | $1.19\times 10^{-44}$ |

The DM relic density satisfies, within theoretical uncertainties, the experimental value from the Planck Collaboration [50],

$(\Omega h^{2})_{\rm Planck}=0.1198\pm 0.0012.$ (21)

Following the dark freeze-out of DM in the hidden sector, the conversion processes $D^{\prime}_{2}\bar{D}^{\prime}_{2}\longleftrightarrow D^{\prime}_{1}\bar{D}^{\prime}_{1}$ remain active. The ratio of the number densities of $D^{\prime}_{2}$ and $D^{\prime}_{1}$ is Boltzmann suppressed, i.e., $n_{2}/n_{1}\sim\exp(-\Delta m/T_{c})$, where $T_{c}$ is the temperature below which the conversion processes shut off. One can safely assume that $n_{2}\sim n_{1}$ as long as $T_{c}\gg\Delta m$. Focusing on the process $D^{\prime}_{2}\bar{D}^{\prime}_{2}\longrightarrow D^{\prime}_{1}\bar{D}^{\prime}_{1}$, we determine $T_{c}$ as the temperature for which

$n_{2}\langle\sigma v\rangle_{D^{\prime}_{2}\bar{D}^{\prime}_{2}\longrightarrow D^{\prime}_{1}\bar{D}^{\prime}_{1}}/H\sim 1,$ (22)

where $H$ is the Hubble parameter. In Fig. 1 we plot $n\langle\sigma v\rangle$ versus $x$ for benchmarks (a) (left panel) and (c) (right panel). The figure shows the conversion process (blue) along with two other processes, $D\bar{D}\to i\bar{i}$ (red) and $D\bar{D}\to\gamma^{\prime}\gamma^{\prime}$ (yellow), and the Hubble parameter (purple). For benchmark (a), one finds that for the chosen value of the kinetic mixing the DM remains out of equilibrium with the SM over the entire $x$ range, while increasing the kinetic mixing brings this process into equilibrium over a range of $x$, as shown for benchmark (c). This process, however, decouples before the other decoupling processes.

Figure 1: A plot of $n\langle\sigma v\rangle$ and the Hubble parameter $H(T)$ as a function of $x=m_{D}/T$ for benchmarks (a) (left panel) and (c) (right panel). Dark freeze-out sets in before the DM conversion process shuts off for model (a). The analysis of model (b) is similar to that of model (a), while both processes occur nearly at the same temperature for model (c).

As also shown in Fig. 1, dark freeze-out occurs at $x\sim 20$ for all benchmarks. Turning to the conversion process, for benchmark (a) it shuts off at $x\sim 1.12\times 10^{4}$, which for the DM mass of benchmark (a) corresponds to $T_{c}\sim 90$ keV.
The mass gap of interest here is $\Delta m\sim 2.8$ keV, which makes the number densities of $D^{\prime}_{1}$ and $D^{\prime}_{2}$ almost equal. One can see that the conversion process shuts off much later than the other ones; this is also observed for benchmark (b). Unlike (a) and (b), benchmark (c) shows freeze-out and the shut-off of the conversion process occurring at the same $x\sim 20$, which corresponds to a temperature much larger than $\Delta m$. Thus for practical purposes hereafter, we assume that the DM density is divided equally between $D^{\prime}_{1}$ and $D^{\prime}_{2}$.

After the conversion process terminates, the dark fermion $D_{2}^{\prime}$ can decay to $D_{1}^{\prime}$. The only decay channel is $D_{2}^{\prime}\to D_{1}^{\prime}\bar{\nu}\nu$, which is suppressed both by the dark photon coupling to the neutrinos, proportional to the kinetic mixing, and by phase space, owing to the small mass gap $\Delta m\sim\mathcal{O}$(keV). The total 3-body decay width (including three neutrino generations) is

$\Gamma_{D_{2}^{\prime}\to D_{1}^{\prime}\nu\bar{\nu}}=\frac{x_{\nu}^{2}}{256\pi^{3}m^{4}_{\gamma^{\prime}}}\left[\frac{f(m_{1},m_{2})}{m_{2}^{3}}+24m_{1}^{3}(m_{1}^{2}+m_{1}m_{2}+m^{2}_{2})\log\left(\frac{m_{2}}{m_{1}}\right)\right],$ (23)

where

$f(m_{1},m_{2})=(m_{2}^{2}-m_{1}^{2})(m_{1}^{6}-2m_{2}m_{1}^{5}-7m_{2}^{2}m_{1}^{4}-20m_{2}^{3}m_{1}^{3}-7m_{2}^{4}m_{1}^{2}-2m_{2}^{5}m_{1}+m_{2}^{6}).$ (24)

Since $m_{1}=m_{2}-\Delta m$, expanding in $\Delta m$ we get to lowest order

$\Gamma_{D_{2}^{\prime}\to D_{1}^{\prime}\nu\bar{\nu}}\simeq\frac{x_{\nu}^{2}(\Delta m)^{5}}{40\pi^{3}m^{4}_{\gamma^{\prime}}},$ (25)

where, for a small gauge kinetic mixing, $g_{X}^{\gamma^{\prime}}\approx g_{X}$ and

$x_{\nu}\sim g_{X}g_{Y}(Q_{1}-Q_{2})\left(\frac{m_{\gamma^{\prime}}}{m_{Z}}\right)\delta.$ (26)

In this work we are interested in $\Delta m\sim 3$ keV and $\delta\sim 10^{-5}$, which results in a decay lifetime of $D_{2}^{\prime}$ of order $10^{13}$ years, so that $D_{2}^{\prime}$ is stable over the lifetime of the universe. Thus, in this model dark matter is constituted of two dark fermions with essentially degenerate masses of order 1 GeV.

## 4 DM-electron scattering cross-section

A DM particle can undergo elastic scattering with a bound electron in a xenon atom, but such a scattering can deliver only a few eV to the electron, which is insufficient to explain the Xenon-1T excess. However, an inelastic exothermic down-scattering can impart a recoil energy to the electron equivalent to the mass difference between the incoming and outgoing DM particles. The model considered here allows for the desired small mass splitting between the two Dirac fermions $D^{\prime}_{1}$ and $D^{\prime}_{2}$, so that the heavier fermion $D^{\prime}_{2}$ down-scatters to $D^{\prime}_{1}$. Next, we compute the cross-section of the inelastic scattering process $D_{2}^{\prime}(\vec{p}_{1})+e(\vec{p}_{2})\to D_{1}^{\prime}(\vec{p}_{3})+e^{\prime}(\vec{p}_{4})$ described above.
Assuming the dark photon mass is much greater than the momentum transfer, the averaged matrix element squared for this process is given by

$\displaystyle\overline{|\mathcal{M}|^{2}}$ $\displaystyle=\frac{2\bar{g}_{X}^{2}g_{2}^{2}}{m^{4}_{\gamma^{\prime}}\cos^{2}\theta}\Bigg\{\frac{1}{2}(a_{f}^{\prime 2}-v_{f}^{\prime 2})\Big{[}(m_{1}-m_{2})^{2}-(t+2m_{1}m_{2})\Big{]}m^{2}_{e}+\frac{1}{4}(v_{f}^{\prime 2}+a_{f}^{\prime 2})\Big{[}(m_{2}^{2}+m_{e}^{2}-u)$ $\displaystyle\times(m_{1}^{2}+m_{e}^{2}-u)+(s-m_{1}^{2}-m_{e}^{2})(s-m_{2}^{2}-m_{e}^{2})-2m_{1}m_{2}(2m_{e}^{2}-t)\Big{]}\Bigg\},$ (27)

where $\bar{g}_{X}=\frac{1}{2}g_{X}(Q_{1}-Q_{2})$ and $s,t,u$ are the Mandelstam variables. The directional matrix element for free electron-DM scattering is given by

$\overline{|\mathcal{M}(\vec{q})|^{2}}=\overline{|\mathcal{M}(q)|^{2}}\times|F_{DM}(q)|^{2},$ (28)

where the form factor $F_{DM}(q)$ can be taken to be 1 for small momentum transfer. The electron-DM scattering differential cross-section is given by

$\frac{d\bar{\sigma}_{e}}{d\Omega}=\frac{1}{64\pi^{2}s}\frac{|\vec{p}_{3}|}{|\vec{p}_{1}|}\overline{|\mathcal{M}(\vec{q})|^{2}}.$ (29)

Keeping the velocity dependence in the Mandelstam variables, we have

$t=-q^{2}\simeq-\Delta m\left(1-\sqrt{\frac{2m_{e}v^{2}}{\Delta m}}\cos\theta_{\rm CM}\right),~~~s\sim(m_{2}+m_{e})^{2}\left(1+\frac{\mu_{De}}{m_{2}+m_{e}}v^{2}\right),$ (30)

and, integrating over $\theta_{\rm CM}$, the scattering angle in the CM frame, and over $\phi$, we get for the DM-$e$ scattering cross-section

$\bar{\sigma}_{e}\simeq\frac{\bar{g}_{X}^{2}g_{2}^{2}}{16\pi m^{4}_{\gamma^{\prime}}\cos^{2}\theta}\left(\frac{4\mu^{2}_{De}}{1+\frac{\mu_{De}}{m_{2}+m_{e}}v^{2}}\right)\left[v^{\prime 2}_{f}+(a_{f}^{\prime 2}+v^{\prime 2}_{f})v^{2}\right],$ (31)

where $\mu_{De}=\frac{m_{2}m_{e}}{m_{2}+m_{e}}$ is the dark matter-electron reduced mass. For $v\sim 10^{-3}$, one can discard the velocity-dependent piece and obtain

$\bar{\sigma}_{e}\simeq\frac{\bar{g}_{X}^{2}g_{2}^{2}}{4\pi\cos^{2}\theta}\frac{\mu^{2}_{De}}{m^{4}_{\gamma^{\prime}}}v_{f}^{\prime 2}.$ (32)

The values of the cross-section for the three benchmarks are shown in Table 1. One finds that the cross-section depends on the gauge coupling in the dark sector and on the kinetic mixing, which enters in the expression for $v^{\prime}_{f}$. These quantities are constrained by experiments, which we discuss in section 6.

## 5 Detection rate at Xenon-1T

We give here a quantitative analysis of the excess seen in the event rate in the Xenon-1T experiment, arising from $D_{2}^{\prime}$ with mass $m_{2}$ scattering inelastically off an electron into $D_{1}^{\prime}$ with mass $m_{1}$ and delivering a recoil energy $E_{R}$ to the electron. Energy conservation for this process gives

$\frac{q^{2}}{2m_{2}}-vq\cos\eta=\Delta m-E_{R},$ (33)

where $\eta$ is the angle between the incoming $D_{2}^{\prime}$ momentum and the momentum transfer $\vec{q}$. Taking $m_{1}\approx m_{2}$, the range of momentum transfer is given by

$q_{\pm}=\begin{cases}m_{2}v\pm\sqrt{m_{2}^{2}v^{2}-2m_{2}(E_{R}-\Delta m)},&\text{for }E_{R}>\Delta m\\ \pm m_{2}v+\sqrt{m_{2}^{2}v^{2}-2m_{2}(E_{R}-\Delta m)},&\text{for }E_{R}<\Delta m.\end{cases}$ (34)

The recoil energy can be expressed in terms of the mass difference; in the limit $\Delta m\ll m_{e}\ll m_{2}$, we have [51]

$E_{R}\simeq\Delta m\left(1-\sqrt{\frac{2m_{e}v^{2}}{\Delta m}}\cos\theta_{\rm CM}\right),$ (35)

with $q^{2}\simeq 2m_{e}E_{R}$, where $\theta_{\rm CM}$ is the scattering angle in the CM frame.
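For the parameters of interest here the allowed recoil band implied by Eq. (35) is very narrow. A quick numerical check, with values chosen to match benchmark (a), is sketched below.

```python
import numpy as np

m_e = 511.0   # keV
dm = 2.5      # keV, mass splitting Delta m
v = 1e-3      # typical halo velocity in units of c

# Extremes of Eq. (35) at cos(theta_CM) = -/+1: E_R = dm * (1 +/- sqrt(2 m_e v^2 / dm)).
spread = np.sqrt(2 * m_e * v**2 / dm)
E_minus, E_plus = dm * (1 - spread), dm * (1 + spread)
print(f"E_R in [{E_minus:.3f}, {E_plus:.3f}] keV "
      f"(width ~ +/-{100*spread:.1f}% of Delta m)")
# -> a band of roughly +/-2% around 2.5 keV, which is why the spectrum can be
#    approximated by a delta function at E_R = Delta m in what follows.
```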
The velocity-averaged differential cross-section for inelastic DM scattering is

$\frac{d\langle\sigma v\rangle}{dE_{R}}=\int_{v_{\rm min}}^{v_{\rm max}}\frac{f(v)}{v}dv\frac{\bar{\sigma}_{e}}{2m_{e}}\int_{q_{-}}^{q_{+}}dq\,a_{0}^{2}qK(E_{R},q),$ (36)

where the Bohr radius $a_{0}=1/(\alpha_{\rm em}m_{e})$ $(\alpha_{\rm em}\simeq 1/137)$, $K(E_{R},q)$ is the atomic ionization factor (shown in Fig. 2), and $f(v)$ is the standard Boltzmann velocity distribution after integrating out the angular part. In Eq. (36) the integral over $dq$, i.e.,

$K^{\prime}(E_{R})\equiv\int_{q_{-}}^{q_{+}}dq\,a_{0}^{2}qK(E_{R},q),$ (37)

can be directly evaluated using Fig. 2.

Figure 2: The atomic ionization factor $K$, summed over all possible atomic electrons and dominated by $n=3$, for Xe at electron recoil energy $E_{R}=2$ keV. The plot is a function of the momentum transfer $q$, taken from Ref. [52].

From Eq. (35), the range of the electron recoil energy is

$E_{R}^{\pm}\simeq\Delta m\left(1\pm\sqrt{\frac{2m_{e}v^{2}}{\Delta m}}\right),$ (38)

and the differential cross-section in this range becomes

$\frac{d\langle\sigma v\rangle}{dE_{R}}=\frac{\bar{\sigma}_{e}}{2m_{e}}K^{\prime}(E_{R})\int_{0}^{v_{\rm max}}\frac{f(v)}{v}dv~\Theta(E_{R}-E_{R}^{-})\Theta(E_{R}^{+}-E_{R}),$ (39)

where, for $E_{R}^{+}-E_{R}^{-}\ll E_{R}^{\pm}$, $\Theta(E_{R}-E_{R}^{-})\Theta(E_{R}^{+}-E_{R})\simeq(E_{R}^{+}-E_{R}^{-})\delta(E_{R}-\Delta m)$. Thus we have

$\frac{d\langle\sigma v\rangle}{dE_{R}}=\sqrt{\frac{2\Delta m}{m_{e}}}\bar{\sigma}_{e}K^{\prime}(E_{R})\delta(E_{R}-\Delta m)\int_{0}^{v_{\rm max}}f(v)dv.$ (40)

Note that for $v_{\rm max}=\sqrt{2\Delta m/m_{e}}\gg v_{0}$ (the most probable velocity), we get $\int_{0}^{v_{\rm max}}f(v)dv\simeq 1$. In practice, the electron recoil energy is not manifested as a Dirac delta function but is smeared by the detector resolution. This can be modeled by [1]

$\sigma_{r}=a\sqrt{E_{R}}+b~E_{R},$ (41)

with $a=(0.310\pm 0.004)\sqrt{\text{keV}}$ and $b=0.0037\pm 0.0003$. We assume the resolution function is a Gaussian of the form

$R_{S}(E,E_{R})=\frac{1}{\sqrt{2\pi}\sigma_{r}}\exp\left[-\frac{(E-E_{R})^{2}}{2\sigma^{2}_{r}}\right]\,\alpha(E),$ (42)

where $\alpha(E)$ is the efficiency given in Fig. 2 of Ref. [1], which we take to be 0.8 for our purposes. As a result, the DM detection rate is

$\frac{dR}{dE}=n_{\rm Xe}\frac{\rho_{2}}{m_{2}}\int\frac{d\langle\sigma v\rangle}{dE_{R}}R_{S}(E,E_{R})dE_{R}=n_{\rm Xe}~\rho_{2}\sqrt{\frac{2\Delta m}{m_{e}}}\frac{\bar{\sigma}_{e}}{m_{2}}K^{\prime}(\Delta m)R_{S}(E,\Delta m),$ (43)

where $n_{\rm Xe}\simeq 4.2\times 10^{27}/$ton is the number of xenon atoms in the detector and $\rho_{2}\simeq 0.15$ GeV/cm$^3$, assuming that $D_{2}^{\prime}$ makes up half of the observed relic density. At the recoil energy of interest, $E_{R}\simeq\Delta m\simeq 2.5$ keV and $K^{\prime}(\Delta m)\simeq 19.4$. The event detection rate becomes

$\frac{dR}{dE}\simeq(1.5\times 10^{45}~\text{GeV/cm}^{2})\frac{\bar{\sigma}_{e}}{m_{2}}R_{S}(E,\Delta m),$ (44)

which has units of $(\text{t}\cdot\text{yr}\cdot\text{keV})^{-1}$.

## 6 Constraints and fit to Xenon-1T data

Using Eq. (44) we fit the theory predictions of the model, based on benchmarks (a), (b) and (c) of Table 1, to the Xenon-1T data. Before doing so, let us discuss the stringent experimental constraints that must be satisfied; these constraints are summarized in Fig. 3.
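Before turning to the constraints, we note that the rate formula is simple to evaluate numerically. A minimal sketch using the benchmark (a) entries of Table 1 and a constant efficiency of 0.8, as assumed above, is given below; the prefactor is the one quoted in Eq. (44).

```python
import numpy as np

# Benchmark (a) of Table 1 and the quantities entering Eqs. (41)-(44).
sigma_e = 2.80e-44   # cm^2
m2 = 1.0             # GeV
dm = 2.5             # keV, recoil peak position E_R ~ Delta m
a, b = 0.310, 0.0037 # resolution parameters of Eq. (41)
eff = 0.8            # efficiency alpha(E), approximated as constant
PREF = 1.5e45        # GeV/cm^2, prefactor of Eq. (44)

def rate(E):
    # dR/dE in (tonne * year * keV)^-1 from Eqs. (41), (42) and (44).
    sig_r = a * np.sqrt(dm) + b * dm   # resolution evaluated at E_R = Delta m
    R_S = eff * np.exp(-(E - dm)**2 / (2 * sig_r**2)) / (np.sqrt(2 * np.pi) * sig_r)
    return PREF * sigma_e / m2 * R_S

E = np.linspace(1.0, 10.0, 10)   # keV
print(np.round(rate(E), 2))      # Gaussian bump peaking near E ~ 2.5 keV
```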
Among these, CRESST-III (2019) [53] gives the most sensitive limits on DM-nucleon scattering in the light mass range 0.1$-$10 GeV. The DM-nucleon scattering cross-section for a nucleus with mass number $A$ and proton number $Z$ can be written as

$\sigma_{N}=\frac{g_{X}^{2}(Q_{1}+Q_{2})^{2}g_{2}^{2}v_{f}^{\prime 2}}{16\pi\cos^{2}\theta}\frac{\mu^{2}_{DN}}{m^{4}_{\gamma^{\prime}}}\left(\frac{Z}{A}\right)^{2},$ (45)

where $\mu_{DN}$ is the DM-nucleon reduced mass and $Z/A\sim 0.5$.

Figure 3: Top panels: Exclusion limits for benchmarks (a), (b) and (c) of Table 1 in the kinetic mixing-dark photon mass plane specific to our model, with constraints from the CRESST 2019 DM-nucleon scattering cross-section and the projected SuperCDMS limit. Also shown is the dark photon constraint from CHARM. The green patches show the regions where the DM relic density is satisfied and the grey stripes indicate the region producing the fit to the Xenon-1T excess. Bottom panels: Constraints from the Planck experiment on the DM annihilation to two $e^{+}e^{-}$ pairs.

Dark photon experiments [54] such as CHARM set limits on visible and invisible decays of the dark photon. These limits are shown in the kinetic mixing-dark photon mass plane in the top panels of Fig. 3 for benchmarks (a) (left), (b) (center) and (c) (right), recast to our model parameters. Also shown are the SuperCDMS [55] sensitivity projections for benchmarks (a) and (b), while (c) remains out of reach of SuperCDMS. In the bottom panels, we present the recast limits from the Planck experiment [56, 57] for the same benchmarks in the gauge coupling $g_{X}$-dark photon mass plane. These limits pertain to the annihilation of DM to two new vector bosons followed by their decay to two $e^{+}e^{-}$ pairs. The benchmarks of Table 1 satisfy all of the above constraints. The parameter space in which the experimental limits are evaded while the relic density is satisfied and the correct fit to the Xenon-1T excess is produced extends beyond the benchmarks of Table 1; we exhibit those regions in the top panels of Fig. 3. The green patches show the parts of the parameter space giving the correct DM relic density for a specific choice of the dark coupling $g_{X}$, and the grey stripes indicate the regions producing the correct fit to the Xenon-1T excess.

Figure 4: The event rate at Xenon-1T plotted over the background (purple line) vs. the electron recoil energy for benchmarks (a) (blue line), (b) (red line) and (c) (yellow line) of Table 1.

We exhibit in Fig. 4 the Xenon-1T data points (SR1) for the electron recoil events in the detector along with the background-only hypothesis (B0). The event rates for the three benchmarks of our model are plotted over the background, showing a clear enhancement near 2.5$-$2.8 keV, as expected. Lighter DM particles give a larger event rate, since the rate depends on $\bar{\sigma}_{e}/m_{2}$. Thus more data in the future, measuring the height of the peak more accurately, can determine more precisely the allowed range of the dark matter mass.

## 7 Conclusion

In this work we have investigated the Xenon-1T signal as arising from sub-GeV hidden sector dark matter. Specifically, we consider a $U(1)$ Stueckelberg extension of the Standard Model with two dark Dirac fermions, degenerate in mass, charged under the $U(1)$ gauge group.
The fermion mass degeneracy is broken by a small mass mixing term, which removes the degeneracy and produces two Dirac fermions $D_{1}^{\prime},D_{2}^{\prime}$, where $D_{2}^{\prime}$ is heavier than $D_{1}^{\prime}$ with a mass splitting of size $\sim 3$ keV. The observed effect is explained by the exothermic inelastic scattering $eD_{2}^{\prime}\to eD_{1}^{\prime}$. The scattering occurs via the exchange of a dark photon, the massive gauge boson of the hidden sector. The coupling of the dark photon to the electron arises from the gauge kinetic mixing of the $U(1)$ gauge boson of the hidden sector with the gauge boson of $U(1)_{Y}$ hypercharge. We have given a detailed analysis of the dark matter relic density, constituted of the dark fermions, while the dark photons decay before BBN time. As noted above, there are two dark fermions in the system, $D_{1}^{\prime}$ and $D_{2}^{\prime}$, where $D_{2}^{\prime}$ has a mass slightly greater than that of $D_{1}^{\prime}$ and decays to $D_{1}^{\prime}$ via the channel $D^{\prime}_{2}\to D_{1}^{\prime}+\nu\bar{\nu}$. However, the lifetime for this decay is larger than the age of the universe, so for all practical purposes dark matter is constituted of the two dark fermions $D_{2}^{\prime}$ and $D_{1}^{\prime}$ in essentially equal amounts.

In the analysis of the relic density one encounters stringent constraints on the kinetic mixing and on the dark photon mass from the CRESST 2019 DM-nucleon scattering cross section and from the CHARM experiment. We have translated these constraints to our analysis for the model points considered in Table 1 and have shown that the model parameters of Table 1 are consistent with them (Fig. 3). We note in passing that these constraints on the kinetic mixing and on the dark photon mass are projected to become more stringent with future data from SuperCDMS, as shown in Fig. 3, putting further constraints on the allowed parameter space of GeV-size dark matter models. Further, the Planck experiment constrains the $U(1)_{X}$ gauge coupling and the dark photon mass through the dark matter annihilation to $4e$, which arises in our case from the annihilation channel $D^{\prime}\bar{D}^{\prime}\to\gamma^{\prime}\gamma^{\prime}\to 4e$. Here again we have shown that our model is consistent with these constraints, as exhibited in Fig. 3. In Fig. 4 we showed that the models listed in Table 1, which satisfy all the known constraints on the kinetic mixing, on the gauge coupling of the dark sector $U(1)_{X}$, and on the dark photon mass, and which generate a relic density of dark matter consistent with the Planck experiment, can explain the Xenon-1T excess. We noted that the size of the peak for the excess events is model-dependent, and more data in the future from the Xenon-1T collaboration will help delineate the nature of the dark sector more accurately. Further checks on the model can also come from additional data from direct detection experiments that focus on the low-mass region of dark matter in the GeV range.

Acknowledgments: The research of AA and MK was supported by the BMBF under contract 05H18PMCC1, while the research of PN was supported in part by the NSF Grant PHY-1913328.

## Appendix

In this appendix we give further details of the analysis presented in the main body of the paper. In appendix A we discuss the generation of the mass term for the dark fermions $D_{1}$ and $D_{2}$ and of the mixing term involving $\bar{D}_{1}D_{2}+\bar{D}_{2}D_{1}$ from a Higgs mechanism.
In appendix B we give the cross-sections for the annihilation of $D\bar{D}$ into $q\bar{q},\ell\bar{\ell}$, $\nu\bar{\nu}$ and $\gamma^{\prime}\gamma^{\prime}$, as well as the cross section for the conversion process $D_{2}\bar{D}_{2}\to D_{1}\bar{D}_{1}$. In appendix C, details of the partial decay widths of the dark photon $\gamma^{\prime}$ are given, i.e., $\gamma^{\prime}\to\ell\bar{\ell},q\bar{q},\nu\bar{\nu}$.

## Appendix A Generation of the $\bar{D}_{1}D_{2}+\bar{D}_{2}D_{1}$ term from spontaneous symmetry breaking

We now show that the term $\Delta\mu(\bar{D}_{1}D_{2}+\bar{D}_{2}D_{1})$ can be produced via spontaneous breaking, which also gives mass to the $U(1)_{X}$ gauge boson. Thus consider a $U(1)_{X}$ gauge field coupled to a complex scalar field charged under $U(1)_{X}$ with charge $Q_{\phi}$, and further assume that the complex scalar couples to the Dirac fermions $D_{1},D_{2}$. Specifically, we consider the Lagrangian

$\mathcal{L}_{\phi}=-|(\partial_{\mu}\phi-ig_{X}Q_{\phi}C_{\mu}\phi)|^{2}-V(\phi\phi^{*})-\lambda\frac{H^{c}H}{\Lambda}(\bar{D}_{1}D_{1}+\bar{D}_{2}D_{2})-(\lambda^{\prime}\bar{D}_{1}D_{2}\phi+\text{h.c.}).$ (46)

The Lagrangian above is invariant under $U(1)$ gauge transformations when

$-Q_{1}+Q_{2}+Q_{\phi}=0,$ (47)

where $Q_{1}$ and $Q_{2}$ are the $U(1)_{X}$ charges of $D_{1}$ and $D_{2}$. The potential $V(\phi)$ produces a VEV $\phi_{0}=\langle\phi\rangle$, and after spontaneous breaking the dark photon and the dark fermions acquire masses as follows:

$\mathcal{L}_{\rm m}=-\frac{1}{2}m^{2}_{\gamma^{\prime}}A^{\mu^{\prime}}A^{\prime}_{\mu}-\mu(\bar{D}_{1}D_{1}+\bar{D}_{2}D_{2})-\Delta\mu(\bar{D}_{1}D_{2}+\bar{D}_{2}D_{1}),$ (48)

where

$\displaystyle m_{\gamma^{\prime}}$ $\displaystyle=\sqrt{2}g_{X}Q_{\phi}\phi_{0},$ $\displaystyle\mu$ $\displaystyle=\lambda\frac{v^{2}}{\Lambda},$ $\displaystyle\Delta\mu$ $\displaystyle=\lambda^{\prime}\phi_{0},$ (49)

and $v\sim 250$ GeV is the Standard Model Higgs VEV. For $g_{X}\sim Q_{\phi}\sim 1$, an $m_{\gamma^{\prime}}$ in the range 50$-$300 MeV requires $\phi_{0}\sim 100$ MeV. A Dirac fermion mass $\mu\sim 1$ GeV requires the cutoff scale $\Lambda\sim 100$ TeV; the cutoff scale could have a low-scale string origin. Further, $\Delta\mu\sim 2$ keV requires $\lambda^{\prime}\sim 10^{-5}$, and such a small $\lambda^{\prime}$ could also have a low-scale string origin. Quite remarkably, if we assume that the 'flavor changing' $\Delta\mu$ term arises from a higher-dimensional operator such as $(H^{c}H/\Lambda^{2})(\bar{D}_{1}D_{2}\phi+\bar{D}_{2}D_{1}\phi^{*})$, then after spontaneous breaking it produces $\lambda^{\prime}\sim v^{2}/\Lambda^{2}\sim 10^{-5}$, which is precisely the size needed to generate $\Delta\mu\sim 2$ keV.

## Appendix B Relevant cross-sections

We present here the cross-sections needed for the computation of the dark matter relic density. In these computations we have taken $m_{D^{\prime}_{1}}\approx m_{D^{\prime}_{2}}\approx m_{D}$, as the mass difference between the two fermions is tiny and has no substantial effect on the size of the cross sections computed below.
1. $D\bar{D}\to Z/\gamma^{\prime}\to q\bar{q}$: The total cross-section for the process $D\bar{D}\to Z/\gamma^{\prime}\to q\bar{q}$ is given by

$\displaystyle\sigma^{D\bar{D}\to q\bar{q}}(s)=$ $\displaystyle\frac{c^{2}_{X}g_{X}^{2}g_{2}^{2}}{8\pi s\cos^{2}\theta}\sqrt{\frac{s-4m^{2}_{q}}{s-4m^{2}_{D}}}\Bigg[\frac{(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})^{2}(\alpha^{2}\eta_{q}T^{2}_{3q}-2\alpha\beta\kappa_{q}Q_{q}T_{3q}+2\beta^{2}Q_{q}^{2}\kappa_{q})}{(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}}$ $\displaystyle+\frac{(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})^{2}(\alpha^{\prime 2}\eta_{q}T^{2}_{3q}-2\alpha^{\prime}\beta^{\prime}\kappa_{q}Q_{q}T_{3q}+2\beta^{\prime 2}Q_{q}^{2}\kappa_{q})}{(s-m^{2}_{\gamma^{\prime}})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}}$ $\displaystyle-2(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})\Big\{Q_{q}\beta(2\beta^{\prime}Q_{q}-\alpha^{\prime}T_{3q})\kappa_{q}$ $\displaystyle+\alpha T_{3q}(\alpha^{\prime}T_{3q}\eta_{q}-\beta^{\prime}Q_{q}\kappa_{q})\Big\}\times\frac{(s-m^{2}_{Z})(s-m^{2}_{\gamma^{\prime}})+\Gamma_{Z}\Gamma_{\gamma^{\prime}}m_{Z}m_{\gamma^{\prime}}}{[(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}][(s-m^{2}_{\gamma^{\prime}})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}]}\Bigg],$ (50)

where $m_{q}$, $m_{Z}$ and $m_{\gamma^{\prime}}$ are the quark, $Z$ and $\gamma^{\prime}$ masses, respectively, $T_{3q}=1/2\,(-1/2)$ and $Q_{q}=2/3\,(-1/3)$ for up- (down-) type quarks, and with

$\displaystyle\kappa_{q}$ $\displaystyle=(s+2m^{2}_{D})(s+2m^{2}_{q}),~~~\eta_{q}=(s+2m^{2}_{D})(s-m^{2}_{q}),$ (51) $\displaystyle\alpha$ $\displaystyle=\cos\psi-\bar{\epsilon}\sin\theta\sin\psi,~~~\beta=\sin^{2}\theta\cos\psi-\bar{\epsilon}\sin\theta\sin\psi,$ $\displaystyle\alpha^{\prime}$ $\displaystyle=\sin\psi+\bar{\epsilon}\sin\theta\cos\psi,~~~\beta^{\prime}=\sin^{2}\theta\sin\psi+\bar{\epsilon}\sin\theta\cos\psi.$

2. $D\bar{D}\to Z/\gamma^{\prime}\to\ell\bar{\ell}$: The total cross-section for the process $D\bar{D}\to Z/\gamma^{\prime}\to\ell\bar{\ell}$ is given by

$\displaystyle\sigma^{D\bar{D}\to\ell\bar{\ell}}(s)=$ $\displaystyle\frac{c^{2}_{X}g_{X}^{2}g_{2}^{2}}{96\pi s\cos^{2}\theta}\sqrt{\frac{s-4m^{2}_{\ell}}{s-4m^{2}_{D}}}\Bigg[\frac{(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})^{2}(\alpha^{2}\eta_{\ell}-4\alpha\beta\kappa_{\ell}+8\beta^{2}\kappa_{\ell})}{(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}}$ $\displaystyle+\frac{(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})^{2}(\alpha^{\prime 2}\eta_{\ell}-4\alpha^{\prime}\beta^{\prime}\kappa_{\ell}+8\beta^{\prime 2}\kappa_{\ell})}{(s-m_{\gamma^{\prime}}^{2})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}}$ $\displaystyle+2(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})\Big\{2\beta(\alpha^{\prime}-4\beta^{\prime})\kappa_{\ell}$ $\displaystyle-\alpha(\alpha^{\prime}\eta_{\ell}-2\beta^{\prime}\kappa_{\ell})\Big\}\times\frac{(s-m^{2}_{Z})(s-m^{2}_{\gamma^{\prime}})+\Gamma_{Z}\Gamma_{\gamma^{\prime}}m_{Z}m_{\gamma^{\prime}}}{[(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}][(s-m^{2}_{\gamma^{\prime}})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}]}\Bigg],$ (52)

where

$\kappa_{\ell}=(s+2m^{2}_{D})(s+2m^{2}_{\ell}),~~~\eta_{\ell}=(s+2m^{2}_{D})(s-m^{2}_{\ell}).$ (53)
3. $D\bar{D}\to Z/\gamma^{\prime}\to\nu\bar{\nu}$: The total cross-section for the process $D\bar{D}\to Z/\gamma^{\prime}\to\nu\bar{\nu}$ is given by

$\displaystyle\sigma^{D\bar{D}\to\nu\bar{\nu}}(s)$ $\displaystyle=\frac{c^{2}_{X}g_{X}^{2}g^{2}_{2}}{32\pi\cos^{2}\theta}\frac{(s+2m^{2}_{D})s^{1/2}}{\sqrt{s-4m^{2}_{D}}}\Bigg{\\{}\frac{\alpha^{\prime 2}(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})^{2}}{(s-m_{\gamma^{\prime}}^{2})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}}+\frac{\alpha^{2}(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})^{2}}{(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}}$ $\displaystyle-2\alpha\alpha^{\prime}(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})(\mathcal{R}_{12}-s_{\delta}\mathcal{R}_{22})$ $\displaystyle\times\frac{(s-m^{2}_{Z})(s-m^{2}_{\gamma^{\prime}})+\Gamma_{Z}\Gamma_{\gamma^{\prime}}m_{Z}m_{\gamma^{\prime}}}{[(s-m_{Z}^{2})^{2}+m^{2}_{Z}\Gamma_{Z}^{2}][(s-m^{2}_{\gamma^{\prime}})^{2}+m^{2}_{\gamma^{\prime}}\Gamma_{\gamma^{\prime}}^{2}]}\Bigg{\\}}.$ (54)

In all of the above, the coefficient $c_{X}$ is defined as

$c_{X}=\begin{cases}\frac{1}{2}(Q_{1}+Q_{2}),&\text{for }\bar{D}^{\prime}_{1}D^{\prime}_{1}/\bar{D}^{\prime}_{2}D^{\prime}_{2}\\\ \frac{1}{2}(Q_{1}-Q_{2}),&\text{for }\bar{D}^{\prime}_{1}D^{\prime}_{2}/\bar{D}^{\prime}_{2}D^{\prime}_{1}.\end{cases}$ (55)

4. $D^{\prime}_{i}\bar{D}^{\prime}_{j}\longleftrightarrow\gamma^{\prime}\gamma^{\prime}$: The total cross-section for the process $D^{\prime}_{i}\bar{D}^{\prime}_{j}\to\gamma^{\prime}\gamma^{\prime}$ is

$\sigma^{D^{\prime}_{i}\bar{D}^{\prime}_{j}\to\gamma^{\prime}\gamma^{\prime}}(s)=\sum_{ij}c_{ij}^{4}\sigma_{0}(s),$ (56)

where

$\displaystyle\sigma_{0}(s)=\frac{g_{X}^{4}(\mathcal{R}_{11}-s_{\delta}\mathcal{R}_{21})^{4}}{8\pi s(s-4m^{2}_{D})}$ $\displaystyle\Bigg{\\{}-\frac{\sqrt{(s-4m^{2}_{\gamma^{\prime}})(s-4m^{2}_{D})}}{m^{4}_{\gamma^{\prime}}+m^{2}_{D}(s-4m^{2}_{\gamma^{\prime}})}[2m^{4}_{\gamma^{\prime}}+m^{2}_{D}(s+4m^{2}_{D})]$ $\displaystyle+\frac{\log A}{s-2m^{2}_{\gamma^{\prime}}}(s^{2}+4m^{2}_{D}s+4m^{4}_{\gamma^{\prime}}-8m^{4}_{D}-8m_{D}^{2}m^{2}_{\gamma^{\prime}})\Bigg{\\}},$ (57)

and

$A=\frac{s-2m^{2}_{\gamma^{\prime}}+\sqrt{(s-4m^{2}_{\gamma^{\prime}})(s-4m^{2}_{D})}}{s-2m^{2}_{\gamma^{\prime}}-\sqrt{(s-4m^{2}_{\gamma^{\prime}})(s-4m^{2}_{D})}},$ (58)

with

$c_{ij}=\begin{cases}\frac{1}{2}(Q_{1}+Q_{2}),&\text{for }i=j=1,2\\\ \frac{1}{2}\sqrt{(Q_{1}+Q_{2})(Q_{1}-Q_{2})},&\text{for }i,j=\\{1,2\\},i\neq j.\end{cases}$ (59)

The reverse processes are given by

$9(s-4m^{2}_{\gamma^{\prime}})\sigma^{\gamma^{\prime}\gamma^{\prime}\to D^{\prime}_{i}\bar{D}^{\prime}_{j}}(s)=8(s-4m^{2}_{D})\sigma^{D^{\prime}_{i}\bar{D}^{\prime}_{j}\to\gamma^{\prime}\gamma^{\prime}}(s).$ (60)

5. The conversion process $D^{\prime}_{2}\bar{D}^{\prime}_{2}\longrightarrow D^{\prime}_{1}\bar{D}^{\prime}_{1}$:

$\sigma^{D^{\prime}_{2}\bar{D}^{\prime}_{2}\longrightarrow D^{\prime}_{1}\bar{D}^{\prime}_{1}}\simeq\frac{g_{X}^{4}(Q_{1}+Q_{2})^{4}}{4\pi}\frac{m^{2}_{D}}{m^{4}_{\gamma^{\prime}}}\frac{(1-5r+7r^{2})}{(1-r)^{2}},$ (61)

where $r=m^{2}_{\gamma^{\prime}}/(4m^{2}_{D})$.

## Appendix C Decay widths for the processes $\gamma^{\prime}\to\ell\bar{\ell},q\bar{q},\nu\bar{\nu}$
1. The decay width of $\gamma^{\prime}$ to leptons is given by

$\displaystyle\Gamma_{\gamma^{\prime}\to\ell\bar{\ell}}=\frac{g_{2}^{2}}{24\pi m_{\gamma^{\prime}}\cos^{2}\theta}\sqrt{1-\left(\frac{2m_{\ell}}{m_{\gamma^{\prime}}}\right)^{2}}$ $\displaystyle\Bigg{[}\frac{1}{4}\alpha^{\prime 2}(m^{2}_{\gamma^{\prime}}-m^{2}_{\ell})-\alpha^{\prime}\beta^{\prime}(m^{2}_{\gamma^{\prime}}+2m^{2}_{\ell})$ $\displaystyle+2\beta^{\prime 2}(m^{2}_{\gamma^{\prime}}+2m^{2}_{\ell})\Bigg{]}.$ (62)

2. The decay width of $\gamma^{\prime}$ to quarks is given by

$\displaystyle\Gamma_{\gamma^{\prime}\to q\bar{q}}=\frac{g_{2}^{2}}{8\pi m_{\gamma^{\prime}}\cos^{2}\theta}\sqrt{1-\left(\frac{2m_{q}}{m_{\gamma^{\prime}}}\right)^{2}}$ $\displaystyle[\alpha^{\prime 2}(m^{2}_{\gamma^{\prime}}-m^{2}_{q})T_{3q}^{2}-2\alpha^{\prime}\beta^{\prime}(m^{2}_{\gamma^{\prime}}+2m^{2}_{q})Q_{q}T_{3q}$ $\displaystyle+2\beta^{\prime 2}(m^{2}_{\gamma^{\prime}}+2m^{2}_{q})Q_{q}^{2}].$ (63)

3. The decay width of $\gamma^{\prime}$ to neutrinos is given by

$\Gamma_{\gamma^{\prime}\to\nu\bar{\nu}}=\frac{g_{2}^{2}}{32\pi\cos^{2}\theta}m_{\gamma^{\prime}}\alpha^{\prime 2}.$ (64)
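As a quick numerical companion to these formulas, the following minimal Python sketch transcribes Eqs. (62) and (64). It is an illustration only: the default values for $g_{2}$ and $\cos^{2}\theta$ are generic electroweak placeholders, and the mixing coefficients $\alpha^{\prime},\beta^{\prime}$ passed in the example call are made-up numbers, not values obtained in this model.

```python
import math

def gamma_to_leptons(m_gp, m_l, alpha_p, beta_p, g2=0.652, cos2theta=0.77):
    """Gamma(gamma' -> l lbar) following Eq. (62); masses in GeV, width in GeV.

    alpha_p and beta_p are the mixing coefficients alpha', beta' of Eq. (51).
    """
    if m_gp <= 2.0 * m_l:
        return 0.0  # below threshold
    kin = math.sqrt(1.0 - (2.0 * m_l / m_gp) ** 2)
    bracket = (0.25 * alpha_p**2 * (m_gp**2 - m_l**2)
               - alpha_p * beta_p * (m_gp**2 + 2.0 * m_l**2)
               + 2.0 * beta_p**2 * (m_gp**2 + 2.0 * m_l**2))
    return g2**2 / (24.0 * math.pi * m_gp * cos2theta) * kin * bracket

def gamma_to_neutrinos(m_gp, alpha_p, g2=0.652, cos2theta=0.77):
    """Gamma(gamma' -> nu nubar) following Eq. (64)."""
    return g2**2 / (32.0 * math.pi * cos2theta) * m_gp * alpha_p**2

# Illustrative call only: a 200 MeV dark photon decaying to e+ e-,
# with made-up mixing coefficients.
print(gamma_to_leptons(0.200, 0.000511, 1.0e-3, 1.0e-4))
print(gamma_to_neutrinos(0.200, 1.0e-3))
```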
# Hadron Molecules Revisited

R.S. Longacre

Brookhaven National Laboratory, Upton, NY 11973, USA

###### Abstract

Hadron molecules are particles made out of hadrons that are held together by self interactions. In this report we discuss seven such molecules and their self interactions. The molecular structures of the $f_{0}(980)$, $a_{0}(980)$, $f_{1}(1400)$, $\Delta N(2150)$ and $\pi_{1}(1400)$ are given. We predict that two more states, the $K\overline{K}K(1500)$ and the $a_{1}(1400)$, should be found.

## 1 Introduction

The first two molecular states, the $f_{0}(980)$ and $a_{0}(980)$, are the isosinglet and the isotriplet states of the $K$$\overline{K}$ bound system[1]. This binding requires a quark-spin hyperfine interaction in the overall $q$ $q$ $\overline{q}$ $\overline{q}$ system. We will see that this binding is different from the particle exchange mechanisms that bind the rest of the molecules in this report. The exchanges of Ref.[1] that bind the $K$$\overline{K}$ system are quark exchanges where the quark-spin hyperfine interaction leads to an attractive potential. This attractive potential can only make states if the mesons of the fall-apart mode $q\overline{q}$–$q\overline{q}$ are below threshold. Thus we have states of $K$$\overline{K}$ lying just below threshold in the scalar channel ($0^{++}$). This is somewhat like the deuteron in the $p$ $n$ system.

The work of Ref.[1] has one flaw with regard to long-range color van der Waals-type forces[2]. It has been pointed out that confining potentials of the type used in Ref.[1] lead to a long-range power-law residual potential of $r^{-2}$ between two color-singlet mesons. Ref.[1] calculates the potential between mesons to be given by

$V_{vdW}=-{{20MeV}\over{r^{2}}}.$ (1)

This is to be compared with the coulomb force between charged mesons

$V_{coulomb}=-{{1.5MeV}\over{r}}.$ (2)

The $r$ in equations 1 and 2 is given in fm. Let us compare the potential strength between them by setting them equal:

$V_{vdW}=V_{coulomb}=-{{20MeV}\over{r^{2}}}=-{{1.5MeV}\over{r}}.$ (3)

This occurs at a distance of 13.3 fm, which is very large. The scalar bound states are much smaller, 2 fm at best. This long-range van der Waals force would imply that gluons are massless (like the photon) and can travel to the edge of the universe in a virtual state (like photons in EM fields). Gluons can only try to travel to the edge if they are in a color singlet state (glueballs or glueloops). This would lead to an exponential cutoff

$V(r)=-{{e^{-\mu r}}\over{r}},$ (4)

where $\mu$ is the glueball mass. Since mesons are much lighter than glueballs, and the pion is the lightest, the pion is the longest-range carrier of the strong force. Meson exchanges will be the binding force of the other hadron molecules of this report.

The report is organized in the following manner: Sec. 1 is the introduction to the $f_{0}(980)$ and $a_{0}(980)$. Sec. 2 looks at particle exchange calculations and the generation of the $f_{1}(1400)$. Sec. 3 considers a dibaryon state, the $\Delta N(2150)$, and its similarity to the $f_{1}(1400)$; this similarity predicts the $a_{1}(1400)$ state. Sec. 4 explains an exotic meson state with $J^{PC}=1^{-+}$, the $\pi_{1}(1400)$, which is seen in $\eta$ $\pi$ p-wave scattering. Sec. 5 predicts another molecular state, the $K\overline{K}K(1500)$, which should be found. Sec. 6 is the summary and discussion.
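The 13.3 fm crossover quoted above follows from elementary arithmetic. A minimal check (a sketch only; the 20 MeV and 1.5 MeV coefficients are read off equations 1 and 2, with $r$ in fm) also shows that at the $\sim 2$ fm scale of the bound states the residual van der Waals term would dominate the Coulomb one:

```python
def v_vdw(r):
    """Residual color van der Waals potential of equation 1 (MeV, r in fm)."""
    return -20.0 / r**2

def v_coulomb(r):
    """Coulomb potential between charged mesons, equation 2 (MeV, r in fm)."""
    return -1.5 / r

# Setting the two equal: 20/r^2 = 1.5/r  =>  r = 20/1.5 fm.
print(20.0 / 1.5)                    # 13.33... fm, the crossover distance
print(v_vdw(2.0), v_coulomb(2.0))    # at ~2 fm the vdW term dominates
```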
## 2 $K$$\overline{K}$$\pi$ as an interacting system

In order to bind $K$$\overline{K}$$\pi$ together, we need to develop a dynamical theory that uses a particle exchange mechanism, not the quark exchange that led to the van der Waals forces[2]. It is standard to break up the scattering of a three-particle system into a sum of isobar–spectator scatterings[3]. To complete this task we develop a unitary isobar model which has long-range particle exchange forces. We will assume that the only interaction among the three particles proceeds through isobars decaying into two particles, where one of the decay particles is exchanged with the spectator, forming another isobar. This one-particle exchange (OPE) occurs in the $K^{*}$$\overline{K}$, $\overline{K^{*}}$$K$ and $a_{0}$$\pi$ isobar systems. See Figure 1(a) for the OPE mechanism (note that the $a_{0}$ is the $\delta$ isobar, which is its older name). We choose as our dynamical framework the Blankenbecler-Sugar formalism[4], which yields a set of coupled integral equations for the amplitudes $X(K^{*}\rightarrow K^{*}$), $X(K^{*}\rightarrow\overline{K^{*}}$), $X(K^{*}\rightarrow a_{0}$), $X(\overline{K^{*}}\rightarrow K^{*}$), $X(\overline{K^{*}}\rightarrow\overline{K^{*}}$), $X(\overline{K^{*}}\rightarrow a_{0}$), $X(a_{0}\rightarrow K^{*}$), $X(a_{0}\rightarrow\overline{K^{*}}$), and $X(a_{0}\rightarrow a_{0}$). These amplitudes describe the isobar quasi-two-body processes $K^{*}\overline{K}\rightarrow K^{*}\overline{K}$, $K^{*}\overline{K}\rightarrow\overline{K^{*}}K$, $K^{*}\overline{K}\rightarrow a_{0}\pi$, etc., whose solutions are Lorentz invariant and satisfy two- and three-body unitarity and cluster properties. In operator formalism these equations have the structure (a schematic representation is shown in Figure 1(b))

$X_{ba}=B_{ba}(W_{E})+B_{bc}(W_{E})G_{c}(W_{E})X_{ca}(W_{E});a,b,c=K^{*},\overline{K^{*}},a_{0}.$ (5)

In equation 5, $W_{E}$ is the overall c.m. energy of the three-particle system. The index $c$, which is summed over, represents each isobar. Integration is over solid angles, where each total angular momentum is projected out as an individual set of coupled equations. The c.m. momentum of the spectator, which has the same magnitude as that of the isobar, is integrated from zero to $\infty$. Thus all effective masses of isobars are probed, starting at the kinematic limit and going down to negative $\infty$. All details are given in Ref.[5]. $G_{c}(W_{E})$ is the propagator of isobar $c$, and $W_{E}$ determines the kinematic limit where the c.m. momentum of the spectator and the isobar is zero. In Figure 2 we show the Breit-Wigner shape of the $K^{*}$ propagator $G_{K^{*}}(2.0)$ as a function of isobar mass ($K\pi$). In Figure 3 we show the Breit-Wigner shape of the $a_{0}$ propagator $G_{a_{0}}(2.04)$ as a function of isobar mass ($K\overline{K}$).

Figure 1: (a) Long-range one-particle-exchange (OPE) mechanism. Isobars $K^{*}$, $\overline{K^{*}}$, or $\delta$ ($a_{0}$ is the newer name) plus $\overline{K}$, $K$ or $\pi$, which absorbs the exchange particle that has decayed from the isobar, forming another isobar. (b) Unitary sum of OPE diagrams in terms of coupled integral equations.

Figure 2: The absolute value squared of the imaginary part divided by the propagator for $K\pi$ propagation of the $K^{*}$, which is an $I={{1}\over{2}}$ and $J=1$ mode. This is equal to the square of the T-matrix scattering of this $K\pi$ mode.

Figure 3: The absolute value squared of the imaginary part divided by the propagator for $K\overline{K}$ propagation of the $a_{0}$, which is an $I=1$ and $J=0$ mode.
This is equal to the square of the T-matrix scattering of this $K\overline{K}$ mode.

Equation 5 can be rewritten as

$\sum_{k}(\delta_{ik}-M_{ik})X_{kj}=B_{ij};i,j,k=K^{*},\overline{K^{*}},a_{0}.$ (6)

This Fredholm integral equation leads to a Fredholm determinant as a function of $W_{E}$ for each partial wave or total $J$ projection. We have solved this Fredholm determinant for two $J^{PC}$ states, $0^{-+}$ and $1^{++}$. The results of this analysis are shown in Figure 4. We see no binding effect in the $0^{-+}$ determinant, while in the $1^{++}$ channel there is a large effect around $1.40$ GeV. At an energy of $1.40$ GeV the $K^{*}$ and $\overline{K^{*}}$ peaks of Figure 2 are just coming into play. Since for $1^{++}$ they are in an s-wave, they have maximum effect. In the $0^{-+}$ these peaks are suppressed by a p-wave barrier. Around the mass of $1.40$ GeV one can form a picture of the system being a $K$$\overline{K}$ ($a_{0}$) molecule at the center of gravity with a light pion revolving in a p-wave orbit. The momentum of the pion is such that at each half-revolution a $K^{*}$ or a $\overline{K^{*}}$ is formed (see Figure 5). The phase shift and production cross sections of this molecular state are explored in detail in Ref.[5].

Figure 4: The value of 1 over the Fredholm determinant squared for $J^{PC}$ = $1^{++}$ and $J^{PC}$ = $0^{-+}$ as a function of $K$$\overline{K}$$\pi$ mass (smooth curves).

Figure 5: The meson system mainly resonates in the s-wave $K^{*}$$\overline{K}$ and $K$$\overline{K^{*}}$ mode, with a pion rotating in a p-wave about a $K$ $\overline{K}$ system which forms an isospin triplet. The pion moves back and forth forming $K^{*}$ and $\overline{K^{*}}$ states with one $K$ or $\overline{K}$.

Figure 6: The dibaryon system mainly resonates in the s-wave $\Delta$ $N$ mode, with a pion rotating in a p-wave about a spin-aligned $N$ $N$ system which forms an isospin singlet. The pion moves back and forth forming $\Delta$ states with one nucleon and then the other.

## 3 Two more molecular states

### 3.1 Dibaryon state $\Delta N(2150)$ as a molecule

The dibaryon state interacts in three two-body scattering channels. Its mass is 2.15 GeV, and it has a strong-interaction resonance decay width of 100 MeV. It interacts in the $N$$N$ d-wave spin anti-aligned[6], $d$$\pi$ p-wave spin aligned[7], and $\Delta$$N$ s-wave spin aligned[8] channels. The dibaryon system mainly resonates in the s-wave $\Delta$$N$ mode, with a pion rotating in a p-wave about a spin-aligned $N$$N$ system which forms an isospin singlet. The pion moves back and forth forming $\Delta$ states with one nucleon and then the other (see Figure 6). All three isospin states of the pion can be achieved in this resonance. Thus we can have $\pi^{+}$$d$, $\pi^{0}$$d$, and $\pi^{-}$$d$ states. If the pion is absorbed by any of the nucleons, it undergoes a spin flip producing a d-wave $N$$N$ system. The resonance decays into $N$$N$, $\pi$$d$, or $\pi$$N$$N$. In the last section we saw a meson system that had an analogous orbiting pion in a p-wave mode about a $K\overline{K}$ in an s-wave[5]. Both systems have a similar lifetime, or width, of $\sim 0.100$ GeV[9].

### 3.2 $a_{1}(1400)$ state is predicted

Unlike the $f_{1}(1400)$, the $\Delta N(2150)$ has an isosinglet at the center of motion. The $K$$\overline{K}$ isosinglet state of Sec. 1 could form the center of motion for an isotriplet molecular state, the $a_{1}(1400)$. The set of integral equations would be the same as in the $f_{1}(1400)$ case, making a similar Fredholm determinant.
As for the $\Delta N(2150)$, which had a $d\pi$ decay mode, one would expect that there would be an $f_{0}(980)$ $\pi$ decay mode. We can calculate the branching ratio of the $f_{1}(1400)$ to $a_{0}$$\pi$ from the Dalitz plot calculated using equation 20 of Ref.[5]. The ratio in the plot going into $a_{0}$$\pi$ is 22%. The reason this mode is so small is that $\sqrt{Imag(D_{a_{0}})}\over{|D_{a_{0}}|}$ is much smaller than $\sqrt{Imag(D_{K^{*}})}\over{|D_{K^{*}}|}$[5], whereas the ratios $Imag(D_{a_{0}})\over{|D_{a_{0}}|}$ and $Imag(D_{K^{*}})\over{|D_{K^{*}}|}$ are one at resonance (see Figures 2 and 3). For the $\Delta N(2150)$ the $d\pi$ branching ratio is 25%[9]. We should expect that the branching of $a_{1}(1400)\rightarrow f_{0}\pi$ should be the same as $f_{1}(1400)\rightarrow a_{0}\pi$. Dr. Suh-Urk Chung has claimed such a state has been observed[10].

## 4 Exotic state $J^{PC}$ = $1^{-+}$ $\pi_{1}(1400)$ as a molecule

In Sec. 2 we explained the $f_{1}$(1420) seen in $\overline{K}K\pi$[5]. Following the same approach we can demonstrate the possibility that the $\pi_{1}$(1400) is a $\overline{K}K\pi\pi$ molecule, where the $\overline{K}K\pi$ is in a relative s-wave with the other $\pi$ orbiting it in a p-wave. Since the $\overline{K}K\pi$ is resonating as the $\eta$(1295), it is possible that the off-shell $\overline{K}K\pi(\eta)$ would couple to the ground state $\eta$, thus creating an $\eta\pi$ p-wave decay mode. As was done in Ref.[5], we need to arrange a set of Born terms connecting all of the possible intermediate isobar states of the $\overline{K}K\pi\pi$ system ($\eta$(1295)$\pi$, $a_{0}$(980)$\rho$(770), $K_{1}$(1270)$\overline{K}$ or $\overline{K_{1}}(1270)$$K$). We assume that the only interaction among the particles occurs through one-particle exchange (OPE), thus connecting the above isobar states (Figure 7). In order to completely derive the dynamics, one would have to develop a true four-body scattering mechanism with OPE Born terms connecting two- and three-body isobar states. We can take a shortcut and use the three-body formalism developed in Ref. [5], if we note that the set of diagrams (Figure 8) could be summed using a true four-body formalism and be replaced by the Born term of Figure 9. Here the $a_{0}$(980) is treated as a stable particle, and the $\pi\pi$ p-wave phase shift ($\rho_{med}$) is assumed to be modified by the sum of terms in Figure 8. With this assumption, binding can occur if we use the $N/D$ propagators for the $\eta$(1295) (see Figure 10) and $\rho_{med}$ (see Figure 11). In Figure 11 we also show the unaltered p-wave phase shift ($\rho$). Figure 12 shows the final state enhancement times the $\eta$(1295) $\pi$ p-wave kinematics. The bump is driven by the collision on the Dalitz plot of the $\eta$(1295) Breit-Wigner (Figure 10) and the rapid increase of the $\pi\pi$ p-wave phase shift (Figure 11). We have suggested the possibility that the $\pi_{1}$(1400) is a final state interaction of the $K\bar{K}\pi$ system in an s-wave, orbited by a $\pi$ in a p-wave. The $\eta\pi$ decay mode is generated by the off-shell appearance of the $\eta$ from the $K\bar{K}\pi$ system ($0^{-+}$). Our model thus predicts that a strong $J^{PC}=1^{-+}$ signal should be seen in the $K\bar{K}\pi\pi$ system at around 1.4 GeV/$c^{2}$. If the $\pi_{1}$(1400) is only seen in the $\eta\pi$ channel, then it is hard to understand three facts about its production. First, that the force between the $\eta$ and $\pi$ in a p-wave should be repulsive (QCD) [11].
This is not a problem if the $\eta\pi$ is a minor decay mode. Second, why should the production be so small compared to the $a_{2}$, which has only a 14% branching to $\eta\pi$? One would think it should be produced in unnatural parity exchange, not natural. Again, this is not a problem if the $\eta\pi$ is a minor decay mode.

Figure 7: One-particle-exchange (OPE) Born terms for the $\overline{K}K\pi\pi$ system.

Figure 8: The set of infinite terms where all $K$ and $\overline{K}$ exchanges are summed.

Figure 9: The Born term that is used in the three-body effective analysis, where the $\pi\pi$ p-wave is altered by the sum of terms in Figure 8.

Figure 10: The absolute value squared of the imaginary part of the $\eta$(1290) propagator divided by the complete propagator, thus forming the square of the T-matrix scattering amplitude.

Figure 11: The absolute value squared of the imaginary part of the $\pi\pi$ p-wave phase shift: the solid line is the modified phase shift; the dashed line is the original vacuum phase shift, which is the $\rho$ meson.

Figure 12: The value of 1 over the Fredholm determinant squared times the kinematics of p-wave $\pi\eta$(1290).

Finally, it is reasonable to think that the largest decay amplitudes would be the modes that have an $a_{0}$(980) in the final state. However, in Ref.[5] the same conclusion was initially drawn, except that when one puts in all the numerical factors the $a_{0}$ modes become suppressed. The explanation comes from the very powerful attraction of the kaons in the $a_{0}$ mode. The isobar decay amplitude is proportional to $\sqrt{N}/D$; both $N$ and $D$ are large numbers, while the ratio is near one at the threshold (see the last section and Ref.[5]). Thus the decay amplitude becomes proportional to $1/\sqrt{N}$. We predict that the major mode could be a $\pi\pi$ p-wave having no $\rho$ peak (work above), forming a $K\pi\pi$ or a $\overline{K}\pi\pi~{}J^{P}=1^{+}$ system plus a $\overline{K}$ or $K$, with overall $G$-parity minus. The $K\pi\pi$ should more or less be a phase space distribution.

## 5 $K\overline{K}K(1500)$ state is predicted

We saw in Sec. 1 and Sec. 2 that the $K$$\overline{K}$ system had attraction through the $a_{0}(980)$ resonance. It seems only natural to investigate the possibility that a three-K molecule might exist. This is worthwhile only if we consider exotic quantum numbers. The only exotic quantum number which can be obtained is the isotopic spin. Thus a set of coupled equations for the $K$$\overline{K}$$K$ system in an overall s-wave with isotopic spin of $3\over{2}$ is created[5]. One over the Fredholm determinant squared for these equations is shown in Figure 13. Isotopic spin $3\over{2}$ implies there are four states: $K^{+}$$\overline{K^{0}}$$K^{+}$, $K^{+}$$K^{-}$$K^{+}$, $K^{0}$$\overline{K^{0}}$$K^{0}$, and $K^{0}$$K^{-}$$K^{0}$. The $K^{+}$$\overline{K^{0}}$$K^{+}$ is doubly charged. There would also be a $K^{-}$$K^{0}$$K^{-}$, which is the antimatter state of the $K^{+}$$\overline{K^{0}}$$K^{+}$. These states are unique to this type of binding mechanism.

## 6 Summary and Discussion

In this report we have discussed seven possible hadron molecular states. These states are particles made out of hadrons that are held together by self interactions. The seven molecules and their self interactions are explored. The $f_{0}(980)$ and $a_{0}(980)$ relied on quark exchange forces, which made states of $K$$\overline{K}$ lying just below threshold in the scalar channel ($0^{++}$). This is somewhat like the deuteron in the $p$ $n$ system.
The $f_{1}(1400)$, $\Delta N(2150)$ and $\pi_{1}(1400)$ molecular structures are held together by long-range particle exchange mechanisms, not the quark exchange that led to the van der Waals forces[2]. These exchange mechanisms also predict that two more states, the $K\overline{K}K(1500)$ and the $a_{1}(1400)$, should be found. For the $a_{1}(1400)$, the set of integral equations would be the same as in the $f_{1}(1400)$ case, making a similar Fredholm determinant. As for the $\Delta N(2150)$, which had a $d\pi$ decay mode, one would expect that there would be an $f_{0}(980)$ $\pi$ decay mode. Dr. Suh-Urk Chung has claimed such a state has been observed[10].

Figure 13: The value of 1 over the Fredholm determinant squared for $J^{P}$ = $0^{-}$ as a function of $K$$\overline{K}$$K$ mass (smooth curves).

## 7 Acknowledgments

This research was supported by the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.

## References

* [1] J. Weinstein and N. Isgur, Phys. Rev. Lett. 48 (1982) 659; Phys. Rev. D 27 (1983) 588; Phys. Rev. D 41 (1990) 2236.
* [2] O.W. Greenberg and H.J. Lipkin, Nucl. Phys. A370 (1981) 349.
* [3] D.J. Herndon, Phys. Rev. D 11 (1975) 3165.
* [4] R. Blankenbecler and R. Sugar, Phys. Rev. 142 (1966) 1051.
* [5] R. Longacre, Phys. Rev. D 42 (1990) 874.
* [6] R.A. Arndt et al., Phys. Rev. C 76 (2007) 025209.
* [7] C.H. Oh et al., Phys. Rev. C 56 (1997) 635.
* [8] D. Schiff and J. Tran Thanh Van, Nucl. Phys. B5 (1968) 529.
* [9] R. Longacre, arXiv:1311.3609 [hep-ph].
* [10] Suh-Urk Chung (private communication).
* [11] T. Barnes (private communication).
# GroupEnc: encoder with group loss for global structure preservation

David Novak (0000-0003-4574-9093), Sofie Van Gassen (0000-0002-7119-5330), Yvan Saeys (0000-0002-0415-1506)

Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Belgium

Data Mining and Modeling for Biomedicine, Center for Inflammation Research, VIB-UGent, Belgium

###### Abstract

Recent advances in dimensionality reduction have achieved more accurate lower-dimensional embeddings of high-dimensional data. In addition to visualisation purposes, these embeddings can be used for downstream processing, including batch effect normalisation, clustering, community detection or trajectory inference. We use the notion of structure preservation at both local and global levels to create a deep learning model, based on a variational autoencoder (VAE) and the stochastic quartet loss from the SQuadMDS algorithm. Our encoder model, called GroupEnc, uses a ‘group loss’ function to create embeddings with less global structure distortion than VAEs do, while keeping the model parametric and the architecture flexible. We validate our approach using publicly available biological single-cell transcriptomic datasets, employing $R_{\mathrm{NX}}$ curves for evaluation.

###### Keywords: Dimensionality reduction, Autoencoders, Bioinformatics.

## 1 Introduction

Autoencoders (AEs) are neural networks which encode high-dimensional (HD) input data as a low-dimensional (LD) latent representation and decode this into a reconstruction of the input. In training, reconstruction error is minimised via back-propagation. In the field of bioinformatics, we have seen impressive applications of autoencoders and variational autoencoders (VAEs; probabilistic models based on AEs) in dimensionality reduction (DR) for the purposes of visualisation [14, 4] and downstream data processing, including batch effect correction and cell population clustering [2, 3, 7]. This pertains to large and high-dimensional single-cell datasets, which quantify biological features per cell in a tissue sample of interest. Examples of these methods include single-cell RNA sequencing (scRNA-seq), flow cytometry, mass cytometry (CyTOF) and CITE-seq.

We introduce and evaluate GroupEnc: a stand-alone encoder module that optimises the group loss, a differentiable loss function that imposes a scale-agnostic structure-preserving constraint on the learned LD embedding. This is a modification of the stochastic quartet loss in SQuadMDS [8], applied here in a deep learning context. This results in a parametric model that can run on GPU. We achieve similar local structure preservation and better global structure preservation than a VAE model, as tested on 5 single-cell transcriptomic datasets. Compared to previously published alternative triplet-based loss functions proposed for VAEs [14, 1], the group loss does not require computation of a k-nearest-neighbour graph of the input data.

## 2 Method

We describe the methodology used to create LD embeddings of HD data and to evaluate them.

### 2.1 Model training

In an autoencoder architecture, HD input $\mathbf{X_{n\times d}}\in\mathcal{X}$ is encoded as an LD representation $\mathbf{Z_{n\times w}}\in\mathcal{Z}$ (where $\mathcal{X}=\mathbb{R}^{d},\mathcal{Z}=\mathbb{R}^{w},w<d$) and reconstructed as an approximation $\mathbf{\hat{X}_{n\times d}}$.
The encoder $E_{\Phi}:\mathcal{X}\rightarrow\mathcal{Z}$ transforms $\mathbf{X}$ to $\mathbf{Z}$, and the decoder $D_{\Theta}:\mathcal{Z}\rightarrow\mathcal{X}$ transforms $\mathbf{Z}$ to $\mathbf{\hat{X}}$. Parameters of the AE (encoder weights $\Phi$ and decoder weights $\Theta$) are learned so as to reduce a reconstruction loss. In our baseline VAE model, we use the mean square error (MSE) as reconstruction loss. In a VAE, the latent representation $\mathbf{Z}$ is sampled from a distribution $\mathcal{D}$ in latent space. The encoder and decoder networks are probabilistic, and an extra term quantifying the Kullback-Leibler (KL) divergence between $\mathcal{D}$ and a latent prior (isotropic Gaussian distribution) is used as an additional loss term during training.

In contrast, our GroupEnc model consists only of a variational encoder and sampler (without a decoder), trained to minimise a group loss along with the KL divergence from the prior. The group loss adapts the notion of the quartet loss function, computed using quartet-normalised distances between original and embedded points, from SQuadMDS [8]. The normalised distances are used to calculate a differentiable cost function for each randomly drawn quartet of points. We denote Euclidean distances between any HD input points or LD embedded points indexed $i$ and $j$ as $\delta_{ij}$ and $d_{ij}$, respectively. To compute a group-normalised distance between two points in the same group (for a quartet, quintet, sextet, etc.), we use all pairwise distances within that group. For HD and LD points, respectively, we get the group-normalised distance formulas

$\delta^{\mathrm{norm}}_{ij}=\frac{\delta_{ij}}{\sum^{\gamma-1}_{a=1}\sum^{\gamma}_{b=a+1}\delta_{ab}}$ (1)

$d^{\mathrm{norm}}_{ij}=\frac{d_{ij}}{\sum^{\gamma-1}_{a=1}\sum^{\gamma}_{b=a+1}d_{ab}}$ (2)

where $\gamma$ is the number of points in each group. The difference in group-normalised distances in HD and LD, which ought to be minimised, is used to calculate the cost function

$g=\sum\limits^{\gamma-1}_{a=1}\sum\limits^{\gamma}_{b=a+1}(\delta^{\mathrm{norm}}_{ab}-d^{\mathrm{norm}}_{ab})^{2}$ (3)

of a group (a group cost). This is visualised in Figure 1; a small code sketch of this cost is also given after Figure 2 below.

The GroupEnc model is trained on shuffled batches of input data using the Adam optimiser. Partitioning of points into groups is done dynamically at the batch level, and the size of the groups ($\gamma$) is specified as a hyperparameter. The group loss value for each point $i$ in the training batch is assigned as the cost value of the group for which $i$ is the first point, and the group loss term per batch is averaged across the batch. Therefore, GroupEnc imposes a constraint on the latent distribution $\mathcal{D}$, instead of using a reconstruction loss, to compute weight updates.

Figure 1: A schematic illustration of GroupEnc training and inference is shown. In training, a batch of high-dimensional points is used as input to the encoder. Parameters of the model are adjusted in each pass via back-propagation to minimise the group loss, which quantifies divergence in relative distances within randomly assigned groups of points in a batch of input points versus its embedding. The trained encoder then outputs a low-dimensional embedding of the input.

### 2.2 Dimensionality reduction quality assessment

To assess structure preservation (SP) in an embedding, we use the $R_{\mathrm{NX}}$ curve, a previously proposed quality assessment metric [9].
This curve quantifies the overlap between the ordering of neighbours of a reference point in HD versus in LD, for all neighbourhood sizes from $1$ to $(N-1)$ (with sample size $N$), averaged across all reference points. To compute this, we denote neighbourhood ranks of a point $j$ (neighbour) with respect to a point $i$ (reference point) as $\rho_{ij}$ and $r_{ij}$ in HD and in LD, respectively. Non-self neighbourhoods of HD and LD points, respectively, are then denoted as $\nu_{i}^{K}=\\{j:1\leq\rho_{ij}\leq K\\}$ and $n_{i}^{K}=\\{j:1\leq r_{ij}\leq K\\}$ for neighbourhood size $K$. For dataset size $N$, the $Q_{\mathrm{NX}}$ value for a specific value of $K$ is calculated as

$Q_{\mathrm{NX}}(K)=\frac{1}{KN}\sum\limits_{i=1}^{N}|\nu_{i}^{K}\cap n_{i}^{K}|$ (4)

To obtain the full $Q_{\mathrm{NX}}$ curve, we calculate this score for $K$ from 1 to $(N-1)$. It turns out that a random embedding results in $Q_{\mathrm{NX}}(K)\approx\frac{K}{N-1}$. $R_{\mathrm{NX}}$, as opposed to $Q_{\mathrm{NX}}$, corrects for chance, and is computed as

$R_{\mathrm{NX}}(K)=\frac{(N-1)Q_{\mathrm{NX}}(K)-K}{N-1-K}$ (5)

We quantify SP as the area-under-curve (AUC) for an $R_{\mathrm{NX}}$ curve of an embedding of interest. Specifically, Local SP is the AUC of the curve where neighbourhood size ($K$) is re-scaled logarithmically ($\ln K$ is used), to up-weight local neighbourhoods while not setting a hard cut-off for local versus global. Moreover, Global SP is the AUC with a linear scale for $K$, therefore without the emphasis on local neighbourhoods of the reference points. In both cases, a higher SP score is better. A brute-force code sketch of both scores is given after Figure 2 below.

## 3 Results

We compare a VAE (trained to minimise reconstruction error and KL-divergence from the prior) and a GroupEnc model (encoder-only, trained to minimise the group loss). In both cases, the encoder module consisted of layers sized $(32,64,128,32)$, and the VAE decoder module consisted of layers sized $(32,128,64,32)$; the Adam optimiser with a learning rate of $0.001$ was used for $500$ epochs of training with a batch size of $512$. We tested structure preservation (SP) in embeddings of dimensionality 2, 5 and 10, with different values of the hyperparameter $\gamma$ (group size), looking at Local and Global SP separately.

We use 5 single-cell RNA-sequencing (scRNA-seq) datasets [10, 5, 13, 18, 19], comprising high-dimensional feature vectors describing the identity of single biological cells in a tissue sample of interest. These features are levels of transcription of labelled genes. The datasets are listed in Table 1. Local SP and Global SP scores are summarised in Figure 2 and shown in Tables 2 and 3 in full. Time required to train each model can be found in Table 4; a single node of a GPU cluster (16-core Intel Xeon Gold 6242 processor with NVIDIA Volta V100 GPU) with 16 GB of usable RAM was made available each time. Five runs (with different random seeds) were used to collect the scores.

Figure 2: Boxplots of Local and Global SP scores across 5 runs for embeddings of each dataset, obtained from the VAE and GroupEnc models for different group size ($\gamma$) values (written as ‘Group($\gamma$)’). Subplots are sorted by target dimensionalities (columns).
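As noted in Sec. 2.1, the group cost is simple to state in code. Below is a minimal, unbatched NumPy sketch of equations 1–3; the function and variable names are ours, and the actual implementation (Sec. 5) computes this batched in TensorFlow on GPU, with the KL term added.

```python
import numpy as np

def group_cost(x_hd, x_ld):
    """Group cost g of equation 3 for one group of gamma points.

    x_hd: (gamma, d) array of HD coordinates; x_ld: (gamma, w) array of
    the same points in LD. Distances are normalised within the group
    (equations 1 and 2) before squared differences are summed.
    """
    def normalised_pairwise(x):
        gamma = len(x)
        d = np.array([np.linalg.norm(x[a] - x[b])
                      for a in range(gamma - 1)
                      for b in range(a + 1, gamma)])
        return d / d.sum()  # group-normalised distances

    delta_norm = normalised_pairwise(x_hd)
    d_norm = normalised_pairwise(x_ld)
    return float(np.sum((delta_norm - d_norm) ** 2))

# Toy usage: one random quartet (gamma = 4) in 50-D with a 2-D embedding.
rng = np.random.default_rng(0)
print(group_cost(rng.normal(size=(4, 50)), rng.normal(size=(4, 2))))
```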
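The SP scores of Sec. 2.2 can likewise be computed by brute force, as sketched below. This is an $O(N^{2})$-memory illustration intended for small $N$ only; the trapezoidal AUC normalisation is our reading of the metric, and faster formulations are discussed in [9]. $R_{\mathrm{NX}}$ is evaluated for $K=1,\dots,N-2$, since the denominator in equation 5 vanishes at $K=N-1$.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rnx_curve(x_hd, x_ld):
    """R_NX(K) for K = 1..N-2, following equations 4 and 5 (brute force)."""
    n = len(x_hd)
    # Neighbour ranks per reference point; column 0 (self) is dropped.
    nbr_hd = np.argsort(squareform(pdist(x_hd)), axis=1)[:, 1:]
    nbr_ld = np.argsort(squareform(pdist(x_ld)), axis=1)[:, 1:]
    rnx = []
    for k in range(1, n - 1):
        overlap = sum(len(np.intersect1d(nbr_hd[i, :k], nbr_ld[i, :k]))
                      for i in range(n))
        qnx = overlap / (k * n)                     # equation 4
        rnx.append(((n - 1) * qnx - k) / (n - 1 - k))  # equation 5
    return np.array(rnx)

def sp_scores(rnx):
    """(Local SP, Global SP): AUCs over log-scaled and linear K."""
    k = np.arange(1, len(rnx) + 1)
    local = np.trapz(rnx, np.log(k)) / (np.log(k[-1]) - np.log(k[0]))
    glob = np.trapz(rnx, k) / (k[-1] - k[0])
    return local, glob
```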
| Dataset name | Biological source | Feature set | Number of samples |
|---|---|---|---|
| Ziegler | human nasopharynx | 32,871 genes | 32,588 cells |
| Shekhar | mouse retina | 24,904 genes | 44,994 cells |
| Ximerakis | mouse brain | 14,699 genes | 37,069 cells |
| Farrell | zebrafish embryos | 17,239 genes | 38,731 cells |
| Liu | mouse brain | 17,482 genes | 26,187 cells |

Table 1: Datasets used for the DR benchmark and their brief descriptions.

| Dim | Model | Liu | Farrell | Shekhar | Ximerakis | Ziegler |
|---|---|---|---|---|---|---|
| $10d$ | VAE | $0.506\pm0.005$ | $0.556\pm0.011$ | $0.407\pm0.006$ | $0.487\pm0.009$ | $0.551\pm0.006$ |
| | GroupEnc ($\gamma=4$) | $0.453\pm0.006$ | $0.518\pm0.003$ | $0.392\pm0.002$ | $0.440\pm0.002$ | $0.488\pm0.007$ |
| | GroupEnc ($\gamma=5$) | $0.456\pm0.006$ | $0.513\pm0.011$ | $0.392\pm0.002$ | $0.439\pm0.005$ | $0.490\pm0.007$ |
| | GroupEnc ($\gamma=6$) | $0.452\pm0.005$ | $0.518\pm0.004$ | $0.391\pm0.003$ | $0.439\pm0.002$ | $0.489\pm0.010$ |
| $5d$ | VAE | $0.384\pm0.007$ | $0.465\pm0.006$ | $0.354\pm0.004$ | $0.431\pm0.005$ | $0.448\pm0.007$ |
| | GroupEnc ($\gamma=4$) | $0.318\pm0.002$ | $0.442\pm0.007$ | $0.334\pm0.002$ | $0.414\pm0.004$ | $0.443\pm0.003$ |
| | GroupEnc ($\gamma=5$) | $0.321\pm0.005$ | $0.447\pm0.005$ | $0.335\pm0.003$ | $0.411\pm0.002$ | $0.444\pm0.004$ |
| | GroupEnc ($\gamma=6$) | $0.316\pm0.003$ | $0.444\pm0.004$ | $0.334\pm0.002$ | $0.412\pm0.005$ | $0.443\pm0.002$ |
| $2d$ | VAE | $0.262\pm0.015$ | $0.281\pm0.009$ | $0.256\pm0.005$ | $0.304\pm0.009$ | $0.285\pm0.010$ |
| | GroupEnc ($\gamma=4$) | $0.187\pm0.005$ | $0.260\pm0.005$ | $0.223\pm0.003$ | $0.304\pm0.005$ | $0.278\pm0.001$ |
| | GroupEnc ($\gamma=5$) | $0.185\pm0.005$ | $0.262\pm0.005$ | $0.224\pm0.002$ | $0.305\pm0.003$ | $0.278\pm0.001$ |
| | GroupEnc ($\gamma=6$) | $0.183\pm0.006$ | $0.260\pm0.003$ | $0.224\pm0.001$ | $0.304\pm0.010$ | $0.277\pm0.001$ |

Table 2: Local SP for 5 datasets, 3 embedding dimensionalities (‘Dim’) and 4 models (VAE and GroupEnc with group size $\gamma$ of 4, 5 and 6). Mean and standard deviation are shown.
| Dim | Model | Liu | Farrell | Shekhar | Ximerakis | Ziegler |
|---|---|---|---|---|---|---|
| $10d$ | VAE | $0.510\pm0.016$ | $0.690\pm0.019$ | $0.670\pm0.012$ | $0.632\pm0.015$ | $0.709\pm0.014$ |
| | GroupEnc ($\gamma=4$) | $0.681\pm0.002$ | $0.847\pm0.005$ | $0.793\pm0.005$ | $0.820\pm0.005$ | $0.840\pm0.007$ |
| | GroupEnc ($\gamma=5$) | $0.685\pm0.009$ | $0.844\pm0.005$ | $0.796\pm0.004$ | $0.826\pm0.006$ | $0.841\pm0.006$ |
| | GroupEnc ($\gamma=6$) | $0.686\pm0.005$ | $0.845\pm0.004$ | $0.793\pm0.005$ | $0.828\pm0.004$ | $0.837\pm0.014$ |
| $5d$ | VAE | $0.401\pm0.032$ | $0.633\pm0.012$ | $0.580\pm0.016$ | $0.602\pm0.024$ | $0.657\pm0.020$ |
| | GroupEnc ($\gamma=4$) | $0.577\pm0.005$ | $0.788\pm0.005$ | $0.708\pm0.003$ | $0.739\pm0.005$ | $0.793\pm0.002$ |
| | GroupEnc ($\gamma=5$) | $0.576\pm0.008$ | $0.787\pm0.003$ | $0.710\pm0.003$ | $0.735\pm0.004$ | $0.795\pm0.005$ |
| | GroupEnc ($\gamma=6$) | $0.575\pm0.007$ | $0.788\pm0.004$ | $0.710\pm0.003$ | $0.738\pm0.007$ | $0.793\pm0.005$ |
| $2d$ | VAE | $0.355\pm0.045$ | $0.512\pm0.036$ | $0.465\pm0.029$ | $0.504\pm0.028$ | $0.552\pm0.035$ |
| | GroupEnc ($\gamma=4$) | $0.474\pm0.009$ | $0.703\pm0.007$ | $0.599\pm0.003$ | $0.637\pm0.013$ | $0.687\pm0.002$ |
| | GroupEnc ($\gamma=5$) | $0.469\pm0.012$ | $0.701\pm0.007$ | $0.599\pm0.005$ | $0.614\pm0.016$ | $0.689\pm0.001$ |
| | GroupEnc ($\gamma=6$) | $0.466\pm0.013$ | $0.704\pm0.003$ | $0.598\pm0.003$ | $0.622\pm0.028$ | $0.686\pm0.004$ |

Table 3: Global SP for 5 datasets, 3 embedding dimensionalities (‘Dim’) and 4 models (VAE and GroupEnc with group size $\gamma$ of 4, 5 and 6). Mean and standard deviation are shown.
| Dim | Model | Campbell | Farrell | Shekhar | Ximerakis | Ziegler |
|---|---|---|---|---|---|---|
| $10d$ | VAE | $89.7\pm9.4$ | $126.2\pm15.9$ | $138.2\pm15.8$ | $116.4\pm11.2$ | $103.9\pm11.3$ |
| | GroupEnc ($\gamma=4$) | $151.6\pm18.2$ | $221.8\pm19.1$ | $263.0\pm35.2$ | $211.6\pm20.3$ | $187.9\pm24.0$ |
| | GroupEnc ($\gamma=5$) | $150.2\pm15.5$ | $228.7\pm27.0$ | $252.1\pm25.6$ | $222.9\pm23.2$ | $172.9\pm5.3$ |
| | GroupEnc ($\gamma=6$) | $156.7\pm20.0$ | $221.9\pm23.8$ | $252.6\pm28.6$ | $213.1\pm21.4$ | $189.5\pm22.1$ |
| $5d$ | VAE | $78.9\pm3.5$ | $121.9\pm10.8$ | $137.8\pm15.4$ | $120.1\pm12.9$ | $100.2\pm6.7$ |
| | GroupEnc ($\gamma=4$) | $155.9\pm20.0$ | $227.2\pm29.2$ | $258.6\pm34.7$ | $207.2\pm24.0$ | $179.9\pm17.4$ |
| | GroupEnc ($\gamma=5$) | $148.5\pm17.1$ | $226.7\pm28.9$ | $249.9\pm30.2$ | $207.8\pm24.4$ | $177.3\pm18.3$ |
| | GroupEnc ($\gamma=6$) | $150.0\pm21.0$ | $227.2\pm28.2$ | $251.3\pm26.4$ | $218.1\pm27.8$ | $178.1\pm18.1$ |
| $2d$ | VAE | $85.4\pm1.6$ | $123.3\pm10.7$ | $149.5\pm12.9$ | $117.1\pm12.2$ | $100.1\pm11.1$ |
| | GroupEnc ($\gamma=4$) | $151.2\pm2.2$ | $226.8\pm27.3$ | $282.0\pm27.1$ | $209.0\pm21.3$ | $181.6\pm18.0$ |
| | GroupEnc ($\gamma=5$) | $159.7\pm16.0$ | $217.3\pm24.0$ | $255.6\pm23.5$ | $209.0\pm22.7$ | $179.1\pm17.3$ |
| | GroupEnc ($\gamma=6$) | $154.4\pm13.7$ | $218.0\pm24.9$ | $280.1\pm31.2$ | $218.0\pm30.0$ | $180.7\pm16.4$ |

Table 4: Model training time in seconds across 5 datasets, 3 embedding dimensionalities (‘Dim’) and 4 models (VAE and GroupEnc with group size $\gamma$ of 4, 5 and 6). Mean and standard deviation are shown.

For the Farrell dataset, we also plot the 2-dimensional embeddings from both models and label individual embedded points using annotation provided by the authors (Figure 3). The labels are ordered and correspond to developmental stages of cells in zebrafish embryogenesis. This shows that the developmental gradient is more apparent in the GroupEnc embedding.

Figure 3: 2-dimensional embeddings of the Farrell dataset obtained using the VAE and GroupEnc ($\gamma=4$), with colour labels according to labelled developmental stages of embedded cells.

The results show that, intuitively, both local and global structures in terms of neighbour ranks are preserved worse with decreased dimensionality of the embedding, and this holds across all tested datasets and models (VAE and GroupEnc with group sizes of 4, 5 and 6). Furthermore, the VAE model generally outperforms the GroupEnc models when it comes to Local SP. However, we see consistently better Global SP for GroupEnc, concordant with the scale-agnostic nature of the group loss that GroupEnc optimises.
Differences between GroupEnc models with different group sizes are not significant.

## 4 Discussion

Faithful reconstructions of global relationships in lower-dimensional embeddings are of interest for purposes of visualisation, as well as for downstream processing of data. We set out to design a deep learning model that uses a loss function for scale-agnostic preservation of randomly sampled structures [8]. We have done this to demonstrate the improvement in global structure preservation (versus a VAE) achieved via this loss function, and to show that it can be used in a deep learning context, which has the advantage of providing a parametric model that can be trained on a subset of data and used to transform new samples.

The use of geometric priors (similarity matrices, topological priors) with VAEs for dimensionality reduction [16, 7] is another promising avenue of research in analyses of high-dimensional datasets. With data that is high-dimensional and noisy by its nature (of which biological single-cell data is an instance), feature engineering by means of constructing such lower-dimensional embeddings can help extract more salient information about the differential expression of genes in cells, continuous developmental gradients or batch effects between cohorts of samples. In general, preserving global structures, as opposed to constraining the optimisation process to local structure preservation (as in t-SNE [15] or UMAP [11]), can prove beneficial for analysing hierarchical relationships, developmental gradients and pathways. Our future work in dimensionality reduction of biological data will focus on effective reconstruction of trajectories, tackling noise and an extended range of evaluation metrics, both unsupervised and supervised.

## 5 Code availability

We make a TensorFlow implementation of GroupEnc, including Bash scripts for generating benchmarking jobs (on Slurm) with custom datasets, available at github.com/saeyslab/GroupEnc.

### 5.1 Data availability

We downloaded the Shekhar and Liu datasets via the scRNAseq R package [12], using the functions ShekharRetinaData and LiuBrainData, and converted them to AnnData objects using the scDIOR [6] package for R/Python interoperability. Other datasets come from the Single Cell Portal (https://singlecell.broadinstitute.org/single_cell) and are accessible using the following accession numbers.

* • Farrell: SCP162
* • Ximerakis: SCP263
* • Ziegler: SCP1289
* • Liu: SCP2161

### 5.2 Data pre-processing

We used the scanpy package version 1.9.1 [17] for data pre-processing. We applied the following Python code for scaling, normalisation and principal component analysis (PCA) prior to running the DR algorithms:

```python
import scanpy as sc

# X is the count matrix (numpy.ndarray)
hd = sc.pp.normalize_per_cell(X, copy=True)
hd = sc.pp.log1p(hd, copy=True)
hd = sc.pp.scale(hd, max_value=10, copy=True)
data = sc.tl.pca(hd, svd_solver='arpack', n_comps=50, copy=True)
```

The Farrell dataset was an exception, where already scaled data was used, and only the PCA step remained.

## References

* [1] E. Amid and M. K. Warmuth. TriMap: Large-scale Dimensionality Reduction Using Triplets, Mar. 2022. arXiv:1910.00204 [cs, stat].
* [2] M. Amodio, D. van Dijk, K. Srinivasan, W. S. Chen, H. Mohsen, K. R. Moon, A. Campbell, Y. Zhao, X. Wang, M. Venkataswamy, A. Desai, V. Ravi, P. Kumar, R. Montgomery, G. Wolf, and S. Krishnaswamy. Exploring single-cell data with deep multitasking neural networks. Nature Methods, 16(11):1139–1145, Nov. 2019.
* [3] L. Chen, W. Wang, Y.
Zhai, and M. Deng. Deep soft K-means clustering with self-training for single-cell RNA sequence data. NAR Genomics and Bioinformatics, 2(2):lqaa039, June 2020. * [4] J. Ding, A. Condon, and S. P. Shah. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models. Nature Communications, 9(1):2002, May 2018. * [5] J. A. Farrell, Y. Wang, S. J. Riesenfeld, K. Shekhar, A. Regev, and A. F. Schier. Single-cell reconstruction of developmental trajectories during zebrafish embryogenesis. Science, 360(6392):eaar3131, June 2018. * [6] H. Feng, L. Lin, and J. Chen. scDIOR: single cell RNA-seq data IO software. BMC Bioinformatics, 23(1):16, Dec. 2022. * [7] A. Kopf, V. Fortuin, V. R. Somnath, and M. Claassen. Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data. PLOS Computational Biology, 17(6):e1009086, June 2021. * [8] P. Lambert, C. de Bodt, M. Verleysen, and J. A. Lee. Stochastic quartet approach for fast multidimensional scaling. pages 417–422, 2021. * [9] J. A. Lee, D. H. Peluffo-Ordóñez, and M. Verleysen. Multi-scale similarities in stochastic neighbour embedding: Reducing dimensionality while preserving both local and global structure. Neurocomputing, 169:246–261, Dec. 2015. * [10] Y. Liu, E. L. Savier, V. J. DePiero, C. Chen, D. C. Schwalbe, R.-J. Abraham-Fan, H. Chen, J. N. Campbell, and J. Cang. Mapping visual functions onto molecular cell types in the mouse superior colliculus. Neuron, 111(12):1876–1886.e5, 2023. * [11] L. McInnes, J. Healy, and J. Melville. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, Sept. 2020. arXiv:1802.03426 [cs, stat]. * [12] D. Risso and M. Cole. scRNAseq: Collection of Public Single-Cell RNA-Seq Datasets, 2021. R package version 2.8.0. * [13] K. Shekhar, S. W. Lapan, I. E. Whitney, N. M. Tran, E. Z. Macosko, M. Kowalczyk, X. Adiconis, J. Z. Levin, J. Nemesh, M. Goldman, S. A. McCarroll, C. L. Cepko, A. Regev, and J. R. Sanes. Comprehensive Classification of Retinal Bipolar Neurons by Single-Cell Transcriptomics. Cell, 166(5):1308–1323.e30, Aug. 2016. * [14] B. Szubert, J. E. Cole, C. Monaco, and I. Drozdov. Structure-preserving visualisation of high dimensional single-cell datasets. Scientific Reports, 9(1):8914, June 2019. * [15] L. van der Maaten and G. Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605, 2008. * [16] R. Vandaele, B. Kang, J. Lijffijt, T. De Bie, and Y. Saeys. Topologically Regularized Data Embeddings, Mar. 2022. arXiv:2110.09193 [cs, stat]. * [17] F. A. Wolf, P. Angerer, and F. J. Theis. SCANPY: large-scale single-cell gene expression data analysis. Genome Biology, 19(1):15, Dec. 2018. * [18] M. Ximerakis, S. L. Lipnick, B. T. Innes, S. K. Simmons, X. Adiconis, D. Dionne, B. A. Mayweather, L. Nguyen, Z. Niziolek, C. Ozek, V. L. Butty, R. Isserlin, S. M. Buchanan, S. S. Levine, A. Regev, G. D. Bader, J. Z. Levin, and L. L. Rubin. Single-cell transcriptomic profiling of the aging mouse brain. Nature Neuroscience, 22(10):1696–1708, Oct. 2019. * [19] C. G. Ziegler, V. N. Miao, A. H. Owings, A. W. Navia, Y. Tang, J. D. Bromley, P. Lotfy, M. Sloan, H. Laird, H. B. Williams, M. George, R. S. Drake, T. Christian, A. Parker, C. B. Sindel, M. W. Burger, Y. Pride, M. Hasan, G. E. Abraham, M. Senitko, T. O. Robinson, A. K. Shalek, S. C. Glover, B. H. Horwitz, and J. Ordovas-Montanes. Impaired local intrinsic immunity to SARS-CoV-2 infection in severe COVID-19. 
Cell, 184(18):4713–4733.e22, 2021.
# Hyperbolic L-space knots and their formal semigroups

Masakazu Teragaito

Department of Mathematics and Mathematics Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-hiroshima, Japan 739-8524. <EMAIL_ADDRESS>

###### Abstract.

For an L-space knot, the formal semigroup is defined from its Alexander polynomial. It is not necessarily a semigroup; that is, it may not be closed under addition. There exists an infinite family of hyperbolic L-space knots whose formal semigroups are semigroups generated by three elements. In this paper, we give the first infinite family of hyperbolic L-space knots whose formal semigroups are semigroups generated by five elements.

###### 2020 Mathematics Subject Classification: Primary 57K10

The author has been supported by JSPS KAKENHI Grant Number 20K03587.

## 1. Introduction

A knot in the $3$–sphere is called an L–space knot if it admits a Dehn surgery yielding an L–space. An L–space $Y$ is a rational homology $3$–sphere with the simplest Heegaard Floer homology, that is, $\mathrm{rank}\,\widehat{HF}(Y)=|H_{1}(Y;\mathbb{Z})|$. Typical examples are the knots admitting lens space surgeries, such as torus knots. There are several known constraints for L–space knots [25, 26]. Such a knot $K$ is fibered, and its Alexander polynomial $\Delta_{K}(t)$ has the form

(1.1) $\Delta_{K}(t)=1-t^{a_{1}}+t^{a_{2}}-\dots+t^{a_{2k}},$

where $0<a_{1}<a_{2}<\dots<a_{2k}$ for some $k$, and $a_{2k}$ equals twice the knot genus. Also, it is known that $a_{1}=1$ by [11]. In general, it seems that little has been clarified about the distribution of the $a_{i}$. See [32, 33] for the lens space surgery case.

Wang [35] introduced the formal semigroup $\mathcal{S}$ for an L–space knot $K$. It is the set of nonnegative integers defined from the formal power series expansion

$\frac{\Delta_{K}(t)}{1-t}=\Sigma_{s\in\mathcal{S}}t^{s}\in\mathbb{Z}[[t]].$

The form of (1.1) implies that $0\in\mathcal{S}$. Hence, if a formal semigroup is a semigroup, then it is a monoid. Nevertheless, we use the term “formal semigroup” in deference to previous research. Essentially, [6, 7] discussed the same notion earlier. In [29], it is called the support of the Turaev torsion. We remark that $\Delta_{K}(t)/(t-1)$ is usually called the Reidemeister–Milnor torsion or Turaev torsion in the literature.

For example, a torus knot of type $(2,7)$ has the Alexander polynomial $1-t+t^{2}-t^{3}+t^{4}-t^{5}+t^{6}$. Hence $\mathcal{S}=\\{0,2,4\\}\cup\mathbb{Z}_{\geq 6}$. Another typical example is the $(-2,3,7)$–pretzel knot. Its Alexander polynomial is $1-t+t^{3}-t^{4}+t^{5}-t^{6}+t^{7}-t^{9}+t^{10}$, so $\mathcal{S}=\\{0,3,5,7,8\\}\cup\mathbb{Z}_{\geq 10}$. (In this paper, we use the notation $\mathbb{Z}_{\geq m}$ for the set of integers greater than or equal to $m$.) In general, a formal semigroup is a subset of $\mathbb{Z}_{\geq 0}$, and we use addition as the binary operation. As seen from the above example, a formal semigroup is not necessarily closed under addition.

It is an easy and known fact that a torus knot of type $(p,q)\ (p,q>0)$ has the formal semigroup $\langle p,q\rangle\ (=\\{ap+bq\mid a,b\geq 0\\})$, which is a semigroup (see [7, Example 1.10]). Also, the formal semigroup of an iterated torus L–space knot is a semigroup [35]. However, this is not the case for the $(-2,3,7)$–pretzel knot, because $3\in\mathcal{S}$ but $6\not\in\mathcal{S}$. More generally, it is straightforward to verify that any hyperbolic Montesinos L–space knot has a formal semigroup which is not a semigroup.
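Since $1/(1-t)=\sum_{k\geq 0}t^{k}$, the coefficient of $t^{s}$ in $\Delta_{K}(t)/(1-t)$ is the partial sum of the coefficients of $\Delta_{K}(t)$ up to degree $s$, so the examples above are easy to verify mechanically. A short Python sketch (the function name is ours):

```python
from itertools import accumulate

def formal_semigroup(delta_coeffs, upto):
    """Exponents s <= upto with coefficient 1 in Delta_K(t)/(1-t).

    delta_coeffs: coefficients of Delta_K(t), constant term first.
    Multiplying by 1/(1-t) = 1 + t + t^2 + ... turns coefficients into
    partial sums; for an L-space knot each partial sum is 0 or 1.
    """
    c = list(accumulate(delta_coeffs))
    c += [c[-1]] * max(0, upto + 1 - len(c))  # Delta_K(1) = 1 beyond a_2k
    return [s for s, v in enumerate(c[:upto + 1]) if v == 1]

# Torus knot T(2,7): Delta = 1 - t + t^2 - t^3 + t^4 - t^5 + t^6
print(formal_semigroup([1, -1, 1, -1, 1, -1, 1], 8))
# -> [0, 2, 4, 6, 7, 8], i.e. {0,2,4} together with Z_{>=6}

# (-2,3,7)-pretzel: Delta = 1 - t + t^3 - t^4 + t^5 - t^6 + t^7 - t^9 + t^10
print(formal_semigroup([1, -1, 0, 1, -1, 1, -1, 1, 0, -1, 1], 12))
# -> [0, 3, 5, 7, 8, 10, 11, 12]: contains 3 but not 6, so not a semigroup
```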
Indeed, any such knot is known to be the $(-2,3,2n+1)$–pretzel knot with $n\geq 3$ by [4], and its Alexander polynomial, given in [12], immediately implies that $3\in\mathcal{S}$ but $6\not\in\mathcal{S}$. Also, we checked that most hyperbolic Berge knots have formal semigroups which are not semigroups. Hence it is fair to say that the formal semigroup of a hyperbolic L–space knot is rarely an actual semigroup.

In [35, Question 2.8], Wang asked if there exists an L–space knot which is not an iterated torus knot and whose formal semigroup is a semigroup. As explained in [2], the author found two hyperbolic L–space knots K8_201 and K9_449 whose formal semigroups are actual semigroups. Indeed, they are the only such knots among the $630$ hyperbolic L–space knots listed by Dunfield. (This list is found in [1]. There were two unclassified knots in Dunfield’s data, but they are confirmed to be L–space knots by [3]. The formal semigroups of these two are not semigroups.) On the other hand, Baker and Kegel [2] show that K9_449 is the only knot in Dunfield’s list that is not the closure of a positive braid, and they generalize it to an infinite family of hyperbolic L–space knots $\\{K_{n}\\}$, where $K_{1}$ is K9_449. The author also found that their knots give the first infinite family of hyperbolic L–space knots whose formal semigroups are semigroups. More precisely, the formal semigroup of $K_{n}$ is $\langle 4,4n+2,4n+5\rangle$. See [2] for more details.

For a finite set of positive integers $\\{p_{1},p_{2},\dots,p_{k}\\}$, $\langle p_{1},p_{2},\dots,p_{k}\rangle=\\{a_{1}p_{1}+a_{2}p_{2}+\dots+a_{k}p_{k}\mid a_{i}\in\mathbb{Z}_{\geq 0}\\}$ is a semigroup under addition. Since the coefficients $a_{i}$ may be zero, the identity element $0$ automatically belongs to such a semigroup, and it is excluded from the generating set. For such a semigroup, the rank is defined to be the minimal cardinality of a generating set. Thus, the formal semigroup of the hyperbolic L–space knot $K_{n}$ in [2] is a semigroup of rank three. On the other hand, the cabling formula of [35] implies that the semigroup of an iterated torus L–space knot can have arbitrarily high rank. Then it is natural to ask the following.

###### Question 1.1.

Does there exist a hyperbolic L–space knot whose formal semigroup is a semigroup with arbitrarily high rank?

The purpose of the present paper is to construct a new family of hyperbolic L–space knots whose formal semigroups are semigroups of rank $5$.

###### Theorem 1.2.

There exists an infinite family of hyperbolic L–space knots whose formal semigroups are semigroups of rank $5$.

The formal semigroup of an L–space knot is related to its knot Floer complex (see [16]). However, an interpretation of closedness under addition for the formal semigroup is still missing.

## 2\. The family of hyperbolic L–space knots

For any integer $n\geq 1$, let $\beta_{n}$ be the $6$–braid defined as $\beta_{n}=(\sigma_{3}\sigma_{2}\sigma_{4}\sigma_{1}\sigma_{3}\sigma_{5}\sigma_{2}\sigma_{4}\sigma_{3})^{2n+1}\sigma_{3}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{2},$ where $\sigma_{i}$ is the standard generator in the $6$–strand braid group. See Fig. 1. Let $K_{n}$ be the knot obtained as the closure of $\beta_{n}$.

Figure 1. The braid $\beta_{n}$. The knot $K_{n}$ is the closure of $\beta_{n}$.
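The braid word $\beta_{n}$ can be generated programmatically; since the braid is positive, the word length alone determines the genus of its closure via the Euler characteristic computation carried out next. A small sketch (the helper name is ours, for illustration):

```python
def beta_word(n):
    """Braid word of beta_n as a list of generator indices sigma_i (i = 1..5)."""
    block = [3, 2, 4, 1, 3, 5, 2, 4, 3]      # (s3 s2 s4 s1 s3 s5 s2 s4 s3), repeated 2n+1 times
    tail = [3, 2, 1, 1, 2, 3, 2, 1, 2, 2]    # s3 s2 s1 s1 s2 s3 s2 s1 s2 s2
    return block * (2 * n + 1) + tail

for n in range(1, 6):
    c = len(beta_word(n))                    # number of crossings: 9(2n+1) + 10
    s = 6                                    # number of strands
    g = (c - s + 1) // 2                     # s - c = 1 - 2g for a positive braid closure
    assert c == 9 * (2 * n + 1) + 10 and g == 9 * n + 7
```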
Since $\beta_{n}$ is positive, $K_{n}$ is fibered ([31]), and its genus $g(K_{n})$ is $9n+7$ as seen from an Euler characteristic calculation $(\text{number of strands})-(\text{number of crossings})=6-(9(2n+1)+10)=1-2g(K_{n}).$ We remark that $\beta_{0}$ can be defined, but $K_{0}$ is the $(2,11)$–cable of the trefoil. Hence it is excluded from consideration.

Theorem 1.2 is a direct consequence of the next theorem.

###### Theorem 2.1.

For $n\geq 1$, let $K_{n}$ be the knot defined as above. Then we have:

* (1) $K_{n}$ is a hyperbolic L–space knot; and
* (2) its formal semigroup is a semigroup $\langle 6,6n+4,6n+8,12n+11,12n+15\rangle$.

The proof of Theorem 2.1 is divided among the remaining sections. In Section 3, we prove that $(18n+22)$–surgery on $K_{n}$ yields an L–space by using the Montesinos trick. In Section 4, we calculate the Alexander polynomial and the formal semigroup. Finally, we prove that $K_{n}$ is hyperbolic in Section 5.

## 3\. Montesinos trick and L–space surgery

In this section, we prove that each $K_{n}$ admits a Dehn surgery yielding an L–space by using the Montesinos trick. The Montesinos trick is a standard tool introduced in [21]. For a strongly invertible link $L$ in $S^{3}$, the manifold resulting from Dehn surgery on $L$ is described as the double branched cover of some link $\ell$. The surgery corresponds to a tangle replacement. For details, see [24, 36]. For $K_{n}$, Fig. 2 shows a surgery diagram of $K\cup C_{1}\cup C_{2}$, where performing $-1/n$–surgery on $C_{1}$ and $1/2n$–surgery on $C_{2}$ changes $K$ to $K_{n}$. This diagram can be changed into a strongly invertible position as illustrated in Fig. 3. We remark that $r$–surgery on $K$ corresponds to $(18n+r)$–surgery on $K_{n}$.

Figure 2. A surgery diagram $K\cup C_{1}\cup C_{2}$. Performing $-1/n$–surgery on $C_{1}$ and $1/2n$–surgery on $C_{2}$ changes $K$ to our $K_{n}$. Figure 3. A strongly invertible position of the link $K\cup C_{1}\cup C_{2}$ with an axis.

###### Theorem 3.1.

For $n\geq 1$, $(18n+22)$–surgery on $K_{n}$ yields an L–space.

###### Proof.

For the link in Fig. 3, take a quotient by the involution around the axis. Note that the surgery coefficient on $K$ is $22$. In the diagram of Fig. 3, the blackboard framing (or writhe) of $K$ is $19$. Hence, $22$–surgery on $K$ corresponds to the tangle replacement by the $3$–tangle (note $22-19=3$). Rational tangle replacements corresponding to the surgeries on $K,C_{1},C_{2}$ yield a link $\ell$, whose double branched cover is the result of $(18n+22)$–surgery on $K_{n}$. Figures 4, 5, 6 and 7 show the deformation of $\ell$.

Figure 4. A deformation of $\ell$, where a rectangle box means horizontal half-twists. The integer indicates the number of half-twists, which is right-handed if it is positive, or left-handed otherwise. Figure 5. Continued from Figure 4. Figure 6. Continued from Figure 5. Figure 7. Continued from Figure 6.

In the next lemma, we prove that the double branched cover of the link $\ell$ is an L–space, so the proof is complete. ∎

###### Lemma 3.2.

The double branched cover of the link $\ell$ is an L–space.

###### Proof.

First, $\det\ell=18n+22$. This is easily calculated from the Goeritz matrix for the checkerboard coloring of the diagram of $\ell$ in Fig. 7. Let $c$ be the crossing as indicated in Fig. 7. Then we have $\ell_{0}$ and $\ell_{\infty}$ by smoothing $c$ as shown in Fig. 8, and it is also a direct calculation to see $\det\ell_{0}=4n+5$ and $\det\ell_{\infty}=14n+17$ from Figs. 9 and 10.
Hence $\det\ell=\det\ell_{0}+\det\ell_{\infty}$ holds. By [27, Proposition 2.1], the triple of the double branched covers of $\ell$, $\ell_{0}$ and $\ell_{\infty}$ forms a triad. Thus [26, Proposition 2.1] claims that if the double branched covers of $\ell_{0}$ and $\ell_{\infty}$ are L–spaces, then so is the double branched cover of $\ell$.

Figure 8. Two resolutions. Figure 9. The resolution $\ell_{0}$ is a Montesinos knot. Figure 10. The resolution $\ell_{\infty}$ and its further resolutions $\ell_{\infty 0}$ and $\ell_{\infty\infty}$ at the crossing $d$.

We can see that $\ell_{0}$ is the Montesinos knot $M(-\frac{n+1}{2n+1},\frac{1}{2},\frac{2}{4n-1})$ as shown in Fig. 9. When $n=1$, $\ell_{0}$ is the knot 8_20, which is non-alternating, but quasi-alternating. Then the double branched cover is an L–space by [27]. If $n>1$, then $\ell_{0}$ is not quasi-alternating by [14]. Nevertheless, we can show the following.

###### Claim 3.3.

The double branched cover of $\ell_{0}$ is an L–space.

###### Proof of Claim 3.3.

The double branched cover of $\ell_{0}$ is the Seifert fibered space $M(0;-\frac{n+1}{2n+1},\frac{1}{2},\frac{2}{4n-1})=M(-1;\frac{n}{2n+1},\frac{1}{2},\frac{2}{4n-1})$. (We use the convention of [20], which is the same as [2].) Theorem 1.1 of [20] combined with [19] claims that such a Seifert fibered space $M(-1;r_{1},r_{2},r_{3})\ (1\geq r_{1}\geq r_{2}\geq r_{3}>0)$ is an L–space if and only if there are no relatively prime integers $m>a>0$ such that $mr_{1}<a<m(1-r_{2})$ and $mr_{3}<1$. First, assume $n=1$. Then $r_{1}=2/3$, $r_{2}=1/2$ and $r_{3}=1/3$, so $1-r_{2}=1/2$. Since $r_{1}>1-r_{2}$, there is no solution $m$ satisfying $mr_{1}<m(1-r_{2})$. Suppose $n\geq 2$. Then $r_{1}=1/2$, $r_{2}=n/(2n+1)$ and $r_{3}=2/(4n-1)$, so $1-r_{2}=(n+1)/(2n+1)$. We assume that there are coprime integers $m$ and $a$ such that $m/2<a<m(n+1)/(2n+1)$ and $2m/(4n-1)<1$. Then the first gives $0<2a-m<\frac{m}{2n+1},$ and the second gives $m<2n-1/2$. Combining these yields $0<2a-m<\frac{4n-1}{4n+2}<1.$ Since $a$ and $m$ are integers, this is a contradiction. ∎

For the other resolution $\ell_{\infty}$, we further perform two smoothings at the crossing $d$ as shown in Fig. 10. Then we have a link $\ell_{\infty 0}$ and a knot $\ell_{\infty\infty}$ as shown there. In particular, a direct calculation on Fig. 10 (or Figs. 11 and 12) shows that $\det\ell_{\infty 0}=4n+14$ and $\det\ell_{\infty\infty}=10n+3$. Hence $\det\ell_{\infty}=\det\ell_{\infty 0}+\det\ell_{\infty\infty}$ holds.

Figure 11. The resolution $\ell_{\infty 0}$ is a Montesinos link.

###### Claim 3.4.

The double branched covers of $\ell_{\infty 0}$ and $\ell_{\infty\infty}$ are L–spaces.

###### Proof of Claim 3.4.

The link $\ell_{\infty 0}$ is the Montesinos link $M(\frac{1}{2n+1},-\frac{1}{2},\frac{2n+3}{4n+8})$ as shown in Fig. 11. Although this is not quasi-alternating by [14], we can show that the double branched cover is an L–space as before. The double branched cover is the Seifert fibered space $M(0;\frac{1}{2n+1},-\frac{1}{2},\frac{2n+3}{4n+8})=M(-1;\frac{1}{2n+1},\frac{1}{2},\frac{2n+3}{4n+8})$. As in the proof of Claim 3.3, set $r_{1}=1/2$, $r_{2}=(2n+3)/(4n+8)$ and $r_{3}=1/(2n+1)$. Suppose that there are coprime integers $m$ and $a$ such that $mr_{1}<a<m(1-r_{2})$ and $mr_{3}<1$. Then $0<2a-m<\frac{m}{2n+4}<\frac{2n+1}{2n+4}<1,$ a contradiction. The link $\ell_{\infty\infty}$ is the Montesinos knot $M(-\frac{1}{2},\frac{n}{2n+1},\frac{5}{10n+7})$ as shown in Fig. 12, which again is not quasi-alternating by [14].
The double branched cover is the Seifert fibered space $M(0;-\frac{1}{2},\frac{n}{2n+1},\frac{5}{10n+7})=M(-1;\frac{1}{2},\frac{n}{2n+1},\frac{5}{10n+7})$.

Figure 12. The resolution $\ell_{\infty\infty}$ is a Montesinos knot.

Set $r_{1}=1/2$, $r_{2}=n/(2n+1)$ and $r_{3}=5/(10n+7)$. Suppose that there are coprime integers $m$ and $a$ as above. Then $0<2a-m<\frac{m}{2n+1}<\frac{10n+7}{10n+5}.$ Hence $2a-m=1$. Then $a<m(1-r_{2})$ implies $2n+1<m$. Combining with $mr_{3}<1$ gives $10n+5<5m<10n+7,$ which is impossible. ∎

By [26, Proposition 2.1] and [27, Proposition 2.1], the double branched cover of $\ell_{\infty}$ is an L–space, and hence so is that of $\ell$. ∎

We remark that computer experiments suggest that the knot $K_{n}$ does not admit a nontrivial exceptional surgery. This makes it difficult to find candidate slopes for L–space surgeries. Also, since $K_{n}$ has genus $9n+7$, if $r$–surgery on $K_{n}$ yields an L–space, then $r\geq 2(9n+7)-1=18n+13$ by [28]. In fact, it is known that any $r\ (\geq 18n+13)$ then yields an L–space. We selected the slope $18n+22$ for our proof, but another slope might give a simpler proof.

## 4\. Alexander polynomials and formal semigroups

In this section, we calculate the Alexander polynomial $\Delta_{K_{n}}(t)$ of $K_{n}$, and its formal semigroup. For the former, we mimic the argument in [2, 5].

###### Theorem 4.1.

The Alexander polynomial of $K_{n}$ is given as $\Delta_{K_{n}}(t)=t^{6n+4}+\sum_{i=0}^{n}(A_{1}+A_{2}+A_{3}+A_{4}+A_{5}),$ where $\displaystyle A_{1}$ $\displaystyle=t^{6(n-i)}-t^{6(n-i)+1},$ $\displaystyle A_{2}$ $\displaystyle=t^{6(n+i)+6}-t^{6(n+i)+5},$ $\displaystyle A_{3}$ $\displaystyle=t^{6(n+i)+8}-t^{6(n+i)+7},$ $\displaystyle A_{4}$ $\displaystyle=t^{6(n+i)+10}-t^{6(n+i)+9},$ $\displaystyle A_{5}$ $\displaystyle=t^{6(2n+i)+14}-t^{6(2n+i)+13}.$

###### Proof.

Let $L=K\cup C_{1}\cup C_{2}$ be the oriented link as shown in Fig. 13. We remark that this is modified from the link in Fig. 2 to reduce the number of crossings by changing the surgery coefficients. (This modification is not critical, because we only need to calculate the multivariable Alexander polynomial.)

Figure 13. A modified surgery diagram of $L=K\cup C_{1}\cup C_{2}$. This is simpler than the previous link in Figure 2.

It has the multivariable Alexander polynomial $\begin{split}\Delta_{L}(x,y,z)&=(x^{3}-1)(x^{5}y^{4}z^{2}+x^{3}y^{5}z^{2}-x^{3}y^{4}z^{2}+x^{2}y^{4}z^{2}+x^{4}y^{2}z+x^{3}y^{3}z\\\ &\quad-x^{3}y^{2}z-x^{2}y^{3}z+x^{2}y^{2}z+xy^{3}z+x^{3}y-x^{2}y+x^{2}+y).\end{split}$ (We used [15] for the calculation.) Performing $-1/(n+1)$–surgery on $C_{1}$ and $1/(2n+2)$–surgery on $C_{2}$ changes the link $K\cup C_{1}\cup C_{2}$ to $K_{n}\cup C_{1}^{n}\cup C_{2}^{n}$. Clearly, these links have homeomorphic exteriors. Hence the isomorphism induced by the homeomorphism on their homology groups relates the Alexander polynomials of the two links. (See [9, 23].) Let $\mu_{K}$, $\mu_{C_{1}}$ and $\mu_{C_{2}}$ be the homology classes of meridians of $K$, $C_{1}$ and $C_{2}$, respectively. We assume that each (oriented) meridian has linking number one with the corresponding knot. Moreover, let $\lambda_{K}$, $\lambda_{C_{1}}$ and $\lambda_{C_{2}}$ be the homology classes of their oriented longitudes. Similarly, we have homology classes of meridians, $\mu_{K_{n}}$, $\mu_{C_{1}^{n}}$ and $\mu_{C_{2}^{n}}$ of $K_{n}$, $C_{1}^{n}$ and $C_{2}^{n}$.
Then we have $\mu_{K_{n}}=\mu_{K},\quad\mu_{C_{1}^{n}}=\mu_{C_{1}}-(n+1)\lambda_{C_{1}},\quad\mu_{C_{2}^{n}}=\mu_{C_{2}}+(2n+2)\lambda_{C_{2}}.$ Since $\lambda_{C_{1}}=6\mu_{K}$ and $\lambda_{C_{2}}=3\mu_{K}$, $\mu_{C_{1}^{n}}=\mu_{C_{1}}-6(n+1)\mu_{K},\quad\mu_{C_{2}^{n}}=\mu_{C_{2}}+6(n+1)\mu_{K}.$ Thus $\mu_{K_{n}}=\mu_{K},\quad\mu_{C_{1}}=\mu_{C_{1}^{n}}+6(n+1)\mu_{K},\quad\mu_{C_{2}}=\mu_{C_{2}^{n}}-6(n+1)\mu_{K}.$ Hence we have the relation between the Alexander polynomials as (4.1) $\Delta_{K_{n}\cup C_{1}^{n}\cup C_{2}^{n}}(x,y,z)=\Delta_{L}(x,yx^{6(n+1)},zx^{-6(n+1)}).$ On the other hand, since $\mathrm{lk}(K_{n},C_{2}^{n})=\mathrm{lk}(K,C_{2})=3$, the Torres condition [34] gives $\displaystyle\Delta_{K_{n}\cup C_{1}^{n}\cup C_{2}^{n}}(x,y,1)$ $\displaystyle=(x^{3}y^{0}-1)\Delta_{K_{n}\cup C_{1}^{n}}(x,y)$ $\displaystyle=(x^{3}-1)\Delta_{K_{n}\cup C_{1}^{n}}(x,y).$ Similarly, since $\mathrm{lk}(K_{n},C_{1}^{n})=\mathrm{lk}(K,C_{1})=6$, $\Delta_{K_{n}\cup C_{1}^{n}}(x,1)=\frac{x^{6}-1}{x-1}\Delta_{K_{n}}(x).$ Thus, $\Delta_{K_{n}}(x)=\frac{x-1}{x^{6}-1}\Delta_{K_{n}\cup C_{1}^{n}}(x,1)=\frac{x-1}{(x^{6}-1)(x^{3}-1)}\Delta_{K_{n}\cup C_{1}^{n}\cup C_{2}^{n}}(x,1,1).$ Then (4.1) gives $\displaystyle\Delta_{K_{n}}(t)$ $\displaystyle=\frac{t-1}{(t^{6}-1)(t^{3}-1)}\Delta_{L}(t,t^{6(n+1)},t^{-6(n+1)})$ $\displaystyle=\frac{t^{2}(t-1)}{t^{6}-1}(t^{18n+19}+t^{12n+15}+t^{12n+11}+t^{6n+8}+t^{6n+4}+1)$ $\displaystyle\stackrel{{\scriptstyle.}}{{=}}\frac{t^{18n+19}+t^{12n+15}+t^{12n+11}+t^{6n+8}+t^{6n+4}+1}{t^{5}+t^{4}+t^{3}+t^{2}+t+1}.$ (Recall that $\stackrel{{\scriptstyle.}}{{=}}$ means equivalence up to units.) Next, we calculate (4.2) $(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\bigl{(}t^{6n+4}+\sum_{i=0}^{n}(A_{1}+A_{2}+A_{3}+A_{4}+A_{5})\bigr{)}.$ First, $(t^{5}+t^{4}+t^{3}+t^{2}+t+1)t^{6n+4}=t^{6n+9}+t^{6n+8}+t^{6n+7}+t^{6n+6}+t^{6n+5}+t^{6n+4}.$ Next, $\displaystyle(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}A_{1}$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}(t^{6(n-i)}-t^{6(n-i)+1})$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}t^{6(n-i)}(1-t)$ $\displaystyle=(1-t^{6})\sum_{i=0}^{n}t^{6(n-i)}$ $\displaystyle=\sum_{i=0}^{n}t^{6(n-i)}-\sum_{i=0}^{n}t^{6(n-i)+6}$ $\displaystyle=1-t^{6n+6}.$ Similarly, $\displaystyle(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}A_{2}$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}(t^{6(n+i)+6}-t^{6(n+i)+5})$ $\displaystyle=(t^{6}-1)\sum_{i=0}^{n}t^{6(n+i)+5}$ $\displaystyle=\sum_{i=0}^{n}t^{6(n+i)+11}-\sum_{i=0}^{n}t^{6(n+i)+5}$ $\displaystyle=t^{12n+11}-t^{6n+5},$ $\displaystyle(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}A_{3}$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}(t^{6(n+i)+8}-t^{6(n+i)+7})$ $\displaystyle=(t^{6}-1)\sum_{i=0}^{n}t^{6(n+i)+7}$ $\displaystyle=\sum_{i=0}^{n}t^{6(n+i)+13}-\sum_{i=0}^{n}t^{6(n+i)+7}$ $\displaystyle=t^{12n+13}-t^{6n+7},$ $\displaystyle(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}A_{4}$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}(t^{6(n+i)+10}-t^{6(n+i)+9})$ $\displaystyle=(t^{6}-1)\sum_{i=0}^{n}t^{6(n+i)+9}$ $\displaystyle=\sum_{i=0}^{n}t^{6(n+i)+15}-\sum_{i=0}^{n}t^{6(n+i)+9}$ $\displaystyle=t^{12n+15}-t^{6n+9},$ $\displaystyle(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}A_{5}$ $\displaystyle=(t^{5}+t^{4}+t^{3}+t^{2}+t+1)\sum_{i=0}^{n}(t^{6(2n+i)+14}-t^{6(2n+i)+13})$ $\displaystyle=(t^{6}-1)\sum_{i=0}^{n}t^{6(2n+i)+13}$ $\displaystyle=\sum_{i=0}^{n}t^{6(2n+i)+19}-\sum_{i=0}^{n}t^{6(2n+i)+13}$ $\displaystyle=t^{18n+19}-t^{12n+13}.$ These show that (4.2) is equal to 
$t^{18n+19}+t^{12n+15}+t^{12n+11}+t^{6n+8}+t^{6n+4}+1$. Hence $\Delta_{K_{n}}(t)=t^{6n+4}+\sum_{i=0}^{n}(A_{1}+A_{2}+A_{3}+A_{4}+A_{5}).$ ∎

###### Lemma 4.2.

For $n\geq 1$, let $\mathcal{T}=\langle 6,6n+4,6n+8,12n+11,12n+15\rangle$. Then the semigroup $\mathcal{T}$ has rank $5$.

###### Proof.

Let $G$ be a generating set of $\mathcal{T}$. It suffices to show that $\\{6,6n+4,6n+8,12n+11,12n+15\\}\subset G$. Since $6$ is the minimal nonzero element of $\mathcal{T}$, we need $6\in G$. Apart from the multiples of $6$, $6n+4$ is the minimal element of $\mathcal{T}$ and $6n+8$ is the next smallest, so $6n+4,6n+8\in G$. Assume $12n+11=6a+(6n+4)b+(6n+8)c$ for $a,b,c\geq 0$. Reducing modulo $2$ gives $1\equiv 0\pmod{2}$, so $12n+11\not\in\langle 6,6n+4,6n+8\rangle$. Hence $12n+11\in G$. Finally, assume $12n+15=6a+(6n+4)b+(6n+8)c+(12n+11)d$ for $a,b,c,d\geq 0$. Then $1\equiv d\pmod{2}$, so $d\neq 0$. In fact, $d=1$, because $2(12n+11)>12n+15$. We have $4=6a+(6n+4)b+(6n+8)c$. Since $n\geq 1$, this is impossible. Hence $12n+15\in G$. ∎

###### Theorem 4.3.

The formal semigroup $\mathcal{S}$ of $K_{n}$ is a semigroup of rank $5$: $\mathcal{S}=\langle 6,6n+4,6n+8,12n+11,12n+15\rangle.$

###### Proof.

By Theorem 4.1, $\Delta_{K_{n}}(t)=t^{6n+4}+\sum_{i=0}^{n}(A_{1}+A_{2}+A_{3}+A_{4}+A_{5}).$ Hence, as a formal power series, $\displaystyle\frac{\Delta_{K_{n}}(t)}{1-t}$ $\displaystyle=\frac{t^{6n+4}}{1-t}+\sum_{i=0}^{n}\Bigl{(}\frac{A_{1}}{1-t}+\frac{A_{2}}{1-t}+\frac{A_{3}}{1-t}+\frac{A_{4}}{1-t}+\frac{A_{5}}{1-t}\Bigr{)}$ $\displaystyle=t^{6n+4}\sum_{j=0}^{\infty}t^{j}+\sum_{i=0}^{n}(t^{6(n-i)}-t^{6(n+i)+5}-t^{6(n+i)+7}-t^{6(n+i)+9}-t^{6(2n+i)+13})$ $\displaystyle=\sum_{j=0}^{\infty}t^{6n+4+j}+\sum_{i=0}^{n}t^{6(n-i)}-t^{6n+5}\sum_{i=0}^{n}(t^{6i}+t^{6i+2}+t^{6i+4})-\sum_{i=0}^{n}t^{12n+13+6i}.$ Then $\mathcal{S}=(\mathbb{Z}_{\geq 6n+4}\cup B)\setminus(C\cup D),$ where $B=\\{0,6,12,\dots,6n\\}$, $C=\\{6n+5,6n+7,6n+9,\dots,12n+5,12n+7,12n+9\\}$ and $D=\\{12n+13,12n+19,\dots,18n+13\\}$. Let $\mathcal{T}=\langle 6,6n+4,6n+8,12n+11,12n+15\rangle$. We need to show that $\mathcal{S}=\mathcal{T}$. First, if $m\geq 18n+14$, then $m\in\mathcal{S}$. Thus $\mathbb{Z}_{\geq 18n+14}\subset\mathcal{S}$. To show that $\mathbb{Z}_{\geq 18n+14}\subset\mathcal{T}$, it suffices to verify that $18n+14,18n+15,\dots,18n+19\in\mathcal{T}$, since $6\in\mathcal{T}$. This follows from $18n+14\equiv 6n+8,18n+15\equiv 12n+15,18n+16\equiv 6n+4,$ $18n+17\equiv 12n+11,18n+18\equiv 6\pmod{6},$ $18n+19=(6n+8)+(12n+11).$ Next, the set $\mathbb{Z}_{<18n+14}$ of nonnegative integers less than $18n+14$ is decomposed into the congruence classes modulo 6, $Z_{0},Z_{1},Z_{2},\dots,Z_{5}$, where $Z_{i}=\\{m\mid 0\leq m<18n+14,m\equiv i\pmod{6}\\}$. Then $B\subset Z_{0}$, $D\subset Z_{1}$, and $C\subset Z_{1}\cup Z_{3}\cup Z_{5}$. We examine each congruence class.

* • $Z_{0}\subset\mathcal{S}$ and $Z_{0}\subset\mathcal{T}$. Thus $Z_{0}\cap\mathcal{S}=Z_{0}\cap\mathcal{T}=Z_{0}$.
* • $Z_{1}\cap\mathcal{S}=\varnothing$.
* • $Z_{2}\cap\mathcal{S}=\\{6n+8,6n+14,\dots,18n+8\\}\subset\mathcal{T}$.
* • $Z_{3}\cap\mathcal{S}=\\{12n+15,12n+21,\dots,18n+9\\}\subset\mathcal{T}$.
* • $Z_{4}\cap\mathcal{S}=\\{6n+4,6n+10,\dots,18n+10\\}\subset\mathcal{T}$.
* • $Z_{5}\cap\mathcal{S}=\\{12n+11,12n+17,\dots,18n+11\\}\subset\mathcal{T}$.

Hence $\mathcal{S}\cap\mathbb{Z}_{<18n+14}\subset\mathcal{T}$. Conversely, let $m\in Z_{1}$. If $m\in\mathcal{T}$, then we need to use $12n+11$ or $12n+15$ to yield $m$.
Since $m\leq 18n+13$, either

* (1) $m=(12n+11)+r$ and $r\leq 6n+2$, $r\equiv 2\pmod{6}$, or
* (2) $m=(12n+15)+r$ and $r\leq 6n-2$, $r\equiv 4\pmod{6}$.

However, there is no $r\in\mathcal{T}$ satisfying these. Hence $Z_{1}\cap\mathcal{T}=\varnothing$. Let $m\in Z_{2}\cap\mathcal{T}$. If $m<6n+8$, then $m\leq 6n+2$. So there is no such element in $\mathcal{T}$. Hence $Z_{2}\cap\mathcal{S}=Z_{2}\cap\mathcal{T}$. Similarly, let $m\in Z_{4}\cap\mathcal{T}$. If $m<6n+4$, then $m\leq 6n-2$. But there is no such element in $\mathcal{T}$. Hence $Z_{4}\cap\mathcal{S}=Z_{4}\cap\mathcal{T}$. Let $m\in Z_{3}\cap\mathcal{T}$. Again, we need to use $12n+11$ or $12n+15$ to yield $m$. Then, we have either

* (3) $m=(12n+11)+r$ and $r\leq 6n-2$ and $r\equiv 4\pmod{6}$, or
* (4) $m=(12n+15)+r$ and $r\leq 6n-6$ and $r\equiv 0\pmod{6}$.

For (3), there is no $r\in\mathcal{T}$. For (4), $r\in\\{0,6,12,\dots,6n-6\\}$, so $m\in Z_{3}\cap\mathcal{S}$. Hence $Z_{3}\cap\mathcal{S}=Z_{3}\cap\mathcal{T}$. Finally, let $m\in Z_{5}\cap\mathcal{T}$. Again, we have either

* (5) $m=(12n+11)+r$ and $r\leq 6n$ and $r\equiv 0\pmod{6}$, or
* (6) $m=(12n+15)+r$ and $r\leq 6n-4$ and $r\equiv 2\pmod{6}$.

For (6), there is no $r\in\mathcal{T}$. For (5), $r\in\\{0,6,12,\dots,6n\\}$, so $m\in Z_{5}\cap\mathcal{S}$. Hence $Z_{5}\cap\mathcal{S}=Z_{5}\cap\mathcal{T}$. ∎

## 5\. Hyperbolicity

In this section, we prove that our knot $K_{n}$ is a hyperbolic knot for any $n\geq 1$ by using the fact that $K_{n}$ has tunnel number one.

###### Lemma 5.1.

For $n\geq 1$, $K_{n}$ has tunnel number one, hence $K_{n}$ is prime.

###### Proof.

Figure 14 shows an unknotting tunnel $\gamma$ for $K_{n}$. The sequence of isotopies illustrated in Fig. 14 shows that the exterior of a regular neighborhood of $K_{n}\cup\gamma$ is a genus two handlebody. ∎

Figure 14. The unknotting tunnel $\gamma$ and the sequence of isotopies of $N(K_{n}\cup\gamma)$.

###### Theorem 5.2.

For $n\geq 1$, $K_{n}$ is a hyperbolic knot.

###### Proof.

By Theorem 4.3, the formal semigroup of $K_{n}$ is a semigroup of rank $5$. Since the formal semigroup of a torus knot is a semigroup of rank two (see [7]), $K_{n}$ is not a torus knot. Assume, for a contradiction, that $K_{n}$ is a satellite knot. By Lemma 5.1, $K_{n}$ has tunnel number one. Then Morimoto and Sakuma’s classification [22] tells us that $K_{n}$ has a torus knot $T(p,q)$ as its companion. Since $K_{n}$ has bridge number at most $6$, the companion has bridge number at most three [30]. More precisely, either the companion is $3$–bridge and the wrapping number of the pattern is two, or the companion is $2$–bridge and the wrapping number is two or three. By Theorem 3.1, $K_{n}$ is an L–space knot. Then [10] implies that the pattern knot is also an L–space knot. Furthermore, [5, Theorem 1.17] claims that the pattern is braided in the pattern solid torus. In particular, the wrapping number coincides with the winding number there. We divide the argument into two cases.

Case 1. Suppose that $K_{n}$ has a companion $T(3,q)\ (|q|>3)$ and a braided pattern knot $P$. Then the wrapping number and winding number of $P$ are equal to two. This means that $K_{n}$ is a $2$–cable of $T(3,q)$. However, the cabling formula of [35] shows that its formal semigroup has rank three. Hence this case is impossible.

Case 2. Suppose that $K_{n}$ has a companion $T=T(2,q)\ (|q|\geq 3)$ and a braided pattern knot $P$. We may assume that $q>0$ by taking the mirror image of $K_{n}$, if necessary.
If the wrapping number is two, then $K_{n}$ is a $2$–cable, which is a contradiction again. Hence the wrapping number and the winding number are equal to three. For the Alexander polynomials, we have $\Delta_{K_{n}}(t)=\Delta_{T}(t^{3})\Delta_{P}(t)$ (see [8]). Here, $\Delta_{T}(t^{3})=1-t^{3}+t^{6}-\dots+t^{3(q-1)}$. Also, this implies $g(K_{n})=3g(T)+g(P)$ (see [13, Lemma 2.6]). By [17, Theorem 3.1], the only closed $3$–braids which are L–space knots are torus knots and twisted torus knots $T(3,t;2,s)$ with $ts>0$. Here, $T(3,t;2,s)$ is obtained from $T(3,t)$ by adding $s$ full twists on two adjacent strings. Again, a $3$–cable is excluded.

We recall the construction of tunnel number one satellite knots from [22]. Let $k_{1}\cup k_{2}$ be a $2$–bridge link in $S^{3}$. We remark that each component $k_{i}$ is unknotted. The exterior of $k_{2}$ is a solid torus $J$ containing $k_{1}$ in its interior. Here, the longitude of $J$ is the meridian of $k_{2}$. For the companion $T$, consider the homeomorphism $f$ from $J$ to the regular neighborhood $N(T)$ of $T$, which sends the longitude of $J$ to the $(1,2q)$–curve on $\partial N(T)$. This $(1,2q)$–curve corresponds to a regular fiber of the Seifert fibration in the exterior of $T$. Then the image $f(k_{1})$ gives our $K_{n}$. Since the pattern knot $P$ is defined so as to preserve the preferred longitudes of $J$ and $N(T)$, $P$ is obtained from $k_{1}$ in $J$ by adding $2q$ full twists. Conversely, if we add $(-2q)$ full twists on $P$, then the result is unknotted. By the classification of twisted torus knots which are unknotted in [18], $T(3,2;2,-1)$, $T(3,2;2,-2)$, $T(3,1;2,-1)$ and their mirror images $T(3,-2;2,1)$, $T(3,-2;2,2)$, $T(3,-1;2,1)$ give all $3$–strand twisted torus knots that are unknotted. Thus Table 1 lists the possible pattern knots $P$ with their genera. (Each knot has a positive braid presentation, so its genus is calculated as in Section 2.) Since $P$ is an L–space knot and not a $3$–cable, (1), (2) and (3) are excluded by [17]. (In fact, (1) gives a $3$–cable.)

| | Knot | Genus |
|---|---|---|
| (1) | $T(3,6q+2;2,-1)$ | $6q$ |
| (2) | $T(3,6q+2;2,-2)$ | $6q-1$ |
| (3) | $T(3,6q+1;2,-1)$ | $6q-1$ |
| (4) | $T(3,6q-2;2,1)$ | $6q-2$ |
| (5) | $T(3,6q-2;2,2)$ | $6q-1$ |
| (6) | $T(3,6q-1;2,1)$ | $6q-1$ |

Table 1. List of the pattern knot $P$ and its genus.

Recall that $g(K_{n})=9n+7$ and $g(T)=(q-1)/2$. If $g(P)=6q-1$, then $9n+7=3(q-1)/2+6q-1$, so $18n+14=3(q-1)+12q-2=15q-5$. Since $15q-5\equiv-2\pmod{3}$ while $18n+14\equiv 2\pmod{3}$, this is a contradiction. Thus (4) remains. For this case, $9n+7=3(q-1)/2+6q-2$ gives $6n+7=5q$. Then $n\equiv 3\pmod{5}$. Set $n=5m+3\ (m\geq 0)$. Then $q=6m+5$. We have $\Delta_{K_{n}}(-1)=\Delta_{T}(-1)\Delta_{P}(-1)$, and $\Delta_{K_{n}}(-1)=10n+11=50m+41$ from Theorem 4.1. However, $\Delta_{T}(-1)=q=6m+5$, and $6m+5$ does not divide $50m+41$ (indeed, $50m+41=8(6m+5)+(2m+1)$ with $0<2m+1<6m+5$), a contradiction. Thus we have shown that our $K_{n}$ is hyperbolic. ∎

###### Proof of Theorem 2.1.

By Theorems 3.1 and 5.2, $K_{n}$ is a hyperbolic L–space knot. Its formal semigroup is described in Theorem 4.3. ∎

## Acknowledgments

The author would like to thank Ken Baker, Marc Kegel and Kimihiko Motegi for valuable communication, and Yukinori Kitadai for his help with computer calculations. The author also thanks the referee for valuable suggestions and comments.

## References

* [1] C. Anderson, K. Baker, X. Gao, M. Kegel, K. Le, K. Miller, S. Onaran, G. Sangston, S. Tripp, A. Wood and A. Wright, L–space knots with tunnel number $>1$ by experiment, preprint. arXiv:1909.00790.
* [2] K. Baker and M.
Kegel, Census L–space knots are braid positive, except for one that is not, preprint. arXiv:2203.12013. * [3] K. Baker, M. Kegel and D. McCoy, The search for alternating and quasi-alternating surgeries on asymmetric knots, preprint. * [4] K. Baker and A. Moore, Montesinos knots, Hopf plumbings, and L–space surgeries, J. Math. Soc. Japan. 70 (2018) 95–110. * [5] K. Baker and K. Motegi, Seifert vs. slice genera of knots in twist families and a characterization of braid axes, Proc. Lond. Math. Soc. (3) 119 (2019) 1493–1530. * [6] M. Borodzik and C. Livingston, Heegaard Floer homology and rational cuspidal curves, Forum Math. Sigma. 2 (2014), Paper No. e28, 23 pp. * [7] M. Borodzik and C. Livingston, Semigroups, $d$–invariants and deformations of cuspidal singular points of plane curves, J. Lond. Math. Soc. (2) 93 (2016) 439-463. * [8] G. Burde, H. Zieschang and M. Heusener, Knots, Third, fully revised and extended edition, (De Gruyter Studies in Mathematics, 5. De Gruyter, Berlin, 2014.) * [9] R. H. Fox, Free differential calculus II, Ann. of Math. (2) 59 (1954) 196–210. * [10] J. Hanselman, J. Rasmussen and L. Watson, Bordered Floer homology for manifolds with torus boundary via immersed curves, preprint. arXiv:1604.03466. * [11] M. Hedden and L. Watson, On the geography and botany of knot Floer homology, Selecta Math. (N.S.) 24 (2018) 997–1037. * [12] E. Hironaka, The Lehmer polynomial and pretzel links, Canad. Math. Bull. 44 (2001) 440–451. * [13] J. Hom, T. Lidman and F. Vafaee, Berge–Gabai knots and L–space satellite operations, Algebr. Geom. Topol. 14 (2014) 3745–3763. * [14] A. Issa, The classification of quasi-alternating Montesinos links, Proc. Amer. Math. Soc. 146 (2018) 4047–4057. * [15] K. Kodama, The software “KNOT”, a tool for knot theory, available at http://www.math.kobe-u.ac.jp/HOME/kodama/knot.html * [16] D. Krcatovich, A restriction on the Alexander polynomials of L–space knots, Pacific J. Math. 297 (2018) 117–129. * [17] C. R. S. Lee and F. Vafaee, On $3$–braids and L–space knots, Geom. Dedicata. 213 (2021) 513–521. * [18] S. Lee, Twisted torus knots that are unknotted, Int. Math. Res. Not. (2014) 4958–4996. * [19] P. Lisca and G. Matić, Transverse contact structures on Seifert $3$–manifolds, Algebr. Geom. Topol. 4 (2004) 1125–1144. * [20] P. Lisca and A. Stipsicz, Ozsváth-Szabó invariants and tight contact $3$–manifolds. III, J. Symplectic Geom. 5 (2007) 357–384. * [21] J. M. Montesinos, Surgery on links and double branched covers of $S^{3}$, in Knots, groups, and $3$–manifolds (Papers dedicated to the memory of R. H. Fox), pp. 227–259. (Ann. of Math. Studies, No. 84, Princeton Univ. Press, Princeton, N.J., 1975. ) * [22] K. Morimoto and M. Sakuma, On unknotting tunnels for knots, Math. Ann. 289 (1991) 143–167. * [23] H. Morton, The Alexander polynomial of a torus knot with twists, J. Knot Theory Ramifications. 15 (2006) 1037–1047. * [24] K. Motegi and K. Tohki, Hyperbolic L–space knots and exceptional Dehn surgeries, J. Knot Theory Ramifications. 23 (2014) 1450079, 13 pp. * [25] Y. Ni, Knot Floer homology detects fibred knots, Invent. Math. 170 (2007) 577–608. * [26] P. Ozsváth and Z. Szabó, On knot Floer homology and lens space surgeries, Topology. 44 (2005) 1281–1300. * [27] P. Ozsváth and Z. Szabó, On the Heegaard Floer homology of branched double-covers, Adv. Math. 194 (2005) 1–33. * [28] P. Ozsváth and Z. Szabó, Knot Floer homology and rational surgeries, Algebr. Geom. Topol. 11 (2011) 1–68. * [29] J. Rasmussen and S. 
Rasmussen, Floer simple manifolds and L–space intervals, Adv. Math. 322 (2017) 738–805. * [30] H. Schubert, Über eine numerische Knoteninvariante, Math. Z. 61 (1954) 245–288. * [31] J. Stallings, Constructions of fibred knots and links, in Algebraic and geometric topology, Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976, Part 2, pp. 55–60, (Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978.) * [32] M. Tange, On the Alexander polynomial of lens space knots, Topology Appl. 275 (2020) 107124, 37 pp. * [33] M. Tange, The third term in lens surgery polynomials, Hiroshima Math. J. 51 (2021) 101–109. * [34] G. Torres, On the Alexander polynomial, Ann. of Math. (2) 57 (1953) 57–89. * [35] S. Wang, Semigroups of L–space knots and nonalgebraic iterated torus knots, Math. Res. Lett. 25 (2018) 335–346. * [36] L. Watson, A surgical perspective on quasi-alternating links, in Low-dimensional and symplectic topology, 39–51, (Proc. Sympos. Pure Math., 82, Amer. Math. Soc., Providence, RI, 2011. )
# IS-CAM: Integrated Score-CAM for axiomatic-based explanations

Rakshit Naidu, Ankita Ghosh, Yash Maurya, Shamanth R Nayak K (Manipal Institute of Technology) <EMAIL_ADDRESS>
Soumya Snigdha Kundu (SRM Institute of Science and Technology) <EMAIL_ADDRESS>

###### Abstract

Convolutional Neural Networks have long been treated as black-box models because humans cannot interpret their inner workings. In an attempt to make CNNs more interpretable and trustworthy, we propose IS-CAM (Integrated Score-CAM), which introduces an integration operation within the Score-CAM pipeline to obtain attribution maps that are visually sharper and quantitatively stronger. Our method is evaluated on 2000 randomly selected images from the ILSVRC 2012 Validation dataset, demonstrating the versatility of IS-CAM across different models and methods.

###### Keywords: Explainable AI Interpretable ML.

## I Introduction

Convolutional Neural Networks (CNNs) are paramount in solving state-of-the-art vision problems. Deploying these models in sensitive settings such as the medical and security industries cannot be done without understanding and interpreting their reasoning; doing so blindly greatly increases the chance of model failure and erodes confidence in the model. To address these concerns, a research direction emerged around building explainable models with Class Activation Maps (CAMs) [12]. Explainable models not only help in recognizing drawbacks but also in generating insights and accumulating valuable information in tandem with the model’s inference. They also help in debugging the model and removing bias.

Our work builds upon CAM-based approaches [10], [9], which obtain attribution maps as a linear combination of weights and activation maps. Of the two broad approaches to CAMs, we focus on the gradient-free one, since gradient-based CAMs suffer from issues such as saturation and false confidence [7]. One of the first gradient-free methods was Score-CAM [10], but its coarse localization can be erratic in certain cases. Our contributions to overcome the existing issues are:

* • We propose a new axiomatic-based approach, IS-CAM, which is integrated into the Score-CAM pipeline to produce sharper attribution maps.
* • We attain improved performance in comparison to previous CAM-based methods. We evaluate quantitatively on faithfulness and localization tasks, which indicate that IS-CAM better localizes decision-related features.

## II Related Work

IntegratedGrad: [9] demonstrated the ability to debug a network by extracting certain rules from it, thereby enabling users to engage more with models and understand their predictions. They introduced two axioms for attribution methods, namely: Sensitivity (if there is a feature difference between the input and the baseline and they have different predictions, then the differing feature should be assigned a non-zero attribution) and Implementation Invariance (if two networks give the same output for all inputs, despite having different implementations, the attributions should be equal in these two functionally equivalent networks).
The integrated gradient along the ${i}^{th}$ dimension is defined as: $\left(x_{i}-x_{i}^{\prime}\right)\times\int^{1}_{\alpha=0}\dfrac{\partial F(x^{\prime}+\alpha\times\left(x-x^{\prime}\right))}{\partial x_{i}}d\alpha$ (1) where ${x}$ is the input and ${x^{\prime}}$ is the baseline. $\dfrac{\partial F\left(x\right)}{\partial x_{i}}$ represents the gradient of ${F(x)}$ along the ${i}^{th}$ dimension.

Class Activation Maps: The inspiration driving CAM [12] is that each activation map $A^{k}_{l}$, denoting the activation map of the $k$-th channel in the $l$-th layer, contains distinctive spatial information about the input $X$. For a given class $c$, the input to the softmax $S_{c}$ is $\sum\limits_{k}w_{c}^{k}A_{l}^{k}$, where $w_{c}^{k}$ is the weight corresponding to class $c$ for the $k$-th channel and each $A_{l}^{k}$ enters through global average pooling (GAP). CAM $L^{c}_{CAM}$ can be defined as $L^{c}_{CAM}=ReLU\left(\sum\limits_{k}w_{c}^{k}A_{l-1}^{k}\right)$ (2)

Grad-CAM: As CAM is limited to GAP-based CNN models, Grad-CAM [7] was developed to generalize to a wider range of CNN architectures. To weigh the importance of each neuron for a decision of interest, Grad-CAM uses the gradient information flowing into the last convolutional layer. Considering an activation map $A^{k}$ for the $k$-th channel, Grad-CAM $L^{c}_{Grad-CAM}$ for target class $c$ can be defined as $L^{c}_{Grad-CAM}=ReLU\left(\sum\limits_{k}\alpha_{c}^{k}A^{k}\right)$ (3) where $\alpha_{c}^{k}$ represents the neuron importance weights. $\alpha_{c}^{k}=\frac{1}{Z}\sum\limits_{i}\sum\limits_{j}\frac{\partial Y_{c}}{\partial A^{k}_{ij}}$ where $Y_{c}$ is the score computed for the target class, $(i,j)$ represents the location of the pixel and $Z$ denotes the total number of pixels. Other variants of Grad-CAM, such as Grad-CAM++ and Smooth Grad-CAM++, serve as comparisons for our algorithm in the sections that follow.

Score-CAM: Score-CAM [10] uses the scores obtained for a specific target class $c$ as weights. Score-CAM dispenses with the reliance on gradients and provides a more general framework, as it only requires access to the class activation maps and output scores. Considering an activation map $A^{k}_{l}$ for the $k$-th channel and $l$-th convolutional layer, Score-CAM $L^{c}_{Score-CAM}$ can be defined as $L^{c}_{Score-CAM}=ReLU\left(\sum\limits_{k}\alpha_{c}^{k}A^{k}_{l}\right)$ (4) where $\alpha_{c}^{k}$ denotes the channel-wise Increase of Confidence performed on $A^{k}_{l}$ in order to measure the importance of the activation map.

## III Proposed Approach

In this section, we explain how we combine IntegratedGrad [9] with the Score-CAM pipeline. Figure 1 shows our pipeline. We set a parameter $N$ as the number of intervals in the range $[0,1]$. As the integration operation is analogous to summation, we calculate scores of the maps at each step of the interval from $0$ to $1$. Finally, we average the scores generated, as the mean operation is sensitive to changes in the saliency maps generated at each step of the process. Note that $M_{0}=0$.
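Before stating the formal definition, the pipeline just described can be sketched in a few lines of PyTorch-style Python. All names (`model`, `is_cam_weight`, the interface) are ours, purely for illustration, and the referenced equation numbers are those of Eqs. (5)–(8) below:

```python
import torch
import torch.nn.functional as F

def is_cam_weight(model, x, act_map, target_class, n_steps=10):
    """Sketch of the IS-CAM weight for one activation map: average the target-class
    score of cumulatively built input masks, starting from M_0 = 0 (Eqs. (6)-(7) below)."""
    # Upsample the activation map to the input size and min-max normalize it (Eq. (8) below).
    A = F.interpolate(act_map[None, None], size=x.shape[-2:], mode='bilinear')[0, 0]
    A = (A - A.min()) / (A.max() - A.min() + 1e-8)
    M = torch.zeros_like(x)
    score_sum = 0.0
    for i in range(1, n_steps + 1):
        M = M + (x * A) * (i / n_steps)      # Eq. (7): M_{i+1} = M_i + (X_0 * A) * i/N
        with torch.no_grad():
            score_sum += torch.softmax(model(M[None]), dim=1)[0, target_class].item()
    return score_sum / n_steps               # Eq. (6): alpha as the mean of the N scores

# The saliency map is then the ReLU of the weighted sum of the upsampled
# activation maps, as in Eq. (5) below.
```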
Integrating over the input mask: $L^{c}_{IS-CAM}=ReLU\left(\sum_{k}\alpha^{c}_{k}A^{k}_{l}\right)$ (5) where $\alpha^{c}_{k}=\dfrac{\sum^{N}_{i=1}\left(C(M_{i})\right)}{N}$ (6) $M_{i+1}\leftarrow M_{i}+\left((X_{0}*A^{k}_{l})*\frac{i}{N}\right)$ (7)

Normalization: As the spatial attention needs to focus on the object in the image, we emphasize the features within a particular region by following the same normalization function as stated in [10], [11]. The normalization used in the algorithm is given as: $s\left(A^{k}_{l}\right)=\dfrac{A^{k}_{l}-min(A^{k}_{l})}{max(A^{k}_{l})-min(A^{k}_{l})}$ (8)

Figure 1: Pipeline of the proposed IS-CAM approach. The saliency map is produced by the linear combination of the average scores after “integration” and the upsampled activation maps. The average score is obtained from performing summation over the normalized input mask at every interval.

## IV Experiments

In this section, we conduct experiments to evaluate the effectiveness of the proposed explanation method. Our setup is similar to that described in [1], [6], [10]. First, a qualitative comparison of the methods by visualization on the ILSVRC 2012 Validation set in section A. Second, we assess the faithfulness of the interpretations for object recognition in section B. Third, the Energy-based pointing game (proposed in [10]) is used to evaluate the bounding boxes for class-conditional object localization in section C, over 2000 uniformly randomly selected images from the ILSVRC 2012 Validation set.

Our comparative analysis extends to five other known CAM methods: Grad-CAM [7], Grad-CAM++ [1], Smooth Grad-CAM++ [5], Score-CAM [10], and Smoothed Score-CAM [11]. The images are resized to a fixed size (224, 224, 3), rescaled into the [0,1] range and then normalized using ImageNet [2] statistics (mean vector [0.485, 0.456, 0.406] and standard deviation vector [0.229, 0.224, 0.225]). For simplicity, the baseline image $X_{b}$ is set to 0 (as in the Channel-wise Increase in Confidence of [10]).

A. _Visual Comparison_

To perform this experiment, 2,000 images were randomly selected from the 2012 ILSVRC Validation Set. Fig. 2 shows sample images comparing our approach to prevailing CAM approaches. Here, we used N = 15 and $\sigma$ = 2 for SS-CAM. Even though we achieve visual results comparable to Score-CAM, we perform better quantitatively in terms of faithfulness, as shown in the next section.

Figure 2: ImageNet labels (row-wise): Basenji, Capuchin and Whippet. This figure provides a visual comparison of our approach with the other existing approaches. We use $N=10$ here.

B. _Faithfulness Evaluations_

Faithfulness evaluations are carried out as described in Grad-CAM++ [1] for the purpose of object recognition. Three metrics, Average Drop, Average Increase in Confidence, and Win %, are implemented. These metrics are tested on 2000 images randomly chosen from the ILSVRC 2012 Validation set, using the pre-trained VGG-16 model. To perform this sub-experiment, we used N = 15 and $\sigma$ = 2 (for SS-CAM).

Table I: Average AUC scores of the Insertion curve (the higher, the better) and Deletion curve (the lower, the better) over all the 2000 images.
| CAM techniques | Insertion % | Deletion % |
|---|---|---|
| Grad-CAM | 45.25 | 11.25 |
| Grad-CAM++ | 44.94 | 11.41 |
| Smooth Grad-CAM++ | 42.68 | 13.43 |
| Score-CAM | 48.22 | 9.92 |
| SS-CAM | 45.92 | 11.46 |
| IS-CAM | 48.13 | 9.92 |

Insertion and Deletion curves are used to calculate the Area Under Curve (AUC) metric to understand how many pixels of the saliency map either add to or reduce the scores of the resulting fractioned maps. We average the resulting pixel values at each stage (deleting/inserting 224 pixels per step) over all the 2000 images and produce the graphs in Figure 3. The Deletion operation demonstrates the ability to remove the map information pixel-wise. A sharp decline and a lower AUC of the generated scores imply a good explanation. The Insertion operation evaluates the ability to reconstruct the saliency map from a given baseline. A sharp rise and a higher AUC of the generated scores imply a good explanation.

Figure 3: Insertion and Deletion curve charts for Table I.

1. Average Drop %: The Average Drop is the average, over all images, of the maximum positive difference between the prediction using the input image and the prediction using the saliency map. It is given as: $\frac{1}{N}\sum_{i=1}^{N}\frac{\max(0,Y_{i}^{c}-O_{i}^{c})}{Y_{i}^{c}}\times 100$. Here, $Y_{i}^{c}$ refers to the prediction score on class $c$ using the input image $i$, and $O_{i}^{c}$ refers to the prediction score on class $c$ using the saliency map produced over the input image $i$.

2. Increase in Confidence %: The Average Increase in Confidence is denoted as: $\sum_{i=1}^{N}\frac{Fun(Y_{i}^{c}<O_{i}^{c})}{N}\times 100$, where $Fun$ refers to a boolean function that returns 1 if the condition inside the brackets is true, and 0 otherwise. The symbols are as in the Average Drop above.

3. Win %: The Win percentage refers to the fraction of images on which the explanation map generated by IS-CAM produces a smaller drop in the model's confidence than the map generated by a competing algorithm. This metric compares the confidence generated by SS-CAM [11] maps and Score-CAM [10] maps with IS-CAM maps. When our approach is compared to SS-CAM we get 59.25%, and when compared to Score-CAM we get 52.35%, using VGG-16 (higher is better); this indicates that IS-CAM performs better with respect to this metric.

The AUC scores, Average Drop and Increase in Confidence indicate that IS-CAM performs better from an overall perspective. While Score-CAM performs well in AUC scores, it fails to do so in Average Drop and Inc% using VGG-16. Likewise, SS-CAM does well in Average Drop and Inc% but fails to do so in AUC scores. IS-CAM does well from both perspectives, which shows its versatility.

Table II: Average Drop (the lower, the better) and Average Increase in Confidence (the higher, the better) across 2000 ILSVRC Validation images.

| CAM techniques | VGG-16 Avg Drop% | VGG-16 Avg Inc% | ResNet Avg Drop% | ResNet Avg Inc% | SqueezeNet Avg Drop% | SqueezeNet Avg Inc% |
|---|---|---|---|---|---|---|
| Score-CAM | 66.03 | 51.85 | 64.23 | 53.55 | 13.42 | 60.85 |
| SS-CAM | 79.15 | 51.30 | 64.53 | 54.80 | 12.06 | 64.85 |
| IS-CAM | 63.30 | 52.35 | 64.85 | 53.50 | 13.00 | 62.15 |

C. _Localization Evaluations_

This section reports evaluations based on bounding boxes. A metric known as the Energy-based pointing game, introduced in [10], is employed for our localization experiments. It calculates how much energy of the saliency map falls within the given bounding box. This is achieved in two steps.
First, the input image is binarized: the interior of the bounding box is marked as 1 and the region outside the bounding box as 0. Second, this mask is multiplied element-wise with the saliency map generated for the input image and summed to obtain the proportion, given as $Proportion=\frac{\sum L^{c}_{(i,j)\in bbox}}{\sum L^{c}_{(i,j)\in bbox}+\sum L^{c}_{(i,j)\notin bbox}}$. We evaluate this metric on 2000 randomly selected images from the ILSVRC 2012 Validation set [2]. These images are then fed to 3 pre-trained models, namely VGG-16 [8], ResNet-18 (Residual Network with 18 layers) [3], and SqueezeNet1.0 [4]. Table III portrays the results of the localization evaluation for the 3 architectures. We see that IS-CAM performs better than most techniques on all three models. It also achieves the highest value for the VGG-16 variant.

Table III: Localization Evaluation

| CAM techniques | VGG-16 Proportion (%) | ResNet-18 Proportion (%) | SqueezeNet1.0 Proportion (%) |
|---|---|---|---|
| Grad-CAM | 42.69 | 43.55 | 42.01 |
| Grad-CAM++ | 42.87 | 43.53 | 41.83 |
| Smooth Grad-CAM++ | 42.97 | 43.56 | 41.77 |
| Score-CAM | 43.07 | 43.46 | 42.48 |
| SS-CAM | 42.46 | 43.30 | 41.98 |
| IS-CAM | 43.17 | 43.52 | 42.40 |

## V Conclusion & Future Work

Our proposed method involves integrating over the input mask and averaging the scores obtained from the normalized masks. According to our experiments, increasing or decreasing the value of $N$ does not have a significant impact on the visual attribution map produced; the effect of $N$ is, however, evident quantitatively, as demonstrated in our experiments. In the future, we hope to test our algorithm in the medical domain to prove its effectiveness in sensitive real-world scenarios.

## Acknowledgment

We thank Mr. Haofan Wang from Carnegie Mellon University for his valuable inputs during the discussion. We would also like to thank the Research Society MIT, Manipal (RSM) for supporting and moderating the project.

## References

* [1] Chattopadhyay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.: Grad-cam++: Improved visual explanations for deep convolutional networks (2017)
* [2] Deng, J., Dong, W., Socher, R., Li, L., Kai Li, Li Fei-Fei: Imagenet: A large-scale hierarchical image database pp. 248–255 (2009)
* [3] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp. 770–778 (2016)
* [4] Iandola, F.N., Moskewicz, M.W., Ashraf, K., Han, S., Dally, W., Keutzer, K.: Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. ArXiv abs/1602.07360 (2017)
* [5] Omeiza, D., Speakman, S., Cintas, C., Weldermariam, K.: Smooth grad-cam++: An enhanced inference level visualization technique for deep convolutional neural network models (2019)
* [6] Petsiuk, V., Das, A., Saenko, K.: Rise: Randomized input sampling for explanation of black-box models (2018)
* [7] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization (2016)
* [8] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2015)
* [9] Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: ICML (2017)
* [10] Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., Hu, X.: Score-cam: Score-weighted visual explanations for convolutional neural networks.
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) pp. 111–119 (2020) * [11] Wang, H., Naidu, R., Michael, J., Kundu, S.S.: Ss-cam: Smoothed score-cam for sharper visual feature localization (2020) * [12] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization (2015)
# Configurational temperature in active matter. II. Quantifying the deviation from thermal equilibrium

Shibu Saw <EMAIL_ADDRESS> Glass and Time, IMFUFA, Department of Science and Environment, Roskilde University, P.O. Box 260, DK-4000 Roskilde, Denmark

Lorenzo Costigliola <EMAIL_ADDRESS> Glass and Time, IMFUFA, Department of Science and Environment, Roskilde University, P.O. Box 260, DK-4000 Roskilde, Denmark

Jeppe C. Dyre <EMAIL_ADDRESS> Glass and Time, IMFUFA, Department of Science and Environment, Roskilde University, P.O. Box 260, DK-4000 Roskilde, Denmark

###### Abstract

This paper suggests using the configurational temperature ${T}_{\rm conf}$ for quantifying how far an active-matter system is from thermal equilibrium. We measure this “distance” by the ratio of the systemic temperature ${T}_{\rm s}$ to ${T}_{\rm conf}$, where ${T}_{\rm s}$ is the canonical-ensemble temperature for which the average potential energy is equal to that of the active-matter system. ${T}_{\rm conf}$ is “local” in the sense that it is the average of a function, which only depends on how the potential energy varies in the vicinity of a given configuration; in contrast ${T}_{\rm s}$ is a global quantity. The quantity ${T}_{\rm s}/{T}_{\rm conf}$ is straightforward to evaluate in a computer simulation; equilibrium simulations in conjunction with a single steady-state active-matter configuration are enough to determine ${T}_{\rm s}/{T}_{\rm conf}$. We validate the suggestion that ${T}_{\rm s}/{T}_{\rm conf}$ quantifies the deviation from thermal equilibrium by data for the radial distribution function of 3d Kob-Andersen and 2d Yukawa active-matter models with active Ornstein-Uhlenbeck and active Brownian Particle dynamics. Moreover, we show that ${T}_{\rm s}/{T}_{\rm conf}$, structure, and dynamics of the homogeneous phase are all approximately invariant along the motility-induced phase separation (MIPS) boundary in the phase diagram of the 2d Yukawa model. The measure ${T}_{\rm s}/{T}_{\rm conf}$ is not limited to active matter; it can be used for quantifying how far any system involving a potential-energy function, e.g., a driven Hamiltonian system, is from thermal equilibrium.

## I Introduction

Temperature is fundamental in thermodynamics and statistical mechanics. Generalizations of the temperature concept to deal with out-of-equilibrium systems have been discussed in several publications, useful reviews of which are given in Refs. Casas-Vazquez and Jou, 2003; Powles _et al._, 2005; Leuzzi, 2009; Puglisi _et al._, 2017; Zhang _et al._, 2019. Non-equilibrium temperatures generally attempt to relate the non-equilibrium system to its thermal equilibrium properties. This paper and its companion Saw _et al._ (2023), henceforth referred to as Paper I, propose two applications of the so-called configurational temperature ${T}_{\rm conf}$ Landau and Lifshitz (1958); Rugh (1997); Powles _et al._ (2005); Himpel and Melzer (2019) to active-matter models, both of which are based on a different philosophy. Paper I showed that ${T}_{\rm conf}$ defines an energy scale, which can be used for tracing out lines of approximately invariant physics of the 3d Kob-Andersen binary Lennard-Jones model with active Ornstein-Uhlenbeck dynamics. The present paper shows that a similar procedure applies for the 2d Yukawa model with active Brownian dynamics (ABP), after which we proceed to the main focus: using ${T}_{\rm conf}$ for measuring how far an active-matter system is from thermal equilibrium.
For an ordinary Hamiltonian system in thermal equilibrium, the temperature $T$ is identical to the configurational temperature ${T}_{\rm conf}$ that is defined Rugh (1997); Powles _et al._ (2005) as follows. If the system consists of $N$ particles with collective coordinate vector $\mathbb{R}\equiv(\mathbb{r}_{1},...,\mathbb{r}_{N})$ and potential-energy function $U(\mathbb{R})$, one defines $k_{B}{T}_{\rm conf}\equiv\langle(\nabla U)^{2}\rangle/\langle\nabla^{2}U\rangle$. Here $k_{B}$ is the Boltzmann constant, $\nabla$ is the gradient operator, and the sharp brackets denote canonical-ensemble averages. It is straightforward to prove that $T={T}_{\rm conf}$ in equilibrium Landau and Lifshitz (1958), see, e.g., Paper I. Approaching the thermodynamic limit, the relative fluctuations of both the numerator and the denominator of ${T}_{\rm conf}$ go to zero. Thus if one defines an $\mathbb{R}$-dependent configurational temperature by

$k_{B}{T}_{\rm conf}(\mathbb{R})\,\equiv\,\frac{(\nabla U(\mathbb{R}))^{2}}{\nabla^{2}U(\mathbb{R})}\,,$ (1)

the identity ${T}_{\rm conf}(\mathbb{R})\cong T$ applies in thermal equilibrium in the sense that deviations vanish as $N\to\infty$. Because configurations with $\nabla^{2}U(\mathbb{R})\leq 0$ become less likely as $N\to\infty$, the fact that Eq. (1) is not defined for such configurations does not present a serious problem.

The derivation and justification of the configurational temperature ${T}_{\rm conf}$ are based on the fact that the probability of configuration $\mathbb{R}$ in the canonical ensemble is proportional to $\exp(-U(\mathbb{R})/k_{B}T)$ Landau and Lifshitz (1958); Powles _et al._ (2005); Saw _et al._ (2023). This is irrelevant, however, for the property demonstrated in Paper I that ${T}_{\rm conf}(\mathbb{R})$ may be used for tracing out lines of invariant structure and dynamics in the phase diagram of active-matter models that involve a potential-energy function obeying hidden scale invariance Dyre (2014). This is the symmetry whereby the ordering of configurations according to their potential energy at a given density is maintained when they are scaled uniformly to a different density. Hidden scale invariance applies to a good approximation for many well-known potentials, e.g., systems defined by the Lennard-Jones and Yukawa interactions, density-functional derived atomic interactions, and simple molecular models Gnan _et al._ (2009); Ingebrigtsen _et al._ (2012); Schrøder and Dyre (2014); Hummel _et al._ (2015); Dyre (2018).

This paper proposes an application of ${T}_{\rm conf}$ to active-matter models, which addresses the problem of quantifying how far a system is from ordinary canonical-ensemble thermal equilibrium. This question is important because only if the system in question is close to thermal equilibrium does it make good sense to refer to the temperature of the corresponding canonical-ensemble equilibrium system as a characteristic of the active-matter system. As discussed in the next section, the ratio between the global “systemic” temperature ${T}_{\rm s}$ and the “local” temperature ${T}_{\rm conf}$ provides such a measure. Section III sets the stage by detailing one example, the 2d Yukawa model with active Brownian particle dynamics. Section IV presents data for the radial distribution function of Kob-Andersen and 2d Yukawa active-matter models, confirming that when ${T}_{\rm s}/{T}_{\rm conf}$ is close to unity, the structure is close to that of thermal equilibrium. Sec. IV also evaluates a standard entropy-production-based measure of deviations from thermal equilibrium and compares it to the proposed new measure.
Section V shows that the new measure is roughly constant along the motility-induced phase-separation line, which is consistent with the reasonable assumption that all state points close to this line in the non-MIPS phase are equally far from equilibrium. Finally, Sec. VI summarizes Papers I and II.

## II How far is a given active-matter system from thermal equilibrium?

The investigations of Papers I and II are limited to active-matter point-particle models characterized by a potential-energy function. Quantifying the degree of non-equilibrium is usually done by calculating some form of dissipation (entropy production). The idea is that since the entropy production is zero in thermal equilibrium, this quantity measures how far a given system is from thermal equilibrium Fodor _et al._ (2016); Flenner and Szamel (2020); O’Byrne _et al._ (2022). Such measures can be applied to both active-matter models and driven Hamiltonian systems. A fundamental issue with these measures is the following: using a quantity that goes to zero in some limit to quantify the deviation from that limit does not by itself identify when deviations from equilibrium are to be regarded as “large”. If deviations from thermal equilibrium are instead quantified by means of a quantity that goes to unity in the equilibrium limit, deviations from equilibrium are “small” whenever that quantity does not deviate substantially from unity and “large” otherwise.

The configurational temperature is local in the sense that when regarded as a function of $\mathbb{R}$, it only depends on how the potential energy $U(\mathbb{R})$ varies in the immediate surroundings. Note that “local” here refers to the $2N$- or $3N$-dimensional configuration space, not to the two- or three-dimensional space in which the active particles move. This locality means that by evaluating ${T}_{\rm conf}$ for a passive system’s configuration at a given time, one cannot determine whether the system is in thermal equilibrium corresponding to the temperature $T={T}_{\rm conf}(\mathbb{R})$. For instance, for an aging glass annealed at temperature $T$, ${T}_{\rm conf}(\mathbb{R})\cong T$ applies already after a time on the phonon time scale, i.e., long before equilibrium has been reached Powles _et al._ (2005). A completely different, global temperature concept is the systemic temperature ${T}_{\rm s}$. This quantity was introduced for generalizing isomorph theory to out-of-equilibrium conditions Dyre (2020), but ${T}_{\rm s}$ may be introduced for any system as the equilibrium canonical-ensemble temperature of the Hamiltonian system at the same density and average potential energy as those of the out-of-equilibrium system. In thermal equilibrium one has ${T}_{\rm conf}={T}_{\rm s}=T$.

The idea is to use the ratio of global to local temperature, ${T}_{\rm s}/{T}_{\rm conf}$, for quantifying how far an active-matter system is from thermal equilibrium. We showed in Paper I that the ratio ${T}_{\rm s}/{T}_{\rm conf}$ is predicted to be constant along active-matter isomorphs. Since structure and dynamics are also invariant along both active-matter isomorphs and the corresponding Hamiltonian-system isomorphs, it is consistent to assume that ${T}_{\rm s}/{T}_{\rm conf}$ measures how far the system is from thermal equilibrium.
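In a simulation, both quantities are directly accessible: ${T}_{\rm conf}$ follows from Eq. (1) applied to a single configuration, while ${T}_{\rm s}$ is read off by matching the average potential energy against equilibrium runs. A minimal numpy sketch of the ${T}_{\rm conf}$ part for a pair potential $v(r)$ (the callables `v1`, `v2` for $v'(r)$ and $v''(r)$, and the function interface, are our own illustrative choices, assuming a square or cubic periodic box):

```python
import numpy as np

def t_conf(pos, box, v1, v2, d=3):
    """Configurational temperature of one configuration, Eq. (1):
    k_B T_conf(R) = |grad U|^2 / (laplacian U) for a pair potential v(r)."""
    n = len(pos)
    F = np.zeros_like(pos)          # F_i = -grad_i U
    lap = 0.0                       # laplacian of U over all dN coordinates
    for i in range(n - 1):
        rij = pos[i + 1:] - pos[i]                  # vectors from particle i to each j > i
        rij -= box * np.round(rij / box)            # minimum-image convention
        r = np.sqrt((rij ** 2).sum(axis=1))
        fij = (v1(r) / r)[:, None] * rij            # pair contribution to the force on i
        F[i] += fij.sum(axis=0)
        F[i + 1:] -= fij                            # Newton's third law
        lap += 2.0 * np.sum(v2(r) + (d - 1) * v1(r) / r)   # both particles of each pair
    return np.sum(F ** 2) / lap                     # |grad U|^2 = sum_i |F_i|^2
```

Determining the ratio ${T}_{\rm s}/{T}_{\rm conf}$ then only requires, in addition, a table of $\langle U\rangle(T)$ from equilibrium simulations to invert for ${T}_{\rm s}$.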
## III The Yukawa Active Brownian-particle model in two dimensions

This section details the ABP model in two dimensions based on the single-component Yukawa pair potential Yukawa (1935); Meacock _et al._ (2021),

$v(r)\,=\,\frac{Q^{2}\,\sigma}{r}\,e^{-r/(\lambda\sigma)}\,.$ (2)

This potential obeys hidden scale invariance Dyre (2014); Veldhorst _et al._ (2015); Tolias and Castello (2019), so a procedure for identifying active-matter isomorphs analogous to that introduced in Paper I for the active Ornstein-Uhlenbeck particle (AOUP) model should apply here as well. The idea is that ${T}_{\rm s}/{T}_{\rm conf}$, as mentioned, is predicted to be invariant along active-matter isomorphs, where the deviations from thermal equilibrium are also expected to be invariant. If $\mathbb{r}_{i}$ is the position vector of particle $i$, the ABP equations of motion in two dimensions are

$\dot{\mathbb{r}}_{i}\,=\,\mu\mathbb{F}_{i}+\bm{\xi}_{i}(t)+v_{0}\,\mathbf{o}_{i}(t)\,.$ (3)

Here, $\mu$ is the mobility, $\mathbb{F}_{i}(\mathbb{R})=-\nabla_{i}U(\mathbb{R})$ is the force on particle $i$, $\bm{\xi}_{i}(t)$ is a Gaussian random white-noise vector, $v_{0}$ is a constant velocity, and $\mathbf{o}_{i}(t)=(\cos(\theta_{i}(t)),\sin(\theta_{i}(t)))$ is a stochastic unit vector. The direction-vector angle $\theta_{i}(t)$ is controlled by a Gaussian white noise of magnitude $D_{r}$,

$\langle\dot{\theta}_{i}(t)\dot{\theta}_{j}(t^{\prime})\rangle\,=\,2D_{r}\delta_{ij}\,\delta(t-t^{\prime})\,,$ (4)

and the white-noise vector has magnitude $D_{t}$,

$\langle\bm{\xi}_{i}^{\alpha}(t)\bm{\xi}_{j}^{\beta}(t^{\prime})\rangle\,=\,2D_{t}\delta_{ij}\delta_{\alpha\beta}\delta(t-t^{\prime})\,.$ (5)

The ABP model has four parameters. Regarding $\mu$ as a system-specific constant, the dimensionless versions of the three other parameters must be constant in order to have invariant physics when the density is changed. Following the procedure of Sec. III of Paper I, we take as length unit $l_{0}=\rho^{-1/2}$ (the exponent is $-1/2$ and not $-1/3$ as in Paper I because the model here is two-dimensional) and as time unit $t_{0}=1/D_{r}$, and write the equation of motion in terms of the corresponding reduced variables. Substituting $\mathbb{r}_{i}=\rho^{-1/2}\tilde{\mathbb{r}}_{i}$ and $t=\tilde{t}/D_{r}$ into Eq. (3) and making use of Eq. (8) of Paper I and the definition of the systemic temperature ${T}_{\rm s}$ Dyre (2020), in which ${S}_{\rm ex}(\mathbb{R})$ is the microscopic excess-entropy function Schrøder and Dyre (2014); Dyre (2020),

${T}_{\rm s}(\mathbb{R})\,\equiv\,{T}_{\rm eq}(\rho,{S}_{\rm ex}(\mathbb{R}))={T}_{\rm eq}(\rho,U(\mathbb{R}))\,,$ (6)

we get

$\dot{\tilde{\mathbb{r}}}_{i}\,=\,-\mu\rho({T}_{\rm s}/D_{r})\tilde{\nabla}_{i}{S}_{\rm ex}({\tilde{\mathbb{R}}})+\tilde{\bm{\xi}}_{i}(t)+\tilde{v}_{0}\,\mathbf{o}_{i}(t)\,.$ (7)

Here $\tilde{v}_{0}=(\rho^{1/2}/D_{r})v_{0}$, $\tilde{\bm{\xi}}_{i}=(\rho^{1/2}/D_{r})\bm{\xi}_{i}$, ${T}_{\rm s}$ is shorthand for ${T}_{\rm s}(\mathbb{R})$,

$\langle\tilde{\bm{\xi}}_{i}^{\alpha}(t)\tilde{\bm{\xi}}_{j}^{\beta}(t^{\prime})\rangle\,=\,2\rho(D_{t}/D_{r})\delta_{ij}\delta_{\alpha\beta}\delta(\tilde{t}-\tilde{t}^{\prime})\,,$ (8)

and dots now mark the derivative with respect to $\tilde{t}$,

$\langle\dot{\theta}_{i}(t)\dot{\theta}_{j}(t^{\prime})\rangle\,=\,2\delta_{ij}\,\delta(\tilde{t}-\tilde{t}^{\prime})\,.$ (9)

These equations are invariant under a change of density if $\mu\rho{T}_{\rm s}/D_{r}$, $\rho D_{t}/D_{r}$, and $\tilde{v}_{0}$ are kept constant.
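Before continuing with the density-scaling consequences of this invariance (derived immediately below), the following is a minimal sketch of how Eqs. (3)-(5) could be integrated with an Euler-Maruyama scheme. This is our illustration, not the GPU code used for the simulations; the force-free usage example and time step are illustrative assumptions, while the parameter values match the reference state point of Table 1.

```python
import numpy as np

def abp_step(pos, theta, force, dt, rng, mu=1.0, v0=25.0, Dr=3.0, Dt=1.0):
    """One Euler-Maruyama step of Eq. (3), with rotational noise of magnitude
    Dr (Eq. (4)) and translational noise of magnitude Dt (Eq. (5))."""
    ori = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # o_i(t)
    pos = pos + (mu * force(pos) + v0 * ori) * dt \
              + np.sqrt(2.0 * Dt * dt) * rng.standard_normal(pos.shape)
    theta = theta + np.sqrt(2.0 * Dr * dt) * rng.standard_normal(theta.shape)
    return pos, theta

# Force-free particles at the reference parameters (rho, Dr, Dt, v0) = (1, 3, 1, 25):
rng = np.random.default_rng(1)
pos, theta = np.zeros((256, 2)), rng.uniform(0, 2 * np.pi, 256)
for _ in range(1000):
    pos, theta = abp_step(pos, theta, lambda p: 0.0, dt=1e-4, rng=rng)
```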
Since $\mu$ is a (system-specific) constant, this implies the following, in which the subscript zero refers to a reference state of density $\rho_{0}$ and ${T}_{\rm s}(\rho)\equiv{T}_{\rm eq}(\rho,{S}_{\rm ex}({\tilde{\mathbb{R}}}))$ can be used instead of ${T}_{\rm s}(\mathbb{R})$ because fluctuations go to zero in the thermodynamic limit:

$D_{r}\,=\,D_{r,0}\,\frac{\rho}{\rho_{0}}\,\frac{{T}_{\rm s}(\rho)}{{T}_{\rm s}(\rho_{0})}\,,\quad D_{t}\,=\,D_{t,0}\,\frac{{T}_{\rm s}(\rho)}{{T}_{\rm s}(\rho_{0})}\,,\quad v_{0}\,=\,v_{0,0}\,\left(\frac{\rho}{\rho_{0}}\right)^{1/2}\frac{{T}_{\rm s}(\rho)}{{T}_{\rm s}(\rho_{0})}\,.$ (10)

By the same argument as in Sec. III of Paper I one can here replace the ${T}_{\rm s}$ ratios by ${T}_{\rm conf}$ ratios, leading to

$D_{r}\,=\,D_{r,0}\,\frac{\rho}{\rho_{0}}\,\frac{{T}_{\rm conf}\left((\rho_{0}/\rho)^{1/2}\mathbb{R}_{0}\right)}{{T}_{\rm conf}(\mathbb{R}_{0})}\,,\quad D_{t}\,=\,D_{t,0}\,\frac{{T}_{\rm conf}\left((\rho_{0}/\rho)^{1/2}\mathbb{R}_{0}\right)}{{T}_{\rm conf}(\mathbb{R}_{0})}\,,\quad v_{0}\,=\,v_{0,0}\,\left(\frac{\rho}{\rho_{0}}\right)^{1/2}\frac{{T}_{\rm conf}\left((\rho_{0}/\rho)^{1/2}\mathbb{R}_{0}\right)}{{T}_{\rm conf}(\mathbb{R}_{0})}\,.$ (11)

In passing we note that while the Péclet number $v_{0}/\sqrt{2D_{r}D_{t}}$ Bechinger _et al._ (2016); Hecht _et al._ (2021) is invariant along the active-matter isomorph, this requirement is not enough to determine how to scale the model parameters; Péclet-number invariance is thus a necessary, but not sufficient, condition for identifying an active-matter isomorph.

$\rho$ | $D_{r}$ | $D_{t}$ | $v_{0}$ | ${T}_{\rm conf}$
---|---|---|---|---
$1.0$ | $3.000$ | $1.000$ | $25.00$ | $1.489$
$1.5$ | $12.37$ | $2.750$ | $84.20$ | $4.093$
$2.0$ | $30.43$ | $5.072$ | $179.3$ | $7.550$
$2.5$ | $58.13$ | $7.751$ | $306.4$ | $11.54$
$3.0$ | $95.82$ | $10.65$ | $461.0$ | $15.85$

Table 1: Values of $\rho$, $D_{r}$, $D_{t}$, $v_{0}$, and ${T}_{\rm conf}$ along the active-matter isomorph of the 2d Yukawa ABP model determined by Eq. (11). By means of Eq. (1) the configurational temperature ${T}_{\rm conf}(\rho)$ is determined from a single configuration $\mathbb{R}_{0}$ scaled to density $\rho$.

To validate the existence of active-matter isomorphs according to the above prediction, we simulated $N=10000$ particles of the 2d Yukawa system with $Q=50$, $\lambda=0.16$, $\sigma=1$ defining the length unit, and a cutoff at $4.5\sigma$. The time step used is given by $\Delta t=\Delta\tilde{t}(D_{t}/{v_{0}}^{2})$, where $\Delta\tilde{t}=0.0625$ so that $\Delta t=0.0001$ at the reference state point defined by $(\rho,D_{r},D_{t},v_{0})=(1.0,3.0,1.0,25.0)$. The simulations were carried out on GPU cards using an in-house code. An active-matter isomorph was traced out for densities varying by a factor of three using Eq. (11) for a configuration $\mathbb{R}_{0}$ selected from a steady-state simulation at the reference state point. Table 1 gives the parameters obtained from Eq. (11).
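As a minimal illustration of how Eq. (11) turns a single reference configuration into a whole isomorph (generating parameter sets like those of Table 1), the sketch below scales a stored configuration to each target density and applies the ${T}_{\rm conf}$ ratios. It assumes a `t_conf(pos, box)` routine like the one sketched in Sec. II above; the routine and the way `R0` is obtained are assumptions for illustration.

```python
import numpy as np

def isomorph_parameters(R0, box0, rho, rho0=1.0, Dr0=3.0, Dt0=1.0, v00=25.0,
                        t_conf=None):
    """Eq. (11): model parameters at density rho from a reference configuration
    R0 at density rho0, using a 2d uniform coordinate scaling."""
    s = (rho0 / rho) ** 0.5                       # scale factor to density rho
    ratio = t_conf(s * R0, s * box0) / t_conf(R0, box0)
    Dr = Dr0 * (rho / rho0) * ratio
    Dt = Dt0 * ratio
    v0 = v00 * (rho / rho0) ** 0.5 * ratio
    return Dr, Dt, v0

# Example usage, given a steady-state configuration R0 in a box of side box0:
# for rho in (1.5, 2.0, 2.5, 3.0):
#     print(rho, isomorph_parameters(R0, box0, rho, t_conf=t_conf))
```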
Figure 1: Structure and dynamics of the Yukawa ABP model in two dimensions. (a) The left panel shows the RDF as a function of the pair distance $r$ along the active-matter isomorph, the middle panel shows the same data in reduced units, and the right panel shows the reduced RDF for the same parameters (Table 1) at the reference density $\rho=1.0$. (b) The left panel shows the MSD as a function of time $t$ along the active-matter isomorph, the middle panel shows the same data in reduced units, where the dashed line marks slope unity, i.e., ordinary diffusion; the right panel shows the reduced MSD for the same parameters at the reference state-point density $\rho=1.0$.

Figure 1(a) shows the radial distribution function (RDF). The left two panels show the RDF along the active-matter isomorph as a function of $r$ and $\tilde{r}$, respectively. For comparison, the right panel shows the results for the same parameters at the reference state-point density $\rho=1.0$. We find good invariance of the reduced RDF along the active-matter isomorph. The same applies to the reduced mean-square displacement (MSD) shown in (b).

## IV Deviations from thermal equilibrium quantified by ${T}_{\rm s}/{T}_{\rm conf}$

Figure 2: Determination of the ratio of systemic to configurational temperature, ${T}_{\rm s}/{T}_{\rm conf}$, quantifying how far an active-matter system is from thermal equilibrium. (a) shows data for ${T}_{\rm s}$ and ${T}_{\rm conf}$ for the 3d Kob-Andersen AOUP model (Paper I, Saw _et al._ (2023)) as functions of $\tau$ with the remaining model parameters kept fixed. (b) shows ${T}_{\rm s}/{T}_{\rm conf}$ for the same data. For $\tau$ values around $10^{-4}$ the system begins to move away from equilibrium, and for $\tau>10^{-3}$ significant deviations from equilibrium are predicted. (c) shows data for ${T}_{\rm s}$ and ${T}_{\rm conf}$ for the 2d Yukawa AOUP model as functions of $\tau$ with the remaining model parameters kept fixed. (d) shows ${T}_{\rm s}/{T}_{\rm conf}$ for the same data. For $\tau$ values above $10^{-4}$ the system starts to deviate from equilibrium. (e) shows data for ${T}_{\rm s}$ and ${T}_{\rm conf}$ for the 2d Yukawa ABP model as functions of $v_{0}$ with the remaining model parameters kept fixed. (f) shows ${T}_{\rm s}/{T}_{\rm conf}$ for the same data. For $v_{0}$ values around $10$ the system begins to move away from equilibrium.

Figure 2 gives data for the systemic and configurational temperatures of different active-matter models, starting with the Kob-Andersen model studied in Paper I. Figure 2(a) shows the systemic temperature ${T}_{\rm s}$ (black symbols) and the configurational temperature ${T}_{\rm conf}$ (red symbols) for the Kob-Andersen AOUP active-matter model as functions of the colored-noise correlation time $\tau$ for fixed values of the other model parameters. As mentioned, ${T}_{\rm s}$ is determined by identifying the equilibrium temperature at which a standard MD simulation of the system has the same average potential energy as the AOUP system. The system approaches an equilibrium system for $\tau\to 0$, corresponding to the canonical-ensemble temperature $T=1.6$. Figure 2(b) plots the ratio ${T}_{\rm s}/{T}_{\rm conf}$. We see that for values of $\tau$ above $10^{-4}$, the system starts to move away from thermal equilibrium. Figure 2(c) shows ${T}_{\rm s}$ and ${T}_{\rm conf}$ as functions of $\tau$ for the 2d Yukawa AOUP model for fixed values of the other model parameters.
Both ${T}_{\rm s}$ and ${T}_{\rm conf}$ converge to $5$ as $\tau\to 0$, confirming that $T=5$ is the equilibrium Brownian-dynamics temperature corresponding to the parameters $D_{t}=5$, $\mu=1$. Figure 2(d) shows ${T}_{\rm s}/{T}_{\rm conf}$; we see that for $\tau$ above $10^{-4}$, the system begins to deviate from thermal equilibrium. Figures 2(e) and (f) show ${T}_{\rm s}$ and ${T}_{\rm conf}$ and their ratio for the 2d Yukawa ABP model as functions of $v_{0}$ for fixed values of the other model parameters; here $v_{0}>10$ is the approximate criterion for deviations from equilibrium.

Figure 3: RDFs of active-matter states predicted to be close to (left column) and not close to (right column) thermal equilibrium. The red curves are the active-matter data and the black dashed lines are the RDFs of the corresponding equilibrium system for $T={T}_{\rm s}$. (a)-(d) show results for the AA and BB RDFs of the Kob-Andersen AOUP model for $\tau=10^{-4}$ and $\tau=4\cdot 10^{-2}$ (red curves) corresponding to ${T}_{\rm s}/{T}_{\rm conf}=1.13$ and ${T}_{\rm s}/{T}_{\rm conf}=6.59$. (e) and (f) show results for the 2d Yukawa AOUP model at states with $\tau=10^{-4}$ and $\tau=8\cdot 10^{-3}$ corresponding to ${T}_{\rm s}/{T}_{\rm conf}=1.09$ and ${T}_{\rm s}/{T}_{\rm conf}=2.59$. (g) and (h) show results for the 2d Yukawa ABP model at states with $v_{0}=10$ and $v_{0}=50$ corresponding to ${T}_{\rm s}/{T}_{\rm conf}=1.13$ and ${T}_{\rm s}/{T}_{\rm conf}=2.18$.

By reference to the data in Fig. 2, Fig. 3 compares the RDFs of states predicted to be close to and not close to thermal equilibrium. Each subfigure reports ${T}_{\rm s}/{T}_{\rm conf}$; results for the cases where ${T}_{\rm s}/{T}_{\rm conf}$ is close to unity are found in the left column. The RDFs are compared to the equilibrium RDF for $T={T}_{\rm s}$, i.e., the temperature corresponding to the potential energy of the active-matter configurations. The black dashed lines give the equilibrium RDF; the red curves are the active-matter RDFs. Figure 3(a)-(d) show data for $\textrm{RDF}_{\textrm{AA}}$ and $\textrm{RDF}_{\textrm{BB}}$ of the Kob-Andersen AOUP model studied in Paper I; $\textrm{RDF}_{\textrm{AB}}$ is similar to the AA RDF (data not shown). Figures 3(e) and (f) give data for the 2d Yukawa AOUP model, while (g) and (h) give data for the 2d Yukawa ABP model (Sec. III). Figure 3 confirms that when the ratio ${T}_{\rm s}/{T}_{\rm conf}$ is close to unity, the configurations of the active-matter model are close to thermal equilibrium configurations.
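The RDF comparisons above rest on a standard pair-distance histogram. For completeness, here is a minimal 2d estimator of our own (periodic minimum-image distances, an assumed bin count, and an $O(N^2)$ distance computation), not the analysis code used for Fig. 3.

```python
import numpy as np

def rdf_2d(pos, box, nbins=200, rmax=None):
    """Radial distribution function g(r) for a 2d periodic configuration."""
    n = len(pos)
    rmax = rmax if rmax is not None else box / 2
    rij = pos[:, None, :] - pos[None, :, :]
    rij -= box * np.round(rij / box)                  # minimum image
    r = np.linalg.norm(rij, axis=-1)
    r = r[np.triu_indices(n, k=1)]                    # unordered pairs i < j
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    shell = np.pi * (edges[1:]**2 - edges[:-1]**2)    # 2d shell areas
    g = 2.0 * box**2 * hist / (n * (n - 1) * shell)   # normalize by ideal gas
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, g
```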
Figure 4: Using the ratio of systemic to configurational temperature to quantify how far the 2d Yukawa ABP system is from thermal equilibrium (corresponding to $v_{0}=0$ in Eq. (3)); the parameters kept fixed here are $\rho=1$, $D_{r}=3$, and $D_{t}=2$. (a) shows how the dissipation (“Power”) varies with $v_{0}$ (MD units). From Fig. 2(e) we see that when $v_{0}\to 0$, the two temperatures become identical (equal to $2$ because $D_{t}=2$ corresponds to that thermal-equilibrium temperature); at the same time the dissipation goes to zero. (b) and (c) show the power as a function of ${T}_{\rm s}/{T}_{\rm conf}$. The quantity ${T}_{\rm s}/{T}_{\rm conf}$ goes to unity as thermal equilibrium is approached, which presents an advantage compared to using the dissipated power for quantifying deviations from thermal equilibrium.

Next we compare to a previously proposed measure of deviations from thermal equilibrium, focusing on the 2d Yukawa ABP model. Figure 4(a) shows the dissipated “active” power, i.e., the average of the scalar product of the particle velocity with the ${v}_{0}\,\mathbf{o}_{i}(t)$ term of Eq. (3), plotted as a function of $v_{0}$, keeping the three other model parameters constant. From data like these one cannot easily determine when the system is expected to be close to thermal equilibrium. Figure 4(b) shows the dissipated power plotted against ${T}_{\rm s}/{T}_{\rm conf}$, demonstrating a one-to-one correspondence between the two measures of deviations from thermal equilibrium. Figure 4(b) also includes data for the reduced-unit power (red points), which shows an interesting, almost linear proportionality to ${T}_{\rm s}/{T}_{\rm conf}-1$ for which we have no good explanation. Finally, Fig. 4(c) plots the same data on a log-linear scale, which further illustrates that measuring deviations from thermal equilibrium in terms of a quantity that is zero in equilibrium is not useful for distinguishing between weak and stronger deviations from equilibrium.

## V The MIPS boundary of the 2d ABP Yukawa model

Figure 5: $(\rho,D_{t})$ phase diagrams showing MIPS state points as red stars and homogeneous state points as black squares (green circles are gas-like states of minor relevance here). The MIPS phase consists of coexisting phases that differ in density; the denser phase is a “solid” phase of hexagonal crystal structure. The reference state point $(\rho,D_{r},D_{t},v_{0})=(1.01,3,1,367)$ is located in the homogeneous (solid) phase close to the phase boundary. From this an active-matter isomorph was traced out using Eq. (11) (black line). The figure gives data in the $(\rho,D_{t})$ phase diagram with $D_{r}$ and $v_{0}$ given by Eq. (11) at density $\rho$. The blue dashed lines mark $\pm 5$% variations in density. We see that the phase-transition line is an approximate active-matter isomorph, which is consistent with the degree of deviation from thermal equilibrium being constant along this line.

For certain parameters of the 2d ABP Yukawa model, motility-induced phase separation (MIPS) is observed. This is the striking active-matter phenomenon that even a purely repulsive system may phase separate into high- and low-density phases Vicsek _et al._ (1995); Das _et al._ (2014); Cates and Tailleur (2015); Ramaswamy (2017); Geyer _et al._ (2019); Das _et al._ (2020); Merrigan _et al._ (2020). It is reasonable to assume that, when the phase transition is approached from the homogeneous phase, the deviations from thermal equilibrium are the same for all parameter values. Thus, if ${T}_{\rm s}/{T}_{\rm conf}$ indeed provides a measure of the deviation from equilibrium, this quantity should be roughly constant approaching the MIPS phase transition. Since the 2d Yukawa ABP model obeys hidden scale invariance, this means that the phase transition should approximately follow an isomorph (because the physics is approximately invariant along an active-matter isomorph, such a curve cannot cross the MIPS boundary; compare Refs. Gnan _et al._ (2009), Costigliola _et al._ (2016), and Pedersen _et al._ (2016)). Thus, if one has identified a state point in the homogeneous solid phase close to the MIPS boundary and uses this as reference state point for generating an active-matter isomorph, all state points identified by Eq. (11) should be close to the MIPS boundary. A similar line of reasoning was validated for the melting line of the ordinary Lennard-Jones system Costigliola _et al._ (2016); Pedersen _et al._ (2016).
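The simulations described next detect MIPS by visual inspection. A crude automated proxy, which is our own illustrative assumption rather than the protocol used in this work, is to histogram coarse-grained local densities and flag the giant density fluctuations that accompany phase separation; both the cell size and the threshold below are arbitrary.

```python
import numpy as np

def local_densities(pos, box, cell=5.0):
    """Coarse-grained local densities on a square grid with cells of side ~cell."""
    nx = max(1, int(box / cell))
    counts, _, _ = np.histogram2d(pos[:, 0] % box, pos[:, 1] % box,
                                  bins=nx, range=[[0, box], [0, box]])
    return counts.ravel() / (box / nx) ** 2

def looks_phase_separated(pos, box, cell=5.0, factor=3.0):
    """Flag MIPS when cell-count fluctuations far exceed the homogeneous level."""
    nx = max(1, int(box / cell))
    counts = local_densities(pos, box, cell) * (box / nx) ** 2
    # for a homogeneous (Poisson-like) state, var(counts) ~ mean(counts);
    # 'factor' is an arbitrary illustrative threshold
    return counts.var() > factor * counts.mean()
```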
Figure 6: Structure and dynamics probed along the active-matter isomorph approximately delimiting the MIPS phase boundary of the 2d ABP Yukawa system, slightly into the homogeneous phase (Fig. 5). (a) and (b) show log-log plots of the RDF and MSD, respectively; (c) and (d) show the same data in reduced units.

We studied the 2d Yukawa model with parameters $Q=1000$ and $\lambda=0.12$ with a cutoff at $4.2\sigma$ and $(D_{r},D_{t},v_{0})=(3,1,367)$, by systematically decreasing the density from a high value well within the homogeneous solid phase. Initially, a system of 40000 particles was simulated for 40 million time steps, and the occurrence of MIPS was detected by visual inspection. The lowest density at which MIPS was not yet observed was $\rho=1.01$. We then used $(\rho,D_{r},D_{t},v_{0})=(1.01,3,1,367)$ as reference state point for generating an active-matter isomorph according to Eq. (11). This is the full black line in Fig. 5, which shows the results of investigating the existence of MIPS in a $(\rho,D_{t})$ phase diagram (along the isomorph the remaining parameters $D_{r}(\rho)$ and $v_{0}(\rho)$ are given by Eq. (11)). The black squares denote state points of the homogeneous solid phase, the red stars denote state points where MIPS appears, and the green circles denote gas-phase state points. The blue dashed lines mark the active-matter isomorph $\pm$5% in density. We see that the phase-transition line is predicted reasonably well, though not accurately; this is consistent with the approximate nature of the argument. Nevertheless, the simulations demonstrate that Eq. (11) can be used for roughly identifying the MIPS phase boundary. This confirms the physical expectation that the deviation from thermal equilibrium is virtually constant along the phase-transition line, because the latter is an approximate active-matter isomorph characterized by constant ${T}_{\rm s}/{T}_{\rm conf}$. In order to confirm that the black line of Fig. 5 is a line of approximately invariant physics, i.e., an active-matter isomorph, we show in Fig. 6 how structure and dynamics vary along it. The upper figures show the RDF and MSD in standard units; the lower figures show the same data in reduced units.

## VI Summary of Papers I & II and Outlook

The configurational-temperature concept has traditionally been used in connection with liquid models based on Newton’s laws of motion with forces derived from a potential-energy function $U(\mathbb{R})$ Powles _et al._ (2005). Indeed, the derivation of ${T}_{\rm conf}$ refers to the canonical ensemble, and for this reason it is not obvious that ${T}_{\rm conf}$ is relevant also for non-Hamiltonian and non-time-reversible systems like those of active matter. We have suggested that the configurational temperature may be useful also in that context and have presented two applications of ${T}_{\rm conf}$. Paper I demonstrated how ${T}_{\rm conf}$ may be used for tracing out lines of approximately invariant structure and dynamics in the phase diagram of models described by AOUP dynamics if the potential-energy function obeys hidden scale invariance; such lines are referred to as active-matter isomorphs. Specifically, Paper I gave the equations for how to change the model parameters with density in order to have invariant physics, and Paper II derived a similar procedure for ABP models.
In both cases, by effectively reducing the number of model parameters by one, this approach provides a tool for simplifying the exploration of phase diagrams of active-matter models with hidden scale invariance of the potential-energy function. For the AOUP and the ABP models the ratio of systemic to configurational temperature is predicted to be constant along an active-matter isomorph. Since both the active-matter physics and the corresponding passive-matter physics are invariant along their common systemic isomorph (defined as the thermal-equilibrium isomorph mapped into the density versus systemic-temperature phase diagram Dyre (2020)), this is consistent with the present paper’s proposal that ${T}_{\rm s}/{T}_{\rm conf}$ quantifies how far a given active-matter system is from thermal equilibrium. The ratio ${T}_{\rm s}/{T}_{\rm conf}$ is defined for any active-matter system based on a potential-energy function, whether or not hidden scale invariance applies. We suggest that an active-matter system may be regarded as “close to thermal equilibrium” whenever ${T}_{\rm s}/{T}_{\rm conf}$ is close to unity and “far from thermal equilibrium” whenever this is not the case. We illustrated the use of ${T}_{\rm s}/{T}_{\rm conf}$ for quantifying deviations from thermal equilibrium by showing that when this quantity is close to unity, the RDF of the active-matter system is close to that of the corresponding thermal-equilibrium system with $T={T}_{\rm s}$. Moreover, ${T}_{\rm s}/{T}_{\rm conf}$ is roughly constant along the motility-induced phase separation (MIPS) boundary, along which the deviations from equilibrium are expected not to vary; compare Fig. 6.

The advantages of using the quantity ${T}_{\rm s}/{T}_{\rm conf}$ for quantifying how far an active-matter system is from thermal equilibrium are threefold:

* A measure that converges to unity as thermal equilibrium is approached makes it possible to judge whether a given deviation from equilibrium is “small” or “large”; a measure that converges to zero as equilibrium is approached offers no obvious way to do so.
* ${T}_{\rm s}/{T}_{\rm conf}$ is easy to evaluate because it can be determined from a single configuration $\mathbb{R}$ of a steady-state simulation of the active-matter system in conjunction with equilibrium simulations of the corresponding Hamiltonian system.
* ${T}_{\rm s}/{T}_{\rm conf}$ is a general measure because this quantity is defined for any system characterized by a potential-energy function, whether or not it arises in the context of an active-matter model. For instance, in the case of a non-linear steady-state shear flow of an ordinary Hamiltonian system, it is also possible to quantify the deviation from thermal equilibrium by means of ${T}_{\rm s}/{T}_{\rm conf}$.

An interesting question that remains to be explored is the following: what is the difference between the cases ${T}_{\rm s}/{T}_{\rm conf}>1$ and ${T}_{\rm s}/{T}_{\rm conf}<1$?

###### Acknowledgements.

This work was supported by the VILLUM Foundation’s Matter grant (16515).

## References

* Casas-Vazquez and Jou (2003) J. Casas-Vazquez and D. Jou, “Temperature in non-equilibrium states: a review of open problems and current proposals,” Rep. Prog. Phys. 66, 1937–2023 (2003).
* Powles _et al._ (2005) J. G. Powles, G. Rickayzen, and D. M. Heyes, “Temperatures: old, new and middle aged,” Mol. Phys. 103, 1361–1373 (2005).
* Leuzzi (2009) L. Leuzzi, “A stroll among effective temperatures in aging systems: Limits and perspectives,” J. Non-Cryst. Solids 355, 686–693 (2009).
* Puglisi _et al._ (2017) A. Puglisi, A. Sarracino, and A. Vulpiani, “Temperature in and out of equilibrium: A review of concepts, tools and attempts,” Phys. Rep. 709-710, 1–60 (2017).
* Zhang _et al._ (2019) D. Zhang, X. Zheng, and M. Di Ventra, “Local temperatures out of equilibrium,” Phys. Rep. 830, 1–66 (2019).
* Saw _et al._ (2023) S. Saw, L. Costigliola, and J. C. Dyre, “Configurational temperature in active matter. I. Lines of invariant physics in the phase diagram of the Ornstein-Uhlenbeck model,” Phys. Rev. E 107, ?? (2023).
* Landau and Lifshitz (1958) L. D. Landau and E. M. Lifshitz, _Statistical Physics_ [Eq. (33.14)] (Pergamon, Oxford, 1958).
* Rugh (1997) H. H. Rugh, “Dynamical approach to temperature,” Phys. Rev. Lett. 78, 772–774 (1997).
* Himpel and Melzer (2019) M. Himpel and A. Melzer, “Configurational temperature in dusty plasmas,” Phys. Rev. E 99, 063203 (2019).
* Dyre (2014) J. C. Dyre, “Hidden scale invariance in condensed matter,” J. Phys. Chem. B 118, 10007–10024 (2014).
* Gnan _et al._ (2009) N. Gnan, T. B. Schrøder, U. R. Pedersen, N. P. Bailey, and J. C. Dyre, “Pressure-energy correlations in liquids. IV. “Isomorphs” in liquid phase diagrams,” J. Chem. Phys. 131, 234504 (2009).
* Schrøder and Dyre (2014) T. B. Schrøder and J. C. Dyre, “Simplicity of condensed matter at its core: Generic definition of a Roskilde-simple system,” J. Chem. Phys. 141, 204502 (2014).
* Hummel _et al._ (2015) F. Hummel, G. Kresse, J. C. Dyre, and U. R. Pedersen, “Hidden scale invariance of metals,” Phys. Rev. B 92, 174116 (2015).
* Ingebrigtsen and Tanaka (2015) T. S. Ingebrigtsen and H. Tanaka, “Effect of size polydispersity on the nature of Lennard-Jones liquids,” J. Phys. Chem. B 119, 11052–11062 (2015).
* Dyre (2018) J. C. Dyre, “Perspective: Excess-entropy scaling,” J. Chem. Phys. 149, 210901 (2018).
* Fodor _et al._ (2016) E. Fodor, C. Nardini, M. E. Cates, J. Tailleur, P. Visco, and F. van Wijland, “How far from equilibrium is active matter?” Phys. Rev. Lett. 117, 038103 (2016).
* Flenner and Szamel (2020) E. Flenner and G. Szamel, “Active matter: Quantifying the departure from equilibrium,” Phys. Rev. E 102, 022607 (2020).
* O’Byrne _et al._ (2022) J. O’Byrne, Y. Kafri, J. Tailleur, and F. van Wijland, “Time irreversibility in active matter, from micro to macro,” Nat. Rev. Phys. 4, 167–183 (2022).
* Dyre (2020) J. C. Dyre, “Isomorph theory beyond thermal equilibrium,” J. Chem. Phys. 153, 134502 (2020).
* Yukawa (1935) H. Yukawa, “On the interaction of elementary particles,” Proc. Phys.-Math. Soc. Jpn. 17, 48–57 (1935).
* Meacock _et al._ (2021) O. J. Meacock, A. Doostmohammadi, K. R. Foster, J. M. Yeomans, and W. M. Durham, “Bacteria solve the problem of crowding by moving slowly,” Nat. Phys. 17, 205–210 (2021).
* Veldhorst _et al._ (2015) A. A. Veldhorst, T. B. Schrøder, and J. C. Dyre, “Invariants in the Yukawa system’s thermodynamic phase diagram,” Phys. Plasmas 22, 073705 (2015).
* Tolias and Castello (2019) P. Tolias and F. L. Castello, “Isomorph-based empirically modified hypernetted-chain approach for strongly coupled Yukawa one-component plasmas,” Phys. Plasmas 26, 043703 (2019).
* Bechinger _et al._ (2016) C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, “Active particles in complex and crowded environments,” Rev. Mod. Phys. 88, 045006 (2016).
* Hecht _et al._ (2021) L. Hecht, J. C. Urena, and B. Liebchen, “An introduction to modeling approaches of active matter,” arXiv:2102.13007 (2021).
* Vicsek _et al._ (1995) T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, “Novel type of phase transition in a system of self-driven particles,” Phys. Rev. Lett. 75, 1226–1229 (1995).
* Das _et al._ (2014) S. K. Das, S. A. Egorov, B. Trefz, P. Virnau, and K. Binder, “Phase behavior of active swimmers in depletants: Molecular dynamics and integral equation theory,” Phys. Rev. Lett. 112, 198301 (2014).
* Cates and Tailleur (2015) M. E. Cates and J. Tailleur, “Motility-induced phase separation,” Ann. Rev. Cond. Mat. Phys. 6, 219–244 (2015).
* Ramaswamy (2017) S. Ramaswamy, “Active matter,” J. Stat. Mech. 054002 (2017).
* Geyer _et al._ (2019) D. Geyer, D. Martin, J. Tailleur, and D. Bartolo, “Freezing a flock: Motility-induced phase separation in polar active liquids,” Phys. Rev. X 9, 031043 (2019).
* Das _et al._ (2020) M. Das, C. F. Schmidt, and M. Murrell, “Introduction to active matter,” Soft Matter 16, 7185–7190 (2020).
* Merrigan _et al._ (2020) C. Merrigan, K. Ramola, R. Chatterjee, N. Segall, Y. Shokef, and B. Chakraborty, “Arrested states in persistent active matter: Gelation without attraction,” Phys. Rev. Research 2, 013260 (2020).
* Costigliola _et al._ (2016) L. Costigliola, T. B. Schrøder, and J. C. Dyre, “Freezing and melting line invariants of the Lennard-Jones system,” Phys. Chem. Chem. Phys. 18, 14678–14690 (2016).
* Pedersen _et al._ (2016) U. R. Pedersen, L. Costigliola, N. P. Bailey, T. B. Schrøder, and J. C. Dyre, “Thermodynamics of freezing and melting,” Nat. Commun. 7, 12386 (2016).
Published in the Journal of the Acoustical Society of America. The published version of this preprint can be found at https://doi.org/10.1121/10.0002974.

# Resolution dependence of rough surface scattering using a power law roughness spectrum

Derek R. Olson<EMAIL_ADDRESS>Oceanography Department, Naval Postgraduate School, Monterey, CA 93943

Anthony P. Lyons University of New Hampshire, Durham, NH 03824

###### Abstract

Contemporary sonar systems use broadband pulses and long arrays to achieve high resolution. It is important to understand the effects that high-resolution sonar systems might have on quantitative measures of the scattered field due to the seafloor. A quantity called the broadband scattering cross section is defined, appropriate for high-resolution measurements. The dependence of the broadband scattering cross section, $\sigma_{bb}$, and the scintillation index, $SI$, on resolution was investigated for one-dimensional rough surfaces with power-law spectra and backscattering geometries. Using integral equations and Fourier synthesis, no resolution dependence of $\sigma_{bb}$ was found. The incoherently averaged frequency-domain scattering cross section likewise has negligible bandwidth dependence. $SI$ increases as resolution increases, grazing angle decreases, and spectral strength increases. This trend is confirmed for center frequencies of 100 kHz and 10 kHz, as well as for power-law spectral exponents of 1.5, 2, and 2.5. The hypothesis that local tilting at the scale of the acoustic resolution is responsible for intensity fluctuations was examined using a representative model for the effect of slopes (inspired by the composite roughness approximation). It was found that slopes are responsible in part for the fluctuations, but other effects, such as multiple scattering and shadowing, may also play a role.

## I Introduction

Theoretical treatment of wave scattering from rough interfaces is generally performed using an incident monochromatic plane wave, which has a single direction and exists over infinite spatial extent. However, experimental measurements of the scattered field often employ broadband pulses to achieve high spatial resolution, which is desirable for seafloor mapping or target detection. Performance of such systems typically depends on the mean intensity of the scattered field from the seafloor and, more generally, on its probability density function. For scattering from one-dimensional (1D) roughness, the mean intensity is usually characterized in terms of the scattering cross section per unit length per unit angle (per unit area per unit solid angle for two-dimensional (2D) rough interfaces), $\sigma$, hereafter referred to as the “cross section,” “scattering cross section,” or, for the decibel version, “scattering strength.” Variability of the scattered intensity is often characterized using the scintillation index, $SI$ Tatarski (1961); Ishimaru (1978).
The scattering cross section for a frequency-domain incident field for one-dimensional roughness is defined as Thorsos (1988)

$\displaystyle\sigma_{f}=\frac{\langle I_{s}\rangle R}{I_{i}L_{eff}}$ (1)

where $\langle I_{s}\rangle=\langle|p_{s}(f)|^{2}\rangle/(2\rho_{0}c)$ is the mean scattered intensity in the far field, $I_{i}=|p_{0}|^{2}/(2\rho_{0}c)$ is the incident intensity in the direction of the incident wave vector, $\rho_{0}$ is the ambient density, $p_{0}$ is the complex amplitude of the incident plane wave, $L_{eff}$ is the effective ensonified length of the incident field, and $R$ is the distance between a patch of rough interface and the receiver location. Note that this definition is valid only for geometries with well-defined incident and scattered field directions and for a single frequency. Strictly, this definition of the scattering cross section is only true in the limit as the ensonified length becomes large compared to all length scales of interest (i.e., the outer scale of the rough surface, or the acoustic wavelength), since monochromatic plane waves interact with the entire rough surface. This definition is often used for narrowband incident fields that provide a good approximation to a single-frequency tone (in this work, a signal is considered narrowband when its bandwidth is a tenth of the center frequency or smaller). In this work, a quantity termed the “broadband scattering cross section” is investigated, appropriate for cases with short pulses (to be consistent with the narrowband criterion, a “short” pulse is defined to contain ten or fewer cycles). For a plane-wave rectangular pulse of length $\tau$, this is defined as

$\displaystyle\sigma_{bb}=\frac{R}{c\tau/(\cos\theta_{i}+\cos\theta_{s})}\frac{\langle I_{s}\rangle}{I_{i}}$ (2)

where $c$ is the sound speed, $\theta_{i}$ is the incident grazing angle, and $\theta_{s}$ is the scattered grazing angle. This quantity is discussed more fully and derived in Sec. V. In this work, the frequency-domain version of the scattering cross section is referred to as the “cross section,” and the broadband version is $\sigma_{bb}$. The broadband scattering cross section may exhibit pulse-length dependence if the properties of the ensemble of rough surfaces vary with resolution, especially for high-resolution systems. This dependence is not possible for the frequency-domain cross section. The interface scattering cross section (or its broadband version) characterizes the mean scattered power from an interface, but a more general property of the scattered field is the probability density function (pdf) of the modulus of the complex pressure, termed the envelope pdf. The envelope pdf is connected to the performance of target detection systems and has potential utility for remote sensing of the environment using high-resolution systems Lyons _et al._ (2009, 2016); Olson _et al._ (2019). The statistical distribution of pressures resulting from scattering from a rough, homogeneous interface with Gaussian height statistics has commonly been assumed to be Gaussian for the real and imaginary components, with a Rayleigh distribution for the envelope Jakeman (1980). In this situation, the scintillation index, or normalized intensity variance, is unity. For heavy-tailed statistics (with more frequent large-amplitude events), the scintillation index is greater than unity.
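As a quick numerical illustration of these statistics (our own sketch, using an assumed complex-Gaussian test field rather than simulation output), the scintillation index can be estimated directly from an ensemble of complex pressures; for a Rayleigh-distributed envelope it should come out close to unity.

```python
import numpy as np

def scintillation_index(p):
    """SI = normalized intensity variance <I^2>/<I>^2 - 1, with I = |p|^2."""
    intensity = np.abs(p) ** 2
    return intensity.var() / intensity.mean() ** 2

# Complex Gaussian field -> Rayleigh envelope -> SI close to 1:
rng = np.random.default_rng(0)
p = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)
print(scintillation_index(p))   # ~1.0; heavy-tailed envelopes give SI > 1
```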
Arguments for Rayleigh-distributed scattered pressure magnitude follow from the assumption of a large number of independent surface elements contributing to the scattered field Jakeman (1980); Abraham and Lyons (2002). So long as the ensonified area of a Gaussian, homogeneous rough interface is large (so there are many independent scatterers contributing to the field), this assumption holds true. Another argument for Rayleigh magnitudes follows from perturbation theory and the interpretation in terms of Bragg scattering. In this framework, the scattered pressure is proportional to the amplitude spectrum of the roughness evaluated at the Bragg wavenumber, $2k_{w}\cos(\theta_{i})$, where $k_{w}$ is the acoustic wavenumber in the water column, and $\theta_{i}$ is the incident grazing angle. If the surface has Gaussian statistics in the spatial domain, then the wavenumber components will have Rayleigh-distributed magnitude via the central limit theorem. Therefore, the acoustic envelope pdf will be Rayleigh-distributed as well. Contemporary high-resolution seafloor imaging systems, such as synthetic aperture sonar (SAS), have spatial resolutions on the order of the center wavelength Fossum _et al._ (2008); Bellettini and Pinto (2009); Pinto (2011); Dillon (2018); Sternlicht _et al._ (2016). Small resolution cell sizes may result in ensembles that vary with the resolved area of the seafloor, thereby causing a departure from Rayleigh statistics. The resolution dependence of the scintillation index has implications for target detection performance, synthetic aperture autofocus algorithms (e.g., Marston and Plotnick (2015)), and preprocessing algorithms for automated target recognition Kwon and Nasrabadi (2005); Williams (2015); Galusha _et al._ (2018). It was observed in Lyons _et al._ (2016) that measurements of the scintillation index from SAS images of homogeneous random rough interfaces had a strong dependence on range. This was interpreted as the result of modulation of the local slope by roughness components at the scale of the acoustic resolution or larger, called here the local-tilting hypothesis. This interpretation uses a model for local slope modulation inspired by the composite roughness approximation McDaniel and Gorman (1983). Combined with interpretations in Lyons _et al._ (2016), this effect results in a dependence of the scintillation index on the acoustic resolution, the underlying pixel statistics, range (through grazing angle), as well as roughness spectrum parameters. These interpretations, while plausible, suffer from a lack of experimental confirmation. In the electromagnetics literature, slope modulation was postulated to cause non-Rayleigh scattering by Valenzuela and Liang (1971) and by Li and Johnson (2017). In the specular direction, the Fresnel zone (a form of resolved area) has been shown to affect the scintillation index Yang and McDaniel (1991), although this situation contains a coherent component and is not germane to the current problem. In this work, the question of whether the broadband scattering cross section and scintillation index depend on resolution is examined. The acoustic resolution, $\Delta X$, is defined as the full width at half maximum spatial extent of the square of the incident pulse envelope.
For broadband signals, the temporal resolution is set by $1/(2aB_{3dB})$, where $B_{3dB}$ is the 3 dB full-width bandwidth of the transmitted pulse (if phase coding and pulse compression are employed, this analysis deals with the resolution of the compressed, or matched-filtered, pulse), and $a$ is a constant which depends on the shape of the pulse used. The spatial resolution is $c/(2aB_{3dB})$ for small grazing angles. These questions were investigated through numerical solution of the Helmholtz-Kirchhoff integral equation for the scattered pressure using the boundary element method (BEM) Sauter and Schwab (2011); Wu (2000) with pressure-release boundary conditions. This method is similar to that used by Thorsos (1988). Fourier synthesis was used to construct the broadband scattered pressure at various spatial resolutions, and metrics were computed based on the scattered time-domain pressure. The numerical method detailed here can, in general, treat bistatic geometries, but only the monostatic case was examined. Comparisons were made to the ensemble-averaged cross section computed in the frequency domain (i.e., computed at a single frequency, which is a good approximation of the monochromatic plane wave case). These simulations were performed for center frequencies of both 100 kHz and 10 kHz, and for one-dimensional rough surfaces with power-law spectra, whose parameters are the spectral strength and spectral exponent. Comparisons between numerical simulations and field experiments would require two-dimensional (2D) roughness with a three-dimensional (3D) scattering geometry. However, the numerical method has very high computational and memory requirements that made 3D simulations impossible at this time, and therefore a 2D geometry was used instead. Although the values of the scattering cross section and scintillation index will be different in 2D and 3D geometries, the fundamental scattering phenomena, such as Bragg scattering, local tilting, multiple scattering, and shadowing, will be present in both 2D and 3D. Through these numerical experiments, it was found that the broadband scattering strength does not vary as a function of bandwidth for the parameters investigated in this study; the differences observed in this comparison are within Monte Carlo error. For the scintillation index, it was found that it becomes greater than one as resolution increases, grazing angle decreases, and spectral strength increases. For larger spectral exponents, the scintillation index is more sensitive to changes in spectral strength and grazing angle. An overview of the geometry and roughness statistics is presented in Sec. II. The integral equations and discretization methods are given in Sec. III, and the incident field in Sec. IV. Methods to estimate the broadband scattering cross section and scintillation index are given in Sec. V. A discussion on how the parameters of the numerical simulations were selected is given in Sec. VI. Results are presented in Sec. VII, with a discussion and some preliminary hypotheses given in Sec. VIII. Conclusions are given in Sec. IX.

## II Geometry and Environment

The geometry of the scattering problem is presented in Fig. 1. The problem takes place in two dimensions with position vector $\textbf{r}=(x,z)$. The acoustic medium is above the rough surface, defined as $z_{s}=f(x_{s})$ and shown as the thick black line in this figure. The coordinates $(x_{s},z_{s})$ are points on the rough surface.
In this figure, the nominal incident and scattered wave directions are shown with their grazing angles and nominal wave vectors. The slope angle is defined as $\epsilon=\tan^{-1}\left(df(x)/dx\right)$ and is shown in the figure, along with the normal vector pointing out of the acoustic medium (into the lower half-space). The sound speed in the upper medium is $c$, which is taken to be 1500 m/s, but these results can be applied to other sound speeds by performing the appropriate dimensional scaling. The acoustic frequency is $f$ and is related to the wavenumber by $k=2\pi f/c$. Simulations are performed at a center frequency $f_{0}$ and 3 dB bandwidth $B_{3dB}$. The center wavelength and wavenumber are $\lambda_{0}$ and $k_{0}$, respectively.

Figure 1: Rough surface scattering geometry. The nominal incident and scattered wave vectors are shown, along with the slope angle, $\epsilon$, at the origin, and the unit normal vector at that point, which points out of the acoustic domain (the upper medium).

The rough interface is assumed to have wide-sense homogeneity (spatial stationarity) and a Gaussian pdf for both height and slope. Under the Gaussian assumption, the second-order properties of the rough interface are completely described by its autocovariance function,

$\displaystyle B(x)=\langle f(y)f(y+x)\rangle$ (3)

and power density spectrum,

$\displaystyle W(K)=\frac{1}{2\pi}\int B(x)e^{iKx}\,\textrm{d}x.$ (4)

Several second-order properties of this spectrum are useful for the analysis performed here. In particular, the mean square height, $h^{2}$, is given by

$\displaystyle h^{2}=\int\limits_{-\infty}^{\infty}W(K)\,\textrm{d}K=B(0).$ (5)

The mean square slope, $s^{2}$, is

$\displaystyle s^{2}=\int\limits_{-\infty}^{\infty}K^{2}W(K)\,\textrm{d}K=-\left.\frac{\partial^{2}B(x)}{\partial x^{2}}\right|_{x=0}.$ (6)

The power density spectrum used in this work is the truncated power law,

$\displaystyle W(K)=\frac{w}{|K|^{\gamma}}$ (7)

for $k_{l}\leq|K|\leq k_{u}$, and zero otherwise. The spectral strength is $w$, with units of $\textrm{m}^{3-\gamma}$, and $\gamma$ is the dimensionless spectral exponent. The lower wavenumber cutoff is $k_{l}=2\pi/L_{0}$, where $L_{0}$ is the outer scale. The upper wavenumber cutoff is $k_{u}=\pi/\ell_{0}$, where $\ell_{0}$ is the inner scale. The extra factor of $1/2$ in defining $k_{u}$ is chosen such that the interval $[-k_{u},k_{u}]$ has a total length of $2\pi/\ell_{0}$. Random realizations are produced from this power spectrum using the Fourier synthesis technique from Thorsos (1988), which is given for completeness in Appendix A. The outer scale is specified independently of the surface length, $L$, and is required to satisfy $L_{0}<L$. Similarly, the inner scale satisfies $\ell_{0}>\delta x$, where $\delta x$ is the sampling interval of the rough interface realization. Although not required in general, the inner scale is smaller than the smallest wavelength with significant energy in these broadband simulations. For the power-law form used here, the non-dimensional mean square slope and mean square height are

$\displaystyle s^{2}=\frac{2k_{0}^{3-\gamma}w}{3-\gamma}\left[\left(\frac{k_{u}}{k_{0}}\right)^{3-\gamma}-\left(\frac{k_{l}}{k_{0}}\right)^{3-\gamma}\right]$ (8)

$\displaystyle k_{0}^{2}h^{2}=\frac{2k_{0}^{3-\gamma}w}{\gamma-1}\left[\left(\frac{k_{0}}{k_{l}}\right)^{\gamma-1}-\left(\frac{k_{0}}{k_{u}}\right)^{\gamma-1}\right]\,,$ (9)

where $k_{0}$ is the center wavenumber defined in Eq. (16). These parameters have been expressed in a form such that the terms outside and inside the brackets are dimensionless. L’Hôpital’s rule can be used to show that the mean square slope is finite for $\gamma=3$, and the mean square height is finite for $\gamma=1$.
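Appendix A gives the exact Fourier synthesis recipe from Thorsos (1988). The following is a minimal sketch, equivalent up to normalization conventions, that draws a Gaussian random surface whose spectrum follows the truncated power law of Eq. (7); the grid size and default parameter values are illustrative assumptions, not the values used in the simulations.

```python
import numpy as np

def synthesize_surface(n=4096, dx=1e-3, w=1e-5, gamma=2.0, L0=1.0, ell0=2e-3,
                       seed=0):
    """Gaussian rough surface f(x) with the truncated power-law spectrum W(K)."""
    rng = np.random.default_rng(seed)
    L = n * dx
    K = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # rad/m, both signs
    kl, ku = 2 * np.pi / L0, np.pi / ell0            # outer/inner scale cutoffs
    W = np.where((np.abs(K) >= kl) & (np.abs(K) <= ku),
                 w / np.where(K == 0, 1.0, np.abs(K)) ** gamma, 0.0)
    dK = 2 * np.pi / L
    zeta = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    # taking the real part halves the variance; the sqrt(2) restores it so that
    # <f^2> ~ integral of W(K) dK, consistent with Eq. (5)
    f = np.sqrt(2) * n * np.fft.ifft(np.sqrt(W * dK) * zeta).real
    return np.arange(n) * dx, f

x, f = synthesize_surface()
# sanity check: f.std()**2 should approximate h^2 from Eq. (5)
```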
The true upper wavenumber is set by the inner scale, $k_{u}=\pi/\ell_{0}$. However, the way in which the rough surfaces enter into the acoustical simulations may be subject to an effective upper limit, $k_{u}^{\prime}=\pi/\ell^{\prime}$, where $\ell^{\prime}$ is an effective inner scale. Roughness wavelengths much less than $\lambda_{0}$ likely have an insignificant effect on the scattered field. Thus, the effective upper wavenumber is likely much less than that defined by the surface sampling. The effective upper limit likely does not affect the scales causing scattering near the Bragg wavelength, but rather sets the upper wavenumber limit for computing the large-scale slope in the slope-modulation model examined below. For $\gamma>1$, the root-mean-square height is insensitive to the upper cutoff and more sensitive to the low-wavenumber cutoff. For $\gamma<3$, the root-mean-square slope is sensitive to the upper cutoff and insensitive to the lower cutoff (so long as it is sufficiently small). As $k_{u}$ becomes large, $s$ grows without bound, and an effective upper limit can alleviate this problem. To make the effective upper wavenumber limit explicit, the notation $s_{\ell^{\prime}}$ is used to denote the rms slope computed using $k_{u}^{\prime}=\pi/\ell^{\prime}$.

## III Integral equations and discretization

This study was performed numerically using a discretized form of the 2D Helmholtz-Kirchhoff integral equation for Dirichlet boundary conditions Thorsos (1988). Although the motivation for this work is seafloor scattering, the assumption of a Dirichlet boundary allows us to focus solely on the role of the rough interface. For a single frequency, this integral equation, defined on the rough interface, is Wu (2000)

$\displaystyle p_{i}\left(\textbf{r}_{p}\right)=-\int\limits_{S}\frac{\partial p\left(\textbf{r}_{s}\right)}{\partial n_{s}}G_{k}\left(|\textbf{r}_{s}-\textbf{r}_{p}|\right)\textrm{d}S,$ (10)

where $p_{i}$ is the incident pressure, $\textbf{r}_{p}=(x_{p},z_{p})$ and $\textbf{r}_{s}=(x_{s},z_{s})$ are points on the rough surface (with a subscript $s$ denoting the integration variable), $\partial p/\partial n_{s}$ is the total pressure normal derivative at $\mathbf{r}_{s}$, with the normal direction pointing out of the acoustic domain (into the lower space), and $G_{k}(|\textbf{r}_{s}-\textbf{r}_{p}|)=(i/4)H_{0}^{(2)}(k|\textbf{r}_{s}-\textbf{r}_{p}|)$ is the 2D free-space Green function Wu (2000), where $H_{0}^{(2)}(z)$ is the zeroth-order Hankel function of the second kind. Note that the Green function used here differs from that used in Thorsos (1988) due to the differing time convention. The normal vector points downward here, as opposed to upward in Thorsos (1988), although in both cases it points out of the acoustic domain. This integral equation can also describe electromagnetic scattering from 1D corrugated surfaces with perfectly conducting boundary conditions subject to an incident wave with transverse magnetic (also known as p) polarization Toporkov _et al._ (1998). The scattering problem is solved in two steps.
First, Eq. (10) is numerically solved for $\partial p/\partial n$ on the surface through discretization of the integral equation using the boundary element method Sauter and Schwab (2011); Wu (2000). In particular, piecewise-linear basis functions are used to approximate $\partial p/\partial n$, and collocation is used to compare the true and approximate solutions at discrete points. These two choices convert the integral equation into a linear system,

$\displaystyle Vu=b,$ (11)

where $u$ is the solution vector consisting of the basis-function coefficients used in the approximation for $\partial p/\partial n$, and $b$ is $p_{i}$ evaluated at the discrete collocation points $\textbf{r}_{m}=(x_{m},z_{m})$. The matrix $V$ has elements

$\displaystyle V_{mn}=-\int G_{k}\left(|\textbf{r}_{m}-\textbf{r}_{s}|\right)\phi(\xi_{n}(\textbf{r}_{s}))\textrm{d}S.$ (12)

Here, $\phi(\eta)$ is a linear basis function defined on the interval $\eta\in[-1,\,1]$. Outside of this interval, $\phi$ is zero. The function $\xi_{n}$ maps the basis function centered at the $n$-th point from physical space, $\textbf{r}_{s}$, to the $\eta$ domain. In this case, the basis functions are centered at the same collocation points $\textbf{r}_{m}$, resulting in a square matrix. Integration is carried out using a 4-point Gauss-Legendre quadrature rule Abramowitz and Stegun (1972) for nonsingular elements. Due to the weak singularity in the Green function, the diagonal elements of the matrix are computed using a 16-point quadrature rule combined with a variable transformation whose Jacobian exactly cancels the singularity Wu (2000). LAPACK routines were used to solve the linear system using LU decomposition and back substitution Anderson _et al._ (1999). Collocation points $(x_{m},z_{m})$ are defined on the rough surface with equal spacing $\delta x$ on the horizontal axis. The method used to generate these points is given in Appendix A. From these points, a cubic approximation is used to construct a continuous and smooth surface. This interpolation process forces the surface normal, and thus $\partial p/\partial n$, to be continuous, which improves the convergence rate of the discretization of the integral operator Atkinson (1997). The interpolation scheme may extend the region of wavenumber support beyond $\pm\pi/\ell_{0}$, affecting the estimate of the rms slope and height. However, monotonic piecewise Hermite interpolation Fritsch and Carlson (1980) was used, which does not suffer from overshoot and has a negligible effect on the rms slope and height estimates. Once the surface pressure normal derivative is found, the scattered pressure at a field point in the domain, $p_{s}(\mathbf{r}_{f})$, is found using

$\displaystyle p_{s}(\textbf{r}_{f})=\int\limits_{S}\frac{\partial p\left(\textbf{r}_{s}\right)}{\partial n_{s}}G_{k}\left(|\textbf{r}_{s}-\textbf{r}_{f}|\right)\textrm{d}S,$ (13)

where $\textbf{r}_{f}=(x_{f},z_{f})$. In this work, the field points are spaced at equal intervals of one degree at a distance $R$ from the rough surface, which is approximately 25 times the Rayleigh distance $L^{2}/\lambda_{0}$, where $L$ is the total surface length and $\lambda_{0}$ is the center wavelength. This criterion for the far field is very conservative (Jackson and Richardson, 2007, Appendix J); Winebrenner and Ishimaru (1986); Lysanov (1973), although it enables the use of asymptotic expansions for the Hankel function.
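To show the structure of Eqs. (11)-(13), the following is a deliberately stripped-down sketch of the solve: point collocation with constant (pulse) elements and a standard small-argument closed form for the weakly singular self term, rather than the linear elements and Gauss-Legendre quadrature used in this work, and a plain Gaussian-tapered plane wave in place of the tapered beam of Sec. IV. All parameter choices are illustrative.

```python
import numpy as np
from scipy.special import hankel2

def solve_surface(xs, zs, k, theta_i, g):
    """Pulse-basis collocation for Eq. (10); returns dp/dn at element centers."""
    xc, zc = 0.5 * (xs[1:] + xs[:-1]), 0.5 * (zs[1:] + zs[:-1])
    ds = np.hypot(np.diff(xs), np.diff(zs))          # element lengths
    R = np.hypot(xc[:, None] - xc[None, :], zc[:, None] - zc[None, :])
    V = -(1j / 4) * hankel2(0, k * np.where(R > 0, R, 1.0)) * ds[None, :]
    # small-argument analytic self-integral for a flat element of length ds
    V[np.diag_indices_from(V)] = -(1j / 4) * ds * (
        1 - (2j / np.pi) * (np.euler_gamma + np.log(k * ds / 4) - 1))
    # simplified Gaussian-tapered plane wave (no w_t correction term)
    ki = -k * np.array([np.cos(theta_i), np.sin(theta_i)])
    p_inc = np.exp(-1j * (ki[0] * xc + ki[1] * zc)) \
          * np.exp(-(xc - zc / np.tan(theta_i)) ** 2 / g ** 2)
    return xc, zc, ds, np.linalg.solve(V, p_inc)

def far_field(xc, zc, ds, dpdn, k, theta_s, R=1e4):
    """Eq. (13) evaluated at a far-field point at grazing angle theta_s."""
    xf, zf = R * np.cos(theta_s), R * np.sin(theta_s)
    Rf = np.hypot(xc - xf, zc - zf)
    return np.sum(dpdn * (1j / 4) * hankel2(0, k * Rf) * ds)
```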
The scattered angle, $\theta_{s}$, is related to the field point locations using $x_{f}=R\cos\theta_{s}$ and $z_{f}=R\sin\theta_{s}$. The variable $p_{s}(f,\theta_{s},\theta_{i},R)$ is the complex pressure $p_{s}$ measured at a location $\mathbf{r}_{f}$, produced by an incident field with frequency $f$ and nominal incident grazing angle $\theta_{i}$.

## IV Incident Field

The incident fields used in this work are broadband pulses whose spatial dependence is an approximation of a plane wave. The nominal directions of the incident and scattered wave vectors are indicated as arrows in Fig. 1. The incident and scattered wave vector lengths vary due to the broadband nature of the field, although the center wave vectors can be defined at the center frequency by the expressions $\textbf{k}_{0i}=(k_{0ix},k_{0iz})$ and $\textbf{k}_{0s}=(k_{0sx},k_{0sz})$. The components are defined in terms of the grazing angles $\theta_{i}$ and $\theta_{s}$ (with respect to the horizontal axis) by

$\displaystyle k_{0ix}=-k_{0}\cos\theta_{i}\,,\quad k_{0sx}=k_{0}\cos\theta_{s}\,,$ (14)

$\displaystyle k_{0iz}=-k_{0}\sin\theta_{i}\,,\quad k_{0sz}=k_{0}\sin\theta_{s}\,.$ (15)

The incident unit wave vector is $\hat{\mathbf{k}}_{0i}=\mathbf{k}_{0i}/k_{0}$, and the scattered unit vector is $\hat{\mathbf{k}}_{0s}=\mathbf{k}_{0s}/k_{0}$. The center wavenumber, $k_{0}$, is defined by an average of the wavenumber weighted by the power spectrum of the transmitted source,

$\displaystyle k_{0}=\frac{\int\limits_{-\infty}^{\infty}\frac{2\pi f}{c}S^{2}(f)\,\textrm{d}f}{\int\limits_{-\infty}^{\infty}S^{2}(f)\,\textrm{d}f},$ (16)

where $S(f)=\int s(t)\exp(-i2\pi ft)\,\textrm{d}t$ is the linear (amplitude) spectrum of the transmitted pulse, $s(t)$. The transmitted pulse used here was a complex exponential multiplied by a Gaussian envelope, with the time- and frequency-domain forms

$\displaystyle s(t)=\exp\left(-t^{2}/\tau^{2}+i\omega_{0}t\right)$ (17)

$\displaystyle S(f)=\tau\sqrt{\pi}\exp\left(-(f-f_{0})^{2}\pi^{2}\tau^{2}\right)\,,$ (18)

where $\tau$ is a parameter of the pulse length, and $\omega_{0}=2\pi f_{0}$ is the center angular frequency. Using the Gaussian form for $S(f)$, the numerator in Eq. (16) is $\pi\sqrt{2\pi}f_{0}\tau/c$, and the denominator is $\sqrt{\pi/2}\tau$. The ratio is $2\pi f_{0}/c=k_{0}$. The temporal resolution of the pulse, $\Delta\tau$, is defined by the duration of the pulse envelope between its half-power points. For the Gaussian pulse used, this quantity can be obtained by solving the equation $\exp\left(-(\Delta\tau/2)^{2}/\tau^{2}\right)=1/\sqrt{2}$, resulting in $\Delta\tau=\tau\sqrt{2\ln 2}$. The 3 dB bandwidth, $B_{3dB}$, of the pulse is defined as the full width at half maximum of $|S(f)|^{2}$, namely $|S(f_{0})|^{2}/2=|S(f_{0}\pm B_{3dB}/2)|^{2}$. These definitions of the temporal resolution and bandwidth result in $B_{3dB}\Delta\tau=2\ln(2)/\pi\approx 0.44$. For reference, if a rectangular function with full width $B_{3dB}$ is used for $S(f)$, then $B_{3dB}\Delta\tau\approx 0.88$. The same relationship is obtained if a constant-envelope pulse of length $\Delta\tau$ is used. Although the rectangular pulse has a larger time-bandwidth product, the Gaussian pulse has no sidelobes in the time domain, but requires a computational bandwidth much larger than $B_{3dB}$ to approximate a true Gaussian function. The equivalent noise bandwidth (EQNB) Harris (1978) is also needed, which for the Gaussian pulse is $B_{EQNB}=(\tau\sqrt{2\pi})^{-1}$. The constant $A$ is defined in terms of the product

$\displaystyle\Delta\tau B_{EQNB}=A^{-1}\,,$ (19)

so that $A=\sqrt{\pi/\ln 2}$ for the Gaussian pulse. This constant is used later to define an effective inner scale of the rough interface based on its acoustic resolution.
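These closed-form products are easy to verify numerically. The sketch below, with an assumed sampling rate and pulse parameters, samples the Gaussian pulse of Eq. (17), measures $\Delta\tau$, $B_{3dB}$, and $B_{EQNB}$ from the sampled waveform, and checks the products against $2\ln(2)/\pi$ and $A^{-1}$.

```python
import numpy as np

f0, tau, fs = 100e3, 5e-5, 4e6              # center frequency, pulse parameter, sample rate
t = np.arange(-200 * tau, 200 * tau, 1 / fs)
s = np.exp(-t**2 / tau**2 + 2j * np.pi * f0 * t)         # Eq. (17)

def full_width(x, y, level):
    """Full width of y(x) at y = level * max(y)."""
    idx = np.where(y >= level * y.max())[0]
    return x[idx[-1]] - x[idx[0]]

dtau = full_width(t, np.abs(s)**2, 0.5)                  # temporal resolution
S = np.fft.fftshift(np.fft.fft(s)) / fs                  # approximate continuous spectrum
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
B3dB = full_width(f, np.abs(S)**2, 0.5)                  # 3 dB full bandwidth
Beqnb = (np.abs(S)**2).sum() * (f[1] - f[0]) / np.abs(S).max()**2

print(B3dB * dtau, 2 * np.log(2) / np.pi)                # both ~0.441
print(Beqnb * dtau, np.sqrt(np.log(2) / np.pi))          # both ~0.470 (= 1/A)
```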
Broadband fields are synthesized from single-frequency approximations of a plane wave. This narrowband incident field is the extended Gaussian beam developed by Thorsos (1988), which provides tapering to guard against edge effects entering into the scattering calculation. The form of this field (adapted to this time convention) is given by

$\displaystyle p_{i}(\textbf{r}_{s},f)=p_{0}\exp\left(-i\textbf{k}_{i}\cdot\textbf{r}_{s}\left(1+w_{t}(\textbf{r}_{s})\right)\right)\exp\left(-\left(x_{s}-z_{s}\cot\theta_{i}\right)^{2}/g^{2}\right),$ (20)

where

$\displaystyle w_{t}(\textbf{r}_{s})=(kg\sin\theta_{i})^{-2}\left[2\left(x_{s}-z_{s}\cot\theta_{i}\right)^{2}/g^{2}-1\right],$ (21)

$g$ is a width parameter of the incident field, $\mathbf{r}_{s}$ is a point on the rough surface, and $\theta_{i}$ is the nominal incident grazing angle. The factor $p_{0}$ has units of Pa and is helpful for keeping track of units when estimating the scattering cross section or broadband cross section. For broadband simulations, Eq. (20) is used for each frequency. Since the Gaussian function has an infinite domain of support, it must be truncated for use in numerical simulations. The function $w_{t}\left(\textbf{r}\right)$ improves the agreement between the numerical solution of the Helmholtz-Kirchhoff integral equation of the first kind, Eq. (10), and the integral equation of the second kind. Discrepancies between these two solutions can result because the incident field satisfies the Helmholtz equation only approximately, to order $(kg\sin\theta_{i})^{-2}$ Thorsos (1988). Good agreement between the two solutions was observed when $kg\sin\theta_{i}$ is large. Therefore, to analyze low grazing angles (which are important to contemporary synthetic aperture sonar systems, e.g., Fossum _et al._ (2008); Bellettini and Pinto (2009); Pinto (2011); Dillon (2018); Sternlicht _et al._ (2016)), the parameter $g$ must grow as $\theta_{i}$ approaches zero. This requirement can be thought of as enforcing the constraint that the angular width of the incident beam (full width at half maximum),

$\displaystyle\Delta\theta=\frac{2\sqrt{2\ln(2)}}{kg\sin\theta_{i}},$ (22)

should be small compared to $\theta_{i}$. When the relative angular width, defined as $\Delta\theta/\theta_{i}$, is not small, the direction of the incident field is spread over a large range of angles compared to the incident grazing angle. This situation should be avoided at low grazing angles, since the scattering cross section is commonly rapidly varying in that region.

## V Estimating time-domain quantities of the scattered field

In this section, expressions are derived for the broadband scattering cross section and scintillation index of the scattered field due to a broadband incident pulse.
The time-domain pressure is computed by

$\displaystyle p_{s}(t^{\prime},\theta_{s},\theta_{i},R)=\int\limits_{-\infty}^{\infty}S(f)p_{s}(f,\theta_{s},\theta_{i},R)e^{i2\pi f(t^{\prime}+R/c)}\,\textrm{d}f$ (23)

where $p_{s}(f,\theta_{s},\theta_{i},R)$ is the scattered pressure measured at grazing angle $\theta_{s}$, distance from the origin $R$, incident field frequency $f$, and nominal incident grazing angle $\theta_{i}$. The factor $R/c$ in the exponent removes the time delay associated with propagation to the far field when the incident pulse is centered on the origin, so that the scattered pressure time series can be mapped to the mean plane of the rough interface. This delayed time is denoted $t^{\prime}$, and is used below in the calculation of the effective energy flux. The scattered grazing angle, $\theta_{s}$, is computed using the location at which the pressure is calculated in the far field, $\theta_{s}=\tan^{-1}(z_{f}/x_{f})$. In practice, the integral is computed using the fast Fourier transform.

The broadband scattering cross section, $\sigma_{bb}$, is computed directly from the scattered pressure in the time domain. Although there is no general framework for this kind of quantity, motivation is provided here by comparison to the definition of the scattering cross section in the frequency domain. In practice, finite resolution in scattering measurements is sometimes obtained using short pulses Urick (1954, 1983), and the definition given here accords with the sonar equation used in those situations. The frequency-domain version of the cross section, $\sigma_{f}$, for a monochromatic plane wave incident on a rough interface of length $L$, was given in Sec. I as Eq. (1), due to Thorsos (1988). This expression can be cast in terms of the incident energy flux passing through the plane $z=0$ (Landau and Lifshitz, 1987, p. 255); Thorsos (1988):

$\displaystyle\sigma_{f}=\frac{\langle I_{s}\rangle R\sin\theta_{i}}{E_{f}},$ (24)

where

$\displaystyle E_{f}=\int\limits_{-\infty}^{\infty}\left.\frac{1}{2}\mathrm{Re}\{p_{i}\mathbf{v}_{i}^{\ast}\}\cdot\hat{n}\right|_{z=0}\,\mathrm{d}x,$ (25)

is the total energy flux passing through the $z=0$ plane, $p_{i}$ is the incident complex pressure, $\mathbf{v}_{i}$ is the incident acoustic particle velocity, and $\hat{n}$ is the unit normal vector of the line $z=0$. Since the energy is directed downwards towards the rough interface, $\hat{n}$ was chosen to be $-\hat{z}$. In Thorsos (1988), the incident acoustic energy was directed upwards towards the rough interface, and correspondingly, their normal vector pointed up. Making this substitution, the energy flux is

$\displaystyle E_{f}=-\int\limits_{-\infty}^{\infty}\left.\frac{1}{2}\mathrm{Re}\{p_{i}{v}_{iz}^{\ast}\}\right|_{z=0}\,\mathrm{d}x,$ (26)

where $v_{iz}$ is the vertical component of the incident acoustic particle velocity. For an untapered plane wave incident on a rough surface of length $L$, $E_{f}=I_{i}L\sin\theta_{i}$, and the definition in Eq. (1) is recovered.

This definition can be extended to an incident plane wave with a time dependence set by $e^{i\omega_{0}t}$ times a rectangular pulse of length $\tau$. Here, a new quantity is defined called the broadband scattering cross section, $\sigma_{bb}$. It might seem reasonable to start with Eq. (24) with $E_{f}$ evaluated in the time domain. In this case, $E_{f}=I_{i}c\tau\sin\theta_{i}/\cos\theta_{i}$, and thus $\sigma_{bb}=(\langle I_{s}\rangle R)/(I_{i}c\tau/\cos\theta_{i})$.
However, this definition is insufficient because in the case of an incident pulse, the (effective) ensonified length that is relevant for the scattering cross section is set by both the incoming and outgoing angles. Physically, the dependence on the scattered field point (or angle) is due to the fact that at a given instant in time, only part of the ensonified surface contributes to the total field measured at a point in the far field. To account for this effect, the vertical component of the energy flux density can be written as a function of space and time,

$\displaystyle e_{f}(t,\textbf{r})=\frac{1}{2}\mathrm{Re}\{p_{i}v_{iz}^{\ast}\}\,,$ (27)

which is the integrand of Eq. (26). Instead of integrating this quantity over $x$ to obtain the energy flux, a delayed version of $e_{f}$ is integrated over $x$ with $z=0$ to obtain the effective energy flux. This quantity is defined as

$\displaystyle E_{f}^{\prime}=-\int\limits_{-\infty}^{\infty}e_{f}(t-t_{s},\mathbf{r})|_{z=0}\,\mathrm{d}x\,,$ (28)

where

$\displaystyle t_{s}=|\mathbf{r}_{f}-\mathbf{r}|/c\approx(R-x\cos\theta_{s}-z\sin\theta_{s})/c\,,$ (29)

and $R=|\mathbf{r}_{f}|$. This time delay takes into account the alteration in the region that contributes to the scattered field at a given instant in time due to the position of the field point. For a line source, or other compact source configuration, a similar analysis can be performed by taking into account the constant-time (isochronous) ellipse for the transmitted pulse and geometry. The broadband scattering cross section is defined in terms of the effective incident energy flux, analogous to the frequency-domain version,

$\displaystyle\sigma_{bb}=\frac{\langle I_{s}\rangle R\sin\theta_{i}}{E_{f}^{\prime}}\,.$ (30)

Note that $E_{f}^{\prime}$ may in general be a function of time, since the pulse may be attenuated by a spatial taper (as it is here), or by the beam pattern of a transducer in experiments. To use the entire time series scattered by the rough interface due to the broadband pulse, the energy flux must be calculated as a function of time. For an untapered plane-wave pulse, the energy flux is independent of time for all pulse shapes.

First, this definition will be demonstrated for a plane wave with a rectangular pulse shape, to build intuition. The incident pressure in this case is given by

$\displaystyle p_{i}(\mathbf{r},t)=p_{0}e^{i\omega_{0}(t-\hat{\mathbf{k}}_{i}\cdot\mathbf{r}/c)}\Pi\left(\frac{t-\hat{\mathbf{k}}_{i}\cdot\mathbf{r}/c}{\tau/2}\right)$ (31)

where $\mathbf{r}$ is a general point in space, $\hat{\mathbf{k}}_{i}$ is the incident unit wave vector, $\Pi(x)=1$ if $|x|\leq 1$ and zero otherwise, and $\tau$ sets the incident pulse duration. The effective energy flux is then $E_{f}^{\prime}=I_{i}c\tau\sin\theta_{i}/\nu_{x}$, where $\nu_{x}=\cos\theta_{i}+\cos\theta_{s}$. For a broadband rectangular pulse, the sine factors cancel, and

$\displaystyle\sigma_{bb}=\frac{\langle I_{s}\rangle R}{I_{i}c\tau/\nu_{x}}.$ (32)

The denominator now includes dependence on the scattered direction, and contains the effective ensonified length, $L_{eff}$. The form $L_{eff}=c\tau/\nu_{x}$ is consistent with the down-range resolution of bistatic synthetic aperture radar, given in Moccia and Renga (2011), after converting between different angle conventions. The effective energy flux for a broadband Gaussian pulse and incident Gaussian beam is derived in Appendix B.
The result is

$\displaystyle E_{f}^{\prime}=\frac{|p_{0}|^{2}}{2\rho_{0}c}\sin\theta_{i}L_{eff}D(t^{\prime})$ (33)

$\displaystyle L_{eff}=\frac{\sqrt{\pi/2}}{\sqrt{g^{-2}+(c\tau/\nu_{x})^{-2}}}$ (34)

$\displaystyle D(t^{\prime})=e^{-\frac{2(ct^{\prime}/\nu_{x})^{2}}{g^{2}+(c\tau/\nu_{x})^{2}}}(1+J_{1}(1+J_{2}J_{3}))\,,$ (35)

where

$\displaystyle J_{1}=\frac{\cot\theta_{i}\nu_{x}}{\sin\theta_{i}g^{2}(2g^{-2}+(c\tau/\nu_{x})^{-2})}$ (36)

$\displaystyle J_{2}=\frac{2\left(g^{-2}+(c\tau/\nu_{x})^{-2}\right)}{\left(2g^{-2}+(c\tau/\nu_{x})^{-2}\right)}$ (37)

$\displaystyle J_{3}=\frac{\left(\frac{t^{\prime}}{\tau}\right)^{2}\left(1-\chi^{2}\right)-\left(\frac{\omega_{0}\tau}{2}\right)^{2}\left(1+\chi^{2}\right)}{\left(\frac{t^{\prime}}{\tau}\right)^{2}\left(1-\chi^{2}\right)^{2}-\left(\frac{\omega_{0}\tau}{2}\right)^{2}\left(1+\chi^{2}\right)^{2}}$ (38)

$\displaystyle\chi=\frac{\nu_{x}/(c\tau)}{\sqrt{2g^{-2}+(c\tau/\nu_{x})^{-2}}}\,.$ (39)

$L_{eff}$ is the effective ensonified length, and $D(t^{\prime})$ can be thought of as a directivity function that captures the effect of the incident pulse traveling throughout space and changing amplitude due to the incident beam. Away from specular, if $g\gg c\tau/2$, then $L_{eff}\approx(\pi/2)^{1/2}c\tau/\nu_{x}$. Note that when $\sin\theta_{i}g\nu_{x}/(c\tau)\gg 1$, $J_{1}$ is small, $J_{2}$ is of order unity, and $J_{3}$ tends to 1/2.

The broadband scattering cross section is defined here in terms of an intermediate variable, the unaveraged broadband scattering cross section as a function of time, incident angle, and scattered angle, $q(t^{\prime},\theta_{s},\theta_{i})$,

$\displaystyle q(t^{\prime},\theta_{s},\theta_{i})=\frac{|p_{s}(t^{\prime},\theta_{s},\theta_{i},R)|^{2}R}{|p_{0}|^{2}L_{eff}D(t^{\prime})}.$ (40)

To reduce uncertainty, $q(t^{\prime},\theta_{s},\theta_{i})$ is computed for different roughness realizations, and the results are concatenated. The broadband scattering cross section is computed by averaging $q$ over time $t^{\prime}$ and over the $N_{E}$ random realizations,

$\displaystyle\sigma_{bb}(\theta_{s},\theta_{i})=\langle q(t^{\prime},\theta_{s},\theta_{i})\rangle_{t^{\prime},N_{E}}.$ (41)

In Sec. VII, the broadband cross section is compared to the frequency-domain scattering cross section, $\sigma_{f}$, calculated using Eq. (14) from Thorsos (1988). Although the scattering cross section is exclusively a frequency-domain quantity, the subscript $f$ is included for clarity.

Figure 2: (color online) Steps to estimate the broadband scattering cross section. The frequency-domain scattered pressure, and the pressure weighted by the source spectrum, are given in (a) (angle arguments to $p_{s}$ have been omitted). In (b), the raw time-domain scattered-pressure magnitude-squared is plotted, along with a version divided by $D(t^{\prime})$ (defined in Eq. (35)). In (c), the time-domain unaveraged broadband scattering cross section (defined in Eq. (40)) is plotted for five different pulse lengths. The incident and scattered grazing angles were 20∘, $g$ was 3.75 m, and $R$ was placed conservatively in the far field of the rough interface.

An example realization of the scattered pressure in the frequency domain is plotted in Fig. 2(a). The frequency-domain pressure is plotted as the raw scattered pressure, and also weighted by the amplitude spectrum, $S(f)$.
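For readers who wish to reproduce this processing chain, a minimal sketch of Eqs. (23), (34)-(35), and (40) follows. It assumes the solver output $p_{s}(f)$ has been placed on a uniform frequency grid $f_{m}=m\,\delta f$ starting at zero (zero-padded outside the computational band); all names are placeholders, not the code used in this work:

```python
import numpy as np

def unaveraged_q(ps_f, df, R, c, p0, g, tau, w0, theta_i, theta_s, S):
    """Sketch of Eqs. (23), (34)-(35), and (40); ps_f and S sampled at m*df."""
    n = len(ps_f)
    freqs = np.arange(n) * df
    # Eq. (23): Fourier synthesis; exp(i 2 pi f R/c) removes the far-field
    # propagation delay, so the series is indexed by the delayed time t'.
    pt = n * df * np.fft.ifft(S * ps_f * np.exp(2j * np.pi * freqs * R / c))
    t = np.fft.fftfreq(n, d=df)              # delayed time t' (positive and negative)
    nu = np.cos(theta_i) + np.cos(theta_s)
    ctn = c * tau / nu
    # Eq. (34): effective ensonified length
    L_eff = np.sqrt(np.pi / 2) / np.sqrt(g**-2 + ctn**-2)
    # Eqs. (35)-(39): directivity function D(t')
    J1 = (nu / np.tan(theta_i)) / (np.sin(theta_i) * g**2 * (2 * g**-2 + ctn**-2))
    J2 = 2 * (g**-2 + ctn**-2) / (2 * g**-2 + ctn**-2)
    chi = (nu / (c * tau)) / np.sqrt(2 * g**-2 + ctn**-2)
    a, b = (t / tau) ** 2, (w0 * tau / 2) ** 2
    J3 = (a * (1 - chi**2) - b * (1 + chi**2)) / \
         (a * (1 - chi**2) ** 2 - b * (1 + chi**2) ** 2)
    D = np.exp(-2 * (c * t / nu) ** 2 / (g**2 + ctn**2)) * (1 + J1 * (1 + J2 * J3))
    # Eq. (40): unaveraged broadband scattering cross section
    return t, np.abs(pt) ** 2 * R / (np.abs(p0) ** 2 * L_eff * D)
```

In the regime $kg\sin\theta_{i}\gg 1$ discussed above, $J_{1}$ is small and $D(t^{\prime})$ reduces to its Gaussian envelope factor.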
The time-domain squared pressure magnitude, obtained after weighting by $S(f)$ and an inverse Fourier transform, is plotted in Fig. 2(b). It contains fluctuations with a characteristic time scale close to the pulse length, as well as deterministic changes due to the incident beam used in the Helmholtz integral calculations. One example of $q(t^{\prime},20^{\circ},20^{\circ})$ is plotted in Fig. 2(c) for five different pulse resolutions.

The mapping $x=-ct^{\prime}/\nu_{x}$ can be used to convert the time series of the scattered pressure to the mean plane. This mapping gives the location of the center of the incident pulse on the mean plane, as is sometimes performed for imaging sonars. If single scattering is assumed, then the time-domain scattered field can be mapped to the horizontal position of the rough interface. If the surfaces are very rough and multiple scattering is present, then the scattering will occur from locations other than $x=-ct^{\prime}/\nu_{x}$.

The scintillation index, $SI$, is also examined. It is the variance of the scattered intensity divided by the square of the mean scattered intensity (Ishimaru, 1978, p. 437). Since $SI$ is invariant under multiplication of the intensity by a constant, the unaveraged broadband cross section, $q$, may be used instead of the scattered intensity, so that

$\displaystyle SI=\frac{\langle q^{2}\rangle-\langle q\rangle^{2}}{\langle q\rangle^{2}}\,.$ (42)

The scintillation index characterizes the fluctuations in the scattered field. If $SI=1$, then the magnitude of the complex pressure (known as the envelope) has a Rayleigh distribution and its real and imaginary components are Gaussian. If $SI>1$, then the pdf of the scattered field is heavy-tailed, which means that there is a higher probability of occurrence of high-amplitude events compared to the Rayleigh distribution.

## VI Parameters for Numerical Experiments

### VI.1 Signal Parameters

The objective of this work is to study the resolution (or bandwidth) dependence of the scattered field. These experiments covered the resolutions typically used in narrowband scattering experiments Jackson _et al._ (1986a); Williams _et al._ (2002), with the resolution cell on the order of 10 or more wavelengths, down to a value of one wavelength, which is on the order of what is achievable by modern SAS systems. Specific values of $\Delta X/\lambda_{0}=(1,2,4,8,16)$ were used. The proportional spatial resolutions correspond to temporal resolutions $\Delta\tau f_{0}=(2,4,8,16,32)$ at small grazing angles, since $\Delta X=c\Delta\tau/(2\cos\theta_{i})$ for backscattering. In all cases, the resolution is defined for $\theta_{i}=\theta_{s}=0$. At larger incident and scattered grazing angles, the resolution will be somewhat larger than the values listed in the figures and tables presented below.

High-frequency acoustic imaging systems provided the motivation for this work, and thus the simulations used a center frequency of 100 kHz. However, the parameters of the simulation were specified in a non-dimensional fashion. As long as every dimensional quantity is scaled properly, results of these simulations should be valid for lower frequencies with larger roughness parameters and longer surface lengths. To check whether the non-dimensional scaling was valid, one of the simulations was also performed at 10 kHz, with the roughness parameters scaled accordingly. A sound speed of 1500 m/s was used for all simulations.
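The correspondence between the proportional resolutions and the pulse parameters can be sketched as follows (an illustration under the stated backscattering convention, not the simulation code itself):

```python
import numpy as np

c, f0 = 1500.0, 100e3
lam0 = c / f0
for r in (1, 2, 4, 8, 16):                 # dX / lambda0
    dX = r * lam0
    dtau = 2 * dX / c                      # dX = c*dtau/(2 cos(theta_i)) at theta_i = 0
    tau = dtau / np.sqrt(2 * np.log(2))    # Gaussian pulse parameter of Eq. (17)
    print(r, dtau * f0)                    # dtau * f0 = 2, 4, 8, 16, 32
```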
### VI.2 Roughness parameters

Roughness parameters were specified using two dimensional constants: the center wavenumber, $k_{0}$ (through a combination of $f_{0}$ and $c$), and the spectral strength, $w$. Three spectral exponents were used, $\gamma=(1.5,2,2.5)$, since this parameter has been observed to vary for measured seafloor roughness (Jackson and Richardson, 2007, Ch. 6). A spectral exponent of 2 was used for both 100 kHz and 10 kHz. The $\gamma=1.5$ and $\gamma=2.5$ simulations used a center frequency of 100 kHz only.

It now remains to specify the spectral strength. For $\gamma=2$, spectral strengths of $w_{2}=(1\times 10^{-6},1\times 10^{-5},2\times 10^{-5},3\times 10^{-5},4\times 10^{-5})$ m were used, where a subscript on the spectral strength denotes that it is used for a specific value of the spectral exponent. These values resulted in $SI\approx 1$ for the smallest $w_{2}$, and $SI>1$ for larger values. These values are much smaller than the measurements with exponent close to 2 summarized in Table 6.1 of Jackson and Richardson (2007). The same is true of the spectral strengths for the other values of $\gamma$ in Table 1.

Given our interest in investigating the local tilting hypothesis, the spectral strengths for other values of $\gamma$ were specified such that they resulted in equal rms slope when the effective inner scale was held constant at $\ell^{\prime}=A\lambda_{0}\approx 2\lambda_{0}$, where $A$ is defined in Eq. (19). The factor of $A$ is included since $(A\Delta X)$ is the length scale relevant for computing rms slope and height with an effective upper wavenumber limit. With this requirement, values of $w$ for other $\gamma$ were computed using

$\displaystyle w_{\gamma}=w_{2}(3-\gamma)\frac{k_{u}^{\prime}-k_{l}^{\prime}}{k_{u}^{\prime 3-\gamma}-k_{l}^{\prime 3-\gamma}}.$ (43)

The effective upper cutoff was set to $k_{u}^{\prime}=k_{0}/(2A)$, since $\ell^{\prime}=A\lambda_{0}$. The true inner scale was $\ell_{0}=\lambda_{0}/6$, to ensure that the Bragg condition was satisfied for all wavelengths in the broadband simulation with significant support. This criterion ensured that $1.5f_{0}$ was the largest frequency that satisfied the Bragg condition. The pulse with the largest bandwidth has a spectrum that is 67 dB down at $1.5f_{0}$ compared to the peak value at $f_{0}$. The outer scale was $L_{0}=112.5\lambda_{0}=L/10$, to ensure that a broad range of scales was included in the power law spectrum while the surface length remained sufficiently larger than the outer scale.

Roughness parameters for the 100 kHz simulations are summarized in Table 1. Root-mean-square slope is given in degrees for different values of $\ell^{\prime}$, the effective inner scale. With $\ell^{\prime}=\ell_{0}=\lambda_{0}/6$, the true rms slope of the rough interface is given. Values of $s_{\ell^{\prime}}$ for $\ell^{\prime}$ between $A\lambda_{0}$ and $16A\lambda_{0}$ are also given, each of which corresponds to one of the acoustic resolutions used in the numerical simulations. For the $\gamma=2$ case, the true rms slope varied between $2.9^{\circ}$ and $18^{\circ}$, depending on the spectral strength. For the $\gamma=1.5$ case the true rms slope varied between $5.3^{\circ}$ and $31^{\circ}$, and for the $\gamma=2.5$ case it varied between $1.6^{\circ}$ and $10^{\circ}$. The effective rms slopes were much smaller than the true values. The large values of true rms slope result because this quantity increases without bound as the inner scale tends to zero. For the 10 kHz simulations, the same $s_{\delta x}$ and $kh$ were chosen.
This condition can be satisfied if the spectral strength, surface length, inner scale, and outer scale for $100$ kHz and $\gamma=2$ are all multiplied by 10. For other values of $\gamma$, a different scaling must be used.

| $\gamma$ | $w$ (m$^{3-\gamma}$) | $s_{\lambda_{0}/6}$ (∘) | $s_{A\lambda_{0}}$ (∘) | $s_{2A\lambda_{0}}$ (∘) | $s_{4A\lambda_{0}}$ (∘) | $s_{8A\lambda_{0}}$ (∘) | $s_{16A\lambda_{0}}$ (∘) | $k_{0}h$ |
|---|---|---|---|---|---|---|---|---|
| 2 | 1.00$\times 10^{-6}$ | 2.87 | 0.79 | 0.55 | 0.37 | 0.24 | 0.13 | 0.31 |
| | 1.00$\times 10^{-5}$ | 9.00 | 2.49 | 1.73 | 1.17 | 0.75 | 0.40 | 0.97 |
| | 2.00$\times 10^{-5}$ | 12.62 | 3.52 | 2.44 | 1.66 | 1.06 | 0.56 | 1.37 |
| | 3.00$\times 10^{-5}$ | 15.33 | 4.31 | 2.99 | 2.03 | 1.30 | 0.69 | 1.68 |
| | 4.00$\times 10^{-5}$ | 17.57 | 4.97 | 3.45 | 2.34 | 1.50 | 0.80 | 1.94 |
| 1.5 | 1.47$\times 10^{-7}$ | 5.34 | 0.79 | 0.47 | 0.27 | 0.15 | 0.07 | 0.22 |
| | 1.47$\times 10^{-6}$ | 16.44 | 2.49 | 1.47 | 0.86 | 0.48 | 0.23 | 0.71 |
| | 2.93$\times 10^{-6}$ | 22.65 | 3.52 | 2.08 | 1.21 | 0.68 | 0.32 | 1.00 |
| | 4.40$\times 10^{-6}$ | 27.07 | 4.31 | 2.55 | 1.49 | 0.83 | 0.39 | 1.23 |
| | 5.86$\times 10^{-6}$ | 30.54 | 4.97 | 2.94 | 1.72 | 0.96 | 0.45 | 1.42 |
| 2.5 | 5.90$\times 10^{-6}$ | 1.61 | 0.79 | 0.63 | 0.48 | 0.35 | 0.21 | 0.44 |
| | 5.92$\times 10^{-5}$ | 5.09 | 2.49 | 1.99 | 1.53 | 1.11 | 0.65 | 1.39 |
| | 1.18$\times 10^{-4}$ | 7.18 | 3.52 | 2.81 | 2.17 | 1.57 | 0.93 | 1.96 |
| | 1.78$\times 10^{-4}$ | 8.77 | 4.31 | 3.44 | 2.66 | 1.92 | 1.13 | 2.41 |
| | 2.37$\times 10^{-4}$ | 10.11 | 4.97 | 3.97 | 3.07 | 2.21 | 1.31 | 2.78 |

Table 1: Roughness parameters used in the simulations. All parameters are listed for $f_{0}=100$ kHz and $c=1500$ m/s. Simulations at 10 kHz use the same dimensionless mean square parameters. Units for each parameter are given in the column headers. The rms slope is reported using the upper limit computed with the sampling interval for the rough surface, as well as using the acoustic resolution. The parameter $A\approx 2.12$.

| $\Delta X/\lambda_{0}$ | $N_{p}$ | Err (%) | Err (dB) |
|---|---|---|---|
| 1 | 4275 | 1.53 | 0.066 |
| 2 | 2137 | 2.16 | 0.093 |
| 4 | 1069 | 3.06 | 0.13 |
| 8 | 534 | 4.33 | 0.18 |
| 16 | 267 | 6.12 | 0.26 |

Table 2: Number of independent samples, $N_{p}$, for the different resolutions at small grazing angles, and the relative uncertainty associated with the finite ensemble, assuming intensity is exponentially distributed. Uncertainty is reported in terms of percent and decibels. The frequency-domain simulations used 5000 surface realizations, with an uncertainty of 1.14%, or 0.061 dB.

### VI.3 Sampling parameters

The sampling interval, $\delta x$, was specified so that the errors caused by discretization of the integral equation were not observable to within Monte-Carlo error. Since very large numbers of independent samples were used, these estimates of the broadband scattering cross section had an uncertainty of about 0.07-0.3 dB. With this small uncertainty, a noticeable bias in the scattering strength occurred when $\delta x/\lambda_{0}>1/12$, due to discretization error. A conservative value of $\delta x/\lambda_{0}=1/15$ was therefore used to minimize this bias.

The focus in this work is moderate to low grazing angles. The lower limit to the grazing angles that can be reliably estimated in these numerical simulations is set by the surface length, which in turn is set by the memory limitations and acceptable number of CPU hours used. These latter constraints limited us to $g=250\lambda_{0}$. At the center frequency, the relative angular width for that value of $g$ was about 3.5% at 10∘ grazing angle. Lower frequencies are more problematic, since, for a constant value of $g$, decreasing the frequency will increase the angular width of the field.
At the lower 6 dB down point of the largest bandwidth signal used (equivalent to $0.844f_{0}$), the angular width for $g=250\lambda_{0}$ was approximately 4.2∘ at a nominal angle of $10^{\circ}$ grazing angle. With these angular widths, $10^{\circ}$ was taken to be an acceptable, if conservative, lower limit to the grazing angles that can be reliably estimated in this work. In choosing the surface length, $g$ was increased to $400\lambda_{0}$ without any change in the behavior of $\sigma_{bb}$ or $SI$ above 10∘. Grazing angles less than $10^{\circ}$ likely require fast approximate methods to solve the integral equation, such as the fast multipole method Liu (2009), or hierarchical matrices Hackbusch (2015).

The rough surface length, $L$, was set to $4.5g$ to allow the incident field taper to decay sufficiently at the edges of the computational domain. Near the edges, 2.5% of the time-domain samples were discarded on each side to remove edge effects (e.g. the large peak at -11 ms in the blue curve in Fig. 2(b)), so that only 95% of the full time series available was used. In the broadband simulations, four roughness realizations were used, and 5000 realizations were used in the frequency domain. For the broadband case, the time-domain response for each angle contains many independent samples, $N_{p}$ (including multiple realizations). At small grazing angles for all ensembles, $N_{p}$ is $4\times 0.95L/(\Delta X)$, where the factor of $0.95$ results from discarding samples near the edges of the surface. Assuming Gaussian pressure statistics (or exponentially distributed intensity statistics), one can define a standard error $Err_{\mathrm{std}}=1/\sqrt{N_{p}}$ which characterizes the relative uncertainty of the broadband scattering cross section estimate. The decibel representation is $Err_{\mathrm{dB}}=10\log_{10}(1+1/\sqrt{N_{p}})$. However, the scattered intensity is manifestly not exponentially distributed for large values of spectral strength, small pulse length, and small grazing angles. A more realistic characterization of the uncertainty is to divide the standard deviation of the intensity by the mean intensity and $\sqrt{N_{p}}$, amounting to $\sqrt{SI/N_{p}}$. Since the scintillation index is shown in the next section to vary with roughness parameters and grazing angle, it is difficult to give an overall characterization of the uncertainty. Therefore, uncertainty calculated using Gaussian statistics is given in Table 2, with the smallest error being 1.5% or 0.066 dB, and the largest 6.1% or 0.26 dB. If the uncertainty for the non-Gaussian cases is desired, then the scintillation index plots presented in Figs. 5-7 can be used along with $N_{p}$ to provide this quantity.

The frequencies required for the largest bandwidth simulation, $\Delta\tau f_{0}=2$, spanned approximately $0.2f_{0}$ to $1.8f_{0}$. This computational bandwidth was about seven times the largest 3 dB bandwidth used for the Gaussian spectrum. Using a frequency spacing of $\delta f=c/(3L)$, the number of frequencies per simulation was approximately 6000. For the proportional bandwidths studied in this work, these surface parameters resulted in surfaces with $N=16{,}875$ points. The matrix $V$ resulting from this discretization required 4.3 GB of memory storage for double precision complex numbers. Simulations were performed on the Hamming high-performance computing cluster at the Naval Postgraduate School.

## VII Results

Figure 3: (color online) Scattering strength comparison for $f_{0}$= 100 kHz and $\gamma=2$.
Each color represents scattering strength estimated using a different bandwidth, as indicated by the caption. The dot-dashed line indicates the frequency-domain estimate. Five different values of spectral strength are plotted on separate lines, with scattering strength monotonically increasing with spectral strength (i.e., the smallest spectral strength has the lowest scattering strength). Values for the spectral strength are given in Table 1. Note that the random realizations are generated only with an inner cutoff scale of $\ell_{0}=\lambda_{0}/6$. The time-domain results are difficult to distinguish, and the ratio of the time-domain and frequency-domain results is shown in Fig. 4.

Figure 4: (color online) Scattering strength ratio for $f_{0}$=100 kHz and $\gamma=2$.

Figure 5: (color online) Scintillation index for $f_{0}$=100 kHz and $\gamma=2$. Note that the vertical axis is different for each subfigure.

Figure 6: (color online) Scintillation index for $f_{0}=$100 kHz and $\gamma=1.5$. Note that the vertical axis is different for each subfigure.

Figure 7: (color online) Scintillation index for $f_{0}$=100 kHz and $\gamma=2.5$. Note that the vertical axis is different for each subfigure.

Results for backscattering strength as a function of grazing angle, $\theta_{i}$, resolution, $\Delta X$, and spectral strength, $w$, are presented in Fig. 3 for $f_{0}=100$ kHz and $\gamma=2$. The numerical method can handle bistatic geometries (with differing $\theta_{i}$ and $\theta_{s}$), but only monostatic geometries are presented here. Broadband scattering strength, $\sigma_{bb}$, as well as the frequency-domain version, $\sigma_{f}$, is plotted on the vertical axis, with grazing angle on the horizontal axis. The angles have a lower limit of 10∘ grazing angle due to the finite surface length, and an upper limit of 70∘, due to the difficulty in estimating the broadband cross section near vertical using broadband pulses Hefner (2015); Hellequin _et al._ (2003). Each resolution is plotted as a different color and line style, with the frequency-domain version as a black dot-dashed line. Results for different spectral strengths are on the same figure since they are well separated from one another, with the smallest $w$ corresponding to the lowest scattering strength. For each value of the spectral strength, the frequency-domain scattering cross section is indistinguishable from the broadband version.

To compare these results more closely, the broadband cross section is divided by the frequency-domain cross section and the dB value taken. This quantity, which is called the scattering strength difference, is plotted in Fig. 4. At the largest spectral strength, some systematic oscillations as a function of $\theta_{i}$ are present, but cannot be easily disentangled from the rapid Monte-Carlo fluctuations. Other than that case, all differences appear to be random. The standard deviation of the ratio $\sigma_{bb}/\sigma_{f}$ across angles between 10 and 70 degrees grazing is about 4% for $\Delta X/\lambda_{0}=1$, and about 10% for $\Delta X/\lambda_{0}=16$. Note that since this ratio is between two random variables, the uncertainty may be higher than the theoretical uncertainty for $\sigma_{bb}$ or $\sigma_{f}$ alone. These numbers are relatively consistent across all spectral strengths. The larger uncertainty for long pulses is a consequence of having fewer independent samples per roughness realization.
The uncertainty for long pulse lengths is consistent with the theoretical uncertainty shown in Table 2, but the uncertainty for short pulse lengths is higher by about a factor of 2.5. This increased uncertainty is likely a result of the departure from exponential intensity statistics for short pulses. Broadband scattering strength and the frequency-domain scattering strength are indistinguishable, to within the Monte-Carlo error of the numerical simulations. This result confirms that the broadband scattering strength is a robust metric across different resolutions for scattering from a power-law seafloor with spectral exponent $\gamma=2$. The frequency-domain scattering cross section exhibits power-law frequency dependence, as seen by the nearly $f^{1}$ trend in Fig. 2(a). When the cross section is incoherently averaged in the frequency domain, weighted by $S(f)$, this frequency dependence for $\gamma\in[1,3]$ results in a negligible bandwidth dependence, under 4 percent. If $\sigma_{f}$ exhibited very strong frequency dependence, e.g. the sharp peaks seen in scattering from layered seafloors (e.g. Jackson and Ivakin (1998); Jackson and Olson (2020)), then a more significant bandwidth dependence might be observable.

The scintillation index for this case is plotted in Fig. 5 for broadband signals as well as the single frequency case. Each spectral strength is plotted in its own subfigure, and each resolution has its own line within the subfigure. The vertical axis is $SI$, and the horizontal axis is grazing angle in degrees. The narrowband result is approximately unity for the entire angular domain shown, for all spectral strengths. This is the result expected from the central limit theorem. For the broadband signals, there is a profound dependence on resolution, with the scintillation index increasing as the resolution cell becomes small. This behavior can be seen in the example realization shown in Fig. 2(c), in which the intensity peaks become higher as $\Delta X$ becomes small. Additionally, holding resolution constant, the scintillation index increases as the grazing angle becomes small, monotonically for this case. For most broadband cases, $SI$ asymptotically approaches unity as the grazing angle increases to its upper limit. However, for the highest resolution cases, $\Delta X/\lambda_{0}=(1,2)$, and the largest spectral strengths, this high-angle asymptote is greater than one, indicating that for all angles examined here, the scattered complex pressure magnitude is non-Rayleigh. In Lyons _et al._ (2016) a K distribution was required to describe the pdf of the scattered field at moderate grazing angles, which agrees with this result. The Monte-Carlo fluctuations are significantly less than the difference between the high-angle $SI$ asymptote and unity, indicating that this is a statistically significant finding. Lupien (1999) also observed non-Rayleigh scattering for broadband scattering from rough surfaces with a power-law exponent of $\gamma=3$, but statistical tests barely rejected the Rayleigh distribution. That analysis did not remove the effect of the Gaussian taper, so the conclusions are not comparable to the present work.

Broadband and single-frequency scattering strength and $SI$ were also computed for 10 kHz, $\gamma=2$, and spectral strengths that were ten times the values in the previous section. The relative resolution, $\Delta X/\lambda_{0}$, was held constant, but consequently the resolution $\Delta X$ was a factor of 10 larger.
These parameters were chosen such that the dimensionless second-order quantities were the same as in the previous 100 kHz simulations. This scaling is only true for $\gamma=2$, and would be modified if $\gamma$ were a different value. It was found that the scattering strengths, the scattering strength dB error, and the scintillation index were the same for the 10 kHz and 100 kHz cases, to within Monte-Carlo fluctuations. This set of simulations was performed to verify that characterizing the simulations non-dimensionally was valid. Since plots for the 10 kHz case do not add significant new information, they are not shown here. These results indicate that departure from Rayleigh statistics is not isolated to very high-frequency imaging systems, and may occur in lower-frequency sonar systems as well, so long as the seafloor has the appropriate roughness parameters.

The spectral exponent was changed to $\gamma=1.5$ to examine the effect of changing the shape of the power spectrum. New values of $w$ were used, as specified in Table 1. Again, $\sigma_{bb}$ and $\sigma_{f}$ are the same. Scattering strength comparisons and the scattering strength ratio are not shown. The scintillation index is plotted in Fig. 6, and is seen to depend on angle, resolution, and spectral strength, as in the previous case. The rough interfaces have the same effective large-scale slope for $\gamma=2$ and $\gamma=1.5$ at $\Delta X/\lambda_{0}=1$. The qualitative dependence of $SI$ on spectral strength and resolution is similar to the $\gamma=2$ case, and the quantitative values of $SI$ differ only slightly.

Finally, the spectral exponent was changed to $\gamma=2.5$. The values of spectral strength can be found in Table 1. Again, the broadband scattering cross section and the frequency-domain version computed at the center frequency were the same to within the Monte-Carlo error of the simulations, and are not shown here. The scintillation index is plotted in Fig. 7, and again depends on resolution, spectral strength, and grazing angle. As the spectral strength is increased in the same proportions as in the earlier plots (the second through fifth spectral strengths are 10, 20, 30, and 40 times the smallest spectral strength, respectively), the scintillation index increases much more rapidly than in either the $\gamma=2$ or $\gamma=1.5$ case. For the four largest values of spectral strength, the $SI$ is elevated at the higher angles as well. The smallest resolution cases also have elevated $SI$ for the entire angular domain, for the four highest spectral strength values.

## VIII Discussion

In the results presented in Sec. VII, it was shown that the broadband scattering strength is indistinguishable from the frequency-domain scattering strength, and is independent of pulse length. This conclusion is not surprising. However, as the pulse length changes, the properties of the ensemble used to estimate scattering strength change as well. It is encouraging to see that although the ensemble is changing with respect to resolution (i.e., the rough patch within a resolution cell is different for each resolution), $\sigma_{bb}$ is invariant to pulse length. It is expected that this result holds for 3D environments as well if the roughness is isotropic, although numerical tests using this geometry are needed.
Based on these results, high-resolution systems should be able to reliably estimate scattering strength, and this work also confirms that it may be a stable quantity to use for seafloor remote sensing. However, for highly anisotropic, non-stationary scenarios, such as those studied by Olson _et al._ (2016, 2019); Lyons _et al._ (2010), the measured scattering strength may depend on pulse length.

The scintillation index (also called structure Wang and Bovik (2002), lacunarity Williams (2015), or contrast Marston and Plotnick (2015)) was shown to be highly dependent on all the parameters studied: resolution, grazing angle, spectral strength, and spectral exponent. For moderate to low grazing angles, $SI$ monotonically increases as the grazing angle decreases, the resolution cell decreases, and the spectral strength increases. $SI$ is similar for $\gamma=1.5$ and $\gamma=2$, but is larger for $\gamma=2.5$, for the values of $w$ chosen for these numerical experiments. Contrary to scattering strength, $SI$, and therefore the scattering process in general, is fundamentally different in the frequency and time domains for broadband pulses.

In Lyons _et al._ (2016), it was hypothesized that the physical cause of heavy-tailed statistics in high resolution sonar imagery was local tilting of the seafloor due to roughness wavelengths larger than the acoustic resolution. The scattered pressure amplitude (envelope) was modeled as a product between a random variable due to sub-resolution roughness, and a random variable that took into account the effect of tilting by longer wavelengths: the small-scale scattering strength evaluated at the nominal grazing angle modified by the local slope. Local tilting (due to large-wavelength roughness components) modulates the Rayleigh-distributed field (due to small-wavelength components) and causes the $SI$ to be greater than unity. An rms slope with an upper cutoff related to the acoustic resolution was used as the input parameter to a simplified version of the composite roughness model McDaniel and Gorman (1983), which was then used to compute the scintillation index. Here, the role that slopes at scales at or larger than the pulse length play in the intensity fluctuations is investigated. A representative function, inspired by the composite roughness model, is used to model the effect of slope modulation.

The composite roughness approximation McDaniel and Gorman (1983) uses separate scattering models on the small- and large-scale surfaces. The validity of the composite model is subject to the validity of the perturbation approximation applied to the small-scale surface, and of the Kirchhoff approximation applied to the large-scale surface. The perturbation approximation is valid when $k_{0}h_{s}$ is small Thorsos and Jackson (1989), where $h_{s}$ is the rms roughness of the small-scale surface. This parameter can be found by integrating the roughness spectrum over wavenumbers with magnitudes between $\pi/(A\Delta X)$ and $\pi/\ell_{0}$. In this work, $k_{0}h_{s}$ was less than 2.0 for all simulations. Since perturbation theory must be applicable on the small-scale surface, $k_{0}h_{s}<1$ was required. This is possible if the highest spectral strength for each value of $\gamma$ is ignored, and the restriction $\Delta X/\lambda_{0}\leq 8$ is made.
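A sketch of these roughness integrals is given below. It reproduces the spectral strengths of Eq. (43), the rms slopes and $k_{0}h$ values of Table 1, and the $k_{0}h_{s}$ criterion; the two-sided truncated power law $W(K)=w|K|^{-\gamma}$ and the cutoff conventions used are our assumptions inferred from the text, not a statement of the exact code used in this work:

```python
import numpy as np

c, f0 = 1500.0, 100e3
lam0, k0 = c / f0, 2 * np.pi * f0 / c
A = np.sqrt(np.pi / np.log(2))                  # Eq. (19), ~2.128
kl = 2 * np.pi / (112.5 * lam0)                 # outer-scale cutoff (assumed)
k_inner = np.pi / (lam0 / 6)                    # inner-scale cutoff, ell0 = lam0/6

def moment(w, gamma, n, k_lo, k_hi):
    # 2 * integral of K^n * w * K^(-gamma) over [k_lo, k_hi] (two-sided spectrum)
    p = n - gamma + 1
    return 2 * w * (k_hi**p - k_lo**p) / p

ku = k0 / (2 * A)                               # effective upper cutoff, ell' = A*lam0
w2 = 3e-5                                       # gamma = 2 spectral strength (m)
for gamma in (1.5, 2.0, 2.5):
    w = w2 * (3 - gamma) * (ku - kl) / (ku**(3 - gamma) - kl**(3 - gamma))  # Eq. (43)
    s = np.degrees(np.sqrt(moment(w, gamma, 2, kl, ku)))    # rms slope, ell' = A*lam0
    kh = k0 * np.sqrt(moment(w, gamma, 0, kl, k_inner))     # k0*h, full spectrum
    hs = np.sqrt(moment(w, gamma, 0, np.pi / (A * 2 * lam0), k_inner))  # dX = 2*lam0
    print(gamma, w, s, kh, k0 * hs)
# Output reproduces the corresponding Table 1 rows (w = 4.40e-6 and 1.78e-4,
# s ~ 4.3 deg, k0*h = 1.23/1.68/2.41) and gives k0*h_s < 1 at dX = 2*lam0.
```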
We found that scattering strength was well modeled by perturbation theory for a few cases where $k_{0}h>1$ (including all the wavenumber components), but exclude these cases since the validity region of this model for general power-law roughness spectra has not been previously studied. In McDaniel and Gorman (1983); Jackson _et al._ (1986b), the validity of the Kirchhoff approximation was stated to be a function of the rms radius of curvature of the large-scale surface. However, Thorsos (1988) found that the radius of curvature is not important for the Kirchhoff approximation, but that the characteristic length of the surface, $L_{c}$, must be large compared to the wavelength. In the separation of scales performed here, the characteristic length of the large-scale surface is approximately equal to $\Delta X$, which is always $\lambda_{0}$ or greater. Therefore $k_{0}L_{c}\geq 2\pi$, which is typically acceptable for the Kirchhoff approximation Thorsos (1988). At low grazing angles, which are most relevant here, the Kirchhoff approximation fails due to the presence of multiple scattering, which is important when $\theta_{i}$ and $\theta_{s}$ are each less than or equal to $2\epsilon$, twice the large-scale rms slope angle Thorsos (1988); Thorsos and Jackson (1991). Shadowing is another source of inaccuracy, which occurs when $\theta_{i}$ and $\theta_{s}$ are less than or equal to $\epsilon$ Thorsos (1988). The large-scale slope angles can be seen in Table 1. For the rms slope angles that are based on the acoustic resolution as the upper cutoff, the global maximum is about 5∘. Since this number is comparable to the smallest grazing angles examined here, both multiple scattering and shadowing may be present in these cases.

To explore the local tilting hypothesis, the relationship between the intensity as a function of space, $I_{\Delta X}(x)$ (which is $I(t^{\prime})$ for a given resolution mapped to $x$ through $x=-ct^{\prime}/(2\cos\theta)$ for backscattering), and the large-scale surface slope at position $x$ is investigated. $I_{\Delta X}(x)$ should decrease if the large-scale slope at $x$ is positive, and increase if it is negative. Consequently, there should be a statistical correlation between $I_{\Delta X}(x)$, the intensity at a given resolution at position $x$, and $-s_{\Delta X}(x)$, the negative of the slope field low-pass filtered to remove wavelengths shorter than $\Delta X$. More specifically, the scattered intensity from the integral equations can be compared to the intensity produced by the effect of tilting in the composite roughness approximation. A simplified version of this approximation is used, which is explained in the following paragraphs.

The composite model requires two types of averages, one over the small scale and one over the large scale. The large-scale average is a simple average of the small-scale scattering cross section over the pdf of the large-scale slopes. The small-scale average results in the scattering cross section from the small-scale roughness.
Using the results of McDaniel and Gorman (1983), the composite model may be written as (after converting between differing conventions of the roughness power spectrum, and adapting to the 1D roughness used here)

$\displaystyle\sigma_{CR}=4k^{3}\langle\sin^{4}(\theta_{i}-\epsilon_{L})W_{s}(\Delta K_{mod})\rangle_{L}$ (44)

$\displaystyle\Delta K_{mod}=\Delta K+\frac{df_{L}}{dx}\Delta k_{z}$ (45)

where $\langle\cdot\rangle_{L}$ denotes averaging over the large-scale slope, and $\epsilon_{L}=\tan^{-1}(df_{L}/dx)$ is the large-scale slope angle at position $x$. The power spectrum of the small-scale roughness is denoted $W_{s}(\Delta K_{mod})$, where the modified Bragg wavenumber, $\Delta K_{mod}$, is specialized to the backscattering case and takes into account the effect of tilting on the Bragg components. The difference between the vertical components of the scattered and incident wave vectors is $\Delta k_{z}=2k_{0}\sin\theta_{i}$, and the horizontal wavenumber difference is $\Delta K=2k_{0}\cos\theta_{i}$. Although not present in most applications of the composite roughness model (e.g. Kur’yanov (1963); Bachmann (1973); McDaniel and Gorman (1983)), the modulation of the Bragg wavenumber due to large-scale slopes may be significant. The form of the Bragg modulation is given in an intermediate result (Eq. (22)) of McDaniel and Gorman (1983), and adapted to the present notation.

This model was formulated in the frequency domain, and is not directly applicable to these broadband simulations. However, in the broadband case, the effect of tilting would be preserved, even if the scattered power from the small-scale surface is altered. Because of this simplification and the broadband nature of the simulations, this model is called the slope modulation model (SMM) instead of the two-scale, or composite roughness, model. Terms that are constant as a function of the large-scale slope were ignored in the SMM. Averaging over the large-scale slopes was not performed in the model, in order to compare the fluctuations produced by tilting to the fluctuations from the integral equation results for a given ensemble. The intensity fluctuations caused by tilting can be written as

$\displaystyle\begin{split}I^{SMM}_{{\Delta X}}(x)=\sin^{4}&\left(\theta_{i}-\tan^{-1}\left[\frac{df_{\Delta X}}{dx}(x)\right]\right)\\ \times&W(\Delta K+\Delta k_{z}\,\frac{df_{\Delta X}}{dx}(x))\end{split}$ (46)

where the subscript $\Delta X$ on the intensity and slope explicitly denotes that a Gaussian function with 3 dB width $\Delta X$ was used to filter the large-scale slopes. The filtered slope is defined by

$\displaystyle\frac{df_{\Delta X_{surf}}}{dx}(x)=\int\limits_{-\infty}^{\infty}\frac{df}{dx}(x^{\prime}-x)e^{-x^{\prime 2}\ln(4)/(\Delta X_{surf})^{2}}\mathrm{d}x^{\prime}\,.$ (47)

Eq. (46) has been averaged over the small-scale roughness, but not over the large-scale slopes. It is important to note that in the integral equation simulations, it is impossible to perform this partial averaging. Thus, taking the variance of Eq. (46) does not result in the scintillation index of the scattered pressure, but only in the variance of the slope-induced fluctuations. Taking the limit of zero large-scale rms slope in the previous equation results in no slope-induced fluctuations, and the fluctuations in the total scattered field would have a Rayleigh distribution, with a scintillation index of unity.
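A minimal sketch of the SMM, Eqs. (46)-(47), is given below (ours; the inputs are placeholders for quantities produced by the simulations, the spectrum is taken as even in wavenumber, and the filter is normalized for convenience, which does not affect the correlation analysis that follows):

```python
import numpy as np

def smm_intensity(x, f_surf, W_s, theta_i, k0, dX_surf):
    """Slope modulation model, Eqs. (46)-(47). W_s is the small-scale
    roughness spectrum supplied as a callable of wavenumber magnitude."""
    dx = x[1] - x[0]
    slope = np.gradient(f_surf, dx)
    # Eq. (47): zero-phase Gaussian filter with 3 dB full width dX_surf
    xk = np.arange(-4 * dX_surf, 4 * dX_surf + dx, dx)
    kern = np.exp(-xk**2 * np.log(4) / dX_surf**2)
    kern /= kern.sum()                  # normalization; irrelevant to correlations
    slope_L = np.convolve(slope, kern, mode="same")
    dK = 2 * k0 * np.cos(theta_i)       # horizontal Bragg wavenumber difference
    dKz = 2 * k0 * np.sin(theta_i)      # vertical wavenumber difference
    arg = theta_i - np.arctan(slope_L)
    # Eq. (46), zeroed where arg < 0 (the rudimentary shadowing described below)
    return np.where(arg > 0,
                    np.sin(arg) ** 4 * W_s(np.abs(dK + dKz * slope_L)),
                    0.0)
```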
It is strongly emphasized that this model is only a representative function of the effect of slopes at the scale of the pulse resolution, and is not an adequate model for the scattered intensity. This inadequacy is due to the broadband nature of these simulations, and the formulation of the composite model in the frequency domain. The integral equation results are considered to be the accurate “ground truth” here. Although not shown in Eq. (46), $I^{SMM}_{{\Delta X}}(x)=0$ if the argument to the sine function is less than zero, to include a rudimentary form of shadowing. This form of shadowing has a small effect, and the results presented below are essentially unchanged if it is left out. Non-local shadowing may have a significant effect, but is not examined here. These simplifications are employed because a rigorous formulation of the composite roughness approximation for very broadband signals is not available in the literature. Development of such a model, and comparison to results from the broadband integral equation technique developed here, are both fruitful areas for future work.

Similarity between the intensity from the numerical solutions and the model predictions can be quantified in a crude but straightforward manner using the Pearson product-moment correlation coefficient, $\rho$, defined for random variables $U$ and $V$ by

$\displaystyle\rho\left(U,V\right)=\frac{\langle\left(U-\langle U\rangle\right)\left(V-\langle V\rangle\right)\rangle}{\sqrt{\langle\left(U-\langle U\rangle\right)^{2}\rangle\langle\left(V-\langle V\rangle\right)^{2}\rangle}}$ (48)

This coefficient quantifies the linear variation between a dependent and an independent variable. The correlation coefficient is used here because a rigorous application of the composite roughness approximation was not used, and absolute intensity cannot be compared directly. The local tilting hypothesis is tested by forming the correlation coefficient between the scattered intensity from the numerical simulations and $I^{SMM}_{{\Delta X}}(x)$. Both the acoustic resolution and the surface filter size are varied. The surface filter scale that maximizes the correlation for each acoustic resolution is estimated, and this process is repeated for all acoustic resolutions. The acoustic resolution is $\Delta X$, and the surface filter size is $\Delta X_{\textrm{surf}}$. The $\Delta X_{\textrm{surf}}$ that maximizes $\rho$ is denoted $\Delta X_{\textrm{max}}$. If $\Delta X_{\textrm{max}}$ varies in proportion to $\Delta X$, then it may be concluded 1) that slope modulation is responsible in part for the intensity fluctuations, and 2) that slopes at (or larger than) the scale of the acoustic resolution are responsible in part for the fluctuations. Note that the surface filters used here are zero-phase, acausal filters (Eq. (47)), and are applied in the same way as the Fourier synthesis used to obtain the time-domain scattered pressure, Eq. (23).

Figure 8: Correlation coefficient as a function of the acoustic resolution, $\Delta X$, and the surface filter size, $\Delta X_{surf}$. The Gaussian pulse shape has been used as the surface filter here, as specified by Eq. (47).

Figure 9: (color online) Surface slope filter scale that maximizes the product-moment cross-correlation coefficient for each acoustic resolution. The grazing angle has been held constant at $20^{\circ}$, and each line represents a different spectral strength.
Each subplot contains a different spectral exponent, with (a) $\gamma=1.5$, (b) $\gamma=2$, and (c) $\gamma=2.5$. Dashed black lines have a slope of unity and an intercept of zero for reference. Note that the Gaussian pulse shape, Eq. (47), has been used as the surface filter.

The correlation coefficient for the parameters $\gamma=2$, $\theta_{i}=20^{\circ}$, and $w=3\times 10^{-5}$ m is plotted in Fig. 8. $\Delta X$ is on the horizontal axis, $\Delta X_{\textrm{surf}}$ is on the vertical axis, and $\rho$ is denoted by grayscale. Holding $\Delta X$ constant, there is a distinct peak in $\rho$ as a function of $\Delta X_{\textrm{surf}}$. The peak value of $\rho$ increases as the acoustic resolution becomes small, indicating that the role of slope modulation is greater for smaller resolution cells. Additionally, the peak location in $\Delta X_{\textrm{surf}}$ varies with $\Delta X$, indicating that the local tilting hypothesis may be correct. This plot has a similar structure for other $\theta_{i}$, $\gamma$, and $w$.

$\Delta X_{\textrm{max}}$ as a function of $\Delta X$ is plotted in Fig. 9. $\theta_{i}$ is constant at $20^{\circ}$ grazing angle, and each spectral strength is plotted as its own line. The different values of $\gamma$ appear in subfigures. For each spectral strength and exponent, $\Delta X_{\textrm{surf}}$ varies monotonically with the acoustic resolution, except for the smallest spectral strength for $\gamma=2.5$. A line with unit slope and zero intercept is also plotted for reference. For many cases, the slope of these lines is approximately unity when the acoustic resolution is small. Some of the lines retain that slope when the acoustic resolution is larger, but others taper off and have a smaller slope. For example, the smallest roughness cases for $\gamma=2$ and $\gamma=2.5$ curve downwards to an approximately constant function at large $\Delta X/\lambda_{0}$. For these two curves in particular, the scintillation index was approximately unity, and therefore slope modulation was not required to explain their behavior. Other cases where the slope of these lines becomes small for large $\Delta X/\lambda_{0}$, but which result in $SI>1$, indicate that slopes at scales smaller than the pulse resolution are the most important. Future work modeling broadband scattering from power-law surfaces is needed to explain this behavior.

The monotonic dependence of $\Delta X_{\textrm{max}}$ on $\Delta X$ indicates that the large-scale slope near the acoustic resolution accounts for a significant part of the intensity fluctuations for short pulses. The unit slope for some of these cases indicates that the 3 dB width of the point spread function is sometimes the appropriate scale for calculating the large-scale slope, and is a good estimate for all cases. For longer pulses, the lines in Fig. 9 have slopes generally less than unity, meaning that the horizontal roughness components that are important for slope modulation are not exactly the same as the 3 dB width of the incident intensity, but are proportionally slightly smaller. This departure from unit slope may indicate that the slope modulation model is not sufficient to explain the data, and that other effects, such as multiple scattering or shadowing, may be present. The same analysis was performed for the large-scale surface heights, and no systematic trends were found. Therefore, the large-scale rough interface height at a single point is uncorrelated with the scattered intensity mapped to that point.
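The search for $\Delta X_{\textrm{max}}$ can be sketched as follows, using Eq. (48) and the `smm_intensity` sketch above (again, `I_sim` and the other inputs are placeholders for the simulation outputs, not the code used here):

```python
import numpy as np

def pearson(u, v):
    # Eq. (48): Pearson product-moment correlation coefficient
    du, dv = u - u.mean(), v - v.mean()
    return (du * dv).mean() / np.sqrt((du**2).mean() * (dv**2).mean())

def find_dx_max(x, f_surf, W_s, theta_i, k0, I_sim, trial_scales):
    # Scan trial filter scales dX_surf and return the one maximizing rho
    rho = np.array([pearson(I_sim, smm_intensity(x, f_surf, W_s, theta_i, k0, s))
                    for s in trial_scales])
    return trial_scales[np.argmax(rho)], rho.max()
```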
To understand what kind of intensity fluctuations can be caused by this model for the large-scale rms slope values in these numerical simulations, the model for $SI$ from Lyons _et al._ (2016) was applied to the case of $w=3\times 10^{-5}$ m, $\gamma=2$, and $\Delta X=2\lambda_{0}$. This model accounts for fluctuations due to large-scale tilting, as well as Rayleigh-distributed scattering from the small-scale roughness, to produce the scintillation index. In the model of Lyons _et al._ (2016), $SI>1$ if the large-scale slope variance is greater than 0 and the small-scale scattering cross section is neither constant nor linear in $\theta_{i}$. The rms slope, taken from Table 1, was $2.99^{\circ}\approx 3^{\circ}$ ($s_{2A\lambda_{0}}$). A model-data comparison is shown in Fig. 10. The model follows the same trends as the integral equations. It is slightly higher than 1 at high grazing angles, and increases as the grazing angle becomes small. However, the model $SI$ increases less rapidly than the integral equation simulation at low grazing angles. Given the evidence from Figs. 8 and 9, as well as the evidence that the composite roughness model produces non-Rayleigh behavior similar to the integral equations, it may be concluded that the tilting hypothesis is responsible in part for the intensity fluctuations. Note that both the $\sin^{4}$ term and the Bragg modulation term from Eq. (46) were included to account for fluctuations due to slopes. Neglecting the Bragg term decreases $SI$ by about 5% at 10∘, and by about 20% at 70∘ grazing angle. Including modifications to the Bragg wavenumber is responsible for the good agreement at high and moderate grazing angles.

Since the model of Lyons _et al._ (2016) underestimates $SI$, especially for small grazing angles, the tilting hypothesis cannot be fully responsible for the heavy-tailed scattering seen here. We suspect multiple scattering and shadowing may be responsible for the model-simulation mismatch at low angles, since the large-scale rms slope is 3∘ for the surface in Fig. 10, which is comparable to the smallest grazing angles examined Thorsos (1988); Thorsos and Jackson (1991). Shadowing may increase $SI$, since it increases the degree of intensity fluctuations. Multiple scattering may move $SI$ towards unity, since scattering from multiple locations on the rough interface may increase the effective ensonified length and contribute an additive Gaussian component to the scattered pressure.

Figure 10: Comparison of the $SI$ from the integral equation results with the model of Lyons _et al._ (2016), for the parameters $w=3\times 10^{-5}$ m, $\gamma=2$, and $\Delta X=2\lambda_{0}$.

A physically accurate theoretical model that includes all of these effects is evidently required to predict the scintillation index at low grazing angles. Such a model is, at this time, not available, and is a fruitful opportunity for future research. The independence of the broadband scattering cross section from resolution established here imposes a useful constraint: any theoretical model that breaks the solution of the exact integral equation into physically interpretable phenomena (such as tilting, shadowing, or multiple scattering) must also satisfy this constraint.

## IX Conclusion

In this work, Fourier synthesis combined with numerical solution of the Helmholtz integral equation was used to analyze the scattered field in terms of the broadband scattering cross section and scintillation index.
The dependence of these two quantities on acoustic resolution was examined to understand the effects that contemporary high-resolution acoustic imaging systems have on quantitative measurements of seafloor scattering. For power-law surfaces, the scattering strength is independent of pulse length, which indicates that it is a stable quantity to use across measurement systems with different geometries (at least for the kinds of rough surfaces examined here). The scintillation index depends strongly on pulse length, which indicates that the scattering process is fundamentally different in the time and frequency domains, and that further research is needed to understand or predict intensity fluctuations in high-resolution broadband sonar systems. Although the simulations were performed in two dimensions, these results may be present in three dimensions as well, if the roughness spectrum is assumed to be isotropic. The exact values of the scintillation index will be different for 3D environments, as the rms slope is calculated differently, and out-of-plane effects may be important.

Heavy-tailed, or non-Rayleigh, scattering is commonly observed in scattering measurements and is usually attributed to non-stationary, or patchy, environments Abraham and Lyons (2004); Lyons _et al._ (2009). Heavy-tailed statistics have been observed in seemingly homogeneous seafloors by Lyons _et al._ (2016), and these numerical simulations have verified that statistically homogeneous surfaces can produce heavy-tailed statistics when interrogated by a broadband high-resolution system. The slope modulation model used in Lyons _et al._ (2016) was investigated, and it was found to be in part responsible for the intensity fluctuations. Other sources of fluctuations, such as shadowing and multiple scattering, were found to be necessary to accurately model the scintillation index. Further research is required to better understand this aspect of seafloor scattering.

A few consequences are noted for heavy-tailed statistics arising in high-resolution systems in homogeneous roughness environments. Heavy-tailed statistics are a significant source of false alarms in acoustic target detection systems. Since the scintillation index increases at low grazing angles, long-range systems may suffer decreased performance. The benefit of high-resolution systems, more pixels per target, may be offset by the increased false alarms.

There may also be some benefits to the resolution dependence of the pdf of the scattered field. Some autofocus algorithms for synthetic aperture systems (e.g. Blacknell _et al._ (1992); Callow (2003); Marston and Plotnick (2015)) use the scintillation index, or contrast, as their cost function. If the acoustic field is entirely due to point scatterers, as is commonly assumed Brown _et al._ (2017), $SI$ will be unity for all resolutions. If an autofocus algorithm is applied, and the point spread function of the imaging algorithm becomes smaller, then the field will still be Rayleigh and the contrast will not increase. Therefore, an autofocus algorithm based on $SI$ or lacunarity will not be sensitive to the focus unless there are discrete scatterers in the scene. However, as shown here, $SI$ is a strong function of resolution for statistically homogeneous power-law surfaces, especially at low grazing angles. Improving the focus at low grazing angles will lead to an increase in $SI$, and thus the autofocus algorithm will be more sensitive to the actual degree of focus for seemingly featureless seafloors.
Median filters are often used to “remove” the speckle or intensity fluctuations from acoustic or electromagnetic images before use in remote sensing or target detection algorithms (e.g. Kwon and Nasrabadi (2005); Williams (2015, 2018); Galusha _et al._ (2018) and references therein). Although the broadband scattering cross section, which uses the arithmetic mean of the intensity, is insensitive to resolution, the median is not, since the median is highly dependent on the probability density function. Thus either the broadband cross section can be used as a resolution-independent quantity that has a large variance, or its variance can be reduced with the consequence that the pixel intensity will no longer be directly related to the broadband scattering strength.

###### Acknowledgements.
The authors thank the US Office of Naval Research for financial support of this work through grants N00014-13-1-0056, N00014-18-WX-00769, N00014-19-WX-00427, and N00014-16-1-2268, as well as the Research Initiation Program of the Naval Postgraduate School. Numerical simulations were performed on the Hamming high-performance computing cluster at the Naval Postgraduate School. The authors thank the anonymous reviewers for their comments on previous versions of the manuscript.

## Appendix A

In this appendix, the steps are given to generate random realizations of a given roughness power spectrum. Here, the method of Thorsos (1988) is followed, specifically Eqs. (17) and (18). The spatial frequency is $u=K/(2\pi)$. The sampling of this vector is set by the surface length, $L$, and sampling interval, $\delta x$. The number of points, $N$, is $L/\delta x$, and the spatial frequency spacing is $\delta u=1/L$. A spatial frequency vector is specified, $u_{m}=m\delta u$ for $m\in[-N/2+1,N/2]$. Then, a sampled power spectrum vector is created, $W_{m}=W(2\pi u_{m})$. To create a random realization from this power spectrum, a sampled version of a randomized complex amplitude spectrum, $F$, is defined by

$\displaystyle F_{m}=\sqrt{2\pi W_{m}L}\,\mu_{m}$ (49)

where $\mu_{m}$ are elements of a complex random vector with unit variance. To generate a real vector of heights, $f_{n}$, the amplitude spectrum $F$ must have conjugate symmetry. This property is achieved by setting

$\displaystyle\mu_{m}=\begin{cases}\frac{N(0,1)+iN(0,1)}{\sqrt{2}}&m\in[1,N/2-1]\\ \mu_{-m}^{\ast}&m<0\\ N(0,1)&m=0,N/2\end{cases}\,,$ (50)

where $N(0,1)$ is an independent random draw from the normal distribution with zero mean and unit variance. The first case creates a random vector for positive spatial frequencies, and the second case enforces conjugate symmetry. The third case stipulates that points in the sampled spatial frequency vector that do not have symmetric pairs (the point at zero frequency, and the single Nyquist point) are real. This method is valid for even values of $N$. For odd $N$, $m\in[-(N-1)/2,(N-1)/2]$, the lower Nyquist point, $m=-(N-1)/2$, must be the complex conjugate of the point $m=(N-1)/2$, and only the case $m=0$ should be real. The rough interface $f(x)$ is found through the inverse discrete Fourier transform. For even $N$, this is

$\displaystyle f_{n}=f(n\delta x)=\delta u\sum\limits_{m=-N/2+1}^{N/2}F_{m}e^{i2\pi\frac{mn}{N}}\,.$ (51)

For odd $N$, the sum is the same, except that the limits are from $m=-(N-1)/2$ to $(N-1)/2$. In practice, this sum is approximated using the fast Fourier transform algorithm.
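For reference, the procedure above can be implemented in a few lines of Python (shown for even $N$; the power-law spectrum in the example, the zeroing of the $K=0$ component to give a zero-mean surface, and all variable names are our illustrative choices, not the original MATLAB implementation):

```python
import numpy as np

def generate_surface(L, dx, W, rng=None):
    """One realization f(x) of a 1D rough surface from a roughness power
    spectrum W(K), following Eqs. (49)-(51); assumes even N = L/dx."""
    rng = np.random.default_rng(rng)
    N = int(round(L / dx))
    du = 1.0 / L
    # Spatial frequencies in FFT order: m = 0, 1, ..., N/2-1, -N/2, ..., -1.
    # The -N/2 point is equivalent to the +N/2 point of Eq. (51), since
    # exp(+i*pi*n) = exp(-i*pi*n) for integer n.
    m = np.fft.fftfreq(N, d=1.0 / N).astype(int)
    Wm = W(2.0 * np.pi * np.abs(m) * du)       # sampled spectrum (even in K)
    Wm[m == 0] = 0.0                           # zero-mean surface (our choice)

    # Complex amplitudes mu_m with conjugate symmetry, Eq. (50)
    mu = np.zeros(N, dtype=complex)
    pos = np.arange(1, N // 2)                 # m = 1 .. N/2 - 1
    mu[pos] = (rng.standard_normal(pos.size)
               + 1j * rng.standard_normal(pos.size)) / np.sqrt(2.0)
    mu[-pos] = np.conj(mu[pos])                # m < 0: enforces real f(x)
    mu[0] = rng.standard_normal()              # zero frequency: real
    mu[N // 2] = rng.standard_normal()         # Nyquist point: real

    F = np.sqrt(2.0 * np.pi * Wm * L) * mu     # Eq. (49)
    # Inverse DFT, Eq. (51): numpy's ifft divides by N, so multiply it back
    f = N * du * np.fft.ifft(F)
    return f.real                              # imaginary part is round-off

# Example: w = 3e-5, gamma = 2 power law (spectrum form is illustrative)
surface = generate_surface(L=10.0, dx=0.005,
                           W=lambda K: 3e-5 / np.where(K > 0.0, K, np.inf)**2)
```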
MATLAB was used to generate the rough surfaces; its FFT routines are built on the FFTW (“Fastest Fourier Transform in the West”) library Frigo and Johnson (2005). This library requires that the input spatial frequency spectrum be contained in a vector starting with the $m=0$ point, followed by the points with $m>0$, followed by the points with $m<0$. The FFTW library also divides by $N$ when performing the inverse FFT, so the output must be multiplied by $N$, and by $\delta u$, to form the sum in Eq. (51).

## Appendix B

Here, the effective energy flux is calculated for the incident field used in this work. Effects of the $w_{t}$ term in the extended Gaussian beam are not included, since analytically tractable results are not available for that case. Although the full extended Gaussian beam was used as the incident field in the numerical simulations, the error introduced by omitting it in calculating the energy flux was insignificant, as the parameter $(k_{0}g\sin\theta_{i})^{-1}$ was always small, with a maximum of $3.7\times 10^{-3}$. Setting $w_{t}=0$, the incident pressure for a broadband, tapered pulse is given by

$\displaystyle\begin{split}p_{i}(\mathbf{r},t)=p_{0}&\psi(\mathbf{r})\\ \times&\exp\left(i\omega_{0}(t-t_{i}(\mathbf{r}))\right)\\ \times&\exp\left(-(t-t_{i}(\mathbf{r}))^{2}/\tau^{2}\right)\,,\end{split}$ (52)

where

$\displaystyle t_{i}(\mathbf{r})=\frac{\mathbf{r}\cdot\hat{\mathbf{k}}_{i}}{c}=\frac{-x\cos\theta_{i}-z\sin\theta_{i}}{c},$ (53)

and

$\displaystyle\psi(\mathbf{r})=\exp\left(-(x-z\cot\theta_{i})^{2}/g^{2}\right)$ (54)

is the spatial taper. The vertical component of the acoustic particle velocity can be computed as (Pierce, 1994, p. 19)

$\displaystyle v_{z}(\mathbf{r},t)=\frac{\partial}{\partial z}\Phi(\mathbf{r},t),$ (55)

and the acoustic pressure as

$\displaystyle p(\mathbf{r},t)=-\rho_{0}\frac{\partial}{\partial t}\Phi(\mathbf{r},t),$ (56)

where $\Phi$ is the acoustic velocity potential. Using Eq. (52) in (56), the velocity potential can be computed using Eqs. 3.322(1-2) from Gradshteyn and Ryzhik (2007),

$\displaystyle\Phi(\mathbf{r},t)=\frac{-p_{0}}{\rho_{0}}\psi(\mathbf{r})M(t-t_{i}(\mathbf{r}),f_{0},\tau)$ (57)

where

$\displaystyle M(t,f_{0},\tau)=\frac{\sqrt{\pi}\tau}{2}e^{-\pi^{2}f_{0}^{2}\tau^{2}}\operatorname{erf}\left[t/\tau-i\pi f_{0}\tau\right],$ (58)

and $\operatorname{erf}\left[z\right]$ is the error function (Abramowitz and Stegun, 1972, p. 297). If the opposite time convention is used, then the minus sign inside the error function argument must be changed to a plus. In the integration used to compute Eq. (57), an integration constant independent of time is required, but it is assumed to be zero here, since derivatives are taken to compute $p$ and $v_{z}$. Inserting Eq. (57) into Eq. (55), one obtains

$\displaystyle\begin{split}v_{iz}(\mathbf{r},t)=-\frac{p_{0}}{\rho_{0}}&\left[\frac{2\cot\theta_{i}}{g^{2}}(x-z\cot\theta_{i})\psi(\mathbf{r})M(t-t_{i}(\mathbf{r}),f_{0},\tau)\right.\\ &\left.+\frac{\sin\theta_{i}\psi(\mathbf{r})}{c}\exp\left(i\omega_{0}(t-t_{i}(\mathbf{r}))-(t-t_{i}(\mathbf{r}))^{2}/\tau^{2}\right)\right]\,.\end{split}$ (59)

To compute the effective energy flux passing through the $z=0$ plane, Eq. (28) is computed using Eqs. (52) and (59). The time variable is set to $t=R/c+t^{\prime}$, to match the convention in Eq. (23).
Making these substitutions,

$\displaystyle E_{f}^{\prime}=\frac{|p_{0}|^{2}}{2\rho_{0}}\left(\frac{2\cot\theta_{i}}{g^{2}}I_{1}+\frac{\sin\theta_{i}}{c}I_{2}\right)\,.$ (60)

The two integrals are defined as

$\displaystyle\begin{split}I_{1}&=\frac{\sqrt{\pi}\tau}{2}e^{-(\omega_{0}\tau/2)^{2}}\\ &\times Re\biggl[\int\limits_{-\infty}^{\infty}x\,\mathrm{erf}\left(\frac{t^{\prime}}{\tau}+\frac{x\nu_{x}}{c\tau}+i\frac{\omega_{0}\tau}{2}\right)\biggr.\\ &\times\biggl.e^{-2x^{2}/g^{2}-\left(\frac{t^{\prime}}{\tau}+\frac{x\nu_{x}}{c\tau}\right)^{2}+i\omega_{0}\tau\left(\frac{t^{\prime}}{\tau}+\frac{x\nu_{x}}{c\tau}\right)}\,\mathrm{d}x\biggr]\end{split}$ (61)

$\displaystyle I_{2}=\int\limits_{-\infty}^{\infty}e^{-2x^{2}/g^{2}-2(t^{\prime}+x\nu_{x}/c)^{2}/\tau^{2}}\,\mathrm{d}x,$ (62)

where the identity $\mathrm{erf}^{\ast}(z)=\mathrm{erf}(z^{\ast})$ was used. The integral $I_{1}$ may be cast into a generic form by completing the square in the exponent, resulting in

$\displaystyle I_{1}=\frac{\sqrt{\pi}\tau}{2}Re\left[e^{-\zeta}\int\limits_{-\infty}^{\infty}x\,\mathrm{erf}(ax+b)e^{-(\alpha x+\beta)^{2}}\,\mathrm{d}x\right]\,,$ (63)

with

$\displaystyle a=\nu_{x}/(c\tau)$ (64)
$\displaystyle b=t^{\prime}/\tau+i\frac{\omega_{0}\tau}{2}$ (65)
$\displaystyle\alpha=\sqrt{\frac{2}{g^{2}}+\left(\frac{\nu_{x}}{c\tau}\right)^{2}}$ (66)
$\displaystyle\beta=\frac{ab^{\ast}}{\alpha}$ (67)
$\displaystyle\zeta=(b^{\ast})^{2}+2\left(\frac{\omega_{0}\tau}{2}\right)^{2}-\left(\frac{ab^{\ast}}{\alpha}\right)^{2}\,.$ (68)

This integral can be transformed via integration by parts, $\int u\,\mathrm{d}v=uv|_{-\infty}^{\infty}-\int v\,\mathrm{d}u$, using

$\displaystyle u=\mathrm{erf}(ax+b)$ (69)
$\displaystyle dv=xe^{-(\alpha x+\beta)^{2}}dx$ (70)
$\displaystyle du=\frac{2a}{\sqrt{\pi}}e^{-(ax+b)^{2}}dx$ (71)
$\displaystyle v=(2\alpha^{2})^{-1}\left(-e^{-(\alpha x+\beta)^{2}}-\beta\sqrt{\pi}\mathrm{erf}(\alpha x+\beta)\right).$ (72)

Using the large-argument expansion of the error function, the product of the error function terms in $uv$ evaluated at $+\infty$ is 1, and is 1 at $x=-\infty$, so the difference is zero. The exponential in $v$ evaluated at $\pm\infty$ is zero. Thus the $uv$ term evaluates to zero. Turning to $-\int v\,\mathrm{d}u$,

$\displaystyle\begin{split}-\int v\,\mathrm{d}u=&\frac{a}{\sqrt{\pi}\alpha^{2}}\int\limits_{-\infty}^{\infty}e^{-(ax+b)^{2}}\\ \times&\left(e^{-(\alpha x+\beta)^{2}}+\beta\sqrt{\pi}\mathrm{erf}(\alpha x+\beta)\right)\,\mathrm{d}x\,.\end{split}$ (73)

There are two integrals in the previous equation:

$\displaystyle I_{11}=\int\limits_{-\infty}^{\infty}e^{-(ax+b)^{2}-(\alpha x+\beta)^{2}}\,\mathrm{d}x$ (74)
$\displaystyle I_{12}=\beta\sqrt{\pi}\int\limits_{-\infty}^{\infty}\mathrm{erf}(\alpha x+\beta)e^{-(ax+b)^{2}}\,\mathrm{d}x\,.$ (75)

$I_{11}$ is straightforward to compute, and is

$\displaystyle I_{11}=\frac{\sqrt{\pi}}{\sqrt{a^{2}+\alpha^{2}}}e^{-(b\alpha-a\beta)^{2}/(a^{2}+\alpha^{2})}\,,$ (76)

with the restriction that both $a$ and $\alpha$ are greater than zero (which is satisfied here). $I_{12}$ can be solved using the following tabulated integral, which is Eq. (13) of Section 4.3 in Ng and Geller (1969),

$\displaystyle\int\limits_{-\infty}^{\infty}\mathrm{erf}(y)e^{-(zy+w)^{2}}\,\mathrm{d}y=-\frac{\sqrt{\pi}}{z}\mathrm{erf}\left(\frac{w}{\sqrt{z^{2}+1}}\right).$ (77)
Using Eq. (77) in combination with the substitutions

$\displaystyle y=\alpha x+\beta$ (78)
$\displaystyle z=\frac{a}{\alpha}$ (79)
$\displaystyle w=b-\frac{\beta a}{\alpha}$ (80)

the integral can be calculated as

$\displaystyle I_{12}=-\frac{\beta\pi}{a}\mathrm{erf}\left(\frac{b\alpha-a\beta}{\sqrt{\alpha^{2}+a^{2}}}\right)\,.$ (81)

For this result to hold, $Re[z^{2}]=Re[(a/\alpha)^{2}]$ must be greater than zero, which is satisfied for the parameters used here. Note that $y$ is complex, so the integration limits in Eq. (77) should have the imaginary part of $\beta$ added to both. In effect, this changes the integral from one over the real line to one over a line with a constant imaginary part, parallel to the real line. This change has no effect on the integral for the parameters in this work; it has been verified using numerical quadrature that Eq. (81) produces the correct result. Putting these results together, the integral $I_{1}$ can be calculated as

$\displaystyle I_{1}=\tau\frac{a}{2\alpha^{2}}\begin{aligned}Re&\left\{e^{-\zeta}\left[\frac{\sqrt{\pi}}{\sqrt{a^{2}+\alpha^{2}}}e^{-(b\alpha-a\beta)^{2}/(a^{2}+\alpha^{2})}\right.\right.\\ -&\left.\left.\frac{\pi\beta}{a}\mathrm{erf}\left(\frac{b\alpha-a\beta}{\sqrt{\alpha^{2}+a^{2}}}\right)\right]\right\}\,.\end{aligned}$ (82)

Turning to $I_{2}$, it can be computed as

$\displaystyle I_{2}=\sqrt{\frac{\pi}{2}}\frac{e^{-\frac{2(ct^{\prime}/\nu_{x})^{2}}{g^{2}+(c\tau/\nu_{x})^{2}}}}{\sqrt{g^{-2}+(c\tau/\nu_{x})^{-2}}}\,.$ (83)

Using (82) and (83) in (60), the energy flux can be written as

$\displaystyle\begin{split}E_{f}^{\prime}&=\frac{|p_{0}|^{2}}{2\rho_{0}}\left(\frac{\sin\theta_{i}}{c}\sqrt{\frac{\pi}{2}}\frac{e^{-\frac{2(ct^{\prime}/\nu_{x})^{2}}{g^{2}+(c\tau/\nu_{x})^{2}}}}{\sqrt{g^{-2}+(c\tau/\nu_{x})^{-2}}}\right.+\frac{2\cot\theta_{i}}{g^{2}}\frac{\tau\nu_{x}}{2c\tau}\frac{1}{2/g^{2}+(\nu_{x}/(c\tau))^{2}}\mathrm{Re}\left\{\vphantom{\frac{\sqrt{\pi}}{\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}}}\right.\\ \times&\exp\left[\left(\left(\frac{\nu_{x}}{c\tau\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}}\right)^{2}-1\right)\left(\left(\frac{t^{\prime}}{\tau}\right)^{2}-i\omega_{0}\tau-\left(\frac{\omega_{0}\tau}{2}\right)^{2}\right)-2\left(\frac{\omega_{0}\tau}{2}\right)^{2}\right]\\ \times&\left(\frac{\sqrt{\pi}}{\sqrt{2}\sqrt{1/g^{2}+(\nu_{x}/(c\tau))^{2}}}\exp\left[-\left(\frac{b\alpha-a\beta}{\sqrt{a^{2}+\alpha^{2}}}\right)^{2}\right]\right.-\left.\left.\left.\frac{\pi(t^{\prime}/\tau-i\omega_{0}\tau/2)}{\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}}\operatorname{erf}\left[\frac{b\alpha-a\beta}{\sqrt{a^{2}+\alpha^{2}}}\right]\vphantom{\frac{\sqrt{\pi}}{\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}}}\right)\right\}\right)\,,\end{split}$ (84)

where the factor

$\displaystyle\frac{b\alpha-a\beta}{\sqrt{a^{2}+\alpha^{2}}}=\frac{\left(t^{\prime}/\tau+i\omega_{0}\tau/2\right)\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}-(\nu_{x}/(c\tau))^{2}\frac{t^{\prime}/\tau-i\omega_{0}\tau/2}{\sqrt{2/g^{2}+(\nu_{x}/(c\tau))^{2}}}}{\sqrt{2}\sqrt{1/g^{2}+(\nu_{x}/(c\tau))^{2}}}$ (85)

is not explicitly included in Eq. (84) for space reasons. It is advantageous to find a simplification for the error function with a complex argument appearing in Eq. (84). Many software packages implement the error function only for a real argument (e.g.
the basic MATLAB installation, the GNU C math library, or the GNU C++ cmath library), and an approximation of this special function in terms of elementary functions makes implementing the broadband scattering cross section much simpler numerically. The approximation derived below is very accurate for all parameters investigated here, and has a more succinct representation than Eq. (84). To see what kind of approximation is appropriate, it is necessary to know whether the error function argument is large or small. To estimate its order of magnitude, note that for this problem at moderate and low grazing angles, $g\gg c\tau/\nu_{x}$. Therefore, $\alpha\approx a$, and $\alpha^{2}+a^{2}\approx 2a^{2}$. Denoting the argument of the error function by $q$ and applying the previous approximation, it simplifies to

$\displaystyle q\approx\frac{b-b^{\ast}}{\sqrt{2}}=\frac{i\omega_{0}\tau}{\sqrt{2}}\,.$ (86)

The largest value of $f_{0}\tau$ examined in this work is 1.7, resulting in $q\approx 7.5i$. Therefore the magnitude of $q$ is large, and its real part is small compared to its imaginary part. The large-argument asymptotic series approximation is

$\displaystyle\mathrm{erf}(q)=1-\frac{e^{-q^{2}}}{q\sqrt{\pi}}\left(1-\frac{1}{2q^{2}}+\frac{3}{4q^{4}}+\dots\right).$ (87)

For $f_{0}\tau=1.7$, $\exp(-q^{2})\approx 10^{24}$, so the additive factor of 1 outside the parentheses can be safely neglected. The terms inside the parentheses become progressively smaller: the second term is about $10^{-2}$ and the third term is about $10^{-4}$ for $f_{0}\tau=1.7$, so retaining only the first term is appropriate. If longer pulse lengths are used, this approximation improves; if shorter pulse lengths are used, the approximation becomes poorer. If extremely short pulses are used, then more terms could be kept in the approximation. Keeping only the largest term in the asymptotic series of $\mathrm{erf}(q)$, the integral $I_{1}$ is approximately

$\displaystyle I_{1}\approx\sqrt{\frac{\pi}{2}}\frac{e^{-\frac{2(ct^{\prime}/\nu_{x})^{2}}{g^{2}+(c\tau/\nu_{x})^{2}}}}{\sqrt{a^{2}+\alpha^{2}}}\frac{\tau a\sqrt{2}}{2\alpha^{2}}\left(1+Re\left\{\frac{\beta}{a}\frac{\alpha^{2}+a^{2}}{b\alpha-a\beta}\right\}\right),$ (88)

where $e^{-\zeta}$ has been combined with the other exponential terms. This approximation for $I_{1}$ can be used to obtain a good approximation to the effective energy flux

$\displaystyle\begin{split}E^{\prime}_{f}\approx&\frac{|p_{0}|^{2}}{2\rho_{0}c}L_{eff}e^{-\frac{2(ct^{\prime}/\nu_{x})^{2}}{g^{2}+(c\tau/\nu_{x})^{2}}}\sin\theta_{i}\\ &\times\left[1+\frac{\cot\theta_{i}}{\sin\theta_{i}}\frac{\nu_{x}}{g^{2}\alpha^{2}}\left(1+Re\left\{\frac{\beta(\alpha^{2}+a^{2})}{a(b\alpha-a\beta)}\right\}\right)\right]\,,\end{split}$ (89)

where $L_{eff}$ is defined in the main text, in Eq. (34). Substituting in the definitions of $a$, $b$, $\alpha$, and $\beta$, and taking the real part, results in Eqs. (33) through (39) in the main text.

## References

* Abraham and Lyons (2002) Abraham, D. A., and Lyons, A. P. (2002). “Novel physical interpretations of K-distributed reverberation,” IEEE J. Ocean Eng. 27, 800–813.
* Abraham and Lyons (2004) Abraham, D. A., and Lyons, A. P. (2004). “Reverberation envelope statistics and their dependence on sonar bandwidth and scattering patch size,” IEEE J. Ocean Eng. 29, 126–137.
* Abramowitz and Stegun (1972) Abramowitz, M., and Stegun, I. A. (1972). _Handbook of Mathematical Functions_ (Dover, Mineola, NY).
* Anderson _et al._ (1999) Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Croz, J. D., Greenbaum, A., Hammarling, S., McKenney, A., and Sorensen, D. (1999). _LAPACK Users’ Guide_, 3rd ed. (SIAM, Philadelphia, PA).
* Atkinson (1997) Atkinson, K. (1997). _The Numerical Solution of Integral Equations of the Second Kind_ (Cambridge University Press, Cambridge, UK).
* Bachmann (1973) Bachmann, W. (1973). “A theoretical model for the backscattering strength of a composite-roughness sea surface,” J. Acoust. Soc. Am. 54(3), 712–716, http://dx.doi.org/10.1121/1.1913652.
* Bellettini and Pinto (2009) Bellettini, A., and Pinto, M. (2009). “Design and experimental results of a 300-kHz synthetic aperture sonar optimized for shallow-water operations,” IEEE J. Ocean Eng. 34(3), 285–293.
* Blacknell _et al._ (1992) Blacknell, D., Blake, A. P., Oliver, C. J., and White, R. G. (1992). “A comparison of SAR multilook registration and contrast optimisation autofocus algorithms applied to real SAR data,” in _92 International Conference on Radar_, pp. 363–366.
* Brown _et al._ (2017) Brown, D. C., Johnson, S. F., and Olson, D. R. (2017). “A point-based scattering model for the incoherent component of the scattered field,” J. Acoust. Soc. Am. 141(3), EL210–EL215, 10.1121/1.4976584.
* Callow (2003) Callow, H. J. (2003). “Signal processing for synthetic aperture sonar image enhancement,” Ph.D. thesis, University of Canterbury, Christchurch, New Zealand.
* Dillon (2018) Dillon, J. (2018). “Real-time interferometric SAS processing with ultra-low power consumption,” in _OCEANS 2018 MTS/IEEE Charleston_, pp. 1–6, 10.1109/OCEANS.2018.8604512.
* Fossum _et al._ (2008) Fossum, T. G., Sæbø, T. O., Langli, B., Callow, H., and Hansen, R. E. (2008). “HISAS 1030 - High resolution interferometric synthetic aperture sonar,” in _Proc. Can. Hydrog. Conf._, Victoria BC, Canada.
* Frigo and Johnson (2005) Frigo, M., and Johnson, S. G. (2005). “The design and implementation of FFTW3,” Proceedings of the IEEE 93(2), 216–231, special issue on “Program Generation, Optimization, and Platform Adaptation”.
* Fritsch and Carlson (1980) Fritsch, F. N., and Carlson, R. E. (1980). “Monotone piecewise cubic interpolation,” SIAM Journal on Numerical Analysis 17(2), 238–246, 10.1137/0717021.
* Galusha _et al._ (2018) Galusha, A. P., Keller, J. M., Zare, A., and Galusha, G. (2018). “A fast target detection algorithm for underwater synthetic aperture sonar imagery,” in _Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXIII_, edited by J. C. Isaacs and S. S. Bishop, SPIE, 10.1117/12.2304976.
* Gradshteyn and Ryzhik (2007) Gradshteyn, I., and Ryzhik, I. (2007). _Table of Integrals, Series, and Products_, 7th ed. (Elsevier Academic Press, Burlington, MA).
* Hackbusch (2015) Hackbusch, W. (2015). _Hierarchical Matrices: Algorithms and Analysis_, 1st ed. (Springer Publishing Company, Incorporated).
* Harris (1978) Harris, F. J. (1978). “On the use of windows for harmonic analysis with the discrete Fourier transform,” Proceedings of the IEEE 66(1), 51–83, 10.1109/proc.1978.10837.
* Hefner (2015) Hefner, B. T. (2015). “Inversion of high frequency acoustic data for sediment properties needed for the detection and classification of UXOs,” Final Report, https://www.serdp-estcp.org/content/download/34593/333838/file/MR-2229-FR.pdf.
* Hellequin _et al._ (2003) Hellequin, L., Boucher, J. M., and Lurton, X. (2003). “Processing of high-frequency multibeam echo sounder data for seafloor characterization,” IEEE J. Ocean Eng. 28(1), 78–89, 10.1109/JOE.2002.808205.
* Ishimaru (1978) Ishimaru, A. (1978). _Wave Propagation and Scattering in Random Media_, Vol. 2 (Academic Press).
* Jackson and Olson (2020) Jackson, D., and Olson, D. R. (2020). “The small-slope approximation for layered, fluid seafloors,” J. Acoust. Soc. Am. 147(1), 56–73, 10.1121/10.0000470.
* Jackson _et al._ (1986a) Jackson, D. R., Baird, A. M., Crisp, J. J., and Thomson, P. A. G. (1986a). “High-frequency bottom backscatter measurements in shallow water,” J. Acoust. Soc. Am. 80(4), 1188–1199, http://dx.doi.org/10.1121/1.393809.
* Jackson and Ivakin (1998) Jackson, D. R., and Ivakin, A. N. (1998). “Scattering from elastic sea beds: First-order theory,” J. Acoust. Soc. Am. 103, 336–345.
* Jackson and Richardson (2007) Jackson, D. R., and Richardson, M. D. (2007). _High-Frequency Seafloor Acoustics_, 1st ed. (Springer, New York, NY).
* Jackson _et al._ (1986b) Jackson, D. R., Winebrenner, D. P., and Ishimaru, A. (1986b). “Application of the composite roughness model to high-frequency bottom backscattering,” J. Acoust. Soc. Am. 79, 1410–1422.
* Jakeman (1980) Jakeman, E. (1980). “On the statistics of K-distributed noise,” J. Phys. A: Math. Gen. 13, 31–48.
* Kur’yanov (1963) Kur’yanov, B. (1963). “The scattering of sound at a rough surface with two types of irregularity,” Sov. Phys. Acoust. 8(3), 252–257.
* Kwon and Nasrabadi (2005) Kwon, H., and Nasrabadi, N. (2005). “Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing 43(2), 388–397, 10.1109/tgrs.2004.841487.
* Landau and Lifshitz (1987) Landau, L. D., and Lifshitz, E. M. (1987). _Fluid Mechanics_, 2nd ed. (Butterworth Heinemann, New York).
* Li and Johnson (2017) Li, H., and Johnson, J. T. (2017). “On the amplitude distributions of bistatic scattered fields from rough surfaces,” IEEE Transactions on Geoscience and Remote Sensing 55(12), 6883–6892, 10.1109/tgrs.2017.2735862.
* Liu (2009) Liu, Y., ed. (2009). _Fast Multipole Boundary Element Method: Theory and Applications in Engineering_ (Cambridge University Press, Cambridge, UK).
* Lupien (1999) Lupien, V. (1999). “The role of scale structure in scattering from random rough surfaces,” J. Acoust. Soc. Am. 105, 2187–2202.
* Lyons _et al._ (2016) Lyons, A., Olson, D. R., and Hansen, R. (2016). “Quantifying the effect of random seafloor roughness on high-frequency synthetic aperture sonar image statistics,” in _Acoustic and Environmental Variability, Fluctuations, and Coherence_, Institute of Acoustics, Cambridge, UK, Vol. 38, pp. 151–158.
* Lyons _et al._ (2010) Lyons, A. P., Abraham, D. A., and Johnson, S. F. (2010). “Modeling the effect of seafloor ripples on synthetic aperture sonar speckle statistics,” IEEE J. Ocean Eng. 35(2), 242–249, 10.1109/JOE.2009.2039656.
* Lyons _et al._ (2009) Lyons, A. P., Johnson, S. F., Abraham, D. A., and Pouliquen, E. (2009). “High-frequency scattered envelope statistics of patchy seafloors,” IEEE J. Ocean Eng. 34, 451–458.
* Lysanov (1973) Lysanov, Y. P. (1973). “A property of the scattering coefficient in the Fraunhofer zone,” Sov. Phys. Acoust. 18, 505–506.
* Marston and Plotnick (2015) Marston, T. M., and Plotnick, D. S. (2015). “Semiparametric statistical stripmap synthetic aperture autofocusing,” IEEE Transactions on Geoscience and Remote Sensing 53(4), 2086–2095, 10.1109/TGRS.2014.2353515.
* McDaniel and Gorman (1983) McDaniel, S. T., and Gorman, A. D. (1983). “An examination of the composite-roughness scattering model,” J. Acoust. Soc. Am. 73, 1476–1486.
* Moccia and Renga (2011) Moccia, A., and Renga, A. (2011). “Spatial resolution of bistatic synthetic aperture radar: Impact of acquisition geometry on imaging performance,” IEEE Trans. Geosci. Remote Sens. 49(10), 3487–3503, 10.1109/TGRS.2011.2115250.
* Ng and Geller (1969) Ng, E. W., and Geller, M. (1969). “A table of integrals of the error functions,” J. Research of the National Bureau of Standards – B. Mathematical Sciences 73B(1), 1–20.
* Olson _et al._ (2019) Olson, D. R., Lyons, A. P., Abraham, D. A., and Sæbø, T. O. (2019). “Scattering statistics of rock outcrops: Model-data comparisons and Bayesian inference using mixture distributions,” J. Acoust. Soc. Am. 145(2), 761–774, 10.1121/1.5089892.
* Olson _et al._ (2016) Olson, D. R., Lyons, A. P., and Sæbø, T. O. (2016). “Measurements of high-frequency acoustic scattering from glacially eroded rock outcrops,” J. Acoust. Soc. Am. 139(4), 1833–1847, 10.1121/1.4945589.
* Pierce (1994) Pierce, A. D. (1994). _Acoustics: An Introduction to Its Physical Principles and Applications_ (The Acoustical Society of America).
* Pinto (2011) Pinto, M. (2011). “Interferometric synthetic aperture sonar design optimized for high area coverage shallow water bathymetric survey,” in _Proc. 4th Int. Conf. Exhib. Underwater Ac. Meas._, Kos, Greece, pp. 505–512.
* Sauter and Schwab (2011) Sauter, S. A., and Schwab, C. (2011). _Boundary Element Methods_ (Springer-Verlag, Heidelberg).
* Sternlicht _et al._ (2016) Sternlicht, D. D., Fernandez, J. E., Prater, J. L., Weaver, J. N., Isaacs, J. C., Montgomery, T. C., Loeffler, C. M., and Purcell, M. (2016). “Advanced sonar technologies for high clearance rate mine countermeasures,” in _OCEANS 2016 MTS/IEEE Monterey_, pp. 1–8, 10.1109/OCEANS.2016.7761133.
* Tatarski (1961) Tatarski, V. I. (1961). _Wave Propagation in a Turbulent Medium_ (Dover Reprint, New York).
* Thorsos (1988) Thorsos, E. I. (1988). “The validity of the Kirchhoff approximation for rough surface scattering using a Gaussian roughness spectrum,” J. Acoust. Soc. Am. 83, 78–92.
* Thorsos and Jackson (1989) Thorsos, E. I., and Jackson, D. R. (1989). “The validity of the perturbation approximation for rough surface scattering using a Gaussian roughness spectrum,” J. Acoust. Soc. Am. 86, 261–277.
* Thorsos and Jackson (1991) Thorsos, E. I., and Jackson, D. R. (1991). “Studies of scattering theory using numerical methods,” Waves Random Media 1(3), S165–S190, 10.1088/0959-7174/1/3/014.
* Toporkov _et al._ (1998) Toporkov, J., Marchand, R., and Brown, G. (1998). “On the discretization of the integral equation describing scattering by rough conducting surfaces,” IEEE Transactions on Antennas and Propagation 46(1), 150–161, 10.1109/8.655462.
* Urick (1954) Urick, R. J. (1954). “The backscattering of sound from a harbor bottom,” J. Acoust. Soc. Am. 26, 231–235.
* Urick (1983) Urick, R. J. (1983). _Principles of Underwater Sound_, 3rd ed. (Peninsula, Los Altos Hills, California).
* Valenzuela and Liang (1971) Valenzuela and Liang (1971). “On the statistics of sea clutter,” US Naval Research Laboratory Report 7349.
* Wang and Bovik (2002) Wang, Z., and Bovik, A. C. (2002). “A universal image quality index,” IEEE Signal Processing Letters 9(3), 81–84, 10.1109/97.995823.
* Williams (2015) Williams, D. P. (2015). “Fast unsupervised seafloor characterization in sonar imagery using lacunarity,” IEEE Transactions on Geoscience and Remote Sensing 53(11), 6022–6034, 10.1109/TGRS.2015.2431322.
* Williams (2018) Williams, D. P. (2018). “The Mondrian detection algorithm for sonar imagery,” IEEE Transactions on Geoscience and Remote Sensing 56(2), 1091–1102, 10.1109/TGRS.2017.2758808.
* Williams _et al._ (2002) Williams, K., Jackson, D., Thorsos, E., Tang, D., and Briggs, K. (2002). “Acoustic backscattering experiments in a well characterized sand sediment: Data/model comparisons using sediment fluid and Biot models,” IEEE J. Ocean Eng. 27, 376–387.
* Winebrenner and Ishimaru (1986) Winebrenner, D., and Ishimaru, A. (1986). “On the far-field approximation for scattering from randomly rough surfaces,” IEEE Transactions on Antennas and Propagation 34(6), 847–849, 10.1109/TAP.1986.1143901.
* Wu (2000) Wu, T. W., ed. (2000). _Boundary Element Acoustics: Fundamentals and Computer Codes_ (WIT Press, Southampton, UK).
* Yang and McDaniel (1991) Yang, C. C., and McDaniel, S. T. (1991). “Fourth moments of acoustic waves forward scattered by a rough ocean surface,” Waves in Random Media 1(4), 419–439, 10.1088/0959-7174/1/4/012.
# Inertial Confinement Fusion Forecasting via LLMs

Mingkai Chen, Rochester Institute of Technology, <EMAIL_ADDRESS>
Taowen Wang, Rochester Institute of Technology, <EMAIL_ADDRESS>
James Chenhao Liang, Rochester Institute of Technology, <EMAIL_ADDRESS>
Chuan Liu, University of Rochester, <EMAIL_ADDRESS>
Chunshu Wu, University of Rochester, <EMAIL_ADDRESS>
Qifan Wang, Meta AI, <EMAIL_ADDRESS>
Ying Nian Wu, University of California, Los Angeles, <EMAIL_ADDRESS>
Michael Huang, University of Rochester, <EMAIL_ADDRESS>
Chuang Ren, University of Rochester, <EMAIL_ADDRESS>
Ang Li, Pacific Northwest National Laboratory, <EMAIL_ADDRESS>
Tong Geng, University of Rochester, <EMAIL_ADDRESS>
Dongfang Liu (corresponding author), Rochester Institute of Technology, <EMAIL_ADDRESS>

###### Abstract

Controlled fusion energy is deemed pivotal for the advancement of human civilization. In this study, we introduce Fusion-LLM, a novel integration of Large Language Models (LLMs) with classical reservoir computing paradigms, tailored to address challenges in Inertial Confinement Fusion (ICF). Our approach offers several key contributions. Firstly, we propose the LLM-anchored Reservoir, augmented with a fusion-specific prompt, enabling accurate forecasting of hot electron dynamics during implosion. Secondly, we develop Signal-Digesting Channels to describe the laser intensity temporally and spatially, capturing the unique characteristics of ICF inputs. Lastly, we design the Confidence Scanner to quantify the confidence level of the forecasts, providing valuable insights for domain experts to design the ICF process. Extensive experiments demonstrate the superior performance of our method, achieving 1.90 CAE, 0.14 top-1 MAE, and 0.11 top-5 MAE in predicting Hard X-ray (HXR) energies of ICF tasks, establishing state-of-the-art performance against concurrent best systems. Additionally, we present Fusion4AI, the first ICF benchmark based on physical experiments, aimed at fostering novel ideas in plasma physics research and enhancing the utility of LLMs in scientific exploration. Overall, our work strives to forge an innovative synergy between AI and plasma science for advancing fusion energy.

## 1 Introduction

> _“…human society remains at a Type 0, a primitive form of civilization…”_
>
> $-$ The Kardashev scale [34]

After the National Ignition Facility (NIF) achieved ignition in December 2022 [1], the focus of current inertial confinement fusion (ICF) research has shifted to exploring the high-gain schemes required to make fusion a practical and sustainable energy source for humankind. Fusion represents a potential key enabler for advancing humanity towards a Type I civilization on the Kardashev scale [91], offering a virtually limitless and clean energy source that could power our civilization globally. This advancement could potentially resolve numerous crises we currently face (e.g., economic recessions and climate change) by eliminating the need for finite resources like fossil fuels.

The optimization of ICF designs and the achievement of reliable ignition face formidable constraints [8, 14] due to laser-plasma instabilities (LPI) [22, 54]. Efficient and symmetrical driving of the target, vital for ICF, is impeded by LPI phenomena such as stimulated Raman and Brillouin backscattering (SRS and SBS), which can disrupt implosion symmetry and reduce efficiency through cross-beam energy transfer (CBET) [68, 21].
Hot electrons, a byproduct of LPI processes like SRS and Two-Plasmon Decay (TPD), can both hinder and assist ignition, showcasing the complexity of the interaction. Despite efforts to measure and simulate hot electrons, obtaining predictive scaling laws remains challenging due to the dynamic nature of laser/plasma conditions and computational limitations [38, 57, 89, 66, 42], highlighting the current gap in establishing a predictive capability based on first principles that aligns with experimental data. These constraints highlight the need for an innovative approach to overcome these obstacles.

Figure 1: The overview of Fusion-LLM.

Currently, Large Language Models (LLMs) exhibit versatile capabilities across diverse disciplines ($e.g.,$ robotics [44, 73, 90, 26], medical health [41, 67, 77, 50], agriculture [56, 80]), adeptly capturing intricate patterns in multimodal data. Due to their success in other domains [39, 93, 46], we are convinced that LLMs may also excel in generalizing to plasma physics, particularly in forecasting the behavior of hot electrons generated during implosion in real-world physics experiments. Leveraging their vast pre-trained knowledge base, LLMs could optimize ICF designs by efficiently evaluating numerous scenarios, aiding researchers in identifying promising approaches expediently, and generating insights to enhance our understanding and control of the fusion process. In light of this perspective, we intend to harness the power of LLMs to overcome a barrier to the vertical advancement of the next stage of human civilization: fusion energy. This science problem manifests as two sub-problems in a unique and challenging setup: ❶ how to tailor pre-trained LLMs in order to accurately predict the behavior of hot electrons based on laser intensity inputs? and ❷ how to evaluate the trustworthiness of the LLM’s predictions in order to guide the ICF design?

We conceptualize LLMs as a computational reservoir to unlock their potential for robust domain adaptation and generalization for the ICF task, in a framework titled Fusion-LLM (see Fig. 1). To address question ❶, we propose the LLM-anchored Reservoir, augmented with a fusion-specific prompt, to facilitate the interpretation of plasma physics. The incorporated prompts encompass domain-specific knowledge, instructional cues, and statistical information, thereby enabling the LLMs to accommodate the specific demands of the given task. Additionally, we develop the Signal-Digesting Channels to capture the distinctive characteristics of ICF inputs. They feature a temporal encoder to better align the laser signals with the pre-trained time-series space, and a spatial encoder to provide a global description of the input landscape. To tackle question ❷, we introduce the Confidence Scanner to assess the trustworthiness of predictions. Specifically, we couple the gradient saliency in the prediction head and the token entropy in the LLM outputs to obtain the model’s confidence scores. Overall, this study aims to provide an exciting synergy between the domain of AI and plasma science for fusion energy development.

The core contributions of this paper can be summarized as follows:

* • We present a pioneering investigation into the use of LLMs for analyzing hot electron dynamics in ICF. LLMs offer a cost-effective alternative to both plasma-physics experiments and simulations at the ignition scale.
Empirical evidence (see §3/S2) showcases the efficacy of applying LLMs to plasma physics, which directly benefits the practical design of ICF.
* • We construct an LLM-anchored reservoir computing framework to predict the hot electron dynamics in ICF implosions. Compared to prior arts in reservoir computing [43, 20] and time-series-based LLMs [31], our approach requires less data (see Table 2d) and a shorter training schedule (see Table 2e), while achieving superior performance.
* • We develop the first fusion benchmark, Fusion4AI (see §3.1), based on physical experiments, to facilitate new ideas in plasma physics research and the use of LLMs in scientific exploration.

## 2 Methodology

### 2.1 Preliminary

Figure 2: The ICF pipeline.

The ICF Overview. An ICF shot is depicted in Fig. 2. Laser beams are directed towards a target fuel pellet, rapidly heating its surface to generate a plasma envelope. This plasma envelope subsequently undergoes implosion through a rocket-like expulsion of material, compressing the pellet to densities exceeding twenty times that of lead and attaining ignition temperatures of approximately 100,000,000 °C, thus initiating fusion reactions. Throughout this process, laser-induced plasma instabilities give rise to hot electrons, which may prematurely heat the pellet and emit Hard X-Ray (HXR) radiation as they interact within the plasma environment. Understanding the hot electron dynamics is crucial for guiding the design of the ICF. In this context, HXR diagnostics have been extensively applied in studies at facilities such as OMEGA [69, 88, 71, 87] and the NIF [58, 59, 70]. It is, however, worth noting that conducting these experiments is extremely expensive (around $1M for one shot).

Experiment and Data Collection. To collect the fusion data, we conduct 100 real-world ignitions of ICF. Each ignition is accompanied by detailed configurations including the size of the fuel pellet, the phase plate of the laser, and the input laser profile. We also measure raw sensor readings regarding HXR, which can be further converted into charges of hot electrons, allowing for the subsequent calculation of the total energy of these electrons based on their temperature. In this study, we focus on forecasting the energy from HXR emissions given laser intensity inputs throughout the ignition. This data is indispensable for physicists to ascertain and analyze the state of ICF.

### 2.2 Fusion-LLM

In this section, we introduce Fusion-LLM, a novel approach for predicting hot-electron energy in ICF. Illustrated in Fig. 3, the inputs comprise fusion-specific prompts and laser intensity signals, which are processed through the LLM for feature extraction. Subsequently, the output module makes predictions of hot-electron energy along with confidence scores. Our approach is characterized by three core modules: the LLM-anchored Reservoir, which establishes a reservoir foundation using LLMs to comprehend the dynamic impact of laser intensity on hot-electron energy emission; the Signal-Digesting Channels, responsible for encoding the laser signals, capturing the temporal characteristics of sequential details and the spatial distinctiveness of data landscapes; and the Confidence Scanner, tasked with estimating prediction confidence for each shot, thereby providing trustworthy guidance for practical ICF design.

Figure 3: Fusion-LLM framework. (a) Fusion-specific prompts structure domain textual prompts with context, task and input descriptions (see §2.2.1 for details).
(b) Signal-digesting channels comprise a pre-trained temporal encoder to extract time-series features of laser signals, and a spatial encoder to encode critical landscapes of the inputs (see §2.2.2 for details). For simplicity, we skip some architectural modules. We provide implementation details in the appendix (see §S1).

#### 2.2.1 LLM-anchored Reservoir

In classic reservoir computing (RC), a fixed, randomly generated reservoir ($e.g.$, an RNN) transforms input data into a high-dimensional representation. A trainable output layer then maps this representation to the desired output. RC has gained popularity in the scientific community due to its ability to process sequential data efficiently with simple training methods. Following the RC convention, we construct a reservoir using an LLM with a shallow prediction head (see §S1). Due to the extensive pre-training, LLMs are equipped with robust generalization capabilities for in-context reasoning and time-series forecasting. To leverage the physics knowledge embedded in LLMs, we design fusion-specific prompts (FSP) that strategically connect the LLM’s vast knowledge base to the specific nuances of the ICF domain. Our LLM-anchored reservoir can be given by:

$s[t+1]=f(s[:t],E_{las},{}^{**}\texttt{Prompt}),$ (1)

where $s[t]$ denotes the state of the reservoir at time step $t$ and $E_{las}$ represents the input laser intensity. The ${}^{**}\texttt{Prompt}$ specifically denotes the structured FSP, comprising three descriptors (see Fig. 3a) designed with particular focus:

➤ Context descriptor provides a detailed overview of the ICF process, highlighting the nature, sources, and characteristics of the data. It elaborates on the experimental procedures (see §2.1) used in data collection, emphasizing principles and methodologies specific to ICF. This descriptor enhances the LLMs’ comprehension of the experimental context.

➤ Task descriptor outlines explicit instructions for the prediction tasks, including the format and expected length of the output. It guides the inference and forecasting process by specifying imperative considerations, ensuring that the forecasts align with domain-specific insights.

➤ Input descriptor presents a concise statistical summary of the input data, offering insights into its distribution and key characteristics such as minimum, maximum, and median values. This descriptor is vital for informing the LLMs about the underlying statistical properties of the input signals.

Collectively, this prompting strategy facilitates the LLMs’ ability to mine intricate input signals and produce predictions that are scientifically robust and contextually coherent within the phase space of the reservoir. In general, our LLM-based design extends the territorial boundary of reservoir computing, offering two significant advantages in leveraging artificial intelligence for scientific exploration:

* • Adaptability for science. The utilization of LLMs in ICF forecasting exhibits notable adaptability in confronting a pivotal scientific challenge. Later, we will present empirical evidence to substantiate the systemic efficacy (see §3.1). The success of this methodology is poised to provide a generic alternative for other adjacent scientific domains in pursuit of LLM-based solutions.

* • Efficiency for data scarcity. This efficiency stems from the extensive pre-trained knowledge base and the precise delineation of prompt descriptors.
Leveraging these assets, the LLM-based system mitigates data dependency to the K-shot level (80 shots in our experiments), exemplifying its advantageous efficiency in addressing the persistent challenge of data scarcity in scientific inquiry.

#### 2.2.2 Signal-Digesting Channels

With the establishment of the LLM-based approach described in §2.2.1, robust predictions can be immediately attained (see Table 2c). Due to the uniqueness of the ICF data, the input laser signal exhibits a distinctive time-series landscape. Recognizing the sensitivity of LLMs to input data [64, 65], we introduce Signal-Digesting Channels (SDC), conceptually aligned with our reservoir design, to capture crucial input characteristics and further augment the performance of LLMs.

For the ICF process, the hot electron energy emitted during the initial and terminal phases is characterized by relatively uniform values, in contrast to the target impact phase, where peak values are observed. This laser-energy signal landscape exhibits significant discrepancies between the uniformity of the plain phase and the peaks of the impact phase. This physics insight guides our SDC design (see Fig. 3b), which comprises two components: a temporal encoder to align the laser signals with the pre-trained time-series space, and a spatial encoder to delineate the landscape of the input data.

➤ Temporal encoder is designed to extract time-series features using a windowing mechanism that constructs consistent signal patches across sequential time steps. It employs a set of Transformer layers [85] to capture time-series distributions over the forecast horizon. This process is formulated as $f_{tmp}:(X_{t-l:t+h},Y_{t-l:t})\mapsto\hat{\psi}$, where $X$ and $Y$ represent the input data and target data spanning time windows of length $l$ and $h$ at time $t$. The encoder is pre-trained on a large-scale time-series dataset (see §S1 for details) and is optimized using the log-likelihood of the forecast:

$\arg\max_{\theta}\quad\mathbb{E}_{(X,Y)\sim p(D)}\log p(Y_{t:t+h}|f_{tmp}(X_{t-l:t+h},Y_{t-l:t})),$

where $p(D)$ represents the data distribution from which the time-series samples are drawn. During fine-tuning on the target-domain ICF data, all pre-trained parameters are frozen, except for the last linear layer. The temporal encoder is utilized for feature extraction over the input laser signals $I$, producing temporal tokens denoted as $E_{tmp}=f_{tmp}(I)$ for subsequent processing.

➤ Spatial encoder is designed to analyze the input signals by providing a qualitative overview of the input landscape. Specifically, it is structured to characterize the spatial patterns of laser intensity signals throughout the ICF process. We utilize the projection block $f_{LLM}$ to project sets of critical contexts into spatial features, denoted as $E_{spt}=\{f_{LLM}(\text{``pulse''}),f_{LLM}(\text{``peak''}),\dots,f_{LLM}(\text{``trailing''})\}$. In practice, $E_{spt}$ is further processed by a cross-attention layer with the temporal features $E_{tmp}$. This step couples the contextual description to the actual signal distribution within the ICF, enabling the LLM to better capture the observed physical phenomena for predictions.

After acquiring the features from both the temporal and spatial encoders, we concatenate them as $\dot{E}=[E_{tmp};E_{spt}]$ to form the output of SDC. Here, we use $\dot{E}$ as augmented inputs to replace the vanilla $E_{las}$ in Eq. (1).
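To make the data flow concrete, a minimal PyTorch sketch of the two channels and their coupling is given below. This is an illustration, not the authors’ implementation: the patch length, embedding width, the small trainable TransformerEncoder standing in for the frozen pre-trained temporal encoder, and the learned embedding table standing in for the $f_{LLM}$ word projections are all assumptions:

```python
import torch
import torch.nn as nn

class SignalDigestingChannels(nn.Module):
    """Minimal SDC sketch: temporal tokens from windowed laser patches,
    spatial tokens from context embeddings coupled via cross-attention."""
    def __init__(self, d_model=128, window=16, n_ctx=4, n_heads=4):
        super().__init__()
        self.window = window
        self.patch_proj = nn.Linear(window, d_model)   # signal patch -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.ctx_emb = nn.Embedding(n_ctx, d_model)    # "pulse", "peak", ...
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, laser):                          # laser: (B, T)
        B, _ = laser.shape
        patches = laser.unfold(1, self.window, self.window)   # (B, T//w, w)
        e_tmp = self.temporal(self.patch_proj(patches))       # temporal tokens
        ctx = self.ctx_emb.weight.unsqueeze(0).expand(B, -1, -1)
        e_spt, _ = self.cross(ctx, e_tmp, e_tmp)       # contexts attend to signal
        return torch.cat([e_tmp, e_spt], dim=1)        # E_dot = [E_tmp; E_spt]

# Five shots of 400 time steps -> (5, 400/16 + 4, 128) augmented input tokens
tokens = SignalDigestingChannels()(torch.randn(5, 400))
```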
Fundamentally, SDC introduces a novel method for input encoding in reservoir computing, which enhances overall system performance. This design offers the following advantages:

* • Discernment of temporal patterns. With the pre-trained temporal encoder, SDC adeptly captures crucial time-series features of laser signals that correlate with HXR outputs. This merit enables the LLM to recognize distinct patterns across various time steps, representing an improvement over strong reservoir models (see Table 2c), which struggle to manage ICF’s temporal patterns.

* • In-context reasoning in signal processing. SDC tackles the complexity of processing signals with diverse attributes, such as those found in the uniform and peak phases within ICF. Through the integration of contextual disciplinary knowledge, SDC, equipped with in-context reasoning, significantly boosts the effectiveness of the LLM backbone (see Table 2c). This enhancement enables robust performance even in the face of inherent fluctuations (i.e., uniform vs. peak phases in ICF).

#### 2.2.3 Confidence Scanner

Figure 4: Pipeline of the Confidence Scanner. We re-calibrate token confidence to align with the energy-prediction head.

Trustworthiness is pivotal for AI in science: miscalibrated confidence may lead to misguided conclusions or improper actions. Some approaches [45, 27] directly utilize the entropy observed in the output tokens of the LLMs to gauge confidence. However, these methods encounter a challenge in our study of ICF, whereby the entropy of the LLM’s output tokens does not consistently reflect the confidence of the prediction at each time step. This discrepancy arises due to the non-linear transformation undertaken by the multi-layer perceptron within the prediction head, which alters the embedding of token counterparts, thereby distorting the relation between tokens and their corresponding confidence estimations. To this end, we propose the Confidence Scanner, which incorporates a confidence reweighting mechanism to assess the confidence level of each prediction systematically.

Concretely, our approach re-calibrates the allocation of confidence across tokens to implicitly reflect their actual influence on predictions. As shown in Fig. 4, we extract the embeddings $E_{k}=\{e_{n-k},\dots,e_{n}\}$ from the last layer of the LLM, specifically analyzing the embeddings of the last $k$ tokens. The confidence level $H$ is then formulated as:

$H=\left[h(e_{n-k}),\dots,h(e_{n})\right],$ (2)

where $h(\cdot)$ maps the embeddings to word probabilities, which are subsequently used to calculate the entropy. Furthermore, we derive a reweighting matrix $S$ from the task prediction head $H_{task}(\cdot)$ by performing a forward pass to predict the hot-electron energy, $P=H_{task}(E_{k})$. The matrix $S$ is then obtained through saliency, reflecting each token’s contribution to the length-$i$ prediction:

$S=\left[\sigma\left(\frac{\partial P_{0}}{\partial E_{k}}\right),\dots,\sigma\left(\frac{\partial P_{i}}{\partial E_{k}}\right)\right],$ (3)

where $\sigma$ is the softmax function that normalizes the saliency scale. Finally, the entropy $H$ is combined with the reweighting matrix $S$ to produce the confidence score $C=-H\times S$ for the hot electron energy predictions.

Through our design, we align the entropy derived from LLM embeddings with the hot electron energy forecasting. This alignment allows us to directly obtain a confidence level that can serve as a trustworthiness indicator for our system. We provide empirical evidence in Fig. 6.
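A compact sketch of this computation follows (PyTorch; the head shapes, the use of gradient norms as per-token saliency, and the toy dimensions in the usage example are our assumptions, not the authors’ released code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def confidence_scores(E_k, lm_head, task_head):
    """Sketch of Eqs. (2)-(3). E_k: (k, d) last-layer embeddings of the final
    k tokens; lm_head plays the role of h(.) in Eq. (2); task_head maps E_k
    to the length-i energy prediction P = H_task(E_k)."""
    # Token entropy H, Eq. (2)
    probs = F.softmax(lm_head(E_k), dim=-1)                    # (k, vocab)
    H = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)    # (k,)

    # Saliency reweighting S, Eq. (3): softmax-normalized dP_i / dE_k
    E = E_k.detach().requires_grad_(True)
    P = task_head(E)                                           # (i,)
    rows = []
    for i in range(P.numel()):
        (g,) = torch.autograd.grad(P[i], E, retain_graph=True)
        rows.append(F.softmax(g.norm(dim=-1), dim=0))          # weight per token
    S = torch.stack(rows)                                      # (i, k)

    return -(S @ H)                                            # C = -H x S, per step

# Toy usage with hypothetical heads (d=64, k=8, i=5, vocab=100)
lm_head = nn.Linear(64, 100)
task_head = nn.Sequential(nn.Flatten(start_dim=0), nn.Linear(8 * 64, 5))
C = confidence_scores(torch.randn(8, 64), lm_head, task_head)   # (5,) scores
```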
## 3 Empirical Findings

Figure 5: Qualitative results of hot electron prediction. We plot Ground Truth and the predictions of Ours, Time-LLM, LSTM and Autoformer. The y- and x-axes denote energy and time steps, respectively.

### 3.1 Main Results

Dataset. We develop a new benchmark, Fusion4AI, to support AI research in ICF. This benchmark consists of 40,000 LPI samples: 100 shot sequences with 400 time steps per shot. Each shot is documented with key parameters such as target size, laser intensity, and energy of hot electrons (see §2.1). The dataset has been systematically divided into 80/10/10 train/val/test splits. We will release this dataset upon acceptance to advance research on the fusion reaction.

Baselines. We choose a classic physical Particle-In-Cell (PIC) simulation method [13], a number of classic AI models (i.e., LSTM [25], Autoformer [86]), reservoir computing models (i.e., HoGRC [43], RCRK [17], NGRC [20]), and concurrent time-series LLM-based models (i.e., GPT4TS [94] and Time-LLM [31]) as baselines for performance comparison on the proposed Fusion4AI dataset.

Experimental Setup. Our models are trained for 100 epochs with a batch size of 5, which is adequate to achieve convergence based on our empirical findings. In addition, we utilize a fixed learning rate of $0.0004$ and the Adam optimizer [36]. A loss function defined by the cumulative absolute error across all time steps, $f_{\text{loss}}(Y,G)=\sum_{n=1}^{\text{pred\_len}}|y_{n}-g_{n}|$, is used, where $Y$ and $G$ represent the sequences of predictions and ground truths, respectively, $y_{n}$ and $g_{n}$ denote the values at the $n$-th time step, and pred_len is the length of the prediction. For all other counterpart methods, we follow the original experimental settings and training configurations to reproduce the results.

Metric. We employ cumulative absolute error (CAE) as our primary metric. The sole distinction from a plain cumulative absolute error is that errors smaller than 0.03 of the predicted value are nullified. In addition, we incorporate two supplementary metrics: top-1 and top-5 MAE, which compute the mean absolute error exclusively over the largest one percent and five percent of errors, respectively, thereby highlighting performance where the highest errors are observed.

Table 1: Quantitative results on the Fusion4AI test split for hot electron energy predictions (see §3.1 for details). Refer to the metrics section for details of CAE, top-1 and top-5 MAE.

Method | CAE$\downarrow$ | top-1 MAE$\downarrow$ | top-5 MAE$\downarrow$
---|---|---|---
PIC Simulation [13] | 2.88 | 0.20 | 0.13
LSTM [25] | 5.82±0.06 | 0.35±0.01 | 0.35±0.01
Autoformer [86] | 5.79±0.04 | 0.35±0.01 | 0.34±0.01
HoGRC [43] | 4.20±0.79 | 0.25±0.05 | 0.22±0.02
RCRK [17] | 4.31±0.46 | 0.28±0.04 | 0.22±0.01
NGRC [20] | 4.28±0.68 | 0.27±0.04 | 0.23±0.02
GPT4TS [94] | 3.34±0.58 | 0.18±0.05 | 0.14±0.04
Time-LLM [31] | 3.48±0.72 | 0.18±0.05 | 0.15±0.05
Fusion-LLM (Llama-2-7B) | 2.15±0.26 | 0.14±0.01 | 0.12±0.01
Fusion-LLM (Llama-3-8B) | 1.90±0.33 | 0.14±0.01 | 0.11±0.01

Performance Comparison. The primary challenge in forecasting for ICF lies in developing a robust yet efficient predictive model. Table 1 empirically demonstrates the effectiveness of our system in predicting output energy in ICF, showcasing a significant advantage over all other baseline methods across all metrics. Specifically, our approach with Llama-3 surpasses the PIC simulation model [13] by 0.98 in terms of CAE.
Table 1: Quantitative results on the Fusion4AI test split for hot electron energy predictions (see §3.1 for details). Refer to the metrics section for details of CAE, top-1 and top-5 MAE.
Method | CAE$\downarrow$ | top-1 MAE$\downarrow$ | top-5 MAE$\downarrow$
---|---|---|---
PIC Simulation [13] | 2.88 | 0.20 | 0.13
LSTM [25] | 5.82±0.06 | 0.35±0.01 | 0.35±0.01
Autoformer [86] | 5.79±0.04 | 0.35±0.01 | 0.34±0.01
HoGRC [43] | 4.20±0.79 | 0.25±0.05 | 0.22±0.02
RCRK [17] | 4.31±0.46 | 0.28±0.04 | 0.22±0.01
NGRC [20] | 4.28±0.68 | 0.27±0.04 | 0.23±0.02
GPT4TS [94] | 3.34±0.58 | 0.18±0.05 | 0.14±0.04
Time-LLM [31] | 3.48±0.72 | 0.18±0.05 | 0.15±0.05
Fusion-LLM (Llama-2-7B) | 2.15±0.26 | 0.14±0.01 | 0.12±0.01
Fusion-LLM (Llama-3-8B) | 1.90±0.33 | 0.14±0.01 | 0.11±0.01
Performance Comparison. The primary challenge in forecasting for ICF lies in developing a robust yet efficient predictive model. Table 1 empirically demonstrates the effectiveness of our system in predicting output energy in ICF, showcasing a significant advantage over all baseline methods across all metrics. Specifically, our approach with Llama-3 surpasses the PIC simulation model [13] by 0.98 in terms of CAE. Additionally, compared to classic AI methods, our approach outperforms LSTM [25] and Autoformer [86] by 3.92 and 3.89, respectively. Our margins in CAE over the three reservoir computing methods, HoGRC [43], RCRK [17], and NGRC [20], are 2.30, 2.41, and 2.38, respectively. Moreover, our method outperforms the concurrent LLM-based approaches GPT4TS [94] and Time-LLM [31] by 1.44 and 1.58 on CAE, respectively, while maintaining comparable speed. A detailed analysis is supplemented in §S2. These results underscore the efficacy and efficiency of our LLM-based solution in predicting hot electron dynamics in ICF. By extending LLMs' successful adaptability to the new and exciting domain of fusion energy, our empirical findings represent just the beginning of the innovative opportunities presented by applying LLM algorithms to challenging subjects in scientific exploration.
Figure 6: Visualization of confidence score and prediction error. Samples correspond to Fig. 5.
We further present qualitative results in Fig. 5, aligning with our quantitative findings that our method surpasses all comparative baselines in predictive accuracy. Additionally, Fig. 6 illustrates the confidence scores associated with our predictions. The visualization elucidates a clear correlation between predictive error and confidence score: high confidence corresponds to low errors, and vice versa. Notably, our approach consistently demonstrates a heightened level of confidence, particularly in forecasting peak values across the temporal sequence, a critical phase in ICF.
Table 2: A set of ablative studies. The experiments are evaluated on the val split.
Algorithm Component | CAE$\downarrow$
---|---
Baseline | 3.57
$+$ Fusion-Specific Prompt | 2.59
$+$ Signal-Digesting Channels | 2.01
Ours (both) | 1.19
a Key Component Analysis
Prompt Type | CAE$\downarrow$
---|---
Baseline | 2.01
$+$ Discipline-related Prompt | 1.46
$+$ Input Statistics | 1.58
Ours (both) | 1.19
b Fusion-Specific Prompt
Algorithm Component | CAE$\downarrow$
---|---
Baseline | 2.59
$+$ Temporal Encoder | 2.47
$+$ Spatial Encoder | 1.41
Ours (both) | 1.19
c Signal-Digesting Channels
# of samples | Ours | HoGRC [43] | NGRC [20] | Time-LLM [31]
---|---|---|---|---
80 | 1.19 | 3.80 | 3.73 | 3.60
60 | 1.73 | 3.87 | 3.84 | 3.69
40 | 2.56 | 4.01 | 4.08 | 3.94
20 | 3.47 | 4.35 | 4.19 | 4.10
d Different # of Samples
# of epochs | Ours | HoGRC [43] | NGRC [20] | Time-LLM [31]
---|---|---|---|---
100 | 1.19 | 3.80 | 3.73 | 3.60
50 | 1.19 | 3.99 | 3.83 | 3.60
20 | 1.56 | 4.12 | 4.35 | 4.01
10 | 2.79 | 5.23 | 5.50 | 4.52
e Different # of Epochs
Head Dimension | # Params | CAE$\downarrow$ | top-1 MAE$\downarrow$ | top-5 MAE$\downarrow$
---|---|---|---|---
256 | 102.4 K | 1.23 | 0.14 | 0.12
128 | 51.2 K | 1.19 | 0.14 | 0.11
64 | 25.6 K | 1.34 | 0.14 | 0.11
32 | 12.8 K | 1.57 | 0.14 | 0.13
f Different Head Dimension
### 3.2 Diagnostic Experiment
This section ablates Fusion-LLM's systemic design on the val split of Fusion4AI. All experiments use the Llama 3 8B variant. Additional experimental results are provided in Appendix §S2.
Key Component Analysis. We first investigate the two principal modules of Fusion-LLM, namely the Fusion-Specific Prompt and the Signal-Digesting Channels. We construct a baseline model with generic dummy prompts, which provide only broad, non-specific instructions regarding fusion, and a rudimentary encoder composed of a single linear layer.
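For illustration, the contrast between the baseline's generic dummy prompt and a Fusion-Specific Prompt can be sketched as follows; both strings are our own paraphrases based on the descriptor structure detailed in Appendix S1 (context, task, and input descriptors), not the exact prompts used in the experiments.

```python
# Both prompt strings below are illustrative paraphrases, not the exact prompts used.
dummy_prompt = "You are given a time series. Predict its future values."  # baseline

def fusion_specific_prompt(seq_len: int, pred_len: int, phase_plate: str,
                           sig_min: float, sig_max: float) -> str:
    """Assembles the context/task/input descriptors (cf. Appendix S1)."""
    context = ("Inertial confinement fusion: a laser pulse drives a target, and "
               "laser-plasma instabilities generate hot electrons.")        # background knowledge
    task = (f"Given {seq_len} steps of laser intensity measured with phase plate "
            f"{phase_plate}, forecast the hot-electron energy for the next "
            f"{pred_len} steps.")                                           # task instruction
    inputs = f"Input statistics: min={sig_min:.3f}, max={sig_max:.3f}."     # input statistics
    return "\n".join([context, task, inputs])
```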
As shown in Table 2a, the baseline model achieves 3.57 CAE. Upon applying the Fusion-Specific Prompt to the baseline, we observe a significant improvement in CAE, from 3.57 to 2.59. Furthermore, after incorporating the Signal-Digesting Channels into the baseline model, we achieve a significant gain of 1.56 CAE. Finally, by integrating both core techniques, our Fusion-LLM delivers the best performance of 1.19 CAE. These findings affirm that the proposed Fusion-Specific Prompt and Signal-Digesting Channels operate synergistically, and validate the effectiveness of our overall model design.
Fusion-Specific Prompt. We next study the impact of our Fusion-Specific Prompt by contrasting it with a constructed baseline. This baseline incorporates the Signal-Digesting Channels and employs generic prompts that provide broad, non-specific instructions unrelated to the process of fusion. As shown in Table 2b, the baseline yields 2.01 CAE. Upon substituting the generic prompt with one that integrates discipline-specific information, including background knowledge, task instructions, etc., there is an observable enhancement in performance: an improvement of 0.55 in CAE over the baseline. Additionally, integrating input statistics, containing the maximum and minimum values, etc., of the input time series, also demonstrates superior performance, outperforming the baseline by 0.43 in CAE. The most notable enhancement is recorded when employing our full Fusion-Specific Prompt, which amalgamates both the discipline-related information and the input statistics, culminating in a peak performance of 1.19 CAE. This outcome highlights the essential function of the Fusion-Specific Prompt within our approach, significantly impacting the performance of the overall model.
Signal-Digesting Channels. We then examine the influence of the Signal-Digesting Channels in Table 2c. For the baseline, we use a basic approach comprising solely a single linear layer. Under this setting, the baseline model achieves 2.59 CAE. Integrating either the Temporal Encoder or the Spatial Encoder independently results in improvements of 0.12 and 1.18 over the baseline, respectively. In contrast, integrating both encoders in SDC substantially surpasses all alternatives, achieving a CAE of 1.19. These results substantiate the hypothesis that the proposed Signal-Digesting Channels augment the capability of our approach to more accurately interpret the input time-series data.
Reservoir and LLM Comparison. To thoroughly explore the training effectiveness of our methodology under conditions of limited sample availability, we perform a comparative analysis in Table 2d (using CAE), varying the number of training samples against two concurrent reservoir methods [43, 20] and one LLM-based method [31]. The empirical findings demonstrate that our approach consistently outperforms all competing strategies across various sample configurations. Notably, this superior performance and training effectiveness are evident even with as few as 20 samples. Such robust efficacy is critical in scientific AI applications, where datasets are often constrained in size. In addition, we study the training efficiency of our approach in contrast to the above strong baselines [43, 20, 31] in Table 2e, across various training epochs. The experimental outcomes illustrate that our approach not only outperforms its counterparts but also demonstrates superior efficiency.
Specifically, our method is capable of achieving comparable or superior performance in a significantly reduced training duration. For instance, our model requires only 10 epochs to achieve better performance than other methods trained for 100 epochs. This enhanced training efficiency is particularly significant, as it demonstrates the potential of our approach to deliver robust performance swiftly, thereby facilitating more expedient research in practical scenarios.
Prediction Head Dimension. Lastly, we conduct additional experiments to evaluate the impact of varying the dimension of the prediction head. As shown in Table 2f, our approach demonstrates an enhancement in CAE, reducing from 1.57 to 1.34, upon increasing the head dimension from 32 to 64. This improvement continues, culminating in a CAE of 1.19 at a head dimension of 128, where performance stabilizes, indicating this as the optimal head dimension for balancing effectiveness and parameter efficiency. We therefore select a dimension of 128 as the default setting.
## 4 Related Work and Discussion
AI for Science. AI is increasingly acknowledged as a critical instrument and has recently led to significant scientific discoveries [32, 40, 55, 5]. The trajectory of AI in scientific research commenced with elementary data analysis methods (e.g., rule-based [11, 60], Bayesian [19], analogy [24, 30, 76], evolutionary [35, 16], connectionism [83, 40], etc.) and has evolved into advanced foundation models [23, 81, 15, 18, 10]. This development is underscored by considerable recent progress in AI, particularly the emergence of sophisticated constructs such as LLMs [2, 78, 75, 95], which have fundamentally altered our approach to scientific challenges. By incorporating expansive knowledge bases, AI has become indispensable for deciphering vast and varied scientific data. The uniqueness of our work lies in the challenging nature of accessing ICF data, given the difficulty in observing LPI, which renders conventional AI tools less effective in this crucial area of physics. Building on the success of LLMs, this study harnesses their extensive knowledge base to unlock robust predictive capabilities for physical phenomena. Our empirical findings validate our methodological approach, positioning our work as pioneering and innovative. This not only demonstrates the applicability of LLMs in the scientific domain but also fosters a dynamic interplay between AI and scientific exploration. We showcase the potential of our LLM-based solution to make meaningful contributions in addressing complex scientific challenges, thus advancing both AI and science.
Plasma Physics for Fusion. In the realm of plasma physics, particularly in ICF, efficient and symmetric driving of a target is imperative. Specifically, LPI represents a significant impediment due to the complex dynamics of phenomena in fusion. For instance, Stimulated Raman and Brillouin Scattering (SRS and SBS) can cause the reflection of laser beams [37]. Moreover, Cross-Beam Energy Transfer [28] may adversely influence the symmetry of the implosion.
A critical issue, particularly in direct-drive scenarios, is the preheating caused by hot electrons produced via SRS and Two-Plasmon Decay, which potentially increases the shell entropy and diminishes the implosion efficiency [68, 21, 14, 54]. Conversely, these hot electrons might contribute positively by depositing their energy in the compressed shell, thereby enhancing the ignition shock and aiding ignition processes such as those observed in shock ignition, a novel high-gain strategy [9, 53]. Understanding the dynamics of LPI and establishing predictive models for hot electron generation in direct-drive configurations are therefore crucial. In response to these issues and challenges, extensive physical experimentation and simulations [68, 21, 14, 54, 9, 53] are necessary, which are both costly and time-consuming. To address the above challenges, our AI-based approach emerges as a potential alternative. By reformulating LLMs within a simple yet effective pipeline, our model can effectively engage in such ICF tasks, acting as a computational reservoir that encapsulates domain-specific knowledge and generalizes within the domain. This capability potentially allows LLMs to guide ICF design, circumventing the need for extensive and expensive experimental setups and simulations. Empirical evidence indicates that the incorporation of LLMs holds promise to revolutionize predictive modeling in plasma physics by providing quick, cost-effective insights that are grounded in generalizable knowledge.
Reservoir Computing. The ICF data exhibit a time-series nature, where the laser intensity driving the target correlates with the volume of hot electron energy emission. Traditional machine learning approaches [72, 7, 3, 84, 61, 62, 33] for time-series data typically involve dynamically transforming temporal inputs into a high-dimensional state space through nonlinear mappings. Revolutionary advancements in the field, such as echo state networks [29, 47] and liquid state machines [49, 92], introduced the concept of Reservoir Computing (RC). RC presents an innovative framework [51, 63, 20, 48] characterized by a static “reservoir”: since the reservoir stays fixed, training reduces to fitting only the readout parameters rather than the entire network, which lowers computational expense. This feature makes RC advantageous for applications involving temporal data [48, 82]. In this paradigm, the input signals are transformed non-linearly into the state space of a dynamical time-series system, from which outputs are linearly extracted [74]. In contrast to traditional reservoirs (e.g., RNN-based [20, 48]), which often struggle with processing highly dynamic temporal data, and contemporary time-series Large Language Models (LLMs) [79, 4, 31, 94], which employ massive parameter counts but focus solely on temporal sequences, often neglecting the vast knowledge embedded within LLMs, we propose a re-imagining of LLMs within the RC paradigm. This integration not only leverages extensive domain-specific knowledge and adept handling of temporal data but also reduces the need for extensive meta-parameter tuning by keeping most parameters static during training. Consequently, our approach emerges as a compelling alternative, effectively addressing the challenges posed by both traditional RNN-based reservoirs and modern time-series LLMs, and showcasing superior capability in handling intricate and large-scale temporal patterns.
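Since the RC paradigm is central to our reformulation, a minimal echo-state-network sketch may help readers less familiar with it (our own illustration, not a component of Fusion-LLM): the recurrent reservoir is random and fixed, and only a linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_res = 1, 200

# Fixed random reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (d_res, d_in))
W = rng.normal(0.0, 1.0, (d_res, d_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def reservoir_states(u: np.ndarray) -> np.ndarray:
    """Nonlinearly expand an input sequence u of shape (T, d_in) into states (T, d_res)."""
    x = np.zeros(d_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.asarray(states)

# Only the linear readout is trained, here via ridge regression.
u_train = rng.normal(size=(400, d_in))       # stand-in for a laser-intensity sequence
y_train = np.roll(u_train[:, 0], -1)         # toy target: one-step-ahead prediction
X = reservoir_states(u_train)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(d_res), X.T @ y_train)
y_pred = X @ W_out                           # outputs are extracted linearly
```

In these terms, Fusion-LLM replaces the random recurrent reservoir with a frozen LLM, so the reservoir states additionally carry pretrained knowledge, while the trainable part remains a lightweight readout (the prediction head).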
## 5 Conclusion
Fusion energy stands as a pivotal pathway toward advancing human civilization to a Type I status on the Kardashev scale [34]. The key to realizing this potential lies in mastering Inertial Confinement Fusion, where understanding laser-plasma instabilities is paramount. To address this challenge, we present Fusion-LLM, a groundbreaking framework merging LLMs with reservoir computing. Our approach not only provides a cost-effective solution but also emerges as a top-tier contender in forecasting hot electron dynamics, offering invaluable insights for plasma scientists in refining ICF designs. Beyond its immediate impact on ICF, employing LLMs for scientific exploration holds promise for cross-domain applications, potentially catalyzing advancements in AI-driven scientific endeavors.
## References
* [1] Abu-Shawareb, H., Acree, R., Adams, P., Adams, J., Addis, B., Aden, R., Adrian, P., Afeyan, B., Aggleton, M., Aghaian, L., et al.: Achievement of target gain larger than unity in an inertial fusion experiment. Physical Review Letters 132(6), 065102 (2024)
* [2] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
* [3] Ahmed, N.K., Atiya, A.F., Gayar, N.E., El-Shishiny, H.: An empirical comparison of machine learning models for time series forecasting. Econometric Reviews 29(5-6), 594–621 (2010)
* [4] AI, M.: Llama 3 model card. Meta (2024), https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md
* [5] Ali, S., Ravi, P., Williams, R., DiPaola, D., Breazeal, C.: Constructing dreams using generative ai. In: AAAI (2024)
* [6] Anthropic: The claude 3 model family: Opus, sonnet, haiku. Anthropic (2024), https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf
* [7] Arel, I., Rose, D.C., Karnowski, T.P.: Deep machine learning-a new frontier in artificial intelligence research [research frontier]. IEEE Computational Intelligence Magazine 5(4), 13–18 (2010)
* [8] Betti, R., Hurricane, O.: Inertial-confinement fusion with lasers. Nature Physics 12, 435–448 (2016)
* [9] Betti, R., Zhou, C.D., Anderson, K.S., Perkins, L.J., Theobald, W., Solodov, A.A.: Shock ignition of thermonuclear fuel with high areal density. Physical Review Letters 98, 155001 (2007)
* [10] Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M.S., Bohg, J., Bosselut, A., Brunskill, E., et al.: On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021)
* [11] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001)
* [12] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: NeurIPS (2020)
* [13] Cao, S., Patel, D., Lees, A., Stoeckl, C., Rosenberg, M., Gopalaswamy, V., Wen, H., Huang, H., Shvydky, A., Betti, R., et al.: Predicting hot electron generation in inertial confinement fusion with particle-in-cell simulations.
Physical Review E 106(5), 055214 (2022) * [14] Craxton, R.S., Anderson, K.S., Boehly, T.R., Goncharov, V.N., Harding, D.R., Knauer, J.P., McCrory, R.L., McKenty, P.W., Meyerhofer, D.D., Myatt, J.F., Schmitt, A.J., Sethian, J.D., Short, R.W., Skupsky, S., Theobald, W., Kruer, W.L., Tanaka, K., Betti, R., Collins, T.J.B., Delettrez, J.A., Hu, S.X., Marozas, J.A., Maximov, A.V., Michel, D.T., Radha, P.B., Regan, S.P., Sangster, T.C., Seka, W., Solodov, A.A., Soures, J.M., Stoeckl, C., Zuegel, J.D.: Direct-drive inertial confinement fusion: A review. Physics of Plasmas 22(11), 110501 (2015) * [15] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) * [16] Dietterich, T.G.: Ensemble methods in machine learning. In: International Workshop on Multiple Classifier Systems. pp. 1–15. Springer (2000) * [17] Dong, J., Ohana, R., Rafayelyan, M., Krzakala, F.: Reservoir computing meets recurrent kernels and structured transforms. In: NeurIPS (2020) * [18] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) * [19] Frank, E., Trigg, L., Holmes, G., Witten, I.H.: Naive bayes for regression. Machine Learning 41, 5–25 (2000) * [20] Gauthier, D.J., Bollt, E., Griffith, A., Barbosa, W.A.: Next generation reservoir computing. Nature Communications 12(1), 1–8 (2021) * [21] Goncharov, V.N., Sangster, T.C., Radha, P.B., Betti, R., Boehly, T.R., Collins, T.J.B., Craxton, R.S., Delettrez, J.A., Epstein, R., Glebov, V.Y., Hu, S.X., Igumenshchev, I.V., Knauer, J.P., Loucks, S.J., Marozas, J.A., Marshall, F.J., McCrory, R.L., McKenty, P.W., Meyerhofer, D.D., Regan, S.P., Seka, W., Skupsky, S., Smalyuk, V.A., Soures, J.M., Stoeckl, C., Shvarts, D., Frenje, J.A., Petrasso, R.D., Li, C.K., Seguin, F., Manheimer, W., Colombant, D.G.: Performance of direct-drive cryogenic targets on omega. Physics of Plasmas 15(5), 056310 (2008) * [22] Gopalaswamy, V., Williams, C., Betti, R., Patel, D., Knauer, J., Lees, A., Cao, D., Campbell, E., Farmakis, P., Ejaz, R., et al.: Demonstration of a hydrodynamically equivalent burning plasma in direct-drive inertial confinement fusion. Nature Physics pp. 1–7 (2024) * [23] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) * [24] Hearst, M.A., Dumais, S.T., Osuna, E., Platt, J., Scholkopf, B.: Support vector machines. IEEE Intelligent Systems and their Applications 13(4), 18–28 (1998) * [25] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997) * [26] Huang, W., Abbeel, P., Pathak, D., Mordatch, I.: Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In: ICML (2022) * [27] Huang, Y., Song, J., Wang, Z., Chen, H., Ma, L.: Look before you leap: An exploratory study of uncertainty measurement for large language models. arXiv preprint arXiv:2307.10236 (2023) * [28] Igumenshchev, I.V., Edgell, D.H., Goncharov, V.N., Delettrez, J.A., Maximov, A.V., Myatt, J.F., Seka, W., Shvydky, A., Skupsky, S., Stoeckl, C.: Crossed-beam energy transfer in implosion experiments on omega. Physics of Plasmas 17(12), 122708 (2010) * [29] Jaeger, H.: Echo state network. 
scholarpedia 2(9), 2330 (2007) * [30] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys 31(3), 264–323 (1999) * [31] Jin, M., Wang, S., Ma, L., Chu, Z., Zhang, J.Y., Shi, X., Chen, P.Y., Liang, Y., Li, Y.F., Pan, S., et al.: Time-llm: Time series forecasting by reprogramming large language models. In: ICLR (2023) * [32] Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al.: Highly accurate protein structure prediction with alphafold. Nature 596(7873), 583–589 (2021) * [33] Kadous, M.W.: Learning comprehensible descriptions of multivariate time series. In: ICML (1999) * [34] Kardashev, N.S.: Transmission of Information by Extraterrestrial Civilizations. Soviet Astronomy 8, 217 (1964) * [35] Kennedy, J., Eberhart, R.: Particle swarm optimization. In: ICNN (1995) * [36] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015) * [37] Kirkwood, R.K., Montgomery, D.S., Afeyan, B.B., Moody, J.D., MacGowan, B.J., Joshi, C., Wharton, K.B., Glenzer, S.H., Williams, E.A., Young, P.E., Kruer, W.L., Estabrook, K.G., Berger, R.L.: Observation of the nonlinear saturation of langmuir waves driven by ponderomotive force in a large scale plasma. Physical Review Letters 83, 2965–2968 (1999) * [38] Klimo, O., Weber, S., Tikhonchuk, V.T., Limpouch, J.: Particle-in-cell simulations of laser–plasma interaction for the shock ignition scenario. Plasma Physics and Controlled Fusion 52(5), 055013 (2010) * [39] Kojima, T., Gu, S.S., Reid, M., Matsuo, Y., Iwasawa, Y.: Large language models are zero-shot reasoners. In: NeurIPS (2022) * [40] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015) * [41] Li, C., Wong, C., Zhang, S., Usuyama, N., Liu, H., Yang, J., Naumann, T., Poon, H., Gao, J.: Llava-med: Training a large language-and-vision assistant for biomedicine in one day. In: NeurIPS (2023) * [42] Li, J., Zhang, S., Krauland, C.M., Wen, H., Beg, F.N., Ren, C., Wei, M.S.: Pump depletion and hot-electron generation in long-density-scale-length plasma with shock-ignition high-intensity laser. Physical Review E 101, 033206 (2020) * [43] Li, X., Zhu, Q., Zhao, C., Duan, X., Zhao, B., Zhang, X., Ma, H., Sun, J., Lin, W.: Higher-order granger reservoir computing: simultaneously achieving scalable complex structures inference and accurate dynamics prediction. Nature Communications 15(1), 2506 (2024) * [44] Lin, B., Zhu, Y., Chen, Z., Liang, X., Liu, J., Liang, X.: ADAPT: vision-language navigation with modality-aligned action prompts. In: CVPR (2022) * [45] Lin, Z., Trivedi, S., Sun, J.: Generating with confidence: Uncertainty quantification for black-box large language models. arXiv preprint arXiv:2305.19187 (2023) * [46] Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. In: NeurIPS (2023) * [47] Lukoševičius, M.: A practical guide to applying echo state networks. In: Neural Networks: Tricks of the Trade: Second Edition, pp. 659–686. Springer (2012) * [48] Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Computer Science Review 3(3), 127–149 (2009) * [49] Maass, W.: Liquid state machines: motivation, theory, and applications. Computability in context: computation and logic in the real world pp. 275–296 (2011) * [50] Moor, M., Banerjee, O., Abad, Z.S.H., Krumholz, H.M., Leskovec, J., Topol, E.J., Rajpurkar, P.: Foundation models for generalist medical artificial intelligence. 
Nature 616(7956), 259–265 (2023) * [51] Nakajima, K., Fischer, I.: Reservoir computing. Springer (2021) * [52] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: NeurIPS (2019) * [53] Perkins, L.J., Betti, R., LaFortune, K.N., Williams, W.H.: Shock ignition: A new approach to high gain inertial confinement fusion on the national ignition facility. Physical Review Letters 103, 045004 (2009) * [54] Radha, P., Hohenberger, M., Edgell, D., Marozas, J., Marshall, F., Michel, D., Rosenberg, M., Seka, W., Shvydky, A., Boehly, T., et al.: Direct drive: Simulations and results from the national ignition facility. Physics of Plasmas 23(5), 056305 (2016) * [55] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) * [56] Rezayi, S., Liu, Z., Wu, Z., Dhakal, C., Ge, B., Zhen, C., Liu, T., Li, S.: Agribert: Knowledge-infused agricultural language models for matching food and nutrition. In: IJCAI (2022) * [57] Riconda, C., Weber, S., Tikhonchuk, V., Héron, A.: Kinetic simulations of stimulated raman backscattering and related processes for the shock-ignition approach to inertial confinement fusion. Physics of Plasmas 18(9), 092701 (2011) * [58] Rosenberg, M., Solodov, A., Myatt, J., Seka, W., Michel, P., Hohenberger, M., Short, R., Epstein, R., Regan, S., Campbell, E., et al.: Origins and scaling of hot-electron preheat in ignition-scale direct-drive inertial confinement fusion experiments. Physical Review Letters 120(5), 055001 (2018) * [59] Rosenberg, M., Solodov, A., Seka, W., Follett, R., Myatt, J., Maximov, A., Ren, C., Cao, S., Michel, P., Hohenberger, M., et al.: Stimulated raman scattering mechanisms and scaling behavior in planar direct-drive experiments at the national ignition facility. Physics of Plasmas 27(4) (2020) * [60] Safavian, S.R., Landgrebe, D.: A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics 21(3), 660–674 (1991) * [61] Samuel, A.L.: Some studies in machine learning using the game of checkers. IBM Journal of Research and Development 3(3), 210–229 (1959) * [62] Sapankevych, N.I., Sankar, R.: Time series prediction using support vector machines: a survey. IEEE Computational Intelligence Magazine 4(2), 24–38 (2009) * [63] Schrauwen, B., Verstraeten, D., Van Campenhout, J.: An overview of reservoir computing: theory, applications and implementations. In: European Symposium on Artificial Neural Networks. pp. 471–482 (2007) * [64] Sclar, M., Choi, Y., Tsvetkov, Y., Suhr, A.: Quantifying language models’ sensitivity to spurious features in prompt design or: How I learned to start worrying about prompt formatting. arXiv preprint arXiv:2310.11324 (2023) * [65] Sclar, M., Choi, Y., Tsvetkov, Y., Suhr, A.: Quantifying language models’ sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting. In: ICLR (2024) * [66] Shang, W.L., Betti, R., Hu, S.X., Woo, K., Hao, L., Ren, C., Christopherson, A.R., Bose, A., Theobald, W.: Electron shock ignition of inertial fusion targets. 
Physical Review Letters 119, 195001 (2017) * [67] Singhal, K., Azizi, S., Tu, T., Mahdavi, S.S., Wei, J., Chung, H.W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., et al.: Large language models encode clinical knowledge. Nature 620(7972), 172–180 (2023) * [68] Smalyuk, V.A., Shvarts, D., Betti, R., Delettrez, J.A., Edgell, D.H., Glebov, V.Y., Goncharov, V.N., McCrory, R.L., Meyerhofer, D.D., Radha, P.B., Regan, S.P., Sangster, T.C., Seka, W., Skupsky, S., Stoeckl, C., Yaakobi, B., Frenje, J.A., Li, C.K., Petrasso, R.D., Séguin, F.H.: Role of hot-electron preheating in the compression of direct-drive imploding targets with cryogenic ${\mathrm{d}}_{2}$ ablators. Physical Review Letters 100, 185005 (2008) * [69] Smalyuk, V., Shvarts, D., Betti, R., Delettrez, J., Edgell, D., Glebov, V.Y., Goncharov, V., McCrory, R., Meyerhofer, D., Radha, P., et al.: Role of hot-electron preheating in the compression of direct-drive imploding targets with cryogenic d 2 ablators. Physical Review Letters 100(18), 185005 (2008) * [70] Solodov, A., Rosenberg, M., Seka, W., Myatt, J., Hohenberger, M., Epstein, R., Stoeckl, C., Short, R., Regan, S., Michel, P., et al.: Hot-electron generation at direct-drive ignition-relevant plasma conditions at the national ignition facility. Physics of Plasmas 27(5) (2020) * [71] Stoeckl, C., Bahr, R., Yaakobi, B., Seka, W., Regan, S., Craxton, R., Delettrez, J., Short, R., Myatt, J., Maximov, A., et al.: Multibeam effects on fast-electron generation from two-plasmon-decay instability. Physical Review Letters 90(23), 235002 (2003) * [72] Sutton, R.S.: Learning to predict by the methods of temporal differences. Machine Learning 3, 9–44 (1988) * [73] Szot, A., Clegg, A., Undersander, E., Wijmans, E., Zhao, Y., Turner, J.M., Maestre, N.D., Mukadam, M., Chaplot, D.S., Maksymets, O., Gokaslan, A., Vondruš, V., Dharur, S., Meier, F., Galuba, W., Chang, A.X., Kira, Z., Koltun, V., Malik, J., Savva, M., Batra, D.: Habitat 2.0: Training home assistants to rearrange their habitat. In: NeurIPS (2021) * [74] Tanaka, G., Yamane, T., Héroux, J.B., Nakane, R., Kanazawa, N., Takeda, S., Numata, H., Nakano, D., Hirose, A.: Recent advances in physical reservoir computing: A review. Neural Networks 115, 100–123 (2019) * [75] Team, G., Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A.M., Hauth, A., et al.: Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023) * [76] Tenenbaum, J.B., Silva, V.d., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000) * [77] Thirunavukarasu, A.J., Ting, D.S.J., Elangovan, K., Gutierrez, L., Tan, T.F., Ting, D.S.W.: Large language models in medicine. Nature Medicine 29(8), 1930–1940 (2023) * [78] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. 
arXiv preprint arXiv:2302.13971 (2023) * [79] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Canton-Ferrer, C., Chen, M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W., Fuller, B., Gao, C., Goswami, V., Goyal, N., Hartshorn, A., Hosseini, S., Hou, R., Inan, H., Kardas, M., Kerkez, V., Khabsa, M., Kloumann, I., Korenev, A., Koura, P.S., Lachaux, M., Lavril, T., Lee, J., Liskovich, D., Lu, Y., Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog, I., Nie, Y., Poulton, A., Reizenstein, J., Rungta, R., Saladi, K., Schelten, A., Silva, R., Smith, E.M., Subramanian, R., Tan, X.E., Tang, B., Taylor, R., Williams, A., Kuan, J.X., Xu, P., Yan, Z., Zarov, I., Zhang, Y., Fan, A., Kambadur, M., Narang, S., Rodriguez, A., Stojnic, R., Edunov, S., Scialom, T.: Llama 2: Open foundation and fine-tuned chat models. CoRR (2023) * [80] Tzachor, A., Devare, M., Richards, C., Pypers, P., Ghosh, A., Koo, J., Johal, S., King, B.: Large language models and agricultural extension services. Nature Food 4(11), 941–948 (2023) * [81] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) * [82] Verstraeten, D., Schrauwen, B., d’Haene, M., Stroobandt, D.: An experimental unification of reservoir computing methods. Neural Networks 20(3), 391–403 (2007) * [83] Weisberg, S.: Applied linear regression, vol. 528. John Wiley & Sons (2005) * [84] Williams, C.K., Rasmussen, C.E.: Gaussian processes for machine learning. MIT press Cambridge, MA (2006) * [85] Woo, G., Liu, C., Kumar, A., Xiong, C., Savarese, S., Sahoo, D.: Unified training of universal time series forecasting transformers. arXiv preprint arXiv:2402.02592 (2024) * [86] Wu, H., Xu, J., Wang, J., Long, M.: Autoformer: Decomposition transformers with Auto-Correlation for long-term series forecasting. In: NeurIPS (2021) * [87] Yaakobi, B., Stoeckl, C., Seka, W., Delettrez, J., Sangster, T., Meyerhofer, D.: Measurement of preheat due to fast electrons in laser implosions of cryogenic deuterium targets. Physics of Plasmas 12(6) (2005) * [88] Yaakobi, B., Stoeckl, C., Boehly, T., Meyerhofer, D., Seka, W.: Measurement of preheat due to fast electrons in laser implosions. Physics of Plasmas 7(9), 3714–3720 (2000) * [89] Yan, R., Li, J., Ren, C.: Intermittent laser-plasma interactions and hot electron generation in shock ignition. Physics of Plasmas 21(6), 062705 (2014) * [90] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K.R., Cao, Y.: React: Synergizing reasoning and acting in language models. In: ICLR (2023) * [91] Zhang, A., Yang, J., Luo, Y., Fan, S.: Forecasting the progression of human civilization on the kardashev scale through 2060 with a machine learning approach. Scientific Reports 13(1), 11305 (2023) * [92] Zhang, Y., Li, P., Jin, Y., Choe, Y.: A digital liquid state machine with biologically inspired learning and its application to speech recognition. IEEE Transactions on Neural Networks and Learning Systems 26(11), 2635–2649 (2015) * [93] Zheng, L., Chiang, W., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E.P., Zhang, H., Gonzalez, J.E., Stoica, I.: Judging llm-as-a-judge with mt-bench and chatbot arena. In: NeurIPS (2023) * [94] Zhou, T., Niu, P., Sun, L., Jin, R., et al.: One fits all: Power general time series analysis by pretrained lm. 
In: NeurIPS (2023) * [95] Zhu, D., Chen, J., Shen, X., Li, X., Elhoseiny, M.: Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592 (2023)
SUMMARY OF THE APPENDIX
This supplementary contains additional experimental results and discussions of our NeurIPS 2024 submission: Inertial Confinement Fusion Forecasting via LLMs, organized as follows:
* • §S1 provides Implementation Details.
* • §S2 reports more Quantitative Results with Runtime Analysis.
* • §S3 shows more Qualitative Results.
* • §S4 analyzes Failure Cases.
* • §S5 conducts Confidence Analysis.
* • §S6 discusses the Social Impacts & Limitations of our research.
* • §S7 offers Ethical Safeguards for our dataset.
* • §S8 claims Reproducibility of our approach.
* • §S9 supplies Data Licenses for the methods we used for comparison.
## Appendix S1 Implementation Details
The overall pipeline of Fusion-LLM is shown in Fig. 3. Experiments are conducted on two NVIDIA A100-40GB GPUs. For our approach, we keep all parameters of the LLMs and most of the SDC frozen during fine-tuning. Only parameters pertaining to the Prediction Head and part of the Spatial Encoder are trainable. The code and dataset shall be publicly released upon paper acceptance.
* • Fusion-LLM is built from Llama-2-7B-hf / Llama-3-8B [78, 4] to construct the reservoir without tuning.
* • Fusion-specific prompts structure the textual prompts with three descriptors: a context descriptor, a task descriptor, and an input descriptor. Each descriptor is initialized with specialized tokens for indication (e.g., $<|$begin_of_text$|>$, $<|$eot_id$|>$, $<|$start_header_id$|>$, etc.) and input scalars as context descriptions (e.g., $<$seq_len$>$, $<$pred_len$>$, $<$phase_plate$>$, etc.). These prompts are subsequently concatenated and passed through the LLM's projection layer for feature embedding.
* • Signal-digesting channels are composed of two components, the temporal encoder and the spatial encoder. The former, which incorporates 24 Transformer layers and a linear layer, captures temporal features over the input laser signal. This module has been pre-trained on the Large-scale Open Time Series Archive (LOTSA) dataset [85], which covers nine varied domains and compiles over 27 billion timestamped instances. The spatial encoder first uses a projection block from Llama [78, 4] to encode the context description of the input signal, followed by a linear transformation. The outputs are fed into a cross-attention layer, where Key and Value are derived from the contextual embedding and the Query stems from the temporal features, to generate the final spatial features.
We concatenate the spatial and temporal features before feeding them into a linear layer to produce the final, augmented input signals.
* • Confidence scanner has been described in §2.2.3 and introduces no additional parameters. The default number of tokens $k$ used in the confidence calculation is set to 50 in our implementation.
* • Prediction head consists of two layers: a convolution layer with kernel size 32 and stride 32, followed by batch normalization and GELU activation, applied to the LLM output; this is then fed to a linear layer with an input dimension of 128 that produces the final prediction (a code sketch follows this list).
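Read literally, the prediction-head specification above admits the following PyTorch sketch (our reconstruction, not the released code); the LLM hidden size of 4096 (Llama-2-7B / Llama-3-8B) and the 128 convolution output channels (inferred from the linear layer's input dimension) are assumptions.

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Sketch: conv(k=32, s=32) + BatchNorm + GELU, then a linear layer (in=128)."""

    def __init__(self, llm_dim: int = 4096):  # assumed Llama-2-7B/Llama-3-8B hidden size
        super().__init__()
        # Kernel/stride 32 compresses every 32 LLM token embeddings into one step.
        self.conv = nn.Conv1d(llm_dim, 128, kernel_size=32, stride=32)
        self.norm = nn.BatchNorm1d(128)
        self.act = nn.GELU()
        self.linear = nn.Linear(128, 1)  # one energy value per compressed step

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_tokens, llm_dim) last-layer LLM embeddings
        z = self.act(self.norm(self.conv(h.transpose(1, 2))))  # (B, 128, n_tokens // 32)
        return self.linear(z.transpose(1, 2)).squeeze(-1)      # (B, n_tokens // 32)

# Usage: with 12,800 tokens, the head emits 400 values, matching the 400-step shots.
# head = PredictionHead(); head(torch.randn(2, 12800, 4096)).shape  -> (2, 400)
```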
## Appendix S2 Quantitative Results
This section elaborates on a detailed analysis of the quantitative results in Table S1, focusing specifically on in-context learning [12] performance and a runtime assessment of the models under investigation. Initially, we present supplementary in-context learning results obtained directly from various LLMs (i.e., Llama 2 [79], Llama 3 [4], and Claude 3 Opus [6]). These findings indicate that, even without an additional fine-tuning process, the LLMs exhibit substantial proficiency in the in-context learning scheme within the ICF task. For instance, Claude 3 Opus [6] achieves CAE scores of 12.19, 10.67, and 9.46 for the 1-shot, 2-shot, and 3-shot scenarios, respectively. This amply demonstrates that a vanilla LLM has the ability to make inferences and predictions on empirical scientific data even when it is not fine-tuned at all. It also underscores that our approach, leveraging these LLMs, represents a notable advancement, particularly in forecasting the energy dynamics of hot electrons.
Furthermore, it is pertinent to emphasize the economic and operational advantages of our computational approach over traditional physical experiments. Specifically, conducting a single ICF experiment typically incurs costs upwards of one million US dollars. Likewise, computational simulations such as a $150\mu m$ PIC simulation [13] require extensive computational resources, amounting to 19,584 CPU cores over a period of 10 hours. In stark contrast, our model necessitates significantly less computational time and resources, requiring only 30 minutes on 2 NVIDIA A100 GPUs for training and only 3~4 seconds for inference, while delivering much higher predictive accuracy than the PIC simulation. This comparison underscores not only the cost-effectiveness of our approach but also its efficiency and practicality in other scientific applications where computational resource constraints are a critical factor.
Table S1: Quantitative results on the Fusion4AI test split for hot electron energy forecasting (see §3.1 for details). Train Time refers to training for the designated task, and Infer Time refers to the time used to predict one case. Note that 3-shot experiments could not be performed on the Llama series of models due to the limitation of the context window.
Method | # Params | Train Time | Infer Time | CAE$\downarrow$ | top-1 MAE$\downarrow$ | top-5 MAE$\downarrow$
---|---|---|---|---|---|---
PIC Simulation [13] | - | - | $>10$ hrs | 2.88 | 0.20 | 0.13
LSTM [25] | 81.6K | ~5 mins | $<1$ s | 5.82 | 0.35 | 0.35
Autoformer [86] | 120.4K | ~8 mins | $<1$ s | 5.79 | 0.35 | 0.34
GPT4TS [94] | 1.5 B | ~22 mins | ~2 s | 3.34 | 0.18 | 0.14
Time-LLM [31] | 7 B | ~20 mins | ~3 s | 3.48 | 0.18 | 0.15
Llama 2 (1-Shot) [79] | 7 B | - | ~3 s | 471.22 | 15.80 | 14.92
Llama 2 (2-Shot) [79] | 7 B | - | ~5 s | 30.33 | 0.95 | 0.91
Llama 2 (1-Shot) [79] | 70 B | - | ~7 s | 20.63 | 0.64 | 0.63
Llama 2 (2-Shot) [79] | 70 B | - | ~8 s | 16.42 | 0.51 | 0.50
Llama 3 (1-Shot) [4] | 8 B | - | ~4 s | 583.14 | 17.70 | 16.97
Llama 3 (2-Shot) [4] | 8 B | - | ~6 s | 26.66 | 0.83 | 0.81
Llama 3 (1-Shot) [4] | 70 B | - | ~14 s | 72.35 | 1.02 | 1.15
Llama 3 (2-Shot) [4] | 70 B | - | ~19 s | 13.62 | 0.40 | 0.39
Claude 3 Opus (1-Shot) [6] | 137 B | - | ~12 s | 12.19 | 0.39 | 0.39
Claude 3 Opus (2-Shot) [6] | 137 B | - | ~17 s | 10.67 | 0.38 | 0.37
Claude 3 Opus (3-Shot) [6] | 137 B | - | ~20 s | 9.46 | 0.37 | 0.36
RCRK [17] | 106K | ~2 mins | $<1$ s | 4.31 | 0.28 | 0.22
HoGRC [43] | 394K | ~4 mins | $<1$ s | 4.20 | 0.25 | 0.22
NGRC [20] | 157K | ~2 mins | $<1$ s | 4.28 | 0.27 | 0.23
Fusion-LLM (Llama-2) | 7B | ~30 mins | ~3 s | 2.15 | 0.14 | 0.12
Fusion-LLM (Llama-3) | 8B | ~30 mins | ~4 s | 1.90 | 0.14 | 0.11
Figure S1: Predictions of LLMs with In-Context Learning. We plot Ground Truth and the predictions of Claude 3 Opus (3-shot) and Llama 3 70B (2-shot) with the comparison of the trained methods LSTM and Autoformer. Y and X axes denote energy and time steps.
## Appendix S3 More Qualitative Results
This section presents additional qualitative results that help in understanding the capabilities and effectiveness of our model. First, we release all visualized prediction results of our model Fusion-LLM on the test split of our Fusion4AI dataset in Fig. S2. From these qualitative results, it can be seen that our model achieves accurate predictions on all unseen data, conforming in particular to the temporal and spatial characteristics of the predicted targets, which is crucial for physicists to apply our model as a tool in the design of real-world ICF ignitions.
Figure S2: Visualization of hot electron prediction on the test split. We plot Ground Truth and the predictions of Ours, Time-LLM, LSTM and Autoformer. Y and X axes denote energy and time steps, respectively.
Recall from §S2 that direct prediction on ICF tasks by a vanilla LLM using in-context learning, without fine-tuning on our data, is quantitatively worse than fine-tuned methods; however, it can in fact predict more meaningful patterns than traditional methods such as LSTM [25] and Autoformer [86]. As illustrated in Figure S1, the LLM without fine-tuning can infer approximate predictions with reference to the 1 to 3 examples provided, whereas LSTM [25] and Autoformer [86] only predict straight lines close to 0 without meaningful patterns. This demonstrates that vanilla LLMs already contain the capability to make inferences on specific empirical scientific data, which is the core reason we chose LLMs as the reservoir of our Fusion-LLM.
## Appendix S4 Failure Case Analysis
Figure S3: The qualitative result of Shot 80937. We plot Input Laser Intensity, Ground Truth and Prediction. Y axis denotes laser intensity / hot electron energy, and X axis denotes time step.
In this section, we examine the most significant outlier, i.e., the forecast with the largest error generated by Fusion-LLM on the test split. This particular instance serves as a critical case study for understanding the limitations and challenges faced by our model. Fig. S3 illustrates that the shot markedly deviates from the typical scenarios. Notably, this shot exhibits an exceptionally low peak hot electron energy, registering less than 0.15, whereas the majority of other cases yield values ranging between 0.25 and 0.5 under a comparable input laser profile. This anomaly categorizes the shot as an out-of-distribution (OOD) instance. The limited volume of training data available to Fusion-LLM is a plausible explanation for the model's diminished performance on this OOD data. In scenarios where training data is sparse, the model's capability to generalize to new, especially atypical, data points is inherently restricted. Consequently, this case highlights the importance of enhancing the dataset's diversity and volume for ICF tasks. We hope the community can share more data points to improve and enlarge the Fusion4AI dataset (§3.1) together.
## Appendix S5 Confidence Analysis
In this section, we provide a comprehensive discussion and analysis of the confidence scores associated with Fusion-LLM. As elaborated in §2.2.3, our confidence scores offer per-step evaluations, thereby aiding physicists in gaining deeper insights into the reliability of the various segments of a prediction. This functionality is particularly vital for understanding the model's performance dynamics within specific contexts of its predictive output. To visually represent this, we have plotted the prediction errors at each time step for the four test sets alongside their respective confidence scores in Fig. S4. The visual analysis reveals that the confidence scores exhibit a discernible decline in the intervals where the model's predictions incur larger errors. This observation substantiates the existence of a correlation between the model's diminished confidence and the occurrence of higher prediction inaccuracies. Such an analysis underscores the importance of confidence scores as a diagnostic tool, highlighting intervals where the model's predictions are potentially less reliable. By mapping these confidence scores to the corresponding prediction errors, physicists can identify specific phases within the predicted temporal sequence where the model's forecasting should be interpreted with caution. This capability not only enhances the trustworthiness of Fusion-LLM but also provides critical feedback for further refinement of the model in future research. Moreover, the integration of confidence scores into the model's predictive framework offers a robust mechanism for assessing the model's performance in real-time applications. By continuously monitoring these scores, physicists can make informed decisions about the reliability of the predictions, ensuring that critical assessments and subsequent actions are based on the most credible forecasting.
Figure S4: Qualitative results of confidence score and prediction error.
## Appendix S6 Social Impacts and Limitations
The introduction of Fusion-LLM represents a significant advancement in integrating LLMs with classical reservoir computing paradigms to enhance predictive capabilities in Inertial Confinement Fusion. This novel approach not only meets but exceeds several existing state-of-the-art models in performance benchmarks.
From a societal perspective, the implications of Fusion-LLM are profoundly beneficial, as our approach provides a valuable tool for advancing our understanding and capabilities in harnessing fusion energy, a potential key to long-term sustainable energy solutions. However, it is imperative to acknowledge and critically assess the potential drawbacks associated with this technology. Similar to other predictive models, Fusion-LLM faces challenges when dealing with out-of-distribution data or scenarios that have not been previously encountered. This limitation underscores the need for ongoing research and refinement, particularly in its application to real-world ICF scenarios where unpredictable behaviors might emerge. Therefore, while the model demonstrates promising applications, its deployment in practical settings must be approached with caution, ensuring continuous evaluation and adaptation to maintain reliability and safety in its predictive assertions.
## Appendix S7 Ethical Safeguards
For our paper, which involves a new dataset, we will establish comprehensive ethical safeguards to mitigate potential misuse and ensure responsible utilization, as outlined in the detailed protocols accompanying the final release of the models and datasets. These protocols include strict usage guidelines, access restrictions, integration of safety filters, and monitoring mechanisms. We conduct thorough risk assessments to identify potential misuse scenarios, developing tailored mitigation strategies such as robust data governance frameworks. Although not all research may require stringent safeguards, we adhere to best practices, promoting ethical awareness, encouraging researchers to consider the broader impacts of their work, and maintaining detailed documentation for transparency and accountability. These efforts demonstrate our commitment to upholding the highest standards of ethical conduct in scientific inquiry, aiming to safeguard the interests and privacy of all people involved.
## Appendix S8 Reproducibility
Fusion-LLM is implemented in PyTorch [52]. Experiments are conducted on two NVIDIA A100-40GB GPUs. To guarantee reproducibility, our full implementation shall be publicly released upon paper acceptance.
## Appendix S9 Licenses for existing assets
All the methods we used for comparison are publicly available for academic usage. PIC Simulation is implemented based on the reproduction in osiris-code/osiris, released under the AGPL-3.0 license. We use huggingface/transformers for the implementations of Autoformer [86], Llama 2 [79] and Llama 3 [4], under the Apache-2.0 license and the Llama 2/3 Community License Agreements. We used the official repositories DAMO-DI-ML/NeurIPS2023-One-Fits-All (GPT4TS [94]), KimMeen/Time-LLM [31], rubenohana/Reservoir-computing-kernels (RCRK [17]), CsnowyLstar/HoGRC [43] and quantinfo/ng-rc-paper-code (NGRC [20]) for our comparison experiments; Time-LLM [31] is licensed under Apache-2.0, HoGRC [43] and NGRC [20] are licensed under MIT, and the rest did not state their licenses.
# Open or not open: Are conventional radio access networks more secure and trustworthy than Open RAN?
Felix Klement Computer Engineering, University of Passau, 94032 Passau, Germany Stefan Katzenbeisser Computer Engineering, University of Passau, 94032 Passau, Germany Vincent Ulitzsch Security in Telecommunications, TU Berlin, 10587 Berlin, Germany Juliane Krämer Data Security and Cryptography, University of Regensburg, 93053 Regensburg, Germany Slawomir Stanczak Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany Zoran Utkovski Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany Igor Bjelakovic Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany Gerhard Wunder Cybersecurity and AI Group, FU Berlin, 14195 Berlin, Germany
###### Abstract
The Open RAN architecture is a promising and future-oriented architecture. It is intended to open up the radio access network (RAN) and enable more innovation and competition in the market. This will lead to RANs for current 5G networks, but especially for future 6G networks, evolving from the current highly integrated, vendor-specific RAN architecture towards disaggregated architectures with open interfaces that make it possible to better tailor RAN solutions to the requirements of 5G and 6G applications. However, the introduction of such an open architecture substantially broadens the attack possibilities when compared to conventional RANs. In the past, this has often led to negative headlines that, in summary, have associated Open RAN with faulty or inadequate security. In this paper, we analyze what components are involved in an Open RAN deployment, how to assess the current state of security, and what measures need to be taken to ensure secure operation.
###### Index Terms:
Security, Open RAN, O-RAN, OpenRAN, 5G, 6G
## I Introduction to Open-RAN
### I-A Motivation
Modern communication is one of the central pillars of successful digitization. Particularly instrumental is the recently introduced 5G technology and its ongoing evolution towards 6G. In addition to public mobile networks, 5G technology - and in the long term 6G technology - will also be used for local radio networks (so-called private networks or campus networks). A 5G mobile network (whether public or private) typically consists of a transport network (e.g., a fiber optic network), a core network with central elements for network control, and the radio access network (RAN) that provides connections to mobile terminals. A schematic example of such a 5G mobile radio network can be seen in Figure 1. While there is a plethora of vendors for virtualized core networks, radio access networks are provided by only a handful of major network equipment vendors. Today's RANs are, in addition, highly integrated solutions from individual vendors, with little interoperability between products from different vendors. This inevitably leads to innovation barriers. A key to more innovation in mobile networks lies in the Open RAN approach, which uses RAN technologies based on disaggregation and openness. In the Open RAN approach, the RAN is divided into several RAN units, each of which performs different RAN functions. The crucial point here is that the interfaces between the RAN units are open and guarantee interoperability. The open interfaces are therefore the basis for more flexibility and the much needed trust in communication technologies. Finally, Open RAN promises performance enhancements over the current integrated vendor-specific solutions.
Fig. 1: Mobile Network
In addition to the architectural disaggregation and openness (in the sense of interoperability), the aspects of cloudification and virtualisation [1], network slicing [2], [3] and machine learning [4] also play an important role in the Open RAN context. Yet, it is important to emphasize that, except for disaggregation and openness, the other aspects such as virtualization and machine learning are not an inherent part of the Open RAN concept. This means that Open RAN systems basically do not need to be virtualized, which can be beneficial in some cases. For instance, in Massive MIMO systems, it may be beneficial in terms of energy efficiency to implement the lower-layer RAN functions on a dedicated system-on-chip (SoC) rather than running these functions on general-purpose processors. Therefore, Open RAN is not equivalent to virtualized RAN (vRAN), even though many Open RAN systems are highly virtualized systems. However, this is also true for highly integrated and closed RAN systems. For a better understanding of this paper, it is also helpful to distinguish between Open RAN, O-RAN, and OpenRAN (one word), as these terms are often confused or used interchangeably. The acronym O-RAN originates from the O-RAN Alliance, which focuses largely on the development of the O-RAN architecture. This architecture forms the basis for the analysis in this paper. OpenRAN (one word), on the other hand, is a project group established by the Telecom Infra Project (TIP). This term plays no role in this paper. Finally, Open RAN is used as a generic term for disaggregated systems with open and interoperable interfaces. The O-RAN architecture is one possible Open RAN architecture. Especially in the context of campus networks, there might be different flavors of the Open RAN architecture.
### I-B Architectural Overview
A traditional 3GPP-specified NG-RAN (Next Generation Radio Access Network) is divided into two logical RAN units: the CU (Central Unit) and the DU (Distributed Unit). These basic units in turn comprise several logical functional units. All of them together are then connected to the core network. In Open RAN, the above two RAN functions are further divided according to the 3GPP definition. Figure 2 shows a simplified version outlining the breakdown of RAN functions and interfaces for a 3GPP-compliant O-RAN architecture [5]. New RAN functions specifically defined in the context of O-RAN are:
1. Service Management and Orchestration (SMO) Framework
2. RAN Intelligent Controllers (RICs) in the variants non-real-time (Non-RT RIC) and near-real-time (Near-RT RIC)
3. Radio Unit (O-RU)
4. O-Cloud
We briefly explain the newly introduced interfaces within O-RAN in Section II.
Fig. 2: O-RAN specific interfaces [5]
### I-C Security Advantages
While most critics of Open RAN architectures often criticise its security in general or describe it as poor, we rather see a great opportunity to apply security solutions effectively across the board. It is of extreme importance in all projects to include security assessment at an early stage. As we are still in the initial design phase for O-RAN, we can now develop and incorporate security concepts in a timely manner. The concept of security by design is based on the fact that iterating an architecture towards a secure design in its early stages is many times more effective and leads to more stable outcomes than conducting a security analysis in the later stages of a technology's lifecycle [6].
A risk analysis [7] has already been commissioned by the German Federal Office for Information Security (BSI). We have used this analysis as the basis for our further research and built on it. The BSI study is an important contribution because it highlights the risks associated with the O-RAN architecture and has triggered an important discussion about security in 5G networks. In this paper, we would like to focus more on the opportunities of the Open RAN approach and raise other important issues such as post-quantum security.

## II Important Interfaces

In this section we briefly describe and summarise the most important interfaces within an O-RAN deployment. The descriptions of the individual interfaces are taken from the official O-RAN specifications.

### II-A O-Cloud

The O-Cloud is a cloud-based computing platform that includes a collection of physical infrastructure nodes and hosts the components of relevant O-RAN functions (e.g., Near-RT RIC, O-CU-CP, O-CU-UP and O-DU). Beyond that, it also provides support for software components as well as the corresponding management and orchestration functions. Generally speaking, it can be seen as the central execution environment for O-RAN components [8].

### II-B O1-Interface

This interface grants the service management and orchestration framework access to network capabilities. Here, network management is implemented according to the FCAPS model [9]. FCAPS follows the ISO model for telecommunications network management, which defines and incorporates the fault, configuration, accounting, performance, and security management task areas [10], [11].

### II-C O2-Interface

This interface plays a central role in the O-RAN environment. It serves the management and orchestration of the O-Cloud and the services running on it, and its objective is to guarantee secure communication between the SMO framework and the O-Cloud platform. Because of the far-reaching operations it enables, the O2 interface is also an extremely powerful one [12].

### II-D A1-Interface

The A1-Interface facilitates communication between the Non-Real-Time RIC and the Near-Real-Time RIC. This involves the transmission of data from O-RAN-internal and -external sources to the SMO framework. As an example, the declarative A1 policy-based guidelines, which contain statements about goals and resources for UEs and cells, can be mentioned here. Other administrative information shared via this interface is used for ML models (training, updating, and use of ML models). Various internal as well as external O-RAN data sources are made available as enrichment information. The availability and general use of these sources is not crucial for the fulfilment of a task; it only serves the purpose of general improvement [8].

### II-E R1-Interface

To access RIC functions that do not run in real time, the so-called rApps utilise the R1 interface. Examples include the provision of policy-based guidance and enrichment information obtained through the A1 interface. Further main components are data analysis, AI/ML training and information retrieval for RAN optimisation or for usage by other rApps, as well as the recommendation of configurations, which can be transmitted via the O1 interface [13].

### II-F E2-Interface

The E2-Interface is used to connect the Near-RT RIC with the so-called E2 nodes. In general, it shall support all protocol layers and interfaces defined in 3GPP radio access networks. The objective is to manage and improve the E2 nodes and the resources they consume.
To make this feasible, the “RAN Function Network Interfaces (NI)” service can observe and, if necessary, modify the entire data traffic of the network interface of each individual node [14].

### II-G Open Fronthaul M-Plane

The Open Fronthaul M-Plane interface allows the management of the open radio unit components. It is used for performance reporting and for initialising and configuring operating parameters. Particularly relevant from a risk perspective is the possibility of updating the software of the components via this interface [15].

### II-H Open Fronthaul CUS-Plane

The user and control plane data of the Uu interface are transmitted via the Open Fronthaul CUS-Plane interface. Furthermore, this component is in charge of synchronizing the time between the Open Distributed Unit and the Open Radio Unit [16].

## III Stakeholders

To be able to analyse risks within an architecture more precisely, it is important to determine all stakeholders within the system and to define their capabilities. In Table I, possible attackers are therefore briefly described according to the models defined by [17], [7] and [18]. We have extracted what we consider to be the most important stakeholders and categorised them according to their level of access and overall safety-critical impact. In total, we have assigned four access levels (L1 to L4). The higher the level, the greater the potential impact of the capabilities available to the stakeholder. We defined the individual levels on the basis of the respective capabilities and their potential effects. All stakeholders who use only a few or individual areas of the RAN, and could exploit them, fall under L1; we therefore assume a relatively low common risk. At level two, the stakeholder can further influence the component and also control it if necessary, i.e. they have explicit knowledge of the respective device and either operate it themselves or know to what extent and in what environment it is operated. In terms of risk, an L2 stakeholder can already cause considerably more damage than an L1 stakeholder, which is why we declare the overall risk to be medium. The next level, L3, describes all stakeholder entities that have control of and access to a complete stand-alone RAN system. This class poses a very high risk, as these entities have the ability to manipulate any services or hardware components. Thus, L3 creates a high risk with regard to the entire system. The last and highest access category is level 4, representing the highest possible general risk (severe). It refers to actors that are specialised in the construction or provision of RAN subsystems as well as complete systems. The main difference to L3 is that L4 stakeholders not only operate the network, but also configure and create it themselves. This means that they have fundamental knowledge about the functionality of the individual components. Thus, it can be assumed that malicious L4 stakeholders can cause the greatest possible damage. Another point that underlines the seriousness of this level is that L4 entities can manipulate the overall RAN at an early stage, for example before the RAN operator in L3 has the chance to do so. It may be the case that an L3 stakeholder also falls into the L4 category at the same time, e.g. if an MNO integrates its RAN system completely itself. Later, in connection with the interfaces explained in Section II, we will derive which measures are necessary to reduce or mitigate the attack surfaces.
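The access-level scheme in Table I below lends itself to machine-readable encoding, e.g. to drive an automated risk assessment. The following minimal Python sketch encodes the levels and risk labels from Table I; the class names, the `minimum_scrutiny` policy and its thresholds are purely illustrative assumptions, not part of any O-RAN specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessLevel:
    level: int          # L1..L4
    overall_risk: str   # qualitative risk label from Table I
    examples: tuple     # stakeholders assigned to this level in Table I

LEVELS = (
    AccessLevel(1, "low", ("External / Outsider", "Network consumer")),
    AccessLevel(2, "medium", ("Government services", "HW/SW suppliers & manufacturers")),
    AccessLevel(3, "high", ("RAN operator / MNO",)),
    AccessLevel(4, "severe", ("RAN integrator",)),
)

def minimum_scrutiny(level: int) -> str:
    """Toy policy: map an access level to the depth of vetting required.
    The mapping is an illustrative assumption, not a standardised rule."""
    return {1: "interface hardening", 2: "component vetting",
            3: "operational audits", 4: "full supply-chain assurance"}[level]

for lv in LEVELS:
    print(f"L{lv.level}: risk={lv.overall_risk}, requires {minimum_scrutiny(lv.level)}")
```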
TABLE I: Itemised categorisation of the key stakeholders

| Stakeholder | Access | Short Description | Capability | Overall Risk |
|---|---|---|---|---|
| External [18] / Outsider [7] | L1 | Does not match the other defined stakeholder classes. Is able to access the interfaces defined by 3GPP or the O-RAN Alliance. | Is capable of performing designated types of threats in several risk zones. | Low |
| Network consumer (similar to User [7] & Connected Devices [18]) | L1 | Participates in the network as a normal and legitimate service consumer by connecting entities or services that facilitate network functionality. | Is able to exploit specific types of threats in different risk zones, using legitimate credentials/secrets to exploit the respective network. | Low |
| Government Services [18] | L2 | Authority that has legal rights to intercept/tap network traffic. | Decoding and reading of data sets sent over the network. | Medium |
| Hard-/Software Suppliers [18] & Manufacturers [17] | L2 | Vendor of one or more specific hard- or software components utilised in the system, “… providing services or infrastructure to MNOs in order to build and/or operate their networks” [18]. | Can substitute benign hard-/software components with malicious ones. | Medium |
| RAN-Operator [7] & MNOs [18] | L3 | Refers to traditional Mobile (Virtual) Network operators, as well as critical infrastructure operators from non-telecommunication sectors. Has complete and comprehensive control over the respective RAN. | Possesses enhanced capabilities to tamper with hardware components as well as specific services. | High |
| RAN-Integrator | L4 | Is specialised in assembling RAN subsystems into a functioning entity and ensuring that the subsystems function smoothly together. | Controls and operates software deployed in the RAN as well as all hardware, and can thus manipulate all individual entities at each available stage. | Severe |

## IV Mitigations of Security Problems

In this section, we deal with the question of how to solve the problems identified above, what steps are necessary to do so, and how much effort is required. For this purpose, we identify security aspects that should be applied.

### IV-A Enforcement of clear safety & security concepts

One of the main components of fully securing a system is the use of standardised mechanisms that ensure both safety and security. It would be necessary, for example, to create a clear definition of a rights and role concept with regard to the communication of interfaces and services. The same applies to separation concepts, firewall-friendly designs, minimisation of the effects of denial of service, and the implementation of a zero trust model. To go one step further in defining possible measures, we have extracted the individual proposals from the ENISA report [19] on network function virtualisation. These measures can also be applied holistically to an Open-RAN system. In Table II we have listed the best practices, divided according to their level of applicability. Looking at the individual practices makes clear how extensive our current security and safety options are for facilitating a secure Open-RAN deployment.

### IV-B Mandatory Encryption

Parts of the O-RAN definition currently mandate no or only weak encryption, which weakens the overall security of a potential Open-RAN deployment.
For example, encryption at the transport layer is only optional, and legacy protocols that rely on weak encryption are not forbidden by default. Consequently, it is recommended to increase the security of O-RAN by mandating strong encryption and disallowing old protocols. It is noteworthy, however, that previous radio access networks have the same shortcomings as Open-RAN when it comes to encryption, and have suffered from security vulnerabilities as a result. The partial lack of strong cryptography is thus not a threat introduced by O-RAN itself, but an ever-present threat to telecommunication networks that can now be fixed in the O-RAN specification. Following the described security-by-design approach, we strongly advocate that strong cryptography be made mandatory. This would result in a major security improvement compared to the previous standards.

### IV-C Post-Quantum Security

Due to the dynamic development of quantum computers and their expected future ability to break currently used classical public-key schemes, the use of post-quantum schemes, i.e., quantum-resistant cryptographic schemes, should at least be recommended within O-RAN. Thus, while encryption at large should be mandatory, “shall support” for post-quantum cryptography would be sufficient for now. We want to stress that classically encrypted data is at risk even before powerful quantum computers exist, since the encrypted data can be stored now and decrypted once powerful quantum computers exist. Hence, integrators have to carefully analyze which data need only short-term protection and which data need to be protected for several decades; the latter require post-quantum protection. Since the NIST process for standardizing post-quantum cryptographic schemes is on the verge of announcing the first schemes to be standardized, standardized PQC schemes will exist once 6G is being deployed. We want to emphasize two points that should be paid attention to when developing post-quantum secure telecommunication protocols. First, O-RAN relies mainly on symmetric cryptography to ensure authentication and confidentiality of the data. So far, post-quantum security considerations for telecommunications have often been based on the assumption that quantum computers mainly pose a serious threat to asymmetric cryptography [20, 21], and thus that telecommunication protocols can ensure post-quantum security by doubling the key size. Arguably though, this claim is not backed by proofs but rather by a truism that guides post-quantum security considerations; so in addition to replacing the existing asymmetric cryptography, O-RAN's symmetric cryptosystems need to be re-evaluated with respect to quantum resilience (with proofs) as well, to back up the claim that doubling the key size is indeed sufficient. Second, the resource-constrained environment of telecommunication infrastructure (e.g., SIM cards have low resources available, and low network bandwidth requirements are a must) and the highly adversarial threat environment (e.g., SIM cards are highly susceptible to side channels, the need to defend against nation-state actors, and the fact that cloud environments introduce new threats) require post-quantum cryptography schemes to be carefully evaluated and tailored towards use in telecommunication protocols.
This process is time-intensive - given the slow nature of telecommunication standardization bodies, and even more so the slow nature of actually implementing a new telecommunication standard, it is urgent to address the topic (and integrate post-quantum security into the standard) now. Recent works have already started to explore these trade-offs for the Subscription Concealed Identifier (SUCI), which aims at concealing subscriber identities [22]. The post-quantum secure protocols developed for the SUCI, tailored towards use in telecommunication protocols, serve as a great starting point to explore the challenges and necessary changes required when bringing post-quantum security to next-generation telecommunication networks.

### IV-D Cloud Environments

Moving Open-RAN components to the cloud results in a new threat landscape, specifically when considering a malicious cloud provider. A recent risk analysis correctly states that a cloud provider that controls the O-Cloud has the same capabilities as the RAN operator. Currently, there are few mandatory security measures in the O-RAN requirements, and therefore two recommendations can clearly be made to mitigate the problems: 1) integrate security measures to defend against a malicious cloud provider, for example through Trusted Execution Environments, and 2) integrate mandatory access control and security requirements into the O-RAN definition. A malicious cloud provider would indeed undermine a RAN's security. In practice, however, operators expect to build and run their own data centers instead of relying on external cloud solutions. As a result, the O-Cloud operator can be assigned the same level of trust as the RAN operator themselves, which completely mitigates the malicious cloud provider scenario. We also recommend using only trusted data centers and cloud solutions for the O-Cloud: defending against malicious cloud providers through the use of confidential computing and trusted execution environments is non-trivial, as the security provided by confidential computing and trusted execution environments has been undermined by various attack vectors, stemming from the very powerful attacker model. If the O-Cloud is trusted and adheres to standard security best practices in its configuration and design, we expect the security risk induced through a cloud-based RAN to be minimal.

### IV-E Clarification & concrete definition

From the point of view of security research, the current state of the O-RAN specification still leaves a number of wide gaps in the specification of security aspects. One reason for this is the philosophy of the O-RAN Alliance to mainly provide a kind of guideline; this is why the term “shall support” is often found in the documents. The actual implementation of the necessary security concepts is, in their view, the responsibility of the integrator or hardware manufacturer. Of course, this is a clear thorn in the side of an authority like the German Federal Office for Information Security (BSI). The German Technical Inspection Agency (TÜV) would also like more concrete definitions, e.g., in order to be able to precisely evaluate and approve an Open-RAN system. However, this will probably never be possible with the approach pursued by the O-RAN Alliance, since it only provides a possible architecture and specifications for it, but does not develop a standard in addition to 3GPP.
The O-RAN Alliance merely defines a technical concept that is intended to improve interoperability in the radio access networks of mobile networks. This fact, however, offers us a great opportunity as security researchers in this field. We are now able to put an additional security view on top of the concept and thus mitigate security-critical concerns.

### IV-F Privacy

Technological innovations and emerging trends in the context of 6G, such as the tighter integration/convergence of sensing and communication, may pose unique challenges to both communication security and privacy. For example, in dual radar and communication systems, the inclusion of data in the probing signal used to illuminate targets makes it prone to eavesdropping by potentially malicious targets. Even if the data itself is protected with higher-layer encryption, the existence of a communication link can still be detected by a malicious agent, thus making it prone to cyberattacks [23]. Similarly, the introduction of sensing capabilities and explicit localization of users/devices to improve communication network performance may pose significant privacy challenges. In the face of these challenges, it is important to assess the potential of certain approaches to provide secure, privacy-preserving solutions at the radio access level. For example, from a privacy perspective, it is essential to collect only user-related data that is absolutely necessary for the operation of the network and to move away from the explicit localization paradigm as much as possible. In addition, appropriate privacy-enhancing technologies should be integrated, e.g. differential privacy. An example is provided by the use of channel charting [24] to enhance network functionalities such as radio resource management, beam management (mmWave and sub-THz), cell association and handover. Channel charting uses unsupervised/semi-supervised learning to embed high-dimensional information about the radio environment into a low-dimensional chart and relies on a pseudo-location (i.e. a location in the low-dimensional chart) of the users. The use of a pseudo-location on a channel chart can be seen as a privacy protection feature, allowing localization-related services to be delivered without requiring the actual user location to be estimated. While promising in general, the concept needs to be further formalized and thoroughly investigated from the perspective of privacy before inclusion as a design metric.

## V Machine Learning

O-RAN seeks to utilize novel Machine Learning (ML) techniques such as deep learning to automate operational network functions and reduce operational cost once applied at both component and network levels [25]. However, the Open-RAN architecture entails security challenges because of its inherently open and modular nature. Devices within Open-RAN are able to run software that does not trust the hardware it is running on. Due to its openness, Open-RAN is susceptible to intrusion. Cyber-attacks pose a threat to security goals and could result in denial-of-service (DoS) within the Open-RAN network. These attacks present a vulnerability to the Non-RT RIC and Near-RT RIC controller operations within Open-RAN [26].

### V-A Anomaly Detection

Ideally, intrusions are detected with high performance, high speed, and a low false-positive alarm rate. It is therefore clear that traditional human means will not be enough to detect and combat cyber-attacks in the required manner.
Consequently, robust ML techniques will also play a big role in tackling anomaly-based intrusion detection. Anomaly-based IDS can be divided into three classes: statistical anomaly IDS, knowledge-based IDS, and ML IDS [27]. The focus here is on novel ML-based IDS. The ML-based anomaly detection methods in IDS [28] can be classified into supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, and graph neural networks [29]. The limitations of traditional (shallow) ML-based IDS, such as the reliance on manual feature engineering to extract useful information from network traffic and difficulties in dealing with unlabeled, high-dimensional data, have paved the way for Deep Learning (DL)-based IDS [30], which do not require manual feature engineering and can automatically learn complex features from raw data due to their deeper structure [31]. Initial RAN-specific anomaly detection approaches have been presented by [32], [33], [34], [35]. However, the O-RAN architecture with its different interfaces poses new challenges for an anomaly detector, as next-generation RAN data can be roughly divided into performance management, configuration management, and fault management data. It has been shown that considering only performance management data in anomaly detection can lead to sub-optimal results [34], [35]. Therefore, it is important to define the requirements under which the anomaly detector must operate, as not all approaches presented so far take into account the different data streams in the next-generation RAN. In terms of where the anomaly detector is deployed, the training of ML models can, for example, be performed in the Non-RT RIC. The learned model is then provided to the Near-RT RIC, which applies it to real-time data and makes real-time decisions in an online manner [36]; a minimal code sketch of this split is given at the end of this section.

### V-B Importance of Robustness

Moreover, every ML-based learning method mentioned above is prone to an attack technique referred to as adversarial ML [37]. Hence, incorporating ML solutions into the RAN poses new cyber-security threats. Consequently, having a thorough understanding of the attack surface and ensuring the robustness of the O-RAN to adversarial machine learning threats are mandatory for securing the new Open-RAN architecture [38].

### V-C Explainable AI

Notably, Explainable AI (XAI) has great potential for securing ML systems, as explanations are key in identifying and defending against different types of attacks. To elaborate, if explanations become available for adversarial attacks, they become easier to defend against [39]. Additionally, explanations can support effective root cause analysis and localisation [40].
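As indicated in Section V-A, anomaly detectors are typically trained offline and then applied online. The following minimal sketch illustrates this split with an unsupervised detector trained on historical key performance indicators (standing in for the Non-RT RIC) and then applied to streaming samples (standing in for the Near-RT RIC). The KPI names, values and the choice of an Isolation Forest are illustrative assumptions, not prescribed by the O-RAN specifications.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# --- "Non-RT RIC" side: offline training on historical KPI data -------
# Hypothetical per-cell KPIs: [PRB utilisation %, avg throughput Mbit/s,
# handover failure rate %]. A real deployment would use O1/E2 data.
normal_kpis = np.column_stack([
    rng.normal(55, 10, 5000),    # PRB utilisation
    rng.normal(120, 20, 5000),   # throughput
    rng.normal(1.0, 0.3, 5000),  # HO failure rate
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_kpis)

# --- "Near-RT RIC" side: online scoring of incoming samples -----------
benign = np.array([[57.0, 118.0, 1.1]])
suspicious = np.array([[98.0, 5.0, 14.0]])  # saturated cell, failing HOs
for sample in (benign, suspicious):
    label = detector.predict(sample)[0]     # +1 = normal, -1 = anomaly
    print("anomaly" if label == -1 else "normal", detector.score_samples(sample))
```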
TABLE II: List of best practices presented by ENISA [19]

| Level | Practice | Level | Practice |
|---|---|---|---|
| Organisational | Trust model | Technical | Tracking version changes |
| | SLAs establishment | | Deployment security |
| Policy | Zero Trust | | Software detection or relocation |
| | Security Assessment of new or changes to existing Services | | (Post-Quantum) Cryptography |
| | Vulnerability handling & patch management | | Hypervisor protection |
| | Security testing and assurance | | Security Management and orchestration |
| | Incident management | | Remote attestation |
| | Secure Update Management | | Software compliance and integrity preservation |
| | Restriction on installing applications | | Segmentation/isolation between network functions |
| | Defense in depth | | Secure boot integrity |
| | Strong password policy | | Data protection and privacy |
| | Secure supply chain | | Encrypting Volume/swap Areas |
| | Resources inventory management system and database | | Trusted computing technologies |
| | Apply hardening policies | | Hardware security |
| | Multi-vendors segregation and trust | | Centralised log auditing |
| | Security by design | | Use and ownership of “root” administration credentials |
| | Life cycle management | | Local or removable Blade Storage - SAN protection |
| | Software Bill Of Materials (SBM) | | Network security |
| Technical | Trusted time source | | SDN security management |
| | Secure 3rd party hosting environments | | MANO access control and management |
| | Redundancy and backup | | VIM/CISM connectivity to Hypervisor/CIS |
| | Specific container security controls | | Recovery and reinstallation |
| | OSS/BSS protection | | Deploying VMs/Containers of differing trust levels |
| | LI capabilities | | Orchestration platform security management |
| | User plane security | | MEC security |

## VI Conclusion

In our research so far, we have not found that Open-RAN concepts such as O-RAN introduce major problematic security issues. Of course, there is an increased attack surface due to the larger ecosystem. However, such complex systems can be secured through various practices, such as those in Table II. We can only agree with the opinion of Mimran et al. [41] that the security risks that arise are mainly due to the 5G requirements and less due to the specific decisions in the Open-RAN architecture. In general, we can say that with O-RAN the attack surface is now much clearer than for previous proprietary implementations, which were opaque and in general rather a big question mark. Our task now is to define the overall security methodologies for the individual critical points and to apply them. This will then provide integrators and network operators with options for operating their Open-RAN deployment securely.

## Acknowledgment

The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany in the programme of “Souverän. Digital. Vernetzt.” Joint project 6G-RIC, project identification number: 16KISK020K and 16KISK0[21-35]. The authors of this paper also acknowledge the educational conversations with Jean-Pierre Seifert around the topics discussed in this paper, as well as the German Federal Office for Information Security (BSI).

## References

* [1] Sameer Kumar Singh, Rohit Singh and Brijesh Kumbhani “The Evolution of Radio Access Network Towards Open-RAN: Challenges and Opportunities” In _2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW)_, 2020, pp. 1–6 DOI: 10.1109/WCNCW48565.2020.9124820
* [2] Sławomir Kukliński, Lechosław Tomaszewski and Robert Kołakowski “On O-RAN, MEC, SON and Network Slicing integration” In _2020 IEEE Globecom Workshops (GC Wkshps)_, 2020, pp. 1–6 DOI: 10.1109/GCWkshps50303.2020.9367527
* [3] Salah Eddine Elayoubi, Sana Ben Jemaa, Zwi Altman and Ana Galindo-Serrano “5G RAN Slicing for Verticals: Enablers and Challenges” In _Comm. Mag._ 57.1 IEEE Press, 2019, pp. 28–34 DOI: 10.1109/MCOM.2018.1701319
* [4] John S. Vardakas et al. “Towards Machine-Learning-Based 5G and Beyond Intelligent Networks: The MARSAL Project Vision” In _2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom)_, 2021, pp. 488–493 DOI: 10.1109/MeditCom49071.2021.9647671
* [5] “O-RAN Architecture Description 5.0, Technical Specification” In _WG1: Use Cases and Overall Architecture Workgroup_, 2021 URL: https://www.o-ran.org/specifications
* [6] Terry Benzel et al. “Design Principles for Security”, 2005
* [7] BSI “Open-RAN Risikoanalyse”, 2021 Bundesamt für Sicherheit in der Informationstechnik URL: https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/Studien/5G/5GRAN-Risikoanalyse.html
* [8] “O-RAN Architecture Description 6.0, Technical Specification” In _WG1: Use Cases and Overall Architecture Workgroup_, 2022 URL: https://www.o-ran.org/specifications
* [9] “O-RAN Operations and Maintenance Interface Specification 6.0, Technical Specification” In _WG10: OAM for O-RAN_, 2022 URL: https://www.o-ran.org/specifications
* [10] “ITU-T Recommendation M.3010: Principles for a telecommunications management network” URL: https://www.itu.int/rec/T-REC-M.3010
* [11] Heinz-Gerd Hegering, Sebastian Abeck and Bernhard Neumair “Integrated Management of Networked Systems: Concepts, Architectures, and Their Operational Application” San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1999
* [12] “Cloud Architecture and Deployment Scenarios for O-RAN Virtualized RAN v02.02” In _WG6: Cloudification and Orchestration Workgroup_, 2021 URL: https://www.o-ran.org/specifications
* [13] “O-RAN Non-RT RIC Architecture 1.0” In _WG2: Non-real-time RAN Intelligent Controller and A1 Interface Workgroup_, 2021 URL: https://www.o-ran.org/specifications
* [14] “Near-Real-time RAN Intelligent Controller Architecture and E2 General Aspects and Principles v02.01” In _WG3: Near-real-time RIC and E2 Interface Workgroup_, 2022 URL: https://www.o-ran.org/specifications
* [15] “Near-Real-time RAN Intelligent Management Plane Specification 8.0” In _WG4: Open Fronthaul Interfaces Workgroup_, 2022 URL: https://www.o-ran.org/specifications
* [16] “Fronthaul Control, User and Synchronization Plane Specification 8.0” In _WG4: Open Fronthaul Interfaces Workgroup_, 2022 URL: https://www.o-ran.org/specifications
* [17] Dudu Mimran et al. “Evaluating the Security of Open Radio Access Networks” arXiv, 2022 DOI: 10.48550/ARXIV.2201.06080
* [18] NIS “Report on the cybersecurity of Open RAN”, 2022 NIS Cooperation Group URL: https://digital-strategy.ec.europa.eu/en/library/cybersecurity-open-radio-access-networks
* [19] ENISA “NFV Security in 5G - Challenges and Best Practices”, 2022 European Union Agency for Cybersecurity URL: https://www.enisa.europa.eu/publications/nfv-security-in-5g-challenges-and-best-practices
* [20] Chris J Mitchell “The impact of quantum computing on real-world security: A 5G case study” In _Computers & Security_ 93 Elsevier, 2020, pp. 101825
* [21] Jing Yang and Thomas Johansson “An overview of cryptographic primitives for possible use in 5G and beyond” In _Science China Information Sciences_ 63.12 Springer, 2020, pp. 1–22
* [22] Vincent Ulitzsch, Shinjo Park, Soundes Marzougui and Jean-Pierre Seifert “A Post-Quantum Secure Subscription Concealed Identifier for 6G” In _To appear in 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2022)_, 2022
* [23] Zhongxiang Wei et al. “Towards Multi-Functional 6G Wireless Networks: Integrating Sensing, Communication and Security” arXiv, 2021 DOI: 10.48550/ARXIV.2107.07735
* [24] Christoph Studer et al. “Channel Charting: Locating Users Within the Radio Environment Using Channel State Information” In _IEEE Access_ 6, 2018, pp. 47682–47698 DOI: 10.1109/ACCESS.2018.2866979
* [25] Open RAN Alliance “O-RAN: towards an open and smart RAN” In _White paper_, 2018, pp. 1–19
* [26] Paul H Masur and Jeffrey H Reed “Artificial Intelligence in Open Radio Access Network” In _arXiv preprint arXiv:2104.09445_, 2021
* [27] Geeta Kocher and Gulshan Kumar “Machine learning and deep learning methods for intrusion detection systems: recent developments and challenges” In _Soft Computing_ 25.15 Springer, 2021, pp. 9731–9763
* [28] Song Wang et al. “Machine Learning in Network Anomaly Detection: A Survey” In _IEEE Access_ 9 IEEE, 2021, pp. 152379–152396
* [29] Xiaoxiao Ma et al. “A comprehensive survey on graph anomaly detection with deep learning” In _IEEE Transactions on Knowledge and Data Engineering_ IEEE, 2021
* [30] Hongyu Liu and Bo Lang “Machine learning and deep learning methods for intrusion detection systems: A survey” In _Applied Sciences_ 9.20 Multidisciplinary Digital Publishing Institute, 2019, pp. 4396
* [31] Zeeshan Ahmad et al. “Network intrusion detection system: A systematic study of machine learning and deep learning approaches” In _Transactions on Emerging Telecommunications Technologies_ 32.1 Wiley Online Library, 2021, pp. e4150
* [32] Dovile Momkute, Karolis Žvinys and Vaidotas Barzdėnas “Adapted Anomaly Detection for RAN Performance” In _2018 IEEE 6th Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE)_, 2018, pp. 1–4 IEEE
* [33] Yannan Yuan et al. “Anomaly Detection and Root Cause Analysis Enabled by Artificial Intelligence” In _2020 IEEE Globecom Workshops (GC Wkshps)_, 2020, pp. 1–6 IEEE
* [34] Tobias Sundqvist, Monowar Bhuyan and Erik Elmroth “Uncovering latency anomalies in 5G RAN - A combination learner approach” In _2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS)_, 2022, pp. 621–629 IEEE
* [35] Faris B Mismar and Jakob Hoydis “Unsupervised learning in next-generation networks: Real-time performance self-diagnosis” In _IEEE Communications Letters_ 25.10 IEEE, 2021, pp. 3330–3334
* [36] Samad Ali et al. “6G white paper on machine learning in wireless communication networks” In _arXiv preprint arXiv:2004.13875_, 2020
* [37] Jinxin Liu, Michele Nogueira, Johan Fernandes and Burak Kantarci “Adversarial Machine Learning: A Multilayer Review of the State-of-the-Art and Challenges for Wireless and Mobile Systems” In _IEEE Communications Surveys Tutorials_ 24.1, 2022, pp. 123–159 DOI: 10.1109/COMST.2021.3136132
* [38] Ron Bitton et al. “Adversarial Machine Learning Threat Analysis in Open Radio Access Networks” In _arXiv preprint arXiv:2201.06093_, 2022
* [39] Atul Rawal et al. “Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges and Perspectives” In _IEEE Transactions on Artificial Intelligence_ 1.01 IEEE Computer Society, 2021, pp. 1–1
* [40] Ashima Chawla et al. “Interpretable Unsupervised Anomaly Detection For RAN Cell Trace Analysis” In _2020 16th International Conference on Network and Service Management (CNSM)_, 2020, pp. 1–5 IEEE
* [41] Dudu Mimran et al. “Evaluating the Security of Open Radio Access Networks” In _CoRR_ abs/2201.06080, 2022 arXiv: https://arxiv.org/abs/2201.06080

TABLE III: List of abbreviations

| Term | Description |
|---|---|
| 3GPP | Third Generation Partnership Project |
| BSS | Business Support System |
| CISM | Container Infrastructure Service Management |
| CU | Central Unit |
| DU | Distributed Unit |
| DL | Deep Learning |
| DoS | Denial-of-Service |
| FCAPS | Fault, Configuration, Accounting, Performance and Security |
| IDS | Intrusion Detection System |
| ISO | International Organization for Standardization |
| LI | Lawful Interception |
| MANO | Management and Orchestration |
| MEC | Multi-access Edge Computing |
| MIMO | Multiple Input Multiple Output |
| MNO | Mobile Network Operator |
| NG-RAN | Next Generation Radio Access Network |
| OSS | Operations Support System |
| PQS | Post-Quantum Security |
| RAN | Radio Access Network |
| RIC | RAN Intelligent Controller |
| RU | Radio Unit |
| SAN | Storage Area Network |
| SDN | Software Defined Network |
| SMO | Service Management and Orchestration |
| SLA | Service-level Agreement |
| SoC | System-on-Chip |
| UE | User Equipment |
| VIM | Virtual Infrastructure Manager |
| VM | Virtual Machine |
| vRAN | Virtualized Radio Access Network |
| XAI | Explainable AI |
# Evaluating Counterfactual Explanations Using Pearl's Counterfactual Method

Bevan I. Smith School of Mechanical, Industrial and Aeronautical Engineering, University of the Witwatersrand, Johannesburg, South Africa <EMAIL_ADDRESS>

###### Abstract

Counterfactual explanations (CEs) are methods for generating an alternative scenario that produces a different, desirable outcome. For example, if a student is predicted to fail a course, then counterfactual explanations can provide the student with alternative feature changes under which they would be predicted to pass. The applications are many. However, CEs are currently generated from machine learning models that do not necessarily take into account the true causal structure in the data. By doing this, bias can be introduced into the CE quantities. I propose in this study to test the CEs using Judea Pearl's method of computing counterfactuals, which has thus far, surprisingly, not been applied in the counterfactual explanation (CE) literature. I furthermore evaluate these CEs on three different causal structures to show how the true underlying causal structure affects the CEs that are generated. This study presents a method of evaluating CEs using Pearl's method and shows (albeit with a limited sample size) that thirty percent of the CEs conflicted with those computed by Pearl's method. This shows that we cannot simply trust CEs, and that it is vital to know the true causal structure before we blindly compute counterfactuals using the original machine learning model.

Keywords: Counterfactual explanations $\cdot$ Structural causal model (SCM) $\cdot$ Counterfactuals

## 1 Background

This study presents a method for validating counterfactual explanations using Pearl's counterfactual method [1, 2]. Counterfactual explanations (CEs) form part of the rapidly expanding field of explainable machine learning, which aims to explain why machine learning models make their predictions [3, 4]. If the predictions made by the model were undesirable, knowing why the predictions were made would allow us to generate counterfactual explanations that give us alternative, desirable outcomes. For example, if a student was predicted to fail, we could generate CEs that would advise the student how to change certain features to increase her probability of passing. A counterfactual explanation (CE) “describes the smallest change to the feature values that changes the prediction to a predefined output” [3]. Clearly, CEs promise much and the applications are vast, such as improving pass rates, advising patients what to change to improve health, advising clients what to do so they can obtain a bank loan, and so forth. The main problem, however, is that CEs, and the algorithms that generate them, are yet to be properly validated [5]. Why would they need to be validated or tested? This is because CEs, generated using various optimization algorithms such as DiCE and WACH [6, 4], are computed from the model trained on the original data [3]. This is problematic because machine learning (ML) models do not care about the causal relationships and causal structure between the features, only correlation [1]. The ML model is designed to make good predictions without concern for causality. Therefore, if the algorithms that generate CEs are based on these ML models, would the resulting CEs be the same as those generated via a ground-truth structural causal model?
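Before turning to concrete CE methods, a small simulation illustrates why this question matters; it previews the point made next about regression bias. In the sketch below (variable names and coefficients are illustrative assumptions), X has no causal effect on y, yet a regression that conditions on the collider Z reports a strong spurious coefficient on X.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100_000

# Ground truth: X and y are independent; Z is a collider (X -> Z <- y).
X = rng.normal(size=n)
y = rng.normal(size=n)              # no causal effect of X on y
Z = X + y + 0.1 * rng.normal(size=n)

# Regressing y on X alone recovers the true (null) effect.
print(LinearRegression().fit(X.reshape(-1, 1), y).coef_)  # ~[0.0]

# Conditioning on the collider Z induces a spurious coefficient on X.
XZ = np.column_stack([X, Z])
print(LinearRegression().fit(XZ, y).coef_)                # ~[-1.0, 1.0]
```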
We already know that when we fit simple linear regression models without taking into account the causal structure, there are biases in the coefficients [7]. Examples include leaving out a confounding feature in a regression model and including a collider in the model. If failing to take causal relationships into account in a regression model causes problems, how much more so when we generate counterfactual explanations using the existing trained ML model. The problem that this study aims to address is the following: counterfactual explanations are generated using machine learning models that do not necessarily take into account the true underlying causal structure in the data, and this can lead to erroneous estimations of the counterfactuals. To address this problem, I propose using the counterfactual methods found in the work of Judea Pearl [1, 2]. Details about this method are found in Section 5. However, the essential idea is that to compute counterfactuals, we first require a structural causal model (SCM) and associated graph that describe the true data generating process. This allows us to compute counterfactuals using the true causal relationships in the data, whether between input features or between input and output features. Whereas computing CEs from machine learning models would not necessarily incorporate causal relationships, Pearl's method does. I propose using this method to evaluate and test the current CE methods. The main contributions in this paper are:

* • I present a method for evaluating CEs using the counterfactual method developed by Pearl. This has surprisingly not been applied in the CE literature thus far. It would allow us to evaluate our CEs to know if we can trust them.
* • Using this method, I show how the CEs generated using current methods conflict with those computed via Pearl's method, and how different causal structures affect the results. This should alert us to the fact that it is vital to understand the true underlying causal structure before generating CEs.

## 2 Counterfactual Explanations

Recently, counterfactual explanations (CEs) have been gaining much traction and interest in the machine learning literature [4, 3, 8, 5]. The aim of CEs is to manipulate the inputs of a model to change the output to a desired one. CEs are generated by solving an optimization problem that perturbs the input feature space as little as possible while still changing the output to a different, desirable one [8, 3].

### 2.1 Method by Wachter et al.

An example of a CE method is that by Wachter et al. [4]. The loss function is given below:

$L(x,x^{\prime},y^{\prime},\lambda)=\lambda\cdot(f(x^{\prime})-y^{\prime})^{2}+d(x,x^{\prime}),$ (1)

where $x$ and $x^{\prime}$ are the original and counterfactual input space respectively, $y^{\prime}$ is the counterfactual output, and $\lambda$ is a tuning parameter. Importantly, $f$ refers to the original machine learning model trained on the data, which does not necessarily take into account causal relationships in the data. $L$ balances minimizing the distance between the original input $x$ and the counterfactual $x^{\prime}$ against minimizing the distance between the counterfactual prediction $f(x^{\prime})$ and the desired output $y^{\prime}$.

### 2.2 Method by Mothilal et al.

Another CE method was developed by Mothilal et al. [6] and is the method used in this study. It is known as DiCE: Diverse Counterfactual Explanations.
Their method of optimization is based on the following loss function:

$C(x)=\underset{c_{1},...,c_{k}}{\arg\min}\frac{1}{k}\sum_{i=1}^{k}yloss(f(c_{i}),y)+\frac{\lambda_{1}}{k}\sum_{i=1}^{k}dist(c_{i},x)-\lambda_{2}\,dpp\_diversity(c_{1},...,c_{k}),$ (2)

where $c_{i}$ is a counterfactual explanation (CE), $k$ is the total number of CEs, $f(.)$ refers to the ML model trained on the data, $yloss(.)$ is a metric minimizing the distance between counterfactual predictions and the desired outcome, $dist(.)$ is the distance between counterfactual input $c_{i}$ and actual input $x$, $dpp\_diversity(.)$ functions to maximize diversity, and finally the $\lambda$ values balance the three parts of the loss function. An important point for this study is that the CE methods shown above use the ML model trained on the data.

### 2.3 Characteristics of CEs

CEs should satisfy several important characteristics. I highlight the following:

* • Feasibility/plausibility: This refers to the CEs being realistic, i.e. not being smaller or larger than values observed in the original data [9]. It means the CEs must come from a possible world [4].
* • Actionable: Features must be able to be modified, to be changed. Non-actionable features include age, gender, race, etc. [9].
* • Causality: I discuss this in more detail in the Discussion section (Section 8). This characteristic requires that causal relationships be maintained in the CEs. However, I push back on this requirement.

## 3 Structural Causal Models

Structural causal models (SCMs) describe the true data generation process and allow for counterfactual analysis [5, 1]. SCMs are associated with graphical causal models called directed acyclic graphs (DAGs), seen on the right side of Figure 1. Consider a set of features $X_{1},...,X_{n}$, known as endogenous features, where each feature $X_{i}$ is generated via a deterministic function $f_{i}$ which is a function of its causes (or parents, PA). $PA_{i}$ refers to the parents (known causes) of $X_{i}$, and $U_{i}$ refers to some stochastic unexplained cause of $X_{i}$ [10], also known as an exogenous feature (outside the model). Each feature corresponds to a vertex in the DAG.

$X_{i}:=f_{i}(PA_{i},U_{i})\quad(i=1,...,n)$ (3)

Figure 1 presents some intuition behind the difference between generating counterfactuals based on a machine learning model (left image) and a true structural causal model DAG (right image) [11]. SCMs would generate counterfactuals based on the true causal structure, but CEs are generated based on the original machine learning model $f$, which does not necessarily take into account the causal structure. The question is: would the CE algorithms generate the same counterfactuals as those via SCMs?

Figure 1: ML model (left) vs SCM (right) [11].

## 4 Causal Structures

Before presenting Pearl's Counterfactual Method (PCM) in Section 5, I present three important causal structures in causal inference: chains (or mediators), forks (common causes) and colliders. This study will use these three structures when evaluating the CEs using PCM. According to Pearl, these three types are the building blocks of causal structures and “enable us to test a causal model, discover new models, evaluate effects of interventions, and much more” [2]. Each is presented using a directed acyclic graph and an SCM.

### 4.1 Chain (mediator)

The chain or mediator DAG is shown in Figure 2.
An example of a chain would be a fire (X), smoke (Z) and alarm (y) system. The fire does not directly cause the alarm to go off, but is required to first produce smoke that then sets off the alarm. There is no direct causal path between fire and alarm in this case. Of course, there can be a causal path between X and y in other cases; we are restricting ourselves here to the basic chain. What this model tells us is that if we control for Z, then we block the causal flow between X and y and do not allow for measuring the true effect of X on y. That is, X and y are conditionally independent given Z. What this implies practically is that if we include X, Z and y in our model, we are effectively controlling for Z and blocking the path between X and y. The SCM for Figure 2 is seen in Equations 4 and 5. Here we introduce the exogenous variables, U, that are vital to Pearl's method seen later. Z is a function of X and $U_{z}$, where X is an endogenous feature (e.g. fire) and $U_{z}$ refers to all external features causing Z that we cannot account for one by one. The same holds for all U features seen later.

Figure 2: DAG showing the chain path between X, Z and y. Causal flow is from X to Z to y.

$\displaystyle Z=f(X,U_{z})$ (4)
$\displaystyle y=f(Z,U_{y})$ (5)

### 4.2 Fork (common cause)

Figure 3 shows a DAG representing a fork or common cause causal structure. Practically, it represents spurious correlation, where X and y are conditionally independent given Z. That is, in this case, to obtain the true causal effect of X on y requires conditioning (controlling) on Z. The SCM for Figure 3 is shown in Equations 6 to 8. An example of this particular causal structure could be where X refers to shark attacks and y refers to ice-cream sales. We may obtain data that shows that these two events are correlated: as shark attacks increase, ice-cream sales also increase. However, what explains this spurious correlation is Z, which refers to summertime. Both shark attacks and ice-cream sales depend on the season.

Figure 3: DAG showing common parent confounding, where Z is the parent of both X and y, but with no causal flow from X to y in this case. Also known as spurious correlation.

$\displaystyle X=f(Z,U_{x})$ (6)
$\displaystyle Z=f(U_{z})$ (7)
$\displaystyle y=f(Z,U_{y})$ (8)

### 4.3 Collider

The final causal structure used in this study is the collider, presented by the DAG in Figure 4, with no causal path between X and y. An example of a collider is where X is a sprinkler, y is rain, and Z is wet grass. Each feature can be on or off, so to speak. Z is therefore a function of the sprinkler and the rain. For example, if Z = dry (off), then both sprinkler (X) and rain (y) must be off. However, if Z = wet, either X or y is on, or both are on. The important point to note here is that X and y are marginally independent; they are independent, conditioned on nothing. Rain being on has nothing to do with the sprinkler being on, for example. However, when we condition on Z, we make sprinkler and rain conditionally dependent. This means that by conditioning on the grass, we create the illusion that there is a relationship between otherwise independent features. For example, if we know that the grass is wet (Z = on) and that it is raining (y = on), then the rain already accounts for the wet grass, and the sprinkler immediately becomes less likely to be on; this is the well-known explaining-away effect. Two variables (X and y) that are independent become conditionally dependent, given Z.
This is the opposite of chains and forks, where conditioning on Z results in estimating the actual causal relationship between X and y. The SCM for the collider is seen in Equations 9 and 10.

Figure 4: DAG showing the collider path between X, Z and y, but with no causal path between X and y.

$\displaystyle Z=f(X,y,U_{z})$ (9)
$\displaystyle X=f(U_{x})$ $\displaystyle y=f(U_{y})$ (10)

### 4.4 Difference between SCM and ML

What is vital to note from these three basic causal structures is that whereas the SCMs would take into account the causal structure (i.e. the data generating process), the ML model samples from the joint distribution of X, y and Z without any thought for the causal structure. Clearly these three structures are distinctly different, yet an ML model would treat them all in the same way. For example, the DAG and linear model for an ML model might look like those shown in Figure 5 and Equation 11. The question is: if we generate CEs from the ML model, would they be the same as those generated by the true SCM using Pearl's method, shown next?

Figure 5: DAG showing how a supervised machine learning model would train on the data.

$\displaystyle y=f(X,Z,U_{y})$ (11)

## 5 Pearl's Counterfactual Method (PCM)

We now come to Pearl's Counterfactual Method (PCM). This section presents how we go about generating counterfactuals using SCMs. The method is based on Judea Pearl's work that can be found in Chapter 8 of The Book of Why [2] and Chapter 4 of Causal Inference in Statistics: A Primer [1]. The method involves three steps - abduction, action and prediction - and works as follows.

### 5.1 Abduction: to compute exogenous variables, U

Consider data with a known true causal structure. This structure has a DAG and SCMs. For example, consider the DAG and SCM for Figure 3, where we have spurious correlation from a common parent. The SCMs are given as Z = f(Uz) and y = f(Z,Uy). The initial aim in PCM is to compute the values of the exogenous variables Uz and Uy. These refer to all unmeasured variables, for an individual unit, that affect Z and y respectively. These U variables, also referred to as noise variables, describe the world of an individual person or observation. It is these that we need to compute first, before perturbing the input features to compute counterfactuals, because they describe the "situation" of an individual unit that must remain the same in both the factual and counterfactual worlds. Recall that the idea behind the counterfactual is to keep all other things constant and to change only one feature; these other things are the exogenous variables. To compute Uz and Uy in our example, we select a unit (e.g. a patient, a student, etc.) out of the data that we are interested in, and input its observed factual features into our SCMs. For example, say we measured X, Z and y from observation. We input the individual unit's Z into the first SCM, Z = f(Uz), and compute Uz. To compute Uy, we input the measured y and Z from the individual unit into the second SCM, y = f(Z,Uy). We now have the exogenous variables for the selected unit. This step is called abduction and is computed from actual measured data.

### 5.2 Action: to intervene on counterfactual variables

In the second step, called action, we input the new counterfactual features, for example, Z=z. This means that Z is no longer a function of Uz; we have therefore deleted the arrow from Uz to Z and removed the relationship that Z had with its parent, Uz. See Figure 6.
When we apply a counterfactual, we remove the relationship the feature has with its parents, regardless of whether they are endogenous or exogenous features. This is also called applying Pearl's do-operator to Z. This is a vital step in computing counterfactuals. We now have a new, modified (or surgically altered) model with which we compute counterfactual outcomes.

### 5.3 Prediction

In the final step, prediction, we use the new Z = z feature value as well as the U variables computed in step 1, and compute the new y = f(Z=z,Uy). Detailed examples of how to perform this on more complicated data are presented in the Results.

Figure 6: DAG showing common parent confounding, but now with the edge between $U_{z}$ and z deleted, showing a counterfactual quantity independent of its parents.

## 6 Methodology

The CE method used in this study to generate CEs was DiCE ML [6]. This was used because it is readily available as a Python package that is easy to implement. The following presents the workflow for evaluating CEs using SCMs. Also see Figure 7.

1. Select a causal structure based on the three types discussed above and generate a dataset based on the SCM.
2. Train a supervised learning model on the data. This model is used in the DiCE CE generation process.
3. Identify a case/unit in the dataset.
4. Generate CEs using DiCE, which utilizes the model trained in the earlier step. The CEs cause the output to switch to the opposite class.
5. Now, separately, carry out Abduction in Pearl's method from Section 5.1. This computes all the exogenous features.
6. Input the DiCE CEs into the SCMs according to Pearl's Action method (Section 5.2).
7. Estimate the output class using Pearl's Prediction method (Section 5.3).

Figure 7: Methodology flow: generating DiCE CEs, feeding the CEs into PCM and comparing the output classes.

### 6.1 Experiments

This section details the experiments based on the three causal structures above, but the datasets were made more complex with the aim of more closely modeling real life. The dataset shown in Table 1 comprised seven features and was based very loosely on features describing a university student taking a course.

Table 1: Description of features used in this study.

| Feature | Description | Statistics |
|---|---|---|
| x1 | Grades | $\mu$=50, $\sigma$=5 |
| x2 | Age | $\mu$=20, $\sigma$=1 |
| x3 | Grades | $\mu$=45, $\sigma$=6 |
| x4 | Gender | p = 0.6 |
| x5 | Bursary | p = 0.3 |
| x6 | Grades | $\mu$=70, $\sigma$=5 |
| x7 | Grades | $\mu$=50, $\sigma$=5 |

### 6.2 Experiment 1: Chain causal structure

The DAG and SCMs for experiment 1 are based on the chain causal structure (Section 4.1) and are shown in Figure 8 and Equations 12 to 19. Note that in the SCM, y is not a function of $x_{7}$ and would not make use of it in predictions; however, an ML model will include $x_{7}$ in the training and prediction. In the DAG, for brevity and space-saving, note that X refers to all the features not represented by $x_{7}$ and $x_{3}$, namely $x_{1}$, $x_{2}$, $x_{4}$, $x_{5}$, $x_{6}$. They are features feeding directly into y and are considered parents of y. Also, U refers to all the exogenous variables feeding into X, namely $U_{1}$, $U_{2}$, $U_{4}$, $U_{5}$, $U_{6}$. In the SCMs, the $\beta$ coefficients from 0 to 8 were arbitrarily selected as follows: 0.4, 0.6, 0.4, 0.6, 0.7, 0.4, 0.4, 0.3, 0.7.
Figure 8: Causal structure for experiment 1, based on the chain (mediator) causal structure.

$\displaystyle x_{1}=u_{1}$ (12)
$\displaystyle x_{2}=u_{2}$ (13)
$\displaystyle x_{3}=\beta_{1}x_{7}+u_{3}$ (14)
$\displaystyle x_{4}=u_{4}$ (15)
$\displaystyle x_{5}=u_{5}$ (16)
$\displaystyle x_{6}=u_{6}$ (17)
$\displaystyle x_{7}=u_{7}$ (18)
$\displaystyle y=\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}+\beta_{3}x_{3}+\beta_{4}x_{4}+\beta_{5}x_{5}+\beta_{6}x_{6}+u_{y}$ (19)

### 6.3 Experiment 2: Fork (common cause) causal structure

The second experiment was based on the fork structure from Section 4.2. The DAG and SCMs are presented in Figure 9 and Equations 20 to 27. Whereas the causal path in Experiment 1 flowed from $x_{7}$ to $x_{3}$ to y, in Experiment 2, $x_{3}$ is the cause of both $x_{7}$ and y. Everything else is the same as Experiment 1.

Figure 9: Causal structure for experiment 2, based on a fork (common cause/parent).

$\displaystyle x_{1}=u_{1}$ (20)
$\displaystyle x_{2}=u_{2}$ (21)
$\displaystyle x_{3}=u_{3}$ (22)
$\displaystyle x_{4}=u_{4}$ (23)
$\displaystyle x_{5}=u_{5}$ (24)
$\displaystyle x_{6}=u_{6}$ (25)
$\displaystyle x_{7}=\beta_{8}x_{3}+u_{7}$ (26)
$\displaystyle y=\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}+\beta_{3}x_{3}+\beta_{4}x_{4}+\beta_{5}x_{5}+\beta_{6}x_{6}+u_{y}$ (27)

### 6.4 Experiment 3: Collider causal structure

The DAG and SCMs for experiment 3 are based on the collider causal structure and are shown in Figure 10 and Equations 28 to 35. Note again that there is no direct causal path between $x_{7}$ and y, and $x_{3}$ is not a parent of y. Therefore the SCM considers neither $x_{7}$ nor $x_{3}$ a parent of y, and neither is included in computing y (see Equation 35). However, in an ML model, $x_{7}$ and $x_{3}$ would most likely be included. This again highlights the distinction between ML modelling and SCM modelling.

Figure 10: Causal structure for experiment 3, based on a collider causal structure.

$\displaystyle x_{1}=u_{1}$ (28)
$\displaystyle x_{2}=u_{2}$ (29)
$\displaystyle x_{3}=\beta_{1}x_{7}+u_{3}$ (30)
$\displaystyle x_{4}=u_{4}$ (31)
$\displaystyle x_{5}=u_{5}$ (32)
$\displaystyle x_{6}=u_{6}$ (33)
$\displaystyle x_{7}=u_{7}$ (34)
$\displaystyle y=\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}+\beta_{4}x_{4}+\beta_{5}x_{5}+\beta_{6}x_{6}+u_{y}$ (35)

### 6.5 Limitations and Assumptions

In all the CE computations and evaluations in this study, I make the assumption that all features are actionable and all counterfactual values are plausible. The aim of this study was not to study actionability and plausibility, but rather causality. In real life, however, it is highly unlikely that all features are actionable and all counterfactuals are plausible. This was assumed to simplify the study.

## 7 Results

This section presents two main results. The first is to show the PCM method using CEs from DiCE (Section 7.1). The second is to show the results of this method on 30 examples from the three causal structures (Section 7.2).

### 7.1 Pearl's Counterfactual Method (PCM)

#### 7.1.1 Experiment 1: Chain causal structure

Following the steps outlined under Section 6, we begin by:

##### Step 1:

generating a dataset by sampling each feature in Table 1 and generating the data according to the SCMs of Equations 12 to 19. A sample of the data generated is shown in the first five rows in Figure 11; a code sketch of this step follows.
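As a concrete sketch of Step 1, the snippet below generates the chain dataset of Equations 12 to 19 with the stated $\beta$ coefficients. The paper does not specify the distributions of the exogenous terms, so here I assume the root-node $u_i$ carry the Table 1 statistics while $u_3$ and $u_y$ are standard normal noise; these are labeled assumptions. The class conversion used at the end is the one described in the next paragraph.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 2000

# Beta coefficients beta_0..beta_8 as given in Section 6.2.
b = [0.4, 0.6, 0.4, 0.6, 0.7, 0.4, 0.4, 0.3, 0.7]

# Exogenous terms. Assumption: u1, u2, u4..u7 follow the Table 1 statistics
# of their features (the paper leaves the exogenous distributions implicit);
# u3 and uy are assumed standard normal.
u1 = rng.normal(50, 5, n)        # x1: grades
u2 = rng.normal(20, 1, n)        # x2: age
u3 = rng.normal(0, 1, n)
u4 = rng.binomial(1, 0.6, n)     # x4: gender
u5 = rng.binomial(1, 0.3, n)     # x5: bursary
u6 = rng.normal(70, 5, n)        # x6: grades
u7 = rng.normal(50, 5, n)        # x7: grades
uy = rng.normal(0, 1, n)

# Chain SCM (Equations 12-19): x7 -> x3 -> y.
x1, x2, x4, x5, x6, x7 = u1, u2, u4, u5, u6, u7
x3 = b[1] * x7 + u3
y = b[0] + b[1]*x1 + b[2]*x2 + b[3]*x3 + b[4]*x4 + b[5]*x5 + b[6]*x6 + uy

df = pd.DataFrame(dict(x1=x1, x2=x2, x3=x3, x4=x4, x5=x5, x6=x6, x7=x7, y=y))
df["class"] = (df["y"] >= df["y"].mean()).astype(int)  # binarise the output
```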
Note that y is a numeric value and class is a binary 0 or 1. DiCE operates only on class outputs, so I converted y to a class by assigning 1 to any value greater than or equal to the mean and 0 to anything below it. The same conversion was used in all subsequent results. Figure 11: First five rows of the data generated in step 1. ##### Step 2: Next, train a logistic regression model on the data; this model is used in the DiCE operations. Logistic regression is used because the output is a class, but any supervised learning model that performs classification could be used here. ##### Step 3: Select any unit of interest in the dataset on which to compute counterfactuals. This unit is presented in Figure 12, together with two CEs computed by DiCE. Notice that the original class was 1 and that the counterfactuals switched the class to 0. Figure 12: Random unit selected for experiment 1, shown at the top. In this DiCE computation, two counterfactual explanations were generated, shown by the two rows at the bottom. ##### Steps 4 and 5: Next we carry out Abduction, which is to compute the exogenous variables using the actual feature values for this unit shown in Figure 12. Using the values from the top row in Figure 12, I reproduce Equations 12 to 19, but now with values inserted, to compute the U values shown in Table 2. $\displaystyle 54.5=\boldsymbol{u_{1}}$ (36) $\displaystyle 20.2=\boldsymbol{u_{2}}$ (37) $\displaystyle 18.9=0.6\cdot 53.3+\boldsymbol{u_{3}}$ (38) $\displaystyle 0.0=\boldsymbol{u_{4}}$ (39) $\displaystyle 0.0=\boldsymbol{u_{5}}$ (40) $\displaystyle 69.2=\boldsymbol{u_{6}}$ (41) $\displaystyle 53.3=\boldsymbol{u_{7}}$ (42) $\displaystyle 81.7=0.4+0.6\cdot 54.5+0.4\cdot 20.2+0.6\cdot 18.9+0.7\cdot 0+0.4\cdot 0+0.4\cdot 77.0+\boldsymbol{u_{y}}$ (43) Table 2: Exogenous variables for experiment 1. $u_{1}$ | $u_{2}$ | $u_{3}$ | $u_{4}$ | $u_{5}$ | $u_{6}$ | $u_{7}$ | $u_{y}$ ---|---|---|---|---|---|---|--- 54.4 | 20.2 | -1.58 | 0 | 0 | 77.0 | 51.4 | -1.58 ##### Step 6: In this example I select counterfactual 0, i.e. the first row of counterfactuals in Figure 12. Here we can see that two features, $x_{6}=69.2$ and $x_{7}=53.4$, have been changed by DiCE in order to obtain the other output class, 0; the rest remain unchanged. Therefore, when applying Pearl’s Action method, if a feature value differs from the original, we delete the edges from its parents and insert the new counterfactual value. This is because when we intervene on a feature, we introduce a new mechanism that determines its state, and the feature no longer “listens” to its parents [2]. We now apply the above (i.e. the exogenous variables and the CEs) to the SCMs for this experiment, shown below. Ultimately, we are trying to compute the output y from the SCM and compare it with the output class from the DiCE CEs. $\displaystyle x_{1}=u_{1}=54.4$ (44) $\displaystyle x_{2}=u_{2}=20.2$ (45) $\displaystyle x_{3}=0.6\boldsymbol{\cdot}53.3+(-1.58)$ (46) $\displaystyle x_{4}=u_{4}=0.0$ (47) $\displaystyle x_{5}=u_{5}=0.0$ (48) $\displaystyle x_{6}=CE_{6}=69.2$ (49) $\displaystyle x_{7}=CE_{7}=53.3$ (50) $\displaystyle y=\beta_{0}+\beta_{1}u_{1}+\beta_{2}u_{2}+\beta_{3}x_{3}+\beta_{4}u_{4}+\beta_{5}u_{5}+\beta_{6}CE_{6}+u_{y}=79.08$ (51) The output y = 79.08 is greater than the mean of 79.07. Therefore, when applying the CEs computed by DiCE to PCM, we obtain a class of 1, whereas DiCE predicted a class of 0 for this unit.
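For readers who wish to reproduce this pipeline, below is a hedged sketch of Steps 2 to 7 in Python. The DiCE calls follow the publicly documented `dice_ml` interface, but the exact arguments, the choice of unit, and the trained model are illustrative assumptions rather than the author's original code; `df` and the coefficient list `b` come from the data-generation sketch above, and the SCM surgery follows the chain structure of Equations 12 to 19.

```python
import numpy as np
import dice_ml
from sklearn.linear_model import LogisticRegression

X_cols = ["x1", "x2", "x3", "x4", "x5", "x6", "x7"]

# Step 2: train a classification model (df and b come from the sketch above)
clf = LogisticRegression(max_iter=1000).fit(df[X_cols], df["class"])

# Steps 3 and 4: select a unit and generate two CEs with DiCE
data = dice_ml.Data(dataframe=df[X_cols + ["class"]],
                    continuous_features=["x1", "x2", "x3", "x6", "x7"],
                    outcome_name="class")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")
row = df.iloc[0]                                    # the selected unit (assumed)
result = explainer.generate_counterfactuals(df[X_cols].iloc[[0]],
                                            total_CFs=2,
                                            desired_class="opposite")
ce = result.cf_examples_list[0].final_cfs_df.iloc[0]   # counterfactual 0

# Step 5 (Abduction): recover the exogenous variables of the original unit
u = {i: row[f"x{i}"] for i in (1, 2, 4, 5, 6, 7)}       # root nodes, Eqs 12-18
u[3] = row["x3"] - b[1] * row["x7"]                     # from Eq. 14
uy = row["y"] - (b[0] + b[1] * row["x1"] + b[2] * row["x2"]
                 + b[3] * row["x3"] + b[4] * row["x4"]
                 + b[5] * row["x5"] + b[6] * row["x6"])  # from Eq. 19

# Step 6 (Action): features changed by DiCE are severed from their parents
changed = {c for c in X_cols if not np.isclose(float(ce[c]), row[c])}
x7 = float(ce["x7"]) if "x7" in changed else u[7]
x3 = float(ce["x3"]) if "x3" in changed else b[1] * x7 + u[3]  # x3 still "listens" to x7
x = {c: (float(ce[c]) if c in changed else u[int(c[1])])
     for c in ("x1", "x2", "x4", "x5", "x6")}

# Step 7 (Prediction): recompute y through the surgically altered SCM (Eq. 19)
y_pcm = (b[0] + b[1] * x["x1"] + b[2] * x["x2"] + b[3] * x3
         + b[4] * x["x4"] + b[5] * x["x5"] + b[6] * x["x6"] + uy)
class_pcm = int(y_pcm >= df["y"].mean())
print(y_pcm, class_pcm)   # compare with the class DiCE reported for the CE
```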
#### 7.1.2 Experiment 2: Fork causal structure Following the same steps as above for the fork causal structure, we generate the dataset (Step 1), train a classification model (Step 2), select a unit/case and generate DiCE CEs (Steps 3 and 4, see Figure 13), compute the exogenous variables (Steps 4 and 5), identify the counterfactual values and apply Action (Step 6), and finally compute the SCM output and compare it with the DiCE output (Step 7). Figure 13 shows the unit that was selected as well as the two DiCE CEs. Figure 13: Selected unit and two DiCE CEs generated for the fork causal structure. Table 3 shows the calculated exogenous variables. Table 3: Exogenous variables for experiment 2. $u_{1}$ | $u_{2}$ | $u_{3}$ | $u_{4}$ | $u_{5}$ | $u_{6}$ | $u_{7}$ | $u_{y}$ ---|---|---|---|---|---|---|--- 53.2 | 19.5 | 51.9 | 0 | 0 | 76.0 | 0.84 | 0.84 If we again use $CE_{0}$ in our example, we can see from Figure 13 that features $x_{1}=34.4$ and $x_{4}=1.0$ differ from the original: they are counterfactuals. We therefore pay special attention to them, deleting their original causal mechanisms and replacing them with the new values; the rest of the features remain unchanged in the SCMs. $\displaystyle y=\beta_{0}+\beta_{1}\cdot CE_{1}+\beta_{2}u_{2}+\beta_{3}u_{3}+\beta_{4}\cdot CE_{4}+\beta_{5}u_{5}+\beta_{6}u_{6}+u_{y}=91.9$ (52) The output y = 91.9 is smaller than the mean of 93.6. Therefore, when applying the CEs computed by DiCE to PCM, we obtain a class of 0, which is the same class predicted by DiCE. #### 7.1.3 Experiment 3: Collider causal structure The final example is for the collider causal structure, and we again follow the same steps as before. We will not repeat the details here, only the results. Figure 14 shows the unit selected and the DiCE counterfactuals, and Table 4 shows the computed U variables. Figure 14: Selected unit for experiment 3 with two DiCE CEs generated for the collider causal structure. Table 4: Exogenous variables for experiment 3. $u_{1}$ | $u_{2}$ | $u_{3}$ | $u_{4}$ | $u_{5}$ | $u_{6}$ | $u_{7}$ | $u_{y}$ ---|---|---|---|---|---|---|--- 44.8 | 20.4 | 0.43 | 0 | 1 | 76.7 | 44.3 | 0.43 If we again use $CE_{0}$ in this example from Figure 14, we can see that only one feature, $x_{3}=61.9$, differs from the original. However, due to the causal structure of the collider, $x_{3}$ is not a parent of y and is therefore ignored. The output y is therefore calculated as follows: $\displaystyle y=\beta_{0}+\beta_{1}u_{1}+\beta_{2}u_{2}+\beta_{4}u_{4}+\beta_{5}u_{5}+\beta_{6}u_{6}+u_{y}=67.0$ (53) The output y = 67.0 is greater than the mean of 66.84. Therefore, when applying the CEs computed by DiCE to PCM, we obtain a class of 1, which is the same class predicted by DiCE. ### 7.2 Effect of Causal Structures Section 7.1 presented the step-by-step method for computing CEs via DiCE and then using these values in PCM to compute counterfactual outputs; the aim there was to present the PCM method itself. This section, still using the same method, aims to gauge whether different causal structures produce different counterfactual outcomes under PCM. We do this by applying the method to thirty DiCE CEs: ten for each causal structure. #### 7.2.1 Chain (mediator) Table 5 shows the results of selecting 5 units, with DiCE generating two counterfactual explanations for each unit, totalling ten CEs. Each CE was input into the chain SCMs and the steps above were followed. The column “class” refers to the class of the original unit and of the CEs.
The column “class (PCM)” is the class predicted using Pearl’s counterfactual method. We can see that in three out of ten cases, PCM predicted a different class than the DiCE method; these are highlighted in gray in the table. Table 5: Results for chain causal structure. Item | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $x_{7}$ | y | class | class (PCM) | y (PCM) ---|---|---|---|---|---|---|---|---|---|---|--- Unit 1 | 54.4 | 20.2 | 18.9 | 0 | 0 | 77.0 | 51.3 | 81.7 | 1 | - | - U | 54.4 | 20.2 | -1.58 | 0 | 0 | 77.0 | 51.4 | -1.58 | - | - | - $CE_{0}$ | 54.4 | 20.2 | 18.9 | 0 | 0 | 69.2 | 53.4 | - | 0 | 1 | 79.1 $CE_{1}$ | 39.9 | 20.2 | 18.9 | 1 | 0 | 77.0 | 51.3 | - | 0 | 0 | 77.8 Unit 2 | 52.5 | 18.7 | 15.9 | 0 | 0 | 80.5 | 40.1 | 80.9 | 1 | - | - U | 52.5 | 18.7 | -0.16 | 0 | 0 | 80.5 | 40.1 | -0.16 | - | - | - $CE_{0}$ | 52.5 | 18.7 | 15.9 | 0 | 0 | 73.1 | 40.1 | - | 0 | 0 | 77.5 $CE_{1}$ | 41.4 | 18.7 | 15.9 | 0 | 0 | 70.9 | 40.1 | - | 0 | 0 | 70.4 Unit 3 | 57.5 | 21.3 | 23.2 | 0 | 1 | 61.9 | 58.6 | 82.3 | 1 | - | - U | 57.5 | 21.3 | -0.2 | 0 | 1 | 61.9 | 58.6 | -0.2 | - | - | - $CE_{0}$ | 57.5 | 21.3 | 19.0 | 0 | 1 | 61.9 | 58.6 | - | 0 | 1 | 79.8 $CE_{1}$ | 57.5 | 21.3 | 15.1 | 0 | 0 | 61.9 | 58.6 | - | 0 | 0 | 77.1 Unit 4 | 54.6 | 20.2 | 19.9 | 0 | 0 | 66.2 | 54.1 | 77.9 | 0 | - | - U | 54.6 | 20.2 | -1.7 | 0 | 0 | 66.2 | 54.1 | -1.7 | - | - | - $CE_{0}$ | 62.7 | 20.2 | 19.9 | 0 | 0 | 66.2 | 54.1 | - | 1 | 1 | 82.8 $CE_{1}$ | 62.7 | 18.5 | 19.9 | 0 | 0 | 66.2 | 54.1 | - | 1 | 1 | 82.1 Unit 5 | 50.4 | 20.4 | 25.6 | 0 | 1 | 67.5 | 58.8 | 83.7 | 1 | - | - U | 50.4 | 20.4 | 2.1 | 0 | 1 | 67.5 | 58.8 | 2.1 | - | - | - $CE_{0}$ | 50.4 | 20.4 | 21.7 | 0 | 1 | 67.5 | 58.8 | - | 0 | 1 | 80.9 $CE_{1}$ | 35.3 | 20.4 | 25.6 | 0 | 1 | 67.5 | 58.8 | - | 0 | 0 | 74.2 #### 7.2.2 Fork causal structure Table 6 presents the results for ten CEs using the fork causal structure. The gray rows show where the PCM method generates the opposite class to DiCE; this occurred in two out of ten cases. Table 6: Results for fork causal structure.
Item | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $x_{7}$ | y | class | class (PCM) | y (PCM) ---|---|---|---|---|---|---|---|---|---|---|--- Unit 1 | 47.5 | 19.53 | 50.6 | 0 | 0 | 68.5 | 19.9 | 94.1 | 0 | - | - U | 47.5 | 19.53 | 50.6 | 0 | 0 | 68.5 | -0.3 | -0.3 | - | - | - $CE_{0}$ | 47.5 | 19.53 | 51.7 | 1 | 0 | 68.5 | 19.9 | - | 1 | 1 | 95.5 $CE_{1}$ | 47.5 | 19.53 | 55.8 | 0 | 0 | 68.5 | 19.9 | - | 1 | 1 | 97.0 Unit 2 | 56.7 | 18.9 | 42.4 | 1 | 0 | 63.9 | 15.9 | 92.6 | 1 | - | - U | 56.7 | 18.9 | 42.4 | 1 | 0 | 63.9 | -1.11 | -1.11 | - | - | - $CE_{0}$ | 51.2 | 20.5 | 42.4 | 1 | 0 | 63.9 | 15.9 | - | 0 | 0 | 89.9 $CE_{1}$ | 62.4 | 18.9 | 28.2 | 1 | 0 | 63.9 | 15.9 | - | 0 | 0 | 87.5 Unit 3 | 52.3 | 18.7 | 43.1 | 1 | 0 | 64.4 | 18.6 | 92.9 | 1 | - | - U | 52.3 | 18.7 | 43.1 | 1 | 0 | 64.4 | 1.36 | 1.36 | - | - | - $CE_{0}$ | 53.5 | 22.1 | 43.1 | 1 | 0 | 64.4 | 18.6 | - | 0 | 1 | 95.0 $CE_{1}$ | 52.3 | 20.3 | 43.1 | 1 | 0 | 64.4 | 18.6 | - | 0 | 0 | 93.6 Unit 4 | 46.6 | 20.2 | 52.9 | 1 | 0 | 72.2 | 20.4 | 96.6 | 1 | - | - U | 46.6 | 20.2 | 52.9 | 1 | 0 | 72.2 | -0.8 | -0.8 | - | - | - $CE_{0}$ | 36.4 | 22.3 | 52.9 | 1 | 0 | 72.2 | 20.4 | - | 0 | 0 | 91.7 $CE_{1}$ | 38.8 | 20.2 | 52.9 | 1 | 0 | 72.2 | 20.4 | - | 0 | 0 | 92.3 Unit 5 | 54.4 | 21.1 | 46.7 | 0 | 0 | 74.9 | 19.9 | 100.7 | 1 | - | - U | 54.4 | 21.1 | 46.7 | 0 | 0 | 74.9 | 19.9 | 2.1 | - | - | - $CE_{0}$ | 36.4 | 21.1 | 46.7 | 0 | 0 | 70.1 | 19.9 | - | 0 | 0 | 87.9 $CE_{1}$ | 46.1 | 21.1 | 46.7 | 0 | 0 | 74.9 | 19.9 | - | 0 | 1 | 95.7 #### 7.2.3 Collider causal structure Table 7 presents the PCM results for ten CEs generated from the collider causal structure. Here, five out of ten (50%) of the DiCE CEs produced outputs that conflicted with the PCM output classes; these are shown in gray. Table 7: Results for collider causal structure. Item | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $x_{7}$ | y | class | class (PCM) | y (PCM) ---|---|---|---|---|---|---|---|---|---|---|--- Unit 1 | 47.3 | 21.1 | 57.4 | 1 | 0 | 59.7 | 48.9 | 62.3 | 0 | - | - U | 47.3 | 21.1 | 0.48 | 1 | 0 | 59.7 | 48.9 | 0.48 | - | - | - $CE_{0}$ | 47.3 | 21.1 | 57.4 | 1 | 0 | 85.2 | 48.9 | - | 1 | 1 | 72.5 $CE_{1}$ | 47.3 | 20.7 | 69.5 | 1 | 0 | 59.7 | 48.9 | - | 1 | 0 | 62.1 Unit 2 | 46.8 | 19.7 | 58.9 | 0 | 1 | 68.9 | 51.9 | 64.0 | 0 | - | - U | 46.8 | 19.7 | -0.3 | 0 | 1 | 68.9 | 51.9 | -0.3 | - | - | - $CE_{0}$ | 46.8 | 19.7 | 69.7 | 0 | 1 | 68.9 | 51.9 | - | 1 | 0 | 64.05 $CE_{1}$ | 61.4 | 19.7 | 58.9 | 0 | 1 | 68.9 | 51.9 | - | 1 | 1 | 72.7 Unit 3 | 43.3 | 19.4 | 56.7 | 0 | 0 | 75.3 | 51.4 | 62.7 | 0 | - | - U | 43.3 | 19.4 | -1.5 | 0 | 0 | 75.3 | 51.4 | -1.5 | - | - | - $CE_{0}$ | 46.9 | 19.4 | 63.1 | 0 | 0 | 75.3 | 51.4 | - | 1 | 0 | 64.8 $CE_{1}$ | 60.5 | 16.8 | 56.7 | 0 | 0 | 75.3 | 51.4 | - | 1 | 1 | 72.0 Unit 4 | 45.5 | 19.9 | 63.8 | 0 | 1 | 66.3 | 59.7 | 64.1 | 0 | - | - U | 45.5 | 19.9 | 1.5 | 0 | 1 | 66.3 | 59.7 | 1.5 | - | - | - $CE_{0}$ | 45.5 | 19.9 | 63.8 | 0 | 1 | 66.3 | 47.17 | - | 1 | 1 | 73.1 $CE_{1}$ | 45.5 | 19.9 | 63.8 | 0 | 1 | 66.3 | 48.6 | - | 1 | 1 | 73.1 Unit 5 | 43.6 | 20.4 | 63.9 | 0 | 1 | 77.3 | 55.3 | 67.4 | 1 | - | - U | 43.6 | 20.4 | 1.3 | 0 | 1 | 77.3 | 55.3 | 1.3 | - | - | - $CE_{0}$ | 43.6 | 20.4 | 54.8 | 0 | 1 | 77.3 | 55.3 | - | 0 | 1 | 67.4 $CE_{1}$ | 43.6 | 20.4 | 61.5 | 0 | 0 | 77.3 | 55.3 | - | 0 | 1 | 67.4 ## 8 Discussion This section discusses the results and, specifically, how this method compares with and fits into the current literature. ### 8.1 Pearl’s Counterfactual Method
Verma et al. [5] identified a current challenge in the CE literature: the lack of causal models (SCMs) to guide counterfactual explanations. Mahajan et al. [8] stated that feasibility (i.e. coming from a possible world) in CEs is fundamentally a causal concept and that it is important to preserve causal relationships. That is, if we vary one feature, we have to consider how other features are causally related to that feature. They give an example where a CE might recommend increasing education level to a master’s degree to obtain a different, desired outcome. According to them, if we change education level, then the age of the person also needs to change because it is causally linked with education level. To preserve causal constraints, they proposed constraining the distance between features based on an SCM. Guidotti [9] also mentions causality as a requirement for CEs: for a counterfactual explanation to be plausible and actionable, it should maintain causal relationships between features. Both of these arguments are reasonable. We want to use the true causal structure in the data; indeed, this is a main contribution of this paper. However, when dealing with counterfactuals, there is some nuance. According to Pearl, when we intervene on a feature and perform do-calculus, we manipulate and change the original causal structure in the data by deleting the relationship between that feature and its parents. This is slightly different from what Mahajan et al. [8] and Guidotti [9] require. Indeed, we want to make use of the true causal structure in the data; however, counterfactual features are no longer constrained by their parents, as described in Pearl’s Action method (Section 5.2). Therefore, if we apply Pearl’s method to Mahajan et al.’s example, and if age is a parent of education level in the original causal structure and the CE has recommended increasing education level, then, because we are now intervening on education, we free it from the influence of its surroundings and it becomes only a function of our intervention. Age is no longer a cause of education, and these two features are no longer causally constrained. This differs from Mahajan et al., who aim to preserve the causal relationship. Furthermore, by preserving the causal structure of the counterfactual variable, are we not simply maintaining the same distribution of data? Recall that counterfactuals are, by nature, not in the distribution. Of course, our features must still be feasible, i.e. they must come from the real world. However, it seems that maintaining the causal structure between a counterfactual and its parents keeps the problem on Rung 1 of the Pearl causal hierarchy: association [2]. A counterfactual, by definition, keeps everything else the same and changes only the feature in question; if other features change with it according to the original data-generating process, then it seems that it is not a counterfactual. I acknowledge, however, that if I have misunderstood the literature mentioned above, then I am open to correction. The reader may then ask why I am making such a fuss over using SCMs if I seem not to care about the constraints in the causal structure. Recall that it is only the counterfactual feature that is cut off from its parents; the other input features maintain their causal structure. Furthermore, the causal relationship between the counterfactual feature and the output remains untouched.
Indeed, we must preserve feasibility by keeping the data in the real world, but we cannot do this by preserving a counterfactual feature’s causal relationship with its parents. ### 8.2 Effect of Causal Structures The second aim was to see, using PCM, how different causal structures affected the CEs generated by DiCE. First of all, out of 30 results, 10 (33%) showed conflict between the CE output and the PCM output. Although certainly not conclusive, the results suggest that DiCE CEs can produce incorrect estimates (assuming PCM is correct). For more conclusive results, I suggest performing a simulation study of 500 to 1000 estimates. The obstacle is that, at least for now, each PCM calculation is carried out by hand and is therefore impractical to repeat hundreds of times; future work would develop code to automate this type of study. Another important result was that the collider causal structure showed the most conflict with the DiCE CEs: the chain had 3/10 conflicts, the fork 2/10, and the collider 5/10 (50%). Again, these results are not conclusive, but they suggest that the type of causal structure may have a significant effect on the true counterfactual explanations generated. Whereas ML models simply include all features, the true SCM would not. In fact, including a collider in a statistical predictive model produces unwanted bias in the causal estimates [7] and can “sabotage a causal analysis” [12]. These results suggest that simply training a model on the entire dataset may introduce unwanted error into our counterfactual estimates. It is vital that we understand the underlying causal structure before blindly generating CEs. ### 8.3 Causal Discovery This study used simulated data with known ground-truth causal structures. The reader may object that this method only works with simulated data: how can we validate CEs using PCM on real-life data when we do not know the causal structure? This is where causal discovery comes in, which refers to discovering the true causal structure in real-life data [13]. A fuller explanation of causal discovery is outside the scope of this study. However, I believe that for us to have significantly more trust in CEs from DiCE and others, we must first perform causal discovery on our data in order to determine, as best we can, the true causal structure. We then perform CE validation by feeding the CEs into the PCM method on that structure. Real-life datasets may be substantially more complicated than the ones I created in this study; this is left for future work. ### 8.4 Concluding Remarks and Future Work At the beginning of the article I argued that it is vital to evaluate (validate) counterfactual explanations (CEs). A main reason is that CEs are generated from machine learning models that do not necessarily take into account the causal structure in the data, which can lead to errors in the predictions and estimations of counterfactuals. CEs are gaining much interest in the data community, and if we do not deeply understand how we are generating them, then at best they will add no value. The results of this study showed that there was some conflict between the ML method for generating CEs and Pearl’s method, which casts some doubt on how much we can trust CEs that are blindly generated by machine learning models. Different causal structures showed different levels of conflict. Where do we go from here?
It is vital that, before we generate CEs, we do our best to learn the true causal structure and modify our models accordingly; causal discovery is essential for the future of CEs. Future work would include carrying out PCM using CEs on real-life data with a known causal structure. I also believe that the ultimate way of validating CEs is to implement them on real data and measure the true output after a period of time. This is, of course, highly challenging. For more confidence in these results, it is best to run simulations over hundreds of CEs and also to apply the method to real-life data. ## References * [1] Judea Pearl, Madelyn Glymour, and Nicholas P. Jewell. Causal inference in statistics: A primer. Wiley, 2016. * [2] Judea Pearl and Dana Mackenzie. The book of why. Basic Books, 2018. * [3] Christoph Molnar. Interpretable machine learning: A guide for making black box models explainable. 2019. * [4] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. arXiv preprint arXiv:1711.00399, 2017. * [5] Sahil Verma, John P. Dickerson, and Keegan Hines. Counterfactual explanations for machine learning: Challenges revisited. arXiv:2106.07756v1, 2021. * [6] Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020. * [7] M.A. Luque-Fernandez, M. Schomaker, D. Redondo-Sanchez, M. Jose Sanchez Perez, A. Vaidya, and M.E. Schnitzer. Educational note: Paradoxical collider effect in the analysis of non-communicable disease epidemiological data: a reproducible illustration and web application. Int J Epidemiol., 48:640–653, 2019. * [8] Divyat Mahajan, Chenhao Tan, and Amit Sharma. Preserving causal constraints in counterfactual explanations for machine learning classifiers. arXiv:1912.03277v3, 2020. * [9] Riccardo Guidotti. Counterfactual explanations and how to find them: literature review and benchmarking. Data Mining and Knowledge Discovery, 2022. * [10] Bernhard Schölkopf. Causality for machine learning. arXiv:1911.10500v2, 2019. * [11] Peter Tennant. Introduction to causal inference and directed acyclic graphs, 2022. * [12] Stephen L. Morgan and Christopher Winship. Counterfactuals and causal inference: Methods and principles for social research. Cambridge University Press, 2007. * [13] Clark Glymour, Kun Zhang, and Peter Spirtes. Review of causal discovery methods based on graphical models. Front. Genet., 10:524, doi:10.3389/fgene.2019.00524, 2019.
# “FISTA” in Banach spaces with adaptive discretisations Antonin Chambolle, Robert Tovey A. Chambolle: CEREMADE, CNRS & Université Paris Dauphine, PSL Research University, Paris. Email: <EMAIL_ADDRESS>. ORCID: 0000-0002-9465-4659. R. Tovey: MOKAPLAN, INRIA Paris, Paris. Email: <EMAIL_ADDRESS>. ORCID: 0000-0001-5411-2268. ###### Abstract FISTA is a popular convex optimisation algorithm which is known to converge at an optimal rate whenever a minimiser is contained in a suitable Hilbert space. We propose a modified algorithm where each iteration is performed in a subset which is allowed to change at every iteration. Sufficient conditions are provided for guaranteed convergence, although at a reduced rate depending on the conditioning of the specific problem. These conditions have a natural interpretation when a minimiser exists in an underlying Banach space. Typical examples are L1-penalised reconstructions where we provide detailed theoretical and numerical analysis. ###### Keywords: Convex optimization · Multiscale · Multigrid · Sparsity · Lasso Journal: Computational Optimization and Applications ## 1 Introduction The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) was proposed by Beck and Teboulle (Beck2009) as an extension of Nesterov’s fast gradient method (Nesterov2004) and is now a very popular algorithm for minimising the sum of two convex functions. We write this as the problem of computing $\inf_{u\in\mathds{H}}\operatorname{E}(u)\qquad\text{such that}\qquad\operatorname{E}(u)\coloneqq\operatorname{f}(u)+\operatorname{g}(u),$ (1) for a Hilbert space $\mathds{H}$, where $\operatorname{f}\colon\mathds{H}\to\mathds{R}$ is a convex differentiable function with $L$-Lipschitz gradient and $\operatorname{g}\colon\mathds{H}\to\overline{\mathds{R}}$ is a “simple” convex function whose “proximity operator” is easy to compute. Throughout this work we assume that $\operatorname{E}$ is bounded below so that the infimum is finite. The iterates of the FISTA algorithm will be denoted $u_{n}\in\mathds{H}$. If, moreover, the infimum is achieved, it has been shown that $\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)$ converges at the optimal rate of $n^{-2}$ (Beck2009), and later (after a small modification) the convergence of the iterates was also shown in a general Hilbert space setting (Chambolle2015). Many further works have gone on to demonstrate faster practical convergence rates for slightly modified variants of FISTA (Tao2016; Liang2017; Alamo2019). In this work we address the case where the minimiser possibly fails to exist, or lies in a larger space in which $\mathds{H}$ is dense. There is much overlap between the techniques used in this work and those used in the literature of inexact optimisation; however, our interpretation is relatively novel. In particular, we emphasise the infinite-dimensional setting where errors come from “discretisation”, rather than random or decaying errors in $\mathds{H}$, which enables two new perspectives: * • Analytically, we prove new rates of convergence for FISTA when the minimum energy is not achieved (at least not in $\mathds{H}$).
The exact rate can be computed by quantifying coercivity and regularity properties of $\operatorname{E}$. If there is no minimiser in $\mathds{H}$, then this rate is strictly slower than $n^{-2}$. * • Numerically, we allow the optimisation domain to change on every iteration. This enables us to understand how FISTA behaves with adaptive discretisations. Adaptive finite-element methods are known to improve the efficiency of, for example, approximating the solutions of PDEs. Our analytical results show how to combine such tools with FISTA without reducing the guaranteed rate of convergence, and our numerical results confirm much improved time and computer memory efficiency in the Lasso example (Section 6). All the examples in this work, discussed from Section 5 onward, consider $\{u\in\mathds{H}\operatorname{\;s.t.\;}\operatorname{E}(u)<\infty\}$ to be contained in some ambient Banach space $\mathds{U}$. The idea is that FISTA provides a minimising sequence in $\mathds{H}\cap\mathds{U}$, but further properties, like the rate of convergence (of $\operatorname{E}$ or of the iterates), must come from the topology of $\mathds{U}$. It will not be necessary for $\mathds{H}\hookrightarrow\mathds{U}$ to be a continuous embedding, nor in fact for the full inclusion $\mathds{H}\subset\mathds{U}$ to hold. Some other works on FISTA-like algorithms include (Jiang2012; Villa2013). Of particular note, our stability estimate for FISTA in Theorem 4.1 is very similar to (Schmidt2011, Prop 2) and (Aujol2015, Prop 3.3). This is then used to analyse the convergence properties in our more general Banach space setting, but where all sources of inexactness come from subspace approximations. The ideas in (Parpas2017) are similar, although applied to the proximal gradient method with an additional smoothing of the functional $\operatorname{g}$. The permitted refinement steps are also broader in our work. Very recent work in (Yu2021) proposes a “Multilevel FISTA” algorithm which allows similar coarse-to-fine refinement strategies, although only finitely many of them. We also allow for non-uniform refinement with a posteriori strategies. ### 1.1 Outline This work is organised as follows. Section 2 defines notation and the generic form of our proposed refining FISTA algorithm, Algorithm 1. The main theoretical contribution of this work is the convergence analysis of Algorithm 1, which is split into two parts: first we outline the proof structure in Section 3, then we state the specific results in the case of FISTA in Section 4. The main results are Theorems 4.2/4.3, which extend the convergence of FISTA to cases with unattained minima using uniform/adaptively chosen subspaces $\mathds{U}^{n}$ respectively. Section 5 presents some general results for the application of Algorithm 1 in Banach spaces, and Section 6 gives a much more detailed discussion of adaptive refinement for Lasso minimisation. In particular, we describe how to choose efficient refining discretisations to approximate $\inf_{u\in\mathds{H}}\operatorname{E}(u)$, estimate the convergence of $\operatorname{E}$, and identify the support of the minimiser. The numerical results in Section 7 demonstrate these techniques in four different models, demonstrating the apparent sharpness of our convergence rates and the computational efficiency of adaptive discretisations.
## 2 Definitions and notation We consider optimisation of (1) over a Hilbert space $(\mathds{H},\langle\cdot,\cdot\rangle,\|\cdot\|)$. In the more analytical sections (Sections 3 and 4) it will be more convenient to use the translated energy $\operatorname{E}_{0}\colon\mathds{H}\to\mathds{R},\qquad\operatorname{E}_{0}(u)\coloneqq\operatorname{E}(u)-\inf_{\widetilde{u}\in\mathds{H}}\operatorname{E}(\widetilde{u})$ (2) so that $\inf_{u\in\mathds{H}}\operatorname{E}_{0}(u)=0$, although access to this function is not assumed for the numerical examples. The proposed generalised FISTA algorithm is stated in Algorithm 1 for an arbitrary choice of closed convex subsets $\mathds{U}^{n}\subset\mathds{H}$, $n\in\mathds{N}$. The only difference from standard FISTA is that on iteration $n$, all computations are performed in the subset $\mathds{U}^{n}$. If $\mathds{U}^{n}=\mathds{H}$, then we recover the original algorithm. More generally, the idea is that the $\mathds{U}^{n}$ are “growing”, for example $\mathds{U}^{n}\subset\mathds{U}^{n+1}$, but this assumption is not necessary in most of the results. Without loss of generality we will assume $L=1$, i.e. $\nabla\operatorname{f}$ is 1-Lipschitz; to get the general statement of any of the results which follow, replace $\operatorname{E}$ with $\frac{\operatorname{E}}{L}$. In particular, $\|\nabla\operatorname{f}(u)-\nabla\operatorname{f}(v)\|\leq\|u-v\|$ (3) for all $u,v\in\mathds{H}$, and $\operatorname{g}$ is called “simple” if it is proper, convex, weakly lower-semicontinuous, and $\operatorname*{argmin}_{u\in\widetilde{\mathds{U}}}\tfrac{1}{2}\|u-v\|^{2}+\operatorname{g}(u)$ (4) is exactly computable for all $v\in\mathds{H}$ and all $\widetilde{\mathds{U}}\in\{\mathds{U}^{n}\}_{n=0}^{\infty}$. Closed, convex subsets of $\mathds{H}$ are locally weakly compact; therefore this argmin is always non-empty. One defining property of the FISTA algorithm is an appropriate choice of inertia, dictated by $t_{n}$. In particular, we will say that $(t_{n})_{n=0}^{\infty}$ is a _FISTA stepsize_ if $t_{0}=1,\qquad t_{n}\geq 1,\qquad\text{and}\qquad\rho_{n}\coloneqq t_{n}^{2}-t_{n+1}^{2}+t_{n+1}\geq 0\qquad\text{for all }n=0,1,\ldots$ (5) The precise constants associated to a given rate are given in the statements of the theorems but, for convenience, are otherwise omitted from the text. For sequences $(a_{n})_{n=0}^{\infty}$, $(b_{n})_{n=0}^{\infty}$ we will use the notation: $a_{n}\lesssim b_{n}\iff\exists C,N>0\operatorname{\;s.t.\;}a_{n}\leq Cb_{n}\text{ for all }n>N,$ $a_{n}\simeq b_{n}\iff a_{n}\lesssim b_{n}\lesssim a_{n}.$ For $n\in\mathds{N}$ we use the abbreviation $[n]=\{1,2,\ldots,n\}$.
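As a sanity check on the stepsize condition (5), the classical Beck–Teboulle rule satisfies it with $\rho_{n}=0$ exactly. The short sketch below (ours, not from the paper; the tolerance is an arbitrary choice) verifies this numerically and illustrates the $t_{n}\simeq n$ growth used throughout.

```python
import math

def fista_stepsizes(N):
    """Classical rule t_0 = 1, t_{n+1} = (1 + sqrt(1 + 4 t_n^2)) / 2."""
    t = [1.0]
    for _ in range(N):
        t.append((1 + math.sqrt(1 + 4 * t[-1] ** 2)) / 2)
    return t

t = fista_stepsizes(1000)
# rho_n = t_n^2 - t_{n+1}^2 + t_{n+1} vanishes exactly for this rule,
# and t_n grows like n/2, consistent with t_N ~ N in the estimates below
rho = [t[n] ** 2 - t[n + 1] ** 2 + t[n + 1] for n in range(1000)]
assert max(abs(r) for r in rho) < 1e-6
```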
When the subdifferential of $\operatorname{E}$ is set-valued, we will use the shorthand $|\!|\!|\partial\operatorname{E}(u)|\!|\!|\coloneqq\inf_{v\in\partial\operatorname{E}(u)}|\!|\!|v|\!|\!|$ (6) for any specified norm $|\!|\!|\cdot|\!|\!|$. Algorithm 1 Refining subset FISTA 1: Choose $(\mathds{U}^{n})_{n\in\mathds{N}}$, $u_{0}\in\mathds{U}^{0}$ and some FISTA stepsize choice $(t_{n})_{n\in\mathds{N}}$ 2: $v_{0}\leftarrow u_{0}$, $n\leftarrow 0$ 3: repeat 4: $\overline{u}_{n}\leftarrow(1-\tfrac{1}{t_{n}})u_{n}+\tfrac{1}{t_{n}}v_{n}$ 5: $u_{n+1}\leftarrow\operatorname*{argmin}_{u\in\mathds{U}^{n+1}}\tfrac{1}{2}\|u-\overline{u}_{n}+\nabla\operatorname{f}(\overline{u}_{n})\|^{2}+\operatorname{g}(u)$ $\triangleright$ Only modification, $\mathds{U}^{n+1}\subset\mathds{U}$ 6: $v_{n+1}\leftarrow(1-t_{n})u_{n}+t_{n}u_{n+1}$ 7: $n\leftarrow n+1$ 8: until some stopping criterion is met ## 3 General proof recipe In this section we give an intuitive outline of the full proof of convergence of Algorithm 1, before giving formal theorems and proofs in the next section. First we recall the classical FISTA convergence guarantee given by (Chambolle2015, Thm 3.1): if there exists $u^{*}\in\operatorname*{argmin}_{u\in\mathds{H}}\operatorname{E}_{0}(u)$, then $t_{N}^{2}\operatorname{E}_{0}(u_{N})+\sum_{n=1}^{N-1}\rho_{n}\operatorname{E}_{0}(u_{n})+\tfrac{1}{2}\|v_{N}-u^{*}\|^{2}\leq\tfrac{1}{2}\|u_{0}-u^{*}\|^{2}$ (7) for any FISTA stepsize choice $t_{N}\simeq N$ such that $\rho_{n}\geq 0$. ##### Step 1: Quantifying the stability The first step is to generalise (7) to account for the adapting subsets $\mathds{U}^{n}$.
In the notation of Algorithm 1, Theorem 4.1 shows that $t_{N}^{2}\operatorname{E}_{0}(u_{N})+\sum_{n=1}^{N-1}\rho_{n}\operatorname{E}_{0}(u_{n})+\tfrac{1}{2}\|v_{N}-w_{N}\|^{2}\leq\tfrac{1}{2}\|u_{0}-w_{0}\|^{2}+\tfrac{\|w_{N}\|^{2}-\|w_{0}\|^{2}}{2}+\sum_{n=1}^{N}\left[t_{n}\operatorname{E}_{0}(w_{n})+\langle v_{n-1},w_{n-1}-w_{n}\rangle\right]$ (8) for any $w_{n}\in\mathds{U}^{n}$. The similarities to (7) are clear: if $\mathds{U}^{n}=\mathds{H}$, then we can choose $w_{n}=u^{*}$ and the two estimates agree. The extra terms in (8) quantify the robustness to changing the discretisation. ##### Step 2: Quantifying the scaling properties To show that the extra terms in (8) are small, we need to quantify the approximation properties of $\mathds{U}^{n}$. The idea is that there is a sequence $w_{n}\in\mathds{U}^{n}$, $n\in\mathds{N}$, such that $\|w_{n}\|$ grows slowly and $\operatorname{E}_{0}(w_{n})$ decreases quickly. To quantify this balance, we introduce a secondary sequence $n_{0}<n_{1}<\ldots$ and constants $a_{\operatorname{U}},a_{\operatorname{E}}\geq 1$ such that for each $k\in\mathds{N}$ $n\leq n_{k}\implies\|w_{n}\|\lesssim a_{\operatorname{U}}^{k},\qquad n\geq n_{k}\implies\operatorname{E}_{0}(w_{n})\lesssim a_{\operatorname{E}}^{-k}.$ (9) A canonical example would be $\mathds{U}^{n}=\{u\in\mathds{H}\operatorname{\;s.t.\;}\|u\|\leq a_{\operatorname{U}}^{k}\}$ for $n\in[n_{k},n_{k+1})$; then $a_{\operatorname{E}}$ reflects the smoothness of $\operatorname{E}_{0}$. The exponential scaling is chosen to improve the stability of Algorithm 1. It is natural if we consider the $\mathds{U}^{n}$ to be subspaces of functions discretised on a uniform mesh: if that mesh is sequentially refined, then the resolution of the mesh will be of order $h^{k}$ after $k$ refinements, for some $h<1$. The integer $n_{k}$ is then the iteration at which the mesh has been refined $k$ times. The trade-off between $a_{\operatorname{E}}$ and $a_{\operatorname{U}}$ dictates the final convergence rate of the algorithm. If $a_{\operatorname{U}}>1$, then we cannot guarantee the original $n^{-2}$ rate of convergence.
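To fix ideas before the formal analysis, here is a minimal sketch (ours, under stated assumptions) of Algorithm 1 for a finite-dimensional Lasso energy $\operatorname{E}(u)=\tfrac{1}{2}\|Au-b\|^{2}+\lambda\|u\|_{1}$, with $\mathds{U}^{n}$ taken to be growing coordinate subspaces. Because $\operatorname{g}$ is separable, the restricted proximal step in line 5 reduces to soft-thresholding on the active coordinates and zero elsewhere. The schedule `m_of_n` is an assumed illustration, not the refinement strategy analysed later in the paper; it should be non-decreasing so that $u_{n}\in\mathds{U}^{n+1}$, as required by (10) below.

```python
import numpy as np

def refining_fista_lasso(A, bvec, lam, N, m_of_n):
    """A sketch of Algorithm 1 for E(u) = 0.5*||Au - bvec||^2 + lam*||u||_1.

    U^n = {u : u_i = 0 for all i >= m_of_n(n)} is a growing coordinate
    subspace, so the restricted prox in line 5 is soft-thresholding on the
    first m_of_n(n) coordinates and zero elsewhere.
    """
    dim = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    u, v, t = np.zeros(dim), np.zeros(dim), 1.0
    for n in range(N):
        ubar = (1 - 1 / t) * u + (1 / t) * v                 # line 4
        z = ubar - A.T @ (A @ ubar - bvec) / L               # gradient step
        m = m_of_n(n + 1)                                    # active set of U^{n+1}
        u_next = np.zeros(dim)
        u_next[:m] = np.sign(z[:m]) * np.maximum(np.abs(z[:m]) - lam / L, 0.0)
        v = (1 - t) * u + t * u_next                         # line 6
        u, t = u_next, (1 + np.sqrt(1 + 4 * t * t)) / 2      # stepsize update
    return u

# illustrative schedule: double the active set every 50 iterations
A = np.random.default_rng(1).standard_normal((40, 200))
u_hat = refining_fista_lasso(A, A @ np.ones(200), lam=1.0, N=500,
                             m_of_n=lambda n: min(200, 8 * 2 ** (n // 50)))
```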
##### Step 3: Generalising the convergence bound In this step we combine the FISTA stability estimate with the subset approximation guarantees to provide a sharper estimate of stability with respect to the parameters $a_{\operatorname{E}}$ and $a_{\operatorname{U}}$. For example, if for each $k\in\mathds{N}$ $w_{n}=w_{n_{k}}\text{ for each }n=n_{k},n_{k}+1,\ldots,n_{k+1}-1,$ then many terms on the right-hand side of (8) telescope to 0. The result of this is presented in Lemma 3. The key idea is that the stability error in (8) has $K\ll N$ terms, rather than $N$. ##### Step 4: Sufficiently fast growth In Step 3 we developed a convergence bound; now we wish to show that it is only worse than the classical (7) by a constant factor. In particular, it should be equivalent to run Algorithm 1 for $N$ iterations or to run the classical FISTA algorithm for $N$ iterations on the fixed subset $\mathds{U}^{N}$. The estimate (7) provides $N^{2}\operatorname{E}_{0}(u_{N})\lesssim\|u_{0}-w_{N}\|^{2}=O(a_{\operatorname{U}}^{2K})$ for $N\leq n_{K}$. Lemma 4 shows that Algorithm 1 can achieve the same order of approximation, so long as the $\mathds{U}^{n}$ grow sufficiently quickly (in particular $n_{k}^{2}\lesssim a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$). ##### Step 5: Sufficiently slow growth The result of Step 3 is sufficient to prove convergence, but not yet a rate. If the subsets grow too quickly, then the influence of $\|u_{n}\|\to\infty$ will slow the rate of convergence. If $n_{k}$ is too large, then we overfit to the discrete problem, but if $n_{k}$ is too small, then FISTA converges slowly. Lemma 5 balances these two factors in an optimal way ($n_{k}^{2}\simeq a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$) for Algorithm 1, resulting in a convergence rate of $\operatorname{E}_{0}(u_{N})\lesssim\frac{a_{\operatorname{U}}^{2K}}{N^{2}}\lesssim\frac{N^{2\kappa}}{N^{2}}$ for all $N\in\mathds{N}$ and $\kappa=\frac{2\log a_{\operatorname{U}}}{\log a_{\operatorname{E}}+2\log a_{\operatorname{U}}}\in[0,1)$. In particular, if the minimum is attained in $\mathds{H}$, then we recover the classical rate with $\kappa=0$. ##### Step 6: Adaptivity Up to this point we have implicitly focused on the case where $\mathds{U}^{n}$ (and $n_{k}$) are chosen a priori. The main challenge for adaptive choice of $\mathds{U}^{n}$ is to guarantee (9) from Step 2 using a posteriori estimates. Combined with the partial telescoping requirement in Step 3, a natural choice is $w_{n}=u_{n_{k}-1}$ for $n\in[n_{k},n_{k+1})$, i.e. the value of $n_{k}$ is chosen to be $n+1$ once the iterate $u_{n}$ is observed. Theorem 4.3 shows that a sufficient condition is $u_{n_{k}-1}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1},\qquad\|u_{n_{k}-1}\|\lesssim a_{\operatorname{U}}^{k},\quad\text{and}\quad\operatorname{E}_{0}(u_{n_{k}-1})\lesssim a_{\operatorname{E}}^{-k}.$ Convergence is most stable if the approximation spaces $\mathds{U}^{n}$ satisfy a monotone inclusion; breaking the monotonicity requires more care.
The only non-trivial property to verify is the energy gap $\operatorname{E}_{0}(u_{n})=\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)$. Lemma 6 proposes some sufficient conditions to guarantee the same overall rate of convergence as in Step 5, $\min_{n\leq N}\operatorname{E}_{0}(u_{n})\lesssim\frac{N^{2\kappa}}{N^{2}}$ for all $N\in\mathds{N}$, with the same $\kappa\in[0,1)$ from Step 5. The penalty for accelerating the change of discretisation is a potential loss of stability or monotonicity in $\operatorname{E}_{0}(u_{n})$, although this behaviour has not been seen in numerical experiments. ## 4 Proof of convergence In this section we follow the recipe motivated in Section 3 to prove convergence of two variants of Algorithm 1. Each of the main theorems and lemmas is stated with a sketch proof in this section; the details of the proofs are either trivial or very technical and are therefore placed in Section A to preserve the flow of the argument. ### 4.1 Computing the convergence bound For Step 1 of Section 3 we look to replicate the classical bound of the form (7) for Algorithm 1. The proofs in this step follow the classical arguments (Beck2009; Chambolle2015) very closely. Throughout this section we consider a sequence $(\mathds{U}^{n})_{n\in\mathds{N}}$ which generates the iterates $(u_{n})_{n\in\mathds{N}}$ in Algorithm 1 such that $u_{n}\in\mathds{U}^{n+1}\subset\mathds{H}\quad\text{where }\mathds{U}^{n}\text{ is a closed, convex subset for all }n\in\mathds{N}.$ (10) #### 4.1.1 Single iterations We first wish to understand a single iteration of Algorithm 1. This is done through the following two lemmas. ###### Lemma 1 (equivalent to (Chambolle2015, Lemma 3.1)) Suppose $\nabla\operatorname{f}$ is 1-Lipschitz and, for any $\overline{u}\in\mathds{H}$, define $u\coloneqq\operatorname*{argmin}_{u'\in\mathds{U}^{n}}\tfrac{1}{2}\|u'-\overline{u}+\nabla\operatorname{f}(\overline{u})\|^{2}+\operatorname{g}(u').$ Then, for all $w\in\mathds{U}^{n}$, we have $\operatorname{E}_{0}(u)+\tfrac{1}{2}\|u-w\|^{2}\leq\operatorname{E}_{0}(w)+\tfrac{1}{2}\|\overline{u}-w\|^{2}.$ The proof is exactly the same as in (Chambolle2015), on the subset $\mathds{U}^{n}$. Applying Lemma 1 to the iterates from Algorithm 1 gives a more explicit inequality. ###### Lemma 2 ((Chambolle2015, (17)), (Beck2009, Lemma 4.1)) Let $w_{n}\in\mathds{U}^{n}$ be chosen arbitrarily and let $u_{n}$/$v_{n}$ be generated by Algorithm 1 for all $n\in\mathds{N}$.
For all $n>0$, it holds that $t_{n}^{2}\left(\operatorname{E}_{0}(u_{n})-\operatorname{E}_{0}(w_{n})\right)-(t_{n}^{2}-t_{n})\left(\operatorname{E}_{0}(u_{n-1})-\operatorname{E}_{0}(w_{n})\right)\leq\tfrac{1}{2}\left[\|v_{n-1}\|^{2}-\|v_{n}\|^{2}\right]+\langle v_{n}-v_{n-1},w_{n}\rangle.$ (11) The proof is given in Theorem A.1 and is a result of the convexity of $\operatorname{E}_{0}$ and $\mathds{U}^{n}$, for a well-chosen $w$ in Lemma 1. #### 4.1.2 Generic convergence bound Lemma 2 gives us an understanding of a single iteration of Algorithm 1; summing over $n$ then gives our generic convergence bound for any variant of Algorithm 1. ###### Theorem 4.1 (analogous to (Chambolle2015, Thm 3.2), (Beck2009, Thm 4.1)) Fix a sequence of subsets $(\mathds{U}^{n})_{n\in\mathds{N}}$ satisfying (10), an arbitrary $u_{0}\in\mathds{U}^{0}$, and a FISTA stepsize choice $(t_{n})_{n\in\mathds{N}}$. Let $u_{n}$ and $v_{n}$ be generated by Algorithm 1. Then, for any choice of $w_{n}\in\mathds{U}^{n}$ and $N\in\mathds{N}$, we have $t_{N}^{2}\operatorname{E}_{0}(u_{N})+\sum_{n=1}^{N-1}\rho_{n}\operatorname{E}_{0}(u_{n})+\frac{\|v_{N}-w_{N}\|^{2}}{2}\leq\frac{\|u_{0}-w_{0}\|^{2}-\|w_{0}\|^{2}+\|w_{N}\|^{2}}{2}+\sum_{n=1}^{N}\left[t_{n}\operatorname{E}_{0}(w_{n})+\langle v_{n-1},w_{n-1}-w_{n}\rangle\right].$ (12) The proof is given in Theorem A.2. This result is the key approximation for showing convergence of FISTA with changing subsets. In the classical setting, we have $\mathds{U}^{n}=\mathds{H}$ and $w_{n}=w_{0}\in\operatorname*{argmin}_{u\in\mathds{H}}\operatorname{E}_{0}(u)$, and the extra terms on the right-hand side collapse to 0. If there exists a minimiser $u^{*}\in\operatorname*{argmin}_{u\in\mathds{H}}\operatorname{E}_{0}(u)$, then the natural choice in (12) is $w_{n}=\mathsf{\Pi}_{n}u^{*}$ for some projection $\mathsf{\Pi}_{n}\colon\mathds{H}\to\mathds{U}^{n}$; however, there are simple counter-examples which give $\operatorname{E}_{0}(\mathsf{\Pi}_{n}u^{*})=\infty$, and so this inequality becomes useless.
For example, if $\operatorname{f}(u)=\|u\|_{L^{2}([0,1])}^{2}$, $\operatorname{g}$ is the indicator on the set $\mathds{D}=\{u\in L^{1}([0,1])\operatorname{\;s.t.\;}u(x)\geq x\}$, and $\mathsf{\Pi}_{n}$ is the $L^{2}$ projection onto a set of piecewise constant functions, then $u^{*}=x\mapsto x$. On the other hand, suppose one of the pixels of the discretisation is $[x_{0}-h,x_{0}+h]$; then $\mathsf{\Pi}_{n}u^{*}\left(x_{0}+\tfrac{h}{2}\right)=\operatorname*{argmin}_{c\in\mathds{R}}\int_{x_{0}-h}^{x_{0}+h}(u^{*}(x)-c)^{2}\mathop{}\!\mathrm{d}x=\operatorname*{argmin}_{c\in\mathds{R}}\int_{x_{0}-h}^{x_{0}+h}(x-c)^{2}\mathop{}\!\mathrm{d}x=x_{0}<x_{0}+\tfrac{h}{2}.$ In particular $\mathsf{\Pi}_{n}u^{*}\notin\mathds{D}$, therefore $\operatorname{E}_{0}(\mathsf{\Pi}_{n}u^{*})=\infty$. The choice $w_{n}=\operatorname*{argmin}_{u\in\mathds{U}^{n}}\operatorname{E}_{0}(u)$ is much more robust and allows us to apply Algorithm 1 more broadly. The penalty for this flexibility is a more complicated analysis; each time the subset changes, because $v_{n}\in\mathds{U}^{n}$, the system receives a “shock” proportional to $\|v_{n}\|\,\|w_{n}-\mathsf{\Pi}_{n}w_{n+1}\|$. ### 4.2 Convergence bound with milestones In standard FISTA, the right-hand side of (12) is a constant. The following lemma minimises the growth of the “constant” as a function of $N$ by partially telescoping the sum on the right-hand side. Before progressing to the content of Step 3, we will first formalise the definition of the constants $a_{\operatorname{U}}$ and $a_{\operatorname{E}}$ introduced in Step 2. ###### Definition 1 Fix $a_{\operatorname{U}},a_{\operatorname{E}}\geq 1$ and a sequence $\widetilde{w}_{k}\in\mathds{H}$. We say that $(\widetilde{w}_{k})_{k\in\mathds{N}}$ is an _$(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$_ if $\|\widetilde{w}_{k}\|\lesssim a_{\operatorname{U}}^{k}\qquad\text{and}\qquad\operatorname{E}_{0}(\widetilde{w}_{k})\lesssim a_{\operatorname{E}}^{-k}$ for all $k\in\mathds{N}$. In this section we will simply assume that such sequences exist, and in Section 5 we will give some more general examples.
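As a toy illustration of Definition 1 (this example is ours, not taken from the paper): take $\mathds{H}=\ell^{2}(\mathds{N})$ and $\operatorname{E}(u)=\sum_{i\geq 1}2^{-i}|u_{i}-1|$, which is convex and finite on $\ell^{2}$ but whose infimum, $0$, is attained only by the constant sequence $(1,1,\ldots)\notin\ell^{2}$. Choosing $\widetilde{w}_{k}$ to be the truncation whose first $2k$ entries equal $1$ (and the rest $0$) gives

$\|\widetilde{w}_{k}\|=\sqrt{2k}\lesssim a_{\operatorname{U}}^{k}\ \text{ for every }a_{\operatorname{U}}>1,\qquad\operatorname{E}_{0}(\widetilde{w}_{k})=\sum_{i>2k}2^{-i}=4^{-k},$

so $(\widetilde{w}_{k})_{k\in\mathds{N}}$ is an $(a_{\operatorname{U}},4)$-minimising sequence for every $a_{\operatorname{U}}>1$. The subexponential growth of $\|\widetilde{w}_{k}\|$ means the rate exponent $\kappa$ from Step 5 can be taken arbitrarily small here, at the cost of larger constants.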
###### Lemma 3 Let $u_{n}$, $v_{n}$ be generated by Algorithm 1 with $(\mathds{U}^{n})_{n\in\mathds{N}}$ satisfying (10), let $(n_{k})_{k\in\mathds{N}}\subset\mathds{N}$ be a monotone increasing sequence, and choose $\widetilde{w}_{k}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1}$ for each $k\in\mathds{N}$. If such a sequence exists, then for all $K\in\mathds{N}$ and $n_{K}\leq N<n_{K+1}$ we have $t_{N}^{2}\operatorname{E}_{0}(u_{N})+\sum_{n=1}^{N-1}\rho_{n}\operatorname{E}_{0}(u_{n})+\frac{\|v_{N}-\widetilde{w}_{K}\|^{2}}{2}\leq C+\frac{\|\widetilde{w}_{K}\|^{2}}{2}+\frac{(N+1)^{2}-n_{K}^{2}}{2}\operatorname{E}_{0}(\widetilde{w}_{K})+\sum_{k=1}^{K}\left[\frac{n_{k}^{2}-n_{k-1}^{2}}{2}\operatorname{E}_{0}(\widetilde{w}_{k-1})+\langle v_{n_{k}-1},\widetilde{w}_{k-1}-\widetilde{w}_{k}\rangle\right]$ where $C=\frac{\|u_{0}-\widetilde{w}_{0}\|^{2}-\|\widetilde{w}_{0}\|^{2}}{2}$. The proof is given in Lemma 11. The introduction of $n_{k}$ has greatly compressed the expression of Theorem 4.1: on the right-hand side, we now only consider $\operatorname{E}_{0}$ evaluated on the sequence $\widetilde{w}_{k}$, and there are $K$ elements in the sum rather than $N$. ### 4.3 Refinement without overfitting The aim of Step 4 is to show that $n$ iterations of Algorithm 1 are no slower (up to a constant factor) than $n$ iterations of classical FISTA on the space $\mathds{U}^{n}$. In other words, we would like to ensure that $\operatorname{E}_{0}(u_{n})=\operatorname{E}(u_{n})-\min_{u\in\mathds{U}^{n}}\operatorname{E}(u)\lesssim\frac{\|u_{0}-\widetilde{w}_{k}\|^{2}}{n^{2}}$ (13) uniformly for $n\in[n_{k},n_{k+1})$. If this condition is not satisfied, then it indicates that computational effort has been wasted by a poor choice of subsets. This can be interpreted as overfitting to the discretisation $\operatorname{E}_{0}|_{\mathds{U}^{n}}$ rather than the desired function $\operatorname{E}_{0}|_{\mathds{H}}$. Combining the assumptions given by Definition 1 and the result of Lemma 3, the following lemma proves the convergence of Algorithm 1 provided that the refinement times $n_{k}$ are sufficiently small (i.e. $\mathds{U}^{n}$ grows sufficiently quickly).
###### Lemma 4 Suppose $\mathds{U}^{n}$, $u_{n}$, $v_{n}$ and $n_{k}$ satisfy the conditions of Lemma 3 and $(\widetilde{w}_{k})_{k\in\mathds{N}}$ forms an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$ with $\widetilde{w}_{k}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1}.$ If either: * • $a_{\operatorname{U}}>1$ and $n_{k}^{2}\lesssim a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$, * • or $a_{\operatorname{U}}=1$, $\sum_{k=1}^{\infty}n_{k}^{2}a_{\operatorname{E}}^{-k}<\infty$, and $\sum_{k=1}^{\infty}\|\widetilde{w}_{k}-\widetilde{w}_{k+1}\|<\infty$, then $\operatorname{E}_{0}(u_{N})\lesssim\frac{a_{\operatorname{U}}^{2K}}{N^{2}}\qquad\text{for all}\qquad n_{K}\leq N<n_{K+1}.$ The proof is given in Lemma 12. We make two observations on the optimality of Lemma 4: * • The convergence guarantee for $N\in[n_{K},n_{K+1})$ iterations of classical FISTA in the space $\mathds{U}^{N}$ is $\operatorname{E}_{0}(u_{N})\lesssim\frac{\|u_{0}-\widetilde{w}_{K}\|^{2}}{N^{2}}+\min_{u\in\mathds{U}^{N}}\operatorname{E}_{0}(u)\lesssim\frac{a_{\operatorname{U}}^{2K}}{N^{2}}+a_{\operatorname{E}}^{-K}.$ This is equivalent to Lemma 4 after the assumptions on $n_{k}$. * • If $\mathds{H}$ is finite-dimensional, then the condition $a_{\operatorname{U}}=1$ is almost trivially satisfied: norms in finite dimensions are equivalent, and any discretisation can be achieved with a finite number of refinements (i.e. the sums over $k$ are finite). ### 4.4 Convergence rate In Lemma 4 we showed that $\operatorname{E}_{0}(u_{n})$ converges at a rate depending on $k$ and $n$, so long as $k$ grows sufficiently quickly. On the other hand, as $k$ grows, the rate becomes worse, and so we also need to put a lower limit on the growth of $n_{k}$. The following lemma completes Step 5 by computing the global convergence rate of $\operatorname{E}_{0}(u_{n})$ when $k$ grows at the minimum rate consistent with Lemma 4. As a special case, note that if $a_{\operatorname{U}}=1$ then Lemma 4 already gives the optimal $O(N^{-2})$ convergence rate; this is in fact a special case of that shown in (Aujol2015, Prop 3.3). If the minimum is achieved in $\mathds{H}$, then it is not possible to refine “too quickly” and the following lemma is not needed. ###### Lemma 5 Suppose $u_{n}$ and $n_{k}$ are sequences satisfying $\forall N\in[n_{K},n_{K+1}),\ \operatorname{E}_{0}(u_{N})\lesssim\frac{a_{\operatorname{U}}^{2K}}{N^{2}}\qquad\text{where}\qquad n_{K}^{2}\gtrsim a_{\operatorname{E}}^{K}a_{\operatorname{U}}^{2K};$ then $\operatorname{E}_{0}(u_{N})\lesssim\frac{1}{N^{2(1-\kappa)}}\qquad\text{where}\qquad\kappa=\frac{\log a_{\operatorname{U}}^{2}}{\log a_{\operatorname{E}}+\log a_{\operatorname{U}}^{2}}.$ The proof is given in Lemma 13.
#### 4.4.1 FISTA convergence with a priori discretisation

We can summarise Lemmas 3 to 5 into a single theorem stating the convergence guarantees when $\mathds{U}^{n}$ and $n_{k}$ are chosen a priori.

###### Theorem 4.2

Let $(\widetilde{w}_{k})_{k\in\mathds{N}}$ be an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$ and choose any $\mathds{U}^{n}$ satisfying (10) such that $\widetilde{w}_{k}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1}$ for all $k\in\mathds{N}$. Compute $u_{n}$ and $v_{n}$ by Algorithm 1. Suppose that either:

* $a_{\operatorname{U}}>1$ and $n_{k}^{2}\simeq a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$, or
* $a_{\operatorname{U}}=1$, $\sum_{k=1}^{\infty}n_{k}^{2}a_{\operatorname{E}}^{-k}<\infty$ and $\sum_{k=1}^{\infty}\lVert\widetilde{w}_{k}-\widetilde{w}_{k+1}\rVert<\infty$,

then

$\operatorname{E}_{0}(u_{N})\lesssim\frac{1}{N^{2(1-\kappa)}}\qquad\text{ where }\qquad\kappa=\frac{\log a_{\operatorname{U}}^{2}}{\log a_{\operatorname{E}}+\log a_{\operatorname{U}}^{2}}\qquad\text{uniformly for }N\in\mathds{N}.$

Analytically, this theorem gives new rates of convergence for FISTA when the minimiser is not achieved in $\mathds{H}$. Indeed, for the original algorithm ($\mathds{U}^{n}=\mathds{H}$), if $u_{0}=0$ for simplicity and _any_ $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence $(\widetilde{w}_{k})_{k\in\mathds{N}}$ of $\operatorname{E}$ exists, the result of Lemma 3 is

$\operatorname{E}_{0}(u_{N})\leq\inf_{w\in\mathds{H}}\frac{\lVert w\rVert^{2}+N^{2}\operatorname{E}_{0}(w)}{2t_{N}^{2}}\leq\min_{k\in\mathds{N}}\frac{\lVert\widetilde{w}_{k}\rVert^{2}+N^{2}\operatorname{E}_{0}(\widetilde{w}_{k})}{2t_{N}^{2}}\lesssim\min_{k\in\mathds{N}}\frac{a_{\operatorname{U}}^{2k}+N^{2}a_{\operatorname{E}}^{-k}}{N^{2}}\lesssim N^{-2(1-\kappa)}.$ (14)

In this sense, we could say that $\operatorname{E}_{0}$ converges at the rate $N^{-2(1-\kappa)}$ if and only if such a sequence exists. Nothing is lost (or gained) analytically by choosing $\mathds{U}^{n}\subsetneq\mathds{H}$. Numerically, the strategy of Theorem 4.2 is easy to implement and requires very little knowledge of how to estimate $\operatorname{E}_{0}(u_{n})$. So long as $a_{\operatorname{U}}$ and $a_{\operatorname{E}}$ can be computed analytically, one can choose $\widetilde{w}_{k}$ implicitly to be the discrete minimisers of some "uniform" discretisations (e.g. $\mathds{U}^{n}=\{\lVert u\rVert\leq k\}$ or finite element spaces with uniform mesh) to achieve the stated convergence rate.
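To illustrate how little machinery this strategy needs, the following sketch (our own, not the paper's implementation) drives a generic FISTA step with the a priori schedule $n_{k}^{2}\simeq(a_{\operatorname{E}}a_{\operatorname{U}}^{2})^{k}$; the callbacks `fista_step` and `refine` are assumed to be supplied by the user.

```python
import math

def refinement_times(a_U, a_E, K):
    # n_k with n_k**2 ~= (a_E * a_U**2)**k, taking the implicit constant to be 1
    return [math.ceil((a_E * a_U ** 2) ** (k / 2)) for k in range(K + 1)]

def fista_apriori(u0, v0, fista_step, refine, a_U, a_E, K, N):
    """Algorithm 1 with a priori refinement (Theorem 4.2), as a sketch.

    fista_step(u, v, n, k) -- one FISTA iteration restricted to U^n at level k
    refine(k)              -- enlarge the discretisation to level k + 1
    """
    n_k = refinement_times(a_U, a_E, K)
    u, v, k = u0, v0, 0
    for n in range(1, N + 1):
        while k < K and n >= n_k[k + 1]:
            refine(k)  # e.g. halve the mesh size of a finite element space
            k += 1
        u, v = fista_step(u, v, n, k)
    return u
```

For the continuous LASSO example of Section 6.1, one would take $a_{\operatorname{U}}=2^{d/2}$ and $a_{\operatorname{E}}=2$, so that `refinement_times` grows like $2^{(1+d)k/2}$.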
#### 4.4.2 FISTA convergence with adaptivity

There are two properties of the sequence $(\mathds{U}^{n})_{n\in\mathds{N}}$ which we may wish to decide adaptively: the refinement times $n_{k}$ and the discretising spaces $\{\mathds{U}^{n}\operatorname{\;s.t.\;}n_{k}\leq n<n_{k+1}\}$. We will refer to these as temporal and spatial adaptivity respectively. Lemma 4 gives a sufficient condition on $n_{k}$ for converging at the rate $O(N^{2(\kappa-1)})$, but it is not necessary. Indeed, for $n\leq n_{k}$ we have

$\operatorname{E}_{0}(u_{n})\geq\min_{u\in\mathds{U}^{n}}\operatorname{E}_{0}(u)=O(a_{\operatorname{E}}^{-k})=O(n^{2(\kappa-1)}),$

which suggests that converging faster than $n^{2(\kappa-1)}$ requires choosing smaller $n_{k}$. As an example, in Section 7.2 we will see that Algorithm 1 can converge at a near-linear rate, although this is not possible without adaptive refinement times. On the other hand, the choice of spatial adaptivity has no impact on the rate but can impact computational efficiency. Greedy discretisation techniques are permitted so long as they are sufficient to estimate $\operatorname{E}_{0}(u_{n})$ accurately. Theorem 4.2 already allows for spatial adaptivity, so we focus on temporal adaptivity. Lemma 4 suggests that a good refinement time strategy is to choose $n_{k}$ to be the minimal integer such that $\operatorname{E}_{0}(u_{n_{k}-1})\lesssim a_{\operatorname{E}}^{-k}$. However, the value of $\operatorname{E}_{0}$ may be hard to estimate, and so we retain a "backstop" condition which guarantees that convergence is no slower than the rate given by Theorem 4.2. In the non-classical case of $a_{\operatorname{U}}>1$, we provide the following theorem.

###### Theorem 4.3

Let $(\mathds{U}^{n}\subset\mathds{H})_{n\in\mathds{N}}$ be a sequence of subsets satisfying (10), and compute $u_{n}$ and $v_{n}$ by Algorithm 1. Suppose that there exists a monotone increasing sequence $n_{k}\in\mathds{N}$ such that $\widetilde{w}_{k}\coloneqq u_{n_{k}-1}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1}$ for all $k\in\mathds{N}$. If $(\widetilde{w}_{k})_{k\in\mathds{N}}$ is an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$ with $a_{\operatorname{U}}>1$ and $n_{k}^{2}\lesssim a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$, then

$\min_{n\leq N}\operatorname{E}_{0}(u_{n})=\min_{n\leq N}\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)\lesssim\frac{1}{N^{2(1-\kappa)}}\qquad\text{ where }\qquad\kappa=\frac{\log a_{\operatorname{U}}^{2}}{\log a_{\operatorname{E}}+\log a_{\operatorname{U}}^{2}}$

uniformly for $N\in\mathds{N}$.

The proof is given in Theorem A.3. If we directly compare Theorems 4.2 and 4.3, both are a direct result of Lemma 4 assuming a specific choice of $n_{k}$ or $\widetilde{w}_{k}$ respectively. We note that the convergence rate is the same in both theorems, but the price for better adaptivity (i.e. only an upper bound on $n_{k}$) is a slightly weaker stability guarantee (now convergence of $\min_{n\leq N}\operatorname{E}_{0}(u_{n})$). In Theorem 4.2, as in the original FISTA algorithm, the sequence $\operatorname{E}_{0}(u_{n})$ is not monotone but the magnitude of oscillation is guaranteed to decay in time. This behaviour is lost in Theorem 4.3. Although we do not prove it here, it can be shown that the stronger condition

$\widetilde{w}_{k}\in\mathds{U}^{n_{k}}\cap\mathds{U}^{n_{k}+1}\cap\ldots\cap\mathds{U}^{n_{k+1}-1}\cap\ldots\cap\mathds{U}^{N}$ (15)

is sufficient to restore the stronger last-iterate guarantee on $\operatorname{E}_{0}(u_{N})$. Again, monotonicity of $\mathds{U}^{n}$ corresponds with improved stability of Algorithm 1. To enable a more practical implementation of Theorem 4.3, the following lemma describes several refinement strategies which provide sufficient conditions for $\operatorname{E}_{0}(\widetilde{w}_{k})\lesssim a_{\operatorname{E}}^{-k}$.

###### Lemma 6

Let $(\widetilde{w}_{k})_{k\in\mathds{N}}$ be a sequence in $\mathds{H}$ with $\lVert\widetilde{w}_{k}\rVert\lesssim a_{\operatorname{U}}^{k}$. Suppose $\widetilde{w}_{k}\in\widetilde{\mathds{U}}^{k}\coloneqq\mathds{U}^{n_{k}}$ and denote $\operatorname{E}_{0}(\widetilde{\mathds{U}}^{k})\coloneqq\inf_{u\in\widetilde{\mathds{U}}^{k}}\operatorname{E}_{0}(u)$. Any of the following conditions is sufficient to show that $\widetilde{w}_{k}$ is an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$:

1. Small continuous gap refinement: $\operatorname{E}_{0}(\widetilde{w}_{k})\leq\beta a_{\operatorname{E}}^{-k}$ for all $k\in\mathds{N}$, some $\beta>0$.
2. Small discrete gap refinement: $\operatorname{E}_{0}(\widetilde{\mathds{U}}^{k})\leq\beta a_{\operatorname{E}}^{-k}$ and $\operatorname{E}_{0}(\widetilde{w}_{k})-\operatorname{E}_{0}(\widetilde{\mathds{U}}^{k-1})\leq\beta a_{\operatorname{E}}^{-k}$ for all $k>0$, some $\beta>0$.

Otherwise, suppose there exists a Banach space $(\mathds{U},|\!|\!|\cdot|\!|\!|)$ which contains each $\widetilde{\mathds{U}}^{k}$, $\sup_{k\in\mathds{N}}|\!|\!|\widetilde{w}_{k}|\!|\!|<\infty$, and the sublevel sets of $\operatorname{E}$ are $|\!|\!|\cdot|\!|\!|$-bounded. With the subdifferential $\partial\operatorname{E}\colon\mathds{U}\rightrightarrows\mathds{U}^{*}$, it is also sufficient if either:
3. Small continuous gradient refinement: $\sup_{u\in\mathds{U}}\inf_{v\in\partial\operatorname{E}(\widetilde{w}_{k})}\frac{|\left\langle v,u\right\rangle|}{|\!|\!| u|\!|\!|}\leq\beta a_{\operatorname{E}}^{-k}$ for all $k\in\mathds{N}$, some $\beta>0$.
4. Small discrete gradient refinement: $\operatorname{E}_{0}(\widetilde{\mathds{U}}^{k})\leq\beta a_{\operatorname{E}}^{-k}$ and $\sup_{u,\widetilde{w}\in\widetilde{\mathds{U}}^{k}}\inf_{v\in\mathds{V}^{k}}\frac{|\left\langle v,u-\widetilde{w}\right\rangle|}{|\!|\!| u-\widetilde{w}|\!|\!|}\leq\beta a_{\operatorname{E}}^{-k}$ for all $k\in\mathds{N}$, some $\beta>0$, where $\mathds{V}^{k}\coloneqq\partial(\operatorname{E}|_{\widetilde{\mathds{U}}^{k}})(\widetilde{w}_{k})$.

The proof is given in Lemma 14. The refinement criteria described by Lemma 6 can be split into two groups. Cases (1) and (3) justify that any choice of $\mathds{U}^{n_{k}}$ satisfies the required conditions, so long as $\widetilde{w}_{k}\in\mathds{U}^{n_{k}}$. In cases (2) and (4), $\widetilde{w}_{k}$ is sufficient to choose the refinement time $n_{k}$, but an a priori bound is required on $\operatorname{E}_{0}(\widetilde{\mathds{U}}^{k})$. In these cases one could, for example, choose $\widetilde{\mathds{U}}^{k}$ to be a uniform discretisation with a priori estimates. Another splitting of the criteria is into gap and gradient computations. Typically, gradient norms (in cases (3) and (4)) should be easier to estimate than function gaps because they only require local knowledge rather than global, i.e. $\partial\operatorname{E}(u_{n})$ rather than an estimate of $\inf_{u\in\mathds{H}}\operatorname{E}(u)$. Implicitly, the global information comes from an extra condition on $\operatorname{E}$ to assert that sublevel sets are bounded.
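In practice, these criteria reduce to a small per-iteration test. The following sketch (our own; the constants `beta` and `c` are illustrative) combines the small-gap criterion (1) of Lemma 6 with the backstop $n_{k}^{2}\lesssim a_{\operatorname{E}}^{k}a_{\operatorname{U}}^{2k}$ of Theorem 4.3.

```python
def should_refine(gap_estimate, n, k, a_U, a_E, beta=1.0, c=1.0):
    """Temporal adaptivity test, run once per iteration at level k.

    gap_estimate -- a computable upper bound on E_0(u_n), e.g. a primal-dual
                    gap of the kind developed in Section 6.3
    Returns True when the discretisation should be refined to level k + 1.
    """
    # criterion (1): the gap has already dropped to the tolerance of level k+1
    small_gap = gap_estimate <= beta * a_E ** (-(k + 1))
    # backstop: do not stay at level k beyond the a priori schedule
    backstop = (n + 1) ** 2 > c * (a_E * a_U ** 2) ** (k + 1)
    return small_gap or backstop
```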
## 5 General examples

We consider the main use of Algorithm 1 to be the setting where there exists a Banach space $(\mathds{U},|\!|\!|\cdot|\!|\!|)$ such that $\mathds{U}\supset\{u\in\mathds{H}\operatorname{\;s.t.\;}\operatorname{E}(u)<\infty\}$ and

$\inf_{u\in\mathds{H}}\operatorname{E}(u)=\min_{u\in\mathds{U}}\operatorname{E}(u)=\operatorname{E}(u^{*})$

for some $u^{*}\in\mathds{U}$. The cases where $\mathds{H}$ has finite dimension or is separable are more straightforward; if the total number of refinements is finite (i.e. $\mathds{U}^{n}=\mathds{U}^{N}$ for all $n\geq N$, some $N\in\mathds{N}$), then $a_{\operatorname{U}}=1$. This holds for most finite dimensional problems as well as the countable example discussed in detail in Section 6. In this section we give explicit computations of $a_{\operatorname{U}}$ and $a_{\operatorname{E}}$ in the setting where $\mathds{H}=L^{2}(\Omega)$ for some domain $\Omega\subset\mathds{R}^{d}$ and the subsets $\mathds{U}^{n}$ are finite dimensional finite-element-like spaces, as defined below.

###### Definition 2

Suppose $\lVert\cdot\rVert_{q}\lesssim|\!|\!|\cdot|\!|\!|$ (i.e. $\mathds{U}\subset L^{q}(\Omega)$) for some $q\in[1,\infty]$ and connected, bounded, measurable domain $\Omega\subset\mathds{R}^{d}$. We say that a collection $\mathds{M}$ is a _mesh_ if

$\bigcup_{\omega\in\mathds{M}}\omega\supset\Omega\qquad\text{and}\qquad|\omega\cap\omega^{\prime}|=0\qquad\text{for all }\omega,\omega^{\prime}\in\mathds{M},\ \omega\neq\omega^{\prime}.$

Furthermore, we say a sequence of meshes $(\mathds{M}^{k})_{k\in\mathds{N}}$ is _consistent_ if there exists $\omega_{0}\subset\Omega$ such that

$\forall\omega\in\mathds{M}^{k}\quad\exists(\alpha_{\omega},\vec{\beta}_{\omega})\in\mathds{R}^{d\times d}\times\mathds{R}^{d}\quad\text{such that}\quad\vec{x}\in\omega_{0}\iff\alpha_{\omega}\vec{x}+\vec{\beta}_{\omega}\in\omega.$

Fix $h\in(0,1)$, linear subspaces $\widetilde{\mathds{U}}^{k}\subset\mathds{H}$, and consistent meshes $\mathds{M}^{k}$.
We say that the sequence $(\widetilde{\mathds{U}}^{k})_{k\in\mathds{N}}$ is an _$h$-refining sequence of finite element spaces_ if there exists $c_{\alpha}>0$ such that:

$\forall(\widetilde{u},\omega)\in\widetilde{\mathds{U}}^{k}\times\mathds{M}^{k},\quad\operatorname{det}(\alpha_{\omega})\geq c_{\alpha}h^{kd}\quad\text{and}\quad\exists u\in\widetilde{\mathds{U}}^{0}\quad\text{such that}\quad\forall\vec{x}\in\omega_{0},\ u(\vec{x})=\widetilde{u}(\alpha_{\omega}\vec{x}+\vec{\beta}_{\omega}).$

We say that $(\widetilde{\mathds{U}}^{k})_{k\in\mathds{N}}$ is _of order $p$_ if for any $u^{*}\in\operatorname*{argmin}_{u\in\mathds{U}}\operatorname{E}(u)$ there exists a sequence $(\widetilde{w}_{k})_{k\in\mathds{N}}$ such that

$\forall k\in\mathds{N},\qquad\widetilde{w}_{k}\in\widetilde{\mathds{U}}^{k}\quad\text{and}\quad|\!|\!|\widetilde{w}_{k}-u^{*}|\!|\!|\lesssim_{u^{*}}h^{kp}.$ (16)

We allow the implicit constant to have any dependence on $u^{*}$ so long as it is finite. For example, in the case of Sobolev spaces we would expect an inequality of the form $\lVert\widetilde{w}_{k}-u^{*}\rVert_{W^{0,2}}\lesssim h^{kp}\lVert u^{*}\rVert_{W^{p,2}}$ Strang1972.

###### Remark 1

To clarify this definition with an example, suppose we wish to approximate $L^{q}(\Omega)$ with piecewise linear finite elements on a triangulated mesh. Then $\omega_{0}\subset\Omega$ is a single triangle of diameter $O(h)$, and all meshes $\mathds{M}^{k}$ must be triangulations of $\Omega$ with cell volumes scaling no faster than $O(h^{kd})$. The function $u$ from the $h$-refining property is an arbitrary linear element, so that each $u\in\widetilde{\mathds{U}}^{k}$ is linear on each $\omega\in\mathds{M}^{k}$, which leads to an order $p=2$ if $u^{*}\in W^{1,2}(\Omega)$. We note that any piecewise polynomial finite element (or spline) space can be used to form an $h$-refining sequence of subspaces. Wavelets with a compactly supported basis behave like a multi-resolution finite element space, as there is always overlap in the supports of basis vectors. Similarly, a Fourier basis does satisfy the scaling properties, but each basis vector has global support. Both of these exceptions are important and could be accounted for with further analysis, but we focus on the more standard finite element case. In order to align these discretisation properties with the assumptions of Theorems 4.2 and 4.3, we make the following observation.
###### Lemma 7

Fix $u^{*}\in\operatorname*{argmin}_{u\in\mathds{U}}\operatorname{E}(u)$ and $p^{\prime},q^{\prime}>0$. If a sequence $\widetilde{w}_{k}\in\mathds{H}$ satisfies

$\lVert\widetilde{w}_{k}\rVert\lesssim h^{-kq^{\prime}}\quad\text{and}\quad\operatorname{E}(\widetilde{w}_{k})-\operatorname{E}(u^{*})\lesssim h^{kp^{\prime}},$

then $(\widetilde{w}_{k})_{k\in\mathds{N}}$ is an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$ for $a_{\operatorname{U}}=h^{-q^{\prime}}$ and $a_{\operatorname{E}}=h^{-p^{\prime}}$.

This is precisely a rewriting of the statement of Definition 1 in terms of the resolution $h$. The following theorem links $p$ and $q$ from Definition 2 with $p^{\prime}$ and $q^{\prime}$ from Lemma 7.

###### Theorem 5.1

Suppose $\mathds{H}=L^{2}(\Omega)$ for some connected, bounded domain $\Omega\subset\mathds{R}^{d}$ and $\lVert\cdot\rVert_{q}\lesssim|\!|\!|\cdot|\!|\!|$ for some $q\in[1,\infty]$. For $p\geq 0$ and $h\in(0,1)$, if $(\widetilde{\mathds{U}}^{k})_{k\in\mathds{N}}$ is an $h$-refining sequence of finite element spaces of order $p$, then $(\widetilde{w}_{k})_{k\in\mathds{N}}$ is an $(a_{\operatorname{U}},a_{\operatorname{E}})$-minimising sequence of $\operatorname{E}$ for

$a_{\operatorname{U}}\leq\begin{cases}1&\text{if }q\geq 2,\\ \sqrt{h^{-d}}&q<2\text{ and }\sup_{u\in\widetilde{\mathds{U}}^{0}}\frac{\lVert u\rVert_{L^{\infty}(\omega_{0})}}{\lVert u\rVert_{L^{2}(\omega_{0})}}<\infty,\end{cases}\qquad a_{\operatorname{E}}\geq\begin{cases}h^{-2p}&\text{if }\nabla\operatorname{E}\text{ is }|\!|\!|\cdot|\!|\!|\text{-Lipschitz at }u^{*},\\ h^{-p}&\text{if }\operatorname{E}\text{ is }|\!|\!|\cdot|\!|\!|\text{-Lipschitz at }u^{*},\\ 1&\text{otherwise.}\end{cases}$

The proof of this theorem is in Appendix B. Note that $\sup_{u\in\widetilde{\mathds{U}}^{0}}\lVert u\rVert_{L^{\infty}(\omega_{0})}\lVert u\rVert_{L^{2}(\omega_{0})}^{-1}$ is finite whenever $\widetilde{\mathds{U}}^{0}\subset L^{\infty}(\Omega)$ is finite dimensional, so this is not a very strong assumption. The main take-home message of this theorem is that the computation of $a_{\operatorname{U}}$ and $a_{\operatorname{E}}$ is typically very simple and clear given a particular choice of $|\!|\!|\cdot|\!|\!|$ and $\operatorname{E}$. We also briefly remark that the Lipschitz constants in this theorem do not need to be valid globally, only on the sequence $\widetilde{w}_{k}$.
The same result holds under a local-Lipschitz assumption, for example on the ball of radius $\sup_{k\in\mathds{N}}|\!|\!|\widetilde{w}_{k}|\!|\!|$, which is finite whenever $p\geq 0$.

## 6 L1 penalised reconstruction

The canonical example for FISTA is the LASSO problem with a quadratic data fidelity and L1 regularisation. In this section we develop the necessary analytical tools for the variant with a general smooth fidelity term, which will be used for the numerical results in Section 7. We consider three forms which will be referred to as the continuous, countable, and discrete problem depending on whether the space $\mathds{U}$ is $\mathcal{M}([0,1]^{d})$, $\ell^{1}(\mathds{R})$, or $\mathds{R}^{M}$ respectively. We choose $\mathds{H}$ to be $L^{2}([0,1]^{d})$, $\ell^{2}(\mathds{R})$, or $\mathds{R}^{M}$ correspondingly. Let $\mathsf{A}\colon\mathds{U}\cap\mathds{H}\to\mathds{R}^{m}$ be a linear operator represented by the kernels $\psi_{j}\in\mathds{H}$ such that

$\forall u\in\mathds{U}\cap\mathds{H},\ j=1,\ldots,m,\qquad(\mathsf{A}u)_{j}=\left\langle\psi_{j},u\right\rangle.$ (17)

In the continuous case we will assume the additional smoothness $\psi_{j}\in C^{1}([0,1]^{d})$. In Section 6.5 we will formally define and estimate several operator semi-norms for $\mathsf{A}$ of this form; for example, Lemma 8 confirms that $\mathsf{A}$ is continuous on $\mathds{H}$ (without loss of generality $\lVert\mathsf{A}\rVert\leq 1$). In each case, the energy we consider is written as

$\operatorname{E}(u)=\operatorname{f}(\mathsf{A}u-\eta)+\mu|\!|\!| u|\!|\!|$ (18)

for some $\mu>0$ where $|\!|\!|\cdot|\!|\!|=\lVert\cdot\rVert_{1}$. We assume $\operatorname{f}\in C^{1}(\mathds{R}^{m})$ is convex, bounded from below, and $\nabla\operatorname{f}$ is 1-Lipschitz. Let $u^{*}\in\operatorname*{argmin}_{u\in\mathds{U}}\operatorname{E}(u)$, which is non-empty so long as $\psi_{j}\in C([0,1]^{d})$; see the proof of (Bredies2013, Prop. 3.1) when $\operatorname{f}$ is quadratic. The aim of this section is to develop all of the necessary tools for implementing Algorithm 1 on the energy (18) using the convergence guarantees of either Theorem 4.2 or Theorem 4.3. This includes computing the rates $a_{\operatorname{U}}$ and $a_{\operatorname{E}}$, estimating the continuous gap $\operatorname{E}_{0}(u_{n})$, and developing an efficient refinement choice for $\mathds{U}^{n}$. Below we will just describe the form of $\widetilde{\mathds{U}}^{k}$ under the assumption that $\mathds{U}^{n}\subset\widetilde{\mathds{U}}^{k}$ is chosen adaptively for $n=n_{k-1}+1,\ldots,n_{k}$. The index $k$ refers to the scale or resolution and $n$ to the iteration number of the reconstruction algorithm.

### 6.1 Continuous case

We start by estimating rates in the case $\mathds{U}=\mathcal{M}(\Omega)$ where $\Omega=[0,1]^{d}$.
In this case we choose $\widetilde{\mathds{U}}^{k}$ to be the span of all piecewise constant functions on a mesh of squares with maximum side length $2^{-k}$ (i.e. $h=\tfrac{1}{2}$) and

$\widetilde{w}_{k}\coloneqq\sum_{\omega\in\mathds{M}^{k}}\frac{u^{*}(\omega)}{|\omega|}\mathds{1}_{\omega}\quad\text{where}\quad\mathds{1}_{\omega}(\vec{x})=\begin{cases}1&\vec{x}\in\omega\\ 0&\text{else}\end{cases}.$

By construction $\widetilde{w}_{k}\in\widetilde{\mathds{U}}^{k}$; however, note that for any $u\in L^{1}(\Omega)$ and Dirac mass $\delta$ supported in $(0,1)^{d}$,

$|\!|\!| u-\delta|\!|\!|=\sup_{\varphi\in C(\Omega),\lVert\varphi\rVert_{L^{\infty}}\leq 1}\left\langle\varphi,u-\delta\right\rangle=|\!|\!| u|\!|\!|+|\!|\!|\delta|\!|\!|\geq 1=h^{0}.$ (19)

Because of this, an application of Theorem 5.1 with $p=0$ gives $a_{\operatorname{U}}=2^{\frac{d}{2}}$ but only $a_{\operatorname{E}}\geq 1$. Improving our estimate of $a_{\operatorname{E}}$ requires additional assumptions on $\mathsf{A}$. Note that $|\!|\!|\widetilde{w}_{k}|\!|\!|=\sum_{\omega\in\mathds{M}^{k}}|u^{*}(\omega)|\leq|\!|\!| u^{*}|\!|\!|$, therefore we have

$\operatorname{E}(\widetilde{w}_{k})-\operatorname{E}(u^{*})=\operatorname{f}(\mathsf{A}\widetilde{w}_{k}-\eta)-\operatorname{f}(\mathsf{A}u^{*}-\eta)+\mu\left(|\!|\!|\widetilde{w}_{k}|\!|\!|-|\!|\!| u^{*}|\!|\!|\right)$ (20)
$\leq\nabla\operatorname{f}(\mathsf{A}\widetilde{w}_{k}-\eta)\cdot\mathsf{A}(\widetilde{w}_{k}-u^{*})$ (21)
$\leq\left[\lVert\nabla\operatorname{f}(\mathsf{A}u^{*}-\eta)\rVert_{\ell^{2}}+\lVert\mathsf{A}(\widetilde{w}_{k}-u^{*})\rVert_{\ell^{2}}\right]\lVert\mathsf{A}(\widetilde{w}_{k}-u^{*})\rVert_{\ell^{2}},$ (22)

as $\operatorname{f}$ is convex with 1-Lipschitz gradient. Clearly $\lVert\nabla\operatorname{f}(\mathsf{A}u^{*}-\eta)\rVert_{\ell^{2}}$ is a constant.
For the other term, for all $\vec{r}\in\mathds{R}^{m}$ denote $\varphi\coloneqq\mathsf{A}^{*}\vec{r}$, and note that

$\vec{r}\cdot\mathsf{A}(\widetilde{w}_{k}-u^{*})=\left\langle\varphi,\widetilde{w}_{k}-u^{*}\right\rangle=\sum_{\omega\in\mathds{M}^{k}}\int_{\omega}\varphi(\vec{x})\mathop{}\!\mathrm{d}[\widetilde{w}_{k}-u^{*}]=\sum_{\omega\in\mathds{M}^{k}}|\omega|^{-1}\iint_{\omega^{2}}[\varphi(\vec{x})-\varphi(\vec{y})]\mathop{}\!\mathrm{d}\vec{x}\mathop{}\!\mathrm{d}u^{*}(\vec{y}).$ (23)

With the pointwise bound $|\varphi(\vec{x})-\varphi(\vec{y})|\leq\operatorname{diam}(\omega)\lVert\nabla\varphi\rVert_{L^{\infty}}=\sqrt{d}2^{-k}\lVert\nabla[\mathsf{A}^{*}\vec{r}]\rVert_{L^{\infty}}$, we deduce the estimate

$\lVert\mathsf{A}(\widetilde{w}_{k}-u^{*})\rVert_{\ell^{2}}=\sup_{\vec{r}\in\mathds{R}^{m}}\lVert\vec{r}\rVert_{\ell^{2}}^{-1}\left\langle\mathsf{A}^{*}\vec{r},\widetilde{w}_{k}-u^{*}\right\rangle\leq\sqrt{d}2^{-k}|\!|\!| u^{*}|\!|\!|\sup_{\vec{r}\in\mathds{R}^{m}}\lVert\vec{r}\rVert_{\ell^{2}}^{-1}\lVert\nabla[\mathsf{A}^{*}\vec{r}]\rVert_{L^{\infty}}.$ (24)

In Lemma 9 we will show that this last term, which we denote by the semi-norm $|\mathsf{A}^{*}|_{\ell^{2}\to C^{1}}$, is bounded by $\sqrt{m}\max_{j\in[m]}\lVert\nabla\psi_{j}\rVert_{\infty}$. We conclude that $\operatorname{E}(\widetilde{w}_{k})-\operatorname{E}(u^{*})\lesssim 2^{-k}$. In particular, this computation confirms two things: firstly, that the scaling constant is $a_{\operatorname{E}}=2$, and secondly, that the required smoothness to achieve a good rate with Algorithm 1 is that $\mathsf{A}^{*}\colon\mathds{R}^{m}\to C^{1}(\Omega)$ is a bounded operator. This accounts for using the weaker topology of $\mathcal{M}(\Omega)$ rather than $L^{1}(\Omega)$. Inserting the computed rates into Theorem 4.2 or Theorem 4.3 gives the guaranteed convergence rate

$\kappa=\frac{\log a_{\operatorname{U}}^{2}}{\log a_{\operatorname{E}}+\log a_{\operatorname{U}}^{2}}=\frac{d}{1+d}\quad\implies\quad\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)\lesssim n^{-2(1-\kappa)}=n^{-\frac{2}{1+d}}.$ (25)

This rate can be used to infer the required resolution at each iteration; in particular, on iteration $n$ with $n^{2}\simeq(a_{\operatorname{E}}a_{\operatorname{U}}^{2})^{k}$ we expect the resolution to be

$2^{-k}=\left(a_{\operatorname{E}}a_{\operatorname{U}}^{2}\right)^{-\frac{k}{1+d}}\simeq n^{-\frac{2}{1+d}}.$ (26)

### 6.2 Countable and discrete case

We now extend the rate computations to the case when $\mathds{U}=\ell^{1}(\mathds{R})$, or a finite dimensional subspace. The key fact here is that, even when $\mathds{U}$ is infinite dimensional, it is known (e.g. (Unser2016, Thm. 6) and (Boyer2019, Cor. 3.8)) that there exists $u^{*}\in\operatorname*{argmin}_{u\in\mathds{U}}\operatorname{E}(u)$ with at most $m$ non-zeros. If this is the case, then $u^{*}\in\ell^{2}(\mathds{R})$, indeed $\lVert u^{*}\rVert_{\ell^{2}}\leq\sqrt{m}\lVert u^{*}\rVert_{\ell^{1}}$. This makes the estimates of $a_{\operatorname{E}}/a_{\operatorname{U}}$ much simpler than in the continuous case, as we can stay in the finite-dimensional Hilbert-space setting. For countable dimensions we consider discretisation subspaces of the form $\widetilde{\mathds{U}}^{k}=\{u\in\ell^{1}(\mathds{R})\operatorname{\;s.t.\;}i\notin J_{k}\implies u_{i}=0\}$ for some sets $J_{k}\subset\mathds{N}$, i.e. infinite vectors with finitely many non-zeros. The key change in analysis from the continuous case is that $\lVert u^{*}\rVert<\infty$, so $a_{\operatorname{U}}=1$ and we expect the rate $n^{-2}$, independent of $a_{\operatorname{E}}$ or any additional properties of $\mathsf{A}$. The number of refinements will also be finite, therefore $n_{k}=\infty$ for some $k$, and the remaining conditions of Theorems 4.2 and 4.3 hold trivially.

### 6.3 Refinement metrics

Lemma 6 shows that adaptive refinement can be performed based on estimates of the function gap or the subdifferential. In this subsection we provide estimates for the fourth case of Lemma 6 which can be easily computed. Here we consider $\partial\operatorname{E}\colon\mathds{H}\rightrightarrows\mathds{H}$ so that subdifferentials are well behaved, for example so that the chain/sum rules for differentiation are valid for explicit computation.

#### 6.3.1 Bounds for discretised functionals

We start by computing estimates for discretised energies. This covers the cases when either the continuous/countable energy is projected onto $\mathds{U}^{n}$, or $\mathds{U}$ is finite dimensional. For notation we will use the continuous case; to recover the other cases, simply replace continuous indexing with discrete (i.e. $u(\vec{x})\leadsto u_{i}$). Let $\mathsf{\Pi}_{n}\colon\mathds{H}\to\mathds{U}^{n}$ denote the orthogonal projection. We consider the discretised function $\operatorname{E}|_{\mathds{U}^{n}}\colon\mathds{U}^{n}\to\mathds{R}$ and its subdifferential $\partial_{n}\operatorname{E}(\cdot)=\mathsf{\Pi}_{n}\partial\operatorname{E}(\cdot)$ on $\mathds{U}^{n}$. In our case, the behaviour of $\operatorname{E}|_{\mathds{U}^{n}}$ is equivalent to replacing $u$ with $\mathsf{\Pi}_{n}u$, and $\mathsf{A}^{*}$ with $\mathsf{\Pi}_{n}\mathsf{A}^{*}$.
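For concreteness, here is a minimal finite-dimensional sketch (our own illustration, not the paper's implementation) of this projection for the piecewise-constant spaces of Section 6.1, representing functions by their values on a fine grid; the `cells` index structure is an assumed stand-in for the mesh $\mathds{M}^{n}$.

```python
import numpy as np

def project_piecewise_constant(u, cells):
    """Orthogonal L2 projection Pi_n onto span{1_omega : omega in mesh}.

    On a uniform fine grid this is cell-averaging: `u` is a flat array of
    grid values and `cells` is a list of index arrays, one per mesh cell.
    """
    Pu = np.empty_like(u)
    for idx in cells:
        Pu[idx] = u[idx].mean()  # average of u over the cell omega
    return Pu

# Pi_n A* phi is then project_piecewise_constant(A.T @ phi, cells) when the
# kernels psi_j are themselves sampled on the same fine grid.
```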
##### Discrete gradient

We can use $\mathsf{\Pi}_{n}$ to compute the discrete subdifferential at $u_{n}\in\mathds{U}^{n}$:

$\partial_{n}\operatorname{E}(u_{n})(\vec{x})=[\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)](\vec{x})+\begin{cases}\{+\mu\}&u_{n}(\vec{x})>0\\ [-\mu,\mu]&u_{n}(\vec{x})=0\\ \{-\mu\}&u_{n}(\vec{x})<0\end{cases}$ (27)
$\eqqcolon[\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)](\vec{x})+\mu\mathsf{\Pi}_{n}\operatorname{sign}(u_{n}(\vec{x}))$ (28)

where we define $s+\mu[-1,1]=[s-\mu,s+\mu]$ for all $s\in\mathds{R}$, $\mu\geq 0$. As $|\!|\!|\cdot|\!|\!|=\lVert\cdot\rVert_{1}$, the natural metric for $\partial_{n}\operatorname{E}$ is $|\!|\!|\cdot|\!|\!|_{*}=\lVert\cdot\rVert_{\infty}$, which we can estimate as

$|\!|\!|\partial_{n}\operatorname{E}(u_{n})|\!|\!|_{*}=\max_{\vec{x}\in\Omega}\min_{v}\left\{|v|\operatorname{\;s.t.\;}v\in\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)(\vec{x})+\mu\mathsf{\Pi}_{n}\operatorname{sign}(u_{n}(\vec{x}))\right\}$ (29)
$=\max_{\vec{x}\in\Omega}\begin{cases}|[\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)](\vec{x})+\mu|&u_{n}(\vec{x})>0\\ |[\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)](\vec{x})-\mu|&u_{n}(\vec{x})<0\\ \max\left(|[\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)](\vec{x})|-\mu,0\right)&u_{n}(\vec{x})=0\end{cases}$ (30)

which can be used directly in Lemma 6.

##### Discrete gap

We now move on to the discrete gap, $\operatorname{E}(u_{n})-\min_{u\in\mathds{U}^{n}}\operatorname{E}(u)$. This can be computed with a dual representation (e.g. Duval2017a),

$\min_{u\in\mathds{U}^{n}}\operatorname{f}(\mathsf{A}u-\eta)+\mu|\!|\!| u|\!|\!|=\min_{u\in\mathds{H}}\max_{\vec{\varphi}\in\mathds{R}^{m}}(\mathsf{A}\mathsf{\Pi}_{n}u-\eta)\cdot\vec{\varphi}+\mu|\!|\!|\mathsf{\Pi}_{n}u|\!|\!|-\operatorname{f}^{*}(\vec{\varphi})$ (31)
$=\max_{\vec{\varphi}\in\mathds{R}^{m}}\min_{u\in\mathds{H}}(\mathsf{A}\mathsf{\Pi}_{n}u-\eta)\cdot\vec{\varphi}+\mu|\!|\!|\mathsf{\Pi}_{n}u|\!|\!|-\operatorname{f}^{*}(\vec{\varphi})$ (32)
$=\max_{\vec{\varphi}\in\mathds{R}^{m}}\begin{cases}-\eta\cdot\vec{\varphi}-\operatorname{f}^{*}(\vec{\varphi})&\qquad|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}|\!|\!|_{*}\leq\mu\\ -\infty&\qquad\text{else}\end{cases}$ (33)
$=-\min_{\vec{\varphi}\in\mathds{R}^{m}}\underbrace{\operatorname{f}^{*}(\vec{\varphi})+\eta\cdot\vec{\varphi}}_{\eqqcolon\operatorname{E}^{\dagger}(\vec{\varphi})}+\chi(|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}|\!|\!|_{*}\leq\mu).$ (34)

In particular,

$\operatorname{E}(u)-\min_{u\in\mathds{U}^{n}}\operatorname{E}(u)=\operatorname{E}(u)+\min_{\vec{\varphi}\in\mathds{R}^{m}\operatorname{\;s.t.\;}|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}|\!|\!|_{*}\leq\mu}\operatorname{E}^{\dagger}(\vec{\varphi})\leq\operatorname{E}(u)+\operatorname{E}^{\dagger}(\vec{\varphi})$ (35)

for any feasible $\vec{\varphi}\in\mathds{R}^{m}$. We further derive the criticality condition: if $(u^{*},\vec{\varphi}^{*})$ is a saddle point, then

$\mathsf{A}u^{*}-\eta\in\partial\operatorname{f}^{*}(\vec{\varphi}^{*}),\qquad\text{or equivalently}\qquad\vec{\varphi}^{*}=\nabla\operatorname{f}(\mathsf{A}u^{*}-\eta).$ (36)

We remark briefly that $\operatorname{E}^{\dagger}$ should be thought of as the dual of $\operatorname{E}$ but without the constraint. We choose to omit the constraint here to highlight that it is the only thing which changes between the discrete and continuous cases; the value of $\operatorname{E}^{\dagger}$ will remain the same.
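The gap bound (35) is cheap to evaluate once a feasible dual vector is available. The following sketch (ours, specialised for concreteness to the quadratic fidelity $\operatorname{f}=\tfrac{1}{2}\lVert\cdot\rVert_{\ell^{2}}^{2}$, so that $\operatorname{E}^{\dagger}(\vec{\varphi})=\tfrac{1}{2}\lVert\vec{\varphi}\rVert_{\ell^{2}}^{2}+\eta\cdot\vec{\varphi}$) rescales the natural candidate $\vec{\varphi}_{n}=\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)$ into the feasible set of (34); the sharper, optimal rescaling appears in (38) just below.

```python
import numpy as np

def discrete_gap_bound(A, Pi_n, u, eta, mu):
    """Upper bound E(u_n) + E^dagger(gamma * phi_n) of (35), for f = 0.5||.||^2.

    A is the (m x M) discretised forward operator, Pi_n a function applying
    the projection onto U^n, u the current iterate, mu the L1 weight.
    """
    r = A @ u - eta                            # residual; phi_n = grad f(r) = r
    E = 0.5 * r @ r + mu * np.abs(u).sum()     # primal energy E(u_n)
    phi = r
    dual_norm = np.abs(Pi_n(A.T @ phi)).max()  # ||| Pi_n A* phi |||_* (sup norm)
    gamma = min(1.0, mu / dual_norm)           # clip phi into the feasible set
    E_dual = 0.5 * gamma ** 2 * (phi @ phi) + gamma * (eta @ phi)
    return E + E_dual
```

Heuristically, near convergence $\vec{\varphi}_{n}\to\vec{\varphi}^{*}$ by (36), so the clipping becomes inactive and the bound tightens towards the true gap.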
Given $u_{n}\in\mathds{U}^{n}$, the optimality condition motivates a simple rule for choosing $\vec{\varphi}$:

$\vec{\varphi}_{n}\coloneqq\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta),\qquad\operatorname{E}(u)-\min_{u^{\prime}\in\mathds{U}^{n}}\operatorname{E}(u^{\prime})\leq\operatorname{E}(u)+\operatorname{E}^{\dagger}(\gamma\vec{\varphi}_{n})$ (37)

for some $0\leq\gamma\leq\frac{\mu}{|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}}$. In the case $\operatorname{f}(\cdot)=\frac{1}{2}\lVert\cdot\rVert_{\ell^{2}}^{2}$, one can use the optimal choice

$\gamma=\max\left(0,\min\left(\frac{-\eta\cdot\vec{\varphi}_{n}}{\lVert\vec{\varphi}_{n}\rVert_{\ell^{2}}^{2}},\frac{\mu}{|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}}\right)\right).$ (38)

To apply Algorithm 1, we are assuming that both $\operatorname{f}(\mathsf{A}u_{n}-\eta)$ and $\mathsf{\Pi}_{n}\mathsf{A}^{*}\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)$ are easily computable, therefore $\gamma$ and $\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma\vec{\varphi}_{n})$ are also easy to compute.

#### 6.3.2 Bounds for countable functionals

Extending the results of Section 6.3.1 to $\mathds{U}=\ell^{1}(\mathds{R})$ is analytically very simple but computationally relies heavily on the specific choice of $\mathsf{A}$. The computations of subdifferentials and gaps carry straight over, replacing $\mathsf{\Pi}_{n}$ with the identity and adding the sets $J_{n}\subset\mathds{N}$ which define $\mathds{U}^{n}=\{u\in\ell^{1}\operatorname{\;s.t.\;}i\notin J_{n}\implies u_{i}=0\}$. Recall that $|\!|\!|\partial\operatorname{E}(u_{n})|\!|\!|_{*}\coloneqq\inf_{s\in\operatorname{sign}(u_{n})}|\!|\!|\mathsf{A}^{*}\vec{\varphi}_{n}+\mu s|\!|\!|_{*}$ where the $\operatorname{sign}$ function has the pointwise set-valued definition indicated in (27)-(28).
Where $[u_{n}]_{i}=0$, the choice $s_{i}=\min(1,\max(-1,-\mu^{-1}[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}))$ achieves the minimal value:

$|\!|\!|\partial\operatorname{E}(u_{n})|\!|\!|_{*}=\max_{i\in\mathds{N}}\begin{cases}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}+\mu|&[u_{n}]_{i}>0\\ |[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}-\mu|&[u_{n}]_{i}<0\\ \max\left(|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|-\mu,0\right)&[u_{n}]_{i}=0\end{cases}$ (39)
$\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)\leq\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n}),\qquad\gamma_{0}\in\left[0,\frac{\mu}{|\!|\!|\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}}\right]$ (40)

where $\vec{\varphi}_{n}=\nabla\operatorname{f}(\mathsf{A}u_{n}-\eta)\in\mathds{R}^{m}$ is always exactly computable. In the countable case, the sets $J_{n}$ give a clear partition into known/unknown values in these definitions. For $i\in J_{n}$ the computation is the same as in Section 6.3.1, and for $i\notin J_{n}$ we know $[u_{n}]_{i}=0$, which simplifies the remaining computations. This leads to:

$|\!|\!|\partial\operatorname{E}(u_{n})|\!|\!|_{*}=\max\left(\max_{i\in J_{n}}|[\partial\operatorname{E}(u_{n})]_{i}|,\ \sup_{i\notin J_{n}}|[\partial\operatorname{E}(u_{n})]_{i}|\right)=\max\left(|\!|\!|\partial_{n}\operatorname{E}(u_{n})|\!|\!|_{*},\ \sup_{i\notin J_{n}}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|-\mu\right)$ (41)
$|\!|\!|\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}=\max\left(\max_{i\in J_{n}}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|,\ \sup_{i\notin J_{n}}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|\right)=\max\left(|\!|\!|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*},\ \sup_{i\notin J_{n}}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|\right).$ (42)

Both estimates rely only on an upper bound of $\sup_{i\notin J_{n}}|[\mathsf{A}^{*}\vec{\varphi}_{n}]_{i}|$. One example computing this value is seen in Section 7.2.

#### 6.3.3 Bounds for continuous functionals

Finally we extend the results of Section 6.3.1 to continuous problems.
Similar to the countable case (39)-(40), the exact formulae can be written down immediately:

$|\!|\!|\partial\operatorname{E}(u_{n})|\!|\!|_{*}=\max_{\vec{x}\in\Omega}\begin{cases}|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})+\mu|&u_{n}(\vec{x})>0\\ |[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})-\mu|&u_{n}(\vec{x})<0\\ \max\left(|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})|-\mu,0\right)&u_{n}(\vec{x})=0\end{cases}$ (43)
$\operatorname{E}(u_{n})-\inf_{u\in\mathds{H}}\operatorname{E}(u)\leq\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n}),\qquad\gamma_{0}\in\left[0,\frac{\mu}{|\!|\!|\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}}\right]$ (44)

with $\operatorname{E}^{\dagger}$ as defined in (34). Recall that there is a mesh $\mathds{M}^{n}$ corresponding to $\mathds{U}^{n}$ such that $u_{n}$ is constant on each $\omega\in\mathds{M}^{n}$, so we can rewrite these bounds:

$|\!|\!|\partial\operatorname{E}(u_{n})|\!|\!|_{*}=\max_{\omega\in\mathds{M}^{n}}\begin{cases}\lVert\mathsf{A}^{*}\vec{\varphi}_{n}+\mu\rVert_{L^{\infty}(\omega)}&u_{n}|_{\omega}>0\\ \lVert\mathsf{A}^{*}\vec{\varphi}_{n}-\mu\rVert_{L^{\infty}(\omega)}&u_{n}|_{\omega}<0\\ \max(0,\lVert\mathsf{A}^{*}\vec{\varphi}_{n}\rVert_{L^{\infty}(\omega)}-\mu)&u_{n}|_{\omega}=0\end{cases}$ (45)
$|\!|\!|\mathsf{A}^{*}\vec{\varphi}_{n}|\!|\!|_{*}=\max_{\omega\in\mathds{M}^{n}}\lVert\mathsf{A}^{*}\vec{\varphi}_{n}\rVert_{L^{\infty}(\omega)}.$ (46)

Now both values can be estimated via pixel-wise supremum norms of $\mathsf{A}^{*}\vec{\varphi}_{n}$, which we have assumed is sufficiently smooth. We will therefore use a pixel-wise Taylor expansion to provide a simple and accurate estimate. For instance, let $\vec{x}_{i}$ be the midpoint of the pixel $\omega$; then

$\lVert\mathsf{A}^{*}\vec{\varphi}_{n}\rVert_{L^{\infty}(\omega)}\leq|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})|+\frac{\operatorname{diam}(\omega)}{2}|[\nabla\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})|+\frac{\operatorname{diam}(\omega)^{2}}{8}|\mathsf{A}^{*}\vec{\varphi}_{n}|_{C^{2}}.$ (47)

In this work we chose a first-order expansion because we are looking for extrema of $\mathsf{A}^{*}\vec{\varphi}_{n}$, i.e. we are most interested in the squares $\omega$ such that

$|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})|\approx\mu,\qquad|[\nabla\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})|\approx 0,\qquad[\nabla^{2}\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})\preceq 0.$ (48)

A zeroth-order expansion would be optimally inefficient (approximating $|[\nabla\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x}_{i})|$ with $|\mathsf{A}^{*}\vec{\varphi}_{n}|_{C^{1}}$) and a second-order expansion would possibly be more elegant but harder to implement. We found that a first-order expansion was simple and efficient. The bounds presented here for continuous problems emphasise the twinned properties required for adaptive mesh optimisation. The mesh should be refined greedily towards the structures of $u^{*}$, but must also be sufficiently uniform to provide a good estimate of $\operatorname{E}(u^{*})$. This is a classical exploitation/exploration trade-off: exploiting visible structure whilst searching for other structures which are not yet visible.

### 6.4 Support detection

The main motivation for using L1 penalties in applications is that they recover sparse signals; in the case of compressed sensing, the support of $u^{*}$ is also provably close to the "true" support Duval2017a; Poon2018. If $u_{n}\approx u^{*}$ in the appropriate sense, then we should also be able to quantify the statement $\operatorname{supp}(u_{n})\approx\operatorname{supp}(u^{*})$. Such methods are referred to as _safe screening_ rules ElGhaoui2010, which gradually identify the support and allow the optimisation algorithm to constrain parts of the reconstruction to 0. In this subsection we propose a new simple screening rule which is capable of generalising to our continuous subspace approximation setting. It is likely that more advanced methods Bonnefoy2015; Ndiaye2017 can also be adapted, although that is beyond the scope of this work. The key difference is the allowance of inexact computations resulting from estimates such as (47). The support of $u^{*}$ has already been characterised very precisely Duval2017a; Poon2018. In particular, the support consists of at most $m$ distinct points which form a subset of $\{\vec{x}\in\Omega\operatorname{\;s.t.\;}|\mathsf{A}^{*}\vec{\varphi}^{*}|(\vec{x})=\mu\}$ (an equivalent statement holds for the countable case).
Less formally, this can also be seen from the subdifferential computations in Section 6.3: for all $\vec{x}\in\operatorname{supp}(u^{*})$ we have

$0\in\partial\operatorname{E}(u^{*})(\vec{x})=[\mathsf{A}^{*}\vec{\varphi}^{*}](\vec{x})+\mu\operatorname{sign}(u^{*}(\vec{x})).$ (49)

Heuristically, we will use the strong convexity of $\operatorname{E}^{\dagger}$ from (34) and the smoothness of $\mathsf{A}^{*}$ to quantify the statement:

$\text{if}\quad\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})\approx 0\quad\text{then}\quad\left\{\vec{x}\operatorname{\;s.t.\;}|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})|\ll\mu\right\}\subset\{\vec{x}\operatorname{\;s.t.\;}u^{*}(\vec{x})=0\}.$

Recall that $\nabla\operatorname{f}$ is 1-Lipschitz if and only if $\operatorname{f}^{*}$ is 1-strongly convex (Hiriart2013, Chapter 10, Thm. 4.2.2). Therefore, if $\gamma_{0}\vec{\varphi}_{n}$ and $\vec{\varphi}^{*}$ are both dual-feasible, then

$\tfrac{1}{2}\lVert\gamma_{0}\vec{\varphi}_{n}-\vec{\varphi}^{*}\rVert_{\ell^{2}}^{2}\leq\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})-\operatorname{E}^{\dagger}(\vec{\varphi}^{*})=\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})+\operatorname{E}(u^{*})\leq\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})+\operatorname{E}(u_{n}),$ (50)

which gives an easily computable bound on $\lVert\gamma_{0}\vec{\varphi}_{n}-\vec{\varphi}^{*}\rVert_{\ell^{2}}$.
Now we estimate $\mathsf{A}^{*}\vec{\varphi}_{n}$ on the support of $u^{*}$: $\displaystyle\min_{\vec{x}\in\operatorname{supp}(u^{*})}|[\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})|$ $\displaystyle\geq\min_{\vec{x}\in\operatorname{supp}(u^{*})}|[\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})|$ (51) $\displaystyle=\gamma_{0}^{-1}\min_{\vec{x}\in\operatorname{supp}(u^{*})}|[\mathsf{A}^{*}\gamma_{0}\vec{\varphi}_{n}](\vec{x})|$ (52) $\displaystyle\geq\gamma_{0}^{-1}\min_{\vec{x}\in\operatorname{supp}(u^{*})}|[\mathsf{A}^{*}\vec{\varphi}^{*}](\vec{x})|-|[\mathsf{A}^{*}\gamma_{0}\vec{\varphi}_{n}-\mathsf{A}^{*}\vec{\varphi}^{*}](\vec{x})|$ (53) $\displaystyle=\gamma_{0}^{-1}\min_{\vec{x}\in\operatorname{supp}(u^{*})}\mu-|[\mathsf{A}^{*}\gamma_{0}\vec{\varphi}_{n}-\mathsf{A}^{*}\vec{\varphi}^{*}](\vec{x})|$ (54) $\displaystyle\geq\gamma_{0}^{-1}\left(\mu-|\mathsf{A}^{*}|_{\ell^{2}\to L^{\infty}}{\left\lVert\gamma_{0}\vec{\varphi}_{n}-\vec{\varphi}^{*}\right\rVert}_{\ell^{2}}\right).$ (55) Therefore, $|[\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}](\vec{x})|<\gamma_{0}^{-1}\left(\mu-\sqrt{2(\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n}))}\,|\mathsf{A}^{*}|_{\ell^{2}\to L^{\infty}}\right)\qquad\implies\qquad u^{*}(\vec{x})=0.$ (56) This statement is valid whether $\vec{x}$ is a continuous or countable index; the only distinction is to switch to $\ell^{\infty}$ in the norm of $\mathsf{A}^{*}$. To make the equivalent statement on the discretised problem, simply replace $\gamma_{0}$ with $\gamma$ and $\mathsf{A}^{*}$ with $\mathsf{\Pi}_{n}\mathsf{A}^{*}$. There are two short observations on this formula (a sketch of the screening test itself follows the observations): * • The convergence guarantee from Theorem 4.2 is for the primal gap $\operatorname{E}(u_{n})-\operatorname{E}(u^{*})$, rather than the primal-dual gap $\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})$ used here. Although there is no guaranteed rate for the primal-dual gap, it is much more easily computable than the primal gap. * • In Section 6.1, $|\mathsf{A}^{*}|_{\ell^{2}\to C^{1}}<\infty$ was required to compute a rate of convergence for $\operatorname{E}(u_{n})$, but only $|\mathsf{A}^{*}|_{\ell^{2}\to L^{\infty}}<\infty$ is needed to estimate the support. 
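A minimal sketch of the screening test follows; the function and variable names are our own, and the bound on the dual distance is exactly the computable quantity from (50).

```python
import numpy as np

def screen_support(Astar_phi_vals, primal_E, dual_E, mu, gamma0, Astar_norm):
    """Screening test of (56): certify points outside supp(u*).

    Astar_phi_vals : sampled values of Pi_n A* phi_n on candidate points
    primal_E, dual_E : E(u_n) and E^dagger(gamma0 * phi_n)
    mu : regularisation parameter
    gamma0 : rescaling making gamma0 * phi_n dual-feasible
    Astar_norm : (an upper bound on) |A*|_{l2 -> L^inf}
    Returns a boolean mask: True where u*(x) = 0 is certified.
    """
    gap = primal_E + dual_E                       # primal-dual gap, >= 0
    dual_dist = np.sqrt(2.0 * max(gap, 0.0))      # (50): ||gamma0 phi_n - phi*||
    threshold = (mu - dual_dist * Astar_norm) / gamma0
    return np.abs(Astar_phi_vals) < threshold     # vacuous if threshold <= 0

# toy usage: a tiny gap certifies everything well below mu/gamma0
vals = np.array([0.01, 0.5, 0.99, 1.0])
mask = screen_support(vals, primal_E=1e-6, dual_E=1e-6, mu=1.0,
                      gamma0=1.0, Astar_norm=1.0)
print(mask)   # [ True  True  True False]
```

Note that the test is vacuous until the primal-dual gap is small, consistent with the first observation above.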
### 6.5 Operator norms For numerical implementation of (18), we must accurately estimate several operator norms of $\mathsf{A}$ of the form in (17). In particular, there are kernels $\psi_{j}\in\mathds{H}$ such that $(\mathsf{A}u)_{j}=\left\langle\psi_{j},u\right\rangle$ for each $j\in[m]$. Verifying that ${\left\lVert\mathsf{A}\right\rVert}\leq 1$ can be performed by computing $|\mathsf{A}\mathsf{A}^{*}|_{\ell^{2}\to\ell^{2}}$, and the adaptivity described in Sections 6.1, 6.3.3, and 6.4 requires the values of $|\mathsf{A}^{*}|_{\ell^{2}\to L^{\infty}}$, $|\mathsf{A}^{*}|_{\ell^{2}\to C^{1}}$, and $|\mathsf{A}^{*}|_{\ell^{2}\to C^{2}}$. The aim of this section is to provide estimates of these norms and seminorms for the numerical examples presented in Section 7. The following lemma allows for exact computation of the operator norm of $\mathsf{A}$. ###### Lemma 8 If $\mathsf{A}\colon\mathds{H}\to\mathds{R}^{m}$ has kernels $\psi_{j}\in\mathds{H}$ for $j\in[m]$, then $\mathsf{A}\mathsf{A}^{*}\in\mathds{R}^{m\times m}$ has entries $(\mathsf{A}\mathsf{A}^{*})_{i,j}=\left\langle\psi_{i},\psi_{j}\right\rangle$, so the spectral norm ${\left\lVert\mathsf{A}^{*}\mathsf{A}\right\rVert}={\left\lVert\mathsf{A}\mathsf{A}^{*}\right\rVert}$ can be computed efficiently. ###### Proof To compute the entries of $\mathsf{A}\mathsf{A}^{*}\colon\mathds{R}^{m}\to\mathds{R}^{m}$, observe that for any $\vec{r}\in\mathds{R}^{m}$ $(\mathsf{A}\mathsf{A}^{*}\vec{r})_{i}=\left\langle\psi_{i},\mathsf{A}^{*}\vec{r}\right\rangle=\left\langle\psi_{i},\sum_{j=1}^{m}r_{j}\psi_{j}\right\rangle=\sum_{j=1}^{m}\left\langle\psi_{i},\psi_{j}\right\rangle r_{j}$ (57) as required. ∎ If ${\left\lVert\mathsf{A}^{*}\mathsf{A}\right\rVert}$ is not analytically tractable, then Lemma 8 enables it to be computed using standard finite dimensional methods: the operator $\mathsf{A}\mathsf{A}^{*}$ is always finite dimensional and can be computed without discretisation error. 
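When the inner products $\left\langle\psi_{i},\psi_{j}\right\rangle$ themselves are not available in closed form, quadrature is the natural fallback. The following sketch (our illustration, not code from the paper; the Gaussian kernels and constants are borrowed from example (61) below) assembles the Gram matrix of Lemma 8 and reads off the spectral norm.

```python
import numpy as np

# Sketch of Lemma 8: (A A*)_{ij} = <psi_i, psi_j>, approximated by
# quadrature on a fine grid, then the spectral norm ||A A*|| = ||A* A||.
# Kernel choice (Gaussian, sigma = 0.12, 30 regularly spaced centres)
# mirrors example (61); any sampled kernels would do.
m, sigma = 30, 0.12
x = np.linspace(0.0, 1.0, 2001)              # quadrature nodes on [0, 1]
dx = x[1] - x[0]
centres = np.arange(m) / (m - 1)
Psi = np.exp(-(x[None, :] - centres[:, None]) ** 2 / (2 * sigma ** 2))
Psi /= np.sqrt(2 * np.pi * sigma ** 2)       # row j is psi_j sampled on the grid

gram = (Psi * dx) @ Psi.T                    # entries <psi_i, psi_j> by quadrature
op_norm = np.linalg.eigvalsh(gram)[-1]       # largest eigenvalue of A A*
print(f"||A A*|| ~ {op_norm:.4f}")           # rescale A to enforce ||A|| <= 1
```

In the continuous case, when $\mathds{H}=L^{2}(\Omega)$, we also need to estimate the smoothness properties of $\mathsf{A}^{*}$. A generic result for this is given in the following lemma.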
###### Lemma 9 If $\mathsf{A}\colon L^{2}([0,1]^{d})\to\mathds{R}^{m}$ has kernels $\psi_{j}\in L^{2}(\Omega)\cap C^{k}(\Omega)$ for $j\in[m]$, then for all $\frac{1}{q}+\frac{1}{q^{*}}=1$, $q\in[1,\infty]$, we have $\displaystyle|\mathsf{A}^{*}\vec{r}|_{C^{k}}$ $\displaystyle\coloneqq\sup_{\vec{x}\in\Omega}|\nabla^{k}[\mathsf{A}^{*}\vec{r}]|(\vec{x})\leq\sup_{\vec{x}\in\Omega}{\left\lVert(\nabla^{k}\psi_{j}(\vec{x}))_{j=1}^{m}\right\rVert}_{\ell^{q^{*}}}{\left\lVert\vec{r}\right\rVert}_{\ell^{q}},$ (58) $\displaystyle|\mathsf{A}^{*}|_{\ell^{2}\to C^{k}}$ $\displaystyle\coloneqq\sup_{{\left\lVert\vec{r}\right\rVert}_{\ell^{2}}\leq 1}|\mathsf{A}^{*}\vec{r}|_{C^{k}}\leq\sup_{\vec{x}\in\Omega}{\left\lVert(\nabla^{k}\psi_{j}(\vec{x}))_{j=1}^{m}\right\rVert}_{\ell^{q^{*}}}\times\begin{cases}1&q\geq 2\\ \sqrt{m^{2-q}}&q<2\end{cases}.$ (59) ###### Proof For the first inequality, we apply the Hölder inequality on $\mathds{R}^{m}$: $|\nabla^{k}[\mathsf{A}^{*}\vec{r}]|(\vec{x})=\left|\sum_{j=1}^{m}\nabla^{k}\psi_{j}(\vec{x})r_{j}\right|\leq\left(\sum_{j=1}^{m}|\nabla^{k}\psi_{j}(\vec{x})|^{q^{*}}\right)^{\frac{1}{q^{*}}}{\left\lVert\vec{r}\right\rVert}_{\ell^{q}}={\left\lVert(\nabla^{k}\psi_{j}(\vec{x}))_{j}\right\rVert}_{\ell^{q^{*}}}{\left\lVert\vec{r}\right\rVert}_{\ell^{q}}\;.$ For the second inequality, if $q\geq 2$ and $\sum_{j=1}^{m}r_{j}^{2}\leq 1$, then $|r_{j}|\leq 1$ for all $j$ and ${\left\lVert\vec{r}\right\rVert}_{\ell^{q}}^{q}\leq{\left\lVert\vec{r}\right\rVert}_{\ell^{2}}^{2}\leq 1$. If $q<2$ and ${\left\lVert\vec{r}\right\rVert}_{\ell^{2}}\leq 1$, then we again use Hölder’s inequality: $\sum_{j=1}^{m}r_{j}^{q}\leq\Big(\sum_{j=1}^{m}1^{Q^{*}}\Big)^{\frac{1}{Q^{*}}}\Big(\sum_{j=1}^{m}r_{j}^{qQ}\Big)^{\frac{1}{Q}}\leq m^{\frac{2-q}{2}}$ for $Q=\frac{2}{q}$. ∎ The examples in Section 7 require explicit computations of the expressions in Lemmas 8 and 9. These computations are provided in the appendix, Theorem C.1. ## 7 Numerical examples We present four numerical examples. The first two are in 1D to demonstrate the performance of different variants of Algorithm 1, both with and without adaptivity. In particular, we explore sparse Gaussian deconvolution and sparse signal recovery from Fourier data. We compare with the _continuous basis pursuit_ (CBP) discretisation Ekanadham2011 ; Duval2017b , which is also designed to achieve super-resolution accuracy within a convex framework. More details of this method will be provided in Section 7.1. The next example is 2D reconstruction from Radon or X-ray data with wavelet sparsity and a robust data fidelity. As the forward operator is not sufficiently smooth, we must optimise in $\ell^{1}(\mathds{R})$, which naturally leads to the choice of a wavelet basis. Finally, we process a dataset which represents a realistic application in biological microscopy, referred to as STORM microscopy. In essence, the task is to perform 2D Gaussian de-blurring/super-resolution and denoising to find the locations of sparse spikes of signal. In this section, the main aim is to minimise $\operatorname{E}_{0}(u_{n})=\operatorname{E}(u_{n})-\operatorname{E}(u^{*})$, and so this will be our main metric for the success of an algorithm, referred to as the “continuous gap”. Lemma 6 only provides guarantees on the values of $\min_{n\leq N}\operatorname{E}_{0}(u_{n})$, so it is this monotone estimate which is plotted. 
As $\operatorname{E}(u^{*})$ is not known exactly, we always use the estimate $\min_{n\leq N}\operatorname{E}_{0}(u_{n})\approx\min_{n\leq N}\operatorname{E}(u_{n})+\min_{n^{\prime}\leq n}\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n^{\prime}})$. Another quantity of interest is the minimisation of the discrete energy, $\min_{n\leq N}\operatorname{E}(u_{n})+\min_{n^{\prime}\leq n}\operatorname{E}^{\dagger}(\gamma\vec{\varphi}_{n^{\prime}})$, which will be referred to as the “discrete gap”. Note that for the adaptive schemes the discrete gap may not be monotonic, as the discrete dual problem changes with $N$. The code to reproduce these examples can be found online (https://github.com/robtovey/2020SpatiallyAdaptiveFISTA). ### 7.1 1D continuous LASSO In this example we choose $\mathds{U}=\mathcal{M}([0,1])$, $\mathds{H}=L^{2}([0,1])$, $\operatorname{f}(\cdot)=\frac{1}{2}{\left\lVert\cdot\right\rVert}_{\ell^{2}}^{2}$ and $\mathsf{A}\colon\mathds{U}\to\mathds{R}^{30}$ with either random Fourier kernels: $(\mathsf{A}u)_{j}=\int_{0}^{1}\cos(a_{j}x)\,\mathrm{d}u(x),\qquad a_{j}\sim\operatorname{Uniform}[-100,100],\ j=1,2,\ldots,30,\ \mu=0.02,$ (60) or Gaussian kernels on a regular grid: $(\mathsf{A}u)_{j}=(2\pi\sigma^{2})^{-\frac{1}{2}}\int_{0}^{1}\exp\left(-\frac{(x-(j-1)\Delta)^{2}}{2\sigma^{2}}\right)\mathrm{d}u(x),\quad\sigma=0.12,\ \Delta=\tfrac{1}{29},\ j=1,2,\ldots,30,\ \mu=0.06.$ (61) Several variants of FISTA are compared for these examples, but the key alternative shown here is the CBP discretisation. For this choice of $\operatorname{f}$, we call (18) the continuous LASSO problem, for which there are many numerical methods (c.f. Bredies2013 ; Castro2016 ; Boyd2017 ; Catala2019 ); however, most require the solution of a non-convex problem. We have focused on CBP because it approximates $u^{*}$ through a convex discrete optimisation problem which is asymptotically exact in the limit $h\to 0$. It can also be optimised with FISTA, which allows for direct comparison with the uniform and adaptive mesh approaches. The idea is that, for a fixed mesh, the kernels of $\mathsf{A}$ are expanded to first order on each pixel and a particular first order basis is also chosen Ekanadham2011 ; Duval2017b . If $u^{*}$ has only one Dirac spike in each pixel, then the zeroth order information should correspond to the mass of the spike, and the additional first order information should determine its location. As shown in Section 6, in 1D we have $a_{\operatorname{U}}=a_{\operatorname{E}}=2$. The estimates given in (25) and (26) in dimension $d=1$ predict that the adaptive energy will decay at a rate of $\operatorname{E}(u_{n})-\operatorname{E}(u^{*})\lesssim\frac{1}{n}$ so long as the pixel size also decreases at a rate of $h\sim\frac{1}{n}$. 
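For concreteness, the following sketch (our illustration; the helper name and mesh size are our own choices) assembles the piecewise-constant discretisation of the Gaussian operator (61) on a uniform mesh. An element of $\mathds{U}^{n}$ is a vector of pixel densities, and each matrix entry is the kernel integrated over a pixel, available in closed form through the error function.

```python
import numpy as np
from scipy.special import erf

def gaussian_row_integrals(edges, centre, sigma=0.12):
    """Exact integral of the (61)-style Gaussian kernel over each pixel
    [edges[p], edges[p+1]], via the error function."""
    z = (edges - centre) / (np.sqrt(2.0) * sigma)
    cdf = 0.5 * (1.0 + erf(z))            # unit-mass Gaussian CDF at the edges
    return cdf[1:] - cdf[:-1]

# uniform mesh of [0, 1] with 128 pixels; 30 kernels on a regular grid as in (61)
edges = np.linspace(0.0, 1.0, 129)
A_h = np.stack([gaussian_row_integrals(edges, (j - 1) / 29)
                for j in range(1, 31)])
print(A_h.shape)                          # (30, 128): measurements x pixels
```

The adaptive method replaces this uniform mesh with one driven by the refinement criterion described next.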
To achieve these rates, we implement a refinement criterion from Lemma 6 with the guarantee $\operatorname{E}(u_{n_{k}-1})-\operatorname{E}(u^{*})\lesssim 2^{-k}$, using the estimates made in Section 6.3. We choose subspaces $\mathds{U}^{n}$ to approximately enforce $\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})\leq 2(\operatorname{E}(u_{n})+\operatorname{E}^{\dagger}(\gamma\vec{\varphi}_{n})),$ (62) i.e. the continuous gap is bounded by twice the discrete gap. In particular, note that for $\gamma_{0}\approx\gamma$, $\operatorname{E}^{\dagger}(\gamma_{0}\vec{\varphi}_{n})=\tfrac{1}{2}{\left\lVert\gamma_{0}\vec{\varphi}_{n}\right\rVert}^{2}+\gamma_{0}\eta\cdot\vec{\varphi}_{n}=\frac{\gamma_{0}}{\gamma}\left(\frac{\gamma_{0}}{\gamma}\tfrac{1}{2}{\left\lVert\gamma\vec{\varphi}_{n}\right\rVert}^{2}+\gamma\eta\cdot\vec{\varphi}_{n}\right)\approx\frac{\gamma_{0}}{\gamma}\operatorname{E}^{\dagger}(\gamma\vec{\varphi}_{n}).$ (63) To convert this into a spatial refinement criterion, recall that $\frac{\gamma_{0}}{\gamma}\approx\frac{|||\mathsf{A}^{*}\vec{\varphi}_{n}|||_{*}}{|||\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}|||_{*}}=\frac{\max_{\omega\in\mathds{M}^{n}}{\left\lVert\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{\infty}(\omega)}}{\max_{\omega\in\mathds{M}^{n}}|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}(\omega)|}\approx\max_{\omega\in\mathds{M}^{n}}\frac{{\left\lVert\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{\infty}(\omega)}}{|\mathsf{\Pi}_{n}\mathsf{A}^{*}\vec{\varphi}_{n}(\omega)|}$ (64) is the maximum ratio of second vs. zeroth order Taylor approximations of $\mathsf{A}^{*}\vec{\varphi}_{n}$ on pixel $\omega$. This was found to be an efficient method of selecting pixels for refinement, using quantities which had already been computed. 
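In code, the selection step behind (64) is one vectorised pass over the mesh. A hedged sketch with our own names (the per-pixel suprema would come from the first order Taylor bounds of Section 6.3, and the threshold echoes the factor-two target of (62)):

```python
import numpy as np

def select_pixels_to_refine(sup_Astar_phi, proj_Astar_phi, tol=2.0):
    """Greedy refinement via the per-pixel ratio in (64).

    sup_Astar_phi  : per-pixel upper bounds on ||A* phi_n||_{L^inf(omega)}
                     (e.g. zeroth order value + pixel size * first order bound)
    proj_Astar_phi : per-pixel values |Pi_n A* phi_n(omega)|
    tol            : refine every pixel whose ratio exceeds tol
    """
    eps = np.finfo(float).tiny
    ratio = sup_Astar_phi / np.maximum(np.abs(proj_Astar_phi), eps)
    return np.nonzero(ratio > tol)[0]     # indices of pixels to subdivide

# toy usage: pixel 2 has a large sup/projected mismatch, so it is refined
sup_vals  = np.array([1.00, 1.01, 0.90, 0.50])
proj_vals = np.array([0.99, 1.00, 0.30, 0.49])
print(select_pixels_to_refine(sup_vals, proj_vals))   # -> [2]
```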
Note briefly that this greedy strategy directly targets uncertainty: refinements also happen outside of the support of $u_{n}$, to guarantee that $u_{n}$ is representative of $u^{*}$. Such refinement is necessary to avoid discrete minimisers of $\operatorname{E}$ which are not global minimisers. Figure 1: Rates of continuous/discrete gap convergence for different LASSO algorithms with 128, 256, or 512 pixels. The “adaptive” method uses the proposed algorithm. Both “fixed” and “CBP” use standard FISTA with a uniform discretisation. Figure 2: Convergence plots for solving 1D problems with different algorithms. “Adaptive” methods use Algorithm 1 with fewer than 1024 pixels and the remaining methods use a uniform discretisation of 1024 pixels. Figure 3: Example reconstruction from the algorithms considered in Fig. 2. Pixel boundaries are indicated on the $x$-axis and the filtering method of Section 6.4 allows us to exclude the red shaded regions from $\operatorname{supp}(u^{*})$. Values on the $y$-axis are normalised to units of mass, i.e. a Dirac mass would have height 1. ##### Comparison of discretisation methods In Fig. 1 we compare the three core approaches: fixed uniform discretisation, adaptive discretisation, and CBP. In particular, we wish to observe their convergence properties as the number of pixels is allowed to grow. In each case we use a FISTA stepsize of $t_{n}=\frac{n+19}{20}$. The adaptive discretisation is started with one pixel and limited to 128, 256, or 512 pixels, while the fixed and CBP discretisations are uniform with the maximum number of pixels. The main observations are: * • The adaptive scheme is much more efficient: in both examples the adaptive scheme with 128 pixels is at least as good as both fixed discretisations with 512 pixels. In fact, a maximum of only 214 pixels was needed by the adaptive method in either example. * • With Fourier kernels the uniform piecewise constant discretisation is more efficient than CBP, but in the Gaussian case this is reversed. This suggests that the performance of CBP depends on the smoothness of $\mathsf{A}$. * • The discrete gaps for non-adaptive optimisation behave as is common for FISTA: initial convergence is polynomial until a locally linear regime activates Tao2016 . CBP is always slower to converge than the piecewise constant discretisation. * • The adaptive refinement criterion succeeds in keeping the continuous and discrete gaps close for all $n$, i.e. (62). It is not completely fair to judge CBP by the continuous gap because, although it generates a continuous representation, this representation is not necessarily consistent with the discrete gap being optimised, unlike when discretising with finite element methods. On the other hand, this is still the intended interpretation of the algorithm and we have no more appropriate metric for success in this case. ##### Comparison of FISTA variants Fig. 2 compares many methods with either fixed or adaptive discretisations. Each adaptive scheme is allowed up to 1024 pixels and each uniform discretisation uses exactly 1024. An example of each reconstruction is shown in Fig. 3. The adaptive method better identifies the support of $u^{*}$ and clearly localises pixels on that support. The reconstruction on the uniform grid fails to provably identify the support of $u^{*}$, despite having found a qualitatively accurate discrete minimiser. 
The “Greedy FISTA” implementation was proposed in Liang2018 and we include the adaptive variant despite the lack of a convergence proof. The remaining FISTA algorithms use a stepsize of $t_{n}=\frac{n+a-1}{a}$ for the given value of $a$, as proposed in Chambolle2015 (see the sketch after this comparison for the stepsize rule in code). In this example CBP used the greedy FISTA implementation, which gave faster observed convergence. Fig. 2 compares the discrete gaps because this is the accurate metric for fixed discretisations, and for the adaptive discretisation it should also be an accurate predictor of the continuous gap. The main observations are: * • Each algorithm displays very similar convergence properties. The main difference is that the reconstructions with fixed discretisations accelerate after $10^{4}$-$10^{5}$ iterations. * • During the initial “slow” phase, adaptive and fixed discretisations appear to achieve very similar (discrete) convergence rates. The coarse-to-fine adaptivity is not slower than fixed discretisations in this regime. * • Lemma 6 accurately predicts the $\frac{1}{n}$ rate of the adaptive methods, mirrored in the fixed discretisations. This suggests that high-resolution discretisations are also initially limited by this $\frac{1}{n}$ rate before entering the asymptotic regime, consistent with (14). * • The fastest FISTA stepsize choice is consistently the greedy variant, although $a=20$ is very comparable. * • While each adaptive algorithm is allowed to use up to 1024 pixels, in Fig. 2 the most used was 235. ##### Comparison of fixed and adaptive discretisation Motivated by the findings in Fig. 2, we now look more closely at the performance of the $a=20$ and greedy FISTA schemes. We have convergence results for the former, but the latter typically performs best for non-adaptive optimisation and is never worse than $a=20$ in the adaptive setting. The question is whether it is faster and more efficient to use the proposed adaptive scheme, or a classical scheme at sufficiently high uniform resolution. The fixed discretisations use 1024 pixels (i.e. a constant pixel size of $2^{-10}$ in Fig. 4) and the adaptive discretisation starts with two pixels and an upper limit of 1024. As expected, the fixed discretisation starts with a smaller continuous gap before plateauing at a sub-optimal gap around $\operatorname{E}=\operatorname{E}(u^{*})+0.1$. Fig. 4 shows convergence of pixel size and continuous gap with respect to the number of iterations. Fig. 5 shows the more practical attributes of continuous gap and number of pixels against execution time. We see that the adaptive discretisation is consistently capable of computing lower energies with fewer pixels and in less time than the uniform discretisation. The convergence behaviour is very consistent with respect to the number of iterations. Suppose that the numerical aim is to find a function $u_{n}$ with $\operatorname{E}(u_{n})-\operatorname{E}(u^{*})\leq 0.1$: all methods converge after $O(10^{3})$ iterations, demonstrating some equivalence between the two FISTA algorithms. For $n\in[10^{3},10^{4}]$, in both problems, the adaptive schemes coincide with the fixed schemes in both energy and minimum pixel size. On the other hand, the adaptive scheme achieves this energy in almost an order of magnitude less time and with fewer pixels. 
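For reference, here is a minimal FISTA sketch for the discretised LASSO with the $t_{n}=\frac{n+a-1}{a}$ stepsize on a fixed grid (our own illustration on a random toy instance; the adaptive and greedy variants modify the momentum and the mesh handling).

```python
import numpy as np

def fista_lasso(A, y, mu, a=20, iters=500):
    """FISTA for min_u 0.5 ||A u - y||^2 + mu ||u||_1 with stepsize
    parameter t_n = (n + a - 1)/a; momentum (t_n - 1)/t_{n+1} = (n-1)/(n+a)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1]); v = u.copy()
    for n in range(1, iters + 1):
        grad = A.T @ (A @ v - y)
        w = v - grad / L
        u_new = np.sign(w) * np.maximum(np.abs(w) - mu / L, 0.0)  # soft-threshold
        beta = (n - 1) / (n + a)
        v = u_new + beta * (u_new - u)
        u = u_new
    return u

# toy usage on a random instance with two spikes
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 128)) / np.sqrt(30)
u_true = np.zeros(128); u_true[[20, 90]] = [1.0, -0.5]
u_hat = fista_lasso(A, A @ u_true + 0.01 * rng.standard_normal(30), mu=0.02)
print(np.nonzero(np.abs(u_hat) > 1e-3)[0])   # recovered support, roughly [20, 90]
```

With $a=2$ this is the classical rule; larger $a$ damps the momentum, which is exactly what the comparisons above vary.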
Figure 4: Continuous convergence of adaptive (coarse-to-fine pixel size) compared with uniform discretisation (constant pixel size) with respect to number of iterations. Figure 5: Continuous convergence of adaptive compared with uniform discretisation with respect to wall-clock time and total number of pixels (memory requirement). Figure 6: Example tree representation of 1D wavelets. Left: nodes, leaves, and mesh of the discretisation, $J_{n}=\{(0,0),(0,1),(0,2),(1,2),(1,1)\}$, $\operatorname{leaf}(J_{n})=\{(0,2),(1,2),(1,1)\}$, $\mathds{M}^{n}=\left\{[0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},1)\right\}$. Right: arrangement into a tree with index $(j,k)$ and the corresponding support of wavelet $w_{j,k}$ underneath (e.g. the root $(0,0)$ supports $[0,1]$ and the leaf $(1,2)$ supports $[\tfrac{1}{4},\tfrac{1}{2}]$). ### 7.2 2D robust sparse wavelet reconstruction In this example we consider $\mathsf{A}$ to be a 2D Radon transform. In particular, the rows of $\mathsf{A}$ correspond to integrals over the sets $\mathds{X}^{I}_{i}$, where $\mathds{X}_{i}^{I}=\left\{\vec{x}\in[-\tfrac{1}{2},\tfrac{1}{2}]^{2}\;\mathrm{s.t.}\;\vec{x}\cdot\begin{pmatrix}\cos\theta_{I}\\ \sin\theta_{I}\end{pmatrix}\in\left[-\tfrac{1}{2}+\tfrac{i-1}{100},-\tfrac{1}{2}+\tfrac{i}{100}\right)\right\},\quad\theta_{I}=\frac{180^{\circ}}{51}I$ (65) for $i\in[100]$, $I\in[50]$. This is not exactly in the form analysed by Theorem C.1; only the sets $\{\mathds{X}^{I}_{i}\;\mathrm{s.t.}\;i\in[100]\}$ for each $I$ are disjoint, therefore we apply Theorem C.1 block-wise to estimate ${\left\lVert\mathsf{A}\right\rVert}_{L^{2}\to\ell^{2}}\leq\sqrt{\sum_{I\in[50]}\max_{i\in[100]}|\mathds{X}^{I}_{i}|}=\sqrt{\sum_{I\in[50]}\max_{i\in[100]}\int_{\mathds{X}^{I}_{i}}1\,\mathrm{d}\vec{x}}=\sqrt{\sum_{I\in[50]}\max_{i\in[100]}\ (\mathsf{A}\mathds{1})_{i,I}}\;.$ (66) $\mathsf{A}$ is not smooth, therefore we cannot bound $|\mathsf{A}^{*}|_{C^{k}}$ for $k>0$, and so we must look to minimise over $\ell^{1}$ rather than $L^{1}$. The natural choice is to promote sparsity in a wavelet basis, which can be rearranged into the form of (18): $\min_{u\in\mathds{U}}\operatorname{f}(\mathsf{A}u-\eta)+\mu{\left\lVert\mathsf{W}^{-1}u\right\rVert}_{\ell^{1}}=\min_{\widehat{u}\in\ell^{1}(\mathds{R})}\operatorname{f}(\mathsf{A}\mathsf{W}\widehat{u}-\eta)+\mu{\left\lVert\widehat{u}\right\rVert}_{\ell^{1}}.$ (67) The minimisers are related by $u^{*}=\mathsf{W}\widehat{u}^{*}$ and, for wavelet bases, $\mathsf{W}$ is orthonormal so ${\left\lVert\mathsf{A}\mathsf{W}\right\rVert}_{\ell^{2}\to\ell^{2}}={\left\lVert\mathsf{A}\right\rVert}_{L^{2}\to\ell^{2}}$. 
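The bound (66) only requires applying $\mathsf{A}$ to the constant-one image. A short sketch (our own helper names; `A_apply` stands in for whichever Radon implementation is used, and the stand-in below merely exercises the bookkeeping):

```python
import numpy as np

def radon_norm_bound(A_apply, shape):
    """Block-wise bound (66) on ||A||_{L^2 -> l^2}: within each angle I the
    strips X^I_i are disjoint, so the block norm is the largest strip area,
    read off from A applied to the constant-one image."""
    ones = np.ones(shape)
    sino = A_apply(ones)                  # entries (A 1)_{i,I} = |X^I_i|
    return np.sqrt(np.max(sino, axis=0).sum())

# toy stand-in for A: 100 offsets x 50 angles with every strip area 1/100;
# a real implementation would return the exact strip areas.
fake_A = lambda img: np.full((100, 50), 1.0 / 100.0) * img.mean()
print(radon_norm_bound(fake_A, (128, 128)))   # sqrt(50 * 1/100) ~ 0.707
```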
In this example we consider the smoothed robust fidelity Rosset2007 $\operatorname{f}(\vec{\varphi})=\sum_{i=1}^{m}\begin{cases}10^{-4}|\varphi_{i}|&|\varphi_{i}|\geq 10^{-4}\\ \tfrac{1}{2}|\varphi_{i}|^{2}+\tfrac{1}{2}10^{-8}&\text{else}\end{cases}\approx 10^{-4}{\left\lVert\vec{\varphi}\right\rVert}_{\ell^{1}}.$ (68) From Section 6.3 we know that, to track convergence and perform adaptive refinement, it is sufficient to accurately bound $|[\mathsf{W}^{\top}\mathsf{A}^{*}\vec{\varphi}_{n}]_{j}|$ for all $j\notin J_{n}$. If $\mathsf{W}$ is a wavelet transformation then its columns, $w_{j}\in L^{2}$, are simply the wavelets themselves and we can use the bound $|\left\langle w_{j},\mathsf{A}^{*}\vec{\varphi}_{n}\right\rangle|=\left|\left\langle w_{j},\mathds{1}_{\operatorname{supp}(w_{j})}\mathsf{A}^{*}\vec{\varphi}_{n}\right\rangle\right|\leq{\left\lVert\mathds{1}_{\operatorname{supp}(w_{j})}\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{2}}\leq{\left\lVert\mathds{1}_{\mathds{X}}\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{2}}$ (69) for all $\mathds{X}\supset\operatorname{supp}(w_{j})$. In the case of the Radon transform, we can compute the left-hand side explicitly for the finitely many $j\in J_{n}$, but we wish to use the right-hand side in a structured way to avoid computing the infinitely many $j\notin J_{n}$. To do this, we take a geometrical perspective on the construction of wavelets and view them in a tree format. ##### Tree structure of wavelets Finite elements are constructed with a mesh, which provided a useful tool for adaptive refinement in Section 6.3.3. For wavelets, we will associate a tree with every discretisation; the leaves of the tree correspond to a mesh. This perspective comes from the multi-resolution interpretation of wavelets. An example is shown in Fig. 6 for 1D Haar wavelets, $w_{j,k}(x)=\sqrt{2}^{k}\psi(2^{k}x-j)$ where $\psi=\mathds{1}_{[0,1)}-\mathds{1}_{[-1,0)}$. In higher dimensions, only two things change: the number of children ($2^{d}$ for non-leaves) and the fact that each node stores the coefficients of $2^{d}-1$ wavelets. The supports at each level still form a disjoint partition of unity consisting of regular cubes of side length $2^{-k}$ at level $k$. The only change in our own implementation is to translate the support to $[-\tfrac{1}{2},\tfrac{1}{2}]^{2}$. We briefly remark that the tree structuring of wavelets is not novel and appears frequently in the Bayesian inverse problems literature Castillo2019 ; Kekkonen2021 . ##### Continuous gradient estimate In Section 7.1 we used the continuous gap as a measure of convergence; for wavelets we will use the continuous subdifferential. With the tree structure we can easily adapt the results of Section 6.3 to estimate subdifferentials (or function gaps); a sketch of the tree bookkeeping follows. 
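The following sketch (ours; pure index bookkeeping, with node $(j,k)$ supported on $[j2^{-k},(j+1)2^{-k})$ as in Fig. 6) shows how the leaves and the induced mesh are read off a refinement set $J_{n}$.

```python
from fractions import Fraction

def children(node):
    """A node (j, k) refines into (2j, k+1) and (2j+1, k+1)."""
    j, k = node
    return [(2 * j, k + 1), (2 * j + 1, k + 1)]

def leaves(J):
    """Nodes of J whose children were not refined; their supports tile
    [0, 1) and form the mesh, as in Fig. 6."""
    Jset = set(J)
    return [n for n in J if not any(c in Jset for c in children(n))]

def support(node):
    j, k = node
    h = Fraction(1, 2 ** k)
    return (j * h, (j + 1) * h)

J_n = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1)]
print(leaves(J_n))                        # [(0, 2), (1, 2), (1, 1)]
print([support(n) for n in leaves(J_n)])  # [0,1/4), [1/4,1/2), [1/2,1)
```

Running it on the example of Fig. 6 returns the leaves $(0,2),(1,2),(1,1)$ and the mesh $[0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},1)$, matching the figure.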
Specifically, the continuous subdifferential satisfies $\displaystyle|||\partial\operatorname{E}(u_{n})|||_{*}$ $\displaystyle=\max\left(|||\partial_{n}\operatorname{E}(u_{n})|||_{*},\max_{j\notin J_{n}}|\left\langle w_{j},\mathsf{A}^{*}\vec{\varphi}_{n}\right\rangle|-\mu\right)$ (70) $\displaystyle\leq\max\left(|||\partial_{n}\operatorname{E}(u_{n})|||_{*},\max_{j\in\operatorname{leaf}(J_{n})}{\left\lVert\mathds{1}_{\operatorname{supp}(w_{j})}\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{2}}-\mu\right).$ (71) ##### Numerical results We consider two phantoms, where the ground truth is either a binary disc or the Shepp-Logan phantom. Both examples are corrupted with 2% Laplace-distributed noise, visualised in Fig. 7. All optimisations shown are spatially adaptive using Haar wavelets and initialised with $\mathds{U}^{0}=\{x\mapsto c\;\mathrm{s.t.}\;c\in\mathds{R}\}$. The gradient metric shown throughout is the $\ell^{\infty}$ norm. Motivated by (71), the spatial adaptivity refines nodes $j\in\operatorname{leaf}(J_{n})$ to ensure that ${\left\lVert\mathds{1}_{\operatorname{supp}(w_{j})}\mathsf{A}^{*}\vec{\varphi}_{n}\right\rVert}_{L^{2}}-\mu\leq 10\,|||\partial_{n}\operatorname{E}(u_{n})|||_{*}$ for all $j$ and $n$ (i.e. so that the continuous gradient is at most 10 times the discrete gradient). We do not expect wavelet regularisation to achieve state-of-the-art performance on the examples of Fig. 7. What they demonstrate is the preference of Haar wavelets to align large discontinuities with a coarse grid, even when the discretisation is allowed to be as fine as necessary. There is an average of $2\cdot 10^{6}$ wavelet coefficients in each discretised reconstruction, although the higher frequencies have much smaller intensities. In limited-data scenarios, wavelet regularisation automatically selects a local “resolution” which reflects the quality of the data. Particularly in the Shepp-Logan reconstruction, we see that the outer ring is detected with finer precision than the dark interior ellipses. The first numerical results, shown in Fig. 8, compare the same adaptive FISTA variants as in Fig. 2. In these examples the greedy FISTA and $a=20$ algorithms achieve almost linear convergence while $a=2$ is significantly slower. Interestingly, in both examples the $a=20$ variant uses half as many wavelets as the greedy variant, and therefore converges slightly faster in time. Figure 7: Phantoms, data and reconstructions for wavelet-sparse tomography optimisation. Both examples are corrupted with 2% Laplace-distributed noise. Figure 8: Convergence of different implementations of Algorithm 1 with an unlimited number of pixels for sparse wavelet optimisation. Figure 9: Example images from the STORM dataset. ### 7.3 2D continuous LASSO Our final application is a super-resolution/de-blurring inverse problem from biological microscopy. 
In mathematical terms, the observed data is a large number of sparse images corrupted by blurring and a large amount of noise; examples are shown in Fig. 9. The task is to compute the centres of the spikes of signal in each image and then recombine them into a single super-resolved image, as in Fig. 11. This technique is referred to as _Single Molecule Localisation Microscopy_ (SMLM), of which we consider the specific example of _Stochastic Optical Reconstruction Microscopy_ (STORM). Readers are directed to the references Sage2015 ; Sage2019 ; Schermelleh2019 for further details. The LASSO formulation ($\operatorname{f}(\cdot)=\frac{1}{2}{\left\lVert\cdot\right\rVert}_{\ell^{2}}^{2}$) has previously been shown to be effective in the context of STORM Huang2017 ; Denoyelle2019 . Here we use a simulated dataset provided as part of the 2016 SMLM challenge (http://bigwww.epfl.ch/smlm/challenge2016/datasets/MT4.N2.HD/Data/data.html) for benchmarking software in this application. The corresponding LASSO formulation is
# A Survey of Mathematical Models on Somitogenesis Hanyu Song Brandeis University (September 09, 2019) ###### Abstract This paper presents a comprehensive survey of established mathematical models of somitogenesis, a biological process. The study begins by revisiting and replicating the findings of prominent research papers in this domain, subsequently offering a critical evaluation of the strengths and weaknesses inherent in each approach. By synthesizing this knowledge, the paper aims to contribute to a deeper understanding of somitogenesis and pave the way for further advancements in the development of enhanced mathematical models for this intricate biological process. The concluding section offers insights and directions for prospective research in this field. ###### Contents 1. Introduction to Somitogenesis 1. Unsolved questions 2. Clock and Wavefront Model 1. Summary 2. Mathematical Equations 3. Analysis 3. Oscillatory-based Model 1. Summary of the PORD Model 2. Mathematical Equation 3. Analysis 4. Excitable Model 1. Summary of the one-dimensional RD Model 2. FhN-type system and excitability 3. Analysis 5. Conclusion 6. A Code 1. Clock and Wavefront 2. Nagahara Discrete ## Introduction to Somitogenesis Somites are blocks of cells that lie along the anterior-posterior (AP) vertebrate embryonic axis of the developing embryo. Somitogenesis is the process by which somites form by segmenting the axis into similar morphological units such as the vertebrae. Somitogenesis is a key process in embryonic development since it is responsible for segmenting the vertebrate axis and generating the prepattern that guides the formation of the tendons, ribs, muscles, and other associated features of the body trunk. Figure 1 illustrates the form of somites in an embryo and how segmentation works along the AP axis. Figure 1: Embryonic somites and the AP axis. The left picture is a human embryo[8], where the somites are already in shape when the embryo is still very immature. The right picture is an anterior-posterior axis where somites segment from the PSM cells that lie on both sides of the AP axis[3]. From the posterior end to the anterior, the cells transform from undetermined, to determined, and finally to somites. Although many details of somitogenesis are still debated, some established facts serve as the foundation for further research. Somites segment from the presomitic mesoderm (PSM): thick bands of tissue that lie on either side of the AP axis. Segmentation begins with the establishment of a prepattern of gene expression, characterized by periodic activations in regions where future somites will segment. Early scanning microscope images show that the posterior PSM displays a series of cell groupings similar in size and structure, known as somitomeres, which appear to be the precursors of the somites[1]. The existence of this prepattern was confirmed by microsurgical experiments in which isolated parts of the PSM formed somites in strict isolation[7]. Figure 2 demonstrates the wave-like gene expression in a mouse embryo. Figure 2: Gene expression in a mouse embryo, where the gene is marked green[2]. The gene expression propagates in a wave-like manner from the posterior end of the PSM to the anterior side. Another fact about the PSM is that it is not a homogeneous tissue[13]. 
This is supported by microsurgical experiments conducted by Dubrulle and co-workers: AP inversions of somite-length regions of the posterior PSM resulted in normal segmentation, whilst inversions of the anterior PSM resulted in somites with reversed polarity[10]. This suggests that the anterior-most part of the PSM is already determined with regard to its segmentation program, whilst the posterior-most part is still susceptible to change, demonstrating the PSM's heterogeneity, which is a key feature of models of somitogenesis. The different regions of the PSM were found to correspond to regions of varying FGF signaling, in particular a gradient of FGF8 (Fibroblast Growth Factor 8). fgf8 is a gene with dynamic expression in the PSM, peaking at the posterior end of the embryo and decreasing in the direction of the anterior end[6]. See Figure 3. The function of FGF8 is to down-regulate the cells: a higher concentration of FGF8 prevents segmentation of the PSM, whilst its decrease makes segmentation possible, and once FGF8 decreases past a certain threshold, the cells become able to segment into somites. We call that threshold "the determination front"[10]. The uneven distribution of FGF8 implies that the positional information of the PSM cells is crucial. However, the role of positional information is a controversial issue in mathematical biology, and it is typically not possible to build robust biological structures without additional mechanisms, such as diffusion[15]. ### Unsolved questions We know that the down-regulation of FGF8 heavily affects the somitogenesis process, and we seem to understand the logic behind somitogenesis, but it is difficult to conclude which specific type of model accurately recapitulates this process. Many questions remain: are the PSM cells oscillatory or excitable with respect to FGF8 levels? Are the cells globally controlled by the gene gradient, or do they also have local interactions between themselves? Does the global FGF8 down-regulation even matter? What happens if FGF8 is kept constant; can a reaction-diffusion model that emphasizes local interactions between cells explain this process accurately? In the later discussions of this paper, we will look into several kinds of mathematical models, each of which gives distinct answers to the above questions. Admittedly, none of them is "perfect"; each has its own drawbacks. Nonetheless, understanding the mechanisms of the existing models is important, as it could accelerate the creation of better ones in the future. To date, nobody has provided a systematic comparison of these models, and most papers on this topic do not even reference each other, as they come from different fields: mathematical biology, developmental biology, physics, etc. Therefore, this paper's goal is not to find the perfect model, but to identify each model's distinctive advantages, so that we can synthesize them where possible while avoiding their drawbacks when attempting to create new models in the future. Figure 3: PSM cells and gene gradients, FGF included[11]. The PSM elongates posteriorly as the somites are formed, whilst the gradient of FGF8 always peaks at the very posterior end of the PSM. 
As it decreases to a certain level, the prepattern arises (the blocks that are not fully green), and then somites form. Contrasting with the FGF gradient is the RA (retinoic acid) gradient, which peaks at the anterior end of the PSM but is less relevant to the overall process than FGF8. ## Clock and Wavefront Model ### Summary One of the most famous and widely studied models is the clock and wavefront (C & W) model. As its name implies, the model proposes the existence of a segmentation clock and a wavefront of FGF8 along the AP axis of vertebrate embryos. This idea was first proposed in 1975 by Cooke and Zeeman, with the gist that there is longitudinal, global positional information, namely the above-mentioned FGF8 gradient, that interacts with a smooth cellular oscillator, the so-called clock, to govern the time at which PSM cells segment and develop into somites. This idea was later revised by Pourquie and co-workers, who went into more specifics and proposed that the clock sets the times at which new somite boundaries form whilst the position of the determination front sets where they form[10]. For a cell at a particular point, they assume that competence to segment is only achieved once FGF8 signaling has decreased below a certain threshold, the position of which is known as the determination front. Figure 4: Representation of the vertebrate body plan during somite formation[4]. The top part of the diagram shows the FGF8 wavefront, with a peak in the posterior and a decrease in the direction of the anterior. When FGF8 decreases to a certain level, it reaches the determination front. The middle section of the diagram shows the AP axis of the embryo with the somites (dark grey blocks), the determined region (light grey blocks), and the undetermined region (light grey band) clearly marked. The bottom is a visualization of the segmentation clock, which shows the time needed for cells to gain the ability to segment. Therefore, the whole somitogenesis process, according to this model, is divided into distinct stages. Before reaching the determination front, a cell gains the ability to segment by becoming able to produce a "somitic factor", which could be one of several candidate genes. One clock oscillation after reaching the determination front, cells become able to produce the "signaling molecule". After a cell is able to produce the somitic factor and respond to the signaling molecule, it is specified as somitic and becomes refractory to FGF8 signaling[16]. ### Mathematical Equations The C & W mathematical equations were first proposed by Collier et al. (2000), developed further by McInerney et al. in 2004, and then by Baker and colleagues in 2006. One of the most important features of this model is that local mechanisms, controlled by time points and positional information, trigger segmentation, which fits the C & W assumption perfectly. After segmentation, cells adhere to each other, creating distinct somites. When creating this model, Collier made some further assumptions[12]: (1) The AP axis can be seen as fixed with respect to the cells. The PSM's length is constant and the segmentation pattern progresses with constant speed. (In reality, the posterior end is actually elongating.) (2) The signals emitted by specified cells when they reach certain points are pulse-like. The signaling molecule disperses fast and diffuses rapidly. 
This is the key assumption, since rapid diffusion ensures that only cells in certain positions respond to the signal; otherwise, all cells would segment at the same time. (3) Somites form continually, and the beginning or end of this process is not considered, which means signals emitted from cells are assumed to exist at all times. This model is well explained by Figure 5. In this diagram, x denotes distance while t denotes time. There are two key components in this model, u(x,t) and v(x,t), in which u(x,t) represents the concentration of somitic factor a cell is exposed to at a given x and t, while v(x,t) represents the diffusive signaling molecule. A cell with a high concentration of u is specified as somitic, while one with a low concentration of u is non-somitic. Figure 5: Representation of the C & W model illustrating the two time points P1 and P2 and the three key stages within the model[12]. Cells at the posterior end of the PSM (Region I) are less mature than those in other regions since the somitogenesis process starts from the anterior part. As cells become more mature in Region II, they become capable of responding to the signaling molecule v, emitted by cells at point P2. In Region III they begin to form somites and are no longer able to emit any signals. The mathematical equations proposed by Collier and colleagues are based on these two components[12]: $\displaystyle\partial_{t}u(x,t)=\frac{(u+\mu v)^{2}}{\gamma+\kappa u^{2}}\chi_{u}(x,t)-\frac{u}{k}$ (1) $\displaystyle\partial_{t}v(x,t)=\frac{\chi_{v}(x,t)}{\epsilon+u}-v+D\frac{\partial^{2}v}{\partial x^{2}}$ (2) $\chi_{u}$ and $\chi_{v}$ are controlled by two Heaviside step functions: $\displaystyle\chi_{u}=H(ct-x+x_{1})$ (3) $\displaystyle\chi_{v}=H(ct-x+x_{2})$ (4) where the Heaviside step function is defined as $\displaystyle H(x)=\begin{cases}1&x\geq 0\\ 0&x<0\end{cases}$ (5) As mentioned above, u and v represent the concentrations of the "somitic factor" and the "signaling molecule" respectively, while the other variables in these equations are all positive constants. This model uses a zero-flux boundary condition, which prevents anything from leaving the system; this may be an application of Collier's third assumption mentioned above. The Heaviside functions $\chi_{u}$ and $\chi_{v}$ play an important role in this model. They can be seen as switches: their arguments t and x, the time and location information, together determine whether the dynamics of u and v are switched on or off. In Figure 5, the Heaviside functions are shown along with the regions where the somitic growth factor u is, respectively, high ($x<x_{2}+ct$) and low ($x>x_{1}+ct$). The somitic growth factor and signaling molecule boost the somitogenesis process collectively and affect each other: $\partial_{t}u$ depends on v and $\partial_{t}v$ depends on u. Specifically, u inhibits v while v activates u. The model was further expanded by Baker and colleagues in 2006. 
Baker's team made some revisions to the two equations above and added a third equation to the system, $\frac{\partial w}{\partial t}$, representing the changing gradient of FGF8, which down-regulates the somitogenesis process[4]: $\displaystyle\frac{\partial u}{\partial t}=\frac{(u+\mu v)^{2}}{\gamma+u^{2}}\chi_{u}-u$ (6) $\displaystyle\frac{\partial v}{\partial t}=k\left(\frac{\chi_{v}}{\epsilon+u}-v\right)+D_{v}\frac{\partial^{2}v}{\partial x^{2}}$ (7) $\displaystyle\frac{\partial w}{\partial t}=\chi_{w}-\eta w+D_{w}\frac{\partial^{2}w}{\partial x^{2}}$ (8) with $\chi_{w}=H(x-x_{n}-c_{n}t)$, where $x_{n}$ and $c_{n}$ are constants. Building on the previous system, this system reproduces the following important behaviors: (1) the production of somitic factor u is activated by the signaling molecule and is self-regulating; (2) the somitic factor inhibits the signaling molecule, i.e. the signaling molecule is produced rapidly in areas where the somitic factor concentration is low; (3) FGF8 is produced in the tail and regresses along the x-axis. ### Analysis The C & W mathematical model proved effective in producing a qualitatively reasonable match to reality. We recapitulated and reproduced some of the results of the above mathematical equations shown in McInerney's and Baker's papers, and they support the gist of the C & W theory. We first analyzed the qualitative behavior of this model; the result can be explained with Figure 6, which is derived from Figure 5: Figure 6: The three stages of somitogenesis from the posterior to the anterior end of the PSM. In Region I, since the switches $\chi_{u}=\chi_{v}=0$, $(u,v)\rightarrow(0,0)$, while in Regions II and III, as $\chi_{u}$ and $\chi_{v}$ change with respect to t, the qualitative behavior of these two regions differs. Below are the phase planes of u and v in Regions II and III respectively, which reproduce Figs. 2 and 3 of McInerney's paper[12] using XPP: Figure 7: The phase planes of u and v in Region II (left) and Region III (right). In Region II, there are three steady states: two stable equilibria, where u is close to 0 and 1, and a saddle in the middle. Region II carries the pulse of the signaling molecule. In the phase plane, we can see that after the cells pass the determination front, and before they finish one clock cycle, they gain the ability to respond to signals: the somitic factor concentration u is always above 0, while they cannot generate signaling molecules themselves, as v remains 0. In Region III, after cells undergo one cycle of the segmentation clock, $\chi_{u}=\chi_{v}=1$. There is only one stable equilibrium in the phase plane, meaning that whatever the initial values of u and v, they will arrive at that specific point and remain stable; the cells are then identified as somitic. We then reproduced the numerical solutions of equations (1) and (2) from McInerney's Figure 11(a) and (b)[12], shown in Figure 8. Figure 8: Numerical solution given by equations (1) and (2) for $0\leq t\leq 300$. Parameter values: $\mu=10^{-1},\gamma=0.2,\kappa=10,c=5\times 10^{-3},\epsilon=10^{-3},D=100$. This set of parameter values violates one of the conditions mentioned in the paper, so this solution is not a perfect one, since for $\mu=10^{-5}$ and $\gamma=0.2$ a high level of v fails to activate u production[12]. We also reproduced the numerical solution of the C & W model in one spatial dimension given by equations (6), (7), and (8), as in Baker's Fig. 3[16], using the code provided in the appendix. 
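For readers who skip the appendix, here is a minimal explicit finite-difference sketch of the C & W system; this is our own simplified implementation, not the appendix code. The switches follow the Collier form (3)-(4), and the constants x1, x2, c, cn and the initial data are illustrative choices.

```python
import numpy as np

# Minimal method-of-lines sketch of equations (6)-(8) with explicit Euler
# time stepping and zero-flux boundaries; all constants below are assumed.
mu, gamma, eps = 1e-4, 1e-3, 1e-3
k, eta, Dv, Dw = 10.0, 1.0, 50.0, 20.0
x1, x2, xn, c, cn = 0.0, -5.0, 0.0, 0.5, 0.5
L, Nx, T, dt = 100.0, 500, 5.0, 1e-4
x = np.linspace(0.0, L, Nx)
dx = x[1] - x[0]
u, v = np.zeros(Nx), np.zeros(Nx)
w = np.ones(Nx)                          # FGF8 profile, relaxes towards eq. (8)

def lap(f):
    # 1D Laplacian with zero-flux (Neumann) boundaries via edge padding
    g = np.pad(f, 1, mode="edge")
    return (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx**2

t = 0.0
while t < T:
    chi_u = (c * t - x + x1 >= 0).astype(float)   # eq. (3)
    chi_v = (c * t - x + x2 >= 0).astype(float)   # eq. (4)
    chi_w = (x - xn - cn * t >= 0).astype(float)
    du = (u + mu * v) ** 2 / (gamma + u ** 2) * chi_u - u
    dv = k * (chi_v / (eps + u) - v) + Dv * lap(v)
    dw = chi_w - eta * w + Dw * lap(w)
    u, v, w = u + dt * du, v + dt * dv, w + dt * dw
    t += dt
print(f"max u = {u.max():.3f}, max v = {v.max():.3f}")
```

The small time step is dictated by the explicit treatment of the diffusion terms ($D_{v}\,dt/dx^{2}$ must stay below 1/2 for stability).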
Figure 9 contains the numerical solutions for $u(x,t)$, $v(x,t)$ and $w(x,t)$ respectively: Figure 9: Numerical solution given by equations (6), (7), and (8), showing the spatiotemporal dynamics of the somitic factor (a), the signaling molecule (b), and FGF8 (c). The regression of the FGF8 wavefront is accompanied by a series of pulses in the signaling molecule and coherent rises in the level of the somitic factor. Parameter values: $\mu=10^{-4},\gamma=10^{-3},\kappa=10,\epsilon=10^{-3},\eta=1.0,D_{v}=50,D_{w}=20,x_{n}=0,c_{n}=0.5,D=100$. However, the fact that these results can be reproduced shows the models' validity in some respects but does not prove them to be flawless. Several issues need to be considered before constructing a better model. The system of equations (1) and (2) is not robust, because the somites depend sensitively on many factors, such as the mesh, the speed c, and the initial conditions. Any slight perturbation of these factors will prevent successful results. For equations (6), (7), and (8), although the results in Figure 9 show a clear and consistent pattern of pulses of somitic factor and signaling molecule, they rely heavily on a very smooth gradient w. Admittedly, in normal cases the idea that u and v rely on a smooth gene gradient is not, in itself, problematic. However, in this model it is simply assumed that a generic FGF8 molecule makes up the gradient controlling the position of the determination front[16]; that is, although the gene's name is FGF8, it in fact represents the aggregate influence of multiple genes that may affect the somitogenesis process. In other words, the gradient is modeled at a very phenomenological level. Requiring such a gradient to be perfectly smooth becomes a drawback of this system: a stochastic FGF8 gradient or a random FGF8 pulse will easily spoil the result. This problem is mentioned and demonstrated in Fig. 3 of Baker's paper[16]. Also, in that paper the position of the determination front is prescribed, yet in reality it is subject to many factors such as the gradient slope. ## Oscillatory-based Model ### Summary of the PORD Model In the C & W model described above, the key is that long-range molecular gradients control the movement of the front and therefore the placement of the stripes in the embryo. In this section we introduce a fundamentally different system: the progressive oscillatory reaction-diffusion (PORD) model, which does not rely on global gradient control but is driven by short-range interactions. In the first section of this paper we introduced several "facts" and "unsolved questions" about the somitogenesis process. Although the oscillatory model's mechanism is very different from C & W, the two share many similarities; it is their interpretations of those facts that differ. The PORD model accepts the existence of the posterior movement of the determination front, yet explains it not through global positional information but through interactions between cells. In Cotterell's paper, it is argued that the PORD model can also explain other important features of somitogenesis, such as size regulation, which previous reaction-diffusion models fail to explain. However, we did find that manipulating the FGF8 gradient in the C & W model, for example by adding a random pulse, results in larger somites, which strengthens the argument that the amount of somitic factor controls the size of somites. 
Figure 10: Comparison between the C & W model's and the PORD model's mechanisms[5]. The left figure shows that the C & W model focuses on global gradient control while the PORD model focuses on short-range interactions between cells. The bottom figure shows the oscillation of gene expression in the PSM cells, which produces the stripes; each stripe of gene expression will later correspond to a subsequent somite boundary. The right figure compares the sensitivity of the stripe positions: the positional accuracy of the arrest front is more sensitive to noise if defined by long-range gradients (top) than if defined by the distance from the last-formed expression stripe (bottom). The PORD model argues that a molecular patterning process sequentially produces stripes of gene expression along the PSM, resulting in the segmentation of the PSM. Figure 10 shows this mechanism and its comparison with the C & W model. Two dynamical behaviors are involved in this process. First, cells of the PSM exhibit oscillations of gene expression. These oscillations are organized into traveling waves and are locally well synchronized: neighboring cells are in very similar phases of the cycle[5]. Second, these oscillations are arrested in an anterior-to-posterior progression, meaning the position where the oscillations are frozen travels posteriorly through the PSM; that position is called the arrest front. Note that the arrest front is similar but not equivalent to the determination front mentioned above, as addressed in the discussion section of Cotterell's paper[5]. Despite being locally self-organizing, the PORD model involves both molecular oscillations in the PSM and a traveling wavefront. Yet it continues to create stripes even in the absence of a moving FGF gradient; thus it does not rely on positional information along the PSM. In this reaction-diffusion model, the distance between stripes is defined by the local diffusion of a repressor molecule, which is secreted from the stripes themselves (see the right of Figure 10). However, the fact that the model behaves the same with and without the gradient seems like a potential problem, since PSM cells have been studied without a gradient and their behavior appears to be very different[9]. Overall, the PORD model challenges the existing clock and wavefront models by providing a fundamentally alternative theory based on local self-organization. It can explain somite size scaling and shows higher robustness of somite size regulation. Some of the PORD model's predictions have also stood the test in chick embryos, which supports its validity[5]. ### Mathematical Equation The way the PORD model's equations were found is refreshing. Cotterell and colleagues enumerated all possible topologies for a gene regulatory network of three genes. Of the 9710 possible networks, 210 produced a multi-stripe pattern for at least one parameter set. Among the stalactites of the topological tree containing successful topologies, they found two versions of the C & W model and several versions of the oscillatory PORD model. The simplest design of the oscillatory model is a network that contains only two nodes (Figure 11(A)), comprising a cell-autonomous activator (A), which is itself activated by the FGF signal, and a diffusible repressor (R). 
A and R are defined by the following equations:

$\displaystyle\frac{\partial A}{\partial t}=\Phi\left(\frac{\kappa_{1}A-\kappa_{2}R+F+\beta}{1+\kappa_{1}A+\kappa_{2}R+F+\beta}\right)-\mu A$ (9)

$\displaystyle\frac{\partial R}{\partial t}=\frac{\kappa_{3}A}{1+\kappa_{3}A}-D\nabla^{2}R-\mu R$ (10)

where $\kappa_{1},\kappa_{2},\kappa_{3}$ define the strengths of the regulatory interactions between A and R, D is the diffusion constant for R, $\mu$ is a fixed decay constant, F is the regulatory input of the FGF gradient onto A, and $\beta$ is the background regulatory input of A. To prevent negative values of the morphogens, we use the function $\Phi(x)=xH(x)$, where $H(x)$ is the Heaviside function. Together these equations form a reaction-diffusion mechanism in which repression by R is responsible for the spacing of adjacent stripes. Because the PORD model relies on local interactions rather than global positional information, it does not spontaneously generate segments everywhere; instead, patterning progresses from anterior to posterior, similar to real-world biological phenomena. (B) and (C) in Figure 11 show the wave of gene propagation and its oscillatory mechanism.

Figure 11: The PORD mechanisms [5]. (A) The minimal somite-patterning circuit that implements the PORD mechanism. It contains an activator molecule (green) and a diffusible repressor (red). $\kappa_{1},\kappa_{2},$ and $\kappa_{3}$ are the strengths of the interactions between A and R. The system relies on FGF to activate but does not need FGF to perpetuate, which fits the PORD character. (B) Gene expression is initiated at the posterior end and travels to the anterior end. The white stripe is the formed somite, the position where the last gene expression stopped. When the next wave of propagation arrives at a certain distance from the last one, it stops and forms the next stripe. (C) A snapshot of gene expression oscillation along the PSM. The oval dashed arrows indicate the oscillation directions. The blue line is the FGF gradient; it is not directly related to stripe formation. The Buffer Region is generated by the diffusion of the repressor from the last formed stripe, which inhibits oscillations. Cells therefore cannot go beyond the Buffer Region and instead exit oscillations in the Oscillating Edge Region to form new stripes. Newly formed stripes act as the next source of repressor to prevent oscillations; they form new Buffer Regions and push the arrest front posteriorly.

### Analysis

The PORD model proves to be a typical oscillatory model: both its wave-propagation picture and its mathematical equations exhibit an oscillatory nature. We used XPP and recapitulated panel E of Figure 2 in the Cotterell paper [5]; see the left of Figure 12.

Figure 12: XPP analysis of the PORD model. The left figure is the phase portrait for the non-diffusing case of equations (9) and (10), i.e., ignoring the diffusion term D. The green and red lines are the nullclines for the activator A and inhibitor R respectively. The right figure is the bifurcation analysis for the same case, showing the bifurcation of the activator A with respect to the FGF gradient.

With diffusion ignored and the system made stationary, the analysis reveals that oscillations are the natural dynamic state for most cells in the PSM. The bifurcation analysis (the right of Figure 12) also reveals its oscillatory nature: the activator undergoes a Hopf bifurcation when FGF8 drops to a certain level, so while FGF8 is high the activator is simply stimulated.
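This high-FGF8 versus low-FGF8 contrast can be probed directly by integrating the non-diffusing kinetics. The minimal Matlab sketch below integrates equations (9) and (10) with D = 0; the parameter values are illustrative placeholders of ours, not Cotterell's fitted values, so which F values land in the steady versus oscillatory regime depends entirely on those choices.

```matlab
% Minimal sketch: non-diffusing PORD kinetics, eqs. (9)-(10) with D = 0.
% Parameter values are illustrative placeholders, not fitted values.
k1 = 0.05; k2 = 1; k3 = 0.2; mu = 0.01; beta = 0;
phi = @(x) max(x, 0);                        % phi(x) = x*H(x), clips negatives
rhs = @(y, F) [ phi((k1*y(1) - k2*y(2) + F + beta) / ...
                    (1 + k1*y(1) + k2*y(2) + F + beta)) - mu*y(1);
                k3*y(1)/(1 + k3*y(1)) - mu*y(2) ];
figure; hold on
for F = [1 0.05]                             % high vs low FGF8 input
    [t, y] = ode45(@(t,y) rhs(y, F), [0 2000], [0.1; 0.1]);
    plot(t, y(:,1));                         % activator time course
end
xlabel('t'); ylabel('A'); legend('high F', 'low F');
```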
When FGF8 decreases to a certain level, the activator interacts with the inhibitor and starts to oscillate. The region between the two green boundaries is where oscillation exists, and the oscillation stops once the cells reach the arrest front, which in this case is where FGF8 drops to 0.

However, the PORD model has received some criticisms. For one, although the paper claims multiple times that the model does not require the moving FGF gradient, the gradient nevertheless acts to couple the rate of embryo growth with the integral levels of FGF8 signaling in the PSM [5]; that is, the model cannot ignore the fact that FGF8 plays an important role in controlling the size of somites, with higher levels of FGF signaling resulting in smaller somites. Also, the bifurcation analysis above shows that the character of the PORD system is somewhat similar to that of the C & W system, in that FGF8 gradient information can control cell activities in both cases. The position where the activator starts to oscillate can be seen as the determination front of the C & W model, and the "arrest front" of the PORD system is likewise located close to where the FGF8 gradient drops to a very low level. Simply put, although the PORD system introduces new terms such as the "buffer region" and the "arrest front", its behavior, like that of the C & W model, can still be explained by FGF8 gradient control. Another problem, shared by the PORD model and some other oscillatory-based models, is that the specific positions of the spatial stripes are controlled "manually", by defining thresholds or piecewise functions [14]. Although this may help create beautiful results, it is in contrast to the principle of self-organization in biological systems.

Nonetheless, the PORD model, as one of the most famous oscillatory models for somitogenesis, does present a very different perspective. It reveals the possibility that cells themselves carry an oscillatory nature in the absence of diffusion. It also produced several clear movies of the oscillation process; we tried to reproduce these in Matlab but did not succeed.

## Excitable Model

### Summary of the one-dimensional RD Model

Both the C & W model and the PORD model share an important feature that has not been mentioned in previous sections: both set spatial continuity as a key requirement. Spatial continuity, in this case, means that both models ignore the size of cells and treat the PSM as a whole unit, or spatial continuum. Although spatial continuity is acceptable in most chemical reaction systems, Nagahara and colleagues argue that this is not always the case in biological systems, simply because cells in a multicellular organism have a finite size[14]. In the initial stages of an organism's development, when important biological structures first emerge, the number of cells is usually small, and the size of a single cell cannot simply be ignored, since the size of the field where the phenomena occur is comparable to that of a cell. Given that spatial continuity is not always met, one might try to account for the spatial variations between cells; instead, Nagahara and colleagues propose that it is better to treat cells as "interacting discrete nodes" in a network. Because diffusion inside a cell is much faster than diffusion across a membrane, treating cells as individual nodes that compose a large network is suitable.
Nagahara also criticized the complexity and difficulty of other models: models based on a continuum have difficulty producing a boundary between distinct behaviors as narrow as two or three cells, and the assumption in most models of two or more different interactions (activator, inhibitor, etc.) among neighboring cells adds further complication. Therefore, they created a simple, one-dimensional reaction-diffusion model that focuses on three things: (1) no diffusion among inhibitors, (2) cells that are discrete instead of continuous, and (3) spatial inhomogeneity, where (1) and (2) are new ideas, while (3) is an old one. See Figure 13.

Figure 13: An illustration of the one-dimensional reaction-diffusion system proposed by Nagahara and colleagues [14]. The major difference from the PORD model is that the diffusion of the inhibitor v is excluded. There is only one interaction between neighboring cells, via the activator u.

### FhN-type system and excitability

Below is a hypothetical model that describes the above features of gene expression in somitogenesis:

$\displaystyle\frac{\partial u}{\partial t}=f(u,v)+D\mathcal{L}u$ (11)

$\displaystyle\frac{\partial v}{\partial t}=g(u,v)$ (12)

where u and v are the concentrations of the activator and inhibitor respectively, and $D\mathcal{L}u$ is the diffusion term for the activator. The reaction terms f and g of the activator and inhibitor are given as follows:

$\displaystyle f(u,v)=\frac{1}{\tau_{1}}\left(\frac{1}{\gamma}u(u-\alpha)(1-u)-v+\beta\right)$ (13)

$\displaystyle g(u,v)=\frac{1}{\tau_{2}}(u-v)$ (14)

where $\tau_{1}$ and $\tau_{2}$ represent the time scales of the local reaction kinetics of u and v respectively. $\gamma$ represents the spatial gradient and the temporal change in the concentration of a certain substance [14], and depends on space x and time t. In reality, $\gamma$'s biological counterpart in somitogenesis could be the FGF8 gradient in the PSM; $\gamma$ thus plays an important role in this model. We call this the "FhN-type" model because the reaction terms (13) and (14) resemble the FitzHugh–Nagumo model, named after Richard FitzHugh, who suggested the model in 1961, and J. Nagumo, who created the equivalent circuit the following year. The FitzHugh–Nagumo model is a generic model for excitable systems; because of its simple two-variable form and generality, it has been used widely. The FitzHugh–Nagumo prototype model has the following form:

$\displaystyle\dot{v}=v-\frac{v^{3}}{3}-w+I_{ext}$ (15)

$\displaystyle\tau\dot{w}=v+a-bw$ (16)

We say this model represents excitable systems because when $I_{ext}$, the external stimulus, exceeds a certain threshold level, the system exhibits a characteristic excursion in the phase plane before the variables $v$ and $w$ relax back to their rest values; the system is then said to be excited and remains refractory for a period of time. When $I_{ext}$ does not exceed that threshold, there is no excursion, and the system remains quiescent, or excitable. Besides the excursion, the phase plane also contains two nullclines, one linear and one cubic (the latter referred to below as the sigmoid). The excitability of the system can be read off from the spatial relationship between the two nullclines: the closer the linear nullcline is to the peak of the sigmoid, the more excitable the system is.
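As a quick illustration, the following minimal Matlab sketch integrates the prototype system (15)–(16) and overlays the two nullclines, using the parameter values quoted in the Figure 14 caption; the time-scale value tau = 12.5 is an assumed textbook choice, since the caption does not give it.

```matlab
% Minimal sketch of the prototype FitzHugh-Nagumo system, eqs. (15)-(16).
% I_ext, a, b follow the Figure 14 caption; tau = 12.5 is an assumed value.
Iext = 0.5; a = 0.8; b = 0.7; tau = 12.5;
fhn = @(t, y) [ y(1) - y(1)^3/3 - y(2) + Iext;   % v-dot, eq. (15)
                (y(1) + a - b*y(2)) / tau ];      % w-dot, eq. (16)
[~, y] = ode45(fhn, [0 200], [-1; 1]);
v = linspace(-2.5, 2.5, 200);
plot(y(:,1), y(:,2), 'b', ...                     % trajectory in the phase plane
     v, v - v.^3/3 + Iext, 'm', ...               % cubic nullcline (v-dot = 0)
     v, (v + a)/b, 'y');                          % linear nullcline (w-dot = 0)
xlabel('v'); ylabel('w');
legend('trajectory', 'v nullcline', 'w nullcline');
```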
See Figure 14 for the phase plane.

Figure 14: The phase plane for the prototype FitzHugh–Nagumo model, with $I_{ext}$ = 0.5, a = 0.8, b = 0.7. The blue line is the trajectory of the FhN model in phase space. The pink and yellow lines are the two nullclines: the pink line is the cubic nullcline and the yellow line is the linear nullcline.

### Analysis

In this model, $\gamma$ is defined as a linear gradient function:

$\displaystyle\gamma(x)=0.21-0.20x,\quad\quad\quad x\in[0,1]$ (17)

With the given information, we used XPP and created the bifurcation diagrams to find the model's qualitative features; see Figure 15. We can see that $\gamma$ has a Hopf bifurcation: when $x\in[0,1]$, $\gamma$ is invariant and the system is stable, yet when x increases beyond the boundary, the system tends to become oscillatory. $\alpha$'s bifurcation is a saddle-node bifurcation. Together with the stable region of $\gamma$'s bifurcation, the system tends to exhibit a bistable region when $\gamma$ is small: two stable states coexist, and the system then tends to become oscillatory as $\gamma$ increases. The right figure is a cusp bifurcation; note that the cusp bifurcation has a normal form which resembles equation (13).

Figure 15: The bifurcation diagrams for the one-dimensional RD model. The left figure is a Hopf bifurcation of $\gamma$. The middle figure is a saddle-node bifurcation of $\alpha$. The right figure is a codimension-two bifurcation diagram, a cusp bifurcation of $\gamma$ and $\alpha$. A cusp bifurcation is defined as two branches of a saddle-node bifurcation curve meeting tangentially, forming a semi-cubic parabola.

The Nagahara paper utilizes the idea of bistability. By changing the parameter $\gamma$, which adjusts the amplitude of the cubic function, we can vary the local kinetics from oscillation to bistability [14]. Figure 16 shows the changes in the nullclines $f(u,v)=0$ and $g(u,v)=0$ as $\gamma$ is manipulated. As we can see, the system gains excitability when $\gamma$ is decreased, since the decrease in $\gamma$ elongates the sigmoid (the cubic nullcline) vertically. As the linear nullcline does not change, this elongation closes the distance between the linear nullcline and the sigmoid, making the system more easily excited. Furthermore, using plotting techniques in Mathematica, we found that manipulating $\alpha$ and $\beta$ in equation (13) also changes the excitability: increasing $\alpha$ makes the system less excitable, while increasing $\beta$ makes it more excitable.

Figure 16: Phase planes for equations (13) and (14) [14]. They correspond to (a) the oscillatory state for large $\gamma$ and (b) the bistable state for small $\gamma$.

A spatiotemporal diagram is shown in Figure 17. In (a), $\gamma$ is given as invariant: a single pulse triggered from the left boundary propagates to the right and generates a stationary band at a specific position. In (b), the posterior growth of the PSM is taken into account, so $\gamma$ is a function of both space and time, and the pulses lead to a static, periodic structure. In (c), $\gamma$ is set to decrease as the wave passes, while the growth of the PSM is ignored; the pulses again create a static, periodic structure, but with much thicker bands.

Figure 17: Numerical simulations for the one-dimensional RD model, illustrating the manner of wave propagation depending on the spatial distribution of $\gamma$ [14].

However, the Nagahara model is not a typical RD model.
The structures in Figure 17 will not form unless the model is discrete, as opposed to continuous. Spatial discreteness is an important feature, and indeed the basis of this model: in the continuous case, a wave triggered from the left does not normally stop and generate a stationary band; rather, it propagates across the field without stopping[14]. The paper proposes that when D is small enough, the propagation of the wave is blocked and stable steady solutions exist, a phenomenon called "wave propagation failure". Using the code provided in the appendix, we used Matlab to simulate what happens to the system as D varies, obtaining the result shown in Figure 18.

Figure 18: The spatiotemporal diagram for the system as D varies; D decreases from (a) to (c). In (a), $D=10^{-3}$: the system is not yet discrete and the wave propagates through the field without stopping. In (b), $D=10^{-4}$: the system becomes discrete, and the wave propagation leaves a steady-state solution, as in Fig 17(a). In (c), $D=10^{-5}$: the system is discrete, but the diffusion is so weak that the signal fails to reach the bistable region.

Overall, this one-dimensional RD model, the FhN-type model, is the least mature of the three. Unlike the previous two, which have already been through in vitro experiments, this model is highly theoretical and leans more toward physics than biology. However, the simple, fresh idea of excitability opens a new perspective on the whole process. Excitability in this model, as we found in Mathematica, can vary with respect to $\gamma,\alpha$, and $\beta$. Depending on three variables may seem capricious, but the simple philosophy behind excitability, namely how easy it is to cross the threshold, makes it straightforward to manipulate the excitability of a system and to make it "excited". We look forward to exploring this feature further and to implementing it in an effort to better understand somitogenesis.

## Conclusion

As they represent several of the mainstream ideas in the field, it is not surprising that all three models provide abundant insights into the hidden mechanisms behind somitogenesis. While their core ideas differ in one way or another, and each has its own flaws, each model greatly improves our understanding of this field and motivates new experiments.

The Clock and Wavefront model proposes a prescribed determination front, whose position is determined by the level of the FGF8 gradient and which controls the positioning of somites. It segments the PSM into different regions and explains the somitogenesis process systematically. The results are easy to recapitulate and show the desired characteristics that fit the theory. Its biggest drawback is its hard-coded outcomes; for example, the model cannot explain experiments in which the determination front of a mutant embryo shifts, since the front's position is prescribed in the model.

The PORD model proposes a local reaction-diffusion oscillatory mechanism that can generate stripes of somites without global gradient information. The system is simple and its video results are very impressive. However, in contrast to its claims, the PORD model still relies on the FGF8 gradient to control the size of somites, and many of its mechanisms can still be explained by FGF8, making the theory read like another perspective on FGF8's effects.
Many of the oscillatory models control the specific positions of the stripes manually, using thresholds or piecewise functions, which violates their self-organizing nature.

The excitable model is a one-dimensional reaction-diffusion model that resembles the FitzHugh–Nagumo model, a generic excitable model widely applied in physics. Discreteness plays an important role in the system: the model decreases the diffusion level until the cells are no longer continuous, which blocks the wave propagation generated by excitability and creates fixed stripes of somites, a refreshing idea. This model is theoretical and has not been through in vitro experiments, and its result is not robust, as it needs a very finely tuned diffusion level that is hard to achieve in reality; but the idea of an excitable system has latent potential and much left to exploit.

The C & W and PORD models yield good results that match expectations. However, their strict conditions are often not met in real-life biology, and it seems that this problem cannot simply be solved within the existing frameworks of mathematical equations. In our view, excitability is what most needs rigorous study in order to understand somitogenesis. The research of Hubaud and colleagues [9], which proposed excitability as a general framework for oscillations in PSM cells, is a great start. While referring to other models for inspiration, we should boldly explore the direction of excitable models instead of sticking to past experience and techniques from models that seem more successful at the moment, since forsaking the current mindset is necessary to create a better one.

## References

* [1] Achim Gossler and Martin Hrabě de Angelis. Somitogenesis. Current Topics in Developmental Biology, 38:225–287, 1997.
* [2] Alexander Aulehla, Winfried Wiegraebe, Valerie Baubet, Matthias B. Wahl, Chuxia Deng, Makoto Mark Taketo, Mark B. Lewandoski, and Olivier Pourquié. A β-catenin gradient links the clock and wavefront systems in mouse embryo segmentation. Nature Cell Biology, 10:186–193, 2008.
* [3] Ruth E. Baker, Santiago Schnell, and Philip K. Maini. A clock and wavefront mechanism for somite formation. Developmental Biology, 293(1):116–126, 2006.
* [4] Ruth E. Baker, Santiago Schnell, and Philip K. Maini. A mathematical investigation of a clock and wavefront model for somitogenesis. Journal of Mathematical Biology, 52:458–482, 2006.
* [5] James Lloyd Cotterell, Alexandre Robert-Moreno, and James Sharpe. A local, self-organizing reaction-diffusion model can explain somite patterning in embryos. Cell Systems, 1(4):257–269, 2015.
* [6] Julien Dubrulle and Olivier Pourquié. Coupling segmentation to axis formation. Development, 131(23):5783–5793, 2004.
* [7] Ellen A. G. Chernoff and S. Robert Hilfer. Calcium dependence and contraction in somite formation. Tissue and Cell, 14:435–449, 1982.
* [8] M. A. Hill. Hillh5 stage 16 bf04.jpg. Embryology, 2019.
* [9] Alexis Hubaud, Ido Regev, L. Mahadevan, and Olivier Pourquié. Excitable dynamics and Yap-dependent mechanical cues drive the segmentation clock. Cell, 171:668–682.e11, 2017.
* [10] Julien Dubrulle, Michael J. McGrew, and Olivier Pourquié. FGF signaling controls somite boundary position and regulates segmentation clock control of spatiotemporal Hox gene activation. Cell, 106(2):219–232, July 2001.
* [11] Moises Mallo. Revisiting the involvement of signaling gradients in somitogenesis. The FEBS Journal, 283(8), 2016.
* [12] David McInerney, Santiago Schnell, Ruth E. Baker, and Philip K. Maini. A mathematical formulation for the cell-cycle model in somitogenesis: analysis, parameter constraints and numerical solutions. Mathematical Medicine and Biology: A Journal of the IMA, 21(2):85–113, 2004.
* [13] Stephen Meier. Development of the chick embryo mesoblast: formation of the embryonic axis and establishment of the metameric pattern. Developmental Biology, 73(1):25–45, November 1979.
* [14] Hiroki Nagahara, Yue Ma, Yoshiko Takenaka, Ryoichiro Kageyama, and Kenichi Yoshikawa. Spatiotemporal pattern in somitogenesis: A non-Turing scenario with wave propagation. Phys. Rev. E, 80:021906, Aug 2009.
* [15] Cristóbal Quiñinao, Alain Prochiantz, and Jonathan Touboul. Local homeoprotein diffusion can stabilize boundaries generated by graded positional cues. Development, 142(10):1860–1868, 2015.
* [16] Ruth E. Baker, Santiago Schnell, and Philip K. Maini. Mathematical models for somite formation. Current Topics in Developmental Biology, 81:182–203, 2008.

## Appendix A Code

### Clock and Wavefront

```matlab
function newclockwaves
m = 0;            % a slab system (in terms of slab, cylindrical, spherical)
x = -5:0.01:15;   % setting up AP axis
t = 0:0.05:25;    % setting up time axis
sol = pdepe(m,@pdex4pde,@pdex4ic,@pdex4bc,x,t);
u = sol(:,:,1);
v = sol(:,:,2);
w = sol(:,:,3);
figure
imagesc(x,flipud(t),u);
colormap(gray)
colorbar
set(gca,'YDir','normal')
title('(a) Somitic Factor')
xlabel('AP axis (x)')
ylabel('Time (t)')
figure
imagesc(x,flipud(t),v);
colormap(gray)
colorbar
set(gca,'YDir','normal')
title('(b) Signaling Molecule')
xlabel('AP axis (x)')
ylabel('Time (t)')
figure
imagesc(x,flipud(t),w);
colormap(gray)
colorbar
set(gca,'YDir','normal')
title('(c) FGF8')
xlabel('AP axis (x)')
ylabel('Time (t)')

% --------------------------------------------------------------
function [c,f,s] = pdex4pde(x,t,u,DuDx)
x1 = 1; x2 = 0; xn = 0; k = 10; mu = 0.0001; epsilon = 0.001;
gamma = 0.001; Dv = 50; Dw = 20; n = 1; o = 0; cn = 0.5;
xb = 7.5; epsilon1 = 0;
%n1=(-cn-sqrt(cn^2+4*n*Dw))/(2*Dw);
%n2=(-cn+sqrt(cn^2+4*n*Dw))/(2*Dw);
c = [1; 1; 1];
f = [0.00001; Dv; Dw] .* DuDx; % these are the diffusion coefficients, take D1 \approx 0 for this model
%w1 = n1/(n*(n1-n2))*exp(n2*(-xn+1));
Xu = heaviside(x1-x+cn.*t);
Xv = heaviside(x2-x+cn.*t);
Xw = heaviside(x-xn-cn.*t);
Xb = heaviside(epsilon1-xb+x)*heaviside(epsilon1+xb-x);
F1 = ((u(1)+mu*u(2)).^2)./(gamma+u(1).^2).*Xu-u(1);
F2 = k.*(Xv./(epsilon+u(1))-u(2));
F3 = Xw+o*Xb-n.*u(3);
s = [F1; F2; F3];

%% Set the initial conditions
function u0 = pdex4ic(x)
Dv = 50; Dw = 20; %gamma = 0.001;
k = 10; xn = 0; n = 1; cn = 0.5; epsilon = 0.001; epsilon1 = 0;
lam = sqrt(k/Dv);
A = 1/(1+epsilon-epsilon1);
B = A*sign(x)/(2*cosh(lam*10));
%n0 = 1/2*(1+sqrt(1-4*gamma/k));
n1 = (-cn-sqrt(cn^2+4*n*Dw))/(2*Dw);
n2 = (-cn+sqrt(cn^2+4*n*Dw))/(2*Dw);
w0 = heaviside(xn-x)*(n1/(n*(n1-n2))*exp(n2*(x-xn))) + ...
     heaviside(x-xn)*(n2/(n*(n1-n2))*exp(n1*(x-xn))+1/n);
u0 = [heaviside(-x); A*heaviside(-x)+B*cosh(lam*(10-abs(x))); w0];

%% Set the boundary conditions
function [pl,ql,pr,qr] = pdex4bc(xl,ul,xr,ur,t)
epsilon = 0.001; gamma = 0.001; k = 10; Dv = 50; Dw = 20; n = 1;
%pl = [ul(1).^2./(gamma+ul(1).^2)-ul(1); k.*(1/(epsilon+ul(1))-ul(2)); 0];
%ql = [0.00001; Dv; Dw]; % can't be set to [0,D], need to use a small value to approximate 0
%pr = [0; 0; 1-n.*ur(3)];
%qr = [0.00001; Dv; Dw];
pl = [0;0;0];
ql = [1;1;1];
pr = [0;0;0];
qr = [1;1;1];
```

### Nagahara Discrete

```matlab
%% 1D Simulation of spatial savanna model with diffusion in space
tic

%% Set options for plots/movies
close all
fprintf('\n');
DO_MOVIE = 1;         % i.e. write movie to avi file for playback, else just plots
ONE_D_PLOT = 0;       % plot dynamics of a single point over time
END_STATE_PLOT = 0;   % 2D plot of the final state of the system
THREE_D_PLOT = 0;     % plot two spatial dimensions, time and colour
SPACE_TIME_PLOT = 1;

%% Numerical method parameters
L = 1;          % working on [0,L]
N = 100;        % N+1 grid points
delta = L/N;    % spatial discretization parameter -> 0.01 suggested
h = 0.01;       % time discretisation parameter
n = 50000;      % number of time steps
tau = (n-1)*h;  % simulations time domain is [0,tau]

%% Function definitions
gamma_fun = @(x, a, b) a + b.*x;
f_fun = @(u,v,tau1,gamma,alpha,beta) (u.*(u-alpha).*(1-u)./gamma - v + beta)/tau1;
g_fun = @(u,v,tau2) (u-v)/tau2;

%% Model parameters (values from Nagahara et al. (2009), Phys. Rev. E)
tau1 = 0.588;
tau2 = 32.1;
alpha = 0.4;
beta = 0.33;
a = 0.21;
b = -0.2;         % b = -0.2 in the paper
D1 = 1*10^(-5);   % D too big or small won't work
D2 = 0;           % this is set to zero in Nagahara's paper

%% Set up the initial distributions of the cover types on the grid
% each row is one time step of the simulation
% the initial condition is a pulse of u near x = 0
u(1,:) = 0.2*rand(1,N+1);
v(1,:) = 0.2*rand(1,N+1);
% Compute the birth and mortality matrices as a function of rainfall
X = 0:delta:L;
gamma_grad = gamma_fun(X,a,b); % compute the rainfall gradient along the x-axis

%% The numerical scheme
for i = 2:n
    % compute convolutions for this time step
    progressbar(i,n); % third-party progress-bar utility; safe to comment out if unavailable
    u(i,:) = u(i-1,:) + h*( f_fun(u(i-1,:),v(i-1,:),tau1,gamma_grad,alpha,beta) + ...
        D1*([0, u(i-1,1:(end-1))] - 2*u(i-1,:) + [u(i-1,2:end),0])./(delta*delta) );
    % NB the zeros reflect that we are using an "open boundary"
    v(i,:) = v(i-1,:) + h*( g_fun(u(i-1,:),v(i-1,:),tau2) + ...
        D2*([0, v(i-1,1:(end-1))] - 2*v(i-1,:) + [v(i-1,2:end),0])./(delta*delta) );
end

% The following output is useful when trying to discern whether or not
% a solution is stationary in time
fprintf('\n');
fprintf('The maximum changes on the grid for each variable at the last time step were:\n');
fprintf(['u: ',num2str(max(abs(u(n,:)-u(n-1,:)))),'\n']);
fprintf(['v: ',num2str(max(abs(v(n,:)-v(n-1,:)))),'\n']);
toc

%% Visualise the solution...
% either in a movie...
if DO_MOVIE
    subplot_vis = struct('cdata',[],'colormap',[]);
    w = VideoWriter('reaction_diffusion_model.avi');
    w.FrameRate = 4;
    open(w)
    f = figure;
    for j = 1:300:n
        f.Name = ['Simulation time: t = ', num2str((j-1)*h)];
        ax1 = subplot(2,1,1);
        plot(X,u(j,:));
        title('u');
        ax2 = subplot(2,1,2);
        plot(X,v(j,:));
        title('v');
        xlim([ax1 ax2],[0 L])
        ylim([ax1 ax2],[-0.5 1.5])
        writeVideo(w,getframe(gcf))
        subplot_vis(j) = getframe(gcf);
    end
    close(w)
end

%% Space-time plot of dynamics
if SPACE_TIME_PLOT
    fST = figure;
    fST.Name = 'Evolution over time';
    subplot(2,1,1)
    h1 = pcolor(u);
    shading interp
    title('u');
    colorbar
    set(h1, 'EdgeColor', 'none');
    ylabel('Time');
    caxis([-0.5,1.5])
    subplot(2,1,2)
    h2 = pcolor(v);
    shading interp
    title('v');
    colorbar
    set(h2, 'EdgeColor', 'none');
    caxis([-0.5,1.5])
end
```
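To reproduce the D sweep of Figure 18, the simplest route is to rerun the script above with D1 set to each of the three quoted values; the wrapper below assumes a hypothetical refactor of the script into a function nagahara_rd(D1), which is not part of the original code.

```matlab
% Hypothetical usage sketch: assumes the Nagahara script above has been
% wrapped as a function nagahara_rd(D1) (a refactor, not original code).
for D1 = [1e-3, 1e-4, 1e-5]   % panels (a), (b), (c) of Figure 18
    nagahara_rd(D1);
end
```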
# Letter of Intent: Muonium R&D/Physics Program at the MTA

C. Gatto,5,6 C. Izzo,2 C. J. Johnstone,2 D. M. Kaplan,3,4,† K. R. Lynch,2 D. C. Mancini,3 A. Mazzacane,2 B. McMorran,7 J. P. Miller,1 J. D. Phillips,3,‡ T. J. Phillips,3 R. D. Reasenberg,8 T. J. Roberts,3,4 J. Terry3

1Boston U., 2Fermilab, 3Illinois Institute of Technology, 4Muons, Inc., 5INFN Napoli, 6Northern Illinois U., 7U. Oregon, 8U. California San Diego CASS

†Spokesperson. ‡Also at Zurich Instruments.

(December 5, 2022)

## 1 Introduction

There is a need for a high-efficiency source of muonium (M $\equiv\mu^{+}e^{-}$, chemically a light isotope of hydrogen), traveling as a beam in vacuum, for fundamental muon measurements, sensitive searches for symmetry violation, and precision tests of theory [1]. Currently PSI in Switzerland is the world leader for such research. With PIP-II, Fermilab has the potential to eclipse PSI and become the new world leader. It is prudent to begin the R&D now in order to be ready when PIP-II comes online. Fermilab's MeV Test Area (MTA) at the 400 MeV H$^-$ Linac has a low-energy muon beamline suitable for this R&D, with the potential to compete with PSI for this physics in the pre-PIP-II near term as well.

Key muonium measurements include the search for M-$\overline{\rm M}$ conversion, precision measurement of the M atomic spectrum, and the study of antimatter gravity using M. Furthermore, the J-PARC $g-2$ experiment proposes to use a low-energy $\mu^{+}$ beam produced by photo-ionizing a slow beam of muonium, but the needed high-intensity muonium beam has yet to be demonstrated. The technique we propose may form a suitable muonium source for such a $g-2$ measurement as well as for other applications of slow muon beams.

M-$\overline{\rm M}$ conversion is a double charged-lepton flavor-violating (CLFV) reaction, allowed (albeit at an undetectably small rate) via neutrino mixing. It may be no less likely — and in some models, more likely — than $\mu$ to $e$ conversion [1]. Thus in a thorough CLFV research program it should be studied as well as Mu2e. The best current limit, $P_{\rm M\overline{M}}\leq 8.3\times 10^{-11}$ (90% C.L.) in 0.1 T field [2], was published over 20 years ago and, given the technical progress since then, is ripe for reexamination.

As a pure QED bound state of two point-like particles, muonium offers a more direct test of theory than hydrogen, free of hadronic and finite-size effects. Precise predictions and measurements have been made of its 1S–2S [3] and hyperfine [4] splittings (to 4 and 12 ppb, respectively), again over 20 years ago, and experiments are now under way at PSI and J-PARC to improve them.

Antimuon gravity has never been measured, but its measurement now appears feasible thanks to the new approach [5] described below. While the weak equivalence principle (WEP) of general relativity implies that all forms of matter should act identically in a gravitational field, and precision measurements supporting it have been made using torsion pendula, the Earth–Moon–Sun system, and levitated cylinders in Earth orbit [6], it has been argued that the WEP may not hold for antimatter [7]. Moreover, in theories that assume maximal WEP violation by antimatter (in which the gravitational acceleration of antimatter on Earth, $\overline{g}$, satisfies $\overline{g}=-g$), major puzzles of cosmology can be resolved with no further assumptions and no need of the as-yet-unobserved dark matter and dark energy [8].
Or a 5th force coupling non-universally to leptons, as suggested by muon $g-2$ and $B$-decay anomalies [9], might cause $\overline{g}$ and $g$ to differ slightly. ## 2 Approach Our method (proposed by PSI’s D. Taqqu) relies on the efficient conversion of positive muons to muonium atoms in superfluid helium [10] combined with the predicted expulsion of muonium atoms at the superfluid surface due to the predicted large positive chemical potential of muonium in superfluid helium [11, 12]. An electric field maintained in the superfluid helium by means of an electron pool at the surface will drift stopped $\mu^{+}$ to the surface, where they will combine with electrons to form muonium and be expelled into the vacuum. ## 3 Apparatus, R&D Tasks, Budget Figure 1: (left) Conceptual sketch of “Phase III” muonium-gravity apparatus: degraded surface muons stop in SFHe layer, producing upward-directed M beam, deflected into horizontal by SFHe-coated reflector [12] and entering 3-grating interferometer; calibration soft X-ray beam enters from left; (right) muonium detector concept: fast decay $e^{+}$ detected in barrel SciFi detector triggered by scintillating-bar barrel hodoscope, slow $e^{-}$ in $xy$ scintillator hodoscope array with SiPM readout after electrostatic acceleration ($xy$ hodoscope and accelerating rings not shown). Of the three key muonium measurements described above, muonium gravity is the one that has never been measured, uses the smallest apparatus, and will clearly fit within the MTA; the others require more evaluation and may have to await PIP-II. The key pieces of apparatus for our proposed program are then (1) a small cryostat that can be operated at $\sim 0.1$ K, within which a pool of superfluid helium (SFHe) can be created and maintained; the depth of the pool will be $\sim$ 100 $\mu$m, sufficient to stop an appreciable fraction of the slow $\mu^{+}$ beam incident from below after its energy has been degraded by traversal of the cryostat wall; (2) a 3-grating atom interferometer with $\approx$ 100 nm grating pitch and few-cm grating separation along $z$; and (3) an efficient detector of both the M decay products and a calibration X-ray beam; beams enter through thin spots or beryllium windows in the cryostat walls. The apparatus is shown in Fig. 1 and will be installed within the sample volume of a dilution refrigerator, operated at $\sim$ 0.1 K to reduce the SFHe vapor pressure and suppress M-He scattering. Note that the actual grating separation will be determined after a thorough optimization study; Fig. 1(left) shows a separation corresponding to 2 muon lifetimes at the predicted M velocity of 6.3 mm/$\mu$s. Simulations show that the sign of $\overline{g}$ can be determined to 5$\sigma$ in a few days of running at $10^{5}$ Mu/s incident on the interferometer; if systematics can be adequately controlled, a 10% measurement can be made in about a month. Figure 2: Conceptual sketch of apparatus to test and characterize charge pool at surface of superfluid helium layer, which creates drift field in helium. (A), (B), (C): field electrodes; (T): tungsten electron-injection tips. As a condition to embark on the gravity measurement, we must first demonstrate that Taqqu’s proposed muonium-beam formation method works as predicted. We are greatly aided in this by the recent installation of a low-energy $\pi/\mu$ beamline in the Fermilab MTA. 
Designed to produce 40 MeV/$c$ muons for a muon-catalyzed fusion ($\mu$CF) experiment, it can easily be tuned for efficient transport of ($\lesssim 29$ MeV/$c$) surface muons, as we have shown in simulation. Our first task, which can proceed as soon as the $\mu$CF experiment allows, is to verify these simulations by characterizing the $\mu^{+}$ spectrum experimentally as a function of magnet currents. Our second task (which can start in parallel with our first) is to demonstrate sufficient control of the SFHe charge-pool technology to maintain the needed drift field in the helium (Fig. 2). This can be done using an existing IIT cryostat, vacuum pumps, and cryocoolers, preferably at Fermilab. The total scale of the project is a few M$, about half of which is personnel costs.

## 4 Draft Schedule

We anticipate that this program will require 3–5 years in total.

Phase I: Use G4beamline to simulate the muon beam, optimize its parameters for muonium production in superfluid helium, and characterize the MTA surface-muon beam experimentally; test and characterize the electric-field-in-helium apparatus; design the detectors needed to characterize the produced muonium.

Phase II: Obtain a suitable dilution refrigerator to reach the $<1$ K temperature range at which the process works best, build the detectors, install the apparatus within the sample volume, measure muonium production, and determine the optimal operating parameters and maximum muonium yield.

Phase III: Build the additional apparatus needed to observe and measure the effect of the Earth's gravity on muonium and carry out a first measurement.

## References

* [1] A. A. Petrov, R. Conlin, C. Grant, “Studying $\Delta L=2$ Lepton Flavor Violation with Muons,” Universe 8 (2022) 169, https://www.mdpi.com/2218-1997/8/3/169.
* [2] L. Willmann et al., “New Bounds from a Search for Muonium to Antimuonium Conversion,” Phys. Rev. Lett. 82 (1999) 49.
* [3] V. Meyer et al., “Measurement of the 1s–2s Energy Interval in Muonium,” Phys. Rev. Lett. 84 (2000) 1136.
* [4] W. Liu et al., “High Precision Measurements of the Ground State Hyperfine Structure Interval of Muonium and of the Muon Magnetic Moment,” Phys. Rev. Lett. 82 (1999) 711.
* [5] A. Antognini et al., “Studying Antimatter Gravity with Muonium,” Atoms 6 (2018) 17.
* [6] T. A. Wagner, S. Schlamminger, J. H. Gundlach, E. G. Adelberger, “Torsion-balance tests of the weak equivalence principle,” Class. Quantum Gravity 29 (2012) 184002; J. G. Williams, S. G. Turyshev, D. H. Boggs, “Progress in Lunar Laser Ranging Tests of Relativistic Gravity,” Phys. Rev. Lett. 93 (2004) 261101; P. Touboul et al. [MICROSCOPE Collaboration], “MICROSCOPE Mission: Final Results of the Test of the Equivalence Principle,” Phys. Rev. Lett. 129 (2022) 121102.
* [7] M. M. Nieto, T. Goldman, “The Arguments Against ‘Antigravity’ and the Gravitational Acceleration of Antimatter,” Phys. Rep. 205 (1991) 221–281.
* [8] G. Chardin, “Motivations for Antigravity in General Relativity,” Hyp. Int. 109 (1997) 83; A. Benoit-Lévy, G. Chardin, “Introducing the Dirac–Milne universe,” Astron. & Astrophys. 537 (2012) A78; G. Chardin et al., “MOND-like behavior in the Dirac–Milne universe: Flat rotation curves and mass versus velocity relations in galaxies and clusters,” A&A 652 (2021) A91; G. Chardin, “Experimental and Observational Tests of Antigravity,” arXiv:2210.03445 [astro-ph.CO] (2022).
* [9] R. Aaij et al. (LHCb Collaboration), “Test of lepton universality in beauty-quark decays,” Nat. Phys.
18 (2022) 277 and references therein. * [10] R. Abela et al., “Muonium in liquid helium isotopes,” JETP Lett. 57 (1993) 157. * [11] D. Taqqu, “Ultraslow Muonium for a Muon beam of ultra high quality,” Phys. Procedia 17 (2011) 216. * [12] V. G. Luppov et al., “Focusing a Beam of Ultracold Spin-Polarized Hydrogen Atoms with a Helium-Film-Coated Quasiparabolic Mirror,” Phys. Rev. Lett. 71 (1993) 2405.
# Possible Implications of Relatively High Levels of Initial 60Fe in Iron Meteorites for the Non-Carbonaceous – Carbonaceous Meteorite Dichotomy and Solar Nebula Formation

Alan P. Boss

Earth & Planets Laboratory, Carnegie Institution for Science, 5241 Broad Branch Road, NW, Washington, DC 20015-1305 <EMAIL_ADDRESS>

###### Abstract

Cook et al. (2021) found that iron meteorites have an initial abundance ratio of the short-lived isotope 60Fe to the stable isotope 56Fe of 60Fe/56Fe $\sim$ $(6.4\pm 2.0)\times 10^{-7}$. This appears to require the injection of live 60Fe from a Type II supernova (SN II) into the presolar molecular cloud core, as the observed ratio is over a factor of ten higher than would be expected in the ambient interstellar medium (ISM) as a result of galactic chemical evolution. The supernova triggering and injection scenario offers a ready explanation for an elevated initial 60Fe level, and in addition provides a physical mechanism for explaining the non-carbonaceous – carbonaceous (NC-CC) dichotomy of meteorites. The NC-CC scenario hypothesizes that the solar nebula first accreted material that was enriched in supernova-derived nuclides, and then later accreted material depleted in supernova-derived nuclides. While the NC-CC dichotomy refers to stable nuclides, not short-lived isotopes like 60Fe, the SN II triggering hypothesis provides an explanation for the otherwise unexplained change in nuclides being accreted by the solar nebula. Three-dimensional hydrodynamical models of SN II shock-triggered collapse show that after triggering collapse of the presolar cloud core, the shock front sweeps away the local ISM while accelerating the resulting protostar/disk to a speed of several km/s, sufficient for the protostar/disk system to encounter within $\sim$ 1 Myr the more distant regions of a giant molecular cloud complex that might be expected to have a depleted inventory of supernova-derived nuclides.

hydrodynamics — ISM: clouds — ISM: supernova remnants — planets and satellites: formation — protoplanetary disks — stars: formation

## 1 Introduction

Laboratory studies of the initial abundances of the short-lived isotope 60Fe (half-life of 2.62 Myr) have produced values differing by orders of magnitude. Tachibana et al. (2006) determined an initial ratio of 60Fe/56Fe $\sim 5-10\times 10^{-7}$ in their studies of high Fe/Ni ferromagnesian chondrules from the ordinary chondrites (OC) Semarkona and Bishunpur. Tang & Dauphas (2012) performed whole rock analyses of a range of meteorites, including unequilibrated ordinary chondrites (UOC), and found a much lower initial abundance ratio, 60Fe/56Fe $\sim 1.15\times 10^{-8}$, a level they considered to be representative of the general ISM. However, when Mishra & Goswami (2014) examined seven chondrules from UOC meteorites, they found an initial ratio of 60Fe/56Fe $\sim 7\times 10^{-7}$. Mishra & Chaussidon (2014) studied three chondrules from the OC Semarkona and the carbonaceous chondrite Efremovka, finding initial ratios of 60Fe/56Fe in the range of $\sim 2-8\times 10^{-7}$. Mishra et al. (2016) inferred ratios of $\sim 8-11\times 10^{-7}$ for chondrules from an UOC, while Telus et al. (2018) found initial ratios in the range of $\sim 0.5-3\times 10^{-7}$ in other UOC chondrules. Telus et al.
(2016) attributed the differences between whole rock analyses and in situ studies of individual chondrules to aqueous alteration along chondrule fracture lines, either on the parent body or prior to recovery on the Earth, which could skew the whole rock analyses toward lower initial 60Fe/56Fe ratios. Trappitsch et al. (2018) reanalyzed a Semarkona chondrule with a different in situ technique, finding a low initial ratio of 60Fe/56Fe $\sim(3.8\pm 6.9)\times 10^{-8}$ that is consistent with the low end of the range found by some other in situ analyses and with the low values found by the whole rock measurements of Tang & Dauphas (2012). The Trappitsch et al. (2018) results allow a solar system initial ratio as high as $1.07\times 10^{-7}$. The Tang & Dauphas (2012) result has been accepted by some as evidence of meteoritic 60Fe being the result of galactic chemical evolution (e.g., Forbes et al. 2021). Others have been more circumspect. Vescovi et al. (2018) simply concluded that the initial 60Fe/56Fe ratio lies between $10^{-8}$ and $10^{-6}$, while Lugaro et al. (2018) decided that the initial 60Fe/56Fe ratio has not been determined well enough to draw any conclusions about the source of this short-lived isotope.

Given this checkered past, the Cook et al. (2021) study represents a potentially transformational approach to determining the initial 60Fe/56Fe ratio. Rather than search for Fe-Ni-rich phases in the primarily silicate mineralogy of chondritic meteorites, Cook et al. (2021) analyzed 13 samples from two groups of magmatic iron meteorites, the common octahedrites (group IID) and the rare ataxites (group IVB), the former with average to high Ni contents, the latter with very high Ni. Magmatic iron meteorites offer the advantage of samples of a well-mixed iron melt, sidestepping the concerns about whole rock versus in situ measurements that characterize chondritic meteorites. The study was able to rule out contamination of the Fe and Ni isotopes by galactic cosmic rays. Using the common assumption that the iron meteorites formed from the chondritic reservoir, Cook et al. (2021) found that an initial ratio of 60Fe/56Fe $\sim(6.4\pm 2.0)\times 10^{-7}$ characterized their samples of these two groups of iron meteorites, a value considerably higher than expected as a result of galactic chemical evolution (GCE; e.g., Huss et al. 2009). Cook et al. (2021) concluded that the solar system must have been injected with additional live 60Fe, well above the level to be expected from GCE, and noted that deriving the injected additional 60Fe from a SN II also best explained the deficits in 60Ni and 56Fe abundances they measured in their sample of iron meteorites.

If correct, the Cook et al. (2021) study appears to settle the brouhaha over the initial abundance of 60Fe in the solar nebula, and thus provides meteoritical evidence in support of the supernova triggering and injection hypothesis that was first proposed by Cameron & Truran (1977). Lee et al. (1976) found evidence of live initial 26Al (half-life of 0.72 Myr) in Ca,Al-rich refractory inclusions (CAIs) from the Allende meteorite, leading Cameron & Truran (1977) to hypothesize nucleosynthesis of 26Al in a SN II and rapid incorporation of the 26Al into CAIs following injection into the presolar cloud by the SN II shock wave.
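Returning to the 60Fe numbers, the gap between the Cook et al. (2021) ratio and the GCE-like value of Tang & Dauphas (2012) corresponds to several half-lives of free decay; the back-of-envelope check below is ours, not a calculation from either paper.

```matlab
% Back-of-envelope check (not from the paper): the free-decay interval that
% would carry the Cook et al. (2021) ratio down to the Tang & Dauphas (2012)
% GCE-like value, given the 2.62 Myr half-life of 60Fe.
tHalf = 2.62;                      % 60Fe half-life [Myr]
r0    = 6.4e-7;                    % Cook et al. (2021) initial 60Fe/56Fe
rGCE  = 1.15e-8;                   % Tang & Dauphas (2012) value
tDecay = tHalf * log2(r0/rGCE)     % ~15 Myr of free decay required
```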
26Al is also synthesized during the Wolf-Rayet (WR) phase of massive stars, prior to a core-collapse supernova explosion, and mixed into the interstellar medium by the WR star outflow, and so the solar system level of 26Al has been argued to be a result of GCE (e.g., Young 2016; Reiter 2020) or of triggering by a WR outflow (Dwarkadas et al. 2017). However, 60Fe is not produced in significant amounts by WR stars, but it, like 26Al, is readily produced by SN II (e.g., Tur et al. 2010), making the initial 60Fe abundance the key meteoritical test for a SN II being the primary source of the live 60Fe. Protosolar cosmic rays have also been suggested as a source of 26Al (most recently by Gaches et al. 2020), but such cosmic rays fail to produce the required 60Fe.

A long series of papers, from Foster & Boss (1996) through to Boss (2019), has used detailed multidimensional hydrodynamics codes to show that suitable SN II shock waves are able to trigger the self-gravitational collapse of a molecular cloud core, while simultaneously injecting the short-lived isotopes into the collapsing cloud through the Rayleigh-Taylor instability at the shock-cloud boundary. These papers have proven the viability of the Cameron & Truran (1977) hypothesis for the origin of the short-lived isotopes inferred from meteoritical analyses such as those of Cook et al. (2021).

The purpose of this paper is to learn whether this series of shock-triggered collapse models might have additional implications for explaining the non-carbonaceous – carbonaceous (NC-CC) dichotomy of meteorites. Warren (2011) was the first to show that meteoritical Cr, Ti, and O stable isotope abundances fall into two distinct groups, the NC and CC. Recent work by Nanne et al. (2019) and Lichtenberg et al. (2021) has developed models for creating and preserving the NC-CC dichotomy. Both scenarios hypothesize that the solar nebula first accreted material that was enriched in supernova-derived nuclides, and then later accreted material depleted in supernova-derived nuclides. No physical mechanism is offered to explain this significant difference in accreted matter other than isotopic heterogeneity in the local ISM and an extended period of accretion from the ISM. Thus we seek here to find a more dynamically based explanation for the NC-CC dichotomy. We extend the preferred triggered-collapse model of Boss (2019) to a much larger calculational volume and use it to track the evolution of matter initially in four distinct initial regions to learn the effect of shock-wave triggering on the time evolution of matter accreted by the presolar cloud and nebula.

## 2 Numerical Hydrodynamics Code

The new model presented here was calculated in the same manner as the suite of shock-triggered collapse and injection models of Boss (2019). The three-dimensional hydrodynamics code used was once again FLASH 4.3, based on the algorithms developed by Fryxell et al. (2000). The FLASH codes are adaptive-mesh refinement (AMR) codes, ideal for following the sharp gradients in density and temperature associated with shock fronts as they traverse more uniformly varying regions of the ISM, such as molecular cloud cores. In addition to the AMR feature of FLASH, the models use a sink particle (as developed by Federrath et al. 2010) to represent the newly formed, high density protostar as a point source of gravity able to accrete matter from its surroundings.
As in Boss (2019), the new model uses the FLASH 4.3 multigrid Poisson solver, and as a result uses a Cartesian coordinate grid with one top grid block with eight top grid cells in each coordinate direction. The model begins with a maximum of six levels of refinement on the top grid block. Because the computational volume is a rectangular cuboid with sides of length $4\times 10^{17}$ cm in $\hat{x}$ and $\hat{z}$ and $1.12\times 10^{18}$ cm in $\hat{y}$, the resulting cells are not cubical, being almost three times as long in the $\hat{y}$ direction as in $\hat{x}$ and $\hat{z}$. [Note that in Boss (2019), the rectangular cuboid had a significantly smaller length in $\hat{y}$ of $8.2\times 10^{17}$ cm.] A single increase in the level of refinement means that each of the three sides of the computational cells is halved in length, leading to the increased spatial resolution that is needed to accurately follow the dynamics of shock-triggered protostar collapse. With six levels of refinement, the smallest cell size in $\hat{x}$ and $\hat{z}$ is $4\times 10^{17}/(2^{5}\times 8)=1.6\times 10^{15}$ cm, with a size 2.8 times larger in $\hat{y}$. When the refinement level is increased to seven, the smallest cell size is $0.8\times 10^{15}$ cm in $\hat{x}$ and $\hat{z}$.

## 3 Initial Conditions

The model presented here is identical to one of the models (O) published in Boss (2019), with two exceptions. First, the length of the calculation box was extended in the direction of propagation of the shock front in order to learn more about the effect of the shock front on the ISM gas and dust downstream from the target molecular cloud core. Second, several new color fields were defined, in order to better track the fate of the surrounding ISM matter. The color fields are defined as being initially nonzero in specific regions of the initial configuration, such as inside the target cloud or inside the shock front, and these fields thereafter evolve and trace the location and density of this material as the calculation proceeds. The models in Boss (2019) focused on the material initially inside the shock front, in order to estimate the SN II injection efficiency, and did not specify color fields that tracked the material initially in the target cloud or the surrounding ISM. As a result, the Boss (2019) models could not separate the evolution of the initial target cloud material from that of the surrounding ambient ISM, which is the key goal of this paper.

As in Boss (2019), the initial conditions consist of a stationary target molecular cloud core that is about to be struck by a planar shock wave (Figure 1). The target cloud core and the surrounding gas are initially isothermal at 10 K, while the shock front and post-shock gas are isothermal at 1000 K. The target cloud consists of a spherical cloud core with a radius of 0.053 pc and a Bonnor-Ebert radial density profile. The central density is chosen to produce an initial cloud with a mass of 3.04 $M_{\odot}$, embedded in a background rectangular cuboid of gas with a mass of 1.63 $M_{\odot}$ and with random noise in the background density distribution. The cloud core is assumed to be in solid body rotation about the direction of propagation of the shock wave (the $-\hat{y}$ direction) at an angular frequency $\Omega_{i}=3\times 10^{-14}$ rad s-1. The initial shock wave has a speed of 40 km s-1, a width of $3\times 10^{-4}$ pc, and a density of $7.2\times 10^{-18}$ g cm-3.
These choices are based on previous modeling, which examined a wide range of shock parameters (Boss et al. 2010; Boss & Keiser 2010), in order to choose shock parameters suitable for triggered collapse and injection of SN II-derived short-lived isotopes.

## 4 Results

Figures 1 through 5 show the initial configuration and the four different color fields used to trace the evolution of different regions in the model. The first color field (denoted mass scalar 1, or ms1, in the FLASH code) depicts the matter in the target cloud, while ms2 traces the matter in the shock front, ms3 does the same for the matter behind the shock front, and ms4 follows the ambient ISM surrounding the target cloud core.

The time evolution of this model proceeds exactly the same as the corresponding model O in Boss (2019): the shock front strikes the top edge of the target cloud core, leading to a Rayleigh-Taylor instability at the shock-cloud interface. This instability allows shock front material carrying SN II-produced short-lived isotopes such as 60Fe and 26Al to be injected into the target cloud core, which is soon compressed sufficiently by the shock front to initiate sustained, self-gravitational collapse. Once the collapsing region exceeds a critical density of $10^{-15}$ g cm-3 (0.048 Myr), a sink cell is formed at the location of the density maximum, and this sink cell thereafter accretes the gas and dust in its vicinity, using the same sink cell parameters as used in model O in Boss (2019). The model started with a maximum of six levels of refinement, which was increased to seven levels after 0.050 Myr of evolution. The portions of the shock that do not strike the target cloud exit off the calculational grid by 0.010 Myr.

Figures 6 through 10 depict the results for the density and four color fields at the final time calculated for the model, 0.063 Myr. This new AMR model required a run time of three months on three 32-core nodes of the Carnegie memex cluster. At the final time, the sink cell is located close to the center of the density maximum evident in Figure 6 (at $y=-1.32\times 10^{17}$ cm), has been accelerated by the shock front to a speed of 3.2 km/s in the direction of the initial shock front, and has acquired a mass of $\sim 0.5M_{\odot}$. The protostar is accreting mass at a rate of $\sim 10^{-6}M_{\odot}$/yr, implying that it would grow to a final mass of $\sim 1M_{\odot}$ in $\sim 0.5$ Myr if that accretion rate could be sustained indefinitely. However, at the final time, the mass accretion rate is in decline, as the nearby gas and dust available for accretion is being depleted. Figure 7 shows that the initial cloud core has been only partially triggered into self-gravitational collapse, with the majority of the initial mass of 3.04 $M_{\odot}$ having been accelerated beyond the gravitational reach of the accreting protostar formed by shock wave compression. Figure 8 indicates that, as in the previous model O (Boss 2019), the shock front material has been mixed into the same region occupied by the initial cloud core gas and dust, leading to an injection efficiency into the collapsing protostar essentially the same as was previously determined, as discussed in detail in Boss (2017). Figure 9 shows that the low-density, hot post-shock gas follows right behind the shock front matter with only minimal mixing into the compressed region of high density gas and dust.
Figure 10 presents the key result of this simulation: the material initially in the immediate vicinity of the target cloud core has been quite efficiently swept away from the region of the protostar and of the material that it is still accreting. Only an insignificant amount of the initial ambient gas and dust remains close enough to the protostar to be accreted, less than 1/20 (by mass) compared to the shock front matter at densities above $2\times 10^{-16}$ g cm-3. The model shows that SN II-derived isotopes will be injected thereafter, with only minimal subsequent accretion of isotopes from the initial ambient ISM in the immediate vicinity of the target molecular cloud core, which is assumed to be initially pristine and unpolluted by SN II shock ejecta. This sweeping away of the local ISM allows the protostar to experience two distinct phases of accretion, first due to SN II shock injection, and later during its subsequent traverse of the GMC.

Figures 11 and 12 provide close-in views of the model at the final evolution time, showing the newly formed protostar/protoplanetary disk surrounding the location of the protostar (i.e., the sink cell). Figure 12 shows that only faint wisps of the initial ambient ISM material still remain in the region of the protostar. Figure 13 shows that the shock wave isotopes have been injected deep within the protostar and disk system, as a result of the Rayleigh-Taylor fingers that pierce the target cloud core early in the evolution. Finally, Figure 14 depicts the temperature distribution at the final time, showing that the protostar is located well behind the high temperature (1000 K) shocked region and retains its initial temperature of 10 K as a result of molecular line cooling in optically thin regions (Boss et al. 2008). The protostar thereafter continues its evolution into the GMC with its ambient temperature of 10 K.

Figure 11 superficially resembles Figure 5 of Boss (2019), which depicts the results of a model (N) with an initial cloud rotation rate 2/3 of that in the current model, the latter based on model O and shown in Figure 6 of Boss (2019). In Boss (2019), model O produced a large-scale disk by 0.121 Myr, with a diameter of order 1000 au, whereas model N did not produce a large-scale disk by the time that the model was halted, 0.081 Myr. The present model shown in Figure 11 was halted at 0.063 Myr, even earlier than model N in Boss (2019), as the result of an increasingly small time step problem that did not afflict model O in Boss (2019). As the present model is identical to model O, save for the extension of the numerical grid from $y=-2\times 10^{17}$ cm to $y=-5\times 10^{17}$ cm, effected in order to better study the fate of the surrounding ISM matter, one must ascribe this small time step problem to the increasingly elongated computational cells in $\hat{y}$ that had to be employed in order to use the improved multigrid Poisson solver of FLASH 4.3, as explained by Boss (2019). Hence one can expect that the present model would produce a large-scale disk identical to that of model O if the model could be calculated as far as model O in Figure 6 of Boss (2019), i.e., to 0.121 Myr.
A careful examination of the present model in Figure 11 shows that a disk is beginning to form, as there are distinct "horns" of accreting matter both above and below the location of the central protostar, with the horns above the protostar clearly bent outward compared to those of model N in Figure 5 of Boss (2019), indicative of the higher initial rotation rate of the present model compared to model N and of the subsequent disk formation in model O. Regardless of the small time step problem, the present model's extension in $\hat{y}$ allows Figure 10 to demonstrate that the shock front effectively clears away the residual ISM matter from the vicinity of the protostar and disk system, a fact that cannot be gleaned from model O's Figure 6 in Boss (2019), where the disk system is about to reach the end of the numerical grid. The mass of the initial shock front is 0.50 $M_{\odot}$ and it is traveling at a speed of 40 km s$^{-1}$, giving it a total momentum of 20 $M_{\odot}$ km s$^{-1}$. At the final time, the protostar has a mass of $\approx 0.5M_{\odot}$ and has been accelerated to 3.2 km s$^{-1}$ in the direction of the initial shock motion, giving it a total momentum of $\approx 1.6M_{\odot}$ km s$^{-1}$. Thus the protostar has acquired less than 1/10 of the total initial momentum of the shock front. Moving at 3.2 km s$^{-1}$, the protostar/disk system will traverse $\sim$ 3 pc through the background giant molecular cloud (GMC) complex in 1 Myr, accreting gas and dust from other, more distant regions of the GMC that have not been polluted recently by a SN II explosion. Given that GMC diameters range from $\sim$ 5 pc to $\sim$ 200 pc, there would appear to be adequate GMC volume for a protostar launched at $\sim$ 3 km s$^{-1}$ to scatter off other protostars in the GMC star-forming regions and accrete further ISM material and non-SN II-derived isotopes.

## 5 Implications for the NC-CC Dichotomy

By this point, the implications of the model should be clear: it provides a physically reasonable explanation for the NC-CC dichotomy, as advanced by Nanne et al. (2019, see their Figure 7) and Lichtenberg et al. (2021, see their Figure 6). These authors hypothesized that the solar nebula initially accreted material that was enriched in SN II-derived isotopes, and some time later accreted material depleted in SN II-derived isotopes. While no time scale for these two accretion phases is presented by either Nanne et al. (2019) or Lichtenberg et al. (2021) other than the phrases "early infall" and "late infall", the present model suggests that the "early infall" phase lasted for $\sim 0.1$ Myr, based on the final model time reached, while the "late infall" phase lasted for $\sim 1$ Myr, based on the speed of the protostar and GMC sizes. These suggestions are supported by the work on the NC-CC dichotomy by Kruijer et al. (2020), whose Figure 5 explicitly states that the "early infall" occurs at "t = 0" Myr, implying a time scale much less than 1 Myr, along with assuming the simultaneous formation of CAIs, while "late infall" stops by 1 Myr, when Jupiter is supposed to have formed and prohibited mixing of the inner NC and outer CC reservoirs. Isotopic heterogeneity throughout the GMC where the solar system formed is required, but the present model provides a physical reason for the change in the SN II-derived isotopic composition of the matter being accreted by the protosun and solar nebula. Jacquet et al.
(2019) make a similar argument for isotopic heterogeneity in the matter accreted by the solar nebula without specifying a physical mechanism to explain the origin of the heterogeneity. Others have argued that physicochemical processing of dust grains could be a better explanation for certain isotopic anomalies (150Nd) than heterogeneous infall (Saji et al. 2021). Hopp et al. (2022) have argued that Fe isotopic abundances in iron meteorites reflect the same NC-CC dichotomy as other meteorites. They conclude that the Fe isotope dichotomy can be explained by nuclear statistical equilibrium in either Type Ia SN or in core-collapse SN, i.e., SN II. Ideally, the present calculation could be extended indefinitely in time to capture both the "early infall" and "late infall" phases advanced by Nanne et al. (2019) and Lichtenberg et al. (2021). This is not possible for the present model for two reasons. First, increasingly smaller time steps are required to push the calculation any farther in time than the 0.063 Myr shown in Figures 6 through 12, effectively halting this particular model. Second, even without the time step problem, the protostar and disk system seen in Figure 12 at $y=-1.3\times 10^{17}$ cm would reach the end of the computational volume at $y=-5\times 10^{17}$ cm within about another 0.038 Myr, traveling at 3.2 km s$^{-1}$, too quickly to transition to the "late infall" phase. Note that prior to reaching the bottom of the computational volume, the dense gas that is accreting onto the central system (orange-red colors in Figure 12) will have been accreted, as the free fall time for collapse at a gas density of $10^{-16}$ g cm$^{-3}$ is 0.0067 Myr, effectively ending the "early infall" phase. If the present scenario is deemed interesting, future work would be needed to consider a more global model of an entire GMC struck by a SN II shock wave that follows the progression of the resulting shock-triggered protostar(s) as they traverse the GMC. In lieu of such an ambitious three-dimensional hydrodynamics model, we can predict the mass accretion rate that should characterize the "late infall" phase. The protostar will accrete gas from the region of the GMC it traverses at a rate given by the Bondi-Hoyle-Lyttleton (BHL) formula of Ruffert & Arnett (1994): $\dot{M}_{BHL}=\pi R_{A}^{2}\rho_{A}v_{A}$, where the accretion radius is $R_{A}=2GM/v_{A}^{2}$, with $G$ the gravitational constant, $\rho_{A}$ the ambient GMC gas density, and $v_{A}$ the speed of the protostar with respect to the ambient gas. With an ambient GMC gas density of $\rho_{A}=10^{-21}$ g cm$^{-3}$, typical of GMC average densities and the same as the background gas in the present model (see Figure 1), $v_{A}=3.2$ km s$^{-1}$, and a protostar and disk system mass of $1M_{\odot}$, we obtain a mass accretion rate of $\dot{M}_{BHL}=10^{-4}M_{\odot}$/Myr of ambient GMC gas and dust. If the protostar system should pass through a dense molecular cloud core with a mean density of $\sim 10^{-19}$ g cm$^{-3}$, as assumed in the present model (Figure 1), the mass accretion rate would increase to $\dot{M}_{BHL}\approx 10^{-2}M_{\odot}$/Myr while traversing the molecular cloud core, which would take about 0.03 Myr at 3.2 km s$^{-1}$.
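The BHL estimate above is fully specified by the quoted inputs, so it can be checked numerically in a few lines. A minimal sketch in Python (CGS units; all inputs are the values stated in the text), which also reproduces the quoted free-fall time:

```python
import numpy as np

# Bondi-Hoyle-Lyttleton rate as quoted above (Ruffert & Arnett 1994):
# Mdot = pi * R_A^2 * rho_A * v_A, with R_A = 2 G M / v_A^2.
G = 6.674e-8                 # cm^3 g^-1 s^-2
MSUN = 1.989e33              # g
MYR = 3.156e13               # s

def mdot_bhl(M, rho_A, v_A):
    """BHL accretion rate in g/s for mass M (g), ambient density
    rho_A (g/cm^3), and relative speed v_A (cm/s)."""
    R_A = 2.0 * G * M / v_A**2          # accretion radius
    return np.pi * R_A**2 * rho_A * v_A

M, v = 1.0 * MSUN, 3.2e5                # 1 Msun system moving at 3.2 km/s
for rho, label in [(1e-21, "ambient GMC"), (1e-19, "dense cloud core")]:
    rate = mdot_bhl(M, rho, v) * MYR / MSUN      # -> Msun/Myr
    print(f"{label}: Mdot ~ {rate:.1e} Msun/Myr")
# ambient GMC:      ~1e-4 Msun/Myr
# dense cloud core: ~1e-2 Msun/Myr, matching the estimates in the text

# Free-fall time at the quoted density of 1e-16 g/cm^3
t_ff = np.sqrt(3*np.pi / (32*G*1e-16)) / MYR
print(f"t_ff ~ {t_ff:.4f} Myr")         # ~0.0067 Myr, as stated
```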
These estimates suggest that the mass of pre-existing GMC gas and dust that was not enriched by the triggering SN II shock wave but was accreted by the protostar during a "late infall" phase lasting $\sim$ 1 Myr would be in the range of $10^{-4}M_{\odot}$ of ambient matter to $3\times 10^{-4}M_{\odot}$ of dense cloud core gas and dust. The "late infall" phase could also last longer than 1 Myr: a protostar moving at 3.2 km s$^{-1}$ travels 3 pc in 1 Myr, but GMCs can span $\sim$ 5 pc to $\sim 200$ pc in size, so the protostar could continue to accrete ambient GMC matter well beyond 1 Myr. Either way, the total accreted mass in the "late infall" phase is likely to be of order $10^{-4}M_{\odot}$ to $\sim 3\times 10^{-4}M_{\odot}$. What are the implications of these estimates for the masses of the CC and NC components? Model O in Boss (2019), the model recalculated here with a longer computational volume, produced a disk with an initial mass of $0.05M_{\odot}$, representing the "early infall" or CC component, whereas the "late infall" or NC component mass accretion estimate is in the range of $10^{-4}M_{\odot}$ to $\sim 3\times 10^{-4}M_{\odot}$; this implies that the CC component was about 170 to 500 times as massive as the NC component. While specific values for the masses of the "early infall" enriched matter and "late infall" depleted matter are not given by Nanne et al. (2019), Lichtenberg et al. (2021) suggest that their Reservoir I (NC) had a total mass of about an Earth mass of planetesimals, while their Reservoir II (CC) had about a Jupiter mass of planetesimals, implying that the "early infall" CC matter was about 318 times as massive as the "late infall" NC matter. Given the uncertainties involved in both the present model and the Lichtenberg et al. (2021) estimates, having these two estimates agree to within a factor of two is remarkable and suggests that the scenario proposed herein is worthy of further scrutiny.

## 6 Conclusions

Observations of star-forming regions have demonstrated that star formation can be triggered by supernova explosions (e.g., Bialy et al. 2021). The meteoritical evidence discussed in this paper, coupled with the results of the new model presented here, along with the previous papers in this series, appears to provide a reasonable argument that the solar system was formed as a result of the interaction of a SN II shock wave with a dense molecular cloud core residing within a GMC complex.

The computations were performed on the Carnegie Institution memex computer cluster (hpc.carnegiescience.edu) with the support of the Carnegie Scientific Computing Committee. I thank Floyd Fayton for his invaluable assistance with the use of memex and Myriam Telus for discussions about the Cook et al. (2021) paper. The referees provided numerous useful suggestions for improving the paper. The software used in this work was in large part developed by the DOE-supported ASC/Alliances Center for Astrophysical Thermonuclear Flashes at the University of Chicago.

## References

* Bialy, S., Zucker, C., Goodman, A., et al. 2021, ApJL, 919, L5
* Boss, A. P. 2017, ApJ, 844, 113
* Boss, A. P. 2019, ApJ, 870, 3
* Boss, A. P., Ipatov, S. I., Keiser, S. A., Myhill, E. A., & Vanhala, H. A. T. 2008, ApJL, 686, L119
* Boss, A. P., & Keiser, S. A. 2010, ApJL, 717, L1
* Boss, A. P., Keiser, S. A., Ipatov, S. I., Myhill, E. A., & Vanhala, H. A. T. 2010, ApJ, 708, 1268
* Cameron, A. G. W., & Truran, J. W. 1977, Icarus, 30, 447
* Cook, D. L., Meyer, B. S., & Schönbächler, M. 2021, ApJ, 917, 59
* Dwarkadas, V. V., Dauphas, N., Meyer, B., Boyajian, P., & Bojazi, M. 2017, ApJ, 851, 147
* Federrath, C., Banerjee, R., Clark, P. C., & Klessen, R. S. 2010, ApJ, 713, 269
* Forbes, J. C., Alves, J., & Lin, D. N. C. 2021, Nature Astron., 5, 1009
* Foster, P. N., & Boss, A. P. 1996, ApJ, 468, 784
* Fryxell, B., Olson, K., Ricker, P., et al. 2000, ApJS, 131, 273
* Gaches, B. A. L., Walch, S., Offner, S. S. R., & Münker, C. 2020, ApJ, 898, 79
* Hopp, T., Dauphas, N., Spitzer, F., Burkhardt, C., & Kleine, T. 2022, E&PSL, 577, 117245
* Huss, G. R., Meyer, B. S., Srinivasan, G., Goswami, J. N., & Sahijpal, S. 2009, GeCoA, 73, 4922
* Jacquet, E., Pignatale, F. C., Chaussidon, M., & Charnoz, S. 2019, ApJ, 884, 32
* Kruijer, T. S., Kleine, T., & Borg, L. E. 2020, Nature Astronomy, 4, 32
* Lee, T., Papanastassiou, D. A., & Wasserburg, G. J. 1976, Geophys. Res. Lett., 3, 109
* Lichtenberg, T., Drazkowska, J., Schönbächler, M., Golabek, G. J., & Hands, T. O. 2021, Science, 371, 365
* Lugaro, M., Ott, U., & Kereszturi, A. 2018, Prog. Part. Nuc. Phys., 102, 1
* Mishra, R. K., & Chaussidon, M. 2014, E&PSL, 398, 90
* Mishra, R. K., & Goswami, J. N. 2014, GeCoA, 132, 440
* Mishra, R. K., Marhas, K. K., & Sameer 2016, E&PSL, 436, 71
* Nanne, J. A. M., Nimmo, F., Cuzzi, J. N., & Kleine, T. 2019, E&PSL, 511, 44
* Reiter, M. 2020, A&A, 644, L1
* Ruffert, M., & Arnett, D. 1994, ApJ, 427, 351
* Saji, N. S., Schiller, M., Holst, J. C., & Bizzarro, M. 2021, ApJL, 919, L8
* Tachibana, S., Huss, G. R., Kita, N. T., Shimoda, G., & Morishita, Y. 2006, ApJ, 639, L87
* Tang, H., & Dauphas, N. 2012, E&PSL, 359, 248
* Telus, M., Huss, G. R., Ogliore, R. C., et al. 2016, GeCoA, 178, 87
* Telus, M., Huss, G. R., Nagashima, K., Ogliore, R. C., & Tachibana, S. 2018, GeCoA, 221, 342
* Trappitsch, R., Boehnke, P., Stephan, T., et al. 2018, ApJL, 857, L15
* Tur, C., Heger, A., & Austin, S. M. 2010, ApJ, 718, 357
* Vescovi, D., Busso, M., Palmerini, S., et al. 2018, ApJ, 863, 115
* Warren, P. H. 2011, E&PSL, 311, 93
* Young, E. D. 2016, ApJ, 826, 129

Figure 1: Initial log density cross-section ($z$ = 0) showing the entire computational grid with a maximum of six levels of refinement.

Figure 2: Initial cross-section ($z$ = 0) of color field 1, representing the material initially within the target cloud core, plotted as in Figure 1. The target cloud is initially stationary.

Figure 3: Initial cross-section ($z$ = 0) of color field 2, representing the material initially within the shock front, plotted as in Figure 1. The shock is initially moving downwards at 40 km s$^{-1}$.

Figure 4: Initial cross-section ($z$ = 0) of color field 3, representing the material initially behind the shock front, plotted as in Figure 1. The material behind the shock is also moving downwards at 40 km s$^{-1}$.

Figure 5: Initial cross-section ($z$ = 0) of color field 4, representing the material initially outside the target cloud core, plotted as in Figure 1. This material is initially stationary.

Figure 6: Final log density cross-section ($z$ = 0) after 0.063 Myr with seven levels of refinement.
By this time, the portions of the shock front that did not strike the target cloud have exited the bottom of the grid.

Figure 7: Final cross-section ($z$ = 0) of color field 1, representing the material initially within the target cloud core, plotted as in Figure 6.

Figure 8: Final cross-section ($z$ = 0) of color field 2, representing the material initially within the shock front, plotted as in Figure 6. Note the scale change compared to Figure 3.

Figure 9: Final cross-section ($z$ = 0) of color field 3, representing the material initially behind the shock front, plotted as in Figure 6.

Figure 10: Final cross-section ($z$ = 0) of color field 4, representing the material initially outside the target cloud core, plotted as in Figure 6. Note the scale change compared to Figure 5.

Figure 11: Close-up view of the final log density cross-section ($z$ = 0) after 0.063 Myr, as seen in Figure 6. The sink cell is located in the center of the density maximum and represents the newly-formed protostar and disk system.

Figure 12: Close-up view of the final cross-section ($z$ = 0) of color field 4, representing the material initially outside the target cloud core, plotted as in Figure 11.

Figure 13: Close-up view of the final cross-section ($z$ = 0) of color field 2, representing the material initially inside the shock front, plotted as in Figure 11. Note the scale change compared to Figure 3.

Figure 14: Close-up view of the final cross-section ($z$ = 0) of the log temperature distribution, plotted as in Figure 11.
Glossary of symbols: $q\,T_{\star}$, comoving momentum; $h$ and $\eta$, metric perturbations in synchronous gauge; $T_{\star}$, NCDM "temperature"; $f$, phase space distribution; $w$, equation of state; $\Psi$, phase space distribution perturbation; $\mathcal{T}$, transfer function; $m_{\rm WDM}$, WDM mass; $m_{\rm DM}$, NCDM mass; $p$, momentum (modulus); $\mathbf{p}$, momentum; $p_{0}$, energy (of a particle); $\delta$, relative density fluctuation; $P$, pressure; $\rho$, energy density; $\theta$, velocity divergence; $\sigma$, anisotropic stress; $a$, scale factor (FLRW); $H$, Hubble parameter; $\mathcal{H}$, conformal Hubble parameter; $\epsilon$, comoving "energy"; $\tau$, conformal time; $t$, cosmic time; $\alpha,\beta,\gamma$, phase space distribution parametrization; $M_{P}$, reduced Planck mass; $\mathcal{C}$, collision term; $n_{\chi}$, number density of the species $\chi$; $\Lambda$, normalization scale of the $2\rightarrow 2$ cross section; $\hat{c}_{s}$, rest-frame sound speed; $c_{a}$, adiabatic sound speed; $k_{\text{FS}}$, free-streaming scale; $k_{\text{H}}$, free-streaming horizon; $T_{\rm reh}$, reheating temperature; $T_{\rm max}$, highest temperature during reheating; $g_{\chi}$, internal degrees of freedom of the species $\chi$; $b$, reheating temperature parametrization; $N_{\rm eff}$, effective number of relativistic species; $g_{\rm *s}$, number of effective entropy degrees of freedom; $\phi$, inflaton scalar field; $\Gamma_{\phi}$, inflaton decay width.

IFT-UAM/CSIC-20-135

# How warm are non-thermal relics? Lyman-$\alpha$ bounds on out-of-equilibrium dark matter

Guillermo Ballesteros, Marcos A. G. Garcia, Mathias Pierre

Instituto de Física Teórica UAM/CSIC, Calle Nicolás Cabrera 13-15, Cantoblanco E-28049 Madrid, Spain

Departamento de Física Teórica, Universidad Autónoma de Madrid (UAM), Campus de Cantoblanco, 28049 Madrid, Spain

###### Abstract

We investigate the power spectrum of Non-Cold Dark Matter (NCDM) produced in a state out of thermal equilibrium. We consider dark matter production from the decay of scalar condensates (inflaton, moduli), the decay of thermalized and non-thermalized particles, and from thermal and non-thermal freeze-in. For each case, we compute the NCDM phase space distribution and the linear matter power spectrum, which features a cutoff analogous to that for Warm Dark Matter (WDM). This scale is solely determined by the equation of state of NCDM. We propose a mapping procedure that translates the WDM Lyman-$\alpha$ mass bound to NCDM scenarios. This procedure does not require expensive ad hoc numerical computations of the non-linear matter power spectrum.
By applying it, we obtain bounds on several NCDM possibilities, ranging from $m_{\rm DM}\gtrsim{\rm EeV}$ for DM production from inflaton decay with a low reheating temperature, to sub-keV values for non-thermal freeze-in. We discuss the phenomenological implications of these results for specific examples, which include strongly-stabilized and non-stabilized supersymmetric moduli, gravitino production from inflaton decay, $Z^{\prime}$- and spin-2-mediated freeze-in, and non-supersymmetric spin-3/2 DM.

###### Contents

1. Introduction and results
   1.1 Motivation
   1.2 Ly-$\alpha$ constraints on out-of-equilibrium dark matter
   1.3 Dark matter production mechanisms
2. Non-cold dark matter cosmology
   2.1 Linear cosmological perturbation theory
   2.2 Large scale structure
   2.3 Analytical rescaling and generalized phase space distribution
3. Decay of a classical condensate
   3.1 Perturbative inflaton decay (3.1.1 DM phase space distribution; 3.1.2 Power spectrum and Ly-$\alpha$ constraints; 3.1.3 Relic density and phenomenology)
   3.2 Moduli decays (3.2.1 DM phase space distribution; 3.2.2 Power spectrum and Ly-$\alpha$ constraints; 3.2.3 Relic density and phenomenology)
4. Freeze-in via decay
   4.1 Thermal decay (4.1.1 DM phase space distribution; 4.1.2 Power spectrum and Ly-$\alpha$ constraints; 4.1.3 Relic density and phenomenology)
   4.2 Non-thermal decay (4.2.1 DM phase space distribution; 4.2.2 Power spectrum and Ly-$\alpha$ constraints; 4.2.3 Relic density and phenomenology)
5. Ultraviolet freeze-in via scatterings
   5.1 Thermal freeze-in (5.1.1 DM phase space distribution; 5.1.2 Power spectrum and Ly-$\alpha$ constraints; 5.1.3 Relic density and phenomenology)
   5.2 Non-thermal freeze-in (5.2.1 DM phase space distribution; 5.2.2 Power spectrum and Ly-$\alpha$ constraints; 5.2.3 Relic density and phenomenology)
6. Light, but not too light, dark matter
7. Conclusions
A. The Boltzmann equation in an expanding universe (A.1 Generalities; A.2 Freeze-in via scatterings; A.3 Non-thermal freeze-in)

## 1 Introduction and results

### 1.1 Motivation

After a few decades of remarkable improvement, dark matter (DM) direct detection experiments have reached a sensitivity on the nucleon-DM scattering cross section around $10^{-46}\ \text{cm}^{2}$ for DM masses of the order of the electroweak scale [1]. The absence of any confirmed experimental signal (also in indirect detection and colliders) strongly constrains the viable parameter space of Weakly Interacting Massive Particle (WIMP) models of DM based on the vanilla freeze-out mechanism. This calls for a reassessment of the attractiveness of this framework in the simplest models [2, 3, 4]. In this context, exploring theoretically and experimentally other scenarios [5] that can achieve the correct DM abundance is necessary. A well-known example of such a scenario is the freeze-in mechanism [6]. Other examples are a dark sector that thermalizes only with itself [7, 8] and a DM depletion process led by cannibalization [9, 10, 11]. These proposals assume feeble couplings between the SM and the DM, helping them to satisfy current bounds.
(Such suppressed SM-DM interactions can be justified by invoking a large energy scale, even larger than the reheating temperature. To give some examples, this scale can be identified with the Planck mass in gravitino DM [12, 13, 14, 15, 16, 17, 18], with the mass of heavy gauge fields in Grand Unified Theories [19, 20, 21], and with a new physics threshold in scenarios inspired by modified gravity [22, 23, 24, 25].) Consequently, these feeble couplings tend to reduce the chances of testing such scenarios by traditional means [5]. However, several phenomenological studies [7, 26, 27, 28, 29, 30, 31] have highlighted various possibilities for observing such DM candidates. Scenarios in which the DM is produced by non-standard mechanisms may feature an important DM self-interaction cross section or a free-streaming scale, affecting the large scale structure of the universe. These properties could help alleviate purported tensions in the $\Lambda$CDM model at galactic and sub-galactic scales [32, 33, 34]. Also, non-thermal DM models have been proposed to address the tensions between early and late time determinations of the Hubble constant [35] and of the clustering of matter [36, 37]. Indeed, in the absence of thermodynamic equilibrium between the DM and the SM, the DM phase space distribution can differ significantly from the standard freeze-out case. This opens a possibility for discriminating between different DM models and production mechanisms. The DM component in the standard $\Lambda$CDM model of cosmology is assumed to be entirely pressureless. A non-vanishing DM kinetic energy would then result in a cutoff in the matter power spectrum on small wavelength Fourier modes (as compared to the $\Lambda$CDM prediction). An interesting possibility for testing these Non-Cold Dark Matter (NCDM) models, which do not conform to the standard freeze-out mechanism, is to measure the Ly-$\alpha$ forest of absorption lines of light emitted by distant quasars around redshift $z=2-4$, which is produced by the neutral hydrogen present in the intergalactic medium. This provides enough information on the matter power spectrum at sufficiently small scales for probing the aforementioned cutoff.

### 1.2 Ly-$\alpha$ constraints on out-of-equilibrium dark matter

The well known Ly-$\alpha$ bound on the DM mass for Warm Dark Matter (WDM) [38, 39, 33, 40, 41, 42, 43],

$m_{\rm WDM}\;\gtrsim\;(1.9-5.3)\ \text{keV at 95\% C.L.}\,,$ (1.1)

can be mapped into constraints on various out-of-equilibrium NCDM production mechanisms. To do this, we compute (for the first time) the phase space distributions in several of these models by integrating the Boltzmann transport equation, numerically and/or analytically, depending on the production process. For the large majority of the scenarios that we consider, the resulting phase space distributions can be remarkably well described by a generalized distribution of the form

$f(q)\,\propto\,q^{\alpha}\,\exp{\left(-\beta\,q^{\gamma}\right)}\,,$ (1.2)

where $q$ denotes the DM comoving momentum and $\alpha$, $\beta$, $\gamma$ are model-dependent constants. We then use CLASS [44, 45] to compute, for each of the NCDM models we consider, the linear power spectrum $\mathcal{P}_{\text{NCDM}}(k)$, or, more precisely, the linear transfer function, defined in terms of the ratio to the $\Lambda$CDM spectrum as follows:

$\mathcal{T}(k)\,\equiv\,\left(\dfrac{\mathcal{P}_{\text{NCDM}}(k)}{\mathcal{P}_{\Lambda\text{CDM}}(k)}\right)^{1/2}\,.$ (1.3)

We assume that the DM is entirely composed of a single NCDM species (and is produced by only one mechanism in each scenario).
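The transfer function (1.3) can be obtained from any Boltzmann solver. The following minimal sketch uses the classy Python wrapper of CLASS [44, 45]; the parameter names (N_ncdm, m_ncdm in eV, T_ncdm in units of the photon temperature) follow CLASS conventions, the cosmological values are illustrative placeholders rather than the paper's setup, and all NCDM precision settings are left at their defaults:

```python
# Schematic computation of the transfer function (1.3): the ratio of the
# linear matter power spectrum in an NCDM cosmology to its LCDM counterpart.
# Illustrative sketch only; cosmological parameters are placeholders.
import numpy as np
from classy import Class

common = {'output': 'mPk', 'P_k_max_h/Mpc': 200., 'h': 0.67,
          'omega_b': 0.0224, 'A_s': 2.1e-9, 'n_s': 0.965}

lcdm = Class()
lcdm.set({**common, 'omega_cdm': 0.12})
lcdm.compute()

# Same cosmology with the CDM replaced by one WDM-like thermal species;
# T_ncdm is chosen via the relic density relation (2.16) below so that
# omega_ncdm ~ 0.12 for a 3 keV fermion.
ncdm = Class()
ncdm.set({**common, 'omega_cdm': 1e-6, 'N_ncdm': 1,
          'm_ncdm': 3000., 'T_ncdm': 0.111})
ncdm.compute()

k = np.logspace(-2, 1.5, 100)                  # wavenumbers in 1/Mpc
T = np.sqrt([ncdm.pk(kk, 0.) / lcdm.pk(kk, 0.) for kk in k])  # Eq. (1.3)
```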
By varying the DM mass, we match the transfer function to that of a fermionic WDM scenario (for which the bound (1.1) applies). We perform this matching numerically and, also, with an approximate semi-analytical procedure, demonstrating their equivalence. We find that the matching can be done with great accuracy for all the models we consider (and for all the relevant ranges of their parameters). In addition, by approximating the NCDM species as a perfect fluid, we show that the cutoff in the transfer function can be entirely characterized in terms of the equation of state parameter of NCDM, $w$, which makes it possible to translate the WDM mass Ly-$\alpha$ limit (1.1) to the NCDM case. This can be done for each of the NCDM models, without having to run specifically tailored N-body simulations or doing a dedicated non-linear analysis of the NCDM perturbations at small scales. A general analytical expression relates these bounds to each other via only the knowledge of the first and second moments of the phase space distribution,

$m_{\rm DM}\,=\,m_{\text{WDM}}\left(\dfrac{T_{\star}}{T_{\text{WDM},0}}\right)\sqrt{\dfrac{\langle q^{2}\rangle}{\langle q^{2}\rangle_{\text{WDM}}}}\quad\quad\text{(Lyman-$\alpha$ bound)}\,,$ (1.4)

where $T_{\star}\propto\langle q\rangle$ is the present "temperature" of NCDM, understood as the energy scale that normalizes the typical momentum of the distribution. In all cases, we find a remarkable agreement between this approximation and the numerical computation of the linear power spectrum. Using this procedure, we achieve (for most of our scenarios and in the range of scales of interest) a $\lesssim 3\%$ error in the matching to the transfer function of WDM; see Figure 3. In the (very few) least precise examples that we consider, the matching worsens to at most $10\%$.

Figure 1: Pipeline applied in this paper to derive bounds on a given NCDM model from the Lyman-$\alpha$ WDM mass limit. The boxes represent the computational steps or inputs used to derive a bound on the NCDM mass. Starting from the NCDM collision term and the Lyman-$\alpha$ bound on WDM, two possible paths lead to a bound on NCDM. Our matching procedure, going through the equation of state matching, allows us to obtain the NCDM bound without computing the NCDM transfer function numerically.

Figure 2: Out-of-equilibrium dark matter as a probe of early universe dynamics. The top figure depicts schematically the phase space distributions for the production scenarios considered in the present work, mapped over the history of the early universe. The relic distributions for thermal freeze-in production with $n\geq 6$ and non-thermal freeze-in with $n>2$ are set at the thermalization time-scale, after inflation but well before the end of reheating. The relic abundance and the distribution of dark matter produced from inflaton decay or low-$n$ freeze-in are set at the matter-radiation transition at the end of reheating. Dark matter can also be produced through decays of thermalized and non-thermalized fields during early radiation domination. Most distributions have the approximate form $f(q)\propto q^{\alpha}e^{-\beta q^{\gamma}}$. The bottom left figure shows that, in all cases, the transfer function of the linear power spectrum determines the lower bound on the dark matter mass, since $\mathcal{T}(k)$ can be matched to the corresponding WDM bound.
The bottom right figure shows the range of masses that can be constrained by Ly-$\alpha$ observations, depending on the dark matter production mechanism and the reheating temperature (here $T_{\rm reh}=10^{10}\,{\rm GeV}$).

This bound-mapping procedure is summarized in Figs. 1 and 2. The top panel of the latter shows schematically the different shapes of the distribution functions corresponding to six distinct NCDM production processes, active during or after reheating. We now proceed to enumerate these processes and summarize our results for each of them.

### 1.3 Dark matter production mechanisms

We consider various non-equilibrium DM production mechanisms. They are all assumed to proceed perturbatively via the scattering or decay of particles, with time-scales ranging from the very end of inflation, during the earliest stages of reheating, to the radiation dominated universe after the end of reheating. We list these mechanisms below; see also Fig. 2.

Inflaton decay (Section 3.1). It is often assumed that the DM may have been produced from the decay of the inflaton field, $\phi$. Even in the absence of tree-level inflaton-DM couplings, DM-SM interactions can generate a non-vanishing inflaton $\rightarrow$ DM decay channel at higher order in perturbation theory [46]. Assuming that this decay proceeds perturbatively through a two-body process, we find that the DM phase space distribution is of the form $f(q)\propto q^{-3/2}e^{-0.74q^{2}}$, where the power-like behaviour at low $q$ arises from redshifting during the matter dominated reheating epoch, and the Gaussian tail comes from the depletion of the inflaton condensate at the end of reheating. The resulting Ly-$\alpha$ constraint on the DM mass is proportional to the ratio of the inflaton mass to the reheating temperature, being $m_{\rm DM}\gtrsim 3.8\,{\rm MeV}$ for $T_{\rm reh}=10^{10}\,{\rm GeV}$ and $m_{\phi}\simeq 3\times 10^{13}\,{\rm GeV}$, the latter fixed by the measurement of the amplitude of the curvature power spectrum [47, 48]. Higher reheating temperatures can reduce the limit down to the keV range, whereas lower reheating temperatures can increase it well beyond the TeV range.

Moduli decay (Section 3.2). In many SM extensions, in particular in supergravity and string constructions, there is a plethora of scalar fields with very weak couplings to the SM (typically of gravitational strength) and masses that are typically of the order of the weak scale. These fields are known as moduli, and can have far-reaching cosmological consequences if they are excited away from their vacuum values in the early Universe. We consider DM production from moduli decays in two scenarios: when the modulus dominates the energy of the Universe and decays at late times, and when the modulus is always subdominant to the inflaton/radiation background due to some stabilization mechanism. In the first case, the shape of the DM phase space distribution is identical to that for DM produced from inflaton decay, and the lower bound on $m_{\rm DM}$ is proportional to the ratio of the modulus mass ($m_{Z}$) to its reheating temperature, with $m_{\rm DM}\gtrsim 13\,{\rm GeV}$ for $m_{Z}=10\,{\rm TeV}$ and $T_{\rm reh}=1\,{\rm MeV}$. For the decay of stabilized moduli, we find non-thermal DM distributions of the form $f(q)\propto q^{-3/2}e^{-q^{3/2}}$ or $f(q)\propto q^{-1}e^{-q^{2}}$, depending on whether the modulus decays during or after reheating, respectively.
In these cases, the limit on $m_{\rm DM}$ depends on the ratio of the modulus mass to the background temperature evaluated at the moment of its decay, and on the ratio of the inflaton and modulus decay widths.

Thermal and non-thermal decays (Section 4). DM could also have been produced from the decay of free particles. In such a case, the DM phase space distribution and its present abundance depend strongly on the initial momentum distribution of the decaying particles. We consider here two possibilities: the decay of a thermalized particle species during radiation domination (Section 4.1.1), and the decay of a particle with a non-equilibrium distribution, assumed to be produced from the decay of the inflaton (Section 4.2). In both cases we assume that the decaying particle is much lighter than the inflaton, yet much heavier than the DM. For the thermal decay case, we find that the DM inherits a quasi-thermal distribution, $f(q)\sim q^{-1/2}e^{-q}$, and the bound on its mass is given by $m_{\rm DM}\gtrsim 7\,{\rm keV}$. For the non-thermal decay, we find that the shape of the distribution is highly dependent on the momentum of the parent particle when it decays. If this initial state decays while it is relativistic, the DM inherits the Gaussian tail of the parent unstable particle, $f(q)\sim q^{-5/2}e^{-0.74q^{2}}$. The Ly-$\alpha$ constraint is identical to that for the direct decay of the inflaton to DM, reduced by a factor of $\sim 0.3$. If instead the decaying particle is non-relativistic, the DM phase space distribution is highly non-thermal, skewed towards large momenta, and not suitable for a fit of the form (1.2). The Ly-$\alpha$ constraint depends on the mass and width of the decaying particles; more specifically, it is proportional to the ratio of the mass to the temperature $T_{\rm dec}\propto\Gamma^{1/2}$ at which the decay occurs.

Thermal freeze-in via scatterings (Section 5.1). We consider the possibility of a DM population generated via the freeze-in mechanism by annihilations of thermalized SM particles. We assume that the typical DM-SM scattering amplitude can be parametrized by

$|\mathcal{M}|^{2}\;=\;16\pi\frac{s^{\frac{n}{2}+1}}{\Lambda^{n+2}}\,,$ (1.5)

where $n$ is an integer, $\sqrt{s}$, the square root of the Mandelstam variable, is the center-of-mass energy (in the high-energy limit), and $\Lambda$ is some high-energy scale. (Small differences in the dependence on the Mandelstam variables in the high-energy limit can be absorbed into the value of $\Lambda$.) For $0\leq n<6$, the DM is produced at the end of the reheating process, at the reheating temperature $T_{\rm reh}$. For $n=0,2$ the resulting DM momentum distribution is quasi-thermal, with $\beta\sim 1$ and $\gamma=1$. Instead, for $n=4$ it has a nearly Gaussian tail. For these three scenarios, the matching of the power spectrum to WDM is excellent and the bound translates to $m_{\rm DM}\gtrsim 6-9\,{\rm keV}$, with the precise value depending on $n$ and the quantum statistics of the thermalized scatterers. When $n\geq 6$, most of the DM is produced during the earliest stages of reheating, at the maximum temperature $T_{\rm max}$. When this is the case, the fitting expression (1.2) fails. In particular, for $n=6$, $f(q)$ interpolates between a $q^{3}$ behaviour at $q\ll 1$ and an exponential tail at $q\gg 1$, through a region where $f(q)\sim q^{-3}$.
This relatively complicated form of the distribution translates to an imperfect match with the WDM power spectrum, which nevertheless leads to a bound of the form $m_{\rm DM}^{2}\gtrsim 81\,{\rm keV^{2}}/\ln(T_{\rm max}/T_{\rm reh})$.

Non-thermal freeze-in via scatterings (Section 5.2). The delay between the end of inflation and the onset of thermal equilibrium in the primordial plasma can leave an imprint on the DM phase space distribution if the parent scatterers are produced directly from inflaton decays. Inflaton decay products are typically very energetic, with momenta of the order of the inflaton mass. Only after a process of soft radiation emission and energy transfer through scatterings do these decay products reach thermal equilibrium. Thermalization occurs after the beginning of reheating, but well before it ends. As it turns out, if $n>2$ in (1.5), most of the DM could have been produced non-thermally by the very first SM particles present in the universe [49]. As a proof of concept, we consider here annihilations with $n=4$. Notably, under the freeze-in assumption, the transport equation can be solved in a closed, albeit complicated, form. We find that the approximation $f(q)\sim q^{-3/2}e^{-2.5q^{2.6}}$ adequately describes the DM phase space distribution. The power spectrum matching with WDM can be performed accurately, leading to a minimum DM mass of the form $m_{\rm DM}\propto m_{\phi}^{23/15}T_{\rm reh}^{-7/15}$, where $m_{\phi}$ is the inflaton mass and $T_{\rm reh}$ the reheating temperature. For $T_{\rm reh}=10^{10}\,{\rm GeV}$, $m_{\rm DM}\gtrsim 0.4\,{\rm keV}$.

Our paper is organized as follows. In Section 2 we review the treatment of NCDM relics in cosmological linear perturbation theory, and discuss the properties of the transfer function (1.3) and the rescaling into NCDM of the Lyman-$\alpha$ bounds coming from WDM. In Sections 3 to 5 we study the production mechanisms we just listed, their Ly-$\alpha$ bounds and the corresponding phenomenological implications. In Section 6 we discuss the implications for the effective number of relativistic species. We present our conclusions in Section 7. Appendix A contains a brief review of the Boltzmann equation in the early universe, as well as a detailed calculation of the generic form of the collision term for DM production via freeze-in (Appendix A.2), and the integration of this collision term for the $n=4$ non-thermal freeze-in scenario (Appendix A.3). A glossary of the main symbols used in this paper is also provided. We use a natural system of units in which $k_{B}=\hbar=c=1$.

## 2 Non-cold dark matter cosmology

### 2.1 Linear cosmological perturbation theory

In the standard $\Lambda$CDM model of cosmology, the DM is assumed to be cold (CDM), i.e., pressureless. Therefore, its equation of state parameter $w$, defined by the relation $\bar{P}=w\,\bar{\rho}$, where $\bar{\rho}$ and $\bar{P}$ are its (time-dependent) background energy density and pressure, is exactly vanishing. However, DM particles produced in the early universe, of thermal or non-thermal origin, would actually possess some momentum distribution with a non-vanishing averaged momentum $\langle p\rangle\neq 0$, which could manifest itself in a deviation from $w=0$ and, possibly, also through higher moments of the distribution. We will now discuss a set of approximations under which $w$ can be the sole function encoding the deviations from CDM, both at the level of the background dynamics and for linear perturbation analyses.
In Sections 3–5 we will study concrete examples of such DM creation processes.

The phase space distribution, $f$, of a general cosmological species is a function of position, momentum and (conformal) time, $\tau$, that characterizes its energy-momentum tensor. It is convenient to split it into a time-dependent homogeneous background part, $\bar{f}(|\boldsymbol{p}|,\tau)$, plus a fluctuation quantified by a function $\Psi\ll 1$, such that $f(\boldsymbol{x},\boldsymbol{p},\tau)=\bar{f}(|\boldsymbol{p}|,\tau)[1+\Psi(\boldsymbol{x},\boldsymbol{p},\tau)]$; see e.g. [50]. The background energy density and pressure functions, $\bar{\rho}$ and $\bar{P}$, of a NCDM relic are then

$\bar{\rho}=4\pi\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{2}\epsilon\bar{f}(q)\mathop{}\!\mathrm{d}q\,,\qquad\bar{P}=\dfrac{4\pi}{3}\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{2}\dfrac{q^{2}}{\epsilon}\bar{f}(q)\mathop{}\!\mathrm{d}q\,,$ (2.1)

where $a$ is the scale factor of the Universe and $T_{\star}$ is a convenient energy scale that characterizes the DM density at the present time. Following the conventions of [45] (our $T_{\star}$ is a time-independent quantity, denoted by $T_{\text{NCDM},0}$ in [45]), we define

$q=\dfrac{p\,a}{T_{\star}}\quad\text{with}\quad\epsilon=\sqrt{q^{2}+\left(\dfrac{m_{\text{DM}}\,a}{T_{\star}}\right)^{2}}\,,$ (2.2)

where the product $q\,T_{\star}$ is the comoving momentum and $p=|\boldsymbol{p}|$ is the (absolute value of the) momentum of individual NCDM particles. In Fourier space, the perturbation $\Psi$ of the NCDM phase space distribution can be expanded in Legendre polynomials $P_{\ell}$ as follows:

$\Psi(\boldsymbol{k},\hat{\boldsymbol{n}},q,\tau)=\sum_{\ell=0}^{\infty}(-i)^{\ell}(2\ell+1)\Psi_{\ell}(\boldsymbol{k},q,\tau)P_{\ell}(\hat{\boldsymbol{k}}\cdot\hat{\boldsymbol{n}})\,,$ (2.3)

where $k$ is the comoving wavenumber of the perturbations in Fourier space, $\boldsymbol{k}=k\,\hat{\boldsymbol{k}}$ and $\boldsymbol{p}=p\,\hat{\boldsymbol{n}}$. The quantities defining the perturbed energy-momentum tensor are

$\displaystyle\begin{aligned} \delta\rho\,=\,&4\pi\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{2}\epsilon\bar{f}(q)\Psi_{0}\,\mathop{}\!\mathrm{d}q,\quad&&\text{energy density fluctuation}\\ \delta P\,=\,&\dfrac{4\pi}{3}\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{2}\dfrac{q^{2}}{\epsilon}\bar{f}(q)\Psi_{0}\,\mathop{}\!\mathrm{d}q,\quad&&\text{pressure (density) fluctuation}\\ (\bar{\rho}+\bar{P})\theta\,=\,&4\pi k\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{3}\bar{f}(q)\Psi_{1}\,\mathop{}\!\mathrm{d}q,\quad&&\text{velocity divergence}\\ (\bar{\rho}+\bar{P})\sigma\,=\,&\dfrac{8\pi k}{3}\left(\dfrac{T_{\star}}{a}\right)^{4}\int q^{2}\dfrac{q^{2}}{\epsilon}\bar{f}(q)\Psi_{2}\,\mathop{}\!\mathrm{d}q,\quad&&\text{anisotropic stress.}\end{aligned}$ (2.4)

For decoupled NCDM, the phase space distribution satisfies the collisionless Boltzmann equation

$\frac{\partial f}{\partial\tau}+\frac{\mathop{}\!\mathrm{d}x^{i}}{\mathop{}\!\mathrm{d}\tau}\frac{\partial f}{\partial x^{i}}+\frac{\mathop{}\!\mathrm{d}q}{\mathop{}\!\mathrm{d}\tau}\frac{\partial f}{\partial q}+\frac{\mathop{}\!\mathrm{d}n_{i}}{\mathop{}\!\mathrm{d}\tau}\frac{\partial f}{\partial n_{i}}\;=\;0\,,$ (2.5)

with $i=1,2,3$ and $\hat{\boldsymbol{n}}$ the unit 3-vector pointing in the direction of the momentum, as defined above.
In the synchronous gauge, this equation leads to the following system for the quantities $\Psi_{\ell}$:

$\displaystyle\begin{aligned} \dot{\Psi}_{0}\,=\,&-\dfrac{qk}{\epsilon}\Psi_{1}+\dfrac{1}{6}\dot{h}\dfrac{\mathop{}\!\mathrm{d}\ln\bar{f}}{\mathop{}\!\mathrm{d}\ln q}\,,\\ \dot{\Psi}_{1}\,=\,&\dfrac{qk}{3\epsilon}\Big{(}\Psi_{0}-2\Psi_{2}\Big{)}\,,\\ \dot{\Psi}_{2}\,=\,&\dfrac{qk}{5\epsilon}\Big{(}2\Psi_{1}-3\Psi_{3}\Big{)}-\Big{(}\dfrac{1}{15}\dot{h}+\dfrac{2}{5}\dot{\eta}\Big{)}\dfrac{\mathop{}\!\mathrm{d}\ln\bar{f}}{\mathop{}\!\mathrm{d}\ln q}\,,\\ \dot{\Psi}_{\ell}\,=\,&\dfrac{qk}{(2\ell+1)\epsilon}\Big{(}\ell\Psi_{\ell-1}-(\ell+1)\Psi_{\ell+1}\Big{)}\,,\quad[\ell\geq 3]\end{aligned}$ (2.6)

where $h$ and $\eta$ are the trace and traceless parts of the metric perturbation [50]. For a non-relativistic species, higher multipoles are typically suppressed by (positive) powers of $q/\epsilon\sim p/m_{\text{DM}}$, making any $\Psi_{\ell}$ with $\ell\geq 2$ much smaller than $\Psi_{0}$ and $\Psi_{1}$. In this case, the Boltzmann hierarchy can be truncated by imposing $\Psi_{\ell}=0$ for $\ell>1$, as discussed in [51], whose analysis shows the validity of this truncation. As argued also in [52], in this (non-relativistic) case $\Psi_{0}$ depends only mildly on the variable $q$, and the integrals in (2.4) are dominated by the low $q\ll\epsilon$ regime, so that we can identify $\delta P/\delta\rho\,\simeq\,\bar{P}/\bar{\rho}=w$. (Notice that for heavy-tailed distributions the latter approximation is no longer valid, and one cannot apply the analytical arguments presented in this section. This is the case, for instance, when the DM particles are produced from primordial black hole evaporation [53, 54], where the distribution function behaves as $\bar{f}(q)\sim 1/q^{5}$ at large $q$; the integral appearing in the background-pressure expression is then always dominated by $q\gtrsim m_{\text{DM}}a/T_{\star}$, and hence cannot be restricted to $q\ll\epsilon$.) In this limit, the first two equations of (2.6) can be integrated over $q$, allowing us to describe the NCDM species with a coupled system of (continuity and Euler) equations:

$\displaystyle\dot{\delta}\,$ $\displaystyle=\,-(1+w)\Big{(}\theta+\dfrac{\dot{h}}{2}\Big{)}-3\mathcal{H}\left(\hat{c}_{s}^{2}-w\right)\delta+9\mathcal{H}^{2}(1+w)\left(\hat{c}_{s}^{2}-c_{a}^{2}\right)\dfrac{\theta}{k^{2}}\,,$ (2.7)

$\displaystyle\dot{\theta}\,$ $\displaystyle=\,-\mathcal{H}\left(1-3\hat{c}_{s}^{2}\right)\theta+\dfrac{\hat{c}_{s}^{2}}{1+w}k^{2}\delta\,,$ (2.8)

where $\delta\,\equiv\,\delta\rho/\bar{\rho}$ and $\mathcal{H}=aH$, with $H$ the Hubble parameter and dots denoting derivatives with respect to conformal time, so that $\mathcal{H}=\dot{a}/a$ with $\dot{a}=\mathop{}\!\mathrm{d}a/\mathop{}\!\mathrm{d}\tau$. To first order in $w\ll 1$, the adiabatic sound speed is $c_{a}^{2}\equiv\dot{\bar{P}}/\dot{\bar{\rho}}\simeq 5w/3$. In addition, as shown in [45], for sufficiently non-relativistic species the (rest frame) sound speed (see e.g. [55] for its definition) can be reasonably well approximated by the adiabatic sound speed, $\hat{c}_{s}^{2}\simeq c_{a}^{2}$. Notice that by taking $w=0$ one recovers the usual CDM perturbation equation, $\dot{\delta}=-\dot{h}/2$.
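The identification $\delta P/\delta\rho\simeq\bar{P}/\bar{\rho}=w$ can be checked directly from the background integrals (2.1). A minimal numerical sketch in Python, with illustrative parameter values: the inflaton-decay fit $\alpha=-3/2$, $\beta=0.74$, $\gamma=2$ quoted in Section 1.3, and an arbitrary large value of $m_{\rm DM}a/T_{\star}$ to enforce the non-relativistic limit:

```python
# Equation of state w = Pbar/rhobar from the background integrals (2.1);
# the common prefactor 4*pi*(T_star/a)^4 cancels in the ratio.
import numpy as np
from scipy.integrate import quad

alpha, beta, gam = -1.5, 0.74, 2.0     # inflaton-decay fit, Section 1.3
m_a_over_T = 1.0e3                     # m_DM * a / T_star, assumed >> 1

fbar = lambda q: q**alpha * np.exp(-beta * q**gam)    # Eq. (1.2)
eps = lambda q: np.sqrt(q**2 + m_a_over_T**2)         # Eq. (2.2)

rho = quad(lambda q: q**2 * eps(q) * fbar(q), 0, np.inf)[0]
P = quad(lambda q: (q**4 / eps(q)) * fbar(q), 0, np.inf)[0] / 3.0

q2 = (quad(lambda q: q**4 * fbar(q), 0, np.inf)[0] /
      quad(lambda q: q**2 * fbar(q), 0, np.inf)[0])   # <q^2>

print(P / rho)                   # w from the background integrals
print(q2 / (3 * m_a_over_T**2))  # second-moment form, cf. Eq. (2.13) below
```

The two printed values agree in the non-relativistic limit, anticipating the second-moment expression for $w$ derived just below.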
In the NCDM domination era, from the perturbed Einstein equations, the trace of the metric fluctuation $h$ satisfies the equation

$\ddot{h}+\mathcal{H}\dot{h}+3(1+3w)\mathcal{H}^{2}\delta\,=\,0\,,$ (2.9)

allowing the system (2.7)–(2.8) to be reduced to

$\ddot{\delta}+\mathcal{H}\dot{\delta}-\frac{3}{2}\mathcal{H}^{2}\left(1-w\dfrac{10}{9}\frac{k^{2}}{\mathcal{H}^{2}}\right)\delta\,=\,0\,.$ (2.10)

In the limit where $w=0$ exactly, overdensities grow "democratically", i.e., independently of $k$ (as in $\Lambda$CDM). However, for non-vanishing $w$, at a given time there is a suppressed growth for modes larger than the free-streaming wavenumber, $k>k_{\text{FS}}(a)$, with

$k_{\text{FS}}^{2}(a)\,=\,\dfrac{9}{10}\dfrac{\mathcal{H}^{2}}{w}\,=\,\dfrac{3}{2}\dfrac{\mathcal{H}^{2}}{c_{a}^{2}}\,.$ (2.11)

Thus, a cutoff in the power spectrum can be observed at a given time for modes larger than the free-streaming horizon wavenumber $k_{H}(a)$, which can be expressed in terms of $k_{\text{FS}}$ as [36]

$k_{H}(a)\,\equiv\,\left[\int_{0}^{a}\dfrac{1}{k_{\text{FS}}(\tilde{a})}\dfrac{\mathop{}\!\mathrm{d}\tilde{a}}{\tilde{a}}\right]^{-1}\,.$ (2.12)

From these equations, we see that $w$ is the only quantity (together with the current DM density) that controls, in first approximation, the suppression of the power spectrum at large $k$. In the non-relativistic limit, $w$ can be expressed in terms of the normalized second moment of the distribution function,

$w\,\simeq\dfrac{\delta P}{\delta\rho}\,=\,\dfrac{T_{\star}^{2}}{3m_{\text{DM}}^{2}}\frac{\langle q^{2}\rangle}{a^{2}}\,,$ (2.13)

with the $n$-th moment being

$\langle q^{n}\rangle\,\equiv\,\dfrac{\int q^{n+2}\bar{f}(q)\mathop{}\!\mathrm{d}q}{\int q^{2}\bar{f}(q)\mathop{}\!\mathrm{d}q}\,.$ (2.14)

As a result, given a phase space distribution for the DM, the determination of its second moment is sufficient to estimate the cutoff of the matter power spectrum.

### 2.2 Large scale structure

For a given NCDM cosmology, the cutoff can be described in terms of the transfer function $\mathcal{T}(k)$ defined as

$\mathcal{T}(k)=\left(\dfrac{\mathcal{P}(k)}{\mathcal{P}_{\Lambda\text{CDM}}(k)}\right)^{1/2}\,,$ (2.15)

which compares (at the present time) the power spectrum for a given NCDM cosmology to the typical $\Lambda$CDM case. As we will now discuss, a small scale cutoff in the matter power spectrum may be one of the few possibilities at our disposal for distinguishing NCDM cosmologies from the paradigmatic $\Lambda$CDM model and thus probing the degree of DM "warmness". Light emitted by distant quasars and subsequently interacting with the neutral hydrogen of the intergalactic medium around redshifts $z\sim 2-6$ generates a pattern of absorption lines around $\sim 1000\,$Å: the Ly-$\alpha$ forest. This allows one to probe the power spectrum on scales $k\sim(0.1-10)\,h\,\text{Mpc}^{-1}$ at the present time, by estimating the amount of matter through a determination of the Ly-$\alpha$ optical depth, thus providing one of the most stringent ways of testing NCDM models. Constraints from the Ly-$\alpha$ flux power spectrum on the DM properties are usually given as a lower bound on the WDM mass parameter, $m_{\text{WDM}}$, used as a reference. Given $m_{\text{WDM}}$, the WDM phase space is characterized by a single quantity: $T_{\text{WDM}}$. In spite of our notation, this quantity, which decreases with time, is not a temperature, stricto sensu, since we assume the WDM species not to be in thermal equilibrium at recombination and later times.
Such a DM candidate is assumed to have achieved a state of thermal equilibrium at some earlier time in the evolution of the Universe and to have subsequently decoupled, as happens e.g. for neutrinos in the SM. Indeed, a good benchmark scenario for WDM, which we will assume from now on, is a fermionic DM candidate with two degrees of freedom having a Fermi-Dirac distribution. In this case the WDM relic density can be related to its mass and "temperature" $T_{\text{WDM}}$ by

$\Omega_{\text{WDM}}h^{2}\simeq\left(\dfrac{m_{\text{WDM}}}{94\ \text{eV}}\right)\left(\dfrac{T_{\text{WDM}}}{T_{\nu}}\right)^{3}\,,$ (2.16)

where $T_{\nu}=(4/11)^{1/3}\,T$ is the neutrino temperature expected in the SM after $e^{+}e^{-}$ annihilations, assuming instantaneous decoupling, expressed as a function of the photon temperature $T$. As usual, $h$ denotes the reduced Hubble constant, defined by the relation $H_{0}\equiv 100\,h\,\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$. Assuming that the WDM saturates the DM density determined by Planck [47], a numerical evaluation of the free-streaming horizon in Eq. (2.12) gives that the cutoff in the linear matter power spectrum occurs at

$k_{H}(a=1)\,\simeq\,3.5\,h\,\text{Mpc}^{-1}$ (2.17)

for $m_{\text{WDM}}=1$ keV. As shown in [56, 57, 39], an analytical fit for the transfer function in the WDM case is given by

$\mathcal{T}(k)=\left(1+(\alpha k)^{2\nu}\right)^{-5/\nu}\,,$ (2.18)

with the parameters

$\nu=1.12\quad\text{and}\quad\alpha=0.24\left(\dfrac{m_{\text{WDM}}}{1\ \text{keV}}\dfrac{T_{\nu}}{T_{\text{WDM}}}\right)^{-0.83}\left(\dfrac{\Omega_{\text{WDM}}h^{2}}{0.12}\right)^{-0.16}\text{Mpc}\,.$ (2.19)

Importantly, these fitting parameters are independent of the standard cosmological parameters (other than the DM abundance, $\Omega_{\text{WDM}}$). The non-observation of a cutoff in actual data for the matter power spectrum can be translated into a constraint on the WDM mass. A recent analysis [42] gives a bound $m_{\text{WDM}}>5.3$ keV at 95% C.L., while the reference [43] derived a less stringent bound, $m_{\text{WDM}}>1.9$ keV at 95% C.L., claiming a more conservative treatment of the thermal history of the intergalactic medium. In the following we will take $m_{\text{WDM}}>3$ keV as a reference, but allow our results to be translated to a different value. The most up-to-date lower bounds on the WDM mass from Ly-$\alpha$ data (1.1) have been obtained using the medium resolution X-shooter spectrographic observations of the intermediate redshift ($z:3-4.2$) XQ-100 sample of quasars [58, 59] and the higher-resolution, higher-redshift ($z:4.2-5.4$) data from the HIRES/MIKE spectrographs [60, 61]. These data can be used in combination with probes of the matter power spectrum at smaller comoving scales ($k<(\rm{km/s})^{-1}$) via Lyman-$\alpha$ data, in particular from the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS-III) [62, 63]. For future prospects (including higher redshifts), potentially allowing an enhanced sensitivity to the cutoff of the spectrum, see [64]. In principle, it may seem reasonable to assume that, in order to compare the expected matter power spectrum for a (more general, non-Fermi-Dirac) NCDM cosmology to the WDM case, it would be essential to take into account the non-linear behaviour of the DM density field on the small cosmological distances ($1-100$ Mpc) probed by Ly-$\alpha$ data.
Performing such a comparison requires costly N-body simulations for each possible NCDM case of interest. Nevertheless, the authors of Refs. [65, 66] have performed a large set of N-body simulations of models featuring an ample variety of transfer functions, confronting the resulting power spectra with Ly-$\alpha$ data, and concluding that all the models that are ruled out can also be rejected by doing a simpler, linear analysis. As we will show in the next sections, the shape of the linear power spectra of the various NCDM models we consider turns out to be very similar to the one for WDM, in spite of having, in some cases, notable differences at the level of the phase space. Therefore, we can translate the WDM Ly-$\alpha$ bounds directly by computing, numerically, the linear transfer functions for our NCDM models using a Boltzmann code, such as CLASS [44, 45], and comparing the result with the linear transfer function in the WDM case. The shape of the transfer function at the scales relevant for the change induced in the matter power spectrum by WDM free-streaming can also be probed by comparing the number of satellite galaxies of the Milky Way with N-body simulations [67, 68]. This method gives a bound on the WDM mass that is complementary and comparable to those obtained from Ly-$\alpha$ data. The initial conditions for these N-body simulations were set in [67, 68] to mimic the (linear) transfer function (2.18). Assuming that the formation of satellite galaxies depends on the nature of the DM only through the linear transfer function, we can also map these WDM mass bounds into constraints on NCDM models that feature different distribution functions, just as we do with the Ly-$\alpha$ bounds.

### 2.3 Analytical rescaling and generalized phase space distribution

Let us now consider an NCDM model for which, by assumption, $w\ll 1$ is the only quantity needed to characterize the cutoff in the linear transfer function. Then, we can estimate the bound on $w$ from Ly-$\alpha$ by finding the value of $m_{\text{DM}}$ such that

$\displaystyle w(m_{\text{DM}})=w_{\text{WDM}}(m_{\text{WDM}})\,.$ (2.20)

The bound is obtained by assuming that the cutoff scale of the linear matter power spectrum for WDM can be translated to that of NCDM by equating the equations of state. A correspondence between two NCDM scenarios (a sterile neutrino and a particle that decouples while being relativistic) was proposed for the first time (to our knowledge) in Ref. [69]. By equating the power spectra, the authors found a relation between these two scenarios, which possess distribution functions with the same analytical expression but with different parameters. A similar matching procedure using the mean square of the DM velocity was proposed in [70] and extended in [71, 72] to several freeze-in models. As we show below, our (generalized) matching relation can be applied to a wide variety of NCDM scenarios, even those in which thermal equilibrium is not established before DM decoupling. From Eq. (2.13) we can write $w_{\text{WDM}}$ as

$w_{\text{WDM}}(a)\,\simeq\,6\times 10^{-15}\,a^{-2}\,\left(\dfrac{\text{keV}}{m_{\text{WDM}}}\right)^{8/3}\,,$ (2.21)

implying that the bound $m_{\text{WDM}}\sim$ keV from Ly-$\alpha$ [39, 42, 43] translates into

$\displaystyle w_{\text{WDM}}(a=1)\lesssim 10^{-15}\,,$ (2.22)

showing that DM is indeed very cold.
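Eqs. (2.16), (2.18) and (2.19), together with the half-power scale $k_{1/2}^{\text{WDM}}$ used in Fig. 3 below, can be evaluated in a few lines. A minimal Python sketch, assuming a fermionic thermal WDM relic that saturates $\Omega_{\text{WDM}}h^{2}=0.12$:

```python
# WDM transfer-function fit, Eqs. (2.18)-(2.19), with T_WDM/T_nu fixed by
# the relic-density relation (2.16) for Omega_WDM h^2 = 0.12.
import numpy as np
from scipy.optimize import brentq

NU = 1.12

def alpha_mpc(m_kev, omega_h2=0.12):
    t_ratio = (omega_h2 * 0.094 / m_kev)**(1/3)  # T_WDM/T_nu (94 eV = 0.094 keV)
    return 0.24 * (m_kev / t_ratio)**(-0.83) * (omega_h2 / 0.12)**(-0.16)

def transfer(k, m_kev):                          # Eq. (2.18), k in 1/Mpc
    return (1.0 + (alpha_mpc(m_kev) * k)**(2*NU))**(-5.0/NU)

def w_wdm(m_kev, a=1.0):                         # Eq. (2.21)
    return 6e-15 * a**-2 * m_kev**(-8.0/3.0)

m = 3.0   # keV, the reference mass adopted in the text
# Half-power scale: P_WDM = P_LCDM / 2, i.e. transfer(k)^2 = 1/2
k_half = brentq(lambda k: transfer(k, m)**2 - 0.5, 1e-2, 1e3)
print(f"k_1/2 ~ {k_half:.0f} Mpc^-1,  w_WDM(a=1) ~ {w_wdm(m):.1e}")
# -> roughly 16 Mpc^-1 and ~3e-16, consistent with the bound (2.22)
```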
It is worth emphasizing that the constraint from Eq. (2.22) corresponds to a constraint at recombination of $w_{\text{WDM}}(a\sim 10^{-3})\lesssim 10^{-9}$, whereas analyses based on CMB data constrain this value only at the level of $w_{\text{WDM}}(a\sim 10^{-3})\lesssim 10^{-4}$ [52, 73]. For instance, a typical WIMP with a mass $m_{\rm DM}=100\ \text{GeV}$ that decoupled at a freeze-out temperature $T_{F}\simeq m_{\rm DM}/20$ inherits a Maxwell-Boltzmann distribution after decoupling of the form

$f(p,t)\,=\,\dfrac{g_{\rm DM}}{(2\pi)^{3}}\exp\left[\dfrac{-p^{2}a(t)^{2}}{2m_{\rm DM}a_{F}^{2}T_{F}}\right]\,.$ (2.23)

In the non-relativistic limit, the energy and pressure densities can be evaluated analytically and $w$ can be expressed as

$w(a)\,\simeq\,\dfrac{a_{F}^{2}}{a^{2}}\dfrac{T_{F}}{m_{\rm DM}}\,\simeq\,10^{-29}\left(\dfrac{1}{a^{2}}\right)\left(\dfrac{20\,T_{F}}{m_{\rm DM}}\right)\left(\dfrac{100\ \text{GeV}}{m_{\rm DM}}\right)^{2}\left(\dfrac{100}{g_{*}^{F}}\right)^{2/3}\,,$ (2.24)

where $g_{*}^{F}$ denotes the effective number of degrees of freedom at freeze-out. This value of $w$ is several orders of magnitude below the typical value constrained by Ly-$\alpha$. For our NCDM case, Eq. (2.20) leads to

$m_{\text{DM}}\,=\,m_{\text{WDM}}\left(\dfrac{T_{\star}}{T_{\text{WDM}}}\right)\sqrt{\dfrac{\langle q^{2}\rangle}{\langle q^{2}\rangle_{\text{WDM}}}}\simeq\,7.56\ \text{keV}\,\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{T_{\star}}{T}\right)\sqrt{\langle q^{2}\rangle}\,.$ (2.25)

Alternatively, the bound can be expressed in terms of the mean momentum at the present time, $\langle p\rangle_{0}=\langle q\rangle\,T_{\star}$, where $\langle q\rangle$ is defined in Eq. (2.14), giving

$m_{\text{DM}}\,\simeq\,7.56\ \text{keV}\,\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{\langle p\rangle_{0}}{T_{0}}\right)\dfrac{\sqrt{\langle q^{2}\rangle}}{\langle q\rangle}\,.$ (2.26)

As we will show, most of the NCDM cases discussed in this paper can be well described with a generalized phase space distribution of the form

$f(q)\,\propto\,q^{\alpha}\,\exp{\left(-\beta\,q^{\gamma}\right)}\,,$ (2.27)

with constant $\alpha>-3$ and $\beta,\gamma>0$, as required for the DM number density to be finite. For this distribution the normalized $n$-th moment (2.14) is

$\langle q^{n}\rangle\,=\,\beta^{-\frac{n}{\gamma}}\,\dfrac{\Gamma\left(\frac{3+n+\alpha}{\gamma}\right)}{\Gamma\left(\frac{3+\alpha}{\gamma}\right)}\,.$ (2.28)

The rescaling of the mass reproducing the same cutoff as the WDM case then gives

$m_{\text{DM}}\,\simeq\,7.56\ \text{keV}\,\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{\langle p\rangle_{0}}{T_{0}}\right)\,\sqrt{\dfrac{\Gamma\left(\frac{3+\alpha}{\gamma}\right)\,\Gamma\left(\frac{5+\alpha}{\gamma}\right)}{\Gamma^{2}\left(\frac{4+\alpha}{\gamma}\right)}}\,,$ (2.29)

which does not depend explicitly on $\beta$. (The mean momentum $\langle p\rangle_{0}$, however, does depend on this quantity.)
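Eqs. (2.28) and (2.29) are straightforward to evaluate and cross-check numerically. A minimal Python sketch, using as an illustration the inflaton-decay fit parameters quoted in Section 1.3:

```python
# Moments of the generalized distribution (2.27) via Eq. (2.28), checked
# against a direct numerical integration of the definition (2.14).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

def moment(n, a, b, g):                  # Eq. (2.28)
    return b**(-n/g) * G((3 + n + a)/g) / G((3 + a)/g)

def moment_num(n, a, b, g):              # Eq. (2.14), evaluated directly
    f = lambda q: q**a * np.exp(-b * q**g)
    num = quad(lambda q: q**(n + 2) * f(q), 0, np.inf)[0]
    den = quad(lambda q: q**2 * f(q), 0, np.inf)[0]
    return num / den

a, b, g = -1.5, 0.74, 2.0                # inflaton-decay fit, Section 1.3
print(moment(2, a, b, g), moment_num(2, a, b, g))  # the two values agree

# beta-independent Gamma-function factor of Eq. (2.29), = sqrt(<q^2>)/<q>
factor = np.sqrt(G((3 + a)/g) * G((5 + a)/g)) / G((4 + a)/g)
print(factor)
```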
As we will show later for the various examples we consider, Eq. (2.29) allows us to translate any Ly-$\alpha$ bound on $m_{\text{WDM}}$ into a bound on $m_{\text{DM}}$ for a given NCDM model, with good precision and without requiring a numerical computation of the power spectrum, provided that the DM phase space distribution is well described by (2.27). (Notably, in the cases for which (2.27) does not apply, such as thermal freeze-in with $n=6$, an analytical form for the constraint can still be found using the more general Eq. (2.25).) Provided that the NCDM equation-of-state parameter evolves as $w(a)\simeq w_{0}\,a^{-2}$ at redshifts $z<10^{6}$, at which the wavenumbers $k$ relevant for Lyman-$\alpha$ data enter the horizon, the NCDM power spectrum should exhibit the same features as the WDM power spectrum at first order in $w$. A different $a$-dependence of $w(a)$ would affect our matching procedure. For instance, in cannibalistic dark matter scenarios, the equation-of-state parameter behaves as $w(a)\propto 1/\log a$ when number-changing processes are active and $w(a)\propto a^{-2}$ in the non-relativistic regime at later times $z<10^{3}$ [36]. In this case it has been shown that the NCDM power spectrum can be matched to a WDM one with good precision [36], by introducing a similar matching procedure up to a correction factor of order one [74].

Figure 3: Relative difference between the transfer functions of the various NCDM models considered in this work and the WDM case (assuming $m_{\text{WDM}}=3$ keV). The scale $k_{1/2}^{\text{WDM}}$ defined in Section 2.3 is represented as a black vertical dashed line. Our matching procedure performs at the level of $\sim 3\%$ or better at this scale for most models. The notation for the modulus decay cases is introduced in Section 3.2.1 (see Fig. 8). For thermal particle decays, FD and BE stand for the Fermi-Dirac and Bose-Einstein distributions for the decaying particle, respectively (see Fig. 11). For non-thermal decays, R and NR denote a relativistic or a non-relativistic decaying particle, respectively (see Eq. 4.14). For thermal and non-thermal freeze-in, the parameter $n$ has been introduced in Eq. (1.5).

The numerical precision achieved using our rescaling procedure is represented in Fig. 3, which shows the relative difference between the various NCDM transfer functions considered in this work and the one for the WDM case. The transfer functions are computed numerically with CLASS, using our rescaling procedure for a given WDM mass, which we take to be $m_{\text{WDM}}=3$ keV in Fig. 3. The NCDM transfer functions accurately match the WDM transfer function for $k<20\,h\,\text{Mpc}^{-1}$, mostly with sub-percent precision. The precision decreases for larger modes $k$. In order to estimate the expected difference in the cutoff scale between NCDM and WDM using our procedure, in terms of a quantity more directly relevant for observational data, we also show in Fig. 3 the scale $k_{1/2}^{\text{WDM}}$, defined such that $\mathcal{P}_{\text{WDM}}(k_{1/2}^{\text{WDM}})=(1/2)\mathcal{P}_{\Lambda\text{CDM}}(k_{1/2}^{\text{WDM}})$ for a given WDM mass. Fig. 3 shows that at $k_{1/2}^{\text{WDM}}$ our rescaling procedure achieves a $3\%$ difference or better on the NCDM transfer function, relative to the WDM case, for most of our scenarios. The least precise cases reach a $\sim 10\%$ difference at $k_{1/2}^{\text{WDM}}$. These correspond to DM production from the decay of a non-thermal relativistic particle and to DM production via thermal freeze-in with $n=6$. An accurate estimate of the transfer function is numerically more challenging in these cases due to the specific shape of the phase space distributions. For this reason, we believe that the larger relative difference displayed in Fig. 3 for these cases can be partially attributed to the requirement of a reasonable computation time, at the price of limited precision.
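Extracting the half-mode scale from a pair of tabulated spectra (e.g. CLASS output for the NCDM model and for pure CDM) is likewise simple. A minimal sketch, assuming `k`, `pk_model` and `pk_cdm` are NumPy arrays on a common grid sorted in increasing $k$, with the cutoff inside the sampled range; in terms of the transfer function, the condition below is $\mathcal{T}(k_{1/2})=1/\sqrt{2}$.

```python
import numpy as np

def k_half(k, pk_model, pk_cdm):
    # Smallest k where P_model(k) = (1/2) P_LCDM(k), i.e. the half-mode
    # scale; log-linear interpolation between the bracketing grid points.
    ratio = pk_model / pk_cdm
    idx = np.argmax(ratio < 0.5)                 # first grid point below 1/2
    r_hi, r_lo = ratio[idx - 1], ratio[idx]      # ratio decreases with k
    t = (r_hi - 0.5) / (r_hi - r_lo)
    return np.exp((1 - t) * np.log(k[idx - 1]) + t * np.log(k[idx]))
```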
## 3 Decay of a classical condensate

We consider as the first application of our formalism the production of DM from the perturbative decay of a classical, spatially homogeneous, oscillating condensate. As a first example, we study the decay of the inflaton field into DM during reheating, assuming all other DM interactions can be neglected. We then consider the decay of a modulus field, a scalar field present in the early Universe with a non-vanishing vacuum misalignment: a displacement from its post-inflationary global minimum, which leads to a subsequent epoch of oscillations about this minimum. We explore the scenario in which the oscillations of the modulus dominate the energy density of the Universe at late times, as well as the case in which they are subdominant to the inflaton or radiation background. In all cases we find the non-thermal DM phase space distributions and the corresponding Ly-$\alpha$ bounds on the DM mass.

### 3.1 Perturbative inflaton decay

Let us assume that the production of a DM particle $\chi$ proceeds through the two-body decay of the inflaton field during reheating, i.e. through a process of the form $\phi\rightarrow\chi+\psi$. The rest frame decay rate for this process is given by $\Gamma_{\phi\rightarrow\chi\psi}={\rm Br}_{\chi}\,\Gamma_{\phi}$, where ${\rm Br}_{\chi}$ denotes the branching ratio to $\chi$ and $\Gamma_{\phi}$ is the total decay rate of the inflaton. We assume the coupling of $\chi$ with $\phi$ is sufficiently weak to disregard the re-population of $\phi$ from inverse decays (an assumption justified a posteriori by requiring the generated DM density to match the observed relic abundance). Moreover, as in all other cases, we assume that the couplings of $\chi$ to the visible sector or to itself are not strong enough to bring it to kinetic and/or chemical equilibrium, and may therefore be disregarded. It must be emphasized that, for simplicity, in each of the cases discussed in this paper we assume that $100\%$ of the DM relic abundance is produced by a single mechanism. In addition, we assume that the DM particles do not have significant interactions between them (as highlighted recently in [36, 75], self-interactions could affect the power spectrum, in particular for light DM masses). Moreover, we neglect thermal effects, which would give rise to a subdominant contribution for UV freeze-in but have been shown to alter the produced DM phase space distribution in specific IR-dominated freeze-in scenarios [76].
In order to apply the procedure described in Section 2 for mapping the WDM bound on $m_{\text{WDM}}$ into a bound on the mass of $\chi$, we must first determine the form of the phase space distribution $f_{\chi}$ generated from decays of the inflaton field, by solving the Boltzmann transport equation $\frac{\partial f_{\chi}}{\partial t}-H|\boldsymbol{p}|\frac{\partial f_{\chi}}{\partial|\boldsymbol{p}|}\,=\,\mathcal{C}[f_{\chi}(|\boldsymbol{p}|,t)]\,,$ (3.1) where $\mathcal{C}[f_{\chi}]$ denotes the collision term, determined by the inflaton-DM interaction. In Appendix A we provide the general form of this collision term, as well as the general solution of (3.1) in the absence of inverse processes and in the free-streaming limit.

#### 3.1.1 DM phase space distribution

Under the assumptions discussed above, the decay of the inflaton to $\chi$ will be perturbative. If this is true for all its decay channels, then $\phi$ is, on average, spatially homogeneous, and the phase space distribution may be written as $f_{\phi}(k,t)=(2\pi)^{3}n_{\phi}(t)\delta^{(3)}(\boldsymbol{k})$, with $n_{\phi}$ the instantaneous inflaton number density. Disregarding inverse decays, the collision term for the transport equation that determines the distribution function for $\chi$ takes the form $\displaystyle\mathcal{C}[f_{\chi}(p,t)]\;$ $\displaystyle=\;\frac{1}{2p_{0}}\int\frac{\mathop{}\\!\mathrm{d}^{3}{\boldsymbol{k}}}{(2\pi)^{3}2k_{0}}\frac{g_{\psi}\mathop{}\\!\mathrm{d}^{3}{\boldsymbol{p}}_{\psi}}{(2\pi)^{3}2p_{\psi 0}}(2\pi)^{4}\delta^{(4)}(k-p-p_{\psi})$ $\displaystyle\hskip 145.0pt\times|\mathcal{M}|^{2}_{\phi\rightarrow\chi\psi}f_{\phi}(k)\left(1\pm f_{\chi}(p)\pm f_{\psi}(p_{\psi})\right)$ (3.2) $\displaystyle=\;\frac{\pi n_{\phi}}{4m_{\phi}p_{0}}\int_{\rm RF}\frac{g_{\psi}\mathop{}\\!\mathrm{d}^{3}{\boldsymbol{p}}_{\psi}}{p_{\psi 0}}\delta(m_{\phi}-p_{0}-p_{\psi 0})\delta^{(3)}(\boldsymbol{p}+\boldsymbol{p}_{\psi})|\mathcal{M}|^{2}_{\phi\rightarrow\chi\psi}\left(1\pm f_{\chi}(p)\pm f_{\psi}(p_{\psi})\right)$ $\displaystyle=\;\frac{2\pi^{2}}{g_{\chi}\varepsilon^{2}_{\psi}}n_{\phi}\Gamma_{\phi\rightarrow\chi\psi}(1\pm f_{\chi}(p_{0})\pm f_{\psi}(\varepsilon_{\psi}))\delta(p_{0}-\varepsilon_{\psi})\,,$ (3.3) where notations and conventions are detailed in the appendix. Here $\varepsilon_{\psi}=(m_{\phi}^{2}+m_{\psi}^{2}-m_{\text{DM}}^{2})/2m_{\phi}$ denotes the energy of the daughter particle. The collision term can be further simplified in the limit when $m_{\text{DM}},m_{\psi}\ll m_{\phi}$, so that $p_{0}\simeq|\boldsymbol{p}|=p$, and when the quantum statistics of the decay products can be neglected (this is ensured provided that the effective coupling $y_{\chi}\equiv(8\pi\Gamma_{\phi\rightarrow\chi\psi}/m_{\phi})^{1/2}\ll 10^{-5}$). If this is the case we can simply write $\mathcal{C}[f_{\chi}(p,t)]\;=\;\frac{8\pi^{2}}{g_{\chi}m_{\phi}^{2}}n_{\phi}\Gamma_{\phi\rightarrow\chi\psi}\delta(p-m_{\phi}/2)\,.$ (3.4) Substitution of this collision term into the transport equation (3.1) yields an equation that has an exact solution in terms of the Hubble parameter $H$ and the inflaton occupation number [49, 77], $\displaystyle f_{\chi}(p,t)\;$ $\displaystyle=\;\frac{16\pi^{2}\Gamma_{\phi\rightarrow\chi\psi}n_{\phi}(\hat{t})}{g_{\chi}m_{\phi}^{3}H(\hat{t})}\theta(t-\hat{t})\,,$ (3.5) where $\hat{t}$ is the solution of the equation $\frac{a(t)}{a(\hat{t})}=\frac{m_{\phi}}{2p}\,.$ (3.6) In order to obtain a closed form for $f_{\chi}$ we need to solve for the inflaton number density and the expansion rate.
This can be achieved by integrating the Friedmann-Boltzmann system of equations $\displaystyle\dot{\rho}_{\phi}+3H\rho_{\phi}+\Gamma_{\phi}\rho_{\phi}\;$ $\displaystyle=\;0\,,$ (3.7) $\displaystyle\dot{\rho}_{r}+4H\rho_{r}-\Gamma_{\phi}\rho_{\phi}\;$ $\displaystyle=\;0\,,$ (3.8) $\displaystyle\rho_{\phi}+\rho_{r}\;=\;3H^{2}M_{P}^{2}\,,$ (3.9) where the reduced Planck mass is $M_{P}=1/\sqrt{8\pi\,G}$ (with $G$ Newton's gravitational constant), and where we denote by $\rho_{\phi}$ and $\rho_{r}$ the energy densities of the inflaton condensate and of its relativistic decay products, respectively. Note that (3.8) is nothing but the integrated version of the transport equation (3.1) for an ultrarelativistic species with ${\rm Br}_{r}=1$. Straightforward integration gives [78] $n_{\phi}(t)\;=\;\frac{\rho_{\phi}(t)}{m_{\phi}}\;=\;\frac{\rho_{\rm end}}{m_{\phi}}\left(\frac{a(t)}{a_{\rm end}}\right)^{-3}e^{-\Gamma_{\phi}(t-t_{\rm end})}\,,$ (3.10) where the sub-index “end” denotes quantities at the end of inflation. For $t_{\rm end}\ll t\ll\Gamma_{\phi}^{-1}$ the exponential in the previous expression can be disregarded: the Universe is dominated by the matter-like oscillations of $\phi$. Therefore, we may also approximate $a\propto t^{2/3}$, and $\hat{t}\simeq(2p/m_{\phi})^{3/2}t$. Substitution into (3.5) yields the following expression for the phase space distribution of $\chi$ well before the end of reheating at $t_{\rm reh}\simeq\Gamma_{\phi}^{-1}$, ($t\ll t_{\rm reh}$) $\displaystyle\begin{aligned} f_{\chi}(p,t)\;&=\;\frac{24\pi^{2}{\rm Br}_{\chi}\Gamma_{\phi}}{g_{\chi}m_{\phi}^{3}}\left(\frac{m_{\phi}}{2p}\right)^{3/2}t\,n_{\phi}(t)\,\theta(m_{\phi}/2-p)\\\ &\simeq\;\frac{24\pi^{2}n_{\chi}(t)}{g_{\chi}m_{\phi}^{3}}\left(\frac{m_{\phi}}{2p}\right)^{3/2}\,\theta(m_{\phi}/2-p)\,.\end{aligned}$ (3.11) Here we have approximated the number density of decay products as $n_{\chi}(t)\;\simeq\;{\rm Br}_{\chi}\frac{\rho_{\rm end}}{m_{\phi}}\left(1-e^{-\Gamma_{\phi}(t-t_{\rm end})}\right)\left(\frac{a(t)}{a_{\rm end}}\right)^{-3}\,,$ (3.12) obtained by counting the quanta produced from inflaton decay. Note the consistency of (3.11) with the defining relation (A.3) between $f_{\chi}$ and $n_{\chi}$. The distribution (3.11) will come in handy for our study of non-thermal freeze-in in Section 5.2. For our present purposes, though, this distribution is incomplete, as it lacks the high momentum tail that is generated when the inflaton energy density begins to be exhausted. We must therefore extend (3.11) beyond the end of reheating. As a first approximation, we evaluate (3.5) at $t_{\rm reh}=\Gamma_{\phi}^{-1}$, the moment at which the energy density in $\phi$ is approximately equal to that in radiation, $\rho_{\phi}\simeq\rho_{r}$, and where $H_{\rm reh}\simeq 2\Gamma_{\phi}/3$. With the reheating temperature given by $T_{\rm reh}\;=\;\left(\frac{30\rho_{\rm rad}}{\pi^{2}g_{*s}^{\rm reh}}\right)^{1/4}\,,$ (3.13) where $g_{*s}^{\rm reh}$ denotes the effective number of relativistic degrees of freedom for entropy at reheating, we can substitute into (3.5) to obtain $f_{\chi}(p,t_{\rm reh})\;\simeq\;\frac{4\pi^{4}{\rm Br}_{\chi}g_{*s}^{\rm reh}}{5g_{\chi}}\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{4}\left(\frac{m_{\phi}}{2p}\right)^{3/2}e^{1-(2p/m_{\phi})^{3/2}}\theta(m_{\phi}/2-p)\,.$ (3.14) Naively, this distribution would evolve at later times simply in accordance with (A.6).
However, the production of entropy from inflaton decay does not suddenly stop at $t_{\rm reh}$, but continues for some time into the radiation domination era. The continuous transition $w=0\rightarrow 1/3$ makes the analytical estimation of $f_{\chi}$ beyond $t_{\rm reh}$ complicated, although not impossible (see e.g. [79, 48]). Nevertheless, Eqs. (3.5) and (3.10) make an estimate of the shape of the tail of the distribution straightforward. During radiation domination $a\propto t^{1/2}$, implying that for momenta which satisfy the relation $t_{\rm reh}\ll\hat{t}=(2p/m_{\phi})^{2}t$, the time-dependence of $n_{\phi}$ yields ($t_{\rm reh}\ll(2p/m_{\phi})^{2}t$) $\displaystyle f_{\chi}(p,t)\;\propto\;\exp\left[-\left(\frac{2p}{m_{\phi}}\right)^{2}\frac{t}{t_{\rm reh}}\right]\theta(m_{\phi}/2-p)\,,$ (3.15) i.e. a Gaussian tail. A better approximation for $f_{\chi}(p,t)$ beyond the end of reheating can be constructed by solving numerically the Friedmann-Boltzmann system (3.7)-(3.9) together with (3.1) with collision term (3.4). This solution is shown as the continuous black curve in Fig. 4, in the form of the rescaled distribution $\bar{f}_{{\rm R}}$, defined through the relation $f_{\chi}(p,t)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{p}\;=\;\frac{4\pi^{4}{\rm Br}_{\chi}g_{*s}^{\rm reh}}{5g_{\chi}}\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{4}\,\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star}^{3}\,\bar{f}_{\rm R}(q)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{q}\,.$ (3.16)

Figure 4: The rescaled distribution function $\bar{f}_{{\rm R}}$, defined in (3.16), as a function of the rescaled momentum $q$, for DM produced from inflaton decay. Solid, black: the numerically computed result. Dashed-dotted, blue: the analytical result (3.14) without the Heaviside function. Dashed, orange: the phenomenological fit (3.18). The part of the distribution for which $q<1$ is populated during $t<t_{\rm reh}$. The part of the distribution for which $q>1$ is populated during $t>t_{\rm reh}$.

Here $q$ is defined as in (2.2), and in this scenario $T_{\star}\;=\;\frac{m_{\phi}}{2}\frac{a_{\rm reh}}{a_{0}}\;=\;\left(\frac{g_{*s}^{0}}{g_{*s}^{\rm reh}}\right)^{1/3}\frac{m_{\phi}}{2T_{\rm reh}}\,T_{0}\,.$ (3.17) The numerical solution was computed at $t=50t_{\rm reh}$, well beyond the matter-radiation equality that signals the end of reheating. At this time the universe is dominated by radiation, and the production of entropy from inflaton decay has ceased. The particle population that was produced during $t<t_{\rm reh}$ occupies the distribution at $q<1$, while the population created during $t>t_{\rm reh}$ corresponds to the $q>1$ tail. Also shown in Fig. 4 is the analytical solution (3.14), ignoring the Heaviside cutoff at $q=1$. As expected, this expression accurately describes the distribution at small momenta, $\bar{f}_{\rm R}\propto q^{-3/2}$, but the tail is not matched. Given that we expect the large momentum regime to be described by (3.15), we also show in the figure, as an orange dashed curve, a fitting function that mimics the low- and high-energy behavior of the distribution, $\bar{f}_{{\rm R}}(q)\;\simeq\;2.28\,q^{-3/2}e^{-0.74q^{2}}\,.$ (3.18) This approximation is of the form (2.27), and provides an excellent fit to the exact form of $\bar{f}_{\rm R}$. Note the seeming mismatch between the ratio $(t/t_{\rm reh})^{1/2}$ and the ratio $a(t)/a_{\rm reh}$ through which $q$ is defined, quantified by the factor 0.74 in the exponent. This is due to the relatively complicated dependence of the scale factor on time in the matter-radiation transition at the end of reheating, which affects the high-energy tail of the distribution.
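The background part of this numerical solution is easy to reproduce. The sketch below integrates the system (3.7)-(3.9) in dimensionless form, with time in units of $\Gamma_{\phi}^{-1}$ and energy densities in units of $3M_{P}^{2}\Gamma_{\phi}^{2}$, so that $H/\Gamma_{\phi}=\sqrt{\rho_{\phi}+\rho_{r}}$; the initial Hubble rate is an illustrative choice, and obtaining the full $f_{\chi}$ additionally requires evolving (3.1) with the collision term (3.4) on a grid of comoving momenta.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, y):
    # Dimensionless Friedmann-Boltzmann system, Eqs. (3.7)-(3.9)
    rho_phi, rho_r = y
    H = np.sqrt(rho_phi + rho_r)                 # H / Gamma_phi
    return [-(3.0 * H + 1.0) * rho_phi,          # inflaton: redshift + decay
            -4.0 * H * rho_r + rho_phi]          # radiation: redshift + source

H_end = 1.0e3                                    # H_end/Gamma_phi (illustrative)
tau_end = 2.0 / (3.0 * H_end)                    # matter-like era: H = 2/(3t)
sol = solve_ivp(rhs, (tau_end, 5.0), [H_end**2, 0.0], rtol=1e-10, atol=1e-12)

# Reheating, rho_phi ~ rho_r, occurs at tau = Gamma_phi * t ~ O(1)
i_reh = np.argmin(np.abs(sol.y[0] - sol.y[1]))
print("rho_phi = rho_r near tau =", sol.t[i_reh])
```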
#### 3.1.2 Power spectrum and Ly-$\alpha$ constraints

With the phase space distribution for DM produced from direct inflaton decay in hand, we can now make use of Eq. (2.25) to map the WDM Ly-$\alpha$ constraints onto the DM mass for this scenario. Straightforward calculation gives the following rescaling of the bound on the DM mass, $\displaystyle m_{\text{DM}}\;\gtrsim\;\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{106.75}{g_{*s}^{\rm reh}}\right)^{1/3}\times\left(\dfrac{m_{\phi}}{3\times 10^{13}\ \text{GeV}}\right)\left(\dfrac{10^{10}\ \text{GeV}}{T_{\text{reh}}}\right)\begin{cases}3.78\ \text{MeV}\,,\ &{\rm Numerical}\,,\\\ 4.11\ \text{MeV}\,,\ &{\rm Analytical}\,,\\\ 3.79\ \text{MeV}\,,\ &{\rm Fit}\,.\end{cases}$ (3.19) The numerical, analytical and fit approximations correspond to the numerically computed distribution shown in Fig. 4, to (3.14), and to (3.18), respectively. For low reheating temperatures the bound on the NCDM mass becomes significantly larger than that for WDM. This can be understood by fixing the inflaton mass and progressively decreasing the reheating temperature. The bulk of DM is produced around reheating with typical momentum $p\sim m_{\phi}/2$, regardless of the radiation temperature. Reducing the reheating temperature therefore prevents the momentum of the DM particle from redshifting as much, resulting in a hotter spectrum at the present time than that expected for large reheating temperatures.

Figure 5: Linear transfer function for the scenario in which DM is produced by inflaton decay, assuming the numerical, analytical or fitted phase space distributions described in Section 3.1, taking the mass estimated in (3.19) and identical reheating temperatures, $T_{\rm reh}=10^{10}\,{\rm GeV}$. The transfer function for the WDM case is shown for comparison with a dashed black line. We also depict the transfer functions for the numerical, analytical and fitted phase space distributions with identical masses and reheating temperature $T_{\rm reh}=10^{12}\,{\rm GeV}$.

Fig. 5 shows the form of the transfer function for the matter power spectrum, as computed with CLASS [44, 45]. Depicted are the results for the numerical, analytical and fit approximations to $f_{\chi}$. The rightmost set of curves shows the form of each $\mathcal{T}(k)$ for a reheating temperature $T_{\rm reh}=10^{10}\,{\rm GeV}$, and masses given by Eq. (3.19). The overlap of all three curves with each other, and with the reference WDM transfer function, demonstrates the validity of our method for this DM production mechanism. At $k_{1/2}^{\rm WDM}$ the relative difference between WDM and the numerical result is in particular smaller than $10^{-3}$, c.f. Fig. 3. The leftmost cluster of curves shows the form of $\mathcal{T}(k)$ for the three approximations for a larger reheating temperature, $T_{\rm reh}=10^{12}\,{\rm GeV}$, assuming a mass of $10\,{\rm keV}$. These curves do not overlap with the WDM reference, and they differ slightly from each other, although the agreement between the numerical and fit cases is still excellent.
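As a cross-check of the 'Fit' entry in (3.19), the bound can be reassembled by hand from the fit (3.18), the moments of (2.27) and the temperature ratio (3.17). The short Python sketch below (with $g_{*s}^{0}\simeq 3.91$ and the reference values $m_{\phi}=3\times 10^{13}\,{\rm GeV}$, $T_{\rm reh}=10^{10}\,{\rm GeV}$) reproduces the quoted $3.79\ \text{MeV}$; it also verifies the moment formula (2.28) against direct quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Fit (3.18): f(q) = 2.28 q^alpha exp(-beta q^gamma)
alpha, beta, gam = -1.5, 0.74, 2.0
f = lambda q: 2.28 * q**alpha * np.exp(-beta * q**gam)

# Moments of (2.27): analytically via Eq. (2.28) and numerically via quadrature
qn = lambda n: beta**(-n / gam) * gamma((3 + n + alpha) / gam) / gamma((3 + alpha) / gam)
qn_num = lambda n: (quad(lambda q: q**(n + 2) * f(q), 0, np.inf)[0]
                    / quad(lambda q: q**2 * f(q), 0, np.inf)[0])
assert abs(qn(2) - qn_num(2)) < 1e-6

# <p>_0/T_0 = <q> T_*/T_0, with T_* from Eq. (3.17)
g0, greh = 3.91, 106.75                # g_{*s} today / at reheating
m_phi, T_reh = 3e13, 1e10              # GeV
p0_over_T0 = qn(1) * (g0 / greh)**(1 / 3) * m_phi / (2 * T_reh)

# Eq. (2.26) with m_WDM = 3 keV
m_dm_keV = 7.56 * p0_over_T0 * np.sqrt(qn(2)) / qn(1)
print(m_dm_keV / 1e3, "MeV")           # ~3.79 MeV, the 'Fit' entry of (3.19)
```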
#### 3.1.3 Relic density and phenomenology

We now discuss the phenomenological implications of a lower bound on a light DM particle produced from inflaton decay. Given a reheating temperature and a DM mass, the normalization of the distribution function is determined by the value of the present DM fraction $\Omega_{\chi}=\rho_{\chi}/\rho_{c}$, where $\rho_{c}\simeq 1.05\times 10^{-5}h^{2}{\rm GeV\,cm}^{-3}$ is the present critical density of the Universe [80]. Integration of (3.16) at $t\gg t_{\rm reh}$ gives $n_{\chi}(t)\;\simeq\;0.70\pi^{2}{\rm Br}_{\chi}g_{*s}^{\rm reh}\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{4}\,\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star}^{3}\,,$ (3.20) which in turn yields $\Omega_{\chi}h^{2}\;\simeq\;0.1\left(\frac{{\rm Br}_{\chi}}{5.5\times 10^{-4}}\right)\left(\frac{m_{\text{DM}}}{1\,{\rm MeV}}\right)\left(\frac{T_{\rm reh}}{10^{10}\,{\rm GeV}}\right)\left(\frac{3\times 10^{13}\,{\rm GeV}}{m_{\phi}}\right)\,.$ (3.21) Combining the bounds on the DM mass (3.19) and on the relic abundance (3.21), the following constraint can be derived on the branching ratio of the decay of the inflaton into dark matter, ${\rm Br}_{\chi}\;\lesssim\;1.5\times 10^{-4}\,\left(\frac{g_{*s}^{\rm reh}}{106.75}\right)^{1/3}\left(\frac{3\,{\rm keV}}{m_{\rm WDM}}\right)^{4/3}\,.$ (3.22) Note the universality of this bound: it is independent of the inflaton mass and the reheating temperature. As mentioned earlier, such a limit applies even in the absence of tree-level couplings between the inflaton and DM. Assuming a dominant fermionic decay channel of the inflaton, with these decay products in turn coupled to DM through an effective interaction of the form $\mathcal{L}\;=\;y\phi\bar{f}f+\frac{1}{\Lambda^{2}}\bar{f}f\bar{\chi}\chi\,,$ (3.23) (which could arise from the exchange of a massive field with mass $\sim\Lambda$), a non-vanishing decay rate for the $\phi\rightarrow\bar{\chi}\chi$ process is induced at 1-loop [46], $\Gamma_{\phi\rightarrow\bar{\chi}\chi}\;\simeq\;\frac{y^{2}}{128\pi^{5}}\left(1+\frac{\pi^{2}}{4}\right)\frac{m_{\phi}^{5}}{\Lambda^{4}}\,,$ (3.24) corresponding to ${\rm Br}_{\chi}=\frac{1}{16\pi^{4}}(1+\frac{\pi^{2}}{4})(\frac{m_{\phi}}{\Lambda})^{4}$. Substitution into (3.22) reveals that $\Lambda\;\gtrsim\;2\,m_{\phi}\left(\frac{106.75}{g_{*s}^{\rm reh}}\right)^{1/12}\left(\frac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{1/3}\,,$ (3.25) a condition consistent with the form of the effective action (3.23), assumed to be valid at all times during reheating. We finish this section by emphasizing that the bounds (3.19) and (3.22) apply to the perturbative decay of the inflaton $\phi$ while it oscillates about a quadratic minimum. A different production mechanism, e.g. through perturbative decay in a non-quadratic potential [81, 82, 83], or via non-adiabatic particle production [84, 85, 86, 87, 88, 89], will lead to a different constraint on ${\rm Br}_{\chi}$.

### 3.2 Moduli decays

The inflaton is not necessarily the only scalar condensate that can decay in the early Universe. In many BSM constructions, notably supersymmetric and string SM extensions, a plethora of weakly-interacting unstable scalar fields, collectively known as moduli, arises [90, 91, 92, 93, 94, 95]. During inflation, these moduli can be excited away from the minima of their potential, resulting in a subsequent roll towards these minima.
Depending on the initial misalignment and the masses of the moduli, the subsequent oscillations about the minima may eventually dominate the energy density of the Universe. The decay of these fields would then reheat the Universe at temperatures below the inflationary reheating temperature, diluting any relics produced earlier (such as DM) as well as the baryon asymmetry. This process would also lead to deviations from standard Big Bang Nucleosynthesis (BBN), which is strongly constrained by the data, unless $T_{\rm reh}\gtrsim 1\,{\rm MeV}$ [96, 97]. If a modulus $Z$ has a non-vanishing branching ratio to DM, the Ly-$\alpha$ bounds derived in the previous section can be mapped to its decay (provided that $m_{Z}\gg m_{\rm DM}$) simply by replacing the inflaton mass and reheating temperature with their corresponding modulus values. In particular, for the mass bound, we can write $m_{\rm DM}\;\gtrsim\;12.6\,{\rm GeV}\left(\frac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{4/3}\left(\frac{106.75}{g_{*s}^{{\rm reh},Z}}\right)^{1/3}\left(\frac{m_{Z}}{10\,{\rm TeV}}\right)\left(\frac{1\,{\rm MeV}}{T_{{\rm reh},Z}}\right)\,,$ (3.26) while (3.22) remains unchanged, except for the replacement $g_{*s}^{\rm reh}\rightarrow g_{*s}^{{\rm reh},Z}$. Note that for moduli with masses $m_{Z}\gtrsim 100\,{\rm TeV}$, the lower bound on the DM mass is $\gtrsim 100\,{\rm GeV}$, in the range of electroweak-scale DM candidates such as the lightest neutralino. Moreover, the late decay of $Z$ would ensure that the non-thermal phase space distribution (3.18) remains imprinted on this relic. This is due to the fact that most of the DM is produced around $T_{{\rm reh},Z}$, well below the corresponding thermal decoupling (freeze-out) temperature. Fig. 6 shows the limit (3.26) in the mass vs. reheating temperature plane, excluding the model-dependent $T_{{\rm reh},Z}>m_{Z}$ region. Note the wide range of values for $m_{\rm DM}$.

Figure 6: Ly-$\alpha$ constraint on the DM mass, as a function of the modulus mass $m_{Z}$ and the reheating temperature, in the case in which the oscillation of $Z$ dominates the energy density of the Universe, leading to entropy production upon its decay. The gray region corresponds to $T_{{\rm reh},Z}>m_{Z}$, where in-medium and/or non-perturbative effects may determine the decay of $Z$. At $m_{Z}=3\times 10^{13}\,{\rm GeV}$ the inflaton decay scenario is recovered.

We emphasize that this kind of constraint must be accounted for in any discussion of DM production in non-standard thermal histories, with an intermediate matter-dominated epoch between the end of reheating and BBN [98, 99, 100, 101, 102, 103, 104]. For a sufficiently large branching ratio of $Z$ to $\chi$, this non-thermal production can dominate over DM freeze-out, which would have occurred during the modified expansion history. We finally mention that the decay of a modulus into dark matter typically occurs in two stages, $Z\rightarrow A\rightarrow\chi$, where $A$ is an intermediate unstable particle, such as the gravitino. This scenario is studied in detail in Section 4.2. As we show there, although the phase space distributions of $A$ and $\chi$ differ noticeably in shape, the rescaled bound on $m_{\rm DM}$ is only corrected by an $\mathcal{O}(1)$ factor in some regimes.
Our main focus in this section is instead stabilized moduli: scalar condensates that oscillate and subsequently decay in the early Universe, while never dominating the energy budget of the Universe [105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123]. Without modifying inflation [124], this is typically achieved in model-building by introducing additional interactions that raise the mass of the modulus, increasing its decay rate, and by decreasing the amount of initial misalignment. It is important to realize that for a subdominant decaying scalar, the post-inflationary background dynamics will be determined either by the oscillating inflaton, or by its redshifting relativistic decay products. Therefore, it is necessary to distinguish between three different scenarios: (a) the modulus begins oscillating and decays during reheating, (b) the modulus begins oscillating during reheating, but decays during radiation domination, or (c) the modulus oscillates and decays during radiation domination. We now proceed to determine the phase space distribution in all three cases, and subsequently the Ly-$\alpha$ bounds and the corresponding phenomenology. As we discuss below, the observed DM abundance can be obtained from the decay of a stabilized modulus when its energy density is much smaller than that of radiation.

#### 3.2.1 DM phase space distribution

Case a: Oscillation and decay during reheating ($m_{Z}>\Gamma_{Z}>\Gamma_{\phi}$)

We begin by studying the scenario in which the field $Z$ begins its oscillations during the matter-dominated reheating, and fully decays before the end of reheating. Given that we follow the decay of a classical condensate, its distribution function will be of the form $f_{Z}(k,t)=(2\pi)^{3}n_{Z}(t)\delta^{(3)}(\boldsymbol{k})$, where $n_{Z}$ is the modulus number density (see Eq. (A.3)), and hence the DM distribution will be given by (3.5), upon replacing $\phi\rightarrow Z$. The solution of Eq. (3.6), necessary to determine the cosmic-time dependence of $f_{\chi}$, can be found in a straightforward way, and is given by $\hat{t}=(2p/m_{Z})^{3/2}t$. Moreover, the number density of the decaying $Z$ is found by integration of (3.7), again replacing $\phi\rightarrow Z$, $n_{Z}(t)\;=\;\frac{\rho_{\rm osc}}{m_{Z}}\left(\frac{a(t)}{a_{\rm osc}}\right)^{-3}e^{-\Gamma_{Z}(t-t_{\rm osc})}\,.$ (3.27) Here the subindex ‘osc’ refers to the beginning of the oscillations of $Z$, which occurs when $t_{\rm osc}^{-1}\simeq\frac{3}{2}H_{\rm osc}\simeq m_{Z}$. Assuming, as we did for the inflaton, a quadratic minimum for the potential of $Z$, we can write $\rho_{\rm osc}\simeq\frac{1}{2}m_{Z}^{2}Z_{0}^{2}$, where $Z_{0}$ denotes the value of $Z$ at the initial misalignment. Straightforward substitution then gives $f_{\chi}(p,t)\;\simeq\;\frac{12\pi^{2}{\rm Br}_{\chi}}{g_{\chi}(\Gamma_{Z}t)}\left(\frac{Z_{0}\Gamma_{Z}}{m_{Z}^{2}}\right)^{2}\left(\frac{m_{Z}}{2p}\right)^{3/2}e^{-(\Gamma_{Z}t)(2p/m_{Z})^{3/2}+(\Gamma_{Z}/m_{Z})}\theta(m_{Z}/2-p)\,.$ (3.28) In analogy with the inflaton case, we estimate the decoupling time to be $t=\Gamma_{Z}^{-1}$. The effect of any subsequent production is to populate the exponential tail of the distribution. Hence, in what follows we evaluate the distribution at this decoupling time, and disregard the effect of the Heaviside function. Moreover, we will always work in the limit $\Gamma_{Z}\ll m_{Z}$, as is the case even for stabilized moduli.
To evolve the distribution to later times we make use of the decoupled-regime solution (A.6). Note that in order to apply it we need to account for the redshift that occurs from the decay of $Z$ to the end of reheating, and the subsequent redshift from the end of reheating to present times. Since $\frac{a(t)}{a_{\rm dec}}\;=\;\frac{a(t)/a_{0}}{a_{\rm dec}/a_{0}}\;=\;\frac{a(t)/a_{0}}{(a_{\rm dec}/a_{\rm reh})(a_{\rm reh}/a_{0})}\;\simeq\;\frac{a(t)}{a_{0}}\left(\frac{g_{*s}^{\rm reh}}{g_{*s}^{0}}\right)^{1/3}\left(\frac{T_{\rm reh}}{T_{0}}\right)\left(\frac{\Gamma_{Z}}{\Gamma_{\phi}}\right)^{2/3}\,,$ (3.29) we can finally write, at late times, $f_{\chi}(p,t)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{p}\;\simeq\;\frac{16\pi^{2}{\rm Br}_{\chi}}{g_{\chi}}\left(\frac{Z_{0}}{m_{Z}}\right)^{2}\left(\frac{\Gamma_{Z}}{m_{Z}}\right)^{2}\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star,a}^{3}\bar{f}_{{\rm M},a}(q)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{q}\,,$ (3.30) where $T_{\star,a}\;=\;\frac{m_{Z}}{2T_{\rm reh}}\left(\frac{g_{*s}^{0}}{g_{*s}^{\rm reh}}\right)^{1/3}\left(\frac{\Gamma_{\phi}}{\Gamma_{Z}}\right)^{2/3}T_{0}\,,$ (3.31) and $\bar{f}_{{\rm M},a}(q)\;=\;\frac{3}{4}q^{-3/2}e^{-q^{3/2}}\,.$ (3.32)

Figure 7: The rescaled distribution functions $\bar{f}_{{\rm M},i}$, $i=\\{a,b,c\\}$, defined in (3.32), (3.40) and (3.44), as functions of the rescaled momentum $q$, for DM produced from modulus decay. For the case $m_{Z}>\Gamma_{\phi}>\Gamma_{Z}$, the reheating-radiation domination transition scale $q_{Z}$ has been chosen here to be $q_{Z}=1/4$, and both the analytical approximation and a numerical solution are shown.

Fig. 7 shows the form of this rescaled distribution (blue, solid curve). The low momentum power-law dependence and the exponential tail are evident. Clearly, this distribution is of the form (2.27) with $\alpha=-3/2$, $\beta=1$ and $\gamma=3/2$.

Case b: Oscillation during reheating, decay after reheating ($m_{Z}>\Gamma_{\phi}>\Gamma_{Z}$)

Let us now consider the case for which $Z$ starts oscillating during reheating, and its decay is not completed until the subsequent radiation domination. It is crucial to notice that when this occurs there are two possible solutions of Eq. (3.6), $\hat{t}\;\simeq\;\begin{cases}t\left(\dfrac{2p}{m_{Z}}\right)^{2}\,,&p>p_{\rm reh}\\\\[10.0pt] t_{\rm reh}\left(\dfrac{t}{t_{\rm reh}}\right)^{3/4}\left(\dfrac{2p}{m_{Z}}\right)^{3/2}\,,&p<p_{\rm reh}\,,\end{cases}\qquad p_{\rm reh}\equiv\frac{m_{Z}}{2}\left(\frac{t_{\rm reh}}{t}\right)^{1/2}\,.$ (3.33) Here we have assumed for simplicity a sharp transition from matter to radiation domination at $t_{\rm reh}$, with $a\propto t^{2/3}$ in the former case and $a\propto t^{1/2}$ in the latter. This approximation necessarily leads to a discontinuity in the Hubble parameter, which translates into a discontinuity in the distribution function $f_{\chi}$. This is nothing but an artifact of our approximations, and it has minimal phenomenological consequences, as we will show below. For $p>p_{\rm reh}$ we have $\hat{t}>t_{\rm reh}$.
In this case we write the number density of $Z$ as follows, $\displaystyle n_{Z}(\hat{t})\;$ $\displaystyle\simeq\;\frac{\rho_{\rm osc}}{m_{Z}}\left(\frac{a_{\rm reh}}{a_{\rm osc}}\right)^{-3}\left(\frac{a(\hat{t})}{a_{\rm reh}}\right)^{-3}e^{-\Gamma_{Z}(\hat{t}-t_{\rm osc})}$ $\displaystyle\simeq\;\frac{1}{2}m_{Z}Z_{0}^{2}\left(\Gamma_{\phi}t\right)^{-3/2}\left(\frac{\Gamma_{\phi}}{m_{Z}}\right)^{2}\left(\frac{m_{Z}}{2p}\right)^{3}e^{-(\Gamma_{Z}t)(2p/m_{Z})^{2}}\,,$ (3.34) and $H(\hat{t})\;\simeq\;\frac{1}{2t}\left(\frac{m_{Z}}{2p}\right)^{2}\,.$ (3.35) On the other hand, if $p<p_{\rm reh}$, then $\hat{t}<t_{\rm reh}$. Therefore, $\displaystyle n_{Z}(\hat{t})\;$ $\displaystyle\simeq\;\frac{\rho_{\rm osc}}{m_{Z}}\left(\frac{a(t)}{a_{\rm osc}}\right)^{-3}e^{-\Gamma_{Z}(\hat{t}-t_{\rm osc})}$ $\displaystyle\simeq\;\frac{1}{2}m_{Z}Z_{0}^{2}\left(\Gamma_{\phi}t\right)^{-3/2}\left(\frac{\Gamma_{\phi}}{m_{Z}}\right)^{2}\left(\frac{m_{Z}}{2p}\right)^{3}\exp\left[-\left(\Gamma_{\phi}t\right)^{3/4}\left(\frac{\Gamma_{Z}}{\Gamma_{\phi}}\right)\left(\frac{2p}{m_{Z}}\right)^{3/2}\right]\,,$ (3.36) and $H(\hat{t})\;\simeq\;\frac{2}{3}\Gamma_{\phi}\left(\Gamma_{\phi}t\right)^{-3/4}\left(\frac{m_{Z}}{2p}\right)^{3/2}\,.$ (3.37) Substituting into (3.5) and evaluating at $t_{\rm dec}=\Gamma_{Z}^{-1}$ we obtain the distribution at decoupling. Moreover, noting that in this case the redshift occurs in the absence of intermediate entropy production, we can finally write the form of the distribution at late times in the following simplified way, $f_{\chi}(p,t)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{p}\;\simeq\;\frac{16\pi^{2}{\rm Br}_{\chi}}{g_{\chi}}\left(\frac{Z_{0}}{m_{Z}}\right)^{2}\left(\frac{\Gamma_{\phi}}{m_{Z}}\right)^{2}\left(\frac{\Gamma_{Z}}{\Gamma_{\phi}}\right)^{3/2}\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star,b}^{3}\bar{f}_{{\rm M},b}(q)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{q}\,,$ (3.38) with $T_{\star,b}\;=\;\frac{m_{Z}}{2T_{\rm dec}}\left(\frac{g_{*s}^{0}}{g_{*s}^{\rm reh}}\right)^{1/3}T_{0}\,,$ (3.39) where $T_{\rm dec}=(45/(2\pi^{2}g_{*s}^{\rm dec}))^{1/4}(\Gamma_{Z}M_{P})^{1/2}$ denotes the background temperature at the moment of decay, and $\bar{f}_{{\rm M},b}(q)\;=\;\begin{cases}q^{-1}e^{-q^{2}}\,,&q>q_{Z}\\\\[10.0pt] \dfrac{3}{4}q_{Z}^{1/2}q^{-3/2}e^{-q_{Z}^{1/2}q^{3/2}}\,,&q<q_{Z}\end{cases}\,,\qquad q_{Z}\;\equiv\;\left(\frac{\Gamma_{Z}}{\Gamma_{\phi}}\right)^{1/2}\,.$ (3.40) This rescaled distribution is shown in Fig. 7 for $q_{Z}=1/4$. The analytical expression (3.40) is shown as the light green dot-dashed curve. It exhibits the different scaling with $q$ for $q>q_{Z}$ and $q<q_{Z}$, with a jump at $q=q_{Z}$. As mentioned above, this discontinuity is an artifact of our approximations, as demonstrated by the dark green dotted curve in the same figure: the fully numerical solution interpolates smoothly between the two regimes. Note that for $q_{Z}\sim 1$ the fitting function (2.27) fails to accurately describe the distribution. Nevertheless, for $q_{Z}\ll 1$, it accurately describes the DM phase space distribution for any $q\sim\mathcal{O}(1)$, with $\alpha=-1$, $\beta=1$ and $\gamma=2$.

Case c: Oscillation and decay during radiation domination ($\Gamma_{\phi}>m_{Z}>\Gamma_{Z}$)

For the last case we assume that the beginning of the oscillations of $Z$ is delayed beyond the end of reheating, due to a rapidly decaying inflaton, a relatively light $Z$, or a combination of both.
The absence of a matter-radiation crossover during the oscillations, and of an intermediate entropy production regime, makes this analysis straightforward. The solution of (3.6) is simply given by $\hat{t}=t(2p/m_{Z})^{2}$, and from it we obtain the following expressions for the number density in $Z$, $n_{Z}(\hat{t})\;\simeq\;\frac{1}{2}m_{Z}Z_{0}^{2}(m_{Z}t)^{-3/2}\left(\frac{m_{Z}}{2p}\right)^{3}e^{-(\Gamma_{Z}t)(2p/m_{Z})^{2}}\,,$ (3.41) and the Hubble parameter, $H(\hat{t})\;=\;\frac{1}{2t}\left(\frac{m_{Z}}{2p}\right)^{2}\,.$ (3.42) Substitution into (3.5) and (A.6) yields $f_{\chi}(p,t)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{p}\;\simeq\;\frac{16\pi^{2}{\rm Br}_{\chi}}{g_{\chi}}\left(\frac{Z_{0}}{m_{Z}}\right)^{2}\left(\frac{\Gamma_{Z}}{m_{Z}}\right)^{3/2}\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star,c}^{3}\bar{f}_{{\rm M},c}(q)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{q}\,,$ (3.43) with $T_{\star,c}=T_{\star,b}$ and $\bar{f}_{{\rm M},c}(q)\;=\;q^{-1}e^{-q^{2}}\,.$ (3.44) The resulting distribution is trivially of the form (2.27), and is shown in Fig. 7 as the red, dashed curve.

#### 3.2.2 Power spectrum and Ly-$\alpha$ constraints

The analytical determination of the phase space distributions in all cases allows us to map the WDM Ly-$\alpha$ constraints to the production of DM from moduli decay. The main hurdle consists in the evaluation of the second moment of the distribution in the case when the oscillation and the decay of $Z$ occur in different epochs, $\langle q^{2}\rangle\;=\;\begin{cases}\Gamma(7/3)\,,&m_{Z}>\Gamma_{Z}>\Gamma_{\phi}\\\\[10.0pt] e^{-q_{Z}^{2}}(1+q_{Z}^{2})-q_{Z}^{4}E_{-4/3}(q_{Z}^{2})+\dfrac{\Gamma(7/3)}{q_{Z}^{2/3}}\,,&m_{Z}>\Gamma_{\phi}>\Gamma_{Z}\,,\\\\[10.0pt] 1\,,&\Gamma_{\phi}>m_{Z}>\Gamma_{Z}\,.\end{cases}$ (3.45) Here $E_{n}(x)$ denotes the exponential integral function. Nevertheless, we find the following to be a good approximation, $m_{\rm DM}\;\gtrsim\;3.78\,{\rm keV}\,\left(\frac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{4/3}\left(\frac{g_{*s}^{0}}{g_{*s}^{\rm reh}}\right)^{1/3}\frac{m_{Z}}{T_{\rm dec}}\times\begin{cases}\sqrt{\Gamma(7/3)}q_{Z}^{-4/3}\,,&\Gamma_{Z}>\Gamma_{\phi}\,,\\\\[10.0pt] 1\,,&\Gamma_{\phi}>\Gamma_{Z}\,.\end{cases}$ (3.46) As expected, the limit on the DM mass is weakened if $Z$ decays during reheating, relative to $Z$ decay during radiation domination. In this case, DM is cooled down in two stages: through the redshift from $t_{\rm dec}$ to $t_{\rm reh}$, and through the subsequent redshift from the end of reheating to the present epoch. Fig. 8 shows the transfer function for stabilized modulus decay compared to WDM with $m_{\rm WDM}=1$ and $3\,{\rm keV}$, for the three cases discussed in this section. The overlap between NCDM and WDM is good at all shown scales, although a slight shift can be observed for $m_{\rm WDM}=1\,{\rm keV}$. As Fig. 3 shows, the relative difference is in all cases $\lesssim 1\%$. The DM masses are taken from (3.46), where the modulus mass and decay temperature are in turn chosen to be $m_{Z}\simeq 3\times 10^{6}\,{\rm GeV}$ and $T_{\rm dec}\simeq 1\,{\rm GeV}$ for case (a), $m_{Z}\simeq 5\times 10^{7}\,{\rm GeV}$ and $T_{\rm dec}\simeq 800\,{\rm GeV}$ for case (b), and $m_{Z}\simeq 3\times 10^{8}\,{\rm GeV}$ and $T_{\rm dec}\simeq 10^{5}\,{\rm GeV}$ for case (c). These values are motivated by our discussion of the phenomenology of a strongly stabilized Polonyi-like modulus in Section 3.2.3. For these choices of the $Z$ mass and decay temperature both the Lyman-$\alpha$ bound and the closure fraction bound $\Omega_{\chi}h^{2}\simeq 0.1$ are saturated (see Fig. 9).

Figure 8: Linear transfer function for the scenario where DM is produced by modulus decay, for the cases (a), (b) and (c) described in Section 3.2.1. The transfer function for the WDM case is shown in gray and black dashed lines, with $m_{\text{WDM}}=1,\,3$ keV, respectively, for comparison. The numerical values chosen for $m_{\rm DM}$ are estimated from Eq. (3.46), with $m_{Z}$ and $T_{\rm dec}$ given by the values that saturate the Ly-$\alpha$ and abundance constraints for the strongly stabilized Polonyi scenario discussed in Section 3.2.3, shown as stars in Fig. 9 (see text for details).
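Only the middle entry of (3.45) is nontrivial, and it can be checked numerically against the piecewise distribution (3.40). A minimal sketch, rewriting $\Gamma(7/3)/q_{Z}^{2/3}-q_{Z}^{4}E_{-4/3}(q_{Z}^{2})$ as a lower incomplete gamma function (available in SciPy in regularized form):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

def f_Mb(q, qz):
    # Rescaled distribution (3.40), modulus decay case (b)
    if q > qz:
        return np.exp(-q**2) / q
    return 0.75 * np.sqrt(qz) * q**-1.5 * np.exp(-np.sqrt(qz) * q**1.5)

def q2_numeric(qz):
    # <q^2> = int q^4 f dq / int q^2 f dq, splitting at the discontinuity
    mom = lambda n: (quad(lambda q: q**n * f_Mb(q, qz), 0, qz)[0]
                     + quad(lambda q: q**n * f_Mb(q, qz), qz, np.inf)[0])
    return mom(4) / mom(2)

def q2_analytic(qz):
    # Closed form, middle case of Eq. (3.45)
    return (1 + qz**2) * np.exp(-qz**2) + qz**(-2/3) * gamma(7/3) * gammainc(7/3, qz**2)

print(q2_numeric(0.25), q2_analytic(0.25))   # should agree
```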
#### 3.2.3 Relic density and phenomenology

We now consider the possible phenomenological consequences of the Ly-$\alpha$ bound on $m_{\rm DM}$ found above. We first determine the DM relic abundance from stabilized moduli decays. Integration of Eqs. (3.30), (3.38) and (3.43) provides the following expression for the late-time DM number density, $n_{\chi}(t_{0})\;=\;4{\rm Br}_{\chi}\left(\frac{Z_{0}}{m_{Z}}\right)^{2}\times\begin{cases}\left(\dfrac{\Gamma_{Z}}{m_{Z}}\right)^{2}T_{\star,a}^{3}\,,&m_{Z}>\Gamma_{Z}>\Gamma_{\phi}\\\\[10.0pt] \left(\dfrac{\Gamma_{\phi}}{m_{Z}}\right)^{2}\left(\dfrac{\Gamma_{Z}}{\Gamma_{\phi}}\right)^{3/2}T_{\star,b}^{3}\,,&m_{Z}>\Gamma_{\phi}>\Gamma_{Z}\,,\\\\[10.0pt] \left(\dfrac{\Gamma_{Z}}{m_{Z}}\right)^{3/2}T_{\star,c}^{3}\,,&\Gamma_{\phi}>m_{Z}>\Gamma_{Z}\,.\end{cases}$ (3.47) As mentioned above, the discontinuity in the phase space distribution of $\chi$ is not inherited by the number density, justifying our approximations. We emphasize that our results are valid only if the field $Z$ does not dominate the energy budget of the Universe at any time. For the first scenario, decay before the end of reheating, this is ensured for $Z_{0}\ll M_{P}$, since if $\rho_{\rm osc}<\rho_{\phi}(t_{\rm osc})$ then it will remain so until the decay of $Z$. For the other two cases we must ensure that the energy density in radiation, $\rho_{r}$, is always greater than $\rho_{Z}$. Since the oscillating modulus redshifts more slowly than the background radiation, it is sufficient to enforce this condition at $Z$-decay. With $\rho_{Z}(t_{\rm dec})\simeq\frac{1}{2}m_{Z}^{2}Z_{0}^{2}(a_{\rm osc}/a_{\rm dec})^{3}$ and $\rho_{r}(t_{\rm dec})\simeq\Gamma_{\phi}^{2}M_{P}^{2}(a_{\rm reh}/a_{\rm dec})^{4}$, we can evaluate the scale factors explicitly to obtain $\frac{\rho_{Z}(t_{\rm dec})}{\rho_{r}(t_{\rm dec})}\;\simeq\frac{1}{2}\left(\frac{Z_{0}}{M_{P}}\right)^{2}\times\begin{cases}\left(\dfrac{\Gamma_{\phi}}{\Gamma_{Z}}\right)^{1/2}\,,&m_{Z}>\Gamma_{\phi}>\Gamma_{Z}\\\\[10.0pt] \left(\dfrac{m_{Z}}{\Gamma_{Z}}\right)^{1/2}\,,&\Gamma_{\phi}>m_{Z}>\Gamma_{Z}\,.\end{cases}$ (3.48) This ratio must be $<1$ if our present analysis is to be valid. Otherwise, a $Z$-dominated epoch occurs, and the bound (3.26) applies. Note that for a branching ratio ${\rm Br}_{\chi}=1$, the condition $\rho_{Z}\ll\rho_{r}$ is necessary to obtain the observed DM abundance. For example, saturating the Ly-$\alpha$ bound (3.46) one obtains $\Omega_{\chi}h^{2}\sim 270(\rho_{Z}/\rho_{r})$ for cases b and c.
We now consider, as a proof-of-concept example, a particular realization of modulus stabilization, corresponding to a strongly stabilized Polonyi field in $\mathcal{N}=1$ supergravity, stabilized by the non-minimal addition to the Kähler potential $\Delta K=-(Z\bar{Z})^{2}/\Lambda_{Z}^{2}$ [107, 108, 109, 110, 111, 118]. (The Polonyi field, if left unstabilized, is an example of a problematic modulus for BBN that can arise in $\mathcal{N}=1$ supergravity [125, 126, 127, 128, 129]. This field, responsible for the breaking of supersymmetry, communicates with the SM through Planck-suppressed interactions. It is also relatively light: its mass is of the order of the gravitino mass, which in turn is parametrically related to the scale at which supersymmetry is broken. Moreover, its initial misalignment is typically $\mathcal{O}(M_{P})$.) For our purposes it is sufficient to note the following values of the Polonyi modulus mass, misalignment, and decay rate, $m_{Z}\;=\;\sqrt{12}\,m_{3/2}\left(\frac{M_{P}}{\Lambda_{Z}}\right)\,,\qquad Z_{0}\;=\;\frac{\Lambda_{Z}^{2}}{\sqrt{6}M_{P}}\,,\qquad\Gamma_{Z}\;=\;\frac{3\sqrt{3}m_{3/2}^{3}M_{P}^{3}}{\pi\Lambda_{Z}^{5}}\,.$ (3.49) Here $m_{3/2}\gtrsim\mathcal{O}(10\,{\rm TeV})$ is the gravitino mass for this particular case of gravity-mediated supersymmetry breaking, and $\Lambda_{Z}\ll M_{P}$. Hence, the entropy production problem is averted by simultaneously increasing the $Z$ mass well above the electroweak scale, reducing the misalignment to deep sub-Planckian values, and enhancing the decay rate. The dominant decay channel of $Z$ is to two gravitinos, which subsequently decay into the lightest neutralino. Although in this example the decay chain implies that the (rescaled) DM distribution will not be exactly given by the $\bar{f}_{{\rm M},i}(q)$, the scaling of the Ly-$\alpha$ constraint is maintained up to $\mathcal{O}(1)$ corrections (see Section 4.2).

Figure 9: Allowed range for $\Lambda_{Z}$ as a function of $d_{\phi}^{-2/3}m_{3/2}/m_{\phi}$, where $\Gamma_{\phi}=d_{\phi}^{2}m_{\phi}^{3}/M_{P}^{2}$, for the stabilized modulus defined by (3.49). Shown are the regions excluded by $Z$-domination (entropy production) and by the Ly-$\alpha$ constraint, assuming ${\rm Br}_{\chi}=1$. The allowed parameter space is divided into the regions where $Z$ oscillates and decays after reheating (left), where it begins oscillations during reheating and decays after reheating (middle), and where it oscillates and decays during reheating (right). The orange curve corresponds to $\Omega_{\chi}h^{2}=0.1$ for $m_{\rm DM}=100\,{\rm GeV}$. Above it, DM is overproduced. The stars correspond to the points selected to construct the transfer functions shown in Fig. 8. Where necessary, the gravitino mass is chosen to be $m_{3/2}=10^{-13}M_{P}$. See [118] for further details.

Fig. 9 shows the allowed parameter space for $\Lambda_{Z}$ as a function of the quantity $d_{\phi}^{-2/3}m_{3/2}/m_{\phi}$, where $\Gamma_{\phi}=d_{\phi}^{2}m_{\phi}^{3}/M_{P}^{2}$. Here $d_{\phi}\lesssim\mathcal{O}(10^{-1})$ includes the inflaton-matter (or radiation) couplings and the phase space factors of the width. This parametrization is chosen to coincide with that of [118], and is inspired by the Planck-suppressed decays which are a generic feature of supersymmetric reheating (see e.g. [130, 119]).
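To give a feeling for the numbers involved, the sketch below evaluates (3.49) and the subdominance ratio (3.48) for an illustrative choice of $\Lambda_{Z}$ (the chosen inputs are examples only; whether case (b) or (c) of (3.48) applies depends on $\Gamma_{\phi}$, and case (c) is assumed here):

```python
import numpy as np

M_P = 2.435e18                    # reduced Planck mass in GeV
m32 = 1e-13 * M_P                 # gravitino mass, as in Fig. 9

def polonyi(Lambda_Z):
    # Mass, misalignment and decay rate of the stabilized Polonyi field, Eq. (3.49)
    m_Z = np.sqrt(12.0) * m32 * M_P / Lambda_Z
    Z0 = Lambda_Z**2 / (np.sqrt(6.0) * M_P)
    Gamma_Z = 3.0 * np.sqrt(3.0) * m32**3 * M_P**3 / (np.pi * Lambda_Z**5)
    return m_Z, Z0, Gamma_Z

m_Z, Z0, Gamma_Z = polonyi(1e15)  # illustrative Lambda_Z in GeV
# Eq. (3.48), case Gamma_phi > m_Z > Gamma_Z: no Z-domination if this is < 1
print("rho_Z/rho_r at decay ~", 0.5 * (Z0 / M_P)**2 * np.sqrt(m_Z / Gamma_Z))
```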
As can be seen in Fig. 9, when the stabilization scale is close to the Planck scale the modulus ceases to be strongly stabilized, and it dominates the energy budget of the Universe after inflation. This is averted for $\Lambda_{Z}\;\lesssim\;M_{P}\times\begin{cases}1.5\left(\dfrac{m_{3/2}^{3}}{M_{P}^{2}\Gamma_{\phi}}\right)^{1/13}\,,&m_{Z}>\Gamma_{\phi}>\Gamma_{Z}\,,\\\\[10.0pt] 1.4\left(\dfrac{m_{3/2}}{M_{P}}\right)^{1/6}\,,&\Gamma_{\phi}>m_{Z}>\Gamma_{Z}\,.\end{cases}$ (3.50) In this figure we have also shown the domain excluded by Ly-$\alpha$ observations. We observe that it extends the disallowed region (due to entropy production) by about an order of magnitude in $\Lambda_{Z}$. Its boundary, and the orange line for which the observed DM abundance is obtained for $m_{\rm DM}=100\,{\rm GeV}$, are determined through the following expression, $\displaystyle\Omega_{\chi}h^{2}\;\simeq\;0.1$ $\displaystyle\left(\frac{106.75}{g_{*s}^{\rm reh}}\right)^{1/4}\left(\frac{m_{\chi}}{100\,{\rm GeV}}\right)$ $\displaystyle\times\begin{cases}\left(\dfrac{\Lambda_{Z}}{6.2\times 10^{14}\,{\rm GeV}}\right)^{9/2}\left(\dfrac{10^{-13}M_{P}}{m_{3/2}}\right)^{-1/2}\,,&\Gamma_{\phi}>m_{Z}\,,\\\\[10.0pt] d_{\phi}\left(\dfrac{\Lambda_{Z}}{2.4\times 10^{15}\,{\rm GeV}}\right)^{5}\left(\dfrac{m_{\phi}}{3\times 10^{13}\,{\rm GeV}}\right)^{3/2}\left(\dfrac{10^{-13}M_{P}}{m_{3/2}}\right)\,,&m_{Z}>\Gamma_{\phi}\,.\end{cases}$ (3.51) In the parameter range shown in the figure, the Ly-$\alpha$ and DM abundance constraints are simultaneously saturated for $m_{\rm DM}\;\simeq\;3.5\,{\rm MeV}\,\left(\frac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{4/3}\times\begin{cases}\left(\dfrac{1.3\times 10^{-5}}{d_{\phi}}\right)^{3/13}\left(\dfrac{10^{-13}M_{P}}{m_{3/2}}\right)^{2/13}\,,&m_{Z}>\Gamma_{\phi}>\Gamma_{Z}\,,\\\\[10.0pt] 1\,,&\Gamma_{\phi}>m_{Z}>\Gamma_{Z}\,,\end{cases}$ (3.52) assuming ${\rm Br}_{\chi}=1$. For $d_{\phi}^{-2/3}m_{3/2}/m_{\phi}=\\{10^{-1},10^{-6},10^{-10}\\}$, this gives $m_{\rm DM}\simeq\\{3.99\,{\rm GeV},71.3\,{\rm MeV},3.70\,{\rm MeV}\\}$, c.f. Fig. 8. We finish by noting that for this particular stabilization scenario, the Ly-$\alpha$ constraint is irrelevant compared to the requirement that $\Omega_{\chi}h^{2}\simeq 0.1$ for electroweak-scale LSP masses. Nevertheless, the power spectrum bound may be relevant for alternative constructions in which the modulus mass and the DM mass are independent.

## 4 Freeze-in via decay

In the previous section we considered the production of DM from the decay of a spatially homogeneous condensate. We now extend our discussion to decays of particles with distributions populated above the zero-momentum mode. Specifically, we will determine the phase space distribution and the lower bound on the mass for DM produced from the decay of a thermalized relic, and from the decay of a non-thermalized inflaton decay product. As in all cases, we will assume that DM interactions are sufficiently suppressed to prevent it from reaching kinetic and/or chemical equilibrium. For this reason we dub this scenario freeze-in through decays [6].

### 4.1 Thermal decay

#### 4.1.1 DM phase space distribution

Let us first consider the decay of a population of particles in thermal equilibrium, which decays, totally or partially, into DM during radiation domination. For definiteness we will assume again that the unstable particle, denoted here by $A$, decays to DM, $\chi$, via a two-body channel, $A\rightarrow\chi+\psi$.
The integration of the corresponding collision term can be performed in complete analogy with the inflaton decay scenario (see Section 3.1.1). Noting in particular that, for a two-body decay, the unpolarized amplitude squared is determined solely by the masses of the initial and final state particles, we can write $\displaystyle\mathcal{C}[f_{\chi}(p,t)]\;$ $\displaystyle=\;\frac{|\mathcal{M}|^{2}_{A\rightarrow\chi\psi}}{2p_{0}}\int\frac{\mathop{}\\!\mathrm{d}^{3}{\boldsymbol{k}}}{(2\pi)^{3}2k_{0}}\frac{g_{\psi}\mathop{}\\!\mathrm{d}^{3}{\boldsymbol{p}}_{\psi}}{(2\pi)^{3}2p_{\psi}^{0}}(2\pi)^{4}\delta^{(4)}(k-p-p_{\psi})f_{A}(k_{0})$ $\displaystyle=\;\frac{{\rm Br}_{\chi}\Gamma_{A}m_{A}}{p_{0}\sqrt{p_{0}^{2}-m_{\text{DM}}^{2}}}\int_{k_{-}}^{k_{+}}\mathop{}\\!\mathrm{d}k_{0}\,f_{A}(k_{0})\,,$ (4.1) where $2m_{\text{DM}}^{2}k_{\pm}\;=\;p_{0}(m_{A}^{2}+m_{\text{DM}}^{2}-m_{\psi}^{2})\pm\sqrt{(p_{0}^{2}-m_{\text{DM}}^{2})(m_{A}^{4}+m_{\text{DM}}^{4}+m_{\psi}^{4}-2m_{\text{DM}}^{2}m_{\psi}^{2}-2m_{\text{DM}}^{2}m_{A}^{2}-2m_{\psi}^{2}m_{A}^{2})}\,.$ (4.2) Note that up to this point no assumptions have been made regarding the form of $f_{A}$. For our exploration of the decay of a thermalized relic $A$ into DM, we can assume that $m_{A}\gg m_{\text{DM}},m_{\psi}$, and substitute a thermal Bose-Einstein (BE) or Fermi-Dirac (FD) form for $f_{A}$, $f_{A}(k_{0})\;=\;\frac{1}{e^{k_{0}/T}\pm 1}\,.$ (4.3) Substitution into (4.1) yields the following collision term, $\displaystyle\mathcal{C}[f_{\chi}(p,t)]\;$ $\displaystyle\simeq\;\frac{{\rm Br}_{\chi}\Gamma_{A}m_{A}}{p^{2}}\int_{p+\frac{m_{A}^{2}}{4p}}^{\infty}\frac{\mathop{}\\!\mathrm{d}k_{0}}{e^{k_{0}/T}\pm 1}$ $\displaystyle=\;(\pm)\frac{{\rm Br}_{\chi}\Gamma_{A}m_{A}T}{p^{2}}\ln\left[1\pm\exp\left(-\frac{p}{T}-\frac{m_{A}^{2}}{4pT}\right)\right]\,.$ (4.4) Disregarding the inverse decay process, and recalling the relation between time and temperature during radiation domination, $H\;=\;\left(\frac{\pi^{2}g_{*\rho}(T)}{90}\right)^{1/2}\frac{T^{2}}{M_{P}}\;\simeq\;\frac{1}{2t}\,,$ (4.5) the solution of the transport equation (3.1) is a straightforward application of the freeze-in solution (A.4). After some algebraic manipulation, the DM phase space distribution can be cast in the following form [131, 132] $\displaystyle f_{\chi}\left(p,T\right)\,=\,$ $\displaystyle(\pm){\rm Br}_{\chi}\frac{\Gamma_{A}T^{2}M_{P}}{p^{2}m_{A}^{2}}\left(\dfrac{90}{\pi^{2}}\right)^{1/2}g_{*s}^{2/3}(T)\int_{0}^{m_{A}/T}\mathop{}\\!\mathrm{d}x\,x^{2}\,g_{*s}^{-2/3}(m_{A}/x)\,g_{*\rho}^{-1/2}(m_{A}/x)$ $\displaystyle\times\left(1-\dfrac{1}{3}\dfrac{\mathop{}\\!\mathrm{d}\log g_{*s}}{\mathop{}\\!\mathrm{d}\log x}\right)\ln\left[1\pm\exp\left(-\dfrac{p}{T}\left(\dfrac{g_{*s}(m_{A}/x)}{g_{*s}(T)}\right)^{\frac{1}{3}}-\frac{x^{2}T}{4p}\left(\dfrac{g_{*s}(T)}{g_{*s}(m_{A}/x)}\right)^{\frac{1}{3}}\right)\right]\,.$ (4.6) This expression is valid down to the decoupling temperature $T_{\rm dec}\sim m_{A}$, below which DM production from the thermal bath is negligible.

Figure 10: The rescaled distribution function $\bar{f}_{{\rm TD}}$, defined in (4.7), as a function of the rescaled momentum $q=p/T$, assuming $T\ll m_{A}\ll T_{\rm reh}$. Solid: the numerically computed phase space distributions for a fermionic (blue) or bosonic (red) decaying thermalized particle. Dashed: the phenomenological fits (4.8).

A closed form of $f_{\chi}$ for either bosonic or fermionic $A$ is not available, and (4.6) must be integrated numerically.
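In the limit of constant $g_{*}$, the prefactors of (4.6) and (4.7) cancel against each other, leaving $\bar{f}_{\rm TD}(q)=(\pm)\,q^{-2}\int_{0}^{\infty}\mathrm{d}x\,x^{2}\ln\left[1\pm e^{-q-x^{2}/4q}\right]$, which is a one-line numerical integral. A minimal sketch follows; in the Maxwell-Boltzmann limit the prefactor of $q^{-1/2}e^{-q}$ is exactly $2\sqrt{\pi}\simeq 3.54$, bracketed by the fitted factors in (4.8).

```python
import numpy as np
from scipy.integrate import quad

def fbar_TD(q, eta):
    # Eq. (4.6) with constant g_*: q^-2 Int dx x^2 (+-) ln(1 +- e^{-q - x^2/4q});
    # eta = +1 for a Fermi-Dirac, -1 for a Bose-Einstein decaying particle
    integrand = lambda x: x**2 * eta * np.log1p(eta * np.exp(-q - x**2 / (4.0 * q)))
    return quad(integrand, 0.0, np.inf)[0] / q**2

# Ratio to the q^{-1/2} e^{-q} shape: compare with the fitted global
# factors 3.38 (FD) and 3.77 (BE) of Eq. (4.8)
for q in (0.3, 1.0, 3.0):
    shape = q**-0.5 * np.exp(-q)
    print(q, fbar_TD(q, +1.0) / shape, fbar_TD(q, -1.0) / shape)
```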
These distributions are presented in Fig. 10 in the limit $T\ll m_{A}\ll T_{\rm reh}$, neglecting the temperature evolution of the effective numbers of degrees of freedom during production, in terms of the rescaled distribution $\bar{f}_{{\rm TD}}(q)\;\equiv\;\sqrt{\frac{g_{*s}^{\rm dec}}{90}}\frac{\pi m_{A}^{2}}{{\rm Br}_{\chi}\Gamma_{A}M_{P}}f_{\chi}(q)\,.$ (4.7) Here $q=p/T$, noting that (4.5) can be extended up to recombination, where $g_{*s}\simeq g_{*s}^{0}$. The continuous red (blue) curve corresponds to a decaying fermion (boson) $A$. It is worth noting that the difference between the two curves is relatively small, which suggests that a phenomenological Maxwell-Boltzmann-like fit could describe these distributions. Indeed, Fig. 10 also shows two dashed curves which correspond to the following fitting functions, $\bar{f}_{{\rm TD}}(q)\;\simeq\;q^{-1/2}e^{-q}\times\begin{cases}3.38\,,&\text{FD}\,,\\\ 3.77\,,&\text{BE}\,.\end{cases}$ (4.8) Save for the fitting factors, the functional form of this expression may trivially be obtained from (4.6) in the Maxwell-Boltzmann limit, for which $\ln\left[1\pm\exp\left(-\frac{p}{T}-\frac{x^{2}T}{4p}\right)\right]\rightarrow\pm\exp\left(-\frac{p}{T}-\frac{x^{2}T}{4p}\right)$ [133, 134, 72]. Worth noting is the mapping of the exponential tail from the thermalized progenitor $A$ to the daughter particles. Nevertheless, the low-momentum behavior is different, manifesting the lack of thermal equilibrium in the $\chi$ sector. This distribution is of the form (2.27), with $\alpha=-1/2$, $\beta=1$ and $\gamma=1$.

#### 4.1.2 Power spectrum and Ly-$\alpha$ constraints

The fact that the phase space distribution of $\chi$ is quasi-thermal suggests that the power spectrum should match that of WDM. Fig. 11 attests to the reliability of this matching. The leftmost set of curves shows the transfer functions for the thermal decay cases with BE or FD initial states, with masses determined by Eq. (2.25), which in this case corresponds to the following rescaled bound, $m_{\text{DM}}\,\gtrsim\,\,\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{106.75}{g_{*s}(m_{A})}\right)^{1/3}\times\begin{cases}7.51\ \text{keV}\,,\quad&{\rm FD}\,,\\\ 7.32\ \text{keV}\,,\quad&{\rm BE}\,,\\\ 7.43\ \text{keV}\,,\quad&{\rm Fit}\,.\end{cases}$ (4.9)

Figure 11: Linear transfer function for the scenario where DM is produced by the decay of a thermalized particle (denoted by $A$ in the main text), assuming a Fermi-Dirac (FD), Bose-Einstein (BE) or fitted phase space distribution as described in Section 4.1, taking the mass estimated in Eq. (4.9). The transfer function for the WDM case is shown for comparison as a black dashed line. Also depicted here are the transfer functions for the FD, BE and fitted phase space distributions (4.8) with identical masses $m_{\text{DM}}=3\ \text{keV}$.

Here ‘fit’ stands for both the FD and BE approximations (4.8), which differ only by a $q$-independent numerical factor. The overlap of these transfer functions with the WDM result is evident in the whole range of scales shown in the figure, the relative deviation being $\simeq 1\%$ at $k_{1/2}^{\rm WDM}$ (see Fig. 3). In Fig. 11 we also show the form of $\mathcal{T}(k)$ if we consider a smaller DM mass and ignore the difference in statistics. In this case, all three curves shift to the left, as expected, but the difference between them remains small.
As mentioned earlier, this reflects the relatively mild dependence of $f_{\chi}$ on the spin of the decaying particle $A$. #### 4.1.3 Relic density and phenomenology In addition to the power spectrum constraint on the mass discussed above, one must address the constraint from the DM abundance, which fixes the normalization of the $\chi$ distribution function. Integration of $f_{\chi}$ gives the following expression for the DM number density at late times, $T\ll T_{\rm reh}$, $n_{\chi}(T)\;\simeq\;\sqrt{90}\frac{g_{\chi}{\rm Br}_{\chi}\Gamma_{A}M_{P}T^{3}}{2\pi^{3}m_{A}^{2}}g_{*s}(T)\left(\dfrac{1}{g_{*s}^{\rm dec}}\right)^{3/2}\times\begin{cases}4.58\,,\quad&{\rm FD}\,,\\\ 4.89\,,\quad&{\rm BE}\,.\end{cases}$ (4.10) Correspondingly, $\Omega_{\chi}h^{2}\;\simeq\;0.12\,{\rm Br}_{\chi}\,\left(\dfrac{g_{\chi}}{2}\right)\left(\frac{106.75}{g_{*s}^{\rm dec}}\right)^{3/2}\left(\frac{m_{\rm DM}}{6\,{\rm keV}}\right)\left(\frac{\Gamma_{A}}{10^{-14}\,{\rm GeV}}\right)\left(\frac{1\,{\rm TeV}}{m_{A}}\right)^{2}\times\begin{cases}1.17\,,\quad&{\rm FD}\,,\\\ 1.02\,,\quad&{\rm BE}\,.\end{cases}$ (4.11) Except for the number of degrees of freedom, which we consistently normalize to the SM value, the normalizations chosen in the previous equation are inspired by the decay of thermalized supersymmetric particles into light DM candidates, such as the Higgsino $\rightarrow$ axino + Higgs production process in $R$-parity violating DFSZ models [135, 136], for which $\Gamma(\tilde{H}\rightarrow\tilde{a}+H)\;=\;\frac{1}{8\pi}\left(\frac{\mu}{f_{a}}\right)^{2}\mu\,,$ (4.12) with the $\mu$-term parameter $\mu\sim 500\,{\rm GeV}$, and the Peccei-Quinn scale $f_{a}\sim 10^{10}\,{\rm GeV}$. Similarly to the inflaton decay case, a mass-independent constraint on the branching ratio to DM from the decay of the thermalized $A$ could be derived. Nevertheless, this bound would not be universal, as the mass and width of $A$ are model dependent, as opposed to the inflaton decay case (see Eq. (3.22)). ### 4.2 Non-thermal decay #### 4.2.1 DM phase space distribution Let us now assume that the particle $A$ whose decay produces the DM interacts very weakly with the SM and was produced via inflaton decay, but does not reach thermal equilibrium. Unlike in the previously studied thermal case, this particle cannot be assumed to be produced abundantly in, and kept in equilibrium by, the thermal plasma. Therefore, in principle, the imprint that inflaton decay leaves on its phase space distribution must be taken into account. Disregarding the effect of Bose enhancement/Pauli blocking, and the inverse decay process, the Boltzmann equation satisfied by this non-thermal unstable relic is given by [137] $\frac{\partial f_{A}}{\partial t}-Hp\frac{\partial f_{A}}{\partial p}\;=\;-\frac{m_{A}\Gamma_{A}}{\sqrt{m_{A}^{2}+p^{2}}}f_{A}\,.$ (4.13) This equation can be exactly solved in the relativistic and non-relativistic regimes. In both cases the decay of $A$ proceeds exponentially in time. For this reason we will be content to approximate the evolution of $f_{A}$ as that of a free-streaming particle until its sudden decay, which occurs at $t_{\rm dec}\;\simeq\;\begin{cases}\Gamma_{A}^{-1}\,,&\dfrac{\Gamma_{A}}{H_{A}}\ll 1\,,\\\\[10.0pt] \left(\dfrac{m_{\phi}\langle q_{A}\rangle}{2m_{A}\Gamma_{A}\Gamma_{\phi}^{1/2}}\right)^{2/3}\,,&\dfrac{\Gamma_{A}}{H_{A}}\gg 1\,.\end{cases}$ (4.14) Here $H_{A}$ denotes the Hubble parameter at the time when $A$ becomes non-relativistic.
We have estimated the effective lifetime as the inverse of the mean of the $f_{A}$ prefactor on the right-hand side of (4.13) [138]. With the previous arguments in mind, for $t<t_{\rm dec}$ we write the collision term for $\chi$ (4.1) as $\displaystyle\mathcal{C}[f_{\chi}(p,t)]\;=\;\frac{4\pi^{4}g_{*s}^{\rm reh}{\rm Br}_{\chi}{\rm Br}_{A}\Gamma_{A}m_{A}}{5g_{A}p^{2}}$ $\displaystyle\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{4}\left(\frac{m_{\phi}}{2}\right)\left(\frac{a_{\rm reh}}{a(t)}\right)$ $\displaystyle\times\int_{\left|\frac{2p}{m_{\phi}}\frac{a(t)}{a_{\rm reh}}-\frac{m_{A}^{2}}{2pm_{\phi}}\frac{a(t)}{a_{\rm reh}}\right|}^{\infty}\frac{z\,\mathop{}\\!\mathrm{d}z}{\sqrt{z^{2}+\left(\frac{2m_{A}a(t)}{m_{\phi}a_{\rm reh}}\right)^{2}}}\,\bar{f}_{\rm R}\left(z\right)\,,$ (4.15) where the distribution for inflaton decay products $\bar{f}_{\text{R}}$, given in terms of the 3D momentum magnitude, was defined in (3.16). In this expression ${\rm Br}_{A}$ stands for the branching ratio of the decay from inflaton to $A$. Substitution into the general freeze-in solution (A.4) gives $\displaystyle f_{\chi}(p,t_{\rm dec})\;=\;\;$ $\displaystyle\frac{8\pi^{4}g_{*s}^{\rm reh}{\rm Br}_{\chi}{\rm Br}_{A}\Gamma_{A}m_{A}}{5g_{A}m_{\phi}}\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{4}q_{\rm dec}^{-2}$ $\displaystyle\times\int_{t_{\rm reh}}^{t_{\rm dec}}\mathop{}\\!\mathrm{d}t^{\prime}\,\frac{a(t^{\prime})}{a_{\rm reh}}\int_{\left|q_{\rm dec}-\frac{1}{q_{\rm dec}}\left(\frac{m_{A}}{m_{\phi}}\frac{a(t^{\prime})}{a_{\rm reh}}\right)^{2}\right|}^{\infty}\frac{z\,\mathop{}\\!\mathrm{d}z}{\sqrt{z^{2}+\left(\frac{2m_{A}a(t^{\prime})}{m_{\phi}a_{\rm reh}}\right)^{2}}}\,\bar{f}_{\rm R}(z)\,,$ (4.16) where $q_{\rm dec}=(2p/m_{\phi})(a_{\rm dec}/a_{\rm reh})$. The ratio $\displaystyle\frac{m_{A}a(t)}{m_{\phi}a_{\rm reh}}\;\propto\;\frac{m_{A}}{\langle p\rangle}$ (4.17) quantifies how relativistic the distribution of $A$ is at a given moment of time. In particular, we define $\mathcal{R}\;\equiv\;\frac{m_{A}a_{\rm dec}}{m_{\phi}a_{\rm reh}}\;=\;\left(\frac{g_{*s}^{\rm reh}}{g_{*s}^{\rm dec}}\right)^{1/3}\frac{m_{A}T_{\rm reh}}{m_{\phi}T_{\rm dec}}\;=\;\begin{cases}\left(\dfrac{2H_{A}}{\Gamma_{A}}\right)^{1/2}\gg 1\quad&\text{for}\quad\dfrac{\Gamma_{A}}{H_{A}}\ll 1\,,\\\\[10.0pt] \left(\langle q_{A}\rangle\dfrac{3H_{A}}{2\Gamma_{A}}\right)^{1/3}\ll 1\quad&\text{for}\quad\dfrac{\Gamma_{A}}{H_{A}}\gg 1\,.\end{cases}$ (4.18) Extending the solution past $t_{\rm dec}$ we can write $\displaystyle f_{\chi}(p,t)\,\mathop{}\\!\mathrm{d}^{3}\boldsymbol{p}\;=\;\frac{24\pi^{3}\sqrt{10g_{*s}^{\rm reh}}{\rm Br}_{\chi}{\rm Br}_{A}\Gamma_{A}M_{P}}{5g_{A}m_{A}^{2}}\left(\frac{T_{\rm reh}}{m_{\phi}}\right)^{2}\mathcal{F}(q,\mathcal{R})\left(\frac{a_{0}}{a(t)}\right)^{3}T_{\star}^{3}\mathop{}\\!\mathrm{d}^{3}\boldsymbol{q}\,,$ (4.19) where $\mathcal{F}(q,\mathcal{R})\;=\;q^{-2}\int_{0}^{\mathcal{R}}\mathop{}\\!\mathrm{d}y\,y^{2}\int_{\left|q-\frac{y^{2}}{q}\right|}^{\infty}\frac{z\,\mathop{}\\!\mathrm{d}z}{\sqrt{z^{2}+4y^{2}}}\,\bar{f}_{\rm R}(z)\;\simeq\;\begin{cases}\bar{f}_{\rm D,NR}(q)\,,&\mathcal{R}\gg 1\,,\\\\[10.0pt] \dfrac{\mathcal{R}^{3}}{3}\bar{f}_{\rm D,R}(q)\,,&\mathcal{R}\ll 1\,.\end{cases}$ (4.20) Here $q$ and $T_{\star}$ are the same as in (3.16) and (3.17). The rescaled distributions $\bar{f}_{\rm D,NR}$ and $\bar{f}_{\rm D,R}$ can be computed by making use of the fit approximation (3.18) for $\bar{f}_{\rm R}$.
We obtain $\displaystyle\bar{f}_{\rm D,NR}(q)\;$ $\displaystyle\simeq\;0.36\,q^{-1}\left[0.43\,q\,\Gamma\left(\frac{1}{4},0.19\,q^{2}\right)-\Gamma\left(\frac{3}{4},0.19\,q^{2}\right)+2\,\Gamma\left(\frac{3}{4}\right)\right]\theta\big{(}\mathcal{R}-q\big{)}\,,$ (4.21) $\displaystyle\bar{f}_{\rm D,R}(q)\;$ $\displaystyle=\;q^{-2}\int_{q}^{\infty}\mathop{}\\!\mathrm{d}z\,\bar{f}_{\rm R}(z)\;\simeq\;1.06\,q^{-2}\,\Gamma\left(-\frac{1}{4},0.74\,q^{2}\right)\,,$ (4.22) where $\Gamma(a,x)$ denotes the upper incomplete gamma function. Figure 12: The rescaled distribution function $\bar{f}_{{\rm D,NR}}$, defined in (4.21), as a function of the rescaled momentum $q$ and the order parameter $\mathcal{R}=(m_{A}/m_{\phi})(\Gamma_{\phi}/\Gamma_{A})^{1/2}$. Solid, black: numerically computed phase space distribution. Dashed, orange: the fit approximation (4.21). The DM phase space distribution corresponding to the decay of a non-relativistic particle $A$ is shown in Fig. 12 as a function of $q$ and $\mathcal{R}>1$. The solid black line shows the result of the numerical integration of (4.20). The distribution grows with an almost linear universal envelope, independent of the decay rate of $A$, until $q\sim\mathcal{R}$, at which point the distribution sharply decreases. This non-universality of the cutoff prevents us from constructing a reasonable fit approximation of the form (2.27) for generic values of $\mathcal{R}$. In the same figure, the orange dashed lines show the analytical approximation (4.21), which as can be seen is equivalent to imposing a hard cutoff at $q=\mathcal{R}$ on the universal envelope. (The numerical distribution can be well fitted by substituting a logistic function for the $\theta$ function in Eq. (4.21).) Fig. 13 shows the numerically computed relativistic distribution $\bar{f}_{\rm D,R}$ as the solid black curve, and the analytical approximation given by (4.22) as the orange, dashed curve. In the same figure a ‘fit’ approximation of the form (2.27) is also shown. This approximation is obtained by mimicking the asymptotic behavior of the gamma function at large and small $q$, while preserving the normalization, and is given by $\displaystyle\bar{f}_{\rm ND}(q)\;$ $\displaystyle\approx\;2.19q^{-5/2}e^{-0.74q^{2}}\,.$ (4.23) It is worth noting that in this case the Gaussian tail is of the same form as that of the parent unstable particle. It is important to emphasize that this distribution is obtained in the limit $\mathcal{R}\rightarrow 0$, as we discuss below. Figure 13: The rescaled distribution function $\bar{f}_{{\rm D,R}}$, defined in (4.22), as a function of the rescaled momentum $q$. Solid, black: numerically computed phase space distribution. Dashed, orange: the analytical approximation (4.22). Dashed-dotted, blue: the fit approximation (4.23). Fig. 14 shows the form of the function $\mathcal{F}(q,\mathcal{R})$, defined in Eq. (4.20), for several values of $\mathcal{R}$, ranging from $10^{-2}$ to 10. Here we can appreciate the transition between the relativistic and non-relativistic decay cases. In all cases the phase space distribution peaks at $q\simeq\mathcal{R}$, with a positive skew for a relativistic $A$, and a negative skew for a non-relativistic $A$. For $\mathcal{R}<1$ the analytical approximation (4.22) describes the exact distribution well for $q\gtrsim\mathcal{R}$. For $\mathcal{R}>1$, the non-relativistic approximation (4.21) is in turn a good fit to the exact distribution for $q\gtrsim 1/2$.
Figure 14: The function $\mathcal{F}(q,\mathcal{R})$, defined in (4.20), as a function of the rescaled momentum $q$ and the decay parameter $\mathcal{R}$. Solid: numerically computed distributions. Dashed: the non-relativistic analytical approximation (4.21) for $\mathcal{R}\gg 1$. Dashed-dotted: the relativistic analytical approximation (4.22). #### 4.2.2 Power spectrum and Ly-$\alpha$ constraints For the distributions that we have derived, we can make use of (2.25) to determine the rescaling of the bound on the DM mass. For the case of a relativistic (R) decay we find that $\displaystyle m_{\text{DM}}\,\gtrsim\,\,$ $\displaystyle\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{106.75}{g_{*s}^{\rm reh}}\right)^{1/3}$ $\displaystyle\times\left(\dfrac{m_{\phi}}{3\times 10^{13}\ \text{GeV}}\right)\left(\dfrac{10^{10}\ \text{GeV}}{T_{\text{reh}}}\right)\times\begin{cases}1.23\ \text{MeV}\,,\ &{\rm Numerical}\ {\rm(R)}\,,\\\ 1.26\ \text{MeV}\,,\ &{\rm Analytical}\ {\rm(R)}\,,\\\ 2.19\ \text{MeV}\,,\ &{\rm Fit}\ {\rm(R)}\,,\\\ \end{cases}$ (4.24) while for the non-relativistic (NR) case, $\displaystyle m_{\text{DM}}\,\gtrsim\,\,$ $\displaystyle\left(\dfrac{m_{\text{WDM}}}{3\ \text{keV}}\right)^{4/3}\left(\dfrac{106.75}{g_{*s}^{\rm reh}}\right)^{1/3}$ $\displaystyle\times\left(\dfrac{m_{\phi}}{3\times 10^{13}\ \text{GeV}}\right)\left(\dfrac{10^{10}\ \text{GeV}}{T_{\text{reh}}}\right)\left(\dfrac{\mathcal{R}}{6}\right)\times\begin{cases}16.1\ \text{MeV}\,,\ &{\rm Analytical}\ {\rm(NR)}\,,\\\ 16.2\ \text{MeV}\,,\ &{\rm Numerical}\ {\rm(NR)}\,.\end{cases}$ (4.25) The $m_{\text{DM}}\propto\mathcal{R}$ behavior is only correct for large $\mathcal{R}\gg 1$ but remains a reasonable approximation for $\mathcal{R}\sim\mathcal{O}(1-10)$. We note here that the lower bound on the NCDM mass can be many orders of magnitude larger than the corresponding WDM bound, and it increases as the reheating temperature is decreased. This is expected, as in this case the parent particle is produced from inflaton decay (see Sec. 3.1.2). The difference between the numerical and analytical results is minimal, consistent with the agreement between both curves in Fig. 12 and Fig. 13. However, the fit approximation for the relativistic case provides a relatively poor approximation to the bound, overestimating it by a factor of $\sim 1.8$. Figure 15: Linear transfer function for DM produced by the decay of a non-thermalized non-relativistic particle. We show here the numerical results for two sets of cosmological parameters: $T_{\rm reh}=10^{12}\,{\rm GeV}$ and $m_{\rm WDM}=1\,{\rm keV}$, and $T_{\rm reh}=10^{10}\,{\rm GeV}$ and $m_{\rm WDM}=3\,{\rm keV}$, making use of the rescaled bound (4.25). For comparison we also show the transfer function for the corresponding WDM cases. Figure 16: Linear transfer function for DM produced by the decay of a relativistic non-thermalized particle.
We show here the numerical, analytical and fit approximations discussed in the text, for two sets of cosmological parameters: $T_{\rm reh}=10^{12}\,{\rm GeV}$ and $m_{\rm WDM}=1\,{\rm keV}$, and $T_{\rm reh}=10^{10}\,{\rm GeV}$ and $m_{\rm WDM}=3\,{\rm keV}$, making use of the rescaled bound (4.24). For comparison we also show the transfer function for the corresponding WDM cases. Figs. 15 and 16 show the results of the numerical evaluation of the transfer functions with CLASS [44, 45], and their comparison with the WDM case. (For the relativistic case, the small disagreement between the numerical transfer function with values from Eq. (4.24) and the corresponding WDM spectrum is also attributed to the sharp drop of the phase space distribution for $q<\mathcal{R}\ll 1$, akin to a low-momentum cutoff; such a cutoff results in a loss of numerical precision if reasonable computation times are required.) For the two sets of curves shown in each figure, we use the rescaled Ly-$\alpha$ bound (4.24) or (4.25). For the leftmost set we take $T_{\rm reh}=10^{12}\,{\rm GeV}$ and $m_{\rm WDM}=1\,{\rm keV}$, while for the rightmost set we consider $T_{\rm reh}=10^{10}\,{\rm GeV}$ and $m_{\rm WDM}=3\,{\rm keV}$. For the decay of a non-relativistic particle, a comparison is made among three different choices, $\mathcal{R}=2,6$ and $10$. Note the overlap between the three curves, with a relative difference of $\sim 1\%$ (see Fig. 3, where the relative difference is plotted as a function of $k$ for $\mathcal{R}=6$). For the decay of a relativistic particle, the agreement between the numerical and analytical results can be immediately appreciated, as well as the difference between these and the result of using the fit approximation (4.23) for the DM distribution. (The analytical expression of Eq. (4.21) is not represented in Fig. 15, as the sharp $\theta$-function cannot be handled properly by CLASS, which requires the distribution function to decrease smoothly at large $q$.) Even more evident, though, is the difference of the NCDM transfer functions with respect to the one for WDM, of around $10\%$ at $k_{1/2}^{\rm WDM}$, c.f. Fig. 3. For the relativistic case, the distribution $f_{\chi}$ has a very non-thermal shape, monotonically decreasing with $p$, resulting in a power spectrum that, although not too dissimilar from the WDM case, exhibits an appreciable difference from it in the figure. #### 4.2.3 Relic density and phenomenology The present relic abundance of DM is obtained from integration of (4.19). To do this we make use of the (numerical) result $\int_{0}^{\infty}\mathop{}\\!\mathrm{d}q\,q^{2}\bar{f}_{\rm D,NR}(q)\;\simeq\;0.4\,\mathcal{R}^{2}\,.$ (4.26) At $t\gg t_{\rm dec}$ the number density has the form $n_{\chi}(t)\;\simeq\;g_{*s}^{0}{\rm Br}_{\chi}{\rm Br}_{A}\left(\frac{g_{\chi}}{g_{A}}\right)\left(\frac{T_{\rm reh}}{m_{\phi}}\right)\left(\frac{a_{0}}{a(t)}\right)^{3}T_{0}^{3}\times\begin{cases}\left(\dfrac{g_{*s}^{\rm reh}}{g_{*s}^{\rm dec}}\right)^{1/6}\,,&\mathcal{R}\gg 1\,,\\\\[10.0pt] \left(\dfrac{g_{*s}^{\rm reh}}{g_{*s}^{\rm dec}}\right)^{1/4}\,,&\mathcal{R}\ll 1\,.\end{cases}$ (4.27) Note that both expressions agree up to a different power of the number of relativistic degrees of freedom. This agreement is to be expected, as the total comoving number of $A$ particles and their decay products must be conserved.
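As a cross-check of (4.26), one can compute the second moment of the fit distribution directly; the following minimal sketch (ours, in Python with SciPy) uses the hard $\theta$-function cutoff of (4.21), which slightly overshoots the exact numerical result:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

def Gamma_upper(a, x):
    # unregularized upper incomplete gamma function Gamma(a, x), valid for a > 0
    return gammaincc(a, x) * gamma(a)

def fbar_DNR(q):
    # universal envelope of Eq. (4.21); the theta(R - q) cutoff is applied
    # below through the upper integration limit
    return 0.36 / q * (0.43 * q * Gamma_upper(0.25, 0.19 * q**2)
                       - Gamma_upper(0.75, 0.19 * q**2) + 2.0 * gamma(0.75))

for R in [3.0, 6.0, 10.0]:
    m2, _ = quad(lambda q: q**2 * fbar_DNR(q), 0.0, R)
    print(f"R = {R:4.1f}:  int q^2 fbar dq = {m2:6.2f} = {m2 / R**2:.3f} R^2")
# With the hard cutoff the ratio approaches ~0.44 at large R, close to the
# 0.4 R^2 quoted in (4.26) for the exact (numerically integrated) distribution.
```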
Considering for definiteness the case of a relativistic decaying particle, we determine that the present abundance is given by $\displaystyle\Omega_{\chi}h^{2}\;\simeq\;0.1\,{\rm Br}_{\chi}$ $\displaystyle\left(\frac{{\rm Br}_{A}}{5.5\times 10^{-4}}\right)\left(\frac{g_{\chi}}{g_{A}}\right)\left(\dfrac{g_{*s}^{\rm reh}}{g_{*s}^{\rm dec}}\right)^{1/4}\left(\frac{m_{\rm DM}}{1\,{\rm MeV}}\right)\left(\frac{T_{\rm reh}}{10^{10}\,{\rm GeV}}\right)\left(\frac{3\times 10^{13}\,{\rm GeV}}{m_{\phi}}\right)\,.$ (4.28) As expected, $\Omega_{\chi}$ is independent of the properties of $A$, and corresponds simply to a re-scaling by degrees of freedom of the inflaton decay result (3.21). Given this result, a universal upper bound on ${\rm Br}_{\chi}$ can be obtained, in full analogy with the inflaton decay scenario. Let us discuss it in the context of a specific model. Consider the decay chain inflaton $\rightarrow$ gravitino $\rightarrow$ LSP (lightest supersymmetric particle), which is generically present in supersymmetric models of inflation with supersymmetry breaking mediated gravitationally [139, 140, 141, 142, 143]. (In typical gauge-mediation scenarios, the gravitino can be very light, $m_{3/2}\sim{\rm keV}$, and is produced through thermal freeze-out, thus being an example of WDM [144, 145, 146, 147, 39].) Assuming a minimal supersymmetric extension of the Standard Model (MSSM), the decay rate of the spin-3/2 gravitino is [148] $\Gamma_{3/2}\;=\;\frac{193}{384\pi}\frac{m_{3/2}^{3}}{M_{P}^{2}}\,.$ (4.29) Generically, ${\rm Br}_{\rm LSP}=\mathcal{O}(1)$. Substitution into (4.24), (4.25) and (4.28) leads to the following absolute constraints on the branching ratio of the decay of the inflaton into gravitinos, independent of the DM mass. For non-relativistic decaying particles, $T_{\rm reh}\gg 10^{5}\,{\rm GeV}(m_{3/2}/10\,{\rm TeV})^{1/2}$ and ${\rm Br}_{3/2}\;\lesssim\;1.3\times 10^{-8}\left(\dfrac{3\,{\rm keV}}{m_{\rm WDM}}\right)^{4/3}\left(\frac{m_{\phi}}{3\times 10^{13}\,{\rm GeV}}\right)\left(\frac{10^{10}\,{\rm GeV}}{T_{\rm reh}}\right)\left(\frac{m_{3/2}}{10\,{\rm TeV}}\right)^{1/2}\,,$ (4.30) while for relativistic decaying ones, $T_{\rm reh}\ll 10^{5}\,{\rm GeV}(m_{3/2}/10\,{\rm TeV})^{1/2}$ and ${\rm Br}_{3/2}\;\lesssim\;1.2\times 10^{-3}\left(\dfrac{3\,{\rm keV}}{m_{\rm WDM}}\right)^{4/3}\,.$ (4.31) In this (MSSM) scenario, the excluded DM masses span a phenomenologically interesting region in the parameter space of the model, as shown in Fig. 17. The exclusion region corresponds to $m_{\rm LSP}\;\lesssim\;\begin{cases}86\,{\rm GeV}\left(\dfrac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{4/3}\left(\dfrac{10\,{\rm TeV}}{m_{3/2}}\right)^{1/2}\,,&T_{\rm reh}\gg 10^{5}\,{\rm GeV}\left(\dfrac{m_{3/2}}{10\,{\rm TeV}}\right)^{1/2}\,,\\\\[10.0pt] 95\,{\rm GeV}\left(\dfrac{m_{\rm WDM}}{3\,{\rm keV}}\right)^{4/3}\left(\dfrac{10^{5}\,{\rm GeV}}{T_{\rm reh}}\right)\,,&T_{\rm reh}\ll 10^{5}\,{\rm GeV}\left(\dfrac{m_{3/2}}{10\,{\rm TeV}}\right)^{1/2}\,.\end{cases}$ (4.32) These bounds are of the order of the electroweak scale, and are comparable to collider and direct detection limits [149, 150]. Note that for a model-fixed LSP mass, the Lyman-$\alpha$ constraint puts a bound on the inflaton-matter couplings. For $m_{\rm LSP}\gtrsim 100\,{\rm GeV}$, reheating temperatures $T_{\rm reh}\lesssim 100\,{\rm TeV}$ are excluded.
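To make the relevant scales concrete, the following order-of-magnitude sketch (ours, in Python; it assumes sudden decay at $H\simeq\Gamma_{3/2}$, $g_{*}=10.75$ near the decay temperature, and the rough freeze-out estimate $T_{\rm fo}\approx m_{\rm LSP}/20$) evaluates (4.29) for the benchmark values used above:

```python
import numpy as np

M_P    = 2.435e18     # reduced Planck mass [GeV]
m32    = 1.0e4        # gravitino mass benchmark, 10 TeV [GeV]
m_lsp  = 100.0        # LSP mass benchmark [GeV]
g_star = 10.75        # relativistic d.o.f. near the decay temperature (assumption)

Gamma_32 = (193.0 / (384.0 * np.pi)) * m32**3 / M_P**2          # Eq. (4.29)

# Sudden-decay estimate: H(T_dec) = Gamma_32, with H(T) from Eq. (4.5)
T_dec = (90.0 / (np.pi**2 * g_star))**0.25 * np.sqrt(Gamma_32 * M_P)
T_fo  = m_lsp / 20.0                                            # rough freeze-out

print(f"Gamma_3/2 ~ {Gamma_32:.1e} GeV")
print(f"T_dec ~ {T_dec * 1e3:.2f} MeV  <<  T_fo ~ {T_fo:.0f} GeV")
```

For these benchmarks the decay temperature comes out in the sub-MeV range, far below the LSP freeze-out temperature, which anticipates the statement that follows.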
A straightforward computation shows that, independently of the mean momentum of the decaying gravitino, the decay occurs at temperatures at which the LSP can safely be assumed to be decoupled from the thermal plasma; the LSP therefore preserves its non-equilibrium phase space distribution. Figure 17: Ly-$\alpha$ constraint on the LSP mass, as a function of the reheating temperature, in the case of production through the decay chain inflaton $\rightarrow$ gravitino $\rightarrow$ LSP. For $T_{\rm reh}>10^{5}\,{\rm GeV}(m_{3/2}/10\,{\rm TeV})^{1/2}$ the decay of the gravitino occurs when it is non-relativistic. For $T_{\rm reh}<10^{5}\,{\rm GeV}(m_{3/2}/10\,{\rm TeV})^{1/2}$, the gravitino is relativistic at the moment of decay. ## 5 Ultraviolet freeze-in via scatterings In this section we consider the production of light DM from scatterings in the primordial plasma. We will restrict ourselves to $2\rightarrow 2$ processes, for which the integrated effective cross section is assumed to be of the form $\sigma(s)\;=\;\frac{s^{\frac{n}{2}}}{\Lambda^{n+2}}\,,$ (5.1) where $n$ is an integer and $s$ is the Mandelstam variable, related at high energies to the center-of-mass energy $E$ by $\sqrt{s}=E$. Although for non-negative $n$ this cross section naively violates unitarity at high energies, we assume that it merely corresponds to the low-energy effective description of a UV-complete model. The energy scale $\Lambda$ can be thought of as parametrically related to the mass of a heavy mediator. The suppression by $\Lambda$ guarantees that the primordial abundance is determined by forward processes (plasma $\rightarrow$ DM) rather than by annihilations. Therefore, Pauli-blocking/Bose-enhancement for $\chi$ can be safely disregarded, and in the absence of other interaction channels, $\chi$ never reaches thermal equilibrium with the plasma. Thus, freeze-in is realized [6, 151]. Assuming no post-reheating entropy production (that is, a standard thermal history), particle production is dominated by temperatures $T\geq T_{\rm reh}$ if $n>-1$. This is referred to as ultraviolet (UV) freeze-in [152, 153, 49]. Moreover, for $n>-1$, and for a sufficiently large reheating temperature, we can safely assume that both the parent scatterers and the produced DM are ultrarelativistic at the time of production, if the former are in thermal equilibrium. (This justifies disregarding any dependence on thresholds; for $n\leq-1$, the masses of the scatterers play an important role in determining the lower bound on $m_{\rm DM}$ [71, 72].) If the parent scatterers are not in equilibrium at production time, the condition that their masses are $\ll m_{\phi}$ suffices. Here we will consider both scenarios. In order to evaluate the necessary collision terms for thermal and non-thermal production, we need to make assumptions regarding the form of the scattering amplitude. Its dependence on the angles (or Mandelstam variables $s,t,u$) involved in the scattering varies between different microscopic descriptions of the process. We will assume that for the scattering process $A(k)+B(\tilde{k})\rightarrow\chi(p)+\psi(\tilde{p})$, the mean, unpolarized squared scattering amplitude can be parametrized in the following way, $|\mathcal{M}|^{2}\;=\;16\pi\frac{s^{\frac{n}{2}+1}}{\Lambda^{n+2}}\,.$ (5.2) Integration with respect to the two-particle phase space recovers (5.1).
For a different combination of $s,t,u$, our results will generically only differ by numerical factors, which can be absorbed into the value of $\Lambda$. (Exceptions include those cases in which finite-temperature in-medium effects are necessary to regulate infrared divergences, which arise from the exchange of massless mediators; thermal axion production and gravitino production in low-scale supersymmetry are included in these cases [154, 155, 156, 157, 158, 159].) Under the freeze-in assumption, and with the squared amplitude given by (5.2), the collision term for the production of $\chi$ can be written as follows, $\mathcal{C}[f_{\chi}]\;=\;\frac{16\pi g_{A}g_{B}g_{\psi}}{\Lambda^{n+2}2p_{0}}\int\frac{\mathop{}\\!\mathrm{d}^{3}\tilde{\boldsymbol{p}}}{2(2\pi)^{3}\tilde{p}_{0}}\frac{\mathop{}\\!\mathrm{d}^{3}\boldsymbol{k}}{2(2\pi)^{3}k_{0}}\frac{\mathop{}\\!\mathrm{d}^{3}\tilde{\boldsymbol{k}}}{2(2\pi)^{3}\tilde{k}_{0}}(2\pi)^{4}\delta^{(4)}(p+\tilde{p}-k-\tilde{k})s^{\frac{n}{2}+1}f_{A}(k_{0})f_{B}(\tilde{k}_{0})\,.$ (5.3) The integration of this collision term for arbitrary $f_{A,B}$ can easily be done following the steps of [155, 160]. We detail these steps in Appendix A.2. As a result, we obtain Eq. (A.13), which will be the starting point of our discussion of thermal and non-thermal UV freeze-in. ### 5.1 Thermal freeze-in We begin by applying the general solution (A.13) to the production of DM from thermalized scatterers, i.e. with Fermi-Dirac or Bose-Einstein distributions. As stated earlier in this section, we focus on UV freeze-in, for which the bulk of the DM relic abundance is produced during reheating. Thermal production during reheating is the dominant production channel in the absence of significant direct inflaton $\rightarrow$ DM decays for $-1<n\leq 2$ in (5.1). Moreover, for higher $n$, thermal production can dominate over non-thermal effects if the parent scatterers that couple to the dark sector are
# On Zero-Sum Two Person Perfect Information Semi-Markov Games

Sagnik Sinha and Kushal Guha Bakshi (ORCID 0000-0002-5215-9828)

Jadavpur University, Department of Mathematics, 188, Raja S.C. Mallick Rd, Kolkata 700032

This work was supported by the Department of Science and Technology, Govt. of India, INSPIRE Fellowship Scheme. ###### Abstract A zero-sum two-person Perfect Information Semi-Markov game (PISMG) under the limiting ratio average payoff has a value, and both the maximiser and the minimiser have optimal pure semi-stationary strategies. We arrive at this result by first fixing an arbitrary initial state, forming the matrix of undiscounted payoffs corresponding to each pair of pure stationary strategies of the two players, and proving that this matrix has a pure saddle point. ###### Keywords: Semi-Markov games · Perfect Information · (Pure) Semi-Stationary Strategies ## 1 Introduction A semi-Markov game (SMG) is a generalisation of a stochastic (Markov) game (Shapley (1953) [11]). Such games have already been studied in the literature (e.g., Lal and Sinha (1992) [1], Luque-Vasquez (1999) [2], Mondal (2015) [4]). Single player SMGs are called semi-Markov decision processes (SMDPs), which were introduced by Jewell (1963) [5] and Howard (1971) [16]. A perfect information semi-Markov game (PISMG) is a natural extension of perfect information stochastic games (PISGs) (Raghavan et al. (1997) [6]), where at each state all but one player is a dummy (i.e., he has only one available action in that state). Note that for such a game, perfect information is a state property. In this paper, we prove that such games (PISMGs) have a value and that both players have pure semi-stationary optimal strategies under undiscounted (limiting ratio average) pay-offs. We prove this by showing the existence of a pure saddle point in the pay-off matrix of the game for each initial state. The paper is organized as follows. Section 2 contains definitions and properties of an undiscounted two person zero-sum semi-Markov game considered under the limiting ratio average pay-off. Section 3 contains the main result of this paper. In Section 4 we state the algorithm of Lazari et al. [9] to compute the Cesaro limiting matrix of a transition matrix. Section 5 contains a numerical example illustrating our theorem. Section 6 is reserved for the conclusion. ## 2 Preliminaries ### 2.1 Finite zero-sum two-person semi-Markov games A zero-sum two-person finite SMG is described by a collection of objects $\Gamma=<S,\\{A(s):s\in S\\},\\{B(s):s\in S\\},q,P,r>$, where $S=\\{1,2,\cdots,N\\}$ is the finite non-empty state space and $A(s)=\\{1,2,\cdots,m_{s}\\},B(s)=\\{1,2,\cdots,n_{s}\\}$ are respectively the non-empty sets of admissible actions of players I and II in the state $s$. We denote by $K=\\{(s,i,j):s\in S,i\in A(s),j\in B(s)\\}$ the set of admissible triplets. For each $(s,i,j)\in K$, $q(.\mid s,i,j)$ denotes the transition law of the game. Given $(s,i,j)\in K$ and $s^{{}^{\prime}}\in S$, let $\tau_{ij}^{ss^{{}^{\prime}}}$ be the transition time random variable, which denotes the time for a transition from a state $s$ to a state $s^{{}^{\prime}}$ under a pair of actions $(i,j)\in A(s)\times B(s)$. For each $(s,i,j)\in K,s^{{}^{\prime}}\in S$, let $P_{ij}^{ss^{{}^{\prime}}}(t)=\text{Prob}(\tau_{ij}^{ss^{{}^{\prime}}}\leq t)$ be a probability distribution function on [$0,\infty$); it is called the conditional transition time distribution function.
Finally, $r$ is a real-valued function on $K$, which represents the immediate (expected) reward for player-I (and $-r$ is the immediate reward for player-II). Let us consider player I as the maximiser and player II as the minimiser in the zero-sum two person SMG. The semi-Markov game over infinite time is played as follows. At the $1$st decision epoch, the game starts at $s_{1}\in S$ and the players I and II simultaneously and independently choose actions $i_{1}\in A(s_{1})$ and $j_{1}\in B(s_{1})$ respectively. Consequently, players I and II get immediate rewards $r(s_{1},i_{1},j_{1})$ and $-r(s_{1},i_{1},j_{1})$ respectively, and the game moves to the state $s_{2}$ with probability $q(s_{2}\mid s_{1},i_{1},j_{1})$. The sojourn time to move from state $s_{1}$ to the state $s_{2}$ is determined by the distribution function $P_{i_{1}j_{1}}^{s_{1}s_{2}}(.)$. After reaching the state $s_{2}$ at the next decision epoch, the game is repeated over infinite time with the state $s_{1}$ replaced by $s_{2}$. By a (behavioural) strategy $\pi_{1}$ of player I, we mean a sequence $\\{(\pi_{1})_{n}(.\mid hist_{n})\\}_{n=1}^{\infty}$, where $(\pi_{1})_{n}$ specifies which action is to be chosen at the $n$-th decision epoch by associating with each history $hist_{n}$ of the system up to the $n$-th decision epoch (where $hist_{n}$=$(s_{1},a_{1},b_{1},s_{2},a_{2},b_{2}\cdots,s_{n-1},a_{n-1},b_{n-1},\\\ s_{n})$ for $n\geq 2$, $hist_{1}=(s_{1})$ and $(s_{k},a_{k},b_{k})\in K$ are respectively the state and actions of the players at the $k$-th decision epoch) a probability distribution $(\pi_{1})_{n}(.\mid hist_{n})$ on $A(s_{n})$. A behavioural strategy $\pi_{2}$ for player II can be defined analogously. By an unspecified strategy, we always mean a behavioural strategy here. We denote by $\Pi_{1}$ and $\Pi_{2}$ the sets of (behavioural) strategies of players I and II respectively. A strategy $f=\\{f_{n}\\}_{n=1}^{\infty}$ for player I is called semi-Markov if for each $n$, $f_{n}$ depends on $s_{1},s_{n}$ and the decision epoch number $n$. Similarly we can define a semi-Markov strategy $g=\\{g_{n}\\}_{n=1}^{\infty}$ for player II. A stationary strategy is a strategy that depends only on the current state. A stationary strategy for player I is defined as an $N$-tuple $f=(f(1),f(2),\cdots,f(N))$, where each $f(s)$ is a probability distribution on $A(s)$ given by $f(s)=(f(s,1),f(s,2),\cdots,f(s,m_{s}))$; here $f(s,i)$ denotes the probability of choosing action $i$ in the state $s$. In a similar manner, one can define a stationary strategy $g$ for player II as $g=(g(1),g(2),\cdots,g(N))$, where each $g(s)$ is a probability distribution on $B(s)$. Let us denote by $F_{1}$ and $F_{2}$ the sets of stationary strategies of players I and II respectively. A stationary strategy is called pure if the player selects some particular action with probability $1$ while visiting a state $s$. We denote by $F_{1}^{s}$ and $F_{2}^{s}$ the sets of pure stationary strategies of players I and II respectively. A semi-stationary strategy is a semi-Markov strategy which is independent of the decision epoch $n$; i.e., for an initial state $s_{1}$ and present state $s_{2}$, if a semi-Markov strategy $g(s_{1},s_{2},n)$ turns out to be independent of $n$, then we call it a semi-stationary strategy.
Let $\xi_{1}$ and $\xi_{2}$ denote the sets of semi-stationary strategies of players I and II respectively, and let $\xi_{1}^{sp}$ and $\xi_{2}^{sp}$ denote the sets of pure semi-stationary strategies of players I and II respectively. Definition 1 A zero-sum two person SMG $\Gamma=<S,\\{A(s):s\in S\\},\\{B(s):s\in S\\},q,P,r>$ is called a perfect information semi-Markov game (PISMG) if the following properties hold: (i) $S=S_{1}\cup S_{2}$, $S_{1}\cap S_{2}=\emptyset$. (ii) $\mid B(s)\mid=1$ for all $s\in S_{1}$, i.e., on $S_{1}$ player-II is a dummy. (iii) $\mid A(s)\mid=1$ for all $s\in S_{2}$, i.e., on $S_{2}$ player-I is a dummy. ### 2.2 Zero-Sum Two-Person Semi-Markov Games under Limiting Ratio Average (Undiscounted) Payoff Let $(X_{1},A_{1},B_{1},X_{2},A_{2},B_{2}\cdots)$ be a co-ordinate sequence in $S\times(A\times B\times S)^{\infty}$. Given a behavioural strategy pair $(\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}$ and an initial state $s\in S$, there exists a unique probability measure $P_{\pi_{1}\pi_{2}}(.\mid X_{1}=s)$ (hence an expectation $E_{\pi_{1}\pi_{2}}(.\mid X_{1}=s)$) on the product $\sigma$-field of $S\times(A\times B\times S)^{\infty}$ by Kolmogorov’s extension theorem. For a pair of strategies $(\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}$ of players I and II respectively, the limiting ratio average (undiscounted) pay-off for player I, starting from a state $s\in S$, is defined by: $\phi(s,\pi_{1},\pi_{2})$= $\liminf_{n\to\infty}\frac{E_{\pi_{1}\pi_{2}}[\sum_{m=1}^{n}r(X_{m},A_{m},B_{m})\mid X_{1}=s]}{E_{\pi_{1}\pi_{2}}[\sum_{m=1}^{n}\bar{\tau}(X_{m},A_{m},B_{m})\mid X_{1}=s]}$. Here $\bar{\tau}(s,i,j)=\sum_{s^{{}^{\prime}}\in S}q(s^{{}^{\prime}}\mid s,i,j)\int_{0}^{\infty}tdP_{ij}^{ss^{{}^{\prime}}}(t)$ is the expected sojourn time in the state $s$ for a pair of actions $(i,j)\in A(s)\times B(s)$. Definition 2 For each pair of stationary strategies $(f,g)\in F_{1}\times F_{2}$, we define the transition probability matrix $Q(f,g)=[q(s^{{}^{\prime}}\mid s,f,g)]_{N\times N}$, where $q(s^{{}^{\prime}}\mid s,f,g)=\sum_{i\in A(s)}\sum_{j\in B(s)}q(s^{{}^{\prime}}\mid s,i,j)f(s,i)g(s,j)$ is the probability that, starting from the state $s$, the next state is $s^{{}^{\prime}}$ when the players choose strategies $f$ and $g$ respectively. For any pair of stationary strategies $(f,g)\in F_{1}\times F_{2}$ of players I and II, we write the undiscounted pay-off for player I as: $\phi(s,f,g)=\liminf_{n\to\infty}\frac{\sum_{m=1}^{n}r^{m}(s,f,g)}{\sum_{m=1}^{n}\bar{\tau}^{m}(s,f,g)}$ for all $s\in S$, where $r^{m}(s,f,g)$ and $\bar{\tau}^{m}(s,f,g)$ are respectively the expected reward and the expected sojourn time for player I at the $m$-th decision epoch, when player I chooses $f$, player II chooses $g$, and the initial state is $s$. We define $r(f,g)=[r(s,f,g)]_{N\times 1}$, $\bar{\tau}(f,g)=[\bar{\tau}(s,f,g)]_{N\times 1}$ and $\phi(f,g)=[\phi(s,f,g)]_{N\times 1}$ as the expected reward, expected sojourn time and undiscounted pay-off vectors for a pair of stationary strategies $(f,g)\in F_{1}\times F_{2}$.
Now $r^{m}(s,f,g)\;=\;\sum_{s^{{}^{\prime}}\in S}P_{fg}(X_{m}=s^{{}^{\prime}}\mid X_{1}=s)r(s^{{}^{\prime}},f,g)\;=\;\sum_{s^{{}^{\prime}}\in S}r(s^{{}^{\prime}},f,g)q^{m-1}(s^{{}^{\prime}}\mid s,f,g)\;=\;[Q^{m-1}(f,g)r(f,g)](s)$ and $\bar{\tau}^{m}(s,f,g)\;=\;\sum_{s^{{}^{\prime}}\in S}P_{fg}(X_{m}=s^{{}^{\prime}}\mid X_{1}=s)\bar{\tau}(s^{{}^{\prime}},f,g)\;=\;\sum_{s^{{}^{\prime}}\in S}\bar{\tau}(s^{{}^{\prime}},f,g)q^{m-1}(s^{{}^{\prime}}\mid s,f,g)\;=\;[Q^{m-1}(f,g)\bar{\tau}(f,g)](s)\,.$ Since $Q(f,g)$ is a Markov matrix, by Kemeny et al. [12] the limit $\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}Q^{m}(f,g)$ exists and equals $Q^{\ast}(f,g)$. It follows that $\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}r^{m}(s,f,g)=[Q^{\ast}(f,g)r(f,g)](s)$ and $\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\bar{\tau}^{m}(s,f,g)=[Q^{\ast}(f,g)\bar{\tau}(f,g)](s)$. Thus we have, for any pair of stationary strategies $(f,g)\in F_{1}\times F_{2}$, $\phi(s,f,g)=\frac{[Q^{*}(f,g)r(f,g)](s)}{[Q^{*}(f,g)\bar{\tau}(f,g)](s)}$ for all $s\in S$, where $Q^{*}(f,g)$ is the Cesaro limiting matrix of $Q(f,g)$. Definition 3 A zero-sum two person undiscounted semi-Markov game is said to have a value vector $\phi=[\phi(s)]_{N\times 1}$ if $\sup_{\pi_{1}\in\Pi_{1}}\inf_{\pi_{2}\in\Pi_{2}}\phi(s,\pi_{1},\pi_{2})=\phi(s)=\inf_{\pi_{2}\in\Pi_{2}}\sup_{\pi_{1}\in\Pi_{1}}\phi(s,\pi_{1},\pi_{2})$ for all $s\in S$. A pair of strategies $(\pi_{1}^{\ast},\pi_{2}^{\ast})\in\Pi_{1}\times\Pi_{2}$ is said to be an optimal strategy pair for the players if $\phi(s,\pi_{1}^{\ast},\pi_{2})\geq\phi(s)\geq\phi(s,\pi_{1},\pi_{2}^{\ast})$ for all $s\in S$ and all $(\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}$. Throughout this paper, the undiscounted pay-off means the limiting ratio average pay-off. ## 3 Results Theorem 1 Any zero-sum two person undiscounted perfect information semi-Markov game has a solution in pure semi-stationary strategies under limiting ratio average pay-offs. ###### Proof Let $\Gamma=<S=S_{1}\cup S_{2},A=\\{A(s):s\in S_{1}\\},B=\\{B(s):s\in S_{2}\\},q,P,r>$ be a zero-sum two person perfect information semi-Markov game under limiting ratio average pay-off, where $S=\\{1,2,\cdots,N\\}$ is the finite state space. Let us fix an initial state $s\in S$. We assume that player-II is a dummy in the first $\mid S_{1}\mid$ states (i.e., states $\\{1,2,\cdots,\mid S_{1}\mid\\}$) and that player-I is a dummy in the states $\\{\mid S_{1}\mid+1,\cdots,\mid S_{1}\mid+\mid S_{2}\mid\\}$. In this perfect information game, player-I has $d_{1},d_{2},\cdots,d_{S_{1}}$ pure actions available in the states where he is non-dummy, and similarly player-II has $t_{S_{1}+1},t_{S_{1}+2},\cdots,t_{S_{1}+S_{2}}$ pure actions available in the states where he is non-dummy. Let $D_{1}=\Pi_{i=1}^{S_{1}}d_{i}$ and $D_{2}=\Pi_{i=S_{1}+1}^{S_{1}+S_{2}}t_{i}$. Let us consider the pay-off matrix $A_{D_{1}\times D_{2}}=\left[{\begin{array}[]{cccc}\phi(s,f_{1},g_{1})&\phi(s,f_{1},g_{2})&\cdots&\phi(s,f_{1},g_{D_{2}})\\\ \phi(s,f_{2},g_{1})&\phi(s,f_{2},g_{2})&\cdots&\phi(s,f_{2},g_{D_{2}})\\\ \vdots&\vdots&\ddots&\vdots\\\ \phi(s,f_{D_{1}},g_{1})&\phi(s,f_{D_{1}},g_{2})&\cdots&\phi(s,f_{D_{1}},g_{D_{2}})\\\ \end{array}}\right]\,,$ where $(f_{1},f_{2},\cdots,f_{D_{1}})$ and $(g_{1},g_{2},\cdots,g_{D_{2}})$ are the pure stationary strategies available to players I and II respectively.
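Since the argument below reduces to locating a pure saddle point of this matrix, a minimal computational sketch (ours, in Python) of that check may be helpful; it uses the standard criterion that a pure saddle exists exactly when the maximin and minimax values coincide, and is applied here to the matrix $A^{1}$ obtained for initial state $1$ in the example of Section 5.

```python
import numpy as np

def pure_saddle(A, tol=1e-9):
    """Return (row, col, value) of a pure saddle point of payoff matrix A
    (row player maximises), or None if no pure saddle exists."""
    maximin = A.min(axis=1).max()       # row player's guaranteed value
    minimax = A.max(axis=0).min()       # column player's guaranteed value
    if abs(maximin - minimax) > tol:
        return None
    i = int(A.min(axis=1).argmax())     # a maximin row
    j = int(A.max(axis=0).argmin())     # a minimax column
    return i, j, A[i, j]                # A[i, j] equals maximin = minimax

# Payoff matrix A^1 for initial state 1, taken from the example in Section 5
A1 = np.array([[2.1,    2.1,    2.1,    2.1   ],
               [1.8353, 1.8353, 1.8353, 1.8353],
               [2.2985, 2.2985, 2.2985, 2.2985],
               [2.2979, 2.112,  2.112,  2.112 ]])
print(pure_saddle(A1))   # -> (2, 0, 2.2985): the third row (f_3) is optimal
```

Note that ties in the optimal column are possible (the third row of $A^{1}$ is constant), and any maximin row/minimax column pair is optimal.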
In order to prove the existence of a pure semi-stationary strategy, we have to prove that this matrix has a pure saddle point for each initial state $s\in S$. Now, by Theorem 2.1 of Shapley’s “Some topics in two-person games” (Advances in Game Theory (AM-52), 1964, p. 6) [17], we know that if $A$ is the pay-off matrix of a two-person zero-sum game and every $2\times 2$ submatrix of $A$ has a saddle point, then $A$ has a saddle point. So it suffices to consider an arbitrary $2\times 2$ submatrix and check whether it has a saddle point. We consider the $2\times 2$ submatrix: $\left[{\begin{array}[]{cccc}\phi(s,f_{i},g_{j})&\phi(s,f_{i},g_{j^{{}^{\prime}}})\\\ \phi(s,f_{i^{{}^{\prime}}},g_{j})&\phi(s,f_{i^{{}^{\prime}}},g_{j^{{}^{\prime}}})\\\ \end{array}}\right]$ where $i,i^{{}^{\prime}}\in\\{1,2,\cdots,D_{1}\\}\,(i\neq i^{{}^{\prime}})$ and $j,j^{{}^{\prime}}\in\\{1,2,\cdots,D_{2}\\}\,(j\neq j^{{}^{\prime}})$. Now, by suitably renumbering the strategies, we can write the above sub-matrix as: $A^{{}^{\prime}}_{2\times 2}=\left[{\begin{array}[]{cccc}\phi(s,f_{1},g_{1})&\phi(s,f_{1},g_{2})\\\ \phi(s,f_{2},g_{1})&\phi(s,f_{2},g_{2})\\\ \end{array}}\right]$ Now we know $\phi(s,f_{i.},g_{.j})=\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{i.})r(t,f_{i.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.j})r(v,g_{.j})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{i.})\tau(t,f_{i.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.j})\tau(v,g_{.j})]}$ We substitute this expression for each entry of the sub-matrix, and consider the following two cases in which $A^{{}^{\prime}}$ could fail to have a pure saddle point. Case-1: $\phi(s,f_{1},g_{1})$ is row-minimum and column-minimum, $\phi(s,f_{1},g_{2})$ is row-maximum and column-maximum, $\phi(s,f_{2},g_{1})$ is row-maximum and column-maximum and $\phi(s,f_{2},g_{2})$ is row-minimum and column-minimum. These four conditions can be written as: $\phi(s,f_{1},g_{1})<\phi(s,f_{1},g_{2})$, $\phi(s,f_{1},g_{1})<\phi(s,f_{2},g_{1})$, $\phi(s,f_{2},g_{2})<\phi(s,f_{2},g_{1})$ and $\phi(s,f_{2},g_{2})<\phi(s,f_{1},g_{2})$.
So, the above four inequalities can be written elaborately as: $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}<\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}\,,$ (3.1) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}<\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}\,,$ (3.2) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}<\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}\,,$ (3.3) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}<\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}\,.$ (3.4) We rename the strategies $f_{1.},f_{2.},g_{.1}$ and $g_{.2}$ as $1.$, $2.$, $.1$ and $.2$ respectively to avoid notational complexity.
Hence, $(3.1)$ yields $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.2)[\tau(t,1.)r(v,.2)-r(t,1.)\tau(v,.2)]\\\ +\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.1)[r(t,1.)\tau(v,.1)\\\ -r(v,.1)\tau(t,1.)]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)q^{\ast}(v\mid s,.2)[\tau(v,.1)r(v,.2)-r(v,.1)\tau(v,.2)]>0\end{split}$ (3.5) $(3.3)$ yields $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.1)[\tau(t,2.)r(v,.1)-r(t,2.)\tau(v,.1)]\\\ +\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)q^{\ast}(v\mid s,.1)[\tau(v,.2)r(v,.1)\\\ -r(v,.2)\tau(v,.1)]+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.2)[r(t,2.)\tau(v,.2)-\tau(t,2.)r(v,.2)]>0\end{split}$ (3.6) $(3.2)$ yields $\begin{split}\sum_{t=1}^{S_{1}}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)q^{\ast}(t\mid s,2.)[\tau(t,1.)r(t,2.)-r(t,1.)\tau(t,2.)]\\\ +\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.1)[r(v,.1)\tau(t,1.)\\\ -r(t,1.)\tau(v,.1)]+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.1)[\tau(v,.1)r(t,2.)-r(v,.1)\tau(t,2.)]>0\end{split}$ (3.7) $(3.4)$ yields $\begin{split}\sum_{t=1}^{S_{1}}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)q^{\ast}(t\mid s,2.)[r(t,1.)\tau(t,2.)-r(t,2.)\tau(t,1.)]\\\ +\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.2)[r(v,.2)\tau(t,2.)\\\ -r(t,2.)\tau(v,.2)]+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.2)[r(t,1.)\tau(v,.2)-r(v,.2)\tau(t,1.)]>0\end{split}$ (3.8) Using the fact that $0<q^{\ast}(s^{{}^{\prime}}\mid s,a)<1$ (where $s,s^{{}^{\prime}}\in\\{1,2,\cdots,N\\}$ and $a$ is the action chosen by either player-I or II) and adding $(3.5)$ and $(3.6)$, we get $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(\tau(t,1.)r(v,.2)-r(t,1.)\tau(v,.2))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(t,1.)\tau(v,.1)-\tau(t,1.)r(v,.1))+\\\ \sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(v,.1)\tau(t,2.)-r(t,2.)\tau(v,.1))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\\\ (r(t,2.)\tau(v,.2)-\tau(t,2.)r(v,.2))>0\end{split}$ (3.9) Similarly adding $(3.7)$ and $(3.8)$, we get $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(\tau(v,.2)r(t,1.)-r(v,.2)\tau(t,1.))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(v,.1)\tau(t,1.)-\tau(v,.1)r(t,1.))+\\\ \sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(t,2.)\tau(v,.1)-r(v,.1)\tau(t,2.))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\\\ (r(v,.2)\tau(t,2.)-\tau(t,2.)r(v,.2))>0\end{split}$ (3.10) From $(3.9)$ and $(3.10)$ we clearly get a contradiction, since each summand in $(3.10)$ is the negative of the corresponding summand in $(3.9)$. Now we consider the next case: Case-2: $\phi(s,f_{1},g_{1})$ is row-maximum and column-maximum, $\phi(s,f_{1},g_{2})$ is row-minimum and column-minimum, $\phi(s,f_{2},g_{1})$ is row-minimum and column-minimum and $\phi(s,f_{2},g_{2})$ is row-maximum and column-maximum. These four conditions can be written as: $\phi(s,f_{1},g_{1})>\phi(s,f_{1},g_{2})$, $\phi(s,f_{1},g_{1})>\phi(s,f_{2},g_{1})$, $\phi(s,f_{2},g_{2})>\phi(s,f_{2},g_{1})$ and $\phi(s,f_{2},g_{2})>\phi(s,f_{1},g_{2})$.
We can re-write them as follows: $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}>\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}\,,$ (3.11) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}>\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}\,,$ (3.12) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}>\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})r(v,g_{.1})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.1})\tau(v,g_{.1})]}\,,$ (3.13) $\displaystyle\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{2.})\tau(t,f_{2.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}>\frac{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})r(v,g_{.2})]}{\sum_{t=1}^{S_{1}}[q^{\ast}(t\mid s,f_{1.})\tau(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_{2}}[q^{\ast}(v\mid s,g_{.2})\tau(v,g_{.2})]}\,.$ (3.14) As in the previous case, we rename the strategies $f_{1.},f_{2.},g_{.1}$ and $g_{.2}$ as $1.$, $2.$, $.1$ and $.2$ respectively to avoid notational complexity.
Hence, $(3.11)$ yields $\begin{split}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)\tau(t,1.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)r(v,.1)+\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)r(t,1.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)\tau(v,.2)\\\ +\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)r(v,.1)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)\tau(v,.2)-\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.2)\tau(t,1.)r(v,.2)\\\ -\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.1)r(t,1.)\tau(v,.1)-\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)r(v,.2)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)\tau(v,.1)>0\end{split}$ (3.15) $(3.13)$ yields $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)\tau(t,2.)q^{\ast}(v\mid s,.2)r(v,.2)+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)r(t,2.)q^{\ast}(v\mid s,.1)\tau(v,.1)\\\ +\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)\tau(v,.1)q^{\ast}(v\mid s,.2)r(v,.2)-\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.1)\tau(t,2.)r(v,.1)\\\ -\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.2)r(t,2.)\tau(v,.2)-\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)\tau(v,.2)q^{\ast}(v\mid s,.1)r(v,.1)>0\end{split}$ (3.16) $(3.12)$ yields $\begin{split}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,2.)\tau(t,2.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)r(v,.1)+\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)r(t,1.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)\tau(v,.1)\\\ +\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)r(t,1.)\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,2.)\tau(t,2.)-\sum_{t=1}^{S_{1}}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)q^{\ast}(t\mid s,2.)\tau(t,1.)r(t,2.)-\\\ \sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,1.)q^{\ast}(v\mid s,.1)r(v,.1)\tau(t,1.)-\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,2.)r(t,2.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.1)\tau(v,.1)>0\end{split}$ (3.17) $(3.14)$ yields $\begin{split}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)\tau(t,1.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)r(v,.2)+\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,2.)r(t,2.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)\tau(v,.2)\\\ +\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,2.)r(t,2.)\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)\tau(t,1.)-\sum_{t=1}^{S_{1}}\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)q^{\ast}(t\mid s,2.)r(t,1.)\tau(t,2.)\\\ -\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(t\mid s,2.)q^{\ast}(v\mid s,.2)r(v,.2)\tau(t,2.)-\sum_{t=1}^{S_{1}}q^{\ast}(t\mid s,1.)r(t,1.)\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{\ast}(v\mid s,.2)\tau(v,.2)>0\end{split}$ (3.18) Similarly, using the fact that $0<q^{\ast}(s^{{}^{\prime}}\mid s,a)<1$ (where $s,s^{{}^{\prime}}\in\\{1,2,\cdots,N\\}$ and $a$ is the action chosen by either player-I or II) and adding $(3.15)$ and $(3.16)$, we get $\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(\tau(v,.2)r(t,1.)-r(v,.2)\tau(t,1.))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(v,.1)\tau(t,1.)-\tau(v,.1)r(t,1.))+\\\ \sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(t,2.)\tau(v,.1)-r(v,.1)\tau(t,2.))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\\\ (r(v,.2)\tau(t,2.)-r(t,2.)\tau(v,.2))>0\end{split}$ (3.19) Now adding $(3.17)$ and $(3.18)$ we get
$\begin{split}\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(\tau(t,1.)r(v,.2)-r(t,1.)\tau(v,.2))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(t,1.)\tau(v,.1)-\tau(t,1.)r(v,.1))+\\\ \sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}(r(v,.1)\tau(t,2.)-r(t,2.)\tau(v,.1))+\sum_{t=1}^{S_{1}}\sum_{v=S_{1}+1}^{S_{1}+S_{2}}\\\ (r(t,2.)\tau(v,.2)-r(v,.2)\tau(t,2.))>0\end{split}$ (3.20) From $(3.19)$ and $(3.20)$ we again get a contradiction. Thus, every $2\times 2$ submatrix has a pure saddle point, and by Theorem 2.1 of Shapley [17] (p. 6) the matrix $A$ has a pure saddle point, so the game $\Gamma$ has a pure stationary optimal strategy pair for each initial state. Let $(f_{1},f_{2},\cdots,f_{N})$ be optimal pure stationary strategies for player-I when the initial states are $1,2,\cdots,N$ respectively, and let $(g_{1},g_{2},\cdots,g_{N})$ be optimal pure stationary strategies for player-II when the initial states are $1,2,\cdots,N$ respectively. Then $f^{\ast}=(f_{1},f_{2},\cdots,f_{N})$ and $g^{\ast}=(g_{1},g_{2},\cdots,g_{N})$ are optimal pure semi-stationary strategies for players I and II respectively in the perfect information semi-Markov game $\Gamma$. ## 4 Calculating the Cesaro Limiting Matrix of A Transition Matrix Lazari et al. [9] proposed an algorithm to compute the Cesaro limiting matrix of any transition (stochastic) matrix $Q$ with $n$ states. The algorithm runs as follows: Input: the transition matrix $Q\in M_{n}(\mathbb{R})$ (where $M_{n}(\mathbb{R})$ is the set of $n\times n$ matrices over the field of real numbers). Output: the Cesaro limiting matrix $Q^{\ast}\in M_{n}(\mathbb{R})$. Step $1$: Determine the characteristic polynomial $C_{Q}(z)=\mid Q-zI_{n}\mid$. Step $2$: Divide the polynomial $C_{Q}(z)$ by $(z-1)^{m(1)}$ (where $m(1)$ is the algebraic multiplicity of the eigenvalue $z_{0}=1$) and call the quotient $T(z)$. Step $3$: Compute the quotient matrix $W=T(Q)$. Step $4$: Determine the limiting matrix $Q^{\ast}$ by dividing the matrix $W$ by the sum of the elements of any arbitrary row. ## 5 An Example Example: Consider a PISMG $\Gamma$ with four states $S=\\{1,2,3,4\\}$, $A(1)=\\{1,2\\}=A(2)$, $B(1)=B(2)=\\{1\\}$, $B(3)=B(4)=\\{1,2\\}$, $A(3)=A(4)=\\{1\\}.$ Player II is the dummy player in states $1$ and $2$, and player I is the dummy player in states $3$ and $4$. Rewards, transition probabilities and expected sojourn times for the players are given below. State-1 (player-I chooses): action 1: $r=1.1$, $q=(\frac{1}{2},\frac{1}{2},0,0)$, $\bar{\tau}=1$; action 2: $r=1$, $q=(\frac{1}{3},\frac{2}{3},0,0)$, $\bar{\tau}=0.9$. State-2: action 1: $r=3.1$, $q=(\frac{1}{2},\frac{1}{2},0,0)$, $\bar{\tau}=1$; action 2: $r=3$, $q=(\frac{2}{3},\frac{1}{3},0,0)$, $\bar{\tau}=1.1$. State-3 (player-II chooses): action 1: $r=3$, $q=(0,0,1,0)$, $\bar{\tau}=1$; action 2: $r=5.8$, $q=(0,0,1,0)$, $\bar{\tau}=2$. State-4: action 1: $r=4$, $q=(\frac{1}{2},0,\frac{1}{2},0)$, $\bar{\tau}=2$; action 2: $r=2$, $q=(\frac{1}{2},0,\frac{1}{2},0)$, $\bar{\tau}=1.1$. Here, for each available action, $r$ is the immediate reward of the players, $q=(q_{1},q_{2},q_{3},q_{4})$ gives the transition probabilities to the states $1$, $2$, $3$ and $4$ respectively, and $\bar{\tau}$ is the expected sojourn time if that action is chosen. Player I is the row player and player II is the column player. Player-I has the pure stationary strategies $f_{1}=\\{(1,0),(1,0),1,1\\}$, $f_{2}=\\{(1,0),(0,1),1,1\\}$, $f_{3}=\\{(0,1),(1,0),1,1\\}$ and $f_{4}=\\{(0,1),(0,1),1,1\\}$. Similarly, the pure stationary strategies for player-II are $g_{1}=\\{1,1,(1,0),(1,0)\\}$, $g_{2}=\\{1,1,(1,0),(0,1)\\}$, $g_{3}=\\{1,1,(0,1),(1,0)\\}$ and $g_{4}=\\{1,1,(0,1),(0,1)\\}$.
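Since the computations that follow repeatedly apply the algorithm of Section 4, here is a minimal sketch of it (ours, in Python with NumPy), validated on a two-state periodic chain whose Cesaro limit is known in closed form; the same function, combined with $\phi(s,f,g)=[Q^{\ast}r](s)/[Q^{\ast}\bar{\tau}](s)$, can be used to cross-check the matrices and pay-offs enumerated below.

```python
import numpy as np

def cesaro_limit(Q, tol=1e-9):
    """Cesaro limiting matrix Q* of a stochastic matrix Q, following the
    four steps of the algorithm of Lazari et al. stated in Section 4."""
    n = Q.shape[0]
    # Step 1: characteristic polynomial. np.poly gives det(zI - Q), which
    # differs from C_Q(z) = det(Q - zI) only by a sign that cancels in Step 4.
    t = np.poly(Q)                                   # highest degree first
    # Step 2: divide out (z - 1)^{m(1)}, m(1) = multiplicity of z0 = 1
    while True:
        quot, rem = np.polydiv(t, np.array([1.0, -1.0]))
        if abs(rem[-1]) > tol:                       # z = 1 is no longer a root
            break
        t = quot
    # Step 3: evaluate the quotient at the matrix, W = T(Q), by a Horner scheme
    W = np.zeros_like(Q)
    for c in t:
        W = W @ Q + c * np.eye(n)
    # Step 4: divide W by the (common) sum of the elements of any row
    return W / W[0].sum()

# Validation: Q = [[0,1],[1,0]] has C_Q(z) proportional to z^2 - 1, so
# T(z) = z + 1, W = Q + I, and Q* = [[1/2, 1/2], [1/2, 1/2]].
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
print(cesaro_limit(Q))
```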
Now, we calculate the undiscounted value of the PISMG for each initial state by the complete enumeration method. Using the algorithm described in Section $4$, we calculate the Cesaro limiting matrices as follows: $Q^{\ast}(f_{1},g_{1})=Q^{\ast}(f_{1},g_{2})=Q^{\ast}(f_{1},g_{3})=Q^{\ast}(f_{1},g_{4})=\left[\begin{array}[]{rrrrr}\frac{1}{2}&\frac{1}{2}&0&0\\\ \frac{1}{2}&\frac{1}{2}&0&0\\\ 0&0&1&0\\\ \frac{1}{2}&\frac{1}{2}&0&0\\\ \end{array}\right]$, $Q^{\ast}(f_{2},g_{1})=Q^{\ast}(f_{2},g_{2})=Q^{\ast}(f_{2},g_{3})=Q^{\ast}(f_{2},g_{4})=\left[\begin{array}[]{rrrrr}\frac{1}{2}&\frac{1}{2}&0&0\\\ \frac{2}{3}&\frac{1}{3}&0&0\\\ 0&0&1&0\\\ \frac{1}{2}&0&\frac{1}{2}&0\\\ \end{array}\right]$, $Q^{\ast}(f_{3},g_{1})=Q^{\ast}(f_{3},g_{2})=Q^{\ast}(f_{3},g_{3})=Q^{\ast}(f_{3},g_{4})=\left[\begin{array}[]{rrrrr}\frac{1}{3}&\frac{2}{3}&0&0\\\ \frac{1}{2}&\frac{1}{2}&0&0\\\ 0&0&1&0\\\ \frac{1}{2}&0&\frac{1}{2}&0\\\ \end{array}\right]$, $Q^{\ast}(f_{4},g_{1})=Q^{\ast}(f_{4},g_{2})=Q^{\ast}(f_{4},g_{3})=Q^{\ast}(f_{4},g_{4})=\left[\begin{array}[]{rrrrr}\frac{1}{3}&\frac{2}{3}&0&0\\\ \frac{2}{3}&\frac{1}{3}&0&0\\\ 0&0&1&0\\\ \frac{1}{2}&0&\frac{1}{2}&0\\\ \end{array}\right]$. Now the reward vector $\hat{r}(f_{1},g_{1})=(1.1,3.1,3,4)$ and the expected sojourn time vector $\bar{\tau}(f_{1},g_{1})=(1,1,1,2)$. Thus, by using the definition of $\hat{\phi}$, we get $\hat{\phi}(f_{1},g_{1})=(2.1,2.1,3,0.9)$. Similarly we calculate the undiscounted pay-offs for the other pairs of pure stationary strategies as: $\hat{\phi}(f_{1},g_{2})=(2.1,2.1,3,0.9)$, $\hat{\phi}(f_{1},g_{3})=(2.1,2,2.9,0.9)$, $\hat{\phi}(f_{1},g_{4})=(2.1,2.1,2.9,0.9)$, $\hat{\phi}(f_{2},g_{1})=(1.8353,1.8362,3,0.53)$, $\hat{\phi}(f_{2},g_{3})=(1.8353,1.8362,2.9,1.2776)$, $\hat{\phi}(f_{2},g_{2})=(1.8353,1.8362,3,0.58)$, $\hat{\phi}(f_{2},g_{4})=(1.8353,1.8362,2.9,1.2773)$, $\hat{\phi}(f_{3},g_{1})=(2.2985,2.2985,3,0.4088)$, $\hat{\phi}(f_{3},g_{2})=(2.2985,2.2985,3,0.4088)$, $\hat{\phi}(f_{3},g_{3})=(2.2985,2.2985,2.9,0.4088)$, $\hat{\phi}(f_{3},g_{4})=(2.2985,2.2988,2.9,1.2182)$, $\hat{\phi}(f_{4},g_{1})=(2.2979,2.2985,3,0.4277)$, $\hat{\phi}(f_{4},g_{3})=(2.112,2.1129,2.9,1.2141)$, $\hat{\phi}(f_{4},g_{2})=(2.112,2.1129,3,0.4267)$, $\hat{\phi}(f_{4},g_{4})=(2.112,2.1129,2.9,1.2137)$. For initial state $1$, we get the pay-off matrix $A$ as described in Section $3$, as: $A^{1}_{4\times 4}=\left[{\begin{array}[]{cccc}2.1&2.1&2.1&2.1\\\ 1.8353&1.8353&1.8353&1.8353\\\ 2.2985&2.2985&2.2985&2.2985\\\ 2.2979&2.112&2.112&2.112\\\ \end{array}}\right].$ This matrix has a pure saddle point at the $3$rd row, $3$rd column position, and we conclude that $(f_{3},g_{3})$ is an optimal pure stationary strategy pair for the players for initial state $1$. Similarly, for initial states $2$, $3$ and $4$ we get the pay-off matrices: $A^{2}_{4\times 4}=\left[{\begin{array}[]{cccc}2.1&2.1&2&2.1\\\ 1.8362&1.8362&1.8362&1.8362\\\ 2.2985&2.2985&2.2985&2.2988\\\ 2.2985&2.1129&2.1129&2.1129\\\ \end{array}}\right]$, $A^{3}_{4\times 4}=\left[{\begin{array}[]{cccc}3&3&2.9&2.9\\\ 3&3&2.9&2.9\\\ 3&3&2.9&2.9\\\ 3&3&2.9&2.9\\\ \end{array}}\right]$ and $A^{4}_{4\times 4}=\left[{\begin{array}[]{cccc}0.9&0.9&0.9&0.9\\\ 0.53&0.58&1.2776&1.2773\\\ 0.4088&0.4088&0.4088&1.2182\\\ 0.4277&0.4267&1.2141&1.2137\\\ \end{array}}\right]$. The optimal pure stationary strategy pairs of player-I and player-II for the initial states $2$, $3$ and $4$ are $(f_{3},g_{3})$, $(f_{1},g_{3})$ and $(f_{1},g_{2})$ respectively.
Thus the optimal pure semi-stationary strategies are $f^{\ast}=(f_{3},f_{3},f_{1},f_{1})$ and $g^{\ast}=(g_{3},g_{3},g_{3},g_{2})$ for players I and II respectively, and the game has the value vector $(2.2985,2.2985,2.9,0.9)$.

## 6 Conclusion

The purpose of this paper is to show that there exists an optimal pure semi-stationary strategy pair, obtained by just looking at the pay-off matrix, in any Perfect Information semi-Markov game. Thus, the existence of the value and of a pair of pure semi-stationary optimal strategies for the players in a zero-sum two-person Perfect Information undiscounted semi-Markov game can be obtained directly as a corollary of Shapley's paper (1964) ([17]), without going through the discounted version. Furthermore, the existence of a pure optimal strategy (not necessarily stationary/semi-stationary) for an $N$-person Perfect Information non-cooperative semi-Markov game under any standard (discounted/undiscounted) pay-off criterion can be shown. We shall elaborate on this in a forthcoming paper.

## References

* [1] Lal, Arbind K. and Sinha, Sagnik: Zero-sum two-person semi-Markov games. Journal of Applied Probability, Cambridge University Press (1992).
* [2] Luque-Vasquez, Fernando and Hernandez-Lerma, Onesimo: Semi-Markov control models with average costs. Applicationes Mathematicae, Instytut Matematyczny Polskiej Akademii Nauk (1999).
* [3] Sinha, Sagnik and Mondal, Prasenjit: Semi-Markov decision processes with limiting ratio average rewards. Journal of Mathematical Analysis and Applications, Elsevier (2017).
* [4] Mondal, Prasenjit and Sinha, Sagnik: Ordered field property for semi-Markov games when one player controls transition probabilities and transition times. International Game Theory Review (2015).
* [5] Jewell, William S.: Markov-renewal programming. I: Formulation, finite return models. Operations Research, INFORMS (1963).
* [6] Thuijsman, Frank and Raghavan, Thirukkannamangai E. S.: Perfect information stochastic games and related classes. International Journal of Game Theory, Springer (1997).
* [7] Adler, Ilan, Resende, Mauricio G. C., Veiga, Geraldo and Karmarkar, Narendra: An implementation of Karmarkar's algorithm for linear programming. Mathematical Programming, Springer (1989).
* [8] Mondal, Prasenjit: Computing semi-stationary optimal policies for multichain semi-Markov decision processes. Annals of Operations Research, Springer (2020).
* [9] Lazari, Alexandru and Lozovanu, Dmitrii: New algorithms for finding the limiting and differential matrices in Markov chains. Buletinul Academiei de Moldovei. Matematica (2020).
* [10] Mondal, Prasenjit: On zero-sum two-person undiscounted semi-Markov games with a multichain structure. Advances in Applied Probability, Cambridge University Press (2017).
* [11] Shapley, Lloyd S.: Stochastic games. Proceedings of the National Academy of Sciences, National Acad Sciences (1953).
* [12] Kemeny, John G. and Snell, J. Laurie: Finite continuous time Markov chains. Theory of Probability & Its Applications, SIAM (1961).
* [13] Gillette, Dean: Stochastic games with zero stop probabilities. Contributions to the Theory of Games (AM-39), Volume III, Princeton University Press (2016).
* [14] Liggett, Thomas M. and Lippman, Steven A.: Stochastic games with perfect information and time average payoff. SIAM Review, SIAM (1969).
* [15] Derman, Cyrus: On sequential decisions and Markov chains. Management Science, INFORMS (1962).
* [16] Howard, Ronald A.: Semi-Markov and decision processes. Wiley (1971).
* [17] Dresher, M., Berkovitz, L. D., Aumann, R. J., Shapley, L. S., Davis, M. D. and Tucker, A. W.: Advances in Game Theory. Annals of Mathematics Studies, Princeton University Press (1964).
# Sound and Complete Proof Rules for Probabilistic Termination

Rupak Majumdar (0000-0003-2136-0542), Max Planck Institute for Software Systems (MPI-SWS), Paul-Ehrlich-Straße, Building G26, 67663 Kaiserslautern, Germany, <EMAIL_ADDRESS>

and V. R. Sathiyanarayana (0009-0006-5187-5415), Max Planck Institute for Software Systems (MPI-SWS), Paul-Ehrlich-Straße, Building G26, 67663 Kaiserslautern, Germany, <EMAIL_ADDRESS>

###### Abstract.

Termination is a fundamental question in the analysis of probabilistic imperative programs. We consider the _qualitative_ and _quantitative_ probabilistic termination problems for an imperative programming model with discrete probabilistic choice and demonic bounded nondeterminism. The qualitative question asks if the program terminates almost surely, no matter how nondeterminism is resolved; the quantitative question asks for a bound on the probability of termination. Despite a long and rich literature on the topic, no sound and relatively complete proof systems were known for this problem. We provide the first sound and relatively complete proof rules for proving qualitative and quantitative termination in the assertion language of arithmetic. Our proof rules use supermartingales as estimates of the likelihood of the program's evolution; the key insight is to use appropriately defined finite-state sub-instances. Our completeness result shows how to construct a suitable supermartingale from an almost-surely terminating program. We also show that proofs of termination in many existing proof systems can be transformed to proofs in our system, pointing to its applicability in practice. As an application of our proof rule, we show a proof of almost sure termination for the two-dimensional random walker.

## 1\. Introduction

Probabilistic programming languages extend the syntax of usual deterministic computation with primitives for random choice. Thus, probabilistic programs express randomized computation and have found applications in many domains where randomization is essential. We study the _termination problem_ for probabilistic programs with discrete probabilistic and nondeterministic choice. Termination is a fundamental property of programs and formal reasoning about (deterministic) program termination goes back to Turing (Turing, 1937). Its extension to the probabilistic setting can be either qualitative or quantitative. _Qualitative_ termination, or Almost-Sure Termination $(\mathsf{AST})$, asks if the program terminates almost surely, no matter how the nondeterminism is resolved. _Quantitative_ termination, on the other hand, relates to finding upper and lower bounds on the probability of termination that hold across all resolutions of nondeterminism. For finite-state probabilistic programs, both qualitative and quantitative termination problems are well understood: there are sound and complete _algorithmic_ procedures for termination that operate by analyzing the underlying finite-state Markov decision processes. Intuitively, every run of the system eventually arrives in an end component and thus, computing the termination probabilities reduces to computing the reachability probabilities for the appropriate end components (Hart et al., 1983; Vardi, 1985; Courcoubetis and Yannakakis, 1995; Bianco and de Alfaro, 1995; de Alfaro et al., 2007; de Alfaro and Henzinger, 2000). The story is different for infinite state spaces.
Existing techniques for deducing termination typically take the form of _sound_ proof rules over a program logic (Hart et al., 1983; de Alfaro et al., 2007; McIver and Morgan, 2005; Chakarov and Sankaranarayanan, 2013; Chatterjee et al., 2017; McIver et al., 2018; Chatterjee et al., 2022). These rules ask for certificates consisting of a variety of mathematical entities that satisfy locally-checkable properties. None of them, however, are known to be complete; that is, we do not know if certificates can always be found for terminating programs. The search for relatively complete proof rules has been a long-standing open problem. Note that, since the termination problem is undecidable, one can only hope for completeness _relative_ to an underlying logic. In this paper, we describe the first sound and relatively complete proof rules for qualitative and quantitative termination of probabilistic programs. We present our rules in a simple proof system in the style of Floyd (1993) that applies naturally to our program model. This proof system uses arithmetic as its assertion language, interpreted over the standard model of rational numbers. Soundness means that if our proof rule applies, then indeed the system satisfies the (qualitative or quantitative) termination criterion. Completeness of our rules is relative to the completeness of a proof system for the underlying assertion language, i.e., arithmetic. Accordingly, we show an effective reduction from the validity of our program logic to the validity of a finite number of assertions in arithmetic. Whenever the original program terminates (qualitatively or quantitatively), one can construct a proof in our program logic in such a way that all relevant certificates can be expressed in the assertion language. This is important: merely knowing that certain semantic certificates exist may not be sufficient for a proof system, e.g., if these certificates are provided non-constructively or require terms that cannot be expressed in the assertion language. Let us be more precise. We work in an imperative programming model with variables ranging over rationals. Our model fixes a finite set of program locations, and defines a guarded transition relation between the locations representing computational steps. At marked locations, the model contains primitives for probability distributions over available transitions. This allows for the expression of _bounded_ nondeterministic and probabilistic choice; we assume the nondeterminism is resolved demonically. We fix the language of arithmetic as our expression and assertion language, and interpret formulas over the standard model of the rational numbers. (One can generalize our result to _arithmetical_ structures (Harel et al., 2000), but we stick to the simpler setting for clarity.) The semantics of our programming language is given by a Markov decision process on countably many states, where a demonic scheduler resolves the nondeterminism. Since the language has bounded nondeterminism, we note that each state has a finite number of immediate successor states and so, for every scheduler, the number of states reached in a bounded number of steps is finite. Given a program and a terminal state, the _qualitative termination_ question asks if the infimum over all schedulers resolving nondeterminism of the probability of reaching the terminal state is one, that is, if the program almost surely terminates under all possible schedulers.
The _quantitative termination_ question asks if the probability of reaching the terminal state is bounded above or below by a given probability $p$. For the special case of programs without probabilistic choice, sound and relatively complete proof systems for termination are known (Manna and Pnueli, 1974; Harel, 1980; Apt, 1981): they involve finding a _variant_ function from (reachable) program states to a well-founded domain that decreases on every step of the program, and which maps the terminal state to a minimal element. A natural generalization of variant functions is a _ranking supermartingale_ : a function from states to reals that reduces in expectation by some amount on each step of the program. Ranking supermartingales of various flavors are the workhorse of existing proof rules for qualitative termination. Unfortunately, an example from (Majumdar and Sathiyanarayana, 2024) shows that a proof rule based only on a ranking supermartingale is incomplete: there may not exist a ranking supermartingale that decreases in expectation on each step and one may require transfinite ordinals in proofs. #### Basic Ingredients We take a different perspective. Instead of looking for a mapping that represents the distance to a terminal state that goes down in each step, as in previous approaches, we consider modeling the relative likelihood of the program’s evolution instead. We provide two different proof rules. In the first, we certify almost sure termination using a function that is unbounded and non-increasing in expectation on _“most”_ states. In the second, we certify almost sure termination using a family of functions, one for each reachable state, each non-increasing in expectation. Both certificates track the execution’s relative likelihood in subtly different ways, and both require additional side conditions. We prove an _unrolling lemma_ that is a central tool for our completeness results. It states that if the infimum over all schedulers of the probability of reaching a terminal state is at least $p$, then for every $\epsilon$, there is a finite upper bound $k$ such that the infimum over all schedulers of the probability mass of reaching the terminal state within $k$ steps is at least $p-\epsilon$. In particular, the set of states reachable in $k$ steps defines a finite state space. The unrolling lemma appears as a basic ingredient in characterizing the complexity of almost sure termination (Kaminski et al., 2019; Majumdar and Sathiyanarayana, 2023). We show that it provides a surprisingly powerful tool in proving completeness of proof systems by “carving out” finite state systems out of infinite-state termination problems. #### Our Proof Rules for Qualitative Termination I. Our first rule asks for a supermartingale (Doob, 1953) $V$ that is non- increasing in expectation on all states except for some set containing the terminal state. In addition, our rule asks for a variant function $U$ that certifies that every reachable state has some finite path to the terminal state. We also require a few compatibility conditions on $U$ from $V$ to let us conclude the almost-sure escape from sets of states within which $V$ is bounded. We show the rule is sound by partitioning the collection of all runs and strategically employing variant arguments and/or martingale theory within each partition. The completeness of the rule uses the following observation. Fix an enumeration of the reachable states $s_{1},s_{2},\ldots$ of an almost-surely terminating program. 
Let $\Pr_{s}[\Diamond\mathbbm{1}_{>n}]$ denote the probability that a run starting from state $s$ reaches some state in $\\{s_{n},s_{n+1},\ldots\\}$ in the enumeration. For a fixed $s$, we first show that $\Pr_{s}[\Diamond\mathbbm{1}_{>n}]$ goes to zero as $n\rightarrow\infty$. Following a diagonal-like construction from the theory of countable Markov chains (Mertens et al., 1978), we next define a sequence $n_{1},n_{2},\ldots$ such that $\Pr_{s_{j}}[\Diamond\mathbbm{1}_{>n_{k}}]\leq\frac{1}{2^{k}}$ for all $j\leq k$ (this is possible since the limit goes to zero and by the unrolling lemma). This lets us define a supermartingale over the states $s$ as $\sum_{k\in\mathbb{N}}\Pr{}_{s}[\Diamond\mathbbm{1}_{>n_{k}}].$ This supermartingale satisfies the requirements of our rule. Moreover, we show that this supermartingale can be expressed in arithmetic, granting the rule relative completeness.

II. We provide a second, dual proof rule that takes a more local view. Our first rule required certificates inferred from the global behaviour of the program. Our second rule, by contrast, requires proofs of _near termination_, i.e., termination with some non-zero probability, from _every_ reachable state. If these proofs together yield a non-zero lower bound on the termination probability across all states, a zero-one law gives almost-sure termination. Therefore, our rule asks for a proof of termination with probability at least $1-\epsilon$, for some $\epsilon>0$. How does our rule certify termination from a given state with probability $1-\epsilon$? It incorporates a proof rule for quantitative termination by Chatterjee et al. (2022) that builds finite supermartingales that take on non-trivial values for only finitely many states. This rule employs _stochastic invariants_ (Chatterjee et al., 2017): pairs $(\mathsf{SI},p)$ such that the probability with which executions leave the set of states $\mathsf{SI}$ is bounded above by $p$. Our proof rule needs a family of stochastic invariants, one for each reachable state. The completeness of this rule uses the unrolling lemma in a crucial way. The unrolling lemma implies a finite state space around every state that accumulates a termination probability mass of $1-\epsilon$. We use this fact to show that the proof rule of Chatterjee et al. (2022) is complete for finite-state systems. This completeness in turn induces the finite nature of these supermartingales describing the stochastic invariants around each state.

#### Quantitative Termination

All our rules for quantitative termination again use the stochastic invariants of Chatterjee et al. (2017). A stochastic invariant easily implies an upper bound rule: if there is a stochastic invariant $(\mathsf{SI},p)$ that avoids terminal states, the probability of termination is upper bounded by $p$. For lower bounds, our starting point is the proof rule proposed by Chatterjee et al. (2022) using stochastic invariants: if there is a stochastic invariant $(\mathsf{SI},p)$ such that runs almost surely terminate within $\mathsf{SI}$ or leave $\mathsf{SI}$, then the probability of termination is at least $1-p$. While they claimed soundness and completeness for their rule, there were two issues. First, their rule was parameterized by a certificate for qualitative termination. In the absence of a relatively complete proof rule for qualitative termination, one could not achieve relative completeness. Second, we show in Section 5.3 an explicit example where their rule cannot apply.
We show a sound and complete rule that is a modification of their rule: we require that for each $n\in\mathbb{N}$, there is a stochastic invariant $(\mathsf{SI}_{n},p+\frac{1}{n})$ such that all runs almost surely terminate within $\mathsf{SI}_{n}$ or leave $\mathsf{SI}_{n}$. Sound and relatively complete certificates for almost sure termination are now given using our previous technique for qualitative termination. In summary, we provide the first sound and relatively complete proof rules for qualitative and quantitative termination, culminating the substantial body of work on probabilistic termination in the last four decades.

#### Other Related Work

Our proof rules use supermartingales that are closely related to Lyapunov functions in the stability of dynamical systems. Lyapunov functions have been used to characterize recurrence and transience in infinite-state Markov chains, going back to the work of Foster (Foster, 1951, 1953). Completeness of Lyapunov functions was shown in general by Mertens et al. (1978). Our proof of soundness and completeness uses insights from Mertens et al. (1978), but we have to overcome several technical issues. First, we have demonic nondeterminism and therefore require the unrolling lemma to deal with infima over all schedulers. Second, we do not have irreducibility. Finally, whereas a Markov chain is either recurrent or transient, we cannot assume that a program that is not almost sure terminating has a strong transience property. Thus, we have to prove these properties ab initio. In the literature, there exist sound proof rules for $\mathsf{AST}$ that use supermartingales. One rule by Huang et al. (2018) uses supermartingales that exhibit a lower bound on their variation in each step. A much more closely related work is the excellent $\mathsf{AST}$ proof rule by McIver et al. (2018). Their work shows that a proof rule consisting of a supermartingale function that also acts as a distance variant is sound for almost-sure termination. While the former is believed to be incomplete (McIver et al., 2018), it is not known if the latter rule is complete. Our work indicates that one can achieve completeness by separating the roles of the distance variant and the supermartingale into two functions. The issue of an appropriate assertion language for proof rules for termination has been mostly elided in the literature, and most rules are presented in an informal language of "sufficiently expressive" mathematical constructs. Important exceptions are the assertion languages of Batz et al. (2021) and den Hartog and de Vink (2002). Their work shows that the language of arithmetic extended with suprema and infima of functions over the state space is relatively complete (in the sense of Cook (1978)) for weakest-preexpectation style reasoning about probabilistic programs without nondeterminism. That is, given a function $f$ definable in their language and a program $P$, they show that their language is expressive enough to represent the weakest pre-expectation of $f$ with respect to $P$. The need for suprema and infima is motivated by demonstrating simple probabilistic programs whose termination probabilities involve transcendental numbers. We note that arithmetic is sufficient for relative completeness because suprema and infima arising in probabilistic termination can be encoded (through quantifiers). We believe that presenting the rules directly in the language of arithmetic allows greater focus on the nature of the certificates required by the rules.
Note that for extensions of the programming model, such as with unbounded nondeterministic choice, arithmetic is no longer sufficient for relative completeness, and this holds already without probabilistic choice in the language (Hitchcock and Park, 1972; Apt and Kozen, 1986; Apt and Plotkin, 1986). While we focus on almost sure termination, there are related qualitative termination problems: _positive_ almost sure termination ($\mathsf{PAST}$) and _bounded_ almost sure termination ($\mathsf{BAST}$). These problems strengthen almost sure termination by requiring that the expected time to termination is finite. (Note that a program may be almost surely terminating but the expected run time may be infinite: consider a one-dimensional symmetric random walk where $0$ is an absorbing state.) The difference is that $\mathsf{PAST}$ allows the expected run time to depend on the scheduler that resolves nondeterminism, and $\mathsf{BAST}$ requires a global bound that holds for every scheduler. Sound and complete proof rules for $\mathsf{BAST}$ have been studied extensively (Bournez and Garnier, 2005; Fioriti and Hermanns, 2015; Fu and Chatterjee, 2019; Avanzini et al., 2020). More recently, a sound and complete proof rule for $\mathsf{PAST}$ was given (Majumdar and Sathiyanarayana, 2024). Completeness in these papers is semantic, and relative completeness in the sense of Cook was not studied. Our techniques would provide a relative completeness result for $\mathsf{BAST}$. In contrast, Majumdar and Sathiyanarayana (2024) show that $\mathsf{PAST}$ is $\Pi_{1}^{1}$-complete ($\mathsf{AST}$ and $\mathsf{BAST}$ are arithmetical, in comparison); thus Peano arithmetic would be insufficient as a (relatively complete) assertion language (see (Apt and Plotkin, 1986) for similar issues and appropriate assertion languages for nondeterministic programs with countable nondeterminism). Our results apply to discrete probabilistic choice. While discrete choice and computation captures many randomized algorithms, our proofs of completeness do not apply to programs with, e.g., sampling from continuous probability distributions. The use of real values introduces measure-theoretic complexities in the semantics (Takisaka et al., 2021). These can be overcome, but whether there is a sound and relatively complete proof rule for an appropriate assertion language remains open. While we focus on the theoretical aspects here, there is a large body of work on _synthesizing_ certificates automatically and tools for probabilistic verification (Chakarov and Sankaranarayanan, 2013; Chatterjee et al., 2022; McIver et al., 2018; Feng et al., 2023). We show a "compilation" of many existing rules into our rules; thus, such tools continue to work in our proof system.

## 2\. Probabilistic Programs

### 2.1. Syntax and Semantics

#### Syntax.

We work with _probabilistic control flow graphs_, a program model used by Chatterjee et al. (2022) to detail proof rules for quantitative termination. Variables in this model range over the rationals. Assignment statements and loop guards are terms and boolean combinations of atomic formulae expressible in the language of arithmetic. This is standard in program logics (Apt, 1981), and facilitates the use of the language of rational arithmetic with addition, multiplication, and order to make assertions about desirable program properties. For the sake of conciseness, we augment this assertion language with additional computable predicates as "syntactic sugar" in our proofs.
We interpret assertions over the standard model of rationals.

###### Definition 2.1 (Control Flow Graphs).

A Control Flow Graph ($\mathsf{CFG}$) $\mathcal{G}$ is a tuple $(L,V,l_{init},\mathbf{x}_{init},\mathord{\mapsto},G,\mathsf{Pr},\mathsf{Upd})$, where

* • $L$ is a finite set of program locations, partitioned into assignment, nondeterministic, and probabilistic locations $L_{A}$, $L_{N}$, and $L_{P}$, respectively.
* • $V=\\{x_{1},x_{2},\ldots x_{n}\\}$ is a finite set of program variables.
* • $l_{init}\in L$ is the initial program location, and $\mathbf{x}_{init}\in\mathbb{Q}^{V}$ is the initial variable valuation.
* • $\mathord{\mapsto}\subseteq L\times L$ is a finite set of transitions. If $\tau=(l,l^{\prime})\in\mathord{\mapsto}$, then $l$ and $l^{\prime}$ are respectively referred to as the source and target locations of $\tau$.
* • $G$ is a function mapping each $\tau\in\mathord{\mapsto}$ to a Boolean expression over $V\cup L$.
* • $\mathsf{Pr}$ is a function assigning each $(l,l^{\prime})\in\mathord{\mapsto}$ with $l\in L_{P}$ a fractional expression $p$ over the variables $V$ representing a rational probability value.
* • $\mathsf{Upd}$ is a map assigning each $(l,l^{\prime})\in\mathord{\mapsto}$ with $l\in L_{A}$ an update pair $(j,u)$ where $j\in\\{1,\ldots,|V|\\}$ is the target variable index and $u$ is an arithmetic expression over the variables $V$.
* • At assignment locations, there is at most one outgoing transition.
* • At probabilistic locations $l$, it must be that $\mathsf{Pr}(l,\\_)[\mathbf{x}]>0$ and $\sum G(l,\\_)[(l,\mathbf{x})]\times\mathsf{Pr}(l,\\_)[\mathbf{x}]=1$ over all transitions $(l,\\_)\in\mathord{\mapsto}$ for all $\mathbf{x}\in\mathbb{Q}^{V}$.

We use the boldface notation $\mathbf{x}$ for variable assignments and write $\mathbf{0}$ for the assignment that maps every variable to zero. $\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]$, $G(l,l^{\prime})[\mathbf{x}]$, and $\mathsf{Upd}(l,l^{\prime})[\mathbf{x}]$ refer to the output of the expressions $\mathsf{Pr}(l,l^{\prime})$, $G(l,l^{\prime})$, and $\mathsf{Upd}(l,l^{\prime})$ on the assignment $\mathbf{x}$. Note that the finiteness of $L$ implies that the branching at both nondeterministic and probabilistic locations is bounded. Without loss of generality, we assume simple structural conditions that ensure that every state has a successor. This follows similar assumptions made in prior work (Chatterjee et al., 2022). Observe that, while the probabilistic choice is simple, it is sufficient to model probabilistic Turing machines and some quite sophisticated probabilistic phenomena (Flajolet et al., 2011). However, we explicitly forbid unbounded nondeterministic choice and sampling from continuous distributions.

###### Remark 2.2.

While we use $\mathsf{CFG}$s as our formal model of programs, we could have equivalently used the probabilistic guarded command language ($\mathsf{pGCL}$). $\mathsf{pGCL}$ is the probabilistic extension of the Guarded Command Language of Dijkstra (1976), and is a convenient language for specifying probabilistic computation. There is a large body of work (Kaminski et al., 2019; Feng et al., 2023; Batz et al., 2021; McIver et al., 2018; McIver and Morgan, 2005) that uses $\mathsf{pGCL}$ syntax. Our choice of $\mathsf{CFG}$s follows the same choice made by Chatterjee et al. (2022) to describe quantitative termination. It is standard to compile $\mathsf{pGCL}$ programs into $\mathsf{CFG}$s and vice versa. For readability, we employ the syntax of $\mathsf{pGCL}$ in some of our examples.
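To fix intuitions, here is a rough Python rendering of Definition 2.1 (our own encoding, not the paper's; all names are ours). Guards, probabilities, and updates are kept as Python callables standing in for arithmetic expressions, and the successor enumeration anticipates the successor relation defined next.

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import Callable, Dict, List, Tuple

Valuation = Tuple[Fraction, ...]   # one entry per program variable

@dataclass
class CFG:
    """Sketch of a probabilistic control flow graph per Definition 2.1."""
    locations: List[str]
    kind: Dict[str, str]                   # 'A', 'N', or 'P' per location
    variables: List[str]
    l_init: str
    x_init: Valuation
    transitions: List[Tuple[str, str]]     # the relation |->
    guard: Dict[Tuple[str, str], Callable[[Valuation], bool]]
    prob: Dict[Tuple[str, str], Callable[[Valuation], Fraction]]  # l in L_P
    upd: Dict[Tuple[str, str], Tuple[int, Callable[[Valuation], Fraction]]]  # l in L_A

    def successors(self, l: str, x: Valuation):
        """Enumerate the successor states (l', x') enabled at (l, x).
        Only assignment locations change the valuation."""
        for (src, dst) in self.transitions:
            if src == l and self.guard[(src, dst)](x):
                if self.kind[l] == 'A' and (src, dst) in self.upd:
                    j, u = self.upd[(src, dst)]
                    yield dst, tuple(u(x) if i == j else v
                                     for i, v in enumerate(x))
                else:
                    yield dst, x
```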
#### States, Runs, and Reachable States.

Fix a $\mathsf{CFG}$ $\mathcal{G}=(L,V,l_{init},x_{init},\mathord{\mapsto},G,\mathsf{Pr},\mathsf{Upd})$. A _state_ is a tuple $(l,\mathbf{x})$, where $l\in L$ and $\mathbf{x}\in\mathbb{Q}^{V}$. A state $(l,\mathbf{x})$ is termed _assignment_ (resp., _nondeterministic_, or _probabilistic_) if the location $l$ is assignment (resp., nondeterministic, or probabilistic). We will refer to assignment locations $l$ where the updates of all transitions sourced at $l$ don't change the variable values as _deterministic_ locations. Accordingly, states $(l,\mathbf{x})$ are termed _deterministic_ when $l$ is deterministic. A transition $(l,l^{\prime})\in\mathord{\mapsto}$ is _enabled_ at a state $(l,\mathbf{x})$ if the guard $G(l,l^{\prime})$ evaluates to true under $(l,\mathbf{x})$. The vector $\mathbf{x}^{\prime}$ is the _result_ of the update pair $(j,u)$ from the state $(l,\mathbf{x})$ if (1) for all $i\neq j$, $\mathbf{x}^{\prime}[i]=\mathbf{x}[i]$, and (2) $\mathbf{x}^{\prime}[j]=u(\mathbf{x})$. A state $(l^{\prime},\mathbf{x}^{\prime})$ is a _successor_ to $(l,\mathbf{x})$ if the transition $(l,l^{\prime})\in\mathord{\mapsto}$ is enabled at $(l,\mathbf{x})$ and $\mathbf{x}^{\prime}$ is the result of $\mathsf{Upd}(l,l^{\prime})$ on $\mathbf{x}$. A _finite path_ is a sequence of states $(l_{1},\mathbf{x}_{1}),(l_{2},\mathbf{x}_{2}),\ldots,(l_{n},\mathbf{x}_{n})$ with $(l_{k+1},\mathbf{x}_{k+1})$ being a successor to $(l_{k},\mathbf{x}_{k})$. A _run_ (or _execution_) of $\mathcal{G}$ is a sequence of states that (1) begins with the initial state $(l_{init},\mathbf{x}_{init})$, and (2) only induces finite paths as prefixes. A state $(l^{\prime},\mathbf{x}^{\prime})$ is said to be _reachable_ from a state $(l,\mathbf{x})$ if there exists a finite path beginning at $(l,\mathbf{x})$ and ending at $(l^{\prime},\mathbf{x}^{\prime})$. We write $\mathsf{Reach}(\mathcal{G},(l,\mathbf{x}))$ for the set of states reachable from $(l,\mathbf{x})$; we simply write $\mathsf{Reach}(\mathcal{G})$ when the initial state is $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$. A $\mathsf{CFG}$ is said to be _finite state_ if the set of states reachable from its initial state is finite.

#### Probability Theory

We now introduce some basic probability-theoretic notions. The tuple $(X,\mathcal{F},\mathbb{P})$ is called a _probability space_ where $X$ is the sample space, $\mathcal{F}$ is a $\sigma$-algebra over $X$, and $\mathbb{P}$ is the probability measure over $\mathcal{F}$. The sequence $(\mathcal{F}_{n})$, where $n$ ranges over $\mathbb{N}$, is a _filtration_ of the probability space $(X,\mathcal{F},\mathbb{P})$ if each $\mathcal{F}_{n}$ is a sub $\sigma$-algebra of $\mathcal{F}$ and $\mathcal{F}_{i}\subseteq\mathcal{F}_{j}$ for all $i\leq j$. A _random variable_ is a measurable function from $X$ to $\mathbb{R}^{+}$, the set of positive real numbers. The _expected value_ of the random variable $X$ is denoted by $\mathbb{E}[X]$. A _stochastic process_ over the filtered probability space is a sequence of random variables $(X_{n})$ such that each $X_{n}$ is measurable over $\mathcal{F}_{n}$. A _supermartingale_ is a stochastic process $(X_{n})$ that does not increase in conditional expectation, i.e., $\mathbb{E}[X_{n+1}\mid\mathcal{F}_{n}]\leq X_{n}$ for each $n\in\mathbb{N}$.
We will abuse notation slightly and refer to any function over the state space of a $\mathsf{CFG}$ that doesn't increase in expectation at each execution step as a _supermartingale function_. We shall make use of the following version of Doob's Martingale Convergence Theorem.

###### Theorem 2.3 ((Doob, 1953)).

If a supermartingale $(X_{n})$ is bounded below, then there almost-surely exists a random variable $X_{\infty}$ such that $\mathbb{P}\left(X_{\infty}=\lim_{n\to\infty}X_{n}\right)=1\quad\text{and}\quad\mathbb{E}[X_{\infty}]\leq\mathbb{E}[X_{0}]$

#### Schedulers and Probabilistic Semantics.

Now, we introduce the operational semantics of our programs. These are standard operational semantics that can be found in many other places (Baier and Katoen, 2008; Chatterjee et al., 2022). A _scheduler_ is a mapping from finite paths ending at nondeterministic states to successors of these states. Note that, since the state space for any $\mathsf{CFG}$ is countable, we do not require measurability conditions on the scheduler. The semantics of $\mathcal{G}$ is understood through a probability space over the runs of $\mathcal{G}$. Formally, let $\mathsf{Runs}_{\mathcal{G}}$ be the collection of all executions of $\mathcal{G}$. Further, for a finite path $\pi$, denote by $\mathsf{Cyl}_{\mathcal{G}}(\pi)$ the _cylinder set_ containing all runs $\rho\in\mathsf{Runs}_{\mathcal{G}}$ such that $\pi$ is a prefix of $\rho$. Now, call $\mathcal{F}_{\mathcal{G}}$ the smallest $\sigma$-algebra on $\mathsf{Runs}_{\mathcal{G}}$ containing all cylinder sets of finite paths of $\mathcal{G}$. A scheduler $\mathfrak{s}$ induces a probability space over the collection of all runs of the $\mathsf{CFG}$ $\mathcal{G}$. A finite path (or run) $\pi$ is said to be _consistent_ with $\mathfrak{s}$ if for every prefix $\pi^{\prime}$ of $\pi$ ending at a nondeterministic state, the finite path (or run) obtained by appending the successor state $\mathfrak{s}(\pi^{\prime})$ to $\pi^{\prime}$ is a prefix of $\pi$. A scheduler is said to induce a finite path (or run) if the path (or run) is consistent with the scheduler. The semantics of the $\mathsf{CFG}$ $\mathcal{G}$ under the scheduler $\mathfrak{s}$ is captured by the probability space $(\mathsf{Runs}_{\mathcal{G}},\mathcal{F}_{\mathcal{G}},\mathbb{P}_{\mathfrak{s}})$, where for every consistent finite path $\pi=((l_{1},\mathbf{x}_{1}),(l_{2},\mathbf{x}_{2}),\ldots(l_{n},\mathbf{x}_{n}))$ with probabilistic locations at indices $i_{1},i_{2},\ldots i_{m}$, $\mathbb{P}_{\mathfrak{s}}(\pi)=\mathsf{Pr}(l_{i_{1}},l_{i_{1}+1})[\mathbf{x}_{i_{1}}]\times\cdots\times\mathsf{Pr}(l_{i_{m}},l_{i_{m}+1})[\mathbf{x}_{i_{m}}]$ We analogously define a probability space $(\mathsf{Runs}_{\mathcal{G}(l,\mathbf{x})},\mathcal{F}_{\mathcal{G}(l,\mathbf{x})},\mathbb{P}_{\mathfrak{s}})$ to refer to the probability space induced by the scheduler $\mathfrak{s}$ on the $\mathsf{CFG}$ obtained from $\mathcal{G}$ by setting the initial state to $(l,\mathbf{x})$. For a scheduler $\mathfrak{s}$, the _canonical filtration_ of $\mathcal{G}$ is the sequence $(\mathcal{F}_{n})_{n\in\mathbb{N}}$ such that $\mathcal{F}_{n}$ is the smallest sub-$\sigma$-algebra of $\mathcal{F}_{\mathcal{G}}$ that contains the cylinder sets $\mathsf{Cyl}_{\mathcal{G}}(\pi_{\leq n})$ of all finite paths $\pi_{\leq n}$ of length at most $n$.
Under this filtration, the semantics of $\mathcal{G}$ under $\mathfrak{s}$ can also be viewed as a stochastic process $(X^{\mathfrak{s}}_{n})_{n\in\mathbb{N}}$ measurable against $(\mathcal{F}_{n})_{n\in\mathbb{N}}$ such that $X^{\mathfrak{s}}_{n}$ takes on an encoding of the state of the execution after $n$ steps. When convenient, we will use this view as well.

### 2.2. The Termination Problem

Fix a $\mathsf{CFG}$ $\mathcal{G}$ and a scheduler $\mathfrak{s}$. Let $l_{\mathit{out}}$ be a distinguished location we refer to as _terminal_. Denote by $\Diamond(l_{\mathit{out}},\mathbf{0})$ the set of all runs that reach $(l_{\mathit{out}},\mathbf{0})$; we call these the _terminating runs_. Observe that $\Diamond(l_{\mathit{out}},\mathbf{0})$ is measurable. The $\mathsf{CFG}$ $\mathcal{G}$ is said to _terminate_ with probability $p$ under the scheduler $\mathfrak{s}$ if $\mathbb{P}_{\mathfrak{s}}[\Diamond(l_{\mathit{out}},\mathbf{0})]=p$.

###### Definition 2.4 (Termination Probability).

Let $\mathcal{G}$ be a $\mathsf{CFG}$ and for a scheduler $\mathfrak{s}$, let $(\mathsf{Runs}_{\mathcal{G}},\mathcal{F}_{\mathcal{G}},\mathbb{P}_{\mathfrak{s}})$ be the probability space induced by $\mathfrak{s}$ on the executions of $\mathcal{G}$. The termination probability of $\mathcal{G}$, denoted by $\Pr_{\mathrm{term}}(\mathcal{G})$, is the infimum of $\mathbb{P}_{\mathfrak{s}}[\Diamond(l_{\mathit{out}},\mathbf{0})]$ over all schedulers $\mathfrak{s}$.

We also use the notation $\Pr_{\mathrm{term}}(\mathcal{G}(\sigma))$ to refer to the termination probability of $\mathcal{G}$ if its initial state were changed to $\sigma$. A $\mathsf{CFG}$ is said to be _almost surely terminating_ ($\mathsf{AST}$) if its termination probability is $1$. Our work is on sound and complete proof rules for deciding, for a $\mathsf{CFG}$ $\mathcal{G}$: (1) the $\mathsf{AST}$ problem, i.e., whether $\mathcal{G}$ is almost surely terminating; (2) the _Lower Bound_ problem, i.e., whether $\Pr_{\mathrm{term}}(\mathcal{G})$ exceeds some $p<1$; and (3) the _Upper Bound_ problem, i.e., whether $\Pr_{\mathrm{term}}(\mathcal{G})$ is bounded above by some $p>0$. We remark that, for the lower and upper bound problems, the proof rules we describe are applicable to any number $p$ that is representable in our program logic. This means that $p$ can take on irrational and transcendental values; this is important, as termination probabilities can often take on such values (Batz et al., 2021; Flajolet et al., 2011). We will elaborate in Section 2.4. Note that, while we define termination for a specific state $(l_{\mathit{out}},\mathbf{0})$, more general termination conditions can be reduced to this case by a syntactic modification.

x, y $\coloneqq$ 1, 1
while (x $\neq$ 0 $\lor$ y $\neq$ 0):
  { x $\coloneqq$ x + 1 $\oplus_{\frac{1}{2}}$ x $\coloneqq$ x - 1 } $\oplus_{\frac{1}{2}}$
  { y $\coloneqq$ y + 1 $\oplus_{\frac{1}{2}}$ y $\coloneqq$ y - 1 }

Prg. 1. The 2D symmetric random walker. The symbol $\oplus$ is a probabilistic choice operator.

###### Example 2.5 (Symmetric Random Walk).

A $d$-dimensional _symmetric random walk_ has $d$ integer variables $x_{1},\ldots,x_{d}$. Initially, all variables are $1$. In each step, the program updates the variables to move to a "nearest neighbor" in the $d$-dimensional lattice $\mathbb{Q}^{d}$; that is, the program picks uniformly at random one of the variables and an element in $\\{-1,+1\\}$, and adds the element to the chosen variable. Program 1 shows the code for $d=2$.
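As a quick empirical illustration (ours, not part of the paper), one can simulate Program 1 with only the Python standard library and watch the empirical termination frequency creep toward $1$. The cap on steps is unavoidable: the walk terminates almost surely but with infinite expected hitting time, so the estimate is only a lower bound.

```python
import random

def simulate_2drw(max_steps=10**6, runs=1000, seed=0):
    """Monte Carlo estimate of how often the 2D symmetric random walker
    (Program 1) reaches (0, 0) within max_steps steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        x, y = 1, 1
        for _ in range(max_steps):
            if x == 0 and y == 0:
                hits += 1
                break
            if rng.random() < 0.5:                       # pick a coordinate...
                x += 1 if rng.random() < 0.5 else -1     # ...and a direction
            else:
                y += 1 if rng.random() < 0.5 else -1
    return hits / runs

print(simulate_2drw())   # approaches 1 as max_steps grows (slowly!)
```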
It is well known (Pólya, 1921) that the symmetric random walk is recurrent in dimensions $1$ and $2$, and transient otherwise. Thus, if we set any element in the lattice, say the origin, to be a terminal state, then the program is almost surely terminating in dimensions $1$ and $2$ but not almost surely terminating when $d\geq 3$. We shall refer to the $d=1$ and $d=2$ cases as 1DRW and 2DRW, respectively. ∎

### 2.3. The Unrolling Lemma

Let $\mathcal{G}$ be a $\mathsf{CFG}$ such that $\Pr_{\mathrm{term}}(\mathcal{G})\geq p$ for some rational $p>0$. Fix a scheduler $\mathfrak{s}$. Let $(\pi_{1},\pi_{2},\ldots)$ be an ordering of the terminating runs of $\mathcal{G}$ consistent with $\mathfrak{s}$ such that $|\pi_{1}|\leq|\pi_{2}|\leq\cdots$. For some $\epsilon>0$, let $N_{\epsilon}$ be the smallest index such that $\mathbb{P}_{\mathfrak{s}}(\pi_{1})+\cdots+\mathbb{P}_{\mathfrak{s}}(\pi_{N_{\epsilon}})\geq p-\epsilon$ where $\mathbb{P}_{\mathfrak{s}}$ is the probability measure induced by $\mathfrak{s}$ over the set of all runs of $\mathcal{G}$. We call $|\pi_{N_{\epsilon}}|$ the _required simulation time_ of $\mathcal{G}$ under $\mathfrak{s}$ to assimilate a termination probability of $p-\epsilon$. The required simulation time of $\mathfrak{s}$ is simply the length of the longest terminating run that must be accounted for in the termination probability series for it to cross $p-\epsilon$. Define the simulation time of $\mathcal{G}$ w.r.t. $\epsilon$ as the supremum over all schedulers $\mathfrak{s}$ of the required simulation time of $\mathcal{G}$ under $\mathfrak{s}$ and $\epsilon$. The following lemma is at the core of showing that the almost sure termination problem is $\Pi^{0}_{2}$-complete (Majumdar and Sathiyanarayana, 2023).

###### Lemma 2.6 (Unrolling Lemma (Majumdar and Sathiyanarayana, 2023)).

Let $\mathcal{G}$ be a $\mathsf{CFG}$ such that $\Pr_{\mathrm{term}}(\mathcal{G})\geq p$. For any $\epsilon$, the simulation time of $\mathcal{G}$ w.r.t. $\epsilon$ is bounded above.

Lemma 2.6 is a generalization of Lemma B.3 of Majumdar and Sathiyanarayana (2023). It holds for $\mathsf{CFG}$s because the branching at nondeterministic locations is bounded. To prove the unrolling lemma, for each $m\in\mathbb{N}$, we consider unrollings of $\mathcal{G}$ for $m$ steps, running under partial schedules that resolve nondeterministic choices for up to $m$ steps. Partial schedules are naturally ordered into a tree, where a partial schedule $\mathfrak{s}$ is extended by $\mathfrak{s}^{\prime}$ if $\mathfrak{s}^{\prime}$ agrees with $\mathfrak{s}$ when restricted to the domain of $\mathfrak{s}$. An infinite path in this tree defines a scheduler. For each scheduler $\mathfrak{s}$, we mark the $k$-th node in its path if $k$ is the minimum number such that the $k$-step unrolled program amasses termination probability at least $p-\epsilon$. If two schedulers agree up to $k$ steps, then they both mark the same node. The key observation is that, since the nondeterminism is finite-branching, the scheduler tree is finite-branching. Thus, if we cut off the tree at marked nodes and still have an infinite number of incomparable marked nodes, there must be an infinite path in the tree that is not marked. But this is a contradiction, because this infinite path corresponds to a scheduler that never amasses $p-\epsilon$ probability mass for termination.
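To see the unrolling idea concretely on a scheduler-free example (our own sketch, not from the paper), the termination mass of the $k$-step unrolling of 1DRW started at $1$ can be computed exactly by dynamic programming over the distribution of non-terminated positions; since there is no nondeterminism, the infimum over schedulers is trivial.

```python
from collections import defaultdict

def unrolled_mass(k):
    """Probability that the 1D symmetric random walk started at 1
    hits the terminal state 0 within k steps."""
    dist = {1: 1.0}          # mass on not-yet-terminated positions
    terminated = 0.0
    for _ in range(k):
        nxt = defaultdict(float)
        for pos, p in dist.items():
            for step in (-1, 1):
                if pos + step == 0:
                    terminated += p / 2
                else:
                    nxt[pos + step] += p / 2
        dist = nxt
    return terminated

for k in (10, 100, 1000):
    print(k, unrolled_mass(k))   # monotonically increases toward 1
```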
Corollary 2.7 follows directly, and is used multiple times in the proofs of the soundness and completeness of our rules.

###### Corollary 2.7.

Let $\sigma$ be a state of a $\mathsf{CFG}$ $\mathcal{G}$. Suppose $\Pr_{\mathrm{term}}(\mathcal{G}(\sigma))>0$. Then, varied across schedulers, there is an upper bound on the length of the shortest consistent terminal run from $\sigma$.

###### Proof.

For a scheduler $\mathfrak{s}$, let $\pi_{\mathfrak{s}}$ be the shortest terminal run of $\mathcal{G}(\sigma)$ consistent with $\mathfrak{s}$. The simulation time to assimilate a termination probability of $\epsilon$ for some $0<\epsilon<\Pr_{\mathrm{term}}(\mathcal{G}(\sigma))$ under scheduler $\mathfrak{s}$ is necessarily at least as large as $|\pi_{\mathfrak{s}}|$. If the collection of lengths $|\pi_{\mathfrak{s}}|$ across schedulers $\mathfrak{s}$ wasn't bounded, then this simulation time is unbounded. This contradicts the unrolling lemma. ∎

### 2.4. Assertion Language and Program Logic

Our language of choice for specifying assertions is the language of arithmetic with addition, multiplication, and order, interpreted over the domain of rationals. We fix the interpretation model for our assertions as the standard model of rationals. Refer to this interpretation by $\mathsf{I}_{\mathbb{Q}}$. (Instead of fixing $\mathsf{I}_{\mathbb{Q}}$, one can use any _arithmetical structure_ (Harel et al., 2000) to specify and interpret assertions. All our proof rules will remain sound and relatively complete with this change.) Let $\mathsf{Th}(\mathbb{Q})$ denote the _theory of rationals_, i.e., the collection of assertions that are true in the standard model of rationals. The evaluation of our assertions is tantamount to their implication by $\mathsf{Th}(\mathbb{Q})$. All proof techniques we present in our work are relative to complete proof systems for $\mathsf{Th}(\mathbb{Q})$. Assertions are evaluated at program states. Fix a $\mathsf{CFG}$ $\mathcal{G}$ with transition relation $\mathord{\mapsto}_{\mathcal{G}}$. A state $\sigma$ of $\mathcal{G}$ _satisfies_ an assertion $\varphi$ if the interpretation $\mathsf{I}_{\mathbb{Q}}$ augmented with the variable valuation encoded in $\sigma$ models $\varphi$. We denote this by $\sigma\vDash\varphi$. An assertion $\varphi$ is _valid_ if $\sigma\vDash\varphi$ for all states $\sigma$. Valid assertions are contained in $\mathsf{Th}(\mathbb{Q})$. We employ a program logic inspired by the seminal work of Floyd (1993). Statements in our logic affix assertions as preconditions and postconditions to transitions in $\mathcal{G}$. For example, the transition $\tau\in\mathord{\mapsto}_{\mathcal{G}}$ could be affixed a precondition $\varphi_{\tau}$ and postcondition $\psi_{\tau}$ to yield the sentence $\\{\varphi_{\tau}\\}\tau\\{\psi_{\tau}\\}$. The precondition $\varphi_{\tau}$ is evaluated at the program state before taking $\tau$, and the postcondition $\psi_{\tau}$ is evaluated at states reached immediately after $\tau$. The sentence $\\{\varphi_{\tau}\\}\tau\\{\psi_{\tau}\\}$ is _true_ for $\mathcal{G}$ if for every state $\sigma$ with $\sigma\vDash\varphi_{\tau}$, if $\tau$ is enabled at $\sigma$ and $\sigma^{\prime}$ is a successor of $\sigma$ through $\tau$, then $\sigma^{\prime}\vDash\psi_{\tau}$. We use the notion of _inductive invariants_ in our proof rules. An _inductive invariant_ is an assertion with $n+1$ free variables, the first ranging over $L$ and the others over $\mathbb{Q}$, that is closed under the successor operation.
That is, an assertion $\mathsf{Inv}$ is an inductive invariant if, whenever $(l,\mathbf{x})$ satisfies $\mathsf{Inv}$, and $(l^{\prime},\mathbf{x}^{\prime})$ is a successor to $(l,\mathbf{x})$, then $(l^{\prime},\mathbf{x}^{\prime})$ satisfies $\mathsf{Inv}$. It follows that if $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$ satisfies $\mathsf{Inv}$, then every reachable state satisfies $\mathsf{Inv}$. Floyd (1993) specified axioms for a proof system over this program logic. _Proof rules_ extend this system by enabling the deduction of complicated program properties, such as termination. These rules are composed of _antecedents_ and _consequents_. Antecedents are finite collections of statements written in the program logic. Consequents detail properties of the program on which the antecedents are evaluated. _Soundness_ of a proof rule implies that if the antecedents are true for a program $\mathcal{G}$, then the consequents hold for $\mathcal{G}$. _Completeness_ of a proof rule implies that if the consequents are true for some $\mathcal{G}$, then one can come up with proofs for the antecedents of the rule in the underlying proof system. The completeness of all proof rules in this work is dependent on the existence of a complete proof system for $\mathsf{Th}(\mathbb{Q})$. Such proof rules are said to be complete _relative to a proof system for $\mathsf{Th}(\mathbb{Q})$_. Relative completeness of this kind is standard in program logics. To show the relative completeness of our proof rules, we will need to be able to encode _computable relations_ in our assertion language. A relation is computable if its characteristic function is decidable. It is known that the theory of arithmetic interpreted over natural numbers can encode all computable relations. Let $\mathsf{I}_{\mathbb{N}}$ refer to the interpretation model of the standard model of naturals. Denote by $\mathsf{Th}(\mathbb{N})$ the collection of all true assertions under $\mathsf{I}_{\mathbb{N}}$. $\mathsf{Th}(\mathbb{N})$ is generally referred to as the theory of natural numbers. Thus, for each computable relation $R(x_{1},x_{2},\ldots x_{n})$, there is an assertion $\varphi_{R}(x_{1},x_{2},\ldots x_{n})$ that holds in $\mathsf{I}_{\mathbb{N}}$ precisely on the tuples satisfying $R$. To represent computable relations in $\mathsf{Th}(\mathbb{Q})$, we use a result by Robinson (1949).

###### Theorem 2.8 (Robinson (1949)).

$\mathbb{N}$ is definable in $\mathsf{Th}(\mathbb{Q})$.

We refer to the assertion that encodes $\mathbb{N}$ by $\mathsf{Nat}$. Therefore, $\mathsf{Nat}(x)$ is true in $\mathsf{I}_{\mathbb{Q}}$ _iff_ $x\in\mathbb{N}$. All computable relations can be encoded in our assertion language through liberal usage of $\mathsf{Nat}$. An important implication is that termination probabilities are expressible in our assertion language.

###### Lemma 2.9.

For a $\mathsf{CFG}$ $\mathcal{G}$ and a $p\in[0,1]$ with $\Pr_{\mathrm{term}}(\mathcal{G})=p$, there is an assertion $\psi(x)$ with one free variable $x$ such that $\mathsf{Th}(\mathbb{Q})\vDash\psi(x)\Leftrightarrow x\leq p$.

###### Proof.

We know that $\Pr_{\mathrm{term}}(\mathcal{G})\geq p$ _iff_ $\Pr_{\mathrm{term}}(\mathcal{G})\geq p-\epsilon$ for all $\epsilon>0$. The unrolling lemma implies that for all $\epsilon>0$, there is a $k\in\mathbb{N}$ such that the probability mass of the $k$-unrolled program is at least $p-\epsilon$. Finite unrollings of $\mathcal{G}$ are, by definition, computable, and checking the probability of termination amassed in this finite unrolling is also computable; see Kaminski et al.
(2019) for details. This means that there is a computable relation $R(\epsilon,k)$ representing this relationship between rationals $\epsilon$ and naturals $k$. Such computable relations are representable in $\mathsf{Th}(\mathbb{Q})$ through Theorem 2.8, completing the proof. ∎

Notice that while the termination probabilities $p$ are real numbers, the lower bounds verified by the assertion $\psi$ in the above lemma are entirely rational. However, by representing the set of rational numbers under $p$, $\psi$ has effectively captured the Dedekind cut of $p$. This expressibility shows how irrational lower bounds on termination probabilities can be deduced using our proof techniques.

## 3\. Almost-Sure Termination: Martingales

In all rules in this work, we fix a $\mathsf{CFG}$ $\mathcal{G}=(L,V,l_{init},\mathbf{x}_{init},\mathord{\mapsto},G,\mathsf{Pr},\mathsf{Upd})$. We also abuse notation slightly and use the inductive invariant $\mathsf{Inv}$ as a shorthand for all states of $\mathcal{G}$ satisfying the predicate $\mathsf{Inv}$. Our proof rules consist of sets and functions over the state spaces that satisfy certain properties. Each of these entities must be representable in our assertion language; therefore, they must be arithmetical expressions over the program variables and program locations. Instead of specifying each condition in our rules as formal statements in our program logic, we directly describe the properties these entities must satisfy. We do so to emphasize these entities themselves over the formalism surrounding them. It is nevertheless possible to write each of the following proof rules as finite sets of statements in the program logic. Recall that a proof rule is _sound_ if, whenever we can find arithmetical expressions in our assertion language that satisfy the conditions outlined in the premise of a rule, the conclusion of the rule holds. A proof rule is _relatively complete_ if, whenever the conclusion holds (e.g., a program $\mathcal{G}$ is $\mathsf{AST}$), we can find certificates in the assertion language that satisfy all the premises.

### 3.1. McIver and Morgan's Variant Rule

We start with a well-known rule for almost-sure termination from (McIver and Morgan, 2005). The rule is sound but complete only for finite-state programs (McIver and Morgan, 2005, Lemma 7.6.1). If there is

(1) an inductive invariant $\mathsf{Inv}$ containing the initial state $(l_{init},\mathbf{x}_{init})$,
(2) a function $U:\mathsf{Inv}\to\mathbb{Z}$,
(3) bounds $\mathsf{Lo}$ and $\mathsf{Hi}$ such that for all states $(l,\mathbf{x})\in\mathsf{Inv}$, $\mathsf{Lo}\leq U(l,\mathbf{x})<\mathsf{Hi}$, and
(4) an $\epsilon>0$,

such that, for each state $(l,\mathbf{x})\in\mathsf{Inv}$,

(a) if $(l,\mathbf{x})$ is a terminal state, $U(l,\mathbf{x})=0$,
(b) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, $U(l^{\prime},\mathbf{x}^{\prime})<U(l,\mathbf{x})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$,
(c) if $(l,\mathbf{x})$ is a probabilistic state, $\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]>\epsilon$ over all successor states $(l^{\prime},\mathbf{x}^{\prime})$ with $U(l^{\prime},\mathbf{x}^{\prime})<U(l,\mathbf{x})$,

then $\mathcal{G}$ is $\mathsf{AST}$.

###### Lemma 3.1 (McIver and Morgan (2005)).

Section 3.1 is sound for all $\mathsf{AST}$ programs. It is relatively complete for finite-state $\mathsf{AST}$ $\mathsf{CFG}$s.
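Since the rule's conditions are all local, they can be checked mechanically on an explicitly given finite state space. The sketch below (our own encoding, not the paper's) does exactly that; the bounds $\mathsf{Lo}$ and $\mathsf{Hi}$ exist trivially because the space is finite.

```python
def check_variant_rule(states, terminal, succ, prob, U, eps):
    """Check conditions (a)-(c) of Section 3.1 on an explicit finite state
    space (our own encoding). succ[s] lists the successors of assignment /
    nondeterministic states; prob[s] maps each successor of a probabilistic
    state to its probability."""
    for s in states:
        if s == terminal:
            if U[s] != 0:                                  # condition (a)
                return False
        elif s in prob:                                    # condition (c)
            if sum(p for t, p in prob[s].items() if U[t] < U[s]) <= eps:
                return False
        else:                                              # condition (b)
            if any(U[t] >= U[s] for t in succ[s]):
                return False
    return True

# A toy chain 2 -> 1 -> 0: from each positive state, move down with
# probability 1/2, otherwise move (or stay) back up.
prob = {1: {0: 0.5, 2: 0.5}, 2: {1: 0.5, 2: 0.5}}
U = {0: 0, 1: 1, 2: 2}
print(check_variant_rule([0, 1, 2], 0, {}, prob, U, eps=0.25))   # True
```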
While McIver and Morgan (2005) claim completeness and not relative completeness, their proof trivially induces relative completeness. This rule is not complete, however. This is because, if the rule is applicable, the program is guaranteed a terminal run of length at most $\mathsf{Hi}-\mathsf{Lo}$ from any state. But the 1D random walk (outlined in Example 2.5) does not satisfy this property, even though it terminates almost surely. Over the years, McIver and Morgan's proof rule has been extended many times (McIver et al., 2018; Huang et al., 2018). The most significant extension is the proof rule of McIver et al. (2018), where they require the function $U$ to additionally be a supermartingale. None of these extensions has been proven complete. Because we do not use ideas from these extensions, we do not present them here.

### 3.2. Our Rule

We present a martingale-based proof rule for $\mathsf{AST}$ that exploits the fact that $\mathsf{AST}$ programs, when repeatedly run, are recurrent. If there exist

(1) an inductive invariant $\mathsf{Inv}$ containing the initial state,
(2) a set $A\subset\mathsf{Inv}$ containing the terminal state,
(3) a supermartingale function $V:\mathsf{Inv}\to\mathbb{R}$ that assigns $0$ to the terminal state and at all states $(l,\mathbf{x})\in\mathsf{Inv}\setminus A$ satisfies
(a) $V(l,\mathbf{x})>0$,
(b) $V(l,\mathbf{x})>V(l_{A},\mathbf{x}_{A})$ for each $(l_{A},\mathbf{x}_{A})\in A$,
(c) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, then $V(l,\mathbf{x})\geq V(l^{\prime},\mathbf{x}^{\prime})$ for all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$, and
(d) if $(l,\mathbf{x})$ is a probabilistic state, then $V(l,\mathbf{x})\geq\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]\,V(l^{\prime},\mathbf{x}^{\prime})$ over all successor states $(l^{\prime},\mathbf{x}^{\prime})$,
(4) a variant function $U:\mathsf{Inv}\to\mathbb{N}$ that
(a) assigns $0$ to the terminal state,
(b) ensures that at nondeterministic and assignment states $(l,\mathbf{x})\in\mathsf{Inv}$, $U(l,\mathbf{x})>U(l^{\prime},\mathbf{x}^{\prime})$ for all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$, and
(c) satisfies the following compatibility criteria with the sublevel sets $V_{\leq r}=\\{\sigma\in\mathsf{Inv}\mid V(\sigma)\leq r\\}$ for each $r\in\mathbb{R}$:
(i) the set $\\{u\in\mathbb{N}\mid\sigma\in V_{\leq r}\land u=U(\sigma)\\}$ is bounded, and
(ii) there exists an $\epsilon_{r}>0$ such that, for all probabilistic states $(l,\mathbf{x})\in V_{\leq r}$, the sum $\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]>\epsilon_{r}$ over all successor states $(l^{\prime},\mathbf{x}^{\prime})$ with $U(l^{\prime},\mathbf{x}^{\prime})<U(l,\mathbf{x})$.

Under these conditions, $\mathcal{G}$ is $\mathsf{AST}$.

In this rule, $U$ is meant to play the role of the variant function from Section 3.1. $A$ is meant to form a "ball" around the terminal state; it is useful in applications where the supermartingale properties of $V$ are difficult to establish at all states. If the execution were to be restricted within this ball $A$, the rule makes it easy to establish almost-sure termination. This is because while $V$ must only be a supermartingale outside of $A$, the variant $U$ must still decrease within $A$. Observe that $A$ must be a strict subset of $\mathsf{Inv}$; this is to enforce an upper bound on the collection of $V$-values of states in $A$. It's easy to see that this rule reduces to Section 3.1 if the supermartingale $V$ were bounded.
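As a concrete instantiation (our own, for illustration), consider 1DRW viewed as a chain on $\mathbb{N}$ with terminal state $0$. Take $\mathsf{Inv}=\mathbb{N}$, $A=\\{0\\}$, $V(x)=x$ and $U(x)=x$. Then $V$ is an unbounded non-negative supermartingale (indeed a martingale, since $\frac{1}{2}(x-1)+\frac{1}{2}(x+1)=x$), every sublevel set $V_{\leq r}$ is finite so $U$ is bounded on it, and $\epsilon_{r}=\frac{1}{4}$ witnesses condition (4)(c)(ii) because every probabilistic state $x>0$ steps to $x-1$ with probability $\frac{1}{2}$. Section 3.2 therefore certifies that 1DRW is $\mathsf{AST}$, even though, as noted above, Section 3.1 cannot apply to it.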
Intuitively, $V$ can be thought of as a measure of _relative likelihood_. The probability that a transition increases $V$ by an amount $v$ reduces as $v$ increases. Unlikely transitions are associated with greater increments to $V$, and (relatively) unlikely states have greater $V$ values. Separately, the variant $U$ reprises its role from Section 3.1: it effectively measures the shortest distance to a terminal state. At a high level, Section 3.2 works for the following reasons. Suppose $V$ is unbounded and executions begin at some initial state $\sigma_{0}\in\mathsf{Inv}$. The supermartingale property of $V$ implies that from $\sigma_{0}$, the probability of reaching a state $\sigma$ with $V(\sigma)>V(\sigma_{0})$ approaches $0$ as $V(\sigma)$ grows to $+\infty$. Now, fix an unlikely state $\sigma$ with $V(\sigma)\gg V(\sigma_{0})$. Let's now restrict our attention to the executions that remain in states $\gamma\in\mathsf{Inv}$ with $V(\gamma)\leq V(\sigma)$. The compatibility conditions satisfied by the variant $U$ with $V$ at the sublevel set $V_{\leq V(\sigma)}$ imply the almost-sure termination of these executions. The remaining executions must reach some unlikely state $\gamma^{\prime}$ with $V(\gamma^{\prime})\geq V(\sigma)$. Thus, since the probability of reaching an unlikely state $\gamma^{\prime}$ decreases the "further away" (from the perspective of $V$) it is, the probability of terminating approaches $1$. Since $V$ is unbounded, the probability of termination is $1$.

_Remark._ We note that our rule is quite similar to the $\mathsf{AST}$ rule of McIver et al. (2018). Their rule consisted of a single supermartingale $V$ that, with the help of a few antitone functions, also exhibited the properties of a distance variant. In other words, they combined the duties of the functions $V$ and $U$ into a single function $V$. It is not known if their rule is complete.

###### Lemma 3.2 (Soundness).

Section 3.2 is sound.

###### Proof.

Let us first dispense with the case where $V$ is bounded. If $V$ is bounded, the compatibility criteria force a bound on the variant function $U$. The soundness of Section 3.1 implies $\mathcal{G}\in\mathsf{AST}$. Therefore, from now on, $V$ is assumed to be unbounded. Denote the initial state by $\sigma_{0}$. For each $n\in\mathbb{N}$, define $\Pi_{n}$ to be the collection of runs from $\sigma_{0}$ that reach a maximum $V$ value of $n$. This means that for each state $\sigma$ encountered in executions in $\Pi_{n}$, $V(\sigma)\leq n$. Define $\Pi_{\infty}$ to be the remaining collection of executions beginning at $\sigma_{0}$ that don't have a bound on the $V$ values that they reach. This means that for each execution $\pi\in\Pi_{\infty}$ and each $n\in\mathbb{N}$, there are states $\sigma\in\pi$ such that $V(\sigma)>n$. We have thus partitioned the collection of executions of the $\mathsf{CFG}$ $\mathcal{G}$ into $\Pi_{\infty}\cup(\bigcup_{i\in\mathbb{N}}\Pi_{i})$. We will now argue that under every scheduler, the probability measure of all non-terminating executions in each $\Pi_{n}$ is $0$. By definition, all executions in $\Pi_{n}$ lie entirely within the sublevel set $V_{\leq n}$. The compatibility of $U$ with $V_{\leq n}$ implies that the variant $U$ is bounded across states in $\Pi_{n}$. Consider a $\mathsf{CFG}$ $\mathcal{G}_{\leq n}$ that mirrors $\mathcal{G}$ inside $V_{\leq n}$, but marks states in $\mathcal{G}$ outside $V_{\leq n}$ as terminal.
Applying Section 3.1 using the now bounded variant $U$ allows us to deduce that $\mathcal{G}_{\leq n}$ is almost-surely terminating. Observe now that the collection of non-terminating runs of $\mathcal{G}_{\leq n}$ is precisely the collection of non-terminating runs in $\Pi_{n}$. This immediately gives us what we need.

We now turn our attention to the final collection $\Pi_{\infty}$. Observe that $\Pi_{\infty}$ can only contain non-terminating executions, since terminating runs are finite and thus attain a bounded collection of $V$ values. Suppose that, under some scheduler $\mathfrak{s}$, the probability measure of $\Pi_{\infty}$ is not $0$. Let the probability space defining the semantics of $\mathcal{G}$ under $\mathfrak{s}$ (see Section 2.1) be $(\mathsf{Runs}_{\mathcal{G}(\sigma)},\mathcal{F}_{\mathcal{G}(\sigma)},\mathbb{P}_{\mathfrak{s}})$, and let its canonical filtration be $\\{\mathcal{F}_{n}\\}$. Define a stochastic process $\\{X^{\mathfrak{s}}_{n}\\}$ over the aforementioned probability space augmented with the filtration $\\{\mathcal{F}_{n}\\}$ that tracks the current state of the execution of the program. Define another stochastic process $\\{Y^{\mathfrak{s}}_{n}\\}$ as $Y^{\mathfrak{s}}_{n}\triangleq\begin{cases}V(X^{\mathfrak{s}}_{n})&X^{\mathfrak{s}}_{n}\not\in A\\\ 0&\text{otherwise}\end{cases}$ It’s easy to see that $Y^{\mathfrak{s}}_{n}$ is a non-negative supermartingale. Since $Y^{\mathfrak{s}}_{n}$ is non-negative, Doob’s Martingale Convergence Theorem (Doob, 1953) implies the almost-sure existence of a random variable $Y^{\mathfrak{s}}_{\infty}$ that the process $\\{Y^{\mathfrak{s}}_{n}\\}$ converges to. This means that $\mathbb{E}[Y^{\mathfrak{s}}_{\infty}]\leq Y^{\mathfrak{s}}_{0}$. Under the condition that $\Pi_{\infty}$ occurs, $Y^{\mathfrak{s}}_{\infty}=+\infty$. Since the probability measure of these non-terminating executions is not $0$, we have that $\mathbb{E}[Y^{\mathfrak{s}}_{\infty}]=+\infty>Y^{\mathfrak{s}}_{0}=V(X^{\mathfrak{s}}_{0})$. This raises a contradiction, completing the proof. ∎

To show completeness, we adapt a technique by Mertens et al. (1978) to build the requisite supermartingale $V$. Suppose $\mathcal{G}$ is $\mathsf{AST}$. Let $\mathsf{Reach}(\mathcal{G})$ be the set of its reachable states. Fix a computable enumeration $\mathsf{Enum}$ of $\mathsf{Reach}(\mathcal{G})$ that assigns $0$ to its terminal state. Intuitively, $\mathsf{Enum}$ is meant to order states in a line so that the probability of reaching a state that’s far to the right in this order is small. This is because the $\mathsf{AST}$ nature of $\mathcal{G}$ forces executions to “lean left” toward the terminal state. Note that we place no other requirements on $\mathsf{Enum}$; these intuitions will work no matter how $\mathsf{Enum}$ orders the states. A state $\sigma$ is said to be indexed $i$ if $\mathsf{Enum}(\sigma)=i$. From now on, we will refer to the state indexed $i$ by $\sigma_{i}$. A crucial part of our construction is the following function $R:(\mathbb{N}\times\mathbb{N})\to[0,1]$. Intuitively, $R$ measures the ability of executions beginning from a state to reach states that are far to the right of it in the $\mathsf{Enum}$ order. Let $\mathcal{G}_{i}$ be the $\mathsf{CFG}$ obtained from $\mathcal{G}$ by switching its initial state to $\sigma_{i}$. Let the semantics of $\mathcal{G}_{i}$ under a scheduler $\mathfrak{s}$ be the probability space $(\mathsf{Runs}_{\mathcal{G}_{i}},\allowbreak\mathcal{F}_{\mathcal{G}_{i}},\allowbreak\mathbb{P}^{i}_{\mathfrak{s}})$.
Define $R(i,n)$ at indices $i$ and $n$ to be

(1) $R(i,n)\triangleq\inf{}_{\mathfrak{s}}\mathbb{P}^{i}_{\mathfrak{s}}\left(\Diamond\left(\left\\{\sigma_{m}\in\mathsf{Reach}(\mathcal{G})\mid m\geq n\right\\}\right)\right)$

where $\Diamond(C)$ represents the event of eventually reaching the set $C$. We will refer to the first argument $i$ as the source index and the second argument $n$ as the minimum target index. Put simply, $R(i,n)$ measures the infimum probability of reaching the target indices $\\{n,n+1,\ldots\\}$ from the source $\sigma_{i}$.

###### Lemma 3.3.

$R(i,n)\to 0$ as $n\to\infty$ at every $i\in\mathbb{N}$, i.e., $\forall i\in\mathbb{N}\cdot\lim_{n\to\infty}R(i,n)=0$.

###### Proof.

Denote by $E_{n}$ the event that executions beginning from $\sigma_{i}$ reach states with index $\geq n$. Clearly, $R(i,n)$ measures the infimum probability of $E_{n}$. Denote by $E_{\infty}$ the event that executions beginning from $\sigma_{i}$ increase the maximum observed state index infinitely often. It’s easy to see that, for every $n\in\mathbb{N}$, the event $E_{n}$ contains $E_{\infty}$. Also, each execution outside $E_{\infty}$ attains some finite maximum state index, and must therefore be inside some $E_{n}\setminus E_{n+1}$. Additionally, $E_{n+1}\subseteq E_{n}$ for all $n$. These three facts imply $E_{\infty}=\bigcap_{n\in\mathbb{N}}E_{n}=\lim_{n\to\infty}E_{n}$ Suppose that $\lim_{n\to\infty}R(i,n)>0$ for some index $i$. Since $R(i,n)=\inf{}_{\mathfrak{s}}\mathbb{P}^{i}_{\mathfrak{s}}[E_{n}]$, we have $\inf{}_{\mathfrak{s}}\mathbb{P}^{i}_{\mathfrak{s}}[E_{\infty}]=\inf{}_{\mathfrak{s}}\left(\lim_{n\to\infty}\mathbb{P}^{i}_{\mathfrak{s}}[E_{n}]\right)=\lim_{n\to\infty}\inf{}_{\mathfrak{s}}\mathbb{P}^{i}_{\mathfrak{s}}[E_{n}]=\lim_{n\to\infty}R(i,n)>0$ As all executions in $E_{\infty}$ are non-terminating, this contradicts the $\mathsf{AST}$ nature of $\mathcal{G}$. ∎

It turns out that, if we fix the minimum target index $n$, the function $R$ becomes a supermartingale. Define $V_{n}(\sigma)=R(\mathsf{Enum}(\sigma),n)$ for every $n\in\mathbb{N}$. It’s easy enough to see that $V_{n}$ is a supermartingale; for assignment / non-deterministic states $\sigma$ with possible successors $\sigma^{\prime}$, $V_{n}(\sigma)\geq V_{n}(\sigma^{\prime})$, and for probabilistic $\sigma=(l,\mathbf{x})$, $V_{n}(l,\mathbf{x})\geq\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]\,V_{n}(l^{\prime},\mathbf{x}^{\prime})$ across all successors $(l^{\prime},\mathbf{x}^{\prime})$. However, $V_{n}$ isn’t the supermartingale we need, as we may not always be able to construct a compatible $U$ for any $V_{n}$. This is because every $V_{n}$ is bounded above (by $1$), whereas $U$ typically isn’t bounded above. To construct an unbounded supermartingale, one could consider the sum $\sum V_{n}$ varied across all $n\in\mathbb{N}$. However, this sum could be $\infty$ for certain states. To combat this, we carefully choose an infinite subset of $\mathbb{N}$ to form the domain for $\sum V_{n}$. Consider the sequence $(n_{j})_{j\in\mathbb{N}}$ such that $n_{j}$ is the smallest number so that $R(i,n_{j})\leq 2^{-j}$ for all $i\leq j$. Each element in this sequence is certain to exist due to the monotonically non-increasing nature of $R(i,n)$ for fixed $i$ and the limit result of Lemma 3.3. Furthermore, restricting the domain of $\sum V_{n}$ to elements in $(n_{j})_{j\in\mathbb{N}}$ means that no state is assigned $\infty$ by the sum. This is because for each $\sigma$, the values of $V_{n_{j}}(\sigma)=R(\mathsf{Enum}(\sigma),n_{j})$ are bounded by $2^{-j}$ for all $j\geq\mathsf{Enum}(\sigma)$, so the sum converges. Further note that the supermartingale nature of the $V_{n}$ implies that this sum is also a supermartingale. We thus have our required supermartingale

(2) $V(\sigma)=\sum_{j\in\mathbb{N}}V_{n_{j}}(\sigma)=\sum_{j\in\mathbb{N}}R(\mathsf{Enum}(\sigma),n_{j})$
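To make this construction more tangible, here is a small numeric sketch (our own illustration, using NumPy; it is not part of the formal development). We approximate $R(i,n)$, the sequence $(n_{j})_{j\in\mathbb{N}}$, and the resulting $V$ for an assumed random walk on the naturals that is absorbed at $0$ and biased downward; $\mathsf{Enum}$ can then simply be the identity, and the infimum over schedulers disappears because there is no nondeterminism.

```python
import numpy as np

# Biased walk: from i > 0, move down w.p. 2/3 and up w.p. 1/3; 0 is terminal.
DOWN, UP = 2 / 3, 1 / 3

def R(i, n, iters=4000):
    """Approximate R(i, n), the probability of ever reaching an index >= n
    from state i, by value iteration for the reachability event."""
    if i >= n:
        return 1.0
    r = np.zeros(n + 1)
    r[n] = 1.0                                       # target reached
    for _ in range(iters):
        r[1:n] = DOWN * r[0:n - 1] + UP * r[2:n + 1]  # state 0 stays absorbing
    return r[i]

# The sequence n_j: the smallest n with R(i, n) <= 2**-j for every i <= j.
ns = []
for j in range(10):
    n = ns[-1] if ns else 1
    while max(R(i, n) for i in range(j + 1)) > 2.0 ** -j:
        n += 1
    ns.append(n)

# V(sigma) = sum_j R(Enum(sigma), n_j): unbounded, yet finite at every state.
V = [sum(R(i, n) for n in ns) for i in range(8)]
print("n_j =", ns)
print("V(0..7) =", [round(v, 3) for v in V])
```

On this chain $R(i,n)$ is roughly $2^{-(n-i)}$, so the search settles on $n_{j}\approx 2j$, and $V$ grows without bound while remaining finite at every state, exactly as the construction requires.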
###### Lemma 3.4 (Completeness).

Section 3.2 is relatively complete.

###### Proof.

Take an $\mathsf{AST}$ $\mathsf{CFG}$ $\mathcal{G}$, and set $\mathsf{Inv}$ and $A$ to $\mathsf{Reach}(\mathcal{G})$ and the singleton set containing the terminal state respectively. We first describe our choice for the variant function $U$. Since $\mathcal{G}$ is $\mathsf{AST}$, for every $\sigma\in\mathsf{Reach}(\mathcal{G})$, every scheduler must induce a finite path to a terminal state. Corollary 2.7 implies an upper bound on the length of the shortest terminal run from every $\sigma\in\mathsf{Inv}$. Set $U$ to map each $\sigma\in\mathsf{Inv}$ to this upper bound. If $U$ is bounded, setting $V$ to $0$ at the terminal state and to $1$ at every other $\sigma\in\mathsf{Inv}$ suffices. Otherwise, set $V$ to the supermartingale function defined in Eq. 2. It is easy to observe that for every $r$, the sublevel set $V_{\leq r}=\\{\sigma\mid V(\sigma)\leq r\\}$ is finite. This implies that $U$ is bounded within every sublevel set, and is hence compatible with this $V$. This completes the construction of the certificates required by the proof rule.

We now argue that the invariant $\mathsf{Inv}$, the set $A$, the supermartingale $V$ and variant $U$ can each be represented in our assertion language of arithmetic interpreted over the rationals. We do this by first encoding them in the theory of natural numbers, and then using the relation $\mathsf{Nat}$ from Theorem 2.8 to insert them into our assertion language. Recall that all computable relations can be encoded in $\mathsf{Th}(\mathbb{Q})$. We present techniques with which one can augment computable relations with first-order quantifiers to represent these entities. By doing so, we demonstrate that these sets are _arithmetical_; see the works of Kozen (2006) and Rogers Jr. (1987) for detailed accounts on arithmetical sets.

$A$ can trivially be represented in $\mathsf{Th}(\mathbb{Q})$. For $\mathsf{Inv}$, consider the relation $I$ that contains tuples of the form $(k,\sigma_{1},\sigma_{2})$ where $k\in\mathbb{N}$ and $\sigma_{1}$ and $\sigma_{2}$ are states of $\mathcal{G}$. Require $(k,\sigma_{1},\sigma_{2})\in I$ _iff_ there is a finite path of length $\leq k$ from $\sigma_{1}$ to $\sigma_{2}$. Clearly, $I$ is a computable relation and is thus representable in $\mathsf{Th}(\mathbb{Q})$. $\mathsf{Inv}(\sigma)$ can be represented from $I$ as $\exists k\cdot I(k,\sigma_{0},\sigma)$ where $\sigma_{0}$ is the initial state of $\mathcal{G}$. Similarly, the output of $U(\sigma)$ at every $\sigma$ can be represented using $I$ as $U(\sigma)=k\allowbreak\Longleftrightarrow\allowbreak I(k,\sigma,\sigma_{\bot})\allowbreak\land\left(\forall n<k\cdot\lnot I\left(n,\sigma,\sigma_{\bot}\right)\right)$, where $\sigma_{\bot}$ is the terminal state. If $U$ were bounded, representing $V$ is trivial; we focus our attention on representing $V$ when $U$ isn’t bounded. Representations of $R$ (defined in Eq. 1 and used to derive $V$) and $V$ are complicated slightly because they can output real numbers.
Instead of capturing the precise values of these functions, we capture their Dedekind cuts. In other words, we show that the collections of rational numbers $\leq V(\sigma)$, $\geq V(\sigma)$, $\leq R(i,n)$ and $\geq R(i,n)$ are each representable for each $\sigma$, $i$, and $n$. The unrolling lemma implies that if the probability of termination is $p$, then for all $n\in\mathbb{N}$, assimilating a termination probability mass of at least $p-1/n$ requires finitely many steps. It is simple to generalize this to observe that, for all $m\in\mathbb{N}$, assimilating a probability mass of at least $R(i,n)-1/m$ for the event $\Diamond\left(\left\\{\sigma_{m^{\prime}}\in\mathsf{Inv}\mid m^{\prime}\geq n\right\\}\right)$ when $\sigma_{i}$ is the initial state also requires finitely many steps. Furthermore, the probability of the occurrence of this event within $k$ steps is computable for every natural number $k$. These two facts indicate that lower bounds on $R(i,n)$ can be represented in $\mathsf{Th}(\mathbb{Q})$. Upper bounds on $R(i,n)$ can be represented by simply negating this lower bound representation. Using these, one can represent each member of the sequence $(n_{j})_{j\in\mathbb{N}}$ that forms the domain of the sum that defines $V$. This enables representations of lower bounds on $V(\sigma)$, which in turn enables representations of upper bounds on $V(\sigma)$. Thus, the Dedekind cut of $V(\sigma)$ is representable in $\mathsf{Th}(\mathbb{Q})$. This completes the proof. ∎

###### Example 3.5 (Random Walks).

For the 1DRW example from Example 2.5, we take $\mathsf{Inv}$ to be all program states, $A$ to be the singleton containing the terminal state (where $x_{1}=0$), and we set $V(x)=U(x)\coloneqq|x|$. It’s trivial to observe that all conditions required in Section 3.2 are met, and therefore, the 1DRW is $\mathsf{AST}$.

Let us now consider the 2-D Random Walker (2DRW). The $\mathsf{AST}$ nature of this program has been notoriously hard to prove using prior rules. The principal enabler of our rule on the 2DRW is its set $A$ that forms a ball around the origin. Begin by setting $\mathsf{Inv}$ to the set of all states. Declare the distance variant $U$ to be the Manhattan distance $|x|+|y|$ of any state $(x,y)$ from the origin. Clearly, $U$ has a $\geq 1/4$ probability of decreasing in a single step from every state other than the origin. Now, define $V$ as $V(x,y)\triangleq\sqrt{\ln\left(1+\sqrt{x^{2}+y^{2}}\right)}$ It is difficult to prove that $V$ is a supermartingale at all non-terminal states. However, using Taylor series expansions, one can show that $V$ is a supermartingale for “sufficiently large” values of $x^{2}+y^{2}$. Menshikov et al. (2017) (see also Popov (2021, Section 2.3)) showed precisely this; their proof showed that the error terms in the Taylor series expansion of $V(x,y)$ cease to matter when $x^{2}+y^{2}$ grows large. They prove that there must exist a number $k$ such that the error terms in the Taylor expansion do not affect the supermartingale conditions at states where $x^{2}+y^{2}>k$. We now declare $A$ to be the set of states $(x,y)$ where $x^{2}+y^{2}\leq k$. Thus, $V$ satisfies the supermartingale conditions of Section 3.2 outside of $A$. Note that we don’t need to precisely decipher the value of $k$ for the soundness of the proof rule to work; we just need $A$ to be smaller than the invariant $\mathsf{Inv}$. The finiteness of $A$ satisfies this criterion. Furthermore, since the sublevel set $V_{\leq r}$ is finite, $V$ is compatible with $U$. Therefore, the 2DRW is $\mathsf{AST}$. ∎
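This claim is also easy to probe numerically. The following quick check (ours, and purely illustrative) computes the exact one-step drift of $V$ under the four equally likely moves of the walker at a few sample points: the drift can be positive near the origin, which is precisely why the rule tolerates the ball $A$, and it becomes non-positive at larger radii, in line with the Taylor-expansion argument.

```python
import math

# Numeric probe (ours) of the supermartingale inequality for
# V(x, y) = sqrt(ln(1 + sqrt(x^2 + y^2))) on the symmetric 2D walk.

def V(x, y):
    return math.sqrt(math.log(1.0 + math.hypot(x, y)))

def drift(x, y):
    """E[V(next)] - V(current) under the four equally likely moves."""
    nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return sum(V(*n) for n in nbrs) / 4.0 - V(x, y)

# The drift may be positive at small radii (e.g., around (2, 1)) but should
# turn non-positive once x^2 + y^2 is large, per Menshikov et al. (2017).
for pt in [(1, 0), (2, 1), (5, 0), (10, 0), (30, 40), (100, 0)]:
    print(pt, f"{drift(*pt):+.2e}")
```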
## 4. Almost-Sure Termination: Stochastic Invariants

Next, we give a different proof rule that takes a dual view. Instead of a single unbounded supermartingale, we consider several bounded supermartingales that each focus on a different finite part of the program’s state space. This focus means that each of these supermartingales takes on non-trivial values (i.e., between $0$ and $1$) at only finitely many states. They are thus “local” supermartingales, i.e., they are only meaningful at particular parts of the state space. The key to our rule is the observation that a program is almost-surely terminating if there is some $\epsilon>0$ such that, from every reachable state, the program terminates with probability at least $\epsilon$. We characterize this “$\epsilon$-wiggle room” using the stochastic invariants of Chatterjee et al. (2017).

###### Definition 4.1 (Stochastic Invariants (Chatterjee et al., 2017)).

Let $\mathcal{G}=(L,V,\allowbreak l_{init},\mathbf{x}_{init},\mathord{\mapsto},G,\mathsf{Pr},\mathsf{Upd})$ be a $\mathsf{CFG}$. Suppose $\mathbf{\Psi}$ is a subset of states and let $p$ be a probability value. The tuple $(\mathbf{\Psi},p)$ is a _stochastic invariant_ (SI) if, under any scheduler $\mathfrak{s}$, the probability mass of the collection of runs beginning from $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$ leaving $\mathbf{\Psi}$ is bounded above by $p$, i.e., $\sup{}_{\mathfrak{s}}\mathbb{P}{}_{\mathfrak{s}}\left[\rho\in\mathsf{Runs}_{\mathcal{G}}\mid\exists n\in\mathbb{N}\cdot\rho[n]\not\in\mathbf{\Psi}\right]\leq p$

Intuitively, stochastic invariants generalize the standard notion of invariants to the probabilistic setting. Given a stochastic invariant $(\mathbf{\Psi},p)$, the program execution is expected to hold $\mathbf{\Psi}$ (i.e., remain inside $\mathbf{\Psi}$) with probability $\geq 1-p$. As with invariants, the collection of states in stochastic invariants is typically captured by a predicate written in the assertion language of the program logic. In this work, however, we do not characterize stochastic invariants directly; we instead use stochastic invariant indicators.

###### Definition 4.2 (Stochastic Invariant Indicator (Chatterjee et al., 2022)).

Let $\mathcal{G}$ be the $\mathsf{CFG}$ $(L,V,l_{init},\allowbreak\mathbf{x}_{init},\mathord{\mapsto},G,\mathsf{Pr},\mathsf{Upd})$. A tuple $(\mathsf{SI},p)$ is a _stochastic invariant indicator_ (SI-indicator) if $p$ is a probability value and $\mathsf{SI}:L\times\mathbb{Z}^{V}\to\mathbb{R}$ is a partial function such that $\mathsf{SI}(l_{init},\mathbf{x}_{init})\leq p$, and for all states $(l,\mathbf{x})$ reachable from $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$,

(1) $\mathsf{SI}(l,\mathbf{x})\geq 0$.

(2) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, then $\mathsf{SI}(l,\mathbf{x})\geq\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$.

(3) if $(l,\mathbf{x})$ is a probabilistic state, then $\mathsf{SI}(l,\mathbf{x})\geq\sum\mathsf{Pr}((l,l^{\prime}))[\mathbf{x}]\times\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ over all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$.

Observe that the functions $\mathsf{SI}$ in SI-indicators are supermartingale functions.
These $\mathsf{SI}$ are typically most interesting at states $\sigma$ where $\mathsf{SI}(\sigma)<1$; in fact, the collection of states $\sigma$ with this property corresponds to an underlying stochastic invariant with the same probability value as the SI-indicator. This was formally proven by Chatterjee et al. (2022).

###### Lemma 4.3 ((Chatterjee et al., 2022)).

Let $\mathcal{G}$ be a $\mathsf{CFG}$. For each stochastic invariant $(\mathbf{\Psi},p)$ of $\mathcal{G}$, there exists a stochastic invariant indicator $(\mathsf{SI},p)$ of $\mathcal{G}$ such that $\mathbf{\Psi}\supseteq\\{\gamma\in\Sigma_{\mathcal{G}}\mid\mathsf{SI}(\gamma)<1\\}$. Furthermore, for each stochastic invariant indicator $(\mathsf{SI},p)$ of $\mathcal{G}$, there is a stochastic invariant $(\mathbf{\Psi},p)$ such that $\mathbf{\Psi}=\\{\gamma\in\Sigma_{\mathcal{G}}\mid\mathsf{SI}(\gamma)<1\\}$.

The SI-indicator $\mathsf{SI}$ corresponding to the stochastic invariant $\mathbf{\Psi}$ maps each state $\sigma$ to the probability with which runs beginning from $\sigma$ exit $\mathbf{\Psi}$. Thus, the SI-indicator tracks the probability of violating the stochastic invariant. Observe that $\mathsf{SI}(\sigma)\geq 1$ for all states $\sigma\not\in\mathbf{\Psi}$. We will use SI-indicators in our proof rules. Note that we cannot use a single stochastic invariant: this is too weak to ensure soundness. Instead, we use a family of stochastic invariants, one for each reachable state.

Representing SI-indicators is more complicated than representing stochastic invariants. Because SI-indicators map states to reals, we cannot always write them directly in our assertion language. Nevertheless, we will show that, for $\mathsf{AST}$ programs, one can always find expressions in our assertion language to effectively represent the SI-indicators our proof rule needs.

$\mathsf{AST}$ is a property that is preserved when the initial state is changed to any reachable state: if $\mathcal{G}$ is almost-surely terminating, then $\mathcal{G}$ is almost-surely terminating from any state $\sigma$ reachable from the initial state $\sigma_{0}=(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$. Furthermore, if from every reachable state $\sigma$, $\mathcal{G}$ is known to terminate with some minimum probability $\epsilon>0$, then the program is $\mathsf{AST}$. We exploit these facts in our rule.
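Before stating the rule, here is a toy illustration (ours, built on the classic gambler’s-ruin identity) of Definition 4.2 and Lemma 4.3. Consider the symmetric random walk on the naturals that terminates at $x=0$ and moves from any $x>0$ to $x-1$ or $x+1$ with probability $1/2$ each, starting at $x_{0}=1$. For any $N>x_{0}$, the walk reaches $x\geq N$ before terminating with probability exactly $x_{0}/N$. Hence $(\mathbf{\Psi},p)$ with $\mathbf{\Psi}=\\{x\mid x<N\\}$ and $p=x_{0}/N$ is a stochastic invariant, and $\mathsf{SI}(x)=x/N$ is a corresponding SI-indicator: it is non-negative, satisfies $\mathsf{SI}(x_{0})\leq p$, at every probabilistic state $\mathsf{SI}(x)=x/N\geq\frac{1}{2}\cdot\frac{x-1}{N}+\frac{1}{2}\cdot\frac{x+1}{N}$ holds with equality, and $\mathbf{\Psi}=\\{x\mid\mathsf{SI}(x)<1\\}$, exactly as Lemma 4.3 prescribes.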
If, for a fixed $0<p<1$, there exists

(1) an inductive invariant $\mathsf{Inv}$ containing the initial state,

(2) a mapping $\mathsf{SI}$ from each $\sigma\in\mathsf{Inv}$ to SI-indicator functions $(\mathsf{SI}_{\sigma},p):\mathsf{Inv}\to\mathbb{R}$ such that $\mathsf{SI}_{\sigma}(\sigma)\leq p$ and, for all $(l,\mathbf{x})\in\mathsf{Inv}$, (a) $\mathsf{SI}_{\sigma}(l,\mathbf{x})\geq 0$, (b) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, then $\mathsf{SI}_{\sigma}(l,\mathbf{x})\geq\mathsf{SI}_{\sigma}(l^{\prime},\mathbf{x}^{\prime})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$, and (c) if $(l,\mathbf{x})$ is a probabilistic state, then $\mathsf{SI}_{\sigma}(l,\mathbf{x})\geq\sum\mathsf{Pr}((l,l^{\prime}))[\mathbf{x}]\times\mathsf{SI}_{\sigma}(l^{\prime},\mathbf{x}^{\prime})$ over all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$,

(3) a mapping $\mathcal{E}$ from states $\sigma\in\mathsf{Inv}$ to values $\epsilon_{\sigma}\in(0,1]$,

(4) a mapping $\mathcal{H}$ from states $\sigma\in\mathsf{Inv}$ to values $H_{\sigma}\in\mathbb{N}$, and

(5) a mapping $\mathcal{U}$ from each $\sigma\in\mathsf{Inv}$ to variants $U_{\sigma}:\mathsf{Inv}\to\mathbb{N}$, each bounded above by $\mathcal{H}(\sigma)$, that map all states $\\{\gamma\mid\mathsf{SI}_{\sigma}(\gamma)\geq 1\lor\gamma\text{ is terminal}\\}$ to $0$ and, for other states $(l,\mathbf{x})\in\mathsf{Inv}$, satisfy (a) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, $U_{\sigma}(l^{\prime},\mathbf{x}^{\prime})<U_{\sigma}(l,\mathbf{x})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$, and (b) if $(l,\mathbf{x})$ is a probabilistic state, $\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]>\mathcal{E}(\sigma)$ over all successor states $(l^{\prime},\mathbf{x}^{\prime})$ with $U_{\sigma}(l^{\prime},\mathbf{x}^{\prime})<U_{\sigma}(l,\mathbf{x})$.

Then, $\mathcal{G}$ is $\mathsf{AST}$.

Intuitively, our rule requires SI-indicator functions $\mathsf{SI}_{\sigma}$ at each $\sigma\in\mathsf{Inv}$ that hold for executions beginning at $\sigma$ with probability $\geq 1-p$. Each of these implies a stochastic invariant $(\mathbf{\Psi}_{\sigma},p)$ centered around the state $\sigma$. The functions $\mathcal{U}$, $\mathcal{H}$, and $\mathcal{E}$ combine to form variant functions $U_{\sigma}$ of the McIver-Morgan kind at each $\sigma\in\mathsf{Inv}$. These $U_{\sigma}$ further imply that a terminal state is contained within each $\mathbf{\Psi}_{\sigma}$, and induce paths within each $\mathbf{\Psi}_{\sigma}$ to this terminal state. Feeding $U_{\sigma}$, $\epsilon_{\sigma}$, and $H_{\sigma}$ into McIver and Morgan’s variant Section 3.1 gives us a proof that, were the execution to be restricted to $\mathbf{\Psi}_{\sigma}$, the probability of termination from $\sigma$ would be $1$. Therefore, the probability of termination from each $\sigma$ is $\geq 1-p$. Applying the zero-one law of probabilistic processes (McIver and Morgan, 2005, Lemma 2.6.1) completes the proof of soundness of this rule.

Notice that we don’t mandate any locality conditions on the SI-indicators in this rule; they aren’t necessary for the soundness of the rule. However, we show in our completeness proof that one can always find “local” SI-indicators that take on values $<1$ at only finitely many states for $\mathsf{AST}$ programs. This is because these SI-indicators are built from appropriate finite stochastic invariants, the existence of which is a consequence of the unrolling lemma.

###### Lemma 4.4 (Completeness).

Section 4 is relatively complete.

###### Proof (sketch).

Let $\mathcal{G}$ be an $\mathsf{AST}$ $\mathsf{CFG}$. Set $\mathsf{Inv}=\mathsf{Reach}(\mathcal{G})$, the set of reachable states of $\mathcal{G}$. Fix a $0<p<1$ and a $\sigma\in\mathsf{Reach}(\mathcal{G})$. Since $\mathcal{G}$ is $\mathsf{AST}$, $\Pr_{\mathrm{term}}(\mathcal{G}(\sigma))=1$. The unrolling lemma indicates a $k\in\mathbb{N}$ such that the required simulation time to amass a termination probability of $1-p$ in $\mathcal{G}(\sigma)$ is bounded above by $k$. Let $\Sigma^{\sigma}_{k}$ be the collection of all states reachable from $\sigma$ by a finite path of length at most $k$. Observe that runs of $\mathcal{G}(\sigma)$ (i) terminate inside $\Sigma^{\sigma}_{k}$ with a probability of at least $1-p$, and (ii) almost-surely either terminate inside $\Sigma^{\sigma}_{k}$ or leave $\Sigma^{\sigma}_{k}$.
$(\Sigma^{\sigma}_{k},p)$ is thus the required stochastic invariant for $\sigma$. Arguments by Chatterjee et al. (2022, Theorem 1) indicate the existence of an SI-indicator $(\mathsf{SI}_{\sigma},p)$ for the stochastic invariant $(\Sigma^{\sigma}_{k},p)$. Furthermore, the finiteness of $\Sigma^{\sigma}_{k}$ combined with the almost-sure property of either termination inside or exit from $\Sigma^{\sigma}_{k}$ enables Corollary 2.7, from which it is trivial to extract a suitable variant function $U_{\sigma}$ bounded above by some $H_{\sigma}$ and exhibiting a minimum probability of decrease of $\epsilon_{\sigma}>0$. This can be done for all $\sigma\in\mathsf{Inv}$.

Let us now show how we can represent these entities in our assertion language. As in the proof of the completeness of the martingale-based Section 3.2, consider the relation $I$ such that $(k,\sigma_{1},\sigma_{2})\in I$ _iff_ there is a finite path of length $\leq k$ from $\sigma_{1}$ to $\sigma_{2}$ in $\mathcal{G}$. Clearly, $I$ is a computable relation and is thus representable in $\mathsf{Th}(\mathbb{Q})$. $\mathsf{Inv}$ and each $U_{\sigma}$ can easily be represented in $\mathsf{Th}(\mathbb{Q})$ using this relation $I$ in exactly the same way as with Section 3.2. To represent each $\mathsf{SI}_{\sigma}$, we note that arguments from Chatterjee et al. (2022) show that the output of $\mathsf{SI}_{\sigma}$ at a state $\gamma$ is precisely the probability of leaving the stochastic invariant $(\Sigma^{\sigma}_{k},p)$. Encoding lower bounds on this probability in $\mathsf{Th}(\mathbb{Q})$ uses the unrolling lemma, and was essentially shown by Majumdar and Sathiyanarayana (2023). The probability with which executions escape $\Sigma^{\sigma}_{k}$ within $m$ steps lower bounds the probability of leaving $\Sigma^{\sigma}_{k}$. The former probability is computable, and the unrolling lemma ensures that, in spite of nondeterminism, augmenting $m$ with a universal quantifier produces the precise lower bounds on the latter. Hence, each $\mathsf{SI}_{\sigma}(\gamma)$ is representable in $\mathsf{Th}(\mathbb{Q})$. This completes our proof of relative completeness. ∎

## 5. Quantitative Termination

We now extend our rules to reason about lower and upper bounds on the probability of termination. This notion of termination is referred to in the literature as _quantitative termination_. This is in contrast to _qualitative termination_, the nomenclature employed for $\mathsf{AST}$. As mentioned earlier, the proof rules we specify in this section can be used to show irrational bounds on the termination probability; this is a simple consequence of the fact that all possible termination probabilities can be expressed in our assertion language.
Our upper bound rule is immediate from an observation of the nature of stochastic invariants. If there exists

(1) an inductive invariant $\mathsf{Inv}$ containing $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$, and

(2) a function $\mathsf{SI}:\mathsf{Inv}\to\mathbb{R}$ such that $\mathsf{SI}(l_{init},\mathbf{x}_{init})\leq p$ and, for all $(l,\mathbf{x})\in\mathsf{Inv}$, (a) $\mathsf{SI}(l,\mathbf{x})\geq 0$; (b) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, then $\mathsf{SI}(l,\mathbf{x})\geq\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$; (c) if $(l,\mathbf{x})$ is a probabilistic state, then $\mathsf{SI}(l,\mathbf{x})\geq\allowbreak\sum\mathsf{Pr}((l,\allowbreak l^{\prime}))[\mathbf{x}]\times\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ over all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$; and (d) if $(l,\mathbf{x})$ is a terminal state, then $\mathsf{SI}(l,\mathbf{x})\geq 1$.

Then, $\Pr_{\mathrm{term}}(\mathcal{G})\leq p$. This rule asks for an SI-indicator (and therefore a stochastic invariant) that excludes the terminal state. In order to terminate, a run must escape the induced stochastic invariant, and the probability of this escape is bounded above by $p$. Notice that, because the SI-indicator is a supermartingale function, the mere existence of a bounded supermartingale that assigns a value $\geq 1$ to the terminal state and a value $\leq p$ to the initial state is sufficient to extract an upper bound.

###### Lemma 5.1.

Section 5 is sound and relatively complete.

###### Proof.

Lemma 4.3 shows that the pair $(\\{\gamma\in\mathsf{Inv}\mid\mathsf{SI}(\gamma)<1\\},p)$ is a stochastic invariant of $\mathcal{G}$. Since $\mathsf{SI}(\gamma)\geq 1$ at the terminal state, this stochastic invariant doesn’t contain the terminal state. Soundness of the rule trivially follows from the fact that, in order to terminate, a run must leave this invariant, and this probability is bounded above by $p$.

For completeness, set $\mathsf{Inv}=\mathsf{Reach}(\mathcal{G})$ and let $\sigma_{\bot}\in\mathsf{Inv}$ be the terminal state. Set $\mathbf{\Psi}=\mathsf{Inv}\setminus\\{\sigma_{\bot}\\}$, and observe that the collection of runs leaving $\mathbf{\Psi}$ is identical to the collection of terminal runs. Therefore, since $\Pr_{\mathrm{term}}(\mathcal{G})\leq p$, $(\mathbf{\Psi},p)$ must be a stochastic invariant. Lemma 4.3 indicates the existence of the SI-indicator $\mathsf{SI}$ for $\mathbf{\Psi}$. $\mathsf{SI}$ immediately satisfies the conditions of the rule. To represent $\mathsf{SI}$ in our assertion language, note again that $\mathsf{SI}(\gamma)$ is precisely the probability of termination from $\gamma$. The unrolling lemma indicates that this probability is lower bounded by the probability of termination within $m$ steps from $\gamma$. The latter probability is computable, and is hence representable in $\mathsf{Th}(\mathbb{Q})$. Prepending an appropriate universal quantifier for $m$ allows us to form lower bounds for $\mathsf{SI}$ in $\mathsf{Th}(\mathbb{Q})$. Our upper bound rule is thus relatively complete. ∎
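As a quick illustration (our own toy instance, not from the development above), consider a program that, at a probabilistic location $l_{1}$, moves to the terminal location $l_{\bot}$ with probability $1/2$ and otherwise to a location $l_{2}$ that loops to itself forever. Taking $\mathsf{Inv}$ to be these three locations and setting $\mathsf{SI}(l_{1})=1/2$, $\mathsf{SI}(l_{2})=0$, and $\mathsf{SI}(l_{\bot})=1$ satisfies every condition of the rule: $\mathsf{SI}$ is non-negative, $\mathsf{SI}(l_{1})=1/2\geq\frac{1}{2}\cdot 1+\frac{1}{2}\cdot 0$, the self-loop at $l_{2}$ preserves $\mathsf{SI}$, and $\mathsf{SI}\geq 1$ at the terminal location. The rule then yields $\Pr_{\mathrm{term}}\leq 1/2$, which is tight here.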
### 5.1. Towards a Lower Bound

For a lower bound on the probability of termination, we start with the following rule from Chatterjee et al. (2022). If there exists

(1) an inductive invariant $\mathsf{Inv}$ containing $(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})$,

(2) a stochastic invariant indicator $\mathsf{SI}:\mathsf{Inv}\to\mathbb{R}$ such that $\mathsf{SI}(l_{\mathit{init}},\mathbf{x}_{\mathit{init}})\leq p$ and, for all $(l,\mathbf{x})\in\mathsf{Inv}$, (a) $\mathsf{SI}(l,\mathbf{x})\geq 0$, (b) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, then $\mathsf{SI}(l,\mathbf{x})\geq\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$, and (c) if $(l,\mathbf{x})$ is a probabilistic state, then $\mathsf{SI}(l,\mathbf{x})\geq\sum\mathsf{Pr}((l,\allowbreak l^{\prime}))[\mathbf{x}]\allowbreak\times\mathsf{SI}(l^{\prime},\mathbf{x}^{\prime})$ over all possible successor states $(l^{\prime},\mathbf{x}^{\prime})$,

(3) an $\epsilon>0$, and

(4) a variant $U:\mathsf{Inv}\to\mathbb{N}$ that is bounded above by some $H$, maps all states $\\{\gamma\in\mathsf{Inv}\mid\mathsf{SI}(\gamma)\geq 1\lor\gamma\text{ is terminal}\\}$ to $0$, and, for other states $(l,\mathbf{x})\in\mathsf{Inv}$, satisfies (a) if $(l,\mathbf{x})$ is an assignment or nondeterministic state, $U(l^{\prime},\mathbf{x}^{\prime})<U(l,\mathbf{x})$ for every successor $(l^{\prime},\mathbf{x}^{\prime})$, and (b) if $(l,\mathbf{x})$ is a probabilistic state, $\sum\mathsf{Pr}(l,l^{\prime})[\mathbf{x}]>\epsilon$ over all successor states $(l^{\prime},\mathbf{x}^{\prime})$ with $U(l^{\prime},\mathbf{x}^{\prime})<U(l,\mathbf{x})$.

Then, $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-p$. This rule demands an SI-indicator $(\mathsf{SI},p)$ and a bounded variant $U$ such that $\mathsf{SI}$ induces a stochastic invariant $(\mathbf{\Psi},p)$ that contains the terminal state. Intuitively, this rule works by splitting the state space into terminating and possibly non-terminating segments. The invariant $\mathbf{\Psi}$ represents the terminating section of the state space. The application of McIver and Morgan’s Section 3.1 with the variant $U$ allows us to deduce that, were the execution restricted to $\mathbf{\Psi}$, the program would almost-surely terminate. Observe that the variant $U$ effectively considers all states outside $\mathbf{\Psi}$ to be terminal. Therefore, $\mathcal{G}$ almost-surely either escapes $\mathbf{\Psi}$ or terminates within $\mathbf{\Psi}$. Non-termination is thus subsumed by the event of escaping the invariant, the probability of which is at most $p$. Hence, the probability of termination is $\geq 1-p$.

Observe that, like our $\mathsf{AST}$ Section 3.2, this rule requires a supermartingale and a variant function that work in tandem. However, unlike Section 3.2, the supermartingale and the variant are entirely bounded. This soundness argument was formally shown by Chatterjee et al. (2022). In their original presentation, they do not specify an exact technique for determining the almost-sure property of either termination within or escape from the induced stochastic invariant $\mathbf{\Psi}$. We will explain our choice of the bounded variant Section 3.1 of McIver and Morgan (2005) in a moment. They additionally claimed the completeness of this rule, assuming the usage of a complete rule for almost-sure termination. If their completeness argument were true, it would indicate that all probabilistic programs induce state spaces that can neatly be partitioned into terminating and non-terminating sections. In Section 5.3, we show that this isn’t the case using a counterexample where this split isn’t possible. Nevertheless, this rule is complete for finite-state programs. This finite-state completeness pairs well with the finite-state completeness of the bounded variant Section 3.1 we use to certify the almost-sure property contained in the rule.

Figure 1. If shortest runs of $\Sigma_{good}$ were too long.
This is a representation of the collection of executions beginning at a good state $\sigma\in\Sigma_{good}$, according to the partitioning system suggested in the proof of Lemma 5.2. The black nodes are the terminal states; they all lead to the single terminal state. The blue states are identical to each other; the same holds for the green states. The pathological scheduler $\mathfrak{s}^{\prime}$ always takes the red back edges, rendering no terminal runs from $\sigma$.

###### Lemma 5.2.

Section 5.1 is complete for finite state $\mathsf{CFG}$s.

###### Proof.

In this proof, we argue about the stochastic invariants directly. The SI-indicator and variant functions can be derived from them using prior techniques (Chatterjee et al., 2022; McIver and Morgan, 2005). Let $\mathcal{G}$ be a finite state $\mathsf{CFG}$. Partition $\mathsf{Reach}(\mathcal{G})$, the set of states reachable from the initial state of $\mathcal{G}$, into (1) the singleton containing the terminal state, $\Sigma_{\bot}$, (2) the _bad_ states $\Sigma_{bad}$, with the property that no finite path ending at these states can be extended to a terminal run, (3) the _good_ states $\Sigma_{good}$, such that if a scheduler $\mathfrak{s}$ induces a finite path ending at a good state $\sigma$, then $\mathfrak{s}$ induces at least one terminating run passing through $\sigma$, and (4) the remaining _neutral_ states $\Sigma_{neutral}$, with the property that each neutral state $\sigma$ is associated with a pathological scheduler $\mathfrak{s}_{\sigma}$ that induces runs that, if they pass through $\sigma$, do not terminate. Notice that, as long as $p<1$ (the case where $p=1$ is trivial, so we skip it), the initial state of $\mathcal{G}$ is in $\Sigma_{good}$. We show that $(\Sigma_{good}\cup\Sigma_{\bot},p)$ is the required stochastic invariant.

We begin by showing that runs that remain inside $\Sigma_{good}\cup\Sigma_{\bot}$ almost-surely terminate. Fix a state $\sigma\in\Sigma_{good}$. Map to every scheduler $\mathfrak{s}$ that induces runs passing through $\sigma$ the shortest terminating consistent finite path $\pi_{\mathfrak{s}}$ beginning from $\sigma$. Suppose, for some scheduler $\mathfrak{s}$, $|\pi_{\mathfrak{s}}|>|\mathsf{Reach}(\mathcal{G})|$. Then, $\pi_{\mathfrak{s}}$ must visit some $\sigma^{\prime}\in\mathsf{Reach}(\mathcal{G})$ twice. This indicates a loop from $\sigma^{\prime}\to\sigma^{\prime}$ in $\mathcal{G}$, and there must thus exist a scheduler that extends $\pi_{\mathfrak{s}}$ by repeating this loop infinitely often. Since $\pi_{\mathfrak{s}}$ is the shortest terminating finite path beginning from $\sigma$, every terminating finite path consistent with $\mathfrak{s}$ beginning from $\sigma$ is at least as long, and must likewise contain a loop. There must therefore exist a scheduler $\mathfrak{s}^{\prime}$ which exploits these loops and yields no terminating runs passing through $\sigma$. Fig. 1 depicts the operation of $\mathfrak{s}^{\prime}$. Observe that the existence of $\mathfrak{s}^{\prime}$ contradicts $\sigma\in\Sigma_{good}$. Therefore, from every state $\sigma\in\Sigma_{good}$, there is a terminating run of length $\leq|\mathsf{Reach}(\mathcal{G})|$ no matter which scheduler is used. Thus, the probability of leaving $\Sigma_{good}$ from $\sigma$ is bounded below by $q^{|\mathsf{Reach}(\mathcal{G})|}$, where $q$ is the smallest transition probability of $\mathcal{G}$ (note that $q$ only exists because $\mathcal{G}$ is finite state).
Further, $\Sigma_{good}$ only contains non-terminal states. This enables the zero-one law of probabilistic processes (McIver and Morgan, 2005, Lemma 2.6.1), allowing us to deduce the almost-certain escape from $\Sigma_{good}$ to $\Sigma_{\bot}\cup\Sigma_{bad}\cup\Sigma_{neutral}$. Hence, under all schedulers, the probability of either terminating inside $\Sigma_{good}\cup\Sigma_{\bot}$ or entering $\Sigma_{bad}\cup\Sigma_{neutral}$ is $1$.

We now show that $(\Sigma_{good}\cup\Sigma_{\bot},p)$ is a stochastic invariant. It’s easy to see that if a run ever enters $\Sigma_{bad}$, it never terminates. We know that if an execution enters some $\sigma\in\Sigma_{neutral}$ under a pathological scheduler $\mathfrak{s}_{\sigma}$, it never terminates. Let $\sigma_{1}$ and $\sigma_{2}$ be neutral states and $\mathfrak{s}_{1}$ and $\mathfrak{s}_{2}$ be their corresponding pathological schedulers. Notice that $\mathfrak{s}_{1}$ may induce terminating executions that pass through $\sigma_{2}$. One can build a scheduler $\mathfrak{s}_{3}$ that mimics $\mathfrak{s}_{1}$ until the execution reaches $\sigma_{2}$, and once it does, mimics $\mathfrak{s}_{2}$. Thus, $\mathfrak{s}_{3}$ produces the pathological behavior of both $\mathfrak{s}_{1}$ and $\mathfrak{s}_{2}$. In this way, we compose the pathological behavior of all neutral states to produce a scheduler $\mathfrak{f}$ that induces runs that, if they enter a neutral state, never terminate. Under $\mathfrak{f}$, leaving $\Sigma_{good}\cup\Sigma_{\bot}$ is equivalent to non-termination, and all terminating runs are made up of good states until their final states.

Take a scheduler $\mathfrak{s}$ that induces terminating runs that pass through neutral states. Compose the scheduler $\mathfrak{s}^{\prime}$ that mimics $\mathfrak{s}$ until the execution enters a neutral state and mimics $\mathfrak{f}$ from then on. Let $T_{\mathfrak{s}}$ and $T_{\mathfrak{s}^{\prime}}$ be the collections of terminating runs consistent with $\mathfrak{s}$ and $\mathfrak{s}^{\prime}$ respectively. Each run in $T_{\mathfrak{s}^{\prime}}$ is made up of good and/or terminal states, and is therefore consistent with $\mathfrak{s}$. Hence, $T_{\mathfrak{s}}\supset T_{\mathfrak{s}^{\prime}}$ and, because $\mathfrak{s}$ and $\mathfrak{s}^{\prime}$ agree on $T_{\mathfrak{s}^{\prime}}$, we have $\mathbb{P}_{\mathfrak{s}}(T_{\mathfrak{s}})>\mathbb{P}_{\mathfrak{s}}(T_{\mathfrak{s}^{\prime}})$, where $\mathbb{P}_{\mathfrak{s}}$ is the probability measure in the semantics of $\mathcal{G}$ induced by $\mathfrak{s}$. Leaving $\Sigma_{good}\cup\Sigma_{\bot}$ is equivalent to entering $\Sigma_{bad}\cup\Sigma_{neutral}$. Observe that the probability of never leaving $\Sigma_{good}\cup\Sigma_{\bot}$ under $\mathfrak{s}$ is the same as the probability of never leaving $\Sigma_{good}\cup\Sigma_{\bot}$ under $\mathfrak{s}^{\prime}$, as $\mathfrak{s}$ and $\mathfrak{s}^{\prime}$ agree until then. Furthermore, the probability measure of never leaving $\Sigma_{good}\cup\Sigma_{\bot}$ is just $\Pr_{\mathfrak{s}}(T_{\mathfrak{s}^{\prime}})=\Pr_{\mathfrak{s}^{\prime}}(T_{\mathfrak{s}^{\prime}})$. Since $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-p$ implies $\Pr_{\mathfrak{s}^{\prime}}(T_{\mathfrak{s}^{\prime}})\geq 1-p$, we get that, under $\mathfrak{s}$, the probability of leaving $\Sigma_{good}\cup\Sigma_{\bot}$ is upper bounded by $p$. Since this is true for any $\mathfrak{s}$, the lemma is proved. ∎
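In the nondeterminism-free special case, the partition in this proof is easy to compute and check. The following sketch (our own toy Markov chain, in which a non-terminating sink plays the role of $\Sigma_{bad}$ and $\Sigma_{neutral}$ is empty) verifies numerically that runs from a good state almost-surely either terminate or enter the bad states, so that $(\Sigma_{good}\cup\Sigma_{\bot},p)$ is a stochastic invariant with $p$ equal to the probability of entering the sink.

```python
import numpy as np

# Toy chain: state 0 = terminal, states 1 and 2 = good, state 3 = bad sink.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # terminal (absorbing)
    [0.5, 0.0, 0.5, 0.0],   # state 1: terminate or move to 2
    [0.0, 0.5, 0.0, 0.5],   # state 2: back to 1 or into the bad sink
    [0.0, 0.0, 0.0, 1.0],   # bad sink: never terminates
])

# Probabilities of terminating vs. entering the bad sink, from each state,
# by value iteration on the two reachability events.
term, leave = np.zeros(4), np.zeros(4)
term[0], leave[3] = 1.0, 1.0
for _ in range(10_000):
    for i in (1, 2):
        term[i] = P[i] @ term
        leave[i] = P[i] @ leave

print("P(terminate) from state 1:", round(term[1], 4))   # ~ 2/3
print("P(enter bad) from state 1:", round(leave[1], 4))  # ~ 1/3
print("sum:", round(term[1] + leave[1], 4))              # ~ 1.0, as required
```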
We do not show the relative completeness of this rule; nevertheless, it is easy to show using techniques discussed for prior rules.

### 5.2. Our Rule

We now show a sound and complete rule for lower bounds that fixes the prior Section 5.1. This rule is implicitly contained in the details of the erroneous proof of Chatterjee et al. (2022). It is principally similar to Section 4, in that it identifies finite sub-instances where prior rules can apply. It then combines the proofs of these sub-instances to deduce the desired lower bound. If, for all $n\in\mathbb{N}$, there are functions $\mathsf{SI}_{n}$ and $U_{n}$ that enable the application of Section 5.1 to deduce $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-(p+\frac{1}{n})$, then $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-p$.

Soundness of this rule follows trivially from the soundness of the prior Section 5.1, since the bounds $1-(p+\frac{1}{n})$ converge to $1-p$ as $n$ grows. The completeness of this rule is derived from the unrolling lemma; to reach a termination probability of $p$, the program must be able to amass a termination probability of $p-\frac{1}{n}$ within a finite subspace. Lemma 5.2 shows that finiteness can always be captured by the prior Section 5.1.

###### Lemma 5.3.

Section 5.2 is relatively complete.

###### Proof.

Let $\mathcal{G}$ be a $\mathsf{CFG}$ such that $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-p$ for some $p>0$. Fix some $n\in\mathbb{N}$. Let $k_{n}$ be the upper bound over the required simulation times across all schedulers to amass a termination probability of $(1-p)-1/n$. Denote by $\Sigma_{n}$ the set of states $\sigma$ such that there is a finite path of length at most $k_{n}$ beginning at $(l_{init},\mathbf{x}_{init})$ and ending at $\sigma$. Clearly, $\Sigma_{n}$ must be finite and, for a fixed scheduler $\mathfrak{s}$, the probability measure of the collection of terminating runs made up of states in $\Sigma_{n}$ consistent with $\mathfrak{s}$ must be $\geq(1-p)-1/n=1-(p+1/n)$. Additionally, observe that runs of $\mathcal{G}$ almost-surely either terminate inside $\Sigma_{n}$ or leave $\Sigma_{n}$. Hence, $(\Sigma_{n},p+1/n)$ is a stochastic invariant, and a suitable bounded variant $U_{n}$ can be extracted exactly as in the proof of Lemma 4.4; by the soundness of Section 5.1, we may then deduce $\Pr_{\mathrm{term}}(\mathcal{G})\geq 1-(p+1/n)$. Each stochastic invariant $(\Sigma_{n},p+1/n)$ can be transformed into SI-indicators $(\mathsf{SI}_{n},p+1/n)$ using prior techniques (Chatterjee et al., 2022). Representing each $\mathsf{SI}_{n}$ and $U_{n}$ in $\mathsf{Th}(\mathbb{Q})$ can be done using techniques described in the proof of Lemma 4.4. Hence, this rule is relatively complete. ∎

### 5.3. Counterexample to Completeness for Section 5.1

Chatterjee et al. (2022) claimed that their Section 5.1 is complete for all programs. As promised, we now demonstrate a counterexample to their claim of completeness.

Figure 2. Counterexample to the SI-rule for lower bounds. From the probabilistic location $l_{1}$, the program branches to $l_{2}$ and $l_{3}$ with probabilities $x_{1}/x_{2}$ and $1-(x_{1}/x_{2})$; at $l_{3}$, the updates $x_{1}\coloneqq x_{2}$; $x_{2}\coloneqq x_{2}+x_{3}$; $x_{3}\coloneqq x_{3}/2$ are performed before control returns to $l_{1}$. With an initial state of $(l_{1},(1,2,1/4))$, the termination probability of this program is $1/2$. However, there is no SI that shows this.

Section 5.1 for lower bounds can be applied to any $\mathsf{CFG}$ that induces a set of states $\mathbf{\Psi}$ with the property that executions remain within $\mathbf{\Psi}$ with exactly the probability of termination. As mentioned previously, not all programs are so well behaved. Consider the program $\mathcal{K}$ defined in Fig. 2. The initial location of $\mathcal{K}$ is $l_{1}$, and the initial values of the variables $(x_{1},x_{2},x_{3})$ are $(1,2,1/4)$.
$l_{1}$ is a probabilistic location, $l_{3}$ is an assignment location, and $l_{2}$ is a terminal location. It isn’t difficult to prove that the probability of termination of $\mathcal{K}$ is $1/2$; we leave the details to the diligent reader. The SI-rule for lower bounds requires a stochastic invariant $(\mathbf{\Psi}_{\mathcal{K}},1/2)$ such that executions almost-surely either terminate or exit $\mathbf{\Psi}_{\mathcal{K}}$.

###### Lemma 5.4.

There is no stochastic invariant $(\mathbf{\Psi}_{\mathcal{K}},1/2)$ of $\mathcal{K}$ such that runs almost-surely either terminate or leave $\mathbf{\Psi}_{\mathcal{K}}$.

###### Proof.

Suppose there does exist a stochastic invariant $(\mathbf{\Psi}_{\mathcal{K}},1/2)$ that satisfies these properties. Then the probability measure of the union of the collection of runs $\mathsf{Leave}_{\mathbf{\Psi}}$ that leave $\mathbf{\Psi}_{\mathcal{K}}$ and the runs $\mathsf{Term}_{\mathbf{\Psi}}$ that terminate inside $\mathbf{\Psi}_{\mathcal{K}}$ is $1$. However, because $(\mathbf{\Psi}_{\mathcal{K}},1/2)$ is a stochastic invariant, the probability measure of $\mathsf{Leave}_{\mathbf{\Psi}}$ is bounded above by $1/2$. This means that the measure of $\mathsf{Term}_{\mathbf{\Psi}}$ is bounded below by $1/2$. But the termination probability of $\mathcal{K}$ is $1/2$. Consequently, the measure of $\mathsf{Term}_{\mathbf{\Psi}}$ must be exactly $1/2$. This means $\mathsf{Term}_{\mathbf{\Psi}}$ contains all terminating runs of $\mathcal{K}$. It is easy to see that from any state $(l,\mathbf{x})$ reachable from the initial state, there is a finite path of length at most $2$ that leads to a terminal state. Therefore, all reachable states $(l,\mathbf{x})$ are a part of some terminating run, meaning that the set of states that make up the runs in $\mathsf{Term}_{\mathbf{\Psi}}$ must be the set of reachable states. This is only possible when $\mathbf{\Psi}_{\mathcal{K}}$ is the set of reachable states. This means no runs leave $\mathbf{\Psi}_{\mathcal{K}}$, and therefore, the measure of $\mathsf{Term}_{\mathbf{\Psi}}$ is $1$. This contradicts the fact that the termination probability of $\mathcal{K}$ is $1/2$. ∎

_A note on syntax._ The $\mathsf{CFG}$s of Chatterjee et al. (2022), over which the claim of completeness of Section 5.1 was made, do not feature fractional expressions guiding probabilistic branching. Nevertheless, such expressions can be simulated with small programs that only use the basic coin flip (Flajolet et al., 2011). Therefore, Fig. 2 is a valid counterexample to their claim.

## 6. Traveling Between Proof Systems

A new proof rule, ultimately, is interesting only if it can actually be used to prove the termination of many programs. In order to show that our proof rules, in addition to their theoretical properties, are also applicable in a variety of situations, we demonstrate that proofs in many existing proof systems can be compiled into our proof rules.

#### From McIver and Morgan (2005)

The variant functions from Section 3.1 immediately form the variant functions required in Section 3.2. Take $A$ to simply be the singleton containing the terminal state, and set $V$ to $0$ at the terminal state and $1$ everywhere else. This gives all we need to apply Section 3.2.
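For example (a toy instance of ours), take the loop that, while $x=1$, sets $x\coloneqq 0$ with probability $1/2$ and leaves it unchanged otherwise. The bounded variant $U(x)=x$ satisfies Section 3.1 with $\mathsf{Lo}=0$, $\mathsf{Hi}=1$, and $\epsilon=1/2$. The compilation above takes $A=\\{x=0\\}$, $V(0)=0$, and $V(1)=1$; this $V$ is trivially a supermartingale outside $A$, every sublevel set carries a bounded range of $U$-values, and $U$ decreases with probability $1/2$ at the sole probabilistic state, so Section 3.2 applies verbatim.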
#### From McIver et al. (2018)

The $\mathsf{AST}$ proof rule proposed by McIver et al. (2018) has been applied to a variety of programs, and has been shown to be theoretically applicable to the 2D random walker. Applications of their rule effectively require the construction of a distance variant that is also a supermartingale. We note that their variants can be reused in Section 3.2, with little alteration, as both the supermartingale and the variant functions. This means proofs in their rule can easily be translated to proofs that use Section 3.2.

#### From Section 3.2 to Section 4

Hidden in the proof of the soundness of Section 3.2 are the stochastic invariants that form the basis of Section 4. Fix a $p$, and take the set $\mathbf{\Psi}_{\sigma}=\\{\gamma\in\mathsf{Inv}\mid V(\gamma)\leq v_{\mathrel{\shortuparrow}}\\}$ for a sufficiently high value of $v_{\mathrel{\shortuparrow}}$ to yield an upper bound of $p$ on the probability of exiting $\mathbf{\Psi}_{\sigma}$. Then, expand $\mathbf{\Psi}_{\sigma}$ with the states necessary to keep all shortest consistent terminal runs across schedulers from states in $\mathbf{\Psi}_{\sigma}$ entirely within $\mathbf{\Psi}_{\sigma}$. In spite of these extensions, the value of $V$ remains bounded when restricted to states in $\mathbf{\Psi}_{\sigma}$. It is then trivial to build the indicator functions from each stochastic invariant $(\mathbf{\Psi}_{\sigma},p)$ and the variant functions from $U$, completing a translation from Section 3.2 to Section 4. Note that using this technique, one can translate proofs from McIver et al. (2018) and McIver and Morgan (2005) to Section 4 as well.

#### Using Guard Strengthening (Feng et al., 2023)

Feng et al. (2023) have demonstrated Guard Strengthening as a technique for proving lower bounds on, among other properties, the termination probability of deterministic probabilistic programs. In principle, their technique can be translated to apply to $\mathsf{CFG}$s as follows: strengthen each transition guard by a suitable predicate $\varphi$ and add self loops at all locations guarded by $\lnot\varphi$. We observe that guards can succinctly overapproximate the sets of states forming finite-state stochastic invariants: simply take the highest and lowest values each variable can take while remaining in the invariant, and form a predicate that limits each variable within these bounds. Thus, guards serve as convenience mechanisms for the application of Section 5.2. Additionally, observe that when proving lower bounds on termination probabilities, relevant finite-state stochastic invariants $\mathbf{\Psi}_{\sigma}$ for each reachable state $\sigma$ are guaranteed to exist. Therefore, each stochastic invariant in Section 5.2 can be more succinctly represented using guards.

#### Using Stochastic Invariants (Chatterjee et al., 2022)

Separately, Chatterjee et al. (2022) have shown the applicability of Section 5.1 for demonstrating lower bounds on the termination probabilities of a variety of programs, and have also presented template-based synthesis techniques for achieving limited completeness. These proofs are also valid for Section 5.2: simply set the same stochastic invariant for each $n$.

## 7. Conclusion

We have presented the first sound and relatively complete proof rules for qualitative and quantitative termination of probabilistic programs with bounded probabilistic and nondeterministic choice. Our proof rules combine the familiar ingredients of supermartingales and variant functions in novel ways to reach completeness. We have demonstrated relative completeness of our rules in the assertion language of arithmetic.
Our rules are able to accommodate existing proof techniques in the literature with minimal effort, thus demonstrating their applicability.

## References

* Apt (1981) Krzysztof R. Apt. 1981. Ten Years of Hoare’s Logic: A Survey - Part 1. _ACM Trans. Program. Lang. Syst._ 3, 4 (1981), 431–483. https://doi.org/10.1145/357146.357150
* Apt and Kozen (1986) Krzysztof R. Apt and Dexter Kozen. 1986. Limits for Automatic Verification of Finite-State Concurrent Systems. _Inf. Process. Lett._ 22, 6 (1986), 307–309. https://doi.org/10.1016/0020-0190(86)90071-2
* Apt and Plotkin (1986) Krzysztof R. Apt and Gordon D. Plotkin. 1986. Countable nondeterminism and random assignment. _J. ACM_ 33, 4 (1986), 724–767. https://doi.org/10.1145/6490.6494
* Avanzini et al. (2020) Martin Avanzini, Ugo Dal Lago, and Akihisa Yamada. 2020. On probabilistic term rewriting. _Sci. Comput. Program._ 185 (2020). https://doi.org/10.1016/j.scico.2019.102338
* Baier and Katoen (2008) Christel Baier and Joost-Pieter Katoen. 2008. _Principles of model checking_. MIT Press.
* Batz et al. (2021) Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. 2021. Relatively complete verification of probabilistic programs: an expressive language for expectation-based reasoning. _Proc. ACM Program. Lang._ 5, POPL (2021), 1–30. https://doi.org/10.1145/3434320
* Bianco and de Alfaro (1995) Andrea Bianco and Luca de Alfaro. 1995. Model Checking of Probabilistic and Nondeterministic Systems. In _Foundations of Software Technology and Theoretical Computer Science, 15th Conference, Bangalore, India, December 18-20, 1995, Proceedings_ _(Lecture Notes in Computer Science, Vol. 1026)_, P. S. Thiagarajan (Ed.). Springer, 499–513. https://doi.org/10.1007/3-540-60692-0_70
* Bournez and Garnier (2005) Olivier Bournez and Florent Garnier. 2005. Proving Positive Almost-Sure Termination. In _Term Rewriting and Applications, 16th International Conference, RTA 2005, Nara, Japan, April 19-21, 2005, Proceedings_ _(Lecture Notes in Computer Science, Vol. 3467)_, Jürgen Giesl (Ed.). Springer, 323–337. https://doi.org/10.1007/978-3-540-32033-3_24
* Chakarov and Sankaranarayanan (2013) Aleksandar Chakarov and Sriram Sankaranarayanan. 2013. Probabilistic Program Analysis with Martingales. In _Computer Aided Verification - 25th International Conference, CAV 2013, Saint Petersburg, Russia, July 13-19, 2013. Proceedings_ _(Lecture Notes in Computer Science, Vol. 8044)_, Natasha Sharygina and Helmut Veith (Eds.). Springer, 511–526. https://doi.org/10.1007/978-3-642-39799-8_34
* Chatterjee et al. (2022) Krishnendu Chatterjee, Amir Kafshdar Goharshady, Tobias Meggendorfer, and Dorde Zikelic. 2022. Sound and Complete Certificates for Quantitative Termination Analysis of Probabilistic Programs. In _Computer Aided Verification - 34th International Conference, CAV 2022, Haifa, Israel, August 7-10, 2022, Proceedings, Part I_ _(Lecture Notes in Computer Science, Vol. 13371)_, Sharon Shoham and Yakir Vizel (Eds.). Springer, 55–78. https://doi.org/10.1007/978-3-031-13185-1_4
* Chatterjee et al. (2017) Krishnendu Chatterjee, Petr Novotný, and Dorde Zikelic. 2017. Stochastic invariants for probabilistic termination. In _Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017_, Giuseppe Castagna and Andrew D. Gordon (Eds.). ACM, 145–160. https://doi.org/10.1145/3009837.3009873
* Cook (1978) Stephen A. Cook. 1978. Soundness and Completeness of an Axiom System for Program Verification. _SIAM J. Comput._ 7, 1 (1978), 70–90. https://doi.org/10.1137/0207005
* Courcoubetis and Yannakakis (1995) Costas Courcoubetis and Mihalis Yannakakis. 1995. The Complexity of Probabilistic Verification. _J. ACM_ 42, 4 (1995), 857–907. https://doi.org/10.1145/210332.210339
* de Alfaro and Henzinger (2000) Luca de Alfaro and Thomas A. Henzinger. 2000. Concurrent Omega-Regular Games. In _15th Annual IEEE Symposium on Logic in Computer Science, Santa Barbara, California, USA, June 26-29, 2000_. IEEE Computer Society, 141–154. https://doi.org/10.1109/LICS.2000.855763
* de Alfaro et al. (2007) Luca de Alfaro, Thomas A. Henzinger, and Orna Kupferman. 2007. Concurrent reachability games. _Theor. Comput. Sci._ 386, 3 (2007), 188–217. https://doi.org/10.1016/J.TCS.2007.07.008
* den Hartog and de Vink (2002) Jerry den Hartog and Erik P. de Vink. 2002. Verifying Probabilistic Programs Using a Hoare Like Logic. _Int. J. Found. Comput. Sci._ 13, 3 (2002), 315–340. https://doi.org/10.1142/S012905410200114X
* Dijkstra (1976) Edsger W. Dijkstra. 1976. _A Discipline of Programming_. Prentice-Hall. https://www.worldcat.org/oclc/01958445
* Doob (1953) J. L. Doob. 1953. _Stochastic processes_. John Wiley & Sons, New York. viii+654 pages. MR 15,445b. Zbl 0053.26802.
* Feng et al. (2023) Shenghua Feng, Mingshuai Chen, Han Su, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Naijun Zhan. 2023. Lower Bounds for Possibly Divergent Probabilistic Programs. _Proc. ACM Program. Lang._ 7, OOPSLA1 (2023), 696–726. https://doi.org/10.1145/3586051
* Fioriti and Hermanns (2015) Luis María Ferrer Fioriti and Holger Hermanns. 2015. Probabilistic Termination: Soundness, Completeness, and Compositionality. In _Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2015, Mumbai, India, January 15-17, 2015_, Sriram K. Rajamani and David Walker (Eds.). ACM, 489–501. https://doi.org/10.1145/2676726.2677001
* Flajolet et al. (2011) Philippe Flajolet, Maryse Pelletier, and Michèle Soria. 2011. On Buffon Machines and Numbers. In _Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011_, Dana Randall (Ed.). SIAM, 172–183. https://doi.org/10.1137/1.9781611973082.15
* Floyd (1993) Robert W. Floyd. 1993. _Assigning Meanings to Programs_. Springer Netherlands, Dordrecht, 65–81. https://doi.org/10.1007/978-94-011-1793-7_4
* Foster (1951) F.G. Foster. 1951. Markov chains with an enumerable number of states and a class of cascade processes. _Math. Proc. Cambridge Philos. Soc._ 47 (1951), 77–85.
* Foster (1953) F.G. Foster. 1953. On the stochastic matrices associated with certain queuing processes. _Ann. Math. Statistics_ 24 (1953), 355–360.
* Fu and Chatterjee (2019) Hongfei Fu and Krishnendu Chatterjee. 2019. Termination of Nondeterministic Probabilistic Programs. In _Verification, Model Checking, and Abstract Interpretation - 20th International Conference, VMCAI 2019, Cascais, Portugal, January 13-15, 2019, Proceedings_ _(Lecture Notes in Computer Science, Vol. 11388)_, Constantin Enea and Ruzica Piskac (Eds.). Springer, 468–490. https://doi.org/10.1007/978-3-030-11245-5_22
* Harel (1980) David Harel. 1980. Proving the Correctness of Regular Deterministic Programs: A Unifying Survey Using Dynamic Logic. _Theor. Comput. Sci._ 12 (1980), 61–81. https://doi.org/10.1016/0304-3975(80)90005-5
* Harel et al. (2000) David Harel, Dexter Kozen, and Jerzy Tiuryn. 2000. _Dynamic Logic_. MIT Press.
* Hart et al. (1983) Sergiu Hart, Micha Sharir, and Amir Pnueli. 1983. Termination of Probabilistic Concurrent Programs. _ACM Trans. Program. Lang. Syst._ 5, 3 (1983), 356–380. https://doi.org/10.1145/2166.357214
* Hitchcock and Park (1972) Peter Hitchcock and David Michael Ritchie Park. 1972. Induction Rules and Termination Proofs. In _Automata, Languages and Programming, Colloquium, Paris, France, July 3-7, 1972_, Maurice Nivat (Ed.). North-Holland, Amsterdam, 225–251.
* Huang et al. (2018) Mingzhang Huang, Hongfei Fu, and Krishnendu Chatterjee. 2018. New Approaches for Almost-Sure Termination of Probabilistic Programs. In _Programming Languages and Systems - 16th Asian Symposium, APLAS 2018, Wellington, New Zealand, December 2-6, 2018, Proceedings_ _(Lecture Notes in Computer Science, Vol. 11275)_, Sukyoung Ryu (Ed.). Springer, 181–201. https://doi.org/10.1007/978-3-030-02768-1_11
* Kaminski et al. (2019) Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. 2019. On the hardness of analyzing probabilistic programs. _Acta Informatica_ 56, 3 (2019), 255–285. https://doi.org/10.1007/s00236-018-0321-1
* Kozen (2006) Dexter Kozen. 2006. _Theory of Computation_. Springer. https://doi.org/10.1007/1-84628-477-5
* Majumdar and Sathiyanarayana (2023) Rupak Majumdar and V. R. Sathiyanarayana. 2023. Positive Almost-Sure Termination - Complexity and Proof Rules. _CoRR_ abs/2310.16145 (2023). https://doi.org/10.48550/ARXIV.2310.16145 arXiv:2310.16145
* Majumdar and Sathiyanarayana (2024) Rupak Majumdar and V. R. Sathiyanarayana. 2024. Positive Almost-Sure Termination: Complexity and Proof Rules. _Proc. ACM Program. Lang._ 8, POPL (2024), 1089–1117. https://doi.org/10.1145/3632879
* Manna and Pnueli (1974) Zohar Manna and Amir Pnueli. 1974. Axiomatic Approach to Total Correctness of Programs. _Acta Informatica_ 3 (1974), 243–263. https://doi.org/10.1007/BF00288637
* McIver and Morgan (2005) Annabelle McIver and Carroll Morgan. 2005. _Abstraction, Refinement and Proof for Probabilistic Systems_. Springer. https://doi.org/10.1007/B138392
* McIver et al. (2018) Annabelle McIver, Carroll Morgan, Benjamin Lucien Kaminski, and Joost-Pieter Katoen. 2018. A new proof rule for almost-sure termination. _Proc. ACM Program. Lang._ 2, POPL (2018), 33:1–33:28. https://doi.org/10.1145/3158121
* Menshikov et al. (2017) Mikhail Menshikov, Serguei Popov, and Andrew Wade. 2017. _Non-homogeneous random walks: Lyapunov function methods for near critical stochastic systems_. Cambridge University Press.
* Mertens et al. (1978) Jean-François Mertens, Ester Samuel-Cahn, and Shmuel Zamir. 1978. Necessary and Sufficient Conditions for Recurrence and Transience of Markov Chains, in Terms of Inequalities. _Journal of Applied Probability_ 15, 4 (1978), 848–851. http://www.jstor.org/stable/3213440
* Pólya (1921) George Pólya. 1921. Über eine aufgabe betreffend die irrfahrt im strassennetz. _Math. Ann._ 84 (1921), 149–160.
* Popov (2021) Serguei Popov. 2021. _Two-Dimensional Random Walk: From Path Counting to Random Interlacements_. Cambridge University Press. https://doi.org/10.1017/9781108680134
* Robinson (1949) Julia Robinson. 1949. Definability and Decision Problems in Arithmetic. _J. Symb. Log._ 14, 2 (1949), 98–114. https://doi.org/10.2307/2266510
* Rogers Jr. (1987) Hartley Rogers Jr. 1987. _Theory of recursive functions and effective computability (Reprint from 1967)_. MIT Press. https://mitpress.mit.edu/9780262680523/theory-of-recursive-functions-and-effective-computability/
* Takisaka et al. (2021) Toru Takisaka, Yuichiro Oyabu, Natsuki Urabe, and Ichiro Hasuo. 2021. Ranking and Repulsing Supermartingales for Reachability in Randomized Programs. _ACM Trans. Program. Lang. Syst._ 43, 2 (2021), 5:1–5:46. https://doi.org/10.1145/3450967
* Turing (1937) Alan M. Turing. 1937. On computable numbers, with an application to the Entscheidungsproblem. _Proc. London Math. Soc._ s2-42, 1 (1937), 230–265. https://doi.org/10.1112/PLMS/S2-42.1.230
* Vardi (1985) Moshe Y. Vardi. 1985. Automatic Verification of Probabilistic Concurrent Finite-State Programs. In _26th Annual Symposium on Foundations of Computer Science, Portland, Oregon, USA, 21-23 October 1985_. IEEE Computer Society, 327–338. https://doi.org/10.1109/SFCS.1985.12
# Digitized-Counterdiabatic Quantum Algorithm for Protein Folding

Pranav Chandarana Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain EHU Quantum Center, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Biscay, Spain Narendra N. Hegade Kipu Quantum, Greifswalderstrasse 226, 10405 Berlin, Germany International Center of Quantum Artificial Intelligence for Science and Technology (QuArtist) and Physics Department, Shanghai University, 200444 Shanghai, China Iraitz Montalban Kipu Quantum, Greifswalderstrasse 226, 10405 Berlin, Germany Department of Physics, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Biscay, Spain Enrique Solano Kipu Quantum, Greifswalderstrasse 226, 10405 Berlin, Germany International Center of Quantum Artificial Intelligence for Science and Technology (QuArtist) and Physics Department, Shanghai University, 200444 Shanghai, China IKERBASQUE, Basque Foundation for Science, Plaza Euskadi 5, 48009 Bilbao, Spain Xi Chen Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain EHU Quantum Center, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Biscay, Spain

###### Abstract

We propose a hybrid classical-quantum digitized-counterdiabatic algorithm to tackle the protein folding problem on a tetrahedral lattice. Digitized-counterdiabatic quantum computing is a paradigm developed to compress quantum algorithms via the digitization of the counterdiabatic acceleration of a given adiabatic quantum computation. Finding the lowest-energy configuration of an amino acid sequence is an NP-hard optimization problem that plays a prominent role in chemistry, biology, and drug design. We outperform state-of-the-art quantum algorithms using problem-inspired and hardware-efficient variational quantum circuits. We apply our method to proteins with up to 9 amino acids, using up to 17 qubits on quantum hardware. Specifically, we benchmark our quantum algorithm on Quantinuum's trapped-ion systems and on Google's and IBM's superconducting circuits, obtaining high success probabilities with low-depth circuits as required in the NISQ era.

## Introduction

Variational quantum algorithms (VQAs) have been proposed to solve problems with noisy qubits in near-term quantum computers [1]. VQAs are hybrid classical-quantum algorithms that optimize a cost function containing information about the solution. The quantum part of a VQA consists of a parameterized quantum circuit (PQC), also known as a circuit ansatz, that produces trial quantum states. The classical part consists of an optimization routine that searches for the optimal parameters that solve the problem. The choice of PQC affects the performance of a VQA to a great extent. PQCs are broadly divided into two categories: problem-inspired and hardware-efficient. A problem-inspired ansatz utilizes the properties of the problem Hamiltonian to efficiently reach the expected state. A hardware-efficient ansatz uses information about the device connectivity to reduce the noise caused by deep circuits and unimplementable connections. Some examples of problem-inspired ansätze are the Quantum Approximate Optimization Algorithm (QAOA) [2], the unitary coupled-cluster ansatz [3], and the Hamiltonian variational ansatz [4, 5]. On the other hand, some noteworthy hardware-efficient ansätze can be found in Refs. [6, 7].
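To make this division of labor concrete, the following self-contained NumPy sketch runs a miniature VQA: a toy two-qubit hardware-efficient circuit ($R_{y}$ layers around a CNOT) is optimized against a toy Ising Hamiltonian using parameter-shift gradients. The Hamiltonian coefficients, circuit layout, learning rate, and step count are illustrative assumptions, not taken from this work.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(t):
    # Ry(t) = exp(-i t Y / 2)
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Y

# Toy 2-qubit problem Hamiltonian (hypothetical coefficients).
H = 0.5 * np.kron(Z, Z) + 0.3 * np.kron(Z, I2)

def state(th):
    psi = np.full(4, 0.5, dtype=complex)         # |++> initial state
    psi = np.kron(ry(th[0]), ry(th[1])) @ psi    # first Ry layer
    psi = CNOT @ psi                             # entangler
    return np.kron(ry(th[2]), ry(th[3])) @ psi   # second Ry layer

def cost(th):
    psi = state(th)
    return float(np.real(psi.conj() @ H @ psi))  # <psi|H|psi>

def gradient(th):
    # Parameter-shift rule: dC/dt_k = [C(t_k + pi/2) - C(t_k - pi/2)] / 2.
    g = np.zeros_like(th)
    for k in range(len(th)):
        plus, minus = th.copy(), th.copy()
        plus[k] += np.pi / 2
        minus[k] -= np.pi / 2
        g[k] = 0.5 * (cost(plus) - cost(minus))
    return g

th = np.random.default_rng(0).uniform(0, 2 * np.pi, 4)
for _ in range(200):            # plain gradient descent as the classical part
    th -= 0.2 * gradient(th)
print(cost(th))  # should approach the ground-state energy (-0.8), up to local minima
```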
Implementing VQAs is challenging due to shot noise, measurement noise, and other noise sources. It has also been shown that VQAs suffer from barren plateaus, where the gradients vanish with increasing system size [8, 9, 10]. Generally, VQAs with hardware-efficient ansätze suffer from this challenge due to their high expressibility at larger depths; hence the use of a problem-inspired ansatz is motivated. In a problem-inspired ansatz, the limited search space results in lower expressibility and higher trainability. That being said, problem-inspired ansätze usually involve large circuit depths, so experimental implementation becomes infeasible on available noisy devices. Thus, a good circuit ansatz has to be expressible enough to contain the solution, but not so expressible that it becomes untrainable.

Recently, several works have reported the use of digitized-counterdiabatic quantum computation (DCQC) to improve and compress state-of-the-art quantum algorithms. These methods utilize counterdiabatic (CD) protocols to accelerate given adiabatic quantum algorithms to generate many-body ground states [11] and QAOA [12]. Furthermore, they have also shown drastic improvements in industrial applications, like portfolio optimization [13] and integer factorization [14]. These methods have some difficulties, like finding suitable initial parameters and optimal CD terms. To address these challenges, a meta-learning technique was proposed recently to find suitable initial parameters [15]. The choice of optimal CD terms may be tackled by machine learning methods like reinforcement learning [16] and Monte Carlo tree search [17].

CD protocols stem from the field of shortcuts to adiabaticity, which was developed to accelerate quantum adiabatic processes. Among many methods, like fast-forward [18, 19] and invariant-based engineering [20, 21], CD driving [22, 23, 24] has been of prominent interest over the years for studying many-body quantum systems. Apart from shortcuts to adiabaticity, other quantum control protocols like quantum optimal control (QOC) have also been studied in the context of VQAs [25, 26], and VQAs have in turn been implemented to find optimal control sequences [27].

In this article, we develop a hybrid classical-quantum digitized-counterdiabatic algorithm to tackle the protein folding problem. It consists of a PQC inspired by CD protocols and a classical optimization routine for parameter optimization. Proteins are macromolecules consisting of a large chain of amino acid residues; they perform many vital functions in organisms, like DNA replication and catalyzing metabolic reactions. Knowledge of how proteins fold is crucial in understanding enzymes, and the mechanics of folding may unravel remedies for diseases like Alzheimer's, Huntington's, and Parkinson's that are induced by protein misfolding. With its combinatorially increasing solution space, the protein folding problem is highly complex for classical computation, which suggests the potential use of quantum computers. Generally, protein folding is modeled on a suitable 2D or 3D lattice, and the amino acids are placed such that the interaction energy is minimized. With proper encoding schemes, this problem can be converted into a problem Hamiltonian whose ground state encodes the configuration of the protein on the given lattice. Over the last decade, numerous attempts have been made to tackle this problem with quantum computing. For instance, Perdomo et al.
[28] studied a hydrophobic-polar (HP) model on a 2D lattice and later also studied a coarse-grained model (3D lattice) with quantum annealing [29]. Babbush et al. [30] also studied the protein folding problem with turn encoding in quantum adiabatic algorithms. Babej et al. [31] studied this problem using the quantum alternating operator ansatz [32]. Recently, a resource-efficient version of the same problem was considered by Robert et al. [6], with an experimental demonstration on an IBM superconducting device.

To address this problem, we propose a hybrid digitized-counterdiabatic algorithm that includes a PQC we call the "CD-inspired" ansatz. While being problem-inspired, this ansatz is also hardware-implementable and has a parameterization that scales as $\mathcal{O}(N^{2})$, where $N$ is the number of qubits. We benchmark its performance against state-of-the-art problem-inspired as well as hardware-efficient ansätze. In addition, we perform experiments with system sizes up to 17 qubits using several quantum hardware platforms with different connectivities and native gates.

Figure 1: Schematic diagram of the different types of ansatz with $p=1$ layer: (a) the hardware-efficient ansatz (HEA), which uses no information from the problem; (b) the CD-inspired ansatz proposed in this work; and (c) QAOA, which is a problem-inspired ansatz. Horizontal lines show qubit registers initialized in the $\ket{+}^{\otimes N}$ state. In the HEA, there are parameterized $Y$ rotations followed by nearest-neighbor entangling gates, then again parameterized $Y$ rotations. In QAOA, we have the Hamiltonian term $U_{c}(\gamma)$ and the mixer term $U_{b}(\beta)$. For the CD-inspired ansatz, we have parameterized $Y$ rotations followed by $YZ(\theta)$ rotations. In all cases, the cost function $C$ (Eq. (3)) is computed, and the parameters are updated using gradient-based classical optimizers until $C$ is minimized. From left to right, the implementation difficulty increases.

## Results

CD-inspired ansatz. In this section, we discuss the construction of the CD-inspired ansatz for the algorithm and study its performance. We begin by considering quantum adiabatic evolution with the counterdiabatic protocol Hamiltonian $H_{cd}$ given by

$H_{cd}(t)=(1-\lambda(t))H_{mixer}+\lambda(t)H+\dot{\lambda}(t)A_{\lambda},$ (1)

where $H_{mixer}$ is a Hamiltonian whose ground state is easy to prepare, $\lambda(t)$ is a scheduling function with boundary conditions $\lambda(0)=0$ and $\lambda(T)=1$, $T$ is the total evolution time, and $A_{\lambda}$ is the approximate CD term, calculated using the nested commutator (NC) method (see Methods .1). Lower orders of $A_{\lambda}$ give approximate CD terms, while $l\to\infty$ gives the exact CD term. When working in the adiabatic regime, the scheduling function $\lambda(t)$ should vary slowly enough to satisfy the adiabatic theorem, so as to reach the ground state of the target Hamiltonian with high probability. With the CD term, however, this condition is lifted, as non-adiabatic transitions can be suppressed [33]; for $|\dot{\lambda}|\to 0$, we recover the adiabatic Hamiltonian. Now consider a $\lambda(t)$ that satisfies $|\dot{\lambda}(t)|\gg|\lambda(t)|$. In this scenario, the evolution will be almost entirely non-adiabatic, and most of the contribution will come from the $A_{\lambda}$ term.
In theory, this evolution should also be successful, but it would require the calculation of the exact CD term, which is a challenging task, as information on all the spectral properties of the Hamiltonian becomes necessary. In DC-QAOA [12], Eq. (1) is digitized with approximate CD terms to get a faster evolution. Instead of using actual scheduling functions, we rely on classical optimization routines to optimize the Trotterized evolution toward the ground state. Under the assumption that a scenario as described above exists, we can drop the first two terms of Eq. (1) and consider only the CD terms. This is the intuitive motivation behind the CD-inspired ansatz: instead of implementing the full evolution, we implement only the contributions from the digitized CD term as a parameterized circuit and let the classical optimization take care of steering the evolution toward the ground state. Thus, the CD-inspired ansatz has the form

$U_{cd}(\theta)=e^{-i\theta A},$ (2)

where $A\in A_{\lambda}$ and $A_{\lambda}$ is the set of all the terms computed from the NC method. This is advantageous for VQAs: dropping most of the terms from the ansatz makes it implementable on near-term devices, and since these algorithms aim to find approximate solutions, the ansatz should still lead to good solutions if a suitable optimization strategy is used. This claim can also be backed by QOC theory, where the dynamical Lie algebra of non-commuting control Hamiltonians can be computed in order to understand the performance of the ansatz [34]. Moreover, since the terms in the CD operator pool $A_{\lambda}$ are calculated by executing only odd commutators (see Methods .1), the method yields a relatively compact pool of operators. Thus, the NC method provides us with a way to truncate the full expansion and then select the appropriate terms from the reduced pool of operators. Apart from that, an important task is to choose the parameterization of the ansatz. There are several ways to accomplish this, for instance by studying the symmetries of the problem Hamiltonian [35]. Given that we rely on a few terms from the NC method, it is beneficial for each term to have its own free parameter, increasing the degrees of freedom of the ansatz.

Summing up, the algorithm consists of a PQC initialized in the ground state of $H_{mixer}$. We then choose a pool of operators obtained from Eq. (2) to construct the ansatz, where each term has its own free parameter to be optimized by a classical optimizer. The number of free parameters depends on the number of interaction terms in the Hamiltonian. A schematic diagram of the hardware-efficient ansatz, the CD-inspired ansatz, and a problem-inspired ansatz is shown in Fig. 1.

Figure 2: Success probability for 20 randomly initialized instances of 6-, 7-, and 8-amino-acid proteins with $N=6$, $N=9$, and $N=13$ qubits, respectively, using the CD-inspired ansatz with $p=1$ layer. A maximum of 500 iteration steps was allowed, and the gradient-based optimizers Adam and Adagrad were used for the classical optimization.

Performance analysis. In this section, we study the application of the CD-inspired ansatz to various proteins with different numbers of amino acids.
These include the amyloid-beta peptide sequence (KLVFFA), which translates to a 6-qubit system; the neuropeptide alpha bag cell peptide (APRLRFY), which translates to a 9-qubit system; a cyclic peptide inhibitor (AVDINNNA), which translates to a 13-qubit system; and oxytocin (CYIQNCPLG), which translates to a 17-qubit system. Each letter represents an amino acid (for example, A: alanine, C: cysteine, D: aspartic acid), and we use the Miyazawa–Jernigan (MJ) interactions [36]. In each case, we generate a 5-local Ising Hamiltonian (see Methods .2) whose ground state encodes the protein configuration, and we aim to reach it by minimizing the expectation value. By considering the NC commutator in Eq. (7) with $l=2$, we obtain a set of operators with increasing locality. We truncate these to at most two-body terms, which results in $U_{cd}=\{Y,YZ,ZY,XY,YX\}$. Here, each term denotes the exponentiation of the corresponding Pauli string, for example $XY=e^{-i\sum_{i,j}J_{ij}\sigma_{i}^{x}\sigma_{j}^{y}}$. Out of these, we select $\{Y,YZ\}$, where $Y=e^{-i\sum_{m}\sigma_{m}^{y}}$ with $m=0,...,N$, and $YZ=e^{-i\sum_{i,j}J_{ij}\sigma_{i}^{y}\sigma_{j}^{z}}$, where $(i,j)$ are the two-body interaction sites of the Hamiltonian with coefficients $J_{ij}$ (Fig. 1(b)). The number of two-body terms $N_{2loc}\leq N(N-1)/2$ makes this ansatz hardware-efficient, in the sense that its experimental implementation is much more feasible. Regarding the optimizable parameters, each applied gate has its own free parameter. Hence, the number of parameters per layer is $R=N_{2loc}+N$, so the parameter count scales as $\mathcal{O}(N^{2})$. In Appendix A, we show the parameterization as a function of system size for various ansätze. For the classical optimization part, we implement the stochastic gradient-descent-based optimizers Adam [37] and Adagrad [38], where the gradients are computed using the parameter-shift rule [39]. For each protein, we run the algorithm 20 times with random initial parameters for the $p=1$ layer. To quantify the performance of the algorithm, we use the success probability as a metric, i.e., the probability of obtaining the ground state at the end of the algorithm (a short code sketch below illustrates the bookkeeping). We also study the expectation values as a function of the iteration steps to understand the convergence.

Figure 3: Energy as a function of iterations for the $N=13$ qubit protein AVDINNNA. Results show simulator data for 100 iterations using the Adam optimizer, comparing QAOA and the CD-inspired ansatz. Dashed lines display the average energy of the 20 instances considered. Crosses show the energy variation of the best instance. Shaded regions indicate the standard deviation. The green line shows the exact ground state energy.

To benchmark the performance, we ran the $p=1$ algorithm as a function of the number of instances, each with a randomly initialized ansatz; the success probabilities are shown in Fig. 2. The ansatz performs extremely well for all the cases we investigate, but with increasing system size the number of successful instances decreases. This is because the solution space grows drastically with system size, making it difficult to find a global solution. For the numerical simulations, we set the maximum number of iterations to 500 with a tolerance of $10^{-6}$, in the sense that the iterations stop once this tolerance is reached.
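The success probabilities reported in Fig. 2 amount to simple bookkeeping over measurement outcomes. A minimal Python sketch, with hypothetical bitstrings and shot counts; `ground_bitstrings` would hold the (possibly degenerate) exact ground states obtained from classical diagonalization:

```python
from collections import Counter

def success_probability(counts, ground_bitstrings):
    # Fraction of shots whose measured bitstring is an exact ground state.
    shots = sum(counts.values())
    hits = sum(n for bits, n in counts.items() if bits in ground_bitstrings)
    return hits / shots

# Hypothetical example: 1000 shots, doubly degenerate ground state.
counts = Counter({"101101": 430, "101110": 410, "000000": 160})
print(success_probability(counts, {"101101", "101110"}))  # -> 0.84
```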
The ‘unsuccessful’ instances show almost zero success probability. This behavior can be attributed to the fact that the spectrum of energy eigenstates is densely packed and has many degenerate excited states, so there is a finite probability of the output state getting stuck in the lower excited states. These results were computed with the intent of checking how efficiently the algorithm can reach the exact ground state. However, hybrid classical-quantum algorithms usually aim at finding approximate solutions near the ground state. The observed trend of decreasing successful instances with increasing system size shows that scaling to higher system sizes might require good initialization techniques to reach the exact ground state.

To evaluate the CD-inspired ansatz against state-of-the-art quantum algorithms, we begin with a performance comparison against a well-known problem-inspired ansatz: QAOA. To do so, we studied the convergence of 20 randomly initialized instances with $p=1$ layer for the $N=13$ qubit protein AVDINNNA; the results are shown in Fig. 3. The average energy with its standard deviation and the best instance (in terms of convergence) are plotted as functions of the iteration steps and compared with the exact ground state energy. Indeed, the performance of the CD-inspired ansatz is much better than that of QAOA. This enhancement can be attributed to the difference in the number of optimizable parameters between the two algorithms (tallied in the sketch below). QAOA's two parameters per layer result in low expressibility, and to counter this challenge a high-$p$ ansatz is required. Considering that the QAOA ansatz at $p=1$ already contains five-body terms, increasing the number of layers would be problematic, as two-qubit gate errors would accumulate and the circuit depth would also increase, leading to decoherence. Apart from this, the standard deviation of the average energy of the CD-inspired ansatz is initially high but decreases as the iteration steps increase. Hence, the algorithm leads to states close to the ground state independent of the initial parameters chosen. Lastly, the algorithm reaches approximate solutions in a relatively low number of iteration steps. This behavior is useful in the hardware setting, as it reduces the run-time of the algorithm.

Figure 4: Energy as a function of iterations for the $N=17$ qubit system, comparing the CD-inspired ansatz and the hardware-efficient ansatz. Numerical simulations were performed for 100 iterations with the Adam optimizer. Dashed lines show the average energy of the 20 instances considered, and crosses show the best instance. The inset plot shows the minimum energy achieved during the optimization of the best instance. The green line shows the exact ground state.

After comparing to a problem-inspired ansatz, we compare the CD-inspired ansatz with a hardware-efficient ansatz (HEA). To do so, we implemented a circuit with parameterized $Y$ rotations applied to all qubits, followed by CNOTs between cyclic nearest neighbors for entanglement, followed again by parameterized $Y$ rotations on all qubits (Fig. 1(a)). We study the nine-amino-acid protein CYIQNCPLG, which translates to an $N=17$ qubit system, comparing both the average energy and the best instance for the $p=1$ algorithm; the results are shown in Fig. 4. The inset plot shows the minimum energy obtained during the optimization of the best instance. We observe that the minimum of the best CD-inspired ansatz instance is much closer to the exact ground-state energy than that of the HEA.
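For reference, a small sketch tallying the parameters per layer of the three ansätze of Fig. 1, using the interaction counts quoted later for the hardware runs ($N_{2loc}=52$ for $N=13$ and $N_{2loc}=80$ for $N=17$); the helper names are ours:

```python
def cd_inspired_params(n_qubits, n_two_local, layers=1):
    # One angle per Ry rotation plus one per YZ term: R = N + N_2loc per layer.
    return layers * (n_qubits + n_two_local)

def hea_params(n_qubits, layers=1):
    # Two parameterized Ry layers per HEA layer (Fig. 1(a)); CNOTs carry no angle.
    return layers * 2 * n_qubits

QAOA_PARAMS_PER_LAYER = 2  # one (gamma, beta) pair per layer

# Interaction counts quoted in the text: N=13 with 52 YZ terms, N=17 with 80.
for n, n2 in [(13, 52), (17, 80)]:
    print(f"N={n}: CD-inspired {cd_inspired_params(n, n2)}, "
          f"HEA {hea_params(n)}, QAOA {QAOA_PARAMS_PER_LAYER}")
# N=13: CD-inspired 65, HEA 26, QAOA 2
# N=17: CD-inspired 97, HEA 34, QAOA 2
```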
Regarding the convergence with iteration steps, the HEA shows a relatively smoother convergence than the CD-inspired ansatz. As far as circuit evaluations are concerned, the parameter-shift method requires 2 evaluations per parameter to estimate the gradient; hence, the CD-inspired ansatz takes $2(N_{2loc}+N)$ circuit evaluations while the HEA takes $2N$. This is a challenge since, at larger system sizes, these counts would increase drastically. To circumvent this, different methods to compute the gradients can be adopted. For instance, Ref. [40] studies a method that can compute the gradients with only two circuit evaluations, independent of the number of circuit parameters. Besides this, the energy difference between the ground state energy $E_{GS}$ and the minimum energy obtained from the best CD-inspired ansatz instance $E_{cd}$ is $E_{cd}-E_{GS}\approx 0.54$. Thus, even at a large system size, we obtain solutions very close to the ground state. With the CD-inspired ansatz, we observed that for a system size as large as $N=17$ qubits, the ansatz finds it hard to converge to a steady value within 100 iterations, which might be due to the classical optimization routines or the limited number of iterations. This also means that near the ground state the energy landscape is highly featured, which makes it harder for the local optimizer to converge to a particular energy value.

Figure 5: Energy as a function of iterations for $N=9$ qubits with a noise model of ibmq_guadalupe. Numerical simulations were performed for 100 iterations with the Adagrad optimizer for 5 instances. Blue dots show the average of these instances, and the red-shaded region shows the standard deviation. The inset shows the last 10 iteration steps. The green line shows the exact ground state energy.

In order to study the performance of the algorithm under noise, we considered a noise model corresponding to the ibmq_guadalupe device from IBM. This model uses the actual backend parameters to mimic the noise, with some exceptions; namely, cross-talk and leakage errors are not included [41]. Noisy simulations are key to understanding how the energy landscape changes under the effect of noise. In Fig. 5, we take the optimal initial parameters (from the ideal simulation) of the $N=9$ qubit system APRLRFY with $p=1$, run the algorithm against the noise model five times with the Adagrad optimizer, and plot the average energy as a function of iterations. The aim is to check how much the noise changes the convergence of the optimal algorithm. We observe that even in the presence of noise, the ansatz works considerably well. Having said that, the final convergence energy is lifted considerably compared to the ground state energy, and the algorithm does not converge exactly: there are still fluctuations toward the end (see the inset of Fig. 5), which is natural given the noise. This gives an intuition about the energy landscape when the CD-inspired ansatz is implemented. From Fig. 5, it is evident that the energy landscape with the CD-inspired ansatz and gradient-based optimizers is smooth enough to take the state near the ground state, but there exist many local minima near the exact ground state. These landscape features make it easy for the classical optimizer to get near the ground state, but reaching the exact ground state becomes difficult with increasing system size. This fact is also evident from Fig. 2, as with increasing system size the number of successful instances decreases.
Hence, at larger system sizes, global optimization methods might be required to reach the ground state efficiently with a high success probability. Also, these noisy simulations were run without any error mitigation techniques. Error mitigation is crucial at the time of experimental implementation and has received significant attention [42, 43, 44, 45, 46, 47, 48]. Nevertheless, we are still able to reach good approximate solutions while running the algorithm under noise.

Alongside the advantages of implementing this ansatz, several challenges need to be addressed. First, the choice of suitable CD terms from the pool of NC commutation operators is heuristic in this work; systematic techniques need to be developed for choosing the dominant CD term that performs best for a given problem. It can also be seen that when the ansatz does not perform well, the ground state success probability is close to zero. This means that when we scale to higher system sizes, it might be difficult to find the exact ground state, as the size of the Hilbert space becomes huge. Lastly, although we have successfully reduced the circuit depth, additional circuit optimization strategies are still required to improve the results when performing experiments on real hardware, as we show in the next section for several cloud quantum computing platforms.

Figure 6: Output probability distribution of the $N=13$ AVDINNNA protein and the $N=17$ CYIQNCPLG protein on a trapped-ion system (Quantinuum H1) with 1000 shots. (a) shows the $N=13$ case and (b) the $N=17$ case. (a1) and (b1) show the graphs corresponding to the two-body interactions implemented in the CD-inspired ansatz; blue edges show the two-body connections that are present, while purple edges show the connections that are absent. (a2) shows the optimal protein configuration, with a dotted green line depicting the nearest-neighbor interaction. (b2) shows the optimal protein configuration obtained from exactly solving the problem, whereas (b3) shows the protein configuration obtained in the experiment; in (b3), the amino acids ‘P’ and ‘G’ overlap.

Experimental implementations. In this section, we implement the proposed CD-inspired ansatz on different available noisy hardware and emulators, specifically on trapped-ion and superconducting systems. Having various native gate sets and connectivities, these devices pose different challenges that must be dealt with to get appreciable results. Details about the hardware, circuit optimization, and error mitigation techniques are given in Appendix C.

Figure 7: Output probability distribution for $N=9$ qubits obtained by implementing the optimal circuit on (a) IBM's ibmq_guadalupe, where the experiment was performed with 8192 shots, and (b) Google's quantum virtual machine rainbow [49], where the experiment was performed with 10000 shots. Dark-colored bars show the ground-state probability of the physical qubits, whereas light-colored bars show the rest of the distribution. (a1) and (b1) show the hardware topologies, with the selected qubits marked in red. (a2) and (b2) both show the optimal protein configurations, with the nearest-neighbor connection between ‘A’ and ‘F’ shown by a dotted green line.
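Before turning to the individual devices, it is worth sanity-checking numerically the conjugation identities used to compile $YZ(\theta)$ from each platform's native gate (Appendix B). The sketch below checks the trapped-ion construction $R_{x}(\pi/2)\,ZZ(\theta)\,R_{x}(-\pi/2)$; since $R_{x}$ and $ZZ$ sign conventions differ between SDKs, it tests both angle signs. The conventions assumed here ($R_{x}(t)=e^{-itX/2}$, $ZZ(\theta)=e^{-i\theta Z\otimes Z}$) are ours, not necessarily those of any particular device.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def rx(t):
    # Assumed convention: Rx(t) = exp(-i t X / 2).
    return expm(-0.5j * t * X)

theta = 0.731                                    # arbitrary test angle
ZZ = expm(-1j * theta * np.kron(Z, Z))           # assumed native ZZ(theta)
YZ = expm(-1j * theta * np.kron(Y, Z))           # target YZ(theta) interaction

# Conjugate the native ZZ by Rx(+-pi/2) on the first qubit only.
conj = np.kron(rx(np.pi / 2), I2) @ ZZ @ np.kron(rx(-np.pi / 2), I2)

print("equals YZ(+theta):", np.allclose(conj, YZ))           # convention-dependent
print("equals YZ(-theta):", np.allclose(conj, YZ.conj().T))  # YZ(-t) = YZ(t)^dagger
```

With these conventions, the conjugation reproduces $YZ$ up to the sign of the angle, which in practice is absorbed into the trainable parameter $\theta_{m}$.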
#### .0.1 Quantinuum Trapped-ions

We implemented two systems, the 8-amino-acid protein AVDINNNA ($N=13$ qubits) and the 9-amino-acid protein CYIQNCPLG ($N=17$ qubits), with $p=1$ layers on the Quantinuum H1-1 [50] device. In this trapped-ion system, all qubits are identical and the errors depend upon the interaction zones, so the selection of qubits is trivial: we can choose any. In both cases, the system was initialized in the $\ket{+}^{\otimes N}$ state by applying Hadamard gates to all the qubits. Following that, we implemented parameterized $R_{y}({\theta_{i}})$ rotations on all qubits, and the interaction terms $YZ(\theta)$ were constructed from the native $ZZ(\theta)$ interaction. This is done by applying two rotations, $YZ^{ij}(\theta_{m})\equiv R_{x}^{i}(\pi/2)\,ZZ^{ij}(J_{ij}\theta_{m})\,R_{x}^{i}(-\pi/2)$ (see Fig. 10(a)). For the $N=13$ system, the Hamiltonian we considered had $N_{2loc}=52$ two-body interactions, and for the $N=17$ system, $N_{2loc}=80$ two-body interactions were present. The graphs in Fig. 6(a1) and Fig. 6(b1) show the full connectivity, with blue edges showing the interactions that are present and purple edges those that are absent. When dealing with real hardware, gates are applied in parallel to reduce the circuit depth, which requires performing multiple operations at the same time. In this system, there is a limit on how many parallel operations can be performed efficiently, so we apply the gates such that no more than 5 operations run at a time. Due to the limited hardware access, we ran the optimization part on a local simulator with the Adam optimizer and 500 iteration steps, and implemented the circuit with the optimal parameters on the real hardware.

In Fig. 6, we show the probability distribution from the real hardware for both systems. Both experiments were performed with 1000 shots. For $N=13$ qubits, we achieve around an 85% probability of obtaining the ground state, shown in Fig. 6(a), while Fig. 6(a2) shows the folded protein in 3D. The nearest-neighbor amino acid connection is ‘V’–‘N’, which gives us $q_{2,7}=1$. The optimal turn sequence is $t_{expt}=[\bar{1},0,\bar{1},2,\bar{0},1,\bar{0}]$. On the other hand, Fig. 6(b) shows the $N=17$ qubit case, where the dominant output state, obtained with around 79% probability in the experiment, is the 4th excited state of the Hamiltonian. As the system size is large and we implement only a $p=1$ layered ansatz, the classical optimization routine gets close to the ground state energy but still lands in a local minimum. Fig. 6(b2) shows the protein configuration of the ground state, while Fig. 6(b3) shows the protein configuration of the fourth excited state. The optimal turn sequence is $t_{opt}=[\bar{1},0,\bar{3},1,\bar{0},2,\bar{1},3]$, and the turn sequence obtained in the experiment is $t_{expt}=[\bar{1},0,\bar{3},2,\bar{0},3,\bar{1},1]$. We can notice that the last turn taken by the amino acid chain is non-optimal, which leads to an overlap at the second-to-last lattice point. For the optimal configuration, the nearest-neighbor connections are ‘C’–‘C’ and ‘Y’–‘G’, hence $q_{1,6}=1$ and $q_{2,9}=1$. With the 4th excited state, however, the connections are ‘Y’–‘G’ and ‘Y’–‘P’, hence $q_{2,9}=1$ and $q_{2,7}=1$. So, for the fourth excited state, there is one connection and several turns that differ from the exact ground state. In terms of energy, these two configurations differ by 0.54.
This is close to the actual ground state energy and, thus, the algorithm is prone to jumping to this excited state. Implementation of our hybrid quantum algorithm results in a high success probability. This is due to the all-to-all connectivity and the fact that the two-body interactions in the circuit can be applied efficiently with native gates.

#### .0.2 IBM Superconducting chip

We implemented the 7-amino-acid protein APRLRFY on the 16-qubit ibmq_guadalupe [51] device with $p=1$ layer of the ansatz. To choose 9 out of the 16 available qubits, we apply a subgraph isomorphism algorithm [52] to get the best possible layout. As usual, we start with the $\ket{+}^{\otimes N}$ state by applying the Hadamard gate to all the qubits and then apply $R_{y}(\theta_{i})$. The two-qubit gate decomposition in terms of the native two-qubit gate is given by $YZ^{ij}(\theta_{m})\equiv R_{z}^{i}(-\pi/2)\,CR^{ji}(J_{ij}\theta_{m})\,R_{z}^{i}(\pi/2)$ (see Fig. 10(b)). We arrange these gates in parallel before the optimization such that the resulting circuit is as shallow as possible. We ran the optimization with this circuit on a local simulator and obtained the optimal parameters $\theta_{opt}$ as before. We show the resulting probability distribution in Fig. 7(a) with 8192 shots. We achieve a ground state probability of around 20%. Apart from choosing the best layout, we implemented several additional strategies: a SWAP strategy for achieving the desired connectivity [53], native gate and pulse optimization strategies [54], dynamical decoupling [55], and finally measurement error mitigation techniques [56]. Detailed discussions of these techniques are given in Appendix C. After the circuit optimization and error mitigation, the success probability is enhanced from around 8% to 20%. Fig. 7(a2) shows the configuration of the protein. The nearest-neighbor connection is ‘A’–‘F’, hence $q_{1,6}=1$. The optimal turn sequence is given by $t_{expt}=[\bar{1},0,\bar{3},1,\bar{0},1]$. This configuration is one of the two most stable configurations (the other is shown in Fig. 7(b2)). Scaling to larger qubit counts on IBM devices is a major challenge due to the increased number of required SWAP gates and the finite coherence times of the qubits. Hence, the CD-inspired ansatz is hardware-implementable but still requires improvements to make it scalable on hardware with low connectivity.

#### .0.3 Google QVM

We implemented the 7-amino-acid protein APRLRFY ($N=9$) with $p=1$ layer of the ansatz on the quantum virtual machine rainbow. Since we opted for 2D grid connectivity, we selected one out of three possible $3\times 3$ grids with low error rates to map our problem. The native entangling gate $CZ$ was selected, and the $YZ(\theta)$ interactions were implemented by first creating $ZZ^{ij}(\theta_{m})\equiv H^{j}CZ^{ij}H^{j}\,R_{z}^{j}(J_{ij}\theta_{m})\,H^{j}CZ^{ij}H^{j}$. From here, $YZ^{ij}(\theta_{m})\equiv R_{x}(\pi/2)ZZ^{ij}(\theta_{m})R_{x}(-\pi/2)$ (Fig. 10(c)). In this implementation, $N_{2loc}=25$ two-body interactions were present in the Hamiltonian. After performing the circuit optimization (Appendix C), the probability distribution of the ground state with 10000 shots is shown in Fig. 7(b). Around 48% success probability was achieved with the optimized circuit. This behavior is mainly due to the circuit decomposition and the connectivity of the system. Also, as this is a noisy emulator, the real hardware is expected to show a lower success probability. The folded protein is shown in Fig. 7(b2).
The nearest-neighbor connection is ‘A’–‘F’, shown by a green dotted line, which means that the interaction qubit $q_{1,6}=1$. The optimal turn sequence is given by $t_{expt}=[\bar{1},0,\bar{3},1,\bar{0},2]$. We can observe that, compared to the previous case, the only change in this sequence is the last turn. Both cases are optimal, and thus the Hamiltonian we considered for the APRLRFY protein has a doubly degenerate ground state.

## Discussions

We proposed a hybrid digitized-counterdiabatic quantum algorithm to investigate the protein folding problem. The parameterized quantum circuit associated with this algorithm is inspired by counterdiabatic protocols, has $\mathcal{O}(N^{2})$ parameterization, and consists of only one-qubit and two-qubit gates. We applied this algorithm to a tetrahedral-lattice protein folding problem, encoded in a 5-local Ising Hamiltonian whose ground state contains the corresponding protein configuration. We studied various proteins with increasing amino acid chain lengths and showed that the proposed algorithm has excellent performance in terms of convergence and circuit depth. These results outperform previously utilized quantum algorithms, enhancing the experimental feasibility with various circuit optimization strategies. We demonstrate that claim by implementing proteins with up to 9 amino acids on 17 qubits on several quantum hardware platforms, including trapped-ion and superconducting systems, achieving high success probabilities. This work paves the way for applying problem-inspired ansätze to industrial use cases in the current NISQ era by utilizing digitized counterdiabatic protocols.

We believe this quantum algorithm can be extended to other relevant applications. There is also a connection between counterdiabatic protocols, adiabatic gauge potentials, and dynamical Lie algebras. As mentioned before, some challenges need to be addressed, for instance, the sensitivity toward the initial parameters and the choice of the appropriate CD terms from the adiabatic gauge pool. Using modifications like global optimizers would be interesting, as these algorithms find it difficult to reach the exact ground state at large system sizes. This is crucial, as the protein structure changes significantly if we end up in an excited state. This also motivates the use of purely digitized-counterdiabatic quantum algorithms where the task is to find the exact ground state [11]. We also emphasize that combining digital-analog quantum computing with digitized-counterdiabatic quantum computing could be crucial. We believe this work advances quantum computing, bringing us one step closer to practical quantum advantage, which will have to challenge classical computing successes, including the recent AlphaFold achievements [57].

## Methods

### .1 Variational quantum algorithms and counterdiabaticity

In a VQA, a circuit ansatz and a classical optimizer combine to solve an optimization problem. The task of the circuit ansatz is to generate quantum states based on the parameters provided by the classical routine. The classical part can be handled by gradient-based optimizers (like Adam or Adagrad) or gradient-free optimizers (like COBYLA or Powell). The goal is to minimize a cost function $C$ that can take various forms depending upon the problem.
In our case, we use the expectation value of the problem Hamiltonian $H$, given by

$C(\vec{\theta})=\bra{\psi(\vec{\theta})}H\ket{\psi(\vec{\theta})},$ (3)

where $\vec{\theta}=\{\theta_{1},\theta_{2},\ldots\}$ denotes the parameters associated with the circuit ansatz. As mentioned earlier, both problem-inspired and hardware-efficient ansätze have their advantages and disadvantages. Despite the promising performance of problem-inspired ansätze, current hardware experiences several bottlenecks, such as limited qubit connectivity, imperfect gate implementations, and limited coherence times. This can make the implementation impractical, so hardware-efficient ansätze are preferred. Generally, a hardware-efficient ansatz is of the form

$U(\vec{\theta})=\prod_{k=1}^{p}U_{k}(\vec{\theta}_{k})W_{k},$ (4)

where $\vec{\theta}_{k}$ are the optimizable parameters and $p$ is the number of layers. $U_{k}=\exp[-i\theta_{k}V_{k}]$, where $V_{k}$ is a Hermitian operator; $W_{k}$ are non-parameterized gates, usually two-qubit connecting gates like $CNOT$s, and $U_{k}$ are parameterized single-qubit rotations. On the other hand, problem-inspired ansätze use evolutions of the form

$U(\theta)=e^{-i\hat{g}t},$ (5)

where $\hat{g}$ is a Hermitian operator and $t$ is a parameter. These $\hat{g}$ are derived from the system of interest. For instance, in QAOA [2], $\hat{g}=H$ corresponds to the problem Hamiltonian, so Eq. (5) resembles Trotterized time evolution. In QAOA, the quantum circuit consists of two unitaries, the Hamiltonian term $U_{c}(\gamma)$ and the mixing term $U_{b}(\beta)$, applied $p$ times to the initial state $\ket{\psi_{0}}$. Here, $(\gamma,\beta)$ are the parameters optimized by the classical optimizer. Along these lines, the evolution looks like

$\ket{\psi_{f}}=U_{b}{(\beta_{p})}U_{c}{(\gamma_{p})}U_{b}{(\beta_{p-1})}U_{c}{(\gamma_{p-1})}\dots U_{b}{(\beta_{1})}U_{c}{(\gamma_{1})}\ket{\psi_{0}},$ (6)

where $\ket{\psi_{f}}$ is the output state, while generally $U_{b}(\beta)=e^{-i\beta\sum_{i}\sigma_{i}^{x}}$ and $U_{c}(\gamma)=e^{-i\gamma H}$. The mixer term $U_{b}(\beta)$ can take several other forms as well [32]. The aim is to minimize the cost function given by Eq. (3). QAOA directly relates to quantum adiabatic evolution [58]; hence, it is believed to be successful at large $p$ due to the adiabatic theorem. That being said, implementing circuits with large $p$ results in circuit depths that are not feasible on current near-term devices. Many adaptations of QAOA have been reported [32, 59, 60]. Among them, CD protocols have been of recent interest [61, 16, 12]: the newly proposed digitized-counterdiabatic QAOA (DC-QAOA) reports that adding CD terms to the usual QAOA ansatz significantly improves its performance. Finding CD terms is a critical task; approximate CD terms are obtained from the adiabatic gauge potentials using the nested commutator (NC) method [62]. In the NC method, the approximate CD terms are given by $A_{\lambda}^{(l)}$, calculated as

$A_{\lambda}^{(l)}=i\sum_{k=1}^{l}\alpha_{k}(t)\underbrace{[H_{a},[H_{a},\ldots[H_{a},}_{2k-1}\partial_{\lambda}H_{a}]]],$ (7)

where $l$ is the order of the expansion and $H_{a}(\lambda)=(1-\lambda(t))H_{mixer}+\lambda(t)H$. The CD term is then digitized, and the coefficients $\alpha_{k}$ are treated as additional free parameters along with $(\beta,\gamma)$ to increase the ansatz expressibility.
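To illustrate how the operator pool quoted in Results emerges, the following self-contained NumPy sketch evaluates the first-order ($l=1$, with $\alpha_{1}=1$ for simplicity) term of Eq. (7) for a toy two-qubit instance with $H_{mixer}=\sum_{i}\sigma_{i}^{x}$ and an Ising problem Hamiltonian; the coefficients are hypothetical. Decomposing the result in the Pauli basis shows that only $Y$, $YZ$, and $ZY$ strings survive at first order; the $XY$ and $YX$ strings of the main text enter at higher orders.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def op(label):                       # e.g. "XZ" -> X tensor Z
    m = np.array([[1.0 + 0j]])
    for ch in label:
        m = np.kron(m, PAULI[ch])
    return m

# Toy 2-qubit instance (hypothetical coefficients).
lam, h1, h2, J = 0.5, 0.3, -0.7, 1.1
H_mix = op("XI") + op("IX")
H_prob = h1 * op("ZI") + h2 * op("IZ") + J * op("ZZ")
H_a = (1 - lam) * H_mix + lam * H_prob
dH = -H_mix + H_prob                 # d(H_a)/d(lambda)

# First-order term of Eq. (7): i [H_a, dH]. For this H_a the commutator
# reduces to [H_mix, H_prob] and is therefore independent of lambda.
A1 = 1j * (H_a @ dH - dH @ H_a)

# Pauli-basis decomposition: c_P = Tr(P A1) / 2^n.
for label in ("".join(t) for t in product("IXYZ", repeat=2)):
    c = np.trace(op(label) @ A1).real / 4
    if abs(c) > 1e-9:
        print(label, round(c, 3))    # prints only YI, IY, YZ, ZY terms
```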
Summing up, DC-QAOA uses three unitary terms iteratively $p$ times: the Hamiltonian term, the mixer term, and the CD term, minimizing the cost function more effectively. However, we have to pay the price of an increased circuit depth per layer. To upgrade this method, we propose a CD-inspired circuit ansatz that is also hardware-implementable; hence, it partakes of the advantages of both problem-inspired and hardware-efficient ansätze (see Results).

### .2 Protein folding

Figure 8: Schematic diagram showing the tetrahedral lattice structures. (a) shows the lattice $\mathcal{P}$ and (b) shows the inverted lattice $\mathcal{Q}$. The turns taken by the amino acids will be one of the four directions ($t=0,1,2,3$ or $\bar{0},\bar{1},\bar{2},\bar{3}$), and the two lattices alternate at each turn. (c) shows the mesh made of $\mathcal{P}$ (shown in red) and $\mathcal{Q}$ (shown in blue), while the green line shows a schematic of the turns taken by an arbitrary protein in 3D.

In this section, we describe the current quantum approach to protein folding, tackled as a lattice problem where amino acids are sequentially added to a given lattice such that the total conformation energy is minimized. The lattice can be 2D (planar) or 3D (cubic or tetrahedral). The complexity increases drastically when going from 2D to 3D because of the rapidly increasing number of possible protein configurations. After selecting the lattice structure, the next important step is to choose the type of encoding. The widely studied encoding types are position encoding, where the lattice coordinates are encoded in qubits, and turn encoding, where the turn directions are encoded in qubits. In position encoding, the solution bitstring represents the coordinates at which the respective amino acids should be placed to minimize the energy; in turn encoding, the solution contains the sequence of turns taken by each amino acid that results in the minimum-energy configuration. The difficulty and the form of the Hamiltonian depend strongly on the type of encoding chosen. Another vital task is to assign the amino acids' interaction energies. There are two widely used choices: the hydrophobic-polar (HP) model, where the interaction coefficients are limited to binary values [28], and the Miyazawa–Jernigan (MJ) interactions, where the coefficients take arbitrary values depending on the amino acids in contact [36]. Depending upon the structure, we also have to define constraints that exclude configurations that are not allowed, like stacking different amino acids on the same lattice point.

In this work, we consider the 3D tetrahedral lattice model introduced in Ref. [6] with turn encoding and MJ interactions. The qubits are encoded to represent the turn $t_{i}$ taken by the $(i+1)^{th}$ amino acid after the $i^{th}$ amino acid. There are two sublattices $\mathcal{P}$ and $\mathcal{Q}$, exact inverses of each other, where the four possible turns are denoted by $t=0,1,2,3$ for $\mathcal{P}$ (and $\bar{0},\bar{1},\bar{2},\bar{3}$ for $\mathcal{Q}$), each number corresponding to a different direction of the tetrahedron shown in Fig. 8. We devote two qubits to encode each turn, so that the turns are given by $t_{j}=q_{j}q_{j+1}$ and the qubit count scales as $2(N_{a}-3)$, where $N_{a}$ is the number of amino acids considered. $\mathcal{P}$ and $\mathcal{Q}$ alternate at every turn, so that even turns are represented on $\mathcal{P}$ and odd turns on $\mathcal{Q}$.
An example turn bitstring has the form $t=[\bar{1},0,\bar{3},...]$. The schematic diagram is shown in Fig. 8. The initial two turns can be fixed to $t_{1}=1$ and $t_{2}=\bar{0}$ due to the symmetry of the space we consider. In addition, one more qubit can be saved due to space symmetry if no side chains are considered. Hence, the conformation qubits $Q_{c}$ look like

$Q_{c}=[00][01][q_{5}1][q_{7}q_{8}]\ldots[q_{2(N-1)-1}q_{2(N-1)}]\,,$ (8)

where $q_{6}=1$ due to spatial symmetry. To keep track of the turns, a function $g_{m}$ with $m\in\{0,1,2,3\}$ is constructed; this function returns 1 if axis $m$ is selected at the $i^{th}$ turn. The shortest distance between any two beads can be found by keeping track of the number of turns the beads have taken on the lattices $\mathcal{P}$ and $\mathcal{Q}$. As far as the constraints are concerned, there are two types: growth constraints, which penalize unwanted conformations, and chirality constraints, which enforce correct values. To impose these, two terms $H_{gc}(Q_{c})$ and $H_{ch}(Q_{c})$ are added to the problem Hamiltonian with positive Lagrange multipliers $(\theta_{gc},\theta_{ch})$. Details about how to construct these functions are given in Ref. [6]. Along with $Q_{c}$, a set of qubits $Q_{in}$ is included that accounts for the interactions between nearest-neighbor beads in the protein chain. $Q_{in}=q_{i,j}$ is a set of two-index qubits carrying information about a nearest-neighbor contact between the $i^{th}$ and $j^{th}$ beads. If the contact occurs, the energy $e_{i,j}$ is added to the Hamiltonian $H_{in}$. We consider nearest-neighbor interactions, where a contact is counted if the distance between two non-consecutive beads is unity. Thus, the total set of qubits is $Q_{tot}=\{Q_{c},Q_{in}\}$ and the Hamiltonian $H$ is given by

$H(Q_{tot})=H_{gc}(Q_{c})+H_{ch}(Q_{c})+H_{in}(Q_{in}).$ (9)

The conversion of $H(Q_{tot})$ into an Ising Hamiltonian results in

$H=\sum_{i}h_{i}\sigma_{z}^{i}+\sum_{ij}J_{ij}\sigma_{z}^{i}\sigma_{z}^{j}+\sum_{ijk}K_{ijk}\sigma_{z}^{i}\sigma_{z}^{j}\sigma_{z}^{k}+\sum_{ijkl}L_{ijkl}\sigma_{z}^{i}\sigma_{z}^{j}\sigma_{z}^{k}\sigma_{z}^{l}+\sum_{ijklm}M_{ijklm}\sigma_{z}^{i}\sigma_{z}^{j}\sigma_{z}^{k}\sigma_{z}^{l}\sigma_{z}^{m},$ (10)

where the indices and coefficients depend on the specific protein. Hence, $H$ is a 5-local Ising Hamiltonian whose ground state gives the optimal turn sequence for the folded protein. The specific reason for selecting this model is that its tetrahedral lattice captures many physical and chemical properties. This model uses fewer qubits for a given amino-acid chain than other methods, but at the cost of increasing the locality of the Hamiltonian. In Results, we show that this increased locality does not affect our CD-inspired ansatz, and we can reach the optimal solutions using only 2-local terms in the PQC. This is in contrast to QAOA, where the circuit ansatz requires 5-local terms, making it challenging to implement on a real device.

###### Acknowledgements.

We acknowledge the Azure quantum credits program for providing access to the Quantinuum trapped-ion systems. P.C. acknowledges Mikel Garcia de Andoin and Martin Larocca for useful discussions. This work is supported by EU FET Open Grant EPIQUS (899368), QUANTEK project (KK-2021/00070), the Basque Government through Grant No.
IT1470-22, the project grant PID2021-126273NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe” and “ERDF Invest in your Future”, NSFC (12075145), and STCSM (2019SHZDZX01-ZX04). X.C. acknowledges ayudas para contratos Ramón y Cajal–2015-2020 (RYC-2017-22482).

## Appendix A Parameter Scaling

In Fig. 9, we show how the number of parameters of the $p=1$ CD-inspired ansatz for the protein folding problem varies as a function of system size, compared to both the HEA and the case where all possible two-body interactions are present.

Figure 9: Number of optimizable parameters as a function of system size for various ansätze. The blue line shows the hardware-efficient ansatz parameterization, the red line shows the CD-inspired ansatz parameterization, and the green line shows the CD-inspired ansatz parameterization when all-to-all two-body interactions are present, contrary to the protein folding problem.

## Appendix B Native gate decomposition

The native gate decomposition of the $YZ(\theta)$ interaction for the various hardware platforms is shown in Fig. 10. Fig. 10(a) shows the decomposition with respect to the $ZZ(\theta)$ interaction native to the Quantinuum trapped-ion hardware, Fig. 10(b) shows the decomposition in terms of the $CR(\theta)$ gate native to the IBM superconducting chip, and Fig. 10(c) shows the decomposition in terms of the $CZ$ gate native to the Google superconducting hardware.

Figure 10: Native gate decomposition of the $YZ(\theta)$ gate on real hardware for qubits $\ket{i}$ and $\ket{j}$. (a) shows the decomposition for the Quantinuum trapped-ion hardware (native gate $ZZ(\theta)$), (b) shows the decomposition for the IBM superconducting hardware (native gate $CR(\theta)$), and (c) shows the decomposition for the Google QVM (native gate $CZ$).

## Appendix C Experimental details

### C.1 Quantinuum Trapped-ions

Hardware details. The Quantinuum H1-1 [50] is an all-to-all connected trapped-ion system with 20 qubits. There are multiple interaction zones to which the physical qubits can be moved, and the quantum operations are applied using lasers. Due to the existence of these zones, this device can perform parallel operations effectively. All-to-all connectivity is achieved by physically rearranging the qubits to perform all the two-qubit interactions. All qubits are identical, but errors arise from the quantum operations depending upon the locality of those qubits. The typical one-qubit gate infidelity is $4\times 10^{-5}$, the two-qubit gate infidelity is $3\times 10^{-3}$, and the native entangling gate is the two-qubit $ZZ(\theta)$ gate.

Circuit optimization and error mitigation. Minimizing two-qubit gate errors is an essential part of the experimental implementation. Due to the all-to-all connectivity and the suitable two-qubit native gate, we already have an implementable circuit. However, to obtain the best results, we strategically removed several $YZ(\theta)$ gates by analyzing their associated angles. Rotations with angles near zero contribute little to the circuit, but their gate errors still accumulate; such gates can therefore be dropped to minimize two-qubit gate errors. This should be done carefully, keeping in mind the error rates and the change in fidelity. The final $ZZ(\theta)$ gate count at the time of implementation was reduced to 35 for the $N=13$ case and to 70 for the $N=17$ case.

### C.2 Google QVM

Hardware details. The hardware from Google has the Sycamore architecture, with 53 transmon qubits arranged in a 2D grid.
Hence, each physical qubit is connected to at most 4 other qubits. The single-qubit gates are executed by microwave pulses of a fixed frequency, and two-qubit gates are executed by bringing neighboring qubits into resonance and turning on a coupling. The two-qubit native gates are the $CZ$, $\sqrt{iSWAP}$, and Sycamore interactions. The typical two-qubit $\sqrt{iSWAP}$ error is 1.4% per gate when applied in parallel, and the typical single-qubit gate error is 0.1% per gate. More information can be found in Ref. [63]. In our experiment, we utilized the quantum virtual machine (QVM) offered by Google. Google offers two QVMs, rainbow and weber, in which a noise model is implemented that closely mimics the actual noise of the hardware. In this work, we utilize the QVM rainbow, a 23-qubit device with square-grid lattice connectivity (Fig. 7(b1)).

Circuit optimization and error mitigation. As the connectivity is not all-to-all, a SWAP strategy is required to implement the circuit. We apply a SWAP strategy based on Ref. [53] that implements the circuit using 11 SWAP gates, since not all two-qubit interactions are present in the circuit. In order to get the best results, we optimized the ‘moments’ of the implemented circuit. A moment is defined as a set of operations acting on different qubits such that all of them can be applied in a single abstract time slice. Essentially, the number of moments should be minimized to reduce the circuit execution time. In other words, we arranged the circuit such that the number of operations that can be applied at a single time is maximized. There are several ways to achieve this, for example aligning the circuit to the left, where the maximum possible number of operations is arranged from the start of the circuit. For our implementation, we optimized the circuit by aligning all operations into similar categories, that is, all the one-qubit and two-qubit operations are aligned in separate moments. This method is adopted to reduce qubit idling, since if these operations are isolated, the other qubits remain idle, leading to errors. Since we have only used the QVM, we limit ourselves to the circuit optimizations mentioned above. However, for the real device, there might be a need for further circuit optimizations. This can be done by adopting strategies like dropping moments that have a lower impact on the circuit results and adding dynamical decoupling to counteract qubit idling. Testing combinations of native gates to generate the $YZ(\theta)$ interactions more efficiently can also prove worthwhile; for example, Ref. [64] uses the Sycamore gates to generate a combination of $ZZ$–$SWAP$ interactions.

### C.3 IBM superconducting chips

Hardware details. IBM systems consist of fixed-frequency transmon qubits, and the gate operations are applied with microwave pulses [65]. The connectivity is heavy-hex, and for our implementation we chose qubits with linear connectivity (Fig. 7(a1)). The two-qubit native gate for this device is the cross-resonance gate $R^{ij}_{zx}(\theta)=e^{-i\frac{\theta}{2}\sigma_{i}^{z}\sigma_{j}^{x}}$, also known as the $CR(\theta)$ gate. The average single-qubit gate error is around 0.05% and the average two-qubit $CNOT$ gate error is around 1.4%. The error map of the ibmq_guadalupe device at the time the experiment was performed is shown in Fig. 11.
This error map shows the connectivity of the qubits, the Hadamard gate error of each qubit, the $CNOT$ error of all the present connections, and the read-out errors of the qubits. The qubits shown within the red box were selected for the experiment.

Figure 11: Error map of IBM’s 16-qubit ibmq_guadalupe device at the time the experiment was performed. The experiment was performed using the qubits highlighted in red.

Circuit optimization and error mitigation. Best layout- Qubit selection is a crucial task when implementing the circuit, as each qubit possesses its own single-qubit gate errors and two-qubit gate errors with the connecting qubits. To select the best qubits from the device, we implement a sub-graph isomorphism algorithm that selects the 9 qubits out of 16 that minimize the expected gate errors [52]. SWAP strategies- IBM’s heavy-hex connectivity scheme means that only nearest-neighbour coupling exists. As we have a linear chain of qubits, we need a good SWAP strategy to implement our circuit using the fewest SWAP gates possible. To achieve this, we implement the strategy of Ref. [53]. As some of the two-qubit interactions are missing, we could cover our connectivity by applying 30 SWAP gates for the case involving $N=9$ qubits. Native gate compilation- SWAP, $CNOT$, and $ZZ(\theta)$ gates are not native to IBM’s chips; they are translated to the native $CR(\theta)$ gate before the circuits are executed. Ref. [54] showed that, by being hardware aware, these gates can be translated beforehand, so that the annihilation of commuting gates reduces the depth of the final logical circuit. By being hardware-specific in this regime, we could also calibrate the $CR(\theta)$ gates so that no other translation is done. Therefore, the actual physical coupling represents the intended interaction gate between two qubits. Hence, the final pulse schedule gets reduced to its minimal expression according to the specifications of the hardware while preserving the intended structure of the ansatz. Once the circuit is reduced to its minimal expression, there are still some techniques we can apply for noise mitigation. Some are encoded within our pulse definition, and others statistically correct the systematic error upon measurement. Dynamical decoupling- Dynamical decoupling techniques introduce gate sequences to minimize the noise generated by the idling of qubits while longer two-qubit gates are being applied elsewhere [46]. The authors of Ref. [46] observed that if the idling time is filled by applying gates that do not change the final result, the effect of noise can be reduced significantly. A well-known scheme is the $XY4$ scheme, which applies a sequence of four rotation operations about the $X$ and $Y$ axes. Following a symmetrized version of the $XY4$ scheme as in Ref. [55], the $XY8$ scheme applies these rotations eight times instead of four. For our experiment runs, we use the $XY8$ sequence shown by

$X_{\pi}\rightarrow Y_{\pi}\rightarrow\dots\rightarrow X_{-\pi}\rightarrow Y_{-\pi}.$ (11)

Here, ${\pi}$ and ${-\pi}$ denote pulses of opposite amplitude applied about each axis. The net effect of the full sequence is the identity $I$, while the active driving suppresses the low-frequency noise that would otherwise accumulate during idling, resulting in an overall reduction in the noise.
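As a sanity check on this construction, the following minimal NumPy sketch composes one plausible reading of the sequence in Eq. (11) and verifies that the eight pulses multiply to the identity. Note that the pulses hidden in the “$\dots$” are filled in here as a repetition of the $X_{\pi}$, $Y_{\pi}$ pair; that filling is our assumption for illustration, not something stated above.

```python
import numpy as np

# Pauli matrices; a rotation about axis `sigma` by angle `theta` is
# R_sigma(theta) = cos(theta/2) I - i sin(theta/2) sigma.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(sigma, theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma

# Assumed XY8 pulse train, in time order (the "..." in Eq. (11) is filled in
# as a repetition of the X, Y pattern; this is an illustrative guess):
# X_pi, Y_pi, X_pi, Y_pi, X_-pi, Y_-pi, X_-pi, Y_-pi
seq = [rot(X, np.pi), rot(Y, np.pi), rot(X, np.pi), rot(Y, np.pi),
       rot(X, -np.pi), rot(Y, -np.pi), rot(X, -np.pi), rot(Y, -np.pi)]

# Compose the pulses; the earliest pulse acts first, so it sits right-most.
U = I2
for pulse in seq:
    U = pulse @ U

assert np.allclose(U, I2)  # the sequence composes exactly to the identity
print(np.round(U.real, 12))
```

In an actual run these pulses are interleaved with the idle periods they are meant to protect; the check above only confirms that the sequence itself acts trivially on the qubit state.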
Measurement error mitigation- Finally, the readout error can be seen as systematic and consistent for a given chip, showing a similar error distribution throughout the usage of the chip. By compensating for this final measurement error, one can boost the final results toward their statistically corrected, error-free version. Usually, this is done with an approach in which measuring, on a real device, a state whose ideal probability distribution is $\vec{p}_{ideal}$ yields a noisy distribution $\vec{p}_{noisy}$ such that

$\vec{p}_{noisy}=\mathbf{A}\vec{p}_{ideal}.$ (12)

Here, $\mathbf{A}$ is the assignment matrix, which encodes information about the readout noise. The distribution can be corrected by finding $\mathbf{A}^{-1}$ and thus obtaining the error-mitigated quasi-probabilities. However, as the system size increases, the calculation of this matrix becomes difficult. In Ref. [56], an approach is proposed where, instead of the full matrix, a reduced matrix is calculated. This is done by considering only the qubits needed for the experiment runs instead of the full space, hence reducing the computational overhead. In our implementation, we use this technique to perform measurement error mitigation.

## References

* Bharti _et al._ [2022] K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heimonen, J. S. Kottmann, T. Menke, W.-K. Mok, S. Sim, L.-C. Kwek, and A. Aspuru-Guzik, Noisy intermediate-scale quantum algorithms, Reviews of Modern Physics 94, 015004 (2022). * Farhi _et al._ [2014] E. Farhi, J. Goldstone, and S. Gutmann, A quantum approximate optimization algorithm, arXiv preprint arXiv:1411.4028 (2014). * Romero _et al._ [2018] J. Romero, R. Babbush, J. R. McClean, C. Hempel, P. J. Love, and A. Aspuru-Guzik, Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz, Quantum Science and Technology 4, 014008 (2018). * Wecker _et al._ [2015] D. Wecker, M. B. Hastings, and M. Troyer, Progress towards practical quantum variational algorithms, Physical Review A 92, 042303 (2015). * Wiersema _et al._ [2020] R. Wiersema, C. Zhou, Y. de Sereville, J. F. Carrasquilla, Y. B. Kim, and H. Yuen, Exploring entanglement and optimization within the hamiltonian variational ansatz, PRX Quantum 1, 020319 (2020). * Robert _et al._ [2021] A. Robert, P. K. Barkoutsos, S. Woerner, and I. Tavernelli, Resource-efficient quantum algorithm for protein folding, npj Quantum Information 7, 38 (2021). * Kandala _et al._ [2017] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549, 242 (2017). * McClean _et al._ [2018] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018). * Holmes _et al._ [2022] Z. Holmes, K. Sharma, M. Cerezo, and P. J. Coles, Connecting ansatz expressibility to gradient magnitudes and barren plateaus, PRX Quantum 3, 010313 (2022). * Larocca _et al._ [2022] M. Larocca, P. Czarnik, K. Sharma, G. Muraleedharan, P. J. Coles, and M. Cerezo, Diagnosing Barren Plateaus with Tools from Quantum Optimal Control, Quantum 6, 824 (2022). * Hegade _et al._ [2021a] N. N. Hegade, K. Paul, Y. Ding, M. Sanz, F. Albarrán-Arriagada, E. Solano, and X.
Chen, Shortcuts to adiabaticity in digitized adiabatic quantum computing, Physical Review Applied 15, 024038 (2021a). * Chandarana _et al._ [2022a] P. Chandarana, N. N. Hegade, K. Paul, F. Albarrán-Arriagada, E. Solano, A. del Campo, and X. Chen, Digitized-counterdiabatic quantum approximate optimization algorithm, Physical Review Research 4, 013141 (2022a). * Hegade _et al._ [2021b] N. Hegade, P. Chandarana, K. Paul, X. Chen, F. Albarrán-Arriagada, and E. Solano, Portfolio optimization with digitized-counterdiabatic quantum algorithms, arXiv preprint arXiv:2112.08347 (2021b). * Hegade _et al._ [2021c] N. N. Hegade, K. Paul, F. Albarrán-Arriagada, X. Chen, and E. Solano, Digitized adiabatic quantum factorization, Physical Review A 104, L050403 (2021c). * Chandarana _et al._ [2022b] P. Chandarana, P. S. Vieites, N. N. Hegade, E. Solano, Y. Ban, and X. Chen, Meta-learning digitized-counterdiabatic quantum optimization, arXiv preprint arXiv:2206.09966 (2022b). * Yao _et al._ [2021] J. Yao, L. Lin, and M. Bukov, Reinforcement learning for many-body ground-state preparation inspired by counterdiabatic driving, Physical Review X 11, 031070 (2021). * Yao _et al._ [2022] J. Yao, H. Li, M. Bukov, L. Lin, and L. Ying, Monte carlo tree search based hybrid optimization of variational quantum circuits, arXiv preprint arXiv:2203.16707 (2022). * Masuda and Nakamura [2008] S. Masuda and K. Nakamura, Fast-forward problem in quantum mechanics, Physical Review A 78, 062108 (2008). * Masuda and Nakamura [2010] S. Masuda and K. Nakamura, Fast-forward of adiabatic dynamics in quantum mechanics, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, 1135 (2010). * Chen _et al._ [2010] X. Chen, A. Ruschhaupt, S. Schmidt, A. del Campo, D. Guéry-Odelin, and J. G. Muga, Fast optimal frictionless atom cooling in harmonic traps: Shortcut to adiabaticity, Physical Review Letters 104, 063002 (2010). * Chen _et al._ [2011] X. Chen, E. Torrontegui, and J. G. Muga, Lewis-riesenfeld invariants and transitionless quantum driving, Physical Review A 83, 062116 (2011). * Demirplak and Rice [2003] M. Demirplak and S. A. Rice, Adiabatic population transfer with control fields, The Journal of Physical Chemistry A 107, 9937 (2003). * Demirplak and Rice [2005] M. Demirplak and S. A. Rice, Assisted adiabatic passage revisited, The Journal of Physical Chemistry B 109, 6838 (2005). * Berry [2009] M. V. Berry, Transitionless quantum driving, Journal of Physics A: Mathematical and Theoretical 42, 365303 (2009). * Magann _et al._ [2021] A. B. Magann, C. Arenz, M. D. Grace, T.-S. Ho, R. L. Kosut, J. R. McClean, H. A. Rabitz, and M. Sarovar, From pulses to circuits and back again: A quantum optimal control perspective on variational quantum algorithms, PRX Quantum 2, 010101 (2021). * Meitei _et al._ [2020] O. R. Meitei, B. T. Gard, G. S. Barron, D. P. Pappas, S. E. Economou, E. Barnes, and N. J. Mayhall, Gate-free state preparation for fast variational quantum eigensolver simulations: ctrl-vqe, arXiv preprint arXiv:2008.04302 (2020). * Li _et al._ [2017] J. Li, X. Yang, X. Peng, and C.-P. Sun, Hybrid quantum-classical approach to quantum optimal control, Physical Review Letters 118, 150503 (2017). * Perdomo _et al._ [2008] A. Perdomo, C. Truncik, I. Tubert-Brohman, G. Rose, and A. Aspuru-Guzik, Construction of model hamiltonians for adiabatic quantum computation and its application to finding low-energy conformations of lattice protein models, Physical Review A 78, 012320 (2008). 
* Perdomo-Ortiz _et al._ [2012] A. Perdomo-Ortiz, N. Dickson, M. Drew-Brook, G. Rose, and A. Aspuru-Guzik, Finding low-energy conformations of lattice protein models by quantum annealing, Scientific Reports 2, 571 (2012). * Babbush _et al._ [2014] R. Babbush, A. Perdomo-Ortiz, B. O’Gorman, W. Macready, and A. Aspuru-Guzik, Construction of energy functions for lattice heteropolymer models: Efficient encodings for constraint satisfaction programming and quantum annealing, Advances in Chemical Physics: Volume 155 , 201 (2014). * Babej _et al._ [2018] T. Babej, C. Ing, and M. Fingerhuth, Coarse-grained lattice protein folding on a quantum annealer, arXiv preprint arXiv:1811.00713 (2018). * Hadfield _et al._ [2019] S. Hadfield, Z. Wang, B. O’Gorman, E. G. Rieffel, D. Venturelli, and R. Biswas, From the quantum approximate optimization algorithm to a quantum alternating operator ansatz, Algorithms 12 (2019). * Sels and Polkovnikov [2017] D. Sels and A. Polkovnikov, Minimizing irreversible losses in quantum systems by local counterdiabatic driving, Proceedings of the National Academy of Sciences 114, E3909 (2017). * Anand _et al._ [2022] A. Anand, S. Alperin-Lea, A. Choquette, and A. Aspuru-Guzik, Exploring the role of parameters in variational quantum algorithms, arXiv preprint arXiv:2209.14405 (2022). * Sauvage _et al._ [2022] F. Sauvage, M. Larocca, P. J. Coles, and M. Cerezo, Building spatial symmetries into parameterized quantum circuits for faster training, arXiv preprint arXiv:2207.14413 (2022). * Miyazawa and Jernigan [1996] S. Miyazawa and R. L. Jernigan, Residue – residue potentials with a favorable contact pair term and an unfavorable high packing density term, for simulation and threading, Journal of Molecular Biology 256, 623 (1996). * Kingma and Ba [2014] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). * Duchi _et al._ [2011] J. Duchi, E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research 12, 2121 (2011). * Schuld _et al._ [2019] M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, Evaluating analytic gradients on quantum hardware, Physical Review A 99, 032331 (2019). * Hoffmann and Brown [2022] T. Hoffmann and D. Brown, Gradient estimation with constant scaling for hybrid quantum machine learning, arXiv preprint arXiv:2211.13981 (2022). * Wood [2020] C. J. Wood, Special session: Noise characterization and error mitigation in near-term quantum computers, 2020 IEEE 38th International Conference on Computer Design (ICCD) , 13 (2020). * Bravyi _et al._ [2021] S. Bravyi, S. Sheldon, A. Kandala, D. C. Mckay, and J. M. Gambetta, Mitigating measurement errors in multiqubit experiments, Physical Review A 103, 042605 (2021). * Ding _et al._ [2020] Y. Ding, P. Gokhale, S. F. Lin, R. Rines, T. Propson, and F. T. Chong, Systematic crosstalk mitigation for superconducting qubits via frequency-aware compilation, 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) , 201 (2020). * Giurgica-Tiron _et al._ [2020] T. Giurgica-Tiron, Y. Hindy, R. LaRose, A. Mari, and W. J. Zeng, Digital zero noise extrapolation for quantum error mitigation, 2020 IEEE International Conference on Quantum Computing and Engineering (QCE) , 306 (2020). * Li and Benjamin [2017] Y. Li and S. C. Benjamin, Efficient variational quantum simulator incorporating active error minimization, Physical Review X 7, 021050 (2017). 
* Pokharel _et al._ [2018] B. Pokharel, N. Anand, B. Fortman, and D. A. Lidar, Demonstration of fidelity improvement using dynamical decoupling with superconducting qubits, Physical Review Letters 121, 220502 (2018). * Temme _et al._ [2017] K. Temme, S. Bravyi, and J. M. Gambetta, Error mitigation for short-depth quantum circuits, Physical Review Letters 119, 180509 (2017). * Ravi _et al._ [2022] G. S. Ravi, K. N. Smith, J. M. Baker, T. Kannan, N. Earnest, A. Javadi-Abhari, H. Hoffmann, and F. T. Chong, Navigating the dynamic noise landscape of variational quantum algorithms with qismet, arXiv preprint arXiv:2209.12280 (2022). * Isakov _et al._ [2021] S. V. Isakov, D. Kafri, O. Martin, C. V. Heidweiller, W. Mruczkiewicz, M. P. Harrigan, N. C. Rubin, R. Thomson, M. Broughton, K. Kissell, _et al._ , Simulations of quantum circuits with approximate noise using qsim and cirq, arXiv preprint arXiv:2111.02396 (2021). * [50] Quantinuum H1-1, www.quantinuum.com, September 30, 2022. * [51] IBM Quantum, www.quantum-computing.ibm.com, 2022. * Nation and Treinish [2022] P. D. Nation and M. Treinish, Suppressing quantum circuit errors due to system variability, arXiv preprint arXiv:2209.15512 (2022). * Weidenfeller _et al._ [2022] J. Weidenfeller, L. C. Valor, J. Gacon, C. Tornow, L. Bello, S. Woerner, and D. J. Egger, Scaling of the quantum approximate optimization algorithm on superconducting qubit based hardware, arXiv preprint arXiv:2202.03459 (2022). * Earnest _et al._ [2021] N. Earnest, C. Tornow, and D. J. Egger, Pulse-efficient circuit transpilation for quantum applications on cross-resonance-based hardware, Physical Review Research 3, 043088 (2021). * Farfurnik _et al._ [2015] D. Farfurnik, A. Jarmola, L. M. Pham, Z. H. Wang, V. V. Dobrovitski, R. L. Walsworth, D. Budker, and N. Bar-Gill, Optimizing a dynamical decoupling protocol for solid-state electronic spin ensembles in diamond, Physical Review B 92, 060301 (2015). * Nation _et al._ [2021] P. D. Nation, H. Kang, N. Sundaresan, and J. M. Gambetta, Scalable mitigation of measurement errors on quantum computers, PRX Quantum 2, 040326 (2021). * Jumper _et al._ [2021] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli, and D. Hassabis, Highly accurate protein structure prediction with alphafold, Nature 596, 583 (2021). * Barends _et al._ [2016] R. Barends, A. Shabani, L. Lamata, J. Kelly, A. Mezzacapo, U. L. Heras, R. Babbush, A. G. Fowler, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, E. Lucero, A. Megrant, J. Y. Mutus, M. Neeley, C. Neill, P. J. J. O’Malley, C. Quintana, P. Roushan, D. Sank, A. Vainsencher, J. Wenner, T. C. White, E. Solano, H. Neven, and J. M. Martinis, Digitized adiabatic quantum computing with a superconducting circuit, Nature 534, 222 (2016). * Headley _et al._ [2020] D. Headley, T. Müller, A. Martin, E. Solano, M. Sanz, and F. K. Wilhelm, Approximating the quantum approximate optimisation algorithm, arXiv preprint arXiv:2002.12215 (2020). * Zhu _et al._ [2022] L. Zhu, H. L. Tang, G. S. Barron, F. A. Calderon-Vargas, N. J. Mayhall, E. Barnes, and S. E. 
Economou, Adaptive quantum approximate optimization algorithm for solving combinatorial problems on a quantum computer, Physical Review Research 4, 033029 (2022). * Wurtz and Love [2022] J. Wurtz and P. J. Love, Counterdiabaticity and the quantum approximate optimization algorithm, Quantum 6, 635 (2022). * Claeys _et al._ [2019] P. W. Claeys, M. Pandey, D. Sels, and A. Polkovnikov, Floquet-engineering counterdiabatic protocols in quantum many-body systems, Physical Review Letters 123, 090602 (2019). * Arute _et al._ [2019] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, A. Dunsworth, E. Farhi, B. Foxen, A. Fowler, C. Gidney, M. Giustina, R. Graff, K. Guerin, S. Habegger, M. P. Harrigan, M. J. Hartmann, A. Ho, M. Hoffmann, T. Huang, T. S. Humble, S. V. Isakov, E. Jeffrey, Z. Jiang, D. Kafri, K. Kechedzhi, J. Kelly, P. V. Klimov, S. Knysh, A. Korotkov, F. Kostritsa, D. Landhuis, M. Lindmark, E. Lucero, D. Lyakh, S. Mandrà, J. R. McClean, M. McEwen, A. Megrant, X. Mi, K. Michielsen, M. Mohseni, J. Mutus, O. Naaman, M. Neeley, C. Neill, M. Y. Niu, E. Ostby, A. Petukhov, J. C. Platt, C. Quintana, E. G. Rieffel, P. Roushan, N. C. Rubin, D. Sank, K. J. Satzinger, V. Smelyanskiy, K. J. Sung, M. D. Trevithick, A. Vainsencher, B. Villalonga, T. White, Z. J. Yao, P. Yeh, A. Zalcman, H. Neven, and J. M. Martinis, Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019). * Harrigan _et al._ [2021] M. P. Harrigan, K. J. Sung, M. Neeley, K. J. Satzinger, F. Arute, K. Arya, J. Atalaya, J. C. Bardin, R. Barends, S. Boixo, M. Broughton, B. B. Buckley, D. A. Buell, B. Burkett, N. Bushnell, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, S. Demura, A. Dunsworth, D. Eppens, A. Fowler, B. Foxen, C. Gidney, M. Giustina, R. Graff, S. Habegger, A. Ho, S. Hong, T. Huang, L. B. Ioffe, S. V. Isakov, E. Jeffrey, Z. Jiang, C. Jones, D. Kafri, K. Kechedzhi, J. Kelly, S. Kim, P. V. Klimov, A. N. Korotkov, F. Kostritsa, D. Landhuis, P. Laptev, M. Lindmark, M. Leib, O. Martin, J. M. Martinis, J. R. McClean, M. McEwen, A. Megrant, X. Mi, M. Mohseni, W. Mruczkiewicz, J. Mutus, O. Naaman, C. Neill, F. Neukart, M. Y. Niu, T. E. O’Brien, B. O’Gorman, E. Ostby, A. Petukhov, H. Putterman, C. Quintana, P. Roushan, N. C. Rubin, D. Sank, A. Skolik, V. Smelyanskiy, D. Strain, M. Streif, M. Szalay, A. Vainsencher, T. White, Z. J. Yao, P. Yeh, A. Zalcman, L. Zhou, H. Neven, D. Bacon, E. Lucero, E. Farhi, and R. Babbush, Quantum approximate optimization of non-planar graph problems on a planar superconducting processor, Nature Physics 17, 332 (2021). * Koch _et al._ [2007] J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Charge-insensitive qubit design derived from the cooper pair box, Physical Review A 76, 042319 (2007).
# Highlights of EPS HEP 2019

Department of Physics & Astronomy, University College London

###### Abstract:

An opinionated and informal recap of highlights from the EPS HEP 2019 conference in Ghent, including some aspects of flavour physics, neutrinos, high-density QCD, astrophysics and energy frontier collider physics, and some thoughts about the future.

## 1 Introduction

The purpose of this write-up, as with the talk, is not to be definitive but to give an editorialised point of view on what was a packed and exciting meeting. As befits such a contribution, references will mostly be to other (mostly plenary) talks at the conference, and I will not include figures, on the assumption these will be included in the original contributions. Although it is a necessarily personal selection, I will not dwell on a satisfying semi-final and a stunning final from the cricket world cup, which took place during the conference, even though they will always be associated with Ghent for me from now on. I will say though that it was a privilege to be asked to give this talk by the European Physical Society, especially bearing in mind the turbulent politics between the UK and most of the rest of Europe at present. For the rest of the contribution, I will stick to the physics. There was a lot of it.

## 2 Flavour Physics

Impressive advances were reported in flavour physics, with Katharina Müller presenting the first observation of charge-parity violation in charm decays, by LHCb [1]. The time-integrated CP asymmetry is measured in neutral charmed hadron ($D^{0}/\bar{D}^{0}$) decays to both $\pi^{+}\pi^{-}$ and $K^{+}K^{-}$. A combination of the new LHC Run 2 result with the Run 1 result gives a 5.3$\sigma$ difference of the measured value from zero, indicating dominantly direct CP violation and roughly compatible with the Standard Model (SM). The uncertainties in the SM prediction are greater than those in the data. Müller also presented an updated combination of measurements of the unitarity triangle angle $\gamma$, including a new measurement in $B^{0}\rightarrow DK^{*0}(D\rightarrow K\pi,KK,\pi\pi)$ [2]. There are some tensions, at the 2$\sigma$ level, between different measurements, worth mentioning to emphasise the importance of measuring this angle in several different ways, since new physics may manifest itself differently depending upon the process. Updates of $B_{s}$ mixing from LHCb and ATLAS were also presented, using integrated luminosities of 4.9 fb$^{-1}$ and 99.7 fb$^{-1}$ respectively. These results were also discussed by Johannes Albrecht. The main focus in his talk however was on semi-leptonic decays of $B$ hadrons to kaons and either $e^{+}e^{-}$ or $\mu^{+}\mu^{-}$, and in particular the ratios and angular variables, where the consistency with the SM is at the level of around 2.5$\sigma$, a level which has not changed with the increased precision from including part of the Run 2 data. While no individual measurement is compelling as evidence for physics beyond the SM (BSM), taken together (and as pointed out by Marco Nardecchia in his summary of these and other comparisons) there is a suggestive pattern which is readily accommodated by extensions to the SM. This is clearly an interesting space to watch, especially as more LHC luminosity is included and, as we heard from Francesco Forti, Belle II data are on the horizon, with the accelerator currently on its commissioning track to high luminosities and the detector already recording collisions.
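Returning to the charm result above, it may help to recall the standard definitions (not spelled out in the talk): the time-integrated asymmetry for a final state $f$ and the combined quantity are

$A_{CP}(f)=\frac{\Gamma(D^{0}\to f)-\Gamma(\bar{D}^{0}\to f)}{\Gamma(D^{0}\to f)+\Gamma(\bar{D}^{0}\to f)},\qquad\Delta A_{CP}=A_{CP}(K^{+}K^{-})-A_{CP}(\pi^{+}\pi^{-}),$

where taking the difference of the two final states cancels most production and detection asymmetries; it is the deviation of $\Delta A_{CP}$ from zero that reaches 5.3$\sigma$.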
## 3 Neutrinos

That we are now into an era of precision, high-statistics neutrino physics was amply illustrated by the dataset of atmospheric neutrino events collected by Super-Kamiokande [3] and now used in global fits, with a “postage stamp” style slide shown by Gioacchino Ranucci containing 50,000 events split into 19 different analysis samples, each with enough statistics to allow detailed exploration of the event characteristics. As discussed by Francesca Di Lodovico and Sylvia Pasquale, data principally from T2K and NO$\nu$A are beginning to constrain both the neutrino mass hierarchy (with the normal hierarchy marginally favoured) and the CP-violating phase in the PMNS matrix (with a non-zero value marginally favoured). Reactor experiments are also making major contributions to characterising the physics of the neutrino sector, particularly with their measurements of the mixing angle $\theta_{13}$.

## 4 New “Standard Model” Physics

When we discuss the search for “new physics”, it grates on me sometimes, since as we probe above the electroweak symmetry breaking scale with ATLAS and CMS, everything we measure is in a sense new. We are measuring what are sometimes previously unseen processes, and always in a region of physics previously unexplored. Every time we do so and find agreement with the SM, we are validating our ideas about the fundamental constituents and forces in a qualitatively new regime. By the standards of most fields, this is new physics. Several examples were shown by Wolfgang Adam, Lucia Di Ciaccio and Andreas Hoecker, with (to give just a few examples) impressive results on diboson production, top cross sections and asymmetry (including new calculations of NNLO spin correlations), prompt photons (now out to 2 TeV in transverse momentum) and dilepton+photon measurements with the full Run 2 data set from ATLAS. As Giulia Zanderighi made clear, for the full exploitation of these measurements our ability to make exclusive calculations, implementing the kinematic cuts which allow comparisons to data in the fiducial phase space of the measurement, is essential and is an area of intense effort and much progress. The new regimes probed, and the new precision achieved with QCD theory, mean that, as also pointed out by Zanderighi, there is now a need for both QCD and electroweak higher-order corrections to be incorporated together in calculations. This need will become more urgent throughout the high-luminosity LHC (HL-LHC) era. An early step in this direction was in fact shown in a parallel session by German Sborlini, in the form of a coherent QED and QCD higher-order calculation of Drell-Yan dilepton production [4]. It seems from this that a multiplicative rather than additive combination of the corrections is a better approximation to the fully coherent result. At the time, I thought I understood something about elliptic Feynman diagrams from the fascinating talk by Claude Duhr. This understanding now escapes me, but it seems there is an interesting and useful advance on the boundaries between physics and mathematics there. As predicted, jet substructure analysis has proved itself “useful more generally in the identification of hadronically decaying massive particles which have energy large compared to their mass” [5], and highly boosted $W,Z,H$ and top are now an important feature of physics at the LHC.
Several new results were shown, including a very direct technique from CMS for measuring the top mass via the mass of a “fat” jet containing three subjets from the decay of a top quark. There are interesting questions about how these techniques will change, and what demands they will place on the detectors, if even higher boosts are available at future colliders, and when the scales are such that these particles will need to be treated as partons in the initial- and final-state showers. Present theoretical tools are likely to require substantial development for us even to make a reasonable estimate.

## 5 High Density QCD

Jet substructure also featured as a powerful tool in heavy ion physics and the study of high-density QCD. An array of impressive new results was shown by Marco Van Leeuwen, Dennis Perepelitsa and Carlos Albert Salgado Lopez. The impact of the medium on QCD splitting is studied by observing differences between the subjet momenta in proton-proton and heavy ion collisions [6, 7], with interesting effects seen which should help characterise the medium. However, such comparisons must be done with care, given the observation of collective effects even in proton-proton collisions [8], a system thought, before LHC data, to be too small to manifest such a thing. As Jan Fiete Grosse-Oetringhaus strikingly put it, this challenges two paradigms at once. What is the smallest system in which the heavy-ion “standard model” remains valid? And can the standard tools for proton-proton physics remain standard? Traditional high-energy physics and traditional heavy ion studies, often sociologically and scientifically rather distant, must grow together. The underlying QCD is the same theory, and the aim must be to demonstrate that a unified description for $ee$, $ep$, $pp$ and $AA$ is feasible, or to show that different mechanisms are justified and how they arise.

## 6 The Higgs

Even if you don’t buy my claim that most of the measurements from the LHC are new physics, perhaps you will agree with Roberto Salerno’s statement that Higgs physics, at least, is really new physics. Christoph Grojean in the ECFA session on Sunday gave a powerful reminder, in the context of proposals for future colliders [9], of just how special the Higgs boson is. It can be seen as a new force, of a different nature to the gauge interactions known so far. There is no underlying local symmetry, no quantised charge. It is deeply connected to the vacuum structure of space-time. The up- and down-quark Yukawa couplings determine the relationship between the proton and neutron masses, and thus the stability of nuclei. The electron Yukawa controls the size of atoms (and thus the size of the Universe?). The top quark Yukawa decides (in part) the stability of the electroweak vacuum. The Higgs self-coupling controls the (thermo)dynamics of the EW phase transition, and therefore might be responsible for the dominance of matter over antimatter in the Universe. The Higgs boson really is special. Salerno, Adam and Hoecker showed a plethora of new results, many using the full LHC Run 2 dataset. ATLAS and CMS have established the existence of Higgs couplings to the third-generation charged fermions, as well as the gauge bosons. New studies were shown on the pursuit of the second generation — muons and charm. The limits are still factors above the SM, but are getting close enough to have interesting sensitivity to any upward deviations, and more precision is promised.
Differential cross sections for Higgs production and decay test whether it is pointlike and whether its couplings evolve as expected with (for example) transverse momentum. Direct measurement of the Higgs self-coupling requires much more data, with observation possible with the full HL-LHC, and any precision requiring a future hadron collider, just as a measurement of the total Higgs width with a degree of model-independence requires a lepton collider. Finally on the Higgs, Francesco Riva detailed how intertwined the Higgs sector is with many of the processes measured at the LHC – dibosons, top and more. This implies that when we measure such things, we are, within the context of the SM, making powerful consistency tests of the parameters of the Higgs sector – or doing Higgs physics without a Higgs, as he styles it. So, if you agreed that Higgs physics is new physics, you now have to accept my earlier claim that so are many of the other measurements ATLAS and CMS are producing, even if no trace of BSM physics has yet shown itself.

## 7 Dark Matter, Astroparticle Physics and Cosmology

The progress in searching for Dark Matter was described by Igor Garcia Irastorza, Carlos de los Heros and Kfir Blum. Amongst the items which were fresh news to me was the fact that the famous picture of the black hole at the centre of the giant elliptical galaxy M87 [10] actually excludes some ultra-light Dark Matter models, in which such black holes do not form. This is partly, I think, a reflection of a resurgence in model building and exploration, now the WIMP miracle is (at best) delayed. This is also reflected in the increased interest in axion searches as described by Irastorza. We also learned that the so-called “neutrino floor”, the point in sensitivity at which detectors looking directly for Dark Matter scattering off normal matter start seeing solar neutrino interactions, is not a hard floor, “more of a swampland” in the words of de los Heros. Anisotropic detectors, directional detection and time modulation offer possibilities for sinking below it. The anomalously large number of positrons seen by AMS (shown by Barbara De Lotto) remains intriguing – something interesting is going on there, whether it is Dark Matter-related or not. And the IceCube map of the neutrino sky shown by Elisa Bernardini offers a truly new view of the Universe, with the obvious potential for identifying point sources perhaps from Dark Matter annihilation, amongst other things. Our newest messenger, gravitational waves, continues to surprise and excite. One surprise to me was the fact that the “chirp” pattern of a neutron star merger, as shown by Patricia Schmidt, has – via tidal distortion – implications for the equation of state of high-density QCD, as discussed by Carlos Albert Salgado Lopez. The prospect of the Einstein telescope, which with its sensitivity to lower frequencies should allow impending mergers to be spotted earlier and thus observed in more channels, is mouth-watering.

## 8 The Future, and Beyond the Standard Model

And so to something on future prospects. Much of this hangs on the development of accelerator technology, of which we heard summaries from Catarina Biscari in the ECFA session and Ralph Assman in the final session of the plenaries.
Highlights included operational crab cavities for protons in the SPS (shown by Olivier Bruning in the parallel sessions), 2 GeV electron acceleration in the AWAKE proton-driven plasma wakefield experiment, a dipole magnet demonstrator so far tested at FNAL to 14 T with an end goal of 15 T, and work on Nb$_{3}$Sn wire toward 16 T from companies engaged in the Future Circular Collider project. The success (and speed) of such developments will be critical for the long-term future of the field. Nearer in time, we have LHC Run 3 and the HL-LHC era approaching. Isabell Melzer-Pellmann and Marie-Helene Genest showed the results of many ingenious and important searches for BSM physics, none of which have revealed anything other than increasingly stringent limits. This in itself is great progress and is having a profound impact on the theoretical field, as was described by Giuliano Panico and alluded to several times already in this contribution. Over the next few years, I believe we need, and will see, a profound change of approach here. In my opinion, while there is still important scope for novel signatures not yet covered (for example, exotic long-lived particles, disappearing tracks, non-standard jets, and probably other things still to be dreamt up), the main emphasis of the experiments should shift toward making precise, model-independent measurements which can be confronted with increasingly precise and exclusive SM predictions. It is such confrontation, in my view, that gives us the best chance of establishing whether or not we face a desert above the electroweak symmetry-breaking scale, as far as BSM physics is concerned. Such measurements, including those of the Higgs, seem likely to me to provide the main legacy of the LHC, an exacting challenge laid down to future model builders. A clearer idea of exactly what is being measured, in terms of separating theoretical interpretation from measurement, is needed. This implies not only measuring in fiducial regions reflecting the detector acceptance, but also being explicit about what is considered background and what is considered signal. An example: if we are aiming to measure $WW$ scattering, and we see an event with two isolated leptons and missing energy in the detector, should we really be subtracting it because our Monte Carlo says it came from top quarks? What about off-shell tops? It is often better to measure a final state, and do any subtraction later as part of the interpretation. This throws down a challenge to the theory to calculate the final states that are actually measured, of course, as discussed above. When it comes to the interpretation of searches too, we need on the one hand to provide the data in such a way that a search for one model may be readily reinterpreted in terms of others with similar signatures, and on the other hand to move away from simplified benchmark models toward making more general statements about whole classes of theory. An excellent example of the latter approach was given by Peter Athron in his parallel session talk [11], where the GAMBIT system is used to show that rumours of the death of the MSSM may be exaggerated. Not only are there still allowed parameter points for any and all values of neutralino and chargino mass (even though swathes of the multidimensional parameter space are indeed ruled out by LHC data), but there are even some regions of SUSY parameter space which are marginally favoured over the SM. I will take the opportunity to highlight a few other techniques and ideas of growing importance.
The potential of machine learning, long extant around the periphery of the field, is now being realised in many different particle and astroparticle physics applications, and has much more to offer if used judiciously. Jet substructure has already been mentioned and hardly counts as new, but will in my view continue to grow in importance, including in the evaluation of the potential of future colliders. The same goes for the theoretical advances needed to calculate such variables incorporating both QCD and electroweak higher-order corrections. And finally, the field of particle physics really must get to grips with Open Data. The culture in astrophysics should be an example; the fact that LIGO data from the first gravitational wave observations are already public is immensely impressive, when the Higgs discovery data are still behind collaboration firewalls. Of course, the technical issues are different. Release of data is not without effort, and brings challenges. But in the final analysis, data produced by our experiments are part of the store of human knowledge, and are not ours to keep to ourselves in the long term. On the LHC side, CMS have led the way in opening a subset of early LHC data for physics analysis, some of which has been used by theorists to develop new results and, as shown in a parallel session by Matthias Schott, a member of ATLAS, to perform open cross-checks, new measurements and comparisons. This shows that opening up our data is technically possible, and useful, for collider experiments. However, such activity – especially by a member of a rival collaboration – causes concern to some, and may even be discouraged or seen as a threat by the collaborations. This would be a mistake. While there are, for example, legitimate concerns about shortage of effort within the collaborations, attempting to control what colleagues do with public data is not the way to address those concerns, and in my opinion crosses an unacceptable line in terms of academic freedom. Open data, and its analysis, will in the end be good for science and innovation, and should be at least tolerated by even the most sceptical. External pressures are pushing the field in this direction anyway. We should go enthusiastically, not dragging our feet.

## 9 Conclusion

With the ongoing results from the LHC experiments, Belle II arriving, HyperK and DUNE on the horizon, continuing improvement in theoretical and experimental techniques and a deepening engagement between particle physics, astrophysics and nuclear physics, this week has provided an exciting snapshot of a field still digesting the implications of the results of the last decade or so. I did not mention charged-lepton flavour measurements, or experiments probing the neutrino mass directly (its value, and whether it is Majorana or Dirac in nature), but there are also exciting prospects there. The main impression I took away from the conference is that while we need to pay attention to the far future, and it contains many uncertainties, the next few years promise a great richness of data, and therefore possibility, which should inspire and excite us all. But there is still much work to do to make this a reality. I apologise to anyone I didn’t represent well, or at all, here. I already thanked the organisers for the invitation. I would also like to thank them and the local organisers, colleagues, and the city of Ghent and its brewers, for hosting and attending a superb conference.

## References

* [1] R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett.
122 (2019) no.21, 211803 doi:10.1103/PhysRevLett.122.211803 [arXiv:1903.08726 [hep-ex]]. * [2] R. Aaij et al. [LHCb Collaboration], JHEP 1909 (2019) 041 doi:10.1007/JHEP08(2019)041 [arXiv:1906.08297 [hep-ex]]. * [3] K. Abe et al. [Super-Kamiokande Collaboration], Phys. Rev. D 97 (2018) no.7, 072001 doi:10.1103/PhysRevD.97.072001 [arXiv:1710.09126 [hep-ex]]. * [4] L. Cieri, G. Ferrera and G. F. R. Sborlini, JHEP 1808 (2018) 165 doi:10.1007/JHEP08(2018)165 [arXiv:1805.11948 [hep-ph]]. * [5] J. M. Butterworth, B. E. Cox and J. R. Forshaw, Phys. Rev. D 65 (2002) 096014 doi:10.1103/PhysRevD.65.096014 [hep-ph/0201098]. * [6] A. M. Sirunyan et al. [CMS Collaboration], Phys. Rev. Lett. 120 (2018) no.14, 142302 doi:10.1103/PhysRevLett.120.142302 [arXiv:1708.09429 [nucl-ex]]. * [7] S. Acharya et al. [ALICE Collaboration], arXiv:1905.02512 [nucl-ex]. * [8] V. Khachatryan et al. [CMS Collaboration], JHEP 1009 (2010) 091 doi:10.1007/JHEP09(2010)091 [arXiv:1009.4122 [hep-ex]]. * [9] J. de Blas et al., arXiv:1905.03764 [hep-ph]. * [10] K. Akiyama et al. [Event Horizon Telescope Collaboration], Astrophys. J. 875 (2019) no.1, L1 doi:10.3847/2041-8213/ab0ec7 [arXiv:1906.11238 [astro-ph.GA]]. * [11] P. Athron [GAMBIT Collaboration], arXiv:1910.05906 [hep-ph].
# Model-Driven Engineering for Formal Verification and Security Testing of Authentication Protocols

Mariapia Raimondo Dip. di Matematica e Fisica Università della Campania “L. Vanvitelli” Caserta, Italy <EMAIL_ADDRESS>Stefano Marrone Dip. di Matematica e Fisica Università della Campania “L. Vanvitelli” Caserta, Italy <EMAIL_ADDRESS>Angelo Palladino Aerospace Business Unit Kineton srl Napoli, Italy <EMAIL_ADDRESS>

###### Abstract

Even if the verification of authentication protocols can be achieved by means of formal analysis, the modelling of such an activity is an error-prone task due to the lack of automated and integrated processes. This paper proposes a comprehensive approach, based on the Unified Modeling Language (UML) profiling technique and on model transformation, to enable the automatic analysis of authentication protocols starting from high-level models. In particular, a UML-based approach is used to build an annotated model of communication protocols from which formal notations (e.g., AnBx, Tamarin) can be generated. Such models in lower-level languages can be analysed with existing solvers and/or with traditional testing techniques by means of test case generation approaches. The industrial impact of the research is high due to the growing need for security and the necessity to connect industrial processes and equipment to virtualised computing infrastructures. The research is conducted on two case studies: railway signalling systems and blockchain-based applications.

###### Index Terms:

Formal verification of security protocols, Model Driven Engineering, Verification and Validation, Railway signalling, blockchain technology

## I Introduction

Critical systems are now connected to the Internet due to an increase in the complexity of their functionalities. This growth of complexity goes not only in the direction of increasing the level of automation, but also in opening such systems to remote control. Security becomes a prime factor in such a context: its lack could lead not only to economic loss or privacy leaks, but also to damage to people and goods. This is the case for railway signalling systems, which are considered in this paper. Another emerging domain where security plays an important role is that of Distributed Ledger Technologies (DLTs): despite their relatively recent adoption, a lack of security in DLTs would have a strong social impact. In fact, blockchain technology, which is a particular case of DLT, has received widespread support and acclaim. It provides an infrastructure to manage transactions within a community without the need for a supervising trusted third party. Suffice it to say that cryptocurrencies are based on blockchain technology and are one of the main long-term investment strategies adopted nowadays. Moreover, the European Community has started drafting regulations for their use (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0858&qid=1656931726550) and proposals to prevent the usage of this technology for illicit purposes (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0422&qid=1656931726550). Formal verification can be very useful both to check the correctness of the behaviour of Industrial Control Systems (ICSs) and to verify the achievement of the security levels required by new technologies such as blockchain.
Unlike simulation and testing, this complementary technique is able to find the very specific conditions that could lead to a security flaw that has not been considered in security test plans. Furthermore, formal methods are recommended for certification purposes, especially in critical systems. Nonetheless, it is worth underlining that: simulation and testing are still the most effective methods to demonstrate the presence of a security issue in a specific scenario; formal modelling and analysis often require specialised, skilled people, whose effort is devoted to low-level, error-prone activities. The work presented in this paper deals with the problem of easing the work of the modeller and unifying the approach to check security and behavioural properties. The main objective is to provide a comprehensive approach, supporting formal analysis and testing, based on Model-Driven Engineering (MDE) techniques. The approach presented in the paper is based on a traditional model-driven process schema with the following elements. First, a UML Profile able to capture the authentication-related features of the modelled system is defined. Then, a model transformation is provided to derive a formal notation from an annotated UML model of the system. Finally, the produced model is analysed with a set of techniques to verify security properties by formal analysis and/or to generate test scripts to support the verification. Where possible, the approach involves the use of existing and assessed solvers and toolchains. More specifically, the UML Profile is used to enrich behavioural models with cryptographic primitives and security properties. The model transformation is used to derive an Alice & Bob (AnB) model which is then checked against security properties. To the best of our knowledge, there is no unifying description framework for the different tools that could be used in the formal verification approach. This work is also driven by two case studies: the European Railway Traffic Management System (ERTMS)/European Train Control System (ETCS) Key Management System (KMS) and the Tweetchain protocol. The paper is structured as follows: Section II gives a quick revision of related works, Section III describes the proposed approach, while Section IV discusses specific technical concerns. Finally, Section V summarises the current state of development of the approach.

## II Related works

As mentioned in Section I, one of the adopted methodologies is the UML profiling technique. In particular, up to now, there is no UML standard profile — i.e., a UML profile defined by the Object Management Group — devoted to the security analysis of blockchain-based protocols and applications, which we aim to propose. However, regarding UML profiles for security analysis, the QoS&FT profile [1] provides general support for the specification of Quality of Service (QoS) characteristics and for risk assessment. Moreover, many researchers have contributed by proposing UML profiles useful for the modelling and analysis of security properties of software systems, such as UMLsec [2] and SecAM [3]. From a model-driven perspective, the scientific community has traditionally proposed many model-based approaches that are specific to one simulation platform at a time (e.g., [4]). Moreover, there is a set of contributions on test case generation from high-level models (e.g., [5]).
The combination of formal modelling and test case generation appears in a series of works that use, for example, the model checking technique, as in [6]. Regarding the railway domain, security is rarely approached in scientific studies, which more frequently focus on other non-functional features such as safety, availability, performance and signalling. In [7] the authors provide an approach to model and analyse the railway signalling system in Event-B. Similarly, in [8] a process is presented to generate Generalized Stochastic Petri Net (GSPN) and Promela models for analysis and test case generation purposes. In [9], instead, a model-based approach is provided that is aimed at improving the development of proprietary railway interlocking systems. On the other hand, security analysis in blockchain environments by means of formal methods mainly focuses on smart contracts. The survey [10] discusses 35 papers from 2015 to 2019, focusing only on the formalization of smart contracts.

## III Overview of the approach

Figure 1 describes the approach presented in the paper. The aim of the approach is to facilitate the verification of security properties of communication protocols. This goal is achieved by allowing the definition of the protocol, and of the related security properties to be verified, in a high-level language (Model level). Moreover, we leverage model-driven techniques to automatically generate low-level artefacts and run verification activities. The latter can be carried out using formal and simulation techniques (Solver level and Simulation Framework). A key feature of this approach is the possibility of using different platforms to analyse protocols, ranging from formal verification tools to test script generation ones.

Figure 1: Overview of the approach

The chosen high-level modelling language is the well-known UML, which provides a set of different diagrams that can be used to model a given system. In particular, when dealing with communication protocols, good options are the Sequence Diagrams and the protocol State Machine Diagrams. The former capture the interactions between parties, whilst the latter focus more on the state changes of participants in reaction to message exchanges. Since our aim is to ease the work of the modeller, a way to fill the gap that arises in the creation of a UML model of a protocol is to provide more protocol-related concepts at the UML level. This can be done by applying the UML profiling technique to capture and define as a Profile the authentication-related features of the modelled systems. In Fig. 1, UML and the AP Profile — Authentication Protocol Profile — constitute the high-level language that can be used to model both the protocol specification and the security properties to be verified (e.g., by means of UML constraints or other annotated UML constructs). Then, thanks to a model transformation, both the protocol and the security properties are translated into a low-level model. The latter is a tool-independent notation, for example AnB. Finally, further model transformations are used to obtain formal models, and a set of specific solvers can be used:

1. To formally check the model against the translated security properties;
2. To get counterexamples of the unmatched properties and extract Abstract Test Cases (ATCs) from such counterexamples;
3. To transform, according to the presence of a system simulator framework, the ATCs into an intruder model;
4. To simulate such a model (i.e., intruder and protocol participants) to get a practical demonstration of the presence of the security flaw.
## IV Towards an implementation of the approach

This section discusses three technical concerns to address in order to realize the approach.

### IV-A Reusability

One of the key goals of the proposed research is to define a methodology that can be easily adapted to different domains. The sequence of the exchanged messages, and the type of information they carry, can differ a lot among the various authentication protocols. This difference is already present in classical protocol descriptions (i.e., standards, white papers, technical reports, etc.). The separation of these descriptions boosts the flexibility of the approach. The concrete solution in this work considers two sub-languages. The first is represented by a classical UML Profile: an example of the concepts represented at this level is the message, which can be captured in the UML Profile by a stereotype annotating Sequence Diagram messages. At the second level, there are instead specific textual notations, formalised by Backus-Naur Form (BNF) grammars. These notations can be used to detail the data exchanged (both plain and cipher-text) between communicating parties. The UML Profile is separated from the specific notation and, thus, it can be reused in different specific domains. When a modeller specifies a protocol, he/she can use the stereotypes considered in the UML Profile to model and annotate the UML diagrams accordingly. Moreover, since tagged values of UML stereotypes make it possible to add textual information, they are used with the Transaction stereotype. By doing so, the text can be formatted according to the grammar defined for the specific domain. A preliminary design of this mechanism is published in [11].

### IV-B Choice of the solution toolchain

One of the most valuable benefits coming from the adoption of MDE is that it firmly separates the levels of abstraction of the different artefacts. Part of the implementation of the toolchain presented in this paper can clearly be achieved by reusing existing tools and integrating parts of existing projects into ours. In particular, the work of Modesti [12] caught our attention, since it provides the generation of analysable formal models starting from semi-formal language models. In a first phase of their work, the authors developed an on-the-fly model checker called OFMC (Open-Source Fixed-Point Model-Checker). It is a model checker naturally devoted to the analysis of security protocols, and it requires as input a model written in the AnB/AnBx language. This makes it an adequate component to be used as a partial back-end of our proposal. In detail, the AnBx language is a formalized version of the well-known semi-formal AnB notation (an illustrative AnB-style fragment is sketched below). It offers a higher level of abstraction by providing an easier way to model cryptographic primitives in the shape of channels. In fact, the mode of a channel allows the modeller to enforce or weaken the security guarantees of message exchange in a convenient style, decoupling the protocol from the specific encryption method adopted. It is worth underlining that Modesti’s toolchain does not represent the only possible choice. In [13] and in [11], the Tamarin Prover model checker has been used.
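To make the notation concrete, the fragment below sketches what a toy challenge-response protocol could look like in AnB-style notation. It is illustrative only: the protocol name, the agents, the nonce and the goals are invented for this example, and the exact concrete syntax accepted by OFMC/AnBx differs in some details (AnBx, for instance, expresses much of this through channel modes).

```
Protocol: ToyChallenge

Types: Agent A,B;
       Number NA;
       Function pk

Knowledge: A: A, B, pk, inv(pk(A));
           B: A, B, pk, inv(pk(B))

Actions:
A -> B: {A,NA}pk(B)
B -> A: {{NA}inv(pk(B))}pk(A)

Goals:
A authenticates B on NA
NA secret between A,B
```

Here A sends a fresh nonce encrypted for B, and B replies by signing the nonce and re-encrypting it for A; the two goals ask the solver to check an authentication property and a secrecy property. Artefacts of exactly this kind are what the model transformation is meant to derive automatically from the annotated UML model.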
Coming back to the alternatives: in [13], Tamarin has been used as a standalone language (i.e., detached from an MDE approach), while in [11] a model-driven toolchain is proposed using a generic AnB notation and Tamarin as a solving back-end.

### IV-C Integrating formal analysis and testing

The community of security testing is wide, and it considers finding vulnerabilities and errors in low-level code and operating systems as well as finding high-level protocol flaws. The formal methods and model-driven bodies of knowledge are better suited to support the second kind of activity and, thus, talking about the possibility of implementing security testing features in our approach could seem strange. As Fig. 1 reports, the testing “branch” of the approach (from Abstract Test Cases) relies on the presence of a Simulation Framework whose purpose is to define an executable abstraction of the behaviours of the protocol parties. At this point of the research, there is no specific requirement for such frameworks: they could span from simple event-based simulation to complex hardware/system-in-the-loop environments. A possible architecture for the Simulation Framework is here discussed. The core of the architecture is a simulation engine executing the participant models, which mainly exchange messages, and the intruder, which attempts to intercept such messages and inject malicious ones. The engine is supported by these components: (1) a translator from the ATCs and security properties to the concrete notation of the test script; (2) a security monitor, which is in charge of monitoring the security properties, raising exceptions in case of property violations. The Simulation Framework is based on the presence of a language for test scripting to express the models of the actors and of the security properties.

## V State of progress

We plan to prototype and apply the approach presented here to two different case studies: one is the Tweetchain protocol, the other is the Key Management System of the ERTMS/ETCS. The Tweetchain protocol, described in [14], is a lightweight version of the blockchain paradigm. It mainly consists of a reinvention of the consensus protocol to make it applicable to Internet of Things (IoT) devices, which do not have the computing power and memory necessary to run one of the full consensus protocols available. In fact, the authors leveraged the Internet connectivity of IoT devices to base the consensus protocol on the famous social network Twitter, using tweets to encode transactions and meshed replications to replace the proof of work. Currently, the foundations have been laid for the creation of a blockchain-specific UML Profile, which could be a first version of the AP Profile. Moreover, a specific textual notation for the Tweetchain data exchange has been provided, expressed by a BNF grammar. Furthermore, a transformation from a UML Sequence Diagram annotated with such a profile to the AnB notation has been designed. Finally, thanks to the already available automatic translation provided by [15], it has been possible to translate the AnB protocol into a Tamarin model [16]. Only formal analysis has been conducted on this protocol so far. Previously published papers describing the preliminary results on this case study are [13, 11]. To guarantee the security of signalling systems in modern railways, a Key Management System regulates the exchange of keys between trackside and onboard controllers.
The ETCS-KMS protocol is devoted to guaranteeing the security of this exchange in an online environment [17]. In [18], a preliminary study on the application of the techniques presented here is reported, focusing on a compositional approach to Tamarin specification. Fig. 2 summarises the development status of the entire approach; it recalls Fig. 1. The blue circles represent the part of the research covered by the works on the Tweetchain protocol [13, 11], while the pink ones are related to the ETCS-KMS case study [18].

Figure 2: Roadmap of the work

The next steps in this research will be: (1) the migration of the case studies to the toolchain based on the work of Modesti, to exploit the translator from AnBx to OFMC and related tools (in green in the figure); (2) the definition of the blockchain-specific Profile and its extension to the AP Profile, so as to also include the ETCS-KMS-specific features (in red); (3) the automation of the approach by means of the prototyping of key components (in yellow). In particular, the original contributions of this work will be: (1) the definition of a UML Profile for blockchain-based applications and authentication protocols; (2) the integration of an intruder model in the Simulation Framework, which is not present in any of the considered tools; (3) the provision of a framework giving evidence of the attacks arising from formal analysis.

## Acknowledgment

The work of Mariapia Raimondo is funded by INPS, the Istituto Nazionale di Previdenza Sociale (Italy), within the PhD program (XXXVI cycle).

## References

* [1] “UML Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms Specification,” Object Management Group, Tech. Rep., 2005, formal-08-04-05.
* [2] J. Jürjens, _Secure systems development with UML_. Berlin, Heidelberg: Springer, 2005.
* [3] R. J. Rodríguez, J. Merseguer, and S. Bernardi, “Modelling security of critical infrastructures: A survivability assessment,” _Comput. J._, vol. 58, no. 10, pp. 2313–2327, 2015.
* [4] K. Anastasakis, B. Bordbar, G. Georg, and I. Ray, “UML2Alloy: A challenging model transformation,” _LNCS_, vol. 4735, pp. 436–450, 2007.
* [5] M. Utting and B. Legeard, _Practical Model-Based Testing_, 2007.
* [6] A. Gargantini and C. Heitmeyer, “Using model checking to generate tests from requirements specifications,” _SIGSOFT Softw. Eng. Notes_, vol. 24, no. 6, pp. 146–162, Oct. 1999.
* [7] R. Bougacha, A. Ait Wakrime, S. Kallel, R. Ben Ayed, and S. Collart-Dutilleul, “A model-based approach for the modeling and the verification of railway signaling system,” Jan. 2019, pp. 367–376.
* [8] S. Marrone, F. Flammini, N. Mazzocca, R. Nardone, and V. Vittorini, “Towards model-driven V&V assessment of railway control systems,” _International Journal on Software Tools for Technology Transfer_, vol. 16, no. 6, pp. 669–683, 2014.
* [9] F. Scippacercola, A. Zentai, and S. Russo, “Experiencing model-driven engineering for railway interlocking systems,” pp. 31–64, 2017.
* [10] A. Singh, R. M. Parizi, Q. Zhang, K.-K. R. Choo, and A. Dehghantanha, “Blockchain smart contracts formalization: Approaches and challenges to address vulnerabilities,” _Computers & Security_, vol. 88, 2020.
* [11] M. Raimondo, S. Bernardi, S. Marrone, and J. Merseguer, “An approach for the automatic verification of blockchain protocols: The Tweetchain case study,” _Journal of Computer Virology and Hacking Techniques_, to appear.
* [12] M. Bugliesi, S. Calzavara, S. Mödersheim, and P.
Modesti, “Security protocol specification and verification with AnBx,” _Journal of Information Security and Applications_ , vol. 30, pp. 46–63, 2016. * [13] M. Raimondo, S. Bernardi, and S. Marrone, “On formalising and analysing the tweetchain protocol,” in _ICISSP 2021 - Proceedings of the 7th International Conference on Information Systems Security and Privacy_ , 2021, pp. 781–791. * [14] F. Buccafurri, G. Lax, S. Nicolazzo, and A. Nocera, “Overcoming limits of blockchain for iot applications,” in _ACM International Conference Proceeding Series_ , vol. Part F130521, 2017. * [15] M. Keller, “Converting Alice&Bob Protocol Specifications to Tamarin,” ETH Zurich, 2014, Bachelor’s Thesis. * [16] S. Meier, B. Schmidt, C. Cremers, and D. Basin, “The TAMARIN prover for the symbolic analysis of security protocols,” in _proc. of CAV 13_ , vol. LNCS 8044, 2013. * [17] UNISIG, “On-line Key Management FFFIS - Subset 137.” * [18] M. Raimondo, “Formal modeling and analysis of cryptographic schemes in railway systems.” 2020, Master’s Thesis.
# Discovering Multiple Truths with a Hybrid Model

Furong Li† Xin Luna Dong‡ Anno Langen‡ Yang Li‡

†National University of Singapore ‡Google Inc., Mountain View, CA, USA

###### Abstract

Many data management applications require integrating information from multiple sources. The sources may not be accurate and may provide erroneous values. We thus have to identify the true values from conflicting observations made by the sources. The problem is further complicated when there may exist multiple truths (e.g., a book written by several authors). In this paper we propose a model called Hybrid that jointly makes two decisions: how many truths there are, and what they are. It considers the conflicts between values as important evidence for ruling out wrong values, while keeping the flexibility of allowing multiple truths. In this way, Hybrid is able to achieve both high precision and high recall.

## 1 Introduction

When consolidating information from different sources, we may observe different values provided for the same entity. Consequently, we need to identify the correct values from conflicting observations made by multiple sources, which is known as the data fusion (or truth discovery) problem [2, 5]. We illustrate the problem using the example below.

###### Example 1.1

Table 1 shows the information collected from three sources regarding equipments of two winter sports: ice hockey and snowboarding. We can see that four different values are provided for the entity ice hockey (helmet, stick, boots and skis), while only the first two are correct. The goal of the truth discovery problem is to identify the correct values from Table 1.

The simplest solution to the truth discovery problem is majority vote: consider the value provided by the largest number of sources as the truth. For example, for the entity snowboarding, we select board as the truth since it is provided by two sources while neck guard is only provided by one source. However, oftentimes different sources have different qualities, and one may want to distinguish them. The authors of [8, 1] measure the quality of a source $s$ by its accuracy, which is the probability that a value provided by $s$ is correct. Then the truth can be decided through a weighted vote, where a source with higher accuracy is assigned a higher weight; the value with the highest vote is selected as the truth. The intuition behind this approach is that values provided by more accurate sources are more likely to be true.

Table 1: Information collected from different sources regarding equipments of various winter sports. $\surd/\times$ indicates the correctness.

| | entity | attribute | value | sources
---|---|---|---|---
$\surd~{}o_{1}$ | ice hockey | equipments | helmet | $s_{1},s_{3}$
$\surd~{}o_{2}$ | ice hockey | equipments | stick | $s_{1},s_{2}$
$\times~{}o_{3}$ | ice hockey | equipments | boots | $s_{2}$
$\times~{}o_{4}$ | ice hockey | equipments | skis | $s_{3}$
$\surd~{}o_{5}$ | snowboarding | equipments | board | $s_{2},s_{3}$
$\times~{}o_{6}$ | snowboarding | equipments | neck guard | $s_{1}$

The limitation of the above methods is that when multiple truths exist, they at best find one of them while missing the rest. We thus refer to them as single-truth approaches. While truth discovery algorithms usually compute a probability $p(v)$ of each value $v$ being true, in single-truth approaches the probabilities of all values sum up to 1, since they assume there is only one true value.
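To make the single-truth behaviour concrete, the following minimal Python sketch (our own illustration) implements an accuracy-weighted vote in the style of Accu [1], assuming independent sources and $n$ uniformly distributed wrong values in the domain; the per-provider vote count $nA(s)/(1-A(s))$ is the form derived later in Eq. (16). With $n=10$ and a uniform accuracy of 0.6 it reproduces the first line of Table 2 (see Example 1.2 below). The function name and input encoding are ours.

```python
def accu_single_truth(value_sources, accuracy, n=10):
    """Accu-style single-truth vote: the probabilities over all values sum to 1."""
    votes = {}
    for v, sources in value_sources.items():
        vote = 1.0
        for s in sources:
            # Each provider contributes a factor n*A(s)/(1-A(s)) to the vote count.
            vote *= n * accuracy[s] / (1.0 - accuracy[s])
        votes[v] = vote
    total = sum(votes.values())  # single-truth assumption: normalize over all values
    return {v: vote / total for v, vote in votes.items()}

# (ice hockey, equipments) from Table 1; every source has accuracy 0.6.
probs = accu_single_truth(
    {"helmet": ["s1", "s3"], "stick": ["s1", "s2"],
     "boots": ["s2"], "skis": ["s3"]},
    {"s1": 0.6, "s2": 0.6, "s3": 0.6})
print(probs)  # helmet, stick ~0.47; boots, skis ~0.03 (first line of Table 2)
```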
###### Example 1.2

We use Accu [1] as a representative of single-truth approaches, and compute the probabilities of the values provided for ice hockey equipments. Assuming all sources have the same accuracy 0.6, we obtain the probabilities in Table 2 (see the first line). We observe that the probabilities of the four values add up to 1, so even the true values (helmet and stick) have rather low probabilities.

Table 2: Value probabilities computed by different approaches for (ice hockey, equipments).

| helmet | stick | boots | skis
---|---|---|---|---
Single-truth [1] | 0.47 | 0.47 | 0.03 | 0.03
Multi-truth [7] | 0.63 | 0.63 | 0.54 | 0.54
Hybrid | 0.92 | 0.92 | 0.08 | 0.08

To address the above problem, multi-truth approaches [9, 7] have been proposed recently. They compute the probability of each value separately, and thus do not require the probabilities of all values to sum up to 1. Instead, they compute both the probability $p(v)$ of $v$ being true, and the probability $p(\neg v)$ of $v$ being false, where $p(v)+p(\neg v)=1$. Then a value $v$ is considered true if $p(v)>p(\neg v)$, that is, $p(v)>0.5$. An unknown semantics is used to capture the nature of multi-truth: if a source $s$ does not provide the value $v$, $s$ means that it does not know whether or not $v$ is correct (instead of saying $v$ is incorrect). Thus, apart from accuracy (also called precision in some methods), multi-truth approaches also measure the quality of a source $s$ by its recall, the probability that a truth is provided by $s$. Intuitively, values provided by high-precision sources are likely to be true (i.e., a higher $p(v)$), and values not provided by high-recall sources are likely to be false (i.e., a higher $p(\neg v)$). In this way, they derive $p(v)$ and $p(\neg v)$ from the precision and recall of the sources, and then normalize with the equation $p(v)+p(\neg v)=1$.

###### Example 1.3

Table 2 also shows the value probabilities computed by the multi-truth approach PrecRec [7]. Assuming a precision of 0.6 and a recall of 0.5 for each source, PrecRec will decide that all provided values are true, resulting in false positives (i.e., boots and skis).

In practice, even multi-truth items often have only a few true values rather than an unbounded number of truths. Existing methods [9, 7] cannot capture this because they decide the truthfulness of each value independently, without considering the other values provided for the entity, and thus lack a global view of the entity. As a result, they suffer from low precision when the sources have low coverage or noisy observations (as shown later in Section 5).

In this paper we introduce a new solution to the truth discovery problem, called Hybrid, which works for multi-truth applications. Based on the values provided for an entity, Hybrid makes two decisions: (i) how many truths there are, and (ii) what they are. Essentially it interleaves the two decisions and finds the truths one by one. Conditioning on a sequence of true values that have been selected previously, it computes (1) the probability of a value $v$ being the next truth, and (2) the probability that there is no more truth. In this way, Hybrid combines the flexibility of the multi-truth approaches of allowing multiple truths for an entity, and the inherent strength of the single-truth approaches of considering conflicts between values as important evidence for ruling out wrong values. Therefore, it obtains both high precision and high recall.
Note that the multi-truth setting should be considered more general than the single-truth setting, since it allows for the existence of multiple truths (but does not require it). Our proposed method also works for entities with a single truth because it can automatically decide the number of truths. Although one can easily extend a single-truth approach to handle multi-truth applications by setting a threshold (i.e., consider all values with predicted probabilities over $\lambda$ as true), it is hard to find a threshold that works for all entities. We discuss a slightly more sophisticated extension in Section 3.

## 2 Definitions and Notations

Data item, value, source, observation. We call an (entity, attribute) pair a data item, denoted by $d$. A set $\cal S$ of sources then provides values on $d$. Let $v$ be a value provided by a source $s\in{\cal S}$ for the data item $d$; the pair $(d,v)$ is then called an observation of $s$. For instance, there are two data items in Table 1: $d_{1}=$ (ice hockey, equipments) and $d_{2}=$ (snowboarding, equipments); there are 4 values provided for $d_{1}$ and 2 values provided for $d_{2}$. In total we have 6 observations made by 3 sources $\\{s_{1},s_{2},s_{3}\\}$. Given a data item $d$, we use $\Psi$ to denote the set of observations made on $d$ (we omit $d$ in the notation for simplicity); then $\Psi(s)$ denotes the values from $s$. For example in Table 1, for the item (ice hockey, equipments), $\Psi(s_{1})=\\{{\rm helmet,stick}\\}$. Table 3 summarizes the notations we use in this paper.

Table 3: Table of notations.

Notation | Description
---|---
$d$ | a data item
$v$ | a value
$s$ | a source that provides values
${\cal S}_{v}$ | the set of sources that provide the value $v$
$\Psi$ | the mapping between sources and their provided values
$\Psi(s)$ | the set of values provided by $s$ on a data item
$n$ | the number of wrong values in the domain
${\cal O}$ | a sequence of values that have been selected as truths
$\perp$ | “there is no more truth”

Problem definition. Given a data item $d$ and a set ${\cal S}$ of sources, let $\cal V$ denote the set of values provided by $\cal S$ on $d$. Our goal is to compute a probability $p(v|\Psi)$ for each value $v\in\cal V$ being true based on the source observations. In this paper we focus on the case where the sources are independent of each other; we can extend our model with techniques from [1, 7] to address correlations between sources.

## 3 A Hybrid Model

This section presents a truth discovery model, called Hybrid, which allows for the existence of multiple truths. Essentially, Hybrid makes two decisions for a data item $d$: (i) how many truths there are, and (ii) what they are. One can imagine a natural solution that proceeds in two steps: (1) decide the number of truths $k$ with a single-truth method, treating “the number of truths for $d$” as a data item and $|\Psi(s)|$ as the value provided by $s$; (2) apply the single-truth method on $\cal V$ and select the values with the top-$k$ probabilities as the truths. Although this approach often outperforms both the existing single-truth and multi-truth approaches (as we shall show later in Section 5), it has two problems. First, it does not update the value probabilities according to its belief of the number of truths (all probabilities still sum up to 1).
Second, separating the decisions into two steps may hurt precision when many sources provide more values than the truths: once the first step decides the number of truths $k$, the second step will fill in $k$ values, possibly with values lacking strong support from the sources.

Different from the above baseline approach, Hybrid combines the two steps and finds the truths one by one. Conditioning on a sequence ${\cal O}$ of true values that have been selected previously, it decides (1) the probability of a value $v$ being the next truth, denoted by $p(v|{\cal O},\Psi)$, and (2) the probability that there is no more truth, denoted by $p({\perp}|{\cal O},\Psi)$. These are disjoint decisions, so their probabilities sum up to 1. Thus, when selecting the next truth, Hybrid basically applies a single-truth method. However, when deciding whether there is any more truth (i.e., $p({\perp}|{\cal O},\Psi)$), Hybrid incorporates the unknown semantics used in multi-truth approaches: if a source provides 2 values for an item $d$, it claims that it knows 2 values of $d$, instead of claiming that $d$ has only 2 values. In this way, Hybrid combines the flexibility of the multi-truth methods of allowing multiple truths for a data item, and the inherent strength of the single-truth methods of considering conflicts between values as important evidence for ruling out wrong values. Therefore, it obtains both high precision and high recall. Moreover, Hybrid leverages the typical number of truths for each type of data item; for example, a person typically has 2 parents and 1-5 children. Hybrid allows incorporating such knowledge as the a priori probability of $p({\perp}|{\cal O},\Psi)$, which further improves performance. Bear in mind that a priori probabilities have much less effect than observations on computing a posteriori probabilities, so Hybrid applies a soft constraint rather than a hard one.

We next describe the Hybrid model in more detail, and answer the following question: since there should not exist any ordering between the truths, how does Hybrid avoid the consequences of finding the truths one by one?

### 3.1 Overall Probability of a Value

Consider computing $p(v|\Psi)$ for a value $v\in{\cal V}$. As we select truths one by one, there can be various sequences of truths (of any length below $|{\cal V}|$) that are selected before $v$. We call each sequence ${\cal O}$ a possible world and denote by $\Omega$ all possible worlds. Then the probability of $v$ is the weighted sum of its probability in each possible world:

$p(v|\Psi)=\sum_{{\cal O}\in\Omega}p(v|{\cal O},\Psi)\cdot p({\cal O}|\Psi).$ (1)

where $p({\cal O}|\Psi)$ is the probability of entering the possible world ${\cal O}$. Let ${\cal O}=v_{1}v_{2}\dots v_{|{\cal O}|}$, $v\notin{\cal O}$, denote a possible world with the sequence $v_{1},v_{2},\dots,v_{|{\cal O}|}$ of values selected as truths. Let ${\cal O}_{j}$ denote a prefix of ${\cal O}$ with length $j$ and ${\cal O}_{0}=\varnothing$. Applying the chain rule leads us to:

$p({\cal O}|\Psi)=\prod_{j=1}^{|{\cal O}|}p(v_{j}|{\cal O}_{j-1},\Psi).$ (2)

Now the only piece missing from Eqs. (1)-(2) is the conditional probability $p(v|{\cal O},\Psi)$, which we describe in the next subsection. Back to the question we asked previously: even though Hybrid finds truths one by one, it is order-independent, as it considers all possible ways to select a value and computes an overall probability.
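As a brute-force illustration (our own sketch, not part of the original presentation), Eqs. (1)-(2) can be evaluated by enumerating every possible world explicitly. Here `cond_prob(v, world)` is a caller-supplied stand-in for the conditional probability $p(v|{\cal O},\Psi)$ of Eq. (3), derived in the next subsection.

```python
from itertools import permutations

def overall_probability(v, values, cond_prob):
    """Eq. (1): p(v|Psi) as a weighted sum over all possible worlds, i.e.,
    ordered sequences of previously selected truths that do not contain v."""
    others = [u for u in values if u != v]
    total = 0.0
    for size in range(len(others) + 1):          # world sizes 0 .. |V|-1
        for world in permutations(others, size):
            p_world = 1.0
            for j in range(size):                # Eq. (2): chain rule over prefixes
                p_world *= cond_prob(world[j], world[:j])
            total += cond_prob(v, world) * p_world
    return total
```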
Clearly, enumerating all possible worlds is prohibitively expensive; we describe a polynomial-time approximation in Section 4.

### 3.2 Conditional Probability of a Value

Now consider computing $p(v|{\cal O},\Psi)$. Under a possible world ${\cal O}$, we either choose one of the remaining values as the next truth or decide that there is no more truth, thus $\sum_{v^{\prime}\in{\cal V}\setminus{\cal O}}p(v^{\prime}|{\cal O},\Psi)+p({\perp}|{\cal O},\Psi)=1$; this is similar to what we have in single-truth approaches. Then according to the Bayes Rule, we have

$p(v|{\cal O},\Psi){=}{p(\Psi|{\cal O},v)p(v|{\cal O})\over\sum\limits_{v^{\prime}\in{\cal V}\setminus{\cal O}}p(\Psi|{\cal O},v^{\prime})p(v^{\prime}|{\cal O})+p(\Psi|{\cal O},\perp)p({\perp}|{\cal O})}.$ (3)

Here the inverse probability $p(\Psi|{\cal O},v)$ is the probability of observing $\Psi$ if $v$ is the next truth. The a priori probability $p(v|{\cal O})$ is the probability of $v$ being the next truth regardless of the observations $\Psi$. The same applies to $p(\Psi|{\cal O},\perp)$ and $p({\perp}|{\cal O})$. Before we can compute these two sets of probabilities, we first define the metrics that are used to measure the quality of a source.

#### 3.2.1 Source-quality metrics

Imagine that there are $m$ latent slots for the truths of a data item, and a source $s$ is asked to fill the slots. The number of slots is unknown to $s$, so it iteratively performs two tasks: predict whether there exists another slot (i.e., another truth), and if so, fill the slot with a value. We thus capture the quality of a source with two sets of metrics: one for deciding whether there exists a truth, and one for deciding the true values. The first set of metrics enables the unknown semantics for multi-truth, and it includes two measures:

* • Precision $P(s)$, the probability that when $s$ provides a value, there indeed exists a truth;
* • Recall $R(s)$, the probability that when there exists a truth, $s$ provides a value.

Note that our $P(s)$ and $R(s)$ are different from the same notions in [7]: we only measure how well $s$ predicts whether or not there exists a truth, but not how well $s$ predicts what the truth is; in other words, we do not require the value provided by $s$ to be the same as the truth. To facilitate later computations, we next derive the false positive rate of $s$, denoted by $Q(s)$, from $P(s)$ and $R(s)$ by applying the Bayes Rule (see [7] for details):

$Q(s)={\alpha\over 1-\alpha}\cdot{1-P(s)\over P(s)}\cdot R(s),$ (4)

where $\alpha$ is the a priori probability that a provided value corresponds to a truth slot. (Previous work [7] has shown that a priori probabilities play a minor role in the final results compared with the source observations.) Intuitively, $Q(s)$ is the probability that $s$ still provides a value when there is no truth slot. The second set of metrics follows single-truth models to address the conflicts between values. It contains one measure: accuracy $A(s)$, the probability that a value provided by $s$ for a “real” truth slot is true (i.e., $s$ provides a true value after it has correctly predicted the existence of a truth slot). Note that values provided for non-existing truth slots, which are necessarily false, are not counted here, as they have been captured by $P(s)$. We describe how we compute these metrics in Section 3.3, and demonstrate their basic idea in the example below.

###### Example 3.1

Consider the source $s_{2}$ and the data item $d_{1}=$ (ice hockey, equipments) in Table 1.
Suppose ice hockey requires 3 equipments. We observe that $s_{2}$ provides 2 values on $d_{1}$, meaning that it predicts that there are 2 slots for truths; among the provided values, one is true. Therefore, for this particular data item, $s_{2}$ has precision $2/2=1$, recall $2/3=0.67$, and accuracy $1/2=0.5$. Now consider data item $d_{2}=$ (snowboarding, equipments), which has 1 truth. As $s_{2}$ provides 1 correct value, its precision, recall, and accuracy for this item are all 1. If $s_{2}$ provides values for only these 2 data items, on average we have $P(s_{2})=\frac{1+1}{2}=1,R(s_{2})=\frac{0.67+1}{2}=0.83$, and $A(s_{2})=\frac{0.5+1}{2}=0.75$.

#### 3.2.2 Inverse probabilities

We are now ready to derive the inverse probabilities $p(\Psi|{\cal O},v)$ and $p(\Psi|{\cal O},\perp)$ in Eq. (3). Assuming that the sources are independent, we have

$p(\Psi|{\cal O},v)=\prod_{s\in{{\cal S}}}p(\Psi(s)|{\cal O},v),$ (5)

and similarly for $p(\Psi|{\cal O},\perp)$. In the following computations, when conditioning on $({\cal O},v)$, we consider ${\cal O}\cup\\{v\\}$ to be the only set of truths; similarly, when conditioning on $({\cal O},\perp)$, we consider ${\cal O}$ to be the only set of truths. This is known as the closed-world assumption, and according to [6], it should give the same results as the open-world assumption, where the truths form a superset of ${\cal O}\cup\\{v\\}$. Let $\bar{T}$ be the truths of the item $d$, that is, $\bar{T}={\cal O}$ (when computing $p(\Psi|{\cal O},\perp)$) or $\bar{T}={\cal O}\cup\\{v\\}$ (when computing $p(\Psi|{\cal O},v)$). Accordingly, we can partition $\Psi(s)$, the values provided by $s$ on $d$, into four categories: consistent values, inconsistent values, extra values and missing values. We denote the size of each category by $N_{c},N_{w},N_{e},N_{m}$, respectively, and the probability that a value falls into each category by $P_{c},P_{w},P_{e},P_{m}$. Then $p(\Psi(s)|{\cal O},v)$ is given by:

$p(\Psi(s)|{\cal O},v)=P_{c}^{N_{c}}\cdot P_{w}^{N_{w}}\cdot P_{e}^{N_{e}}\cdot P_{m}^{N_{m}}.$ (6)

When deriving $p(\Psi(s)|{\cal O},{\perp})$, the only difference is that we reward a source $s$ if it does not provide any extra value; otherwise, we reuse Eq. (6). The probability of not providing extra values is $P_{\neg e}=1-Q(s)$, and recall that ${\cal O}$ is the (estimated) set of truths for the data item. Thus we have:

$p(\Psi(s)|{\cal O},{\perp})=\begin{cases}P_{c}^{N_{c}}\cdot P_{w}^{N_{w}}\cdot P_{e}^{N_{e}}\cdot P_{m}^{N_{m}}&|\Psi(s)|>|{\cal O}|;\\\ P_{c}^{N_{c}}\cdot P_{w}^{N_{w}}\cdot P_{e}^{N_{e}}\cdot P_{m}^{N_{m}}\cdot P_{\neg e}&|\Psi(s)|\leq|{\cal O}|.\end{cases}$ (7)

We next define each category and describe how we compute their sizes and probabilities. Following [1], we assume that there are $n$ false values in the domain of $d$ and that they are uniformly distributed (note that the false values may not all appear in $\cal V$).

* • Consistent value: A consistent value is a value in $\bar{T}\cap\Psi(s)$; thus, $N_{c}=|\bar{T}\cap\Psi(s)|$. To provide a consistent value, $s$ needs to correctly predict that there exists a slot for a truth, and fill the slot with a true value, so $P_{c}=R(s)\cdot A(s)$.
* • Inconsistent value: An inconsistent value is a value that is provided for a truth slot, but differs from any true value. At most $|\bar{T}|$ values are provided for truth slots; except for the consistent values, the others are inconsistent. Thus $N_{w}=\min(|\bar{T}|,|\Psi(s)|)-N_{c}$.
When $s$ provides an inconsistent value, it correctly predicts the existence of a truth slot, but fills in a particular false value, so $P_{w}=R(s){\cdot}{1-A(s)\over n}$.

* • Extra value: If $s$ provides more than $|\bar{T}|$ values, the rest of the values are extra values; thus, $N_{e}=\max(|\Psi(s)|-|\bar{T}|,0)$. When $s$ provides an extra value, it incorrectly predicts a non-existing slot, and fills in a particular (false) value, so $P_{e}={Q(s)\over n}$.
* • Missing value: Alternatively, when $\Psi(s)$ contains fewer values than $\bar{T}$, $s$ misses some truth slots (i.e., $s$ thinks they do not exist). We have $N_{m}=\max(|\bar{T}|-|\Psi(s)|,0)$ and $P_{m}=1-R(s)$.

###### Example 3.2

Consider the data item $d_{1}$; we now compute $p(\Psi(s_{2})|o_{1},o_{2})$, the probability of observing the values in $\Psi(s_{2})$ if $o_{2}$ is the next truth after $o_{1}$ has been selected. We have ${\cal O}=o_{1}$, $\Psi(s_{2})=\\{o_{2},o_{3}\\}$ and $\bar{T}=\\{o_{1},o_{2}\\}$. So $\Psi(s_{2})$ contains one consistent value and one inconsistent value; there is no extra value or missing value. In other words, we have $N_{c}=N_{w}=1$ and $N_{e}=N_{m}=0$. Supposing $n=10$ and $A(s_{2})=0.6,R(s_{2})=0.9,Q(s_{2})=0.1$, we have $P_{c}=0.9\cdot 0.6=0.54$, $P_{w}=0.9\cdot\frac{1-0.6}{10}=0.036$, $P_{e}=\frac{0.1}{10}=0.01$, $P_{m}=0.1$. Then according to Eq. (6) we compute: $p(\Psi(s_{2})|o_{1},o_{2})=0.54^{1}\cdot 0.036^{1}\cdot 0.01^{0}\cdot 0.1^{0}=0.019$. We repeat the above process for the other sources and obtain: $p(\Psi(s_{1})|o_{1},o_{2})=0.292$, $p(\Psi(s_{3})|o_{1},o_{2})=0.019$. With the source-independence assumption, we have: $p(\Psi|o_{1},o_{2})=0.019\cdot 0.292\cdot 0.019=1.05\times 10^{-4}$.
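Example 3.2's arithmetic can be verified with a few lines of code. The sketch below (our own encoding) implements Eq. (6) directly from the category counts and probabilities defined above.

```python
def p_obs_given_truths(provided, truths, A, R, Q, n=10):
    """Eq. (6): likelihood of a source's provided values given an assumed
    truth set, factored over the four value categories."""
    nc = len(provided & truths)                   # consistent values
    nw = min(len(truths), len(provided)) - nc     # inconsistent values
    ne = max(len(provided) - len(truths), 0)      # extra values
    nm = max(len(truths) - len(provided), 0)      # missing values
    pc, pw, pe, pm = R * A, R * (1 - A) / n, Q / n, 1 - R
    return pc**nc * pw**nw * pe**ne * pm**nm

# Example 3.2: s2 provides {o2, o3}; the assumed truths are {o1, o2}.
print(p_obs_given_truths({"o2", "o3"}, {"o1", "o2"}, A=0.6, R=0.9, Q=0.1))
# ~0.019 = 0.54 * 0.036
```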
#### 3.2.3 A priori probabilities

We then compute the probabilities $p({\perp}|{\cal O})$ and $p(v|{\cal O})$ in Eq. (3). Intuitively, the chance of $\perp$ increases when more truths are found. Let $\beta_{i}$ be the a priori probability of $\perp$ when we are looking for the $i$-th truth (i.e., $|{\cal O}|=i-1$). There are then $|{\cal V}|-i+1$ unselected values in $\cal V$; assuming they have the same a priori probability, the a priori probability $p(v|{\cal O})$ of each value $v$ would be:

$p(v|{\cal O})=\frac{1-\beta_{i}}{|{\cal V}|-i+1}.$ (8)

We can derive $\beta_{i}$ from the distribution of the number of truths for a data item. For example, among people who have children, if $30\%$ of them have 1 child, $40\%$ have 2 children, and so on, then $\beta_{2}=0.3$ (with probability $30\%$ there is not a second truth), and $\beta_{3}=0.7$ (with probability $30\%$+$40\%$ there is not a third truth).

#### 3.2.4 Summary

By putting the derived inverse probabilities and a priori probabilities into Eq. (3), we are able to obtain $p(v|{\cal O},\Psi)$, and this completes the computation of Eq. (1). As the following proposition shows, Hybrid computes higher probabilities for values provided by more accurate sources; it finds more truths when high-precision sources provide more values; and it finds fewer truths when high-recall sources provide fewer values. These all conform to our intuition.

###### Proposition 3.3

Consider a value $v$ and a source $s\in{{\cal S}}$ where $v\in\Psi(s)$; we fix all sources in ${{\cal S}}$ except $s$.

* • If $A(s)>\frac{1}{n+1}$, $p(v|\Psi)$ increases when $A(s)$ increases.
* • If $Q(s)<\frac{R(s)-R(s)A(s)}{1-R(s)A(s)}$, $p({\perp}|\Psi)$ decreases when $s$ provides more values.
* • If $R(s)>\frac{Q(s)}{1-A(s)+A(s)Q(s)}$, $p({\perp}|\Psi)$ increases when $s$ provides fewer values. $\Box$

###### Example 3.4

Continuing with Example 3.2, we proceed to compute $p(o_{2}|o_{1},\Psi)$ using Eq. (3). This requires the inverse probability $p(\Psi|o_{1},v)$ and the a priori probability $p(v|o_{1})$ for every remaining value in ${\cal V}\setminus{\cal O}=\\{o_{2},o_{3},o_{4}\\}$ as well as $\perp$. We have obtained the inverse probability $p(\Psi|o_{1},o_{2})$ in Example 3.2; we now repeat the process on $o_{3}$, $o_{4}$ and $\perp$ to compute: $p(\Psi|o_{1},o_{3})=p(\Psi|o_{1},o_{4})=6.8\times 10^{-6}$; $p(\Psi|o_{1},\perp)=1.05\times 10^{-8}$. Then, assuming $\beta_{2}=p({\perp}|o_{1})=0.3$, from Eq. (8) we have $p(o_{2}|o_{1})=p(o_{3}|o_{1})=p(o_{4}|o_{1})=\frac{1-\beta_{2}}{|{\cal V}|-2+1}\approx 0.23$. We can thus obtain $p(o_{2}|o_{1},\Psi)$ via Eq. (3): $p(o_{2}|o_{1},\Psi)=\frac{p(\Psi|o_{1},o_{2})p(o_{2}|o_{1})}{\sum_{v\in\\{o_{2},o_{3},o_{4}\\}}p(\Psi|o_{1},v)p(v|o_{1})+p(\Psi|o_{1},\perp)p({\perp}|o_{1})}=0.88$. Table 2 shows the probabilities obtained by enumerating all possible worlds ${\cal O}$ for each value $v$. We can see that Hybrid gives very high probabilities (0.92) to the two true values (helmet and stick) and meanwhile very low probabilities to the false ones.

### 3.3 Evaluating Source Quality

The previous subsection explains how to compute value probabilities based on the quality of sources. We do not always have such prior knowledge of source qualities; in this case we start by assuming each source has the same quality, and then iteratively compute value probabilities and source qualities until convergence. This subsection describes how to update source quality based on the estimated truths of a set of data items. For each source $s$, we compute $P(s),R(s)$ and $A(s)$ as defined, except that we adopt the probabilistic decisions made on the truthfulness of values. We emphasize again that the computation of precision and recall does not consider the truthfulness of the values, but only the cardinality of the provided values (i.e., how many truth slots $s$ thinks there are).

* • The precision of $s$ is the average of its precision on each data item $s$ provides values for. Let $\Psi_{d}(s)$ be the set of values provided by $s$ on $d$ and ${\cal V}_{d}$ be the domain of $d$. Then $\sum_{v{\in}{\cal V}_{d}}p(v)$ is the (probabilistic) number of truths for $d$, and we have

$P(s)=\operatorname*{Avg}_{d}~{}\min(\frac{\sum_{v{\in}{\cal V}_{d}}p(v|\Psi)}{|\Psi_{d}(s)|},1).$ (9)

* • Similarly, the recall of $s$ is the average of its recall on each data item.

$R(s)=\operatorname*{Avg}_{d}~{}\min(\frac{|\Psi_{d}(s)|}{\sum_{v{\in}{\cal V}_{d}}p(v|\Psi)},1).$ (10)

* • The accuracy of $s$ can be estimated as the average probability of its values, divided by its precision, so that it accounts only for values provided for “real” truth slots.

$A(s)={\operatorname*{Avg}_{d,v\in\Psi_{d}(s)}p(v|\Psi)\over P(s)}.$ (11)
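A single iteration of this quality re-estimation can be sketched as follows. This is our own encoding of Eqs. (9)-(11): `claims[d][s]` (the values each source provides on item `d`) and `p[d][v]` (the current value probabilities) are hypothetical input structures, and we assume every item has at least one value with nonzero probability.

```python
def update_source_quality(claims, p):
    """One iteration of Section 3.3: re-estimate P(s), R(s), A(s) from the
    current probabilistic truths p[d][v] = p(v|Psi)."""
    per_item = {}   # s -> list of (per-item precision, per-item recall)
    val_probs = {}  # s -> probabilities of every value s provides, over all items
    for d, by_source in claims.items():
        n_truths = sum(p[d].values())  # probabilistic number of truths for d
        for s, values in by_source.items():
            prec_d = min(n_truths / len(values), 1.0)   # per-item term of Eq. (9)
            rec_d = min(len(values) / n_truths, 1.0)    # per-item term of Eq. (10)
            per_item.setdefault(s, []).append((prec_d, rec_d))
            val_probs.setdefault(s, []).extend(p[d][v] for v in values)
    quality = {}
    for s, rows in per_item.items():
        P = sum(r[0] for r in rows) / len(rows)                # Eq. (9)
        R = sum(r[1] for r in rows) / len(rows)                # Eq. (10)
        A = sum(val_probs[s]) / len(val_probs[s]) / P          # Eq. (11)
        quality[s] = {"P": P, "R": R, "A": A}
    return quality
```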
## 4 Approximation for HYBRID

Computing value probabilities by enumerating all possible worlds takes exponential time. We conjecture that the value probability computation in Hybrid is #P-complete; the proof of the conjecture remains an open problem. This section describes an approximation for probabilities under Hybrid. We start by simplifying the computation of $p(v|{\cal O},\Psi)$ in Eq. (3), and then present our approximation algorithm.

### 4.1 Simplification of $p(v|{\cal O},\Psi)$

We can reduce the computations in Section 3.2 to a much simpler form. We start from Eq. (6) and Eq. (7). Given a particular source $s$, suppose $N_{c}=c$, $N_{w}=w$, $N_{e}=e$ and $N_{m}=m$ when computing $p(\Psi(s)|{\cal O},{\perp})$ with Eq. (7). Then we have four cases when deciding the $N_{c}$, $N_{w}$, $N_{e}$ and $N_{m}$ for $p(\Psi(s)|{\cal O},v)$ in Eq. (6), depending on whether $|\Psi(s)|>|{\cal O}|$ and whether $v\in\Psi(s)$; we illustrate them in Table 4.

Table 4: Numbers used in Eq. (6) when $N_{c}=c,N_{w}=w,N_{e}=e$ and $N_{m}=m$ in Eq. (7).

The case that $s$ belongs to | $N_{c}$ | $N_{w}$ | $N_{e}$ | $N_{m}$
---|---|---|---|---
case 1: $|\Psi(s)|>|{\cal O}|$ and $v\in\Psi(s)$ | $c+1$ | $w$ | $e-1$ | $m$
case 2: $|\Psi(s)|>|{\cal O}|$ and $v\notin\Psi(s)$ | $c$ | $w+1$ | $e-1$ | $m$
case 3: $|\Psi(s)|\leq|{\cal O}|$ and $v\in\Psi(s)$ | $c+1$ | $w-1$ | $e$ | $m+1$
case 4: $|\Psi(s)|\leq|{\cal O}|$ and $v\notin\Psi(s)$ | $c$ | $w$ | $e$ | $m+1$

Let $\bar{\cal S}_{1}$, $\bar{\cal S}_{2}$, $\bar{\cal S}_{3}$, $\bar{\cal S}_{4}$ denote the sets of sources in ${\cal S}$ that fall into each of the above cases, respectively. We can then write $p(\Psi|{\cal O},v)$ and $p(\Psi|{\cal O},{\perp})$ in the following form:

$\displaystyle p(\Psi|{\cal O},v)$ $\displaystyle=\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{3}}(P_{c})^{c+1}\cdot\prod_{\bar{\cal S}_{2}\cup\bar{\cal S}_{4}}(P_{c})^{c}\cdot$ (12) $\displaystyle~{}~{}~{}~{}~{}~{}~{}\prod_{\bar{\cal S}_{1}}(P_{w})^{w}\cdot\prod_{\bar{\cal S}_{2}}(P_{w})^{w+1}\cdot\prod_{\bar{\cal S}_{3}}(P_{w})^{w-1}\prod_{\bar{\cal S}_{4}}(P_{w})^{w}\cdot$ $\displaystyle~{}~{}~{}~{}\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{2}}(P_{e})^{e-1}(P_{m})^{m}\cdot\prod_{\bar{\cal S}_{3}\cup\bar{\cal S}_{4}}(P_{e})^{e}(P_{m})^{m+1};$

$\displaystyle p(\Psi|{\cal O},{\perp})$ $\displaystyle=\prod_{\bar{\cal S}}(P_{c})^{c}(P_{w})^{w}(P_{e})^{e}(P_{m})^{m}\cdot\prod_{\bar{\cal S}_{3}\cup\bar{\cal S}_{4}}P_{\neg e}.$ (13)

Next let

$\displaystyle C$ $\displaystyle=\prod\limits_{\bar{\cal S}}(P_{c})^{c}\cdot\prod\limits_{\bar{\cal S}_{1}\cup\bar{\cal S}_{2}}(P_{w})^{w+1}(P_{e})^{e-1}(P_{m})^{m}\cdot$ $\displaystyle~{}~{}\prod\limits_{\bar{\cal S}_{3}\cup\bar{\cal S}_{4}}(P_{w})^{w}(P_{e})^{e}(P_{m})^{m+1};$

we can simplify Eq. (12) and Eq. (13) as follows:

$\displaystyle p(\Psi|{\cal O},v)=C\cdot\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{3}}\frac{P_{c}}{P_{w}};$ (14)

$\displaystyle p(\Psi|{\cal O},{\perp})=C\cdot\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{2}}\frac{P_{e}}{P_{w}}\cdot\prod_{\bar{\cal S}_{3}\cup\bar{\cal S}_{4}}\frac{P_{\neg e}}{P_{m}}.$ (15)

Next, we define the vote count of a value $v$ based on the accuracy of its providers:

$\displaystyle L(v)$ $\displaystyle=\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{3}}\frac{P_{c}}{P_{w}}=\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{3}}\frac{R(s)A(s)}{R(s)\frac{1-A(s)}{n}}$ $\displaystyle=\prod_{s\in{{\cal S}}_{v}}\frac{nA(s)}{1-A(s)};$ (16)

The vote count of $\perp$ at the $i$-th step (i.e., when $i-1$ truths have been selected) combines the a priori probability and the votes from all sources:

$\displaystyle L_{|{\cal O}|+1}(\perp)$ $\displaystyle=\frac{p({\perp}|{\cal O})}{p(v|{\cal O})}\cdot\prod_{\bar{\cal S}_{1}\cup\bar{\cal S}_{2}}\frac{P_{e}}{P_{w}}\cdot\prod_{\bar{\cal S}_{3}\cup\bar{\cal S}_{4}}\frac{P_{\neg e}}{P_{m}}$ $\displaystyle={\beta_{i}(|{\cal V}|{-}i{+}1)\over 1-\beta_{i}}\cdot\prod_{|\Psi(s)|>|{\cal O}|}{Q(s)\over R(s)(1-A(s))}\cdot$ (17) $\displaystyle~{}~{}~{}~{}\prod_{|\Psi(s)|\leq|{\cal O}|}{1-Q(s)\over 1-R(s)}.$

We can then transform Eq. (3) into the following format:
$p(v|{\cal O},\Psi)={L(v)\over\sum_{v^{\prime}\in{\cal V}\setminus{\cal O}}L(v^{\prime})+L_{|{\cal O}|+1}(\perp)}.$ (18)

### 4.2 Approximation

Our approximation leverages three observations. First, equivalently to computing $p(v|\Psi)$ by conditioning on all possible worlds, we can compute $p(v|\Psi)=\sum_{i}p_{i}(v|\Psi)$, where $p_{i}(v|\Psi)$ denotes the probability of $v$ being the $i$-th truth, computed by considering possible worlds ${\cal O}$ with $i-1$ values. Second, although there are multiple possible worlds of size $i-1$, the nature of Bayesian analysis determines that one of them will have a much higher probability than the others, so we can use it for approximation. Third, once the probability of $\perp$ is above that of the $i$-th truth, it quickly increases to $1$ in the following steps. Therefore, if we terminate at the $i$-th step, we do not lose much. Recall that the confidence of a value $v$ does not change with $i$; only that of $\perp$ changes, thus we can easily decide the number of steps we need before termination. Algorithm 1 gives the details of the approximation.

* • Without loss of generality, let $L(v_{1}),L(v_{2}),\dots$ be a sorted list in decreasing order, that is, $L(v_{i-1})\geq L(v_{i})$ for $\forall i>1$ (Lines 1-4). Let $k$ be an integer where $L(v_{k-1})\geq L_{k-1}(\perp)$ and $L(v_{k})<L_{k}(\perp)$; we thus terminate after $k$ steps, and the first $k-1$ values are considered as truths (Line 10).
* • We first initialize $p(v|\Psi)$ to $0$ for each $v\in{\cal V}$. Then at each step $i$, we update $p(v|\Psi)$ by adding the probability $p_{i}(v|\Psi)$ of $v$ being the $i$-th truth (Line 9). To compute $p_{i}(v|\Psi)$, we consider possible worlds where $v$ is not present yet (their probabilities sum up to $1-p(v|\Psi)$). Assuming $v$ has the same conditional probability in all these possible worlds, denoted by $p(v|\Gamma_{i},\Psi)$, we have:

$p_{i}(v|\Psi)=(1-p(v|\Psi))\cdot p(v|\Gamma_{i},\Psi).$ (19)

* • We obtain $p(v|\Gamma_{i},\Psi)$ from the possible world with the largest probability, which must have selected the $i{-}1$ values with the highest confidences as truths; that is, $\Gamma_{i}=v_{1}v_{2}\dots v_{i-1}$. We thus compute $p(v|\Gamma_{i},\Psi)$ by normalizing the subsequence of confidences starting with $L(v_{i})$ (Line 8). Note that for any possible world ${\cal O}$ with length $i-1$, we have $p(v|{\cal O},\Psi)\leq p(v|\Gamma_{i},\Psi)$.

Algorithm 1: Approximation for Hybrid

input: observations $\Psi$ containing a set $\cal V$ of values provided by a set ${\cal S}$ of sources on data item $d$; prior probabilities $\beta$
output: probability $p(v|\Psi)$ for each $v\in{\cal V}$

1: foreach $v\in{\cal V}$ do
2:   $p(v|\Psi)\leftarrow 0$;
3:   compute $L(v)$ using Eq. (16);
4: let $L(v_{1}),L(v_{2}),\dots$ be the confidences sorted in decreasing order;
5: foreach $i\in[1,|{\cal V}|]$ do
6:   compute $L_{i}(\perp)$ using Eq. (17);
7:   foreach $v\in{\cal V}$ do
8:     $p(v|\Gamma_{i},\Psi)\leftarrow\min(\frac{L(v)}{\sum_{j=i}^{|\cal V|}L(v_{j})+L_{i}(\perp)},1)$;
9:     $p(v|\Psi)\leftarrow p(v|\Psi)+(1-p(v|\Psi))\cdot p(v|\Gamma_{i},\Psi)$;
10:  if $L_{i}(\perp)>L(v_{i})$ then break;
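For readers who prefer running code, a minimal Python transcription of Algorithm 1 follows (our own sketch; the confidences $L(v)$ and the per-step $L_{i}(\perp)$ are passed in directly, as in Example 4.1 below, rather than computed from Eqs. (16)-(17)).

```python
def approx_hybrid(L, L_perp):
    """Algorithm 1: approximate p(v|Psi) for every value, terminating early
    once the confidence of 'no more truth' exceeds that of the next value."""
    order = sorted(L, key=L.get, reverse=True)   # Line 4: sort by confidence
    p = {v: 0.0 for v in L}                      # Lines 1-2: initialize
    for i, vi in enumerate(order, start=1):      # Line 5
        denom = sum(L[v] for v in order[i - 1:]) + L_perp[i]
        for v in L:
            p_cond = min(L[v] / denom, 1.0)      # Line 8: p(v | Gamma_i, Psi)
            p[v] += (1.0 - p[v]) * p_cond        # Line 9: add p_i(v | Psi)
        if L_perp[i] > L[vi]:                    # Line 10: early termination
            break
    return p

# Example 4.1: confidences 225, 225, 15, 15 and the per-step bottom confidences.
L = {"o1": 225.0, "o2": 225.0, "o3": 15.0, "o4": 15.0}
L_perp = {1: 0.1, 2: 0.24, 3: 18033.0, 4: 18033.0}
print(approx_hybrid(L, L_perp)["o2"])  # ~0.94, matching Example 4.1 up to rounding
```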
###### Example 4.1

Consider again computing $p(o_{2}|\Psi)$. The sorted list of the value confidences is $\\{225,225,15,15\\}$, given by $o_{1}$, $o_{2}$, $o_{3}$ and $o_{4}$; the confidences of $\perp$ in the different steps are $\\{0.1,0.24,18033,18033\\}$. We thus terminate after the third step (when $i=3$). When $i{=}1$, we compute $p_{1}(o_{2}|\Psi)=0.47$. When $i{=}2$, we compute $p(o_{2}|\Gamma_{2},\Psi)=p(o_{2}|o_{1},\Psi)=0.88$, thus $p_{2}(o_{2}|\Psi)=(1-0.47)\times 0.88=0.47$. When $i{=}3$, we compute $p(o_{2}|\Gamma_{3},\Psi)=\frac{225}{15+15+18033}=0.01$, thus $p_{3}(o_{2}|\Psi)=(1-0.47-0.47){\times}0.01=0.0006$. The final probability for $o_{2}$ is $p(o_{2}|\Psi)=p_{1}(o_{2}|\Psi)+p_{2}(o_{2}|\Psi)+p_{3}(o_{2}|\Psi)=0.9406$, very close to the probability $0.92$ obtained by the full Hybrid model.

The next theorem shows that Algorithm 1 approximates the value probabilities both efficiently and effectively.

###### Theorem 4.2

Let $d$ be a data item and $n$ be the number of values provided for $d$.

* • Algorithm 1 estimates the probability of each provided value in time $O(n^{2})$.
* • For each value $v$ on $d$, we have $|p(v)-\hat{p}(v)|<\frac{1}{6}$, where $\hat{p}(v)$ is the exact probability computed by Hybrid, and $p(v)$ is the probability obtained by Algorithm 1. $\Box$

###### Proof 4.1.

See Appendix A.

## 5 Experimental Study

We now present experimental results to evaluate the proposed approach. Section 5.1 describes the experimental settings. Then Section 5.2 gives a comprehensive comparison of various fusion models on a widely used real dataset as well as on synthetic data, showing that Hybrid outperforms the others in general and is the most robust.

### 5.1 Experimental Settings

Methods to compare. We compared the following fusion algorithms.

* • Accu [1], the single-truth model reviewed in Section 1. For each data item, it considers the value with the highest predicted probability as the truth.
* • PrecRec [7], the multi-truth model reviewed in Section 1. It considers a value correct if its predicted probability is above 0.5.
* • LTM [9], a multi-truth model using a directed graphical model. It also considers a value correct if its predicted probability is above 0.5.
* • TwoStep, the baseline method described in Section 3. It first decides the number of truths $k$, and then returns the $k$ values with the top probabilities according to Accu.
* • Hybrid, Algorithm 1 described in Section 4. It considers the values obtained before the termination step as the truths.

Implementations. Whenever applicable, we set $n=10,\alpha=0.25$, and consider only “good” sources (e.g., sources on which the conditions in Proposition 3.3 hold). We initialize the source quality metrics as $A=0.8,R=0.8,Q=0.2$, and then iteratively compute value probabilities and source qualities for up to 5 iterations. We implemented all methods in Java on a MapReduce-based framework.

Metrics. We report precision and recall for each method. Precision measures, among all observations predicted as correct, the percentage that are true. Recall measures, among all true observations, the percentage that are predicted as correct. F-measure is computed as ${2\cdot prec\cdot rec\over prec+rec}$. (Note that these are different from the precision and recall of individual sources as defined in Section 3.2.)

### 5.2 Experiment Results

#### 5.2.1 Results on Book data

We first use the Book data from [8], which has been widely used for knowledge-fusion experiments. As shown in Table 5, it contains 6,139 book-author triples on 1,263 books from 876 retailers. The gold standard consists of the authors of 100 randomly sampled books, where the authors were manually identified from book covers. According to the gold standard, 62% of the provided authors are correct, and 98% of the true values are provided by some source. 57% of the books have multiple authors.
Table 5: Statistics of the Book data.

#entities | #triples | #sources | precision | recall | %multi-truth
---|---|---|---|---|---
1,263 | 6,139 | 876 | 0.62 | 0.98 | 57%

In addition to the five fusion methods listed before, we also compared with Accu_list, which applies Accu but considers the full list of authors as a whole [1, 8].

Table 6: Results on Book data. Hybrid obtains the highest recall and F-measure.

| Precision | Recall | F1
---|---|---|---
Accu | 0.990 | 0.532 | 0.692
Accu_list | 0.974 | 0.801 | 0.879
LTM | 0.911 | 0.973 | 0.941
PrecRec | 0.965 | 0.931 | 0.947
Hybrid | 0.941 | 0.973 | 0.957

Table 6 shows the results, and we can see that Hybrid obtains a higher F-measure than the existing single-truth and multi-truth models. By considering both the conflicts between values and the possibility of having multiple truths, it is able to identify more true values without sacrificing much precision. Not surprisingly, Accu has the highest precision but the lowest recall, as it only finds one author per book. LTM has a lower precision as it lacks a global view of the values provided for the same data item. Instead, PrecRec has a lower recall but a higher precision: in this dataset many sources only provide the first author of a book, which explains the low recall; the high precision is because the sources provide few wrong values. TwoStep separates the decisions of how many truths there are and what they are, so it may return authors that do not have strong support, leading to a low precision.

#### 5.2.2 Results on Synthetic Data

To better understand the performance of the different approaches in various situations, we compare them on synthetic data where we vary the number of truths and the quality of sources. We generated 10 data sources providing values on 100 data items, where wrong values were randomly selected from a domain of 100 values. We varied the following parameters when generating the data; a sketch of the generation process follows the list.

* • Number of truths for each data item: ranges from 1 to 10, and by default follows a Gaussian distribution with mean=6 and std=1.
* • Source accuracy: ranges from 0.2 to 1, and is 0.7 by default.
* • Source recall: ranges from 0.2 to 1, and is 0.7 by default.
* • Extra ratio: equals $\frac{N_{e}}{N_{c}+N_{w}}$ (see Eq. (6)); ranges from 0.2 to 1, and is 0.2 by default.

All experiments were repeated 100 times and we report the average.
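The following sketch is our own reading of this generation process (the authors' actual generator may differ in details): each source covers each truth slot with probability `recall`, fills a covered slot correctly with probability `accuracy`, and then adds extra values according to `extra_ratio`.

```python
import random

def generate_synthetic(num_sources=10, num_items=100, domain=100,
                       accuracy=0.7, recall=0.7, extra_ratio=0.2,
                       mean_truths=6, std_truths=1):
    """Generate {(item, source): provided values} under the stated parameters."""
    data = {}
    for d in range(num_items):
        k = max(1, min(10, round(random.gauss(mean_truths, std_truths))))
        truths = random.sample(range(domain), k)
        wrong = [v for v in range(domain) if v not in truths]
        for s in range(num_sources):
            provided = []
            for t in truths:                          # recall: cover a truth slot
                if random.random() < recall:          # accuracy: fill it correctly
                    provided.append(t if random.random() < accuracy
                                    else random.choice(wrong))
            n_extra = round(extra_ratio * len(provided))   # Ne / (Nc + Nw)
            provided += random.sample(wrong, n_extra)      # extra (spurious) values
            data[(d, s)] = provided
    return data
```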
Figure 1: Varying the number of truths on synthetic data. Hybrid improves over other models when the number of truths is large.
Figure 2: Varying source accuracy. Hybrid obtains a significantly higher precision when source accuracy is low.
Figure 3: Varying source recall. Hybrid gives the highest precision and F1 when the sources have medium recall.
Figure 4: Varying extra ratio. Hybrid is the most robust and outperforms the others.

Figure 1 shows the results when we vary the number of truths in data generation. Hybrid can fairly well “guess” the number of truths and consistently outperforms the others. As the number of truths increases, the precision of Hybrid remains high, while the precision of PrecRec drops. This is because the extra ratio is fixed; when there are more truths, there are more wrong values, and PrecRec is more sensitive to noise. Not surprisingly, Accu always has the highest precision. LTM and TwoStep have low precision: the former lacks a global view of the values for the same data item, while the latter may return false values with weak support. Figures 2-4 plot the results of the different methods when we vary source qualities. As expected, as the quality of the sources drops, the quality of the fusion results drops as well. However, we observe that Hybrid has the highest F-measure in general and is the most robust. It usually gives significantly higher precision than PrecRec, since it considers the conflicts between provided values as evidence to eliminate wrong values. While most methods perform better when the source quality increases, PrecRec obtains its worst results when the source quality is medium (0.4-0.6). This is because, when the sources have similar probabilities of providing a true value and a false value, PrecRec is unable to distinguish them.

## 6 Related Work

Data fusion [2, 4] refers to the problem of identifying the truths from the different values provided by various sources. We have provided a high-level review of different approaches in Section 1 and presented a comprehensive experimental study in Section 5. A model called TEM [10] additionally considers whether the truth for a data item exists at all (e.g., date-of-death does not exist for a living person). It is mainly designed for the single-truth scenario. The method in [3] considers the case where most sources provide only a few triples, so that source quality cannot be reliably estimated. The Hybrid model can be enhanced by these approaches.

## 7 Conclusion

In this paper we present an approach to find the true values for an entity from information provided by different sources. It jointly makes two decisions for an entity: how many truths there are, and what they are. In this way, it allows the existence of multiple truths, while considering the conflicts between different values as important evidence for ruling out wrong values. Extensive experiments on both real-world and synthetic data show that the proposed approach outperforms the state-of-the-art techniques, and that it is able to obtain a high precision without sacrificing much recall.

## References

* [1] X. L. Dong, L. Berti-Equille, and D. Srivastava. Integrating conflicting data: the role of source dependence. PVLDB, 2009.
* [2] M. Gupta and J. Han. Heterogeneous network-based trust analysis: a survey. ACM SIGKDD Explorations Newsletter, 2011.
* [3] Q. Li, Y. Li, J. Gao, L. Su, B. Zhao, M. Demirbas, W. Fan, and J. Han. A confidence-aware approach for truth discovery on long-tail data. PVLDB, 2014.
* [4] X. Li, X. L. Dong, K. Lyons, W. Meng, and D. Srivastava. Truth finding on the deep web: Is the problem solved? PVLDB, 2013.
* [5] Y. Li, J. Gao, C. Meng, Q. Li, L. Su, B. Zhao, W. Fan, and J. Han. A survey on truth discovery. arXiv preprint arXiv:1505.02463, 2015.
* [6] X. Liu, X. L. Dong, B. C. Ooi, and D. Srivastava. Online data fusion. PVLDB, 2011.
* [7] R. Pochampally, A. Das Sarma, X. L. Dong, A. Meliou, and D. Srivastava. Fusing data with correlations. In SIGMOD, 2014.
* [8] X. Yin, J. Han, and P. S. Yu. Truth discovery with multiple conflicting information providers on the web. In KDD, 2007.
* [9] B. Zhao, B. I. P. Rubinstein, J. Gemmell, and J. Han. A bayesian approach to discovering truth from conflicting sources for data integration. PVLDB, 2012.
* [10] S. Zhi, B. Zhao, W. Tong, J. Gao, D. Yu, H. Ji, and J. Han. Modeling truth existence in truth discovery. In KDD, 2015.

## Appendix A Proof of Theorem 4.2

Theorem 4.2. Let $d$ be a data item and $n$ be the number of values provided for $d$.

* • Algorithm 1 estimates the probability of each provided value in time $O(n^{2})$.
* • For each value $v$ on $d$, we have $|p(v)-\hat{p}(v)|<\frac{1}{6}$, where $\hat{p}(v)$ is the exact probability computed by Hybrid, and $p(v)$ is the probability obtained by Algorithm 1. $\Box$

###### Proof A.1.

We first consider the time complexity of Algorithm 1. Lines 1-4 have a complexity of $O(n\log n)$. The nested loops in Lines 5 and 7 take $O(n^{2})$ time. Therefore the overall complexity is $O(n^{2})$.

Next we prove the approximation bound of Algorithm 1. In this proof, we use a tree structure to illustrate the computations made by the full Hybrid model; see Figure 5 as an example. The root of the tree represents having not yet selected any value. A path from the root to a node $v$ represents a possible way of selecting $v$; for instance, the path $v_{1}$-$v_{2}$-$v_{3}$ corresponds to the case where we select $v_{3}$ after selecting $v_{1}$ and $v_{2}$ sequentially (i.e., ${\cal O}=v_{1}v_{2}$). The children of a node represent the candidates for the next truth. The number under each node $v$ is the conditional probability $p(v|{\cal O},\Psi)$. By multiplying the numbers along a path, we obtain the probability of the path. The overall $p(v|\Psi)$ is thus the sum of the probabilities of all paths ending with $v$.

Figure 5: A tree structure to illustrate the full Hybrid model.

Algorithm 1 differs from the full Hybrid model in two places: (1) it terminates early when $L_{i}(\perp)>L(v_{i})$, without enumerating all possible worlds (Line 10), and (2) it assumes that $v$ has the same conditional probability $p(v|\varGamma_{i},\Psi)$ under all possible worlds of size $i-1$ (Line 8). The first one makes the approximated probability $p(v)$ lower than the exact probability $\hat{p}(v)$ under the full model, while the second one leads to a higher $p(v)$. We next prove the bound by constructing the worst case for each of them.

Case I: Algorithm 1 terminates early, so that $p(v)<\hat{p}(v)$. In this case the approximation error is due to the early termination: Algorithm 1 terminates at step $i{-}1$ without increasing $p(v)$ by $p_{j}(v)$ ($j\geq i$) in future steps. It is easy to check that Algorithm 1 outputs the same probabilities as the full model if the number of provided values is less than 3. So we require at least three values in the domain. The earliest possible termination is at step 2 (i.e., $i=3$). Next we construct a case with three values where Algorithm 1 terminates at step 2, such that the approximation error is reflected in one step (which is $p_{3}(v)$). By definition we have $L(v_{3})\leq L(v_{2})$ and $L_{3}(\perp)\geq L_{2}({\perp})$; to terminate at step 2, we need $L(v_{2})<L_{2}({\perp})$. It is easy to see that among the three values, $v_{3}$ has the largest probability at step 3. To maximize the approximation error caused by $p_{3}(v_{3})$, we need $L(v_{3})$ to be maximized and $L_{2}({\perp})$ as well as $L_{3}({\perp})$ to be minimized. Suppose $L(v_{2})=a$; we then have $L(v_{3})=L(v_{2})=a$. Let $\gamma$ be a real number where $\gamma\geq 1$; we set $L(v_{1})=\gamma{\cdot}a$. Further, let $L_{2}(\perp)=L_{3}(\perp)=a+\epsilon$, where $\epsilon$ is a very small constant. As usual, we have $L_{1}(\perp)=0$. With the above setting, we compute all conditional probabilities following Section 3.2 and illustrate them in Figure 5 (we omit $\epsilon$). Next we compute the overall probability of $v_{3}$ using the full Hybrid model and Algorithm 1, respectively.
For the full Hybrid model, we find all paths ending with $v_{3}$ at each level of the tree:

Level 1: $\hat{p}_{1}(v_{3})=\frac{1}{\gamma+2}$;

Level 2: $\hat{p}_{2}(v_{3})=\frac{\gamma}{\gamma+2}\cdot\frac{1}{3}+\frac{1}{\gamma+2}\cdot\frac{1}{\gamma+2}$;

Level 3: $\hat{p}_{3}(v_{3})=\frac{\gamma}{\gamma+2}\cdot\frac{1}{3}\cdot\frac{1}{2}+\frac{1}{\gamma+2}\cdot\frac{\gamma}{\gamma+2}\cdot\frac{1}{2}$;

Finally, $\hat{p}(v_{3})=\hat{p}_{1}(v_{3})+\hat{p}_{2}(v_{3})+\hat{p}_{3}(v_{3})=\frac{1}{\gamma+2}+\frac{\gamma+1}{2(\gamma+2)}$.

For Algorithm 1, it terminates after 2 steps:

When $i=1$: $p_{1}(v_{3})=\frac{1}{\gamma+2}$;

When $i=2$: $p_{2}(v_{3})=(1-\frac{1}{\gamma+2})\times\frac{1}{3}=\frac{\gamma+1}{3(\gamma+2)}$;

Finally, $p(v_{3})=p_{1}(v_{3})+p_{2}(v_{3})=\frac{1}{\gamma+2}+\frac{\gamma+1}{3(\gamma+2)}$.

Therefore $\hat{p}(v_{3})-p(v_{3})=\frac{1}{6}\cdot\frac{\gamma+1}{\gamma+2}<\frac{1}{6}$.

Case II: We assume the same conditional probability under all possible worlds, so that $p(v)>\hat{p}(v)$. Recall that Eq. (19) assumes that at each step $i$, $v$ has the same conditional probability $p(v|\Gamma_{i},\Psi)$ under all possible worlds where $v$ is not present yet. This $p(v|\Gamma_{i},\Psi)$ is an upper bound of the real conditional probability; we obtain the largest difference between $p(v|\Gamma_{i},\Psi)$ and the real conditional probability when the possible worlds end with $\perp$ (the real probability is then 0). We thus construct a case where $p_{2}(\perp)p(v_{3}|\Gamma_{3},\Psi)$ is maximized, leading to an over-estimation of $p_{3}(v_{3})$. Similar to the previous case, we assume $L(v_{1})=\gamma{\cdot}a$ and $L(v_{2})=a$. In this case we want Algorithm 1 to continue when $i=3$, so that $p_{3}(v_{3})$ is added to the overall probability of $v_{3}$. Therefore we require $L_{2}(\perp)\leq L(v_{2})$; to make $p_{2}(\perp)$ larger, we set $L_{2}(\perp)=L(v_{2})=a$. Given that $L(v_{3})\leq L(v_{2})$, to maximize $p(v_{3}|\Gamma_{3},\Psi)$, we need $L(v_{3})=L(v_{2})=a$. With the above setting, the probabilities computed by the full model remain the same, but Algorithm 1 continues when $i=3$:

$p_{3}(v_{3})=(1-\frac{1}{\gamma+2}-(1-\frac{1}{\gamma+2})\times\frac{1}{3})\times\frac{1}{2}=\frac{\gamma+1}{3(\gamma+2)}$;

$p(v_{3})=p_{1}(v_{3})+p_{2}(v_{3})+p_{3}(v_{3})=\frac{1}{\gamma+2}+\frac{\gamma+1}{3(\gamma+2)}+\frac{\gamma+1}{3(\gamma+2)}=\frac{1}{\gamma+2}+\frac{2(\gamma+1)}{3(\gamma+2)}$.

Therefore $p(v_{3})-\hat{p}(v_{3})=\frac{1}{6}\cdot\frac{\gamma+1}{\gamma+2}<\frac{1}{6}$.

Combining the two cases, we have $|p(v)-\hat{p}(v)|<\frac{1}{6}$.
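As a numerical sanity check of Case I (our own sketch, not part of the original proof), the following code enumerates all possible worlds exactly, with conditionals in the normalized form of Eq. (18), and compares the result against the Case I value of Algorithm 1 for $\gamma=2$; the per-step confidences $L_{i}(\perp)$ are fixed directly as in the construction above ($\epsilon$ omitted).

```python
import math
from itertools import permutations

def exact_p(v, L, L_perp):
    """p(v|Psi) by exhaustive possible-world enumeration (Eqs. (1)-(2))."""
    vals = list(L)
    def cond(u, world):  # p(u | O, Psi) in the form of Eq. (18)
        rest = [w for w in vals if w not in world]
        return L[u] / (sum(L[w] for w in rest) + L_perp[len(world) + 1])
    others = [u for u in vals if u != v]
    total = 0.0
    for k in range(len(others) + 1):
        for world in permutations(others, k):
            chain = math.prod(cond(world[j], world[:j]) for j in range(k))
            total += cond(v, world) * chain
    return total

gamma, a = 2.0, 1.0
L = {"v1": gamma * a, "v2": a, "v3": a}
L_perp = {1: 0.0, 2: a, 3: a}                     # L1(bot)=0, L2=L3=a
exact = exact_p("v3", L, L_perp)                  # 1/(g+2) + (g+1)/(2(g+2)) = 0.625
approx = 1 / (gamma + 2) + (gamma + 1) / (3 * (gamma + 2))  # Algorithm 1 = 0.5
print(exact - approx)  # (1/6)*(gamma+1)/(gamma+2) = 0.125 < 1/6
```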
# Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation

Haochen Liu, Chen Chen, Song Wang, Jundong Li (University of Virginia)

###### Abstract

Few-shot Knowledge Graph (KG) Relational Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs, given only several triplets of these relations as references (i.e., support triplets). This task has gained significant traction due to the widespread use of knowledge graphs in various natural language processing applications. Previous approaches have utilized meta-training methods and manually constructed meta-relation sets to tackle this task. Recent efforts have focused on edge-mask-based methods, which exploit the structure of the contextualized graphs of target triplets (i.e., a subgraph containing relevant triplets in the KG). However, existing edge-mask-based methods extract insufficient information from the KG and are highly influenced by spurious information in it. To overcome these challenges, we propose SAFER (Subgraph Adaptation for FEw-shot Relational Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs generated from support and query triplets to perform the prediction. Specifically, SAFER enables the extraction of more comprehensive information from support triplets while minimizing the impact of spurious information when predicting query triplets. Experimental results on three prevalent datasets demonstrate the superiority of our proposed framework SAFER. (Our code is available at https://github.com/HaochenLiu2000/SAFER.)

## 1 Introduction

Figure 1: An instance of the two limitations of edge-mask-based methods. In this example, there are two support triplets, (music, created_by, musician) and (news article, created_by, reporter). When extracting support information by finding the common subgraph, the extraction of edges with similar meanings but in different graphs fails, and some spurious information is extracted, which cannot correctly represent the logical pattern of the relation created_by.

Knowledge Graphs (KGs) consist of many triplets, i.e., (head, relation, tail), which represent specific relationships between real-world entities Wang et al. (2017); Ji et al. (2022). These triplets form directed graphs that store knowledge information and can be applied to various knowledge-based tasks Liang et al. (2022); Wang et al. (2023) such as question answering Huang et al. (2019); Saxena et al. (2020), information extraction Hoffmann et al. (2011); Daiber et al. (2013), program analysis Liang et al. (2023), and language model enhancement Zhang et al. (2020b); Yasunaga et al. (2021); Xie et al. (2022). However, KGs generally cannot encompass all the necessary knowledge triplets required by downstream tasks, as most KGs are severely incomplete Xiong et al. (2018). Therefore, it becomes crucial to complete KGs by inferring potential missing relations between entities. In particular, existing works for KG completion Bordes et al. (2013); Zhu et al. (2021); Zhang et al.
(2022) often assume the availability of sufficient instances (i.e., triplets) for each relation to be predicted. However, in real-world scenarios, it is common to encounter _few-shot relations_, where only limited instances of triplets with these relations, called _support_ triplets, are available. KGs are constantly being updated, for example, by including knowledge from social networks. This often results in new relations with a relatively scarce number of discovered triplets, as the labeling process can be laborious. These new relations are generally known as few-shot relations. Consequently, predicting new relations with only limited triplets becomes a significant task Ma and Wang (2023). Therefore, it is crucial to perform the Few-shot KG Relational Reasoning (Few-shot KGR) task Xiong et al. (2018), which aims to predict the existence of (unseen) query triplets of a relation, given a background KG and a set of a limited number of _support triplets_ of the relation as the _support set_. Currently, there exist two types of approaches for solving the Few-shot KGR task. The first type is _meta-learning-based_ methods Chen et al. (2019); Zhang et al. (2020a); Sun et al. (2022), which utilize the meta-learning framework Finn et al. (2017) to transfer useful knowledge to new KGR tasks Hospedales et al. (2021) with a limited number of support triplets, to tackle the issue of data scarcity in the target few-shot tasks. Nevertheless, the distribution of the manually selected target relations plays an important role in these methods, which will result in suboptimal performance if the meta-training sets are not well-designed. To address this limitation, more recent studies have explored _edge-mask-based_ approaches Huang et al. (2022); Meng et al. (2023), providing an alternative solution to Few-shot KGR tasks. Edge-mask-based methods analyze each support (or query) triplet by first retrieving its contextualized graph, i.e., the subgraph that consists of the head and tail entities of a triplet and the most relevant entities and relations of the triplet. The subgraph is referred to as the support (or query) graph. Then they extract common subgraphs across support graphs in the form of masks that identify edges with shared meanings for predictions on query triplets. Despite the effectiveness of these works, we argue that there are still two major limitations of edge-mask-based methods. (1) Existing edge-mask-based approaches assume that the largest common subgraph (masks) shared across all support graphs is sufficient to represent the unseen target relation. However, this assumption is difficult to satisfy in certain cases, e.g., when dealing with triplets that involve different but similar relations across support graphs. As shown in Figure 1, in the support graphs of the target relation created_by, the relations produced_by and published_in preserve similar meanings. However, the strategy of learning edge masks fails to harness the valuable insights from these different yet similar relations, resulting in insufficient extraction of information for created_by. (2) The extracted common subgraph (masks) often contains unrelated spurious information that can negatively impact prediction performance. For example, during the extraction process in Figure 1 regarding the target relation created_by, the support graphs may include spurious relations like related_job, which can be unhelpful or even misleading when predicting query triplets of the relation created_by. 
To overcome the aforementioned challenges, we propose SAFER (Subgraph Adaptation for FEw-shot Relational Reasoning), a novel subgraph-based approach that effectively utilizes useful information from support graphs while excluding spurious information. In SAFER, we first generate the contextualized graphs of support and query triplets with edge weights representing the importance of each relation for performing relational reasoning. Subsequently, we perform Subgraph Adaptation comprising two crucial modules: _Support Adaptation_ and _Query Adaptation_, which aim to extract valuable information from support graphs and exclude spurious information, respectively. In our _Support Adaptation_ module, we incorporate information from each support graph into the others, enabling adaptation to support graphs with different structures so that useful information (e.g., similar relations) can be extracted and utilized. In our _Query Adaptation_ module, we adapt the support information to the structure of the query graph so that spurious information among support graphs can be filtered out in a query-adaptive manner. As a result, we can effectively alleviate the adverse impact of spurious information. In summary, our contributions in this paper are as follows: 1. We scrutinize the challenges of few-shot knowledge graph relational reasoning (Few-shot KGR) from the perspective of extracting informative common subgraphs. We also discuss the necessity of tackling the challenges. 2. We develop a novel Few-shot KGR framework consisting of Subgraph Generation and Subgraph Adaptation. Subgraph Adaptation includes (1) a Support Adaptation (SA) module that enables a more comprehensive extraction of information from the support graphs; (2) a Query Adaptation (QA) module that allows for excluding the influence of spurious information in the extracted information. 3. We conduct experiments on three prevalent real-world KG datasets of different scales. The results further demonstrate the superiority of SAFER over other state-of-the-art approaches. ## 2 Related Work ### 2.1 Meta-learning-based Few-shot KGR Meta-learning Finn et al. (2017); Hospedales et al. (2021) is an effective learning paradigm that transfers generalizable knowledge learned from training tasks to new test tasks. Meta-learning necessitates a meta-training set that comprises multiple Few-shot KGR tasks for training purposes and then generalizes learned knowledge to tasks in the meta-test set. For example, GMatching Xiong et al. (2018) and FSRL Zhang et al. (2020a) acquire a universal metric to match query triplets with support triplets Wang et al. (2021b). The performance of meta-learning is significantly influenced by the quality of the manually created meta-training set. Moreover, the meta-training set is sampled from the same distribution as the meta-test set, which is often impractical Huang et al. (2022). To overcome these problems, some alternative studies based on subgraph structures have been proposed to tackle the Few-shot KGR task. ### 2.2 Edge-mask-based Few-shot KGR Edge-mask-based methods, such as CSR Huang et al. (2022) and SARF Meng et al. (2023), consider the few-shot relational reasoning task as an inductive reasoning problem Spelda (2020); Teru et al. (2020), which relies on the relevant relations (i.e., edges) of the triplet Galárraga et al. (2013); Lin et al. (2018); Qu et al. (2021) in the KG to perform the prediction. 
These methods employ an encoder-decoder model to encode the shared subgraphs of support samples (masks), i.e., common subgraphs in KG that connect the two entities of the triplets, into an embedding representing the target relation. The decoder uses the embedding to reconstruct the edge masks in a query graph showing the shared edges. These approaches take advantage of the edge structure to perform reasoning. However, these methods have the limitation that the largest common subgraph among support graphs may lose some of the relation’s logical patterns, and the spurious information extracted will detrimentally affect the prediction. In this paper, our approach uses a novel adaptation process to address the shortcomings of incomplete utilization of structure information in edge-mask-based methods. ## 3 Problem Formulation We study the problem of _Few-shot Knowledge Graph Relational Reasoning_ , i.e., Few-shot KGR Xiong et al. (2018); Chen et al. (2019). We first denote the background KG as $\mathcal{G}=(\mathcal{E},\mathcal{R},\mathcal{T})$, where $\mathcal{E}$ and $\mathcal{R}$ are sets of entities and relations. $\mathcal{T}=\\{(h,r,t)|h,t\in\mathcal{E},r\in\mathcal{R}\\}$ represents the facts as triplets, each of which contains a head entity, a tail entity, and a relation. For a new target relation $r^{\prime}\notin\mathcal{R}$, we are given a support set $S_{r^{\prime}}$ with $K$ triplets $\\{(h_{i},r^{\prime},t_{i})\\}_{i=1}^{K}$ of $r^{\prime}$, where $h_{i},t_{i}\in\mathcal{E}$. The number of triplets in the support set $K$ is relatively small ($K\leq 5$). With $S_{r^{\prime}}$ as the reference, we aim to predict tail entities, given a head entity $h_{q}$, i.e., $(h_{q},r^{\prime},?)$. There are usually multiple candidates of the tail entity that need to be scored and ranked. Then the candidate with the highest score is considered as the prediction result. So we will consider the query triplet $(h_{q},r^{\prime},c)$ ($c$ is a candidate) as a full triplet to score. ## 4 Methodology In this section, we introduce details of our proposed framework SAFER. As illustrated in Figure 2, for each support (or query) triplet, we first extract a support (or query) graph from the background KG and assign weights for each edge on the graph. Then we conduct Subgraph Adaptation on the generated support and query graphs and finally achieve the prediction score for a query triplet. Figure 2: The framework of SAFER, which shows the scoring pipeline for a query tail candidate $c$ of target relation $r^{\prime}$. We represent the same relations in colors, while the gray relations are all different. We first extract the contextualized graph of each support and query triplet and assign weights to all edges using an aggregation process $P_{w}$ (the width of edges represents weights). Then we apply another aggregation process $P_{a}$ and two adaptation operations to perform support information extraction and query candidate scoring. ### 4.1 Retrieving Contextualized Graphs To obtain structural information for the unseen target relation, we utilize the contextualized graphs of support and query triplets, i.e., _support graphs_ and _query graphs_. Contextualized graphs are generated based on the enclosing subgraph strategy proposed by Zhang and Chen (2018); Teru et al. (2020). We introduce how to construct contextualized graphs in Appendix A.1. 
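As a concrete illustration of Section 4.1, the enclosing-subgraph construction can be sketched as follows; the networkx-based helper and its name are illustrative assumptions, and the additional random neighbor sampling described in Appendix A.1 is omitted:

```python
import networkx as nx

def contextualized_graph(kg, h, t, n_hops=2):
    """Illustrative sketch of the enclosing-subgraph strategy
    (Zhang and Chen 2018; Teru et al. 2020): keep the nodes within
    n undirected hops of both the head h and the tail t, then take
    the KG subgraph induced by those nodes."""
    und = kg.to_undirected(as_view=True)
    near_h = nx.single_source_shortest_path_length(und, h, cutoff=n_hops)
    near_t = nx.single_source_shortest_path_length(und, t, cutoff=n_hops)
    nodes = (set(near_h) & set(near_t)) | {h, t}
    return kg.subgraph(nodes)
```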
### 4.2 Edge Weight Assignment After acquiring the contextualized graph, we propose to assign weights to all edges on the contextualized graphs based on their importance to the target relation. We assign the weight $w_{e}$ for each edge $e$ by incorporating information from all support graphs to determine the importance, such that we can effectively leverage the information within all relations. Specifically, we leverage the PathCon Wang et al. (2021a) model to extract structural information and calculate the edge weights, as it can measure graph isomorphism. While edge-mask-based methods apply the model repeatedly between any two graphs to get the masks, we only apply it to get an overall embedding $g_{all}$ of all support graphs. We define an aggregation process $P_{w}$ with $L$ iterations as follows: $b_{v}^{i}=\frac{1}{1+|\\{e|e\in N(v)\\}|}\sum_{e\in N(v)}b_{e}^{i},$ (1) $r_{v}^{i}=b_{v}^{i}\|\mathbbm{1}(v=h)\|\mathbbm{1}(v=t),$ (2) $b_{e}^{i+1}=f(r_{u}^{i}\|r_{v}^{i}\|b_{e}^{i}),u,v\in N(e),$ (3) where $b_{e}^{i}$ (or $b_{v}^{i}$) is the learned edge (or node) embedding in iteration $i$. $N(v)$ is the set of all neighboring edges of $v$. $f$ is a neural network (NN) consisting of both non-linear and linear layers. $\|$ denotes the concatenation of two vectors (or scalars). In particular, Eq. (1) aggregates the embeddings of neighboring edges of each node. Then Eq. (2) adds the labels of head and tail so that the information of a node's relative position to head and tail can be considered. Eq. (3) updates all edge embeddings based on the current embedding of the edge and its two end nodes. In the first step, we initialize $b_{e}^{0}$ with the pretrained relation embedding $v_{e}$ of the relation on edge $e$. We define the embedding of $G$ as follows: $g(G)=\text{MaxPool}(b_{v}^{L})\|b_{h}^{L}\|b_{t}^{L},$ (4) where $\text{MaxPool}(b_{v}^{L})$ is the max-pooling of all node embeddings in $G$. In the second step, similarly, we apply $P_{w}$ again to acquire the weights of edges in both the support graphs and the query graphs. Additionally, we use the average of the embeddings of all support graphs $g_{all}$ from the first step as an input to incorporate the overall information in the support set and initialize $b_{e}^{0}$ as $v_{e}\|g_{all}$. Here $g_{all}$ is defined as follows: $g_{all}=\frac{1}{K}\sum_{k}g(G_{s}^{k}).$ (5) Here $G_{s}^{k}$ is the $k$-th support graph. We use another $f$ in this step. Then we perform $P_{w}$ on the target graph $G$. Finally, we calculate the weight $w_{e}$ of edge $e$: $w_{e}=\frac{1}{1+\exp(-\text{Linear}(b_{e}^{L}))},$ (6) where $\text{Linear}(\cdot)$ is a linear layer, and $w_{e}$ will serve as the edge weight of $e$ in the subsequent adaptation modules. Note that weight assignment does not rely on specific loss functions or ground-truth definitions for edge weights. Instead, it is trained in an end-to-end manner along with other modules in the subsequent sections. All edges in the support graphs can contribute to the subsequent adaptation modules based on the weight. ### 4.3 Subgraph Adaptation In this subsection, we introduce the process of our Subgraph Adaptation module, including _Support Adaptation_ (SA) and _Query Adaptation_ (QA). After obtaining the edge-weighted support graphs and query graphs, we obtain embeddings that contain the information from different subgraphs via aggregation. While performing the aggregations, we further adapt graph information to all support and query graphs to perform SA and QA. 
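Before detailing SA and QA, the weight assignment of Section 4.2 can be made concrete. The following PyTorch sketch implements one iteration of $P_{w}$ (Eqs. (1)-(3)) and the weight head of Eq. (6); the dense incidence-matrix representation, the tensor shapes, and the exact form of $f$ are simplifying assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class PwIteration(nn.Module):
    """One iteration of the aggregation P_w (Eqs. (1)-(3)) on a small graph
    stored densely: inc is a (num_nodes x num_edges) incidence matrix with
    entry 1 where an edge touches a node; edge_ends lists the two end nodes
    of every edge."""

    def __init__(self, dim: int):
        super().__init__()
        # f acts on [r_u || r_v || b_e]; each r_* has dim + 2 entries because
        # Eq. (2) appends the head/tail indicator bits to the node embedding.
        self.f = nn.Sequential(nn.Linear(3 * dim + 4, dim), nn.ReLU())

    def forward(self, b_e, inc, head, tail, edge_ends):
        deg = inc.sum(dim=1, keepdim=True)         # |N(v)| for every node
        b_v = (inc @ b_e) / (1.0 + deg)            # Eq. (1)
        is_h = torch.zeros(b_v.size(0), 1)
        is_t = torch.zeros(b_v.size(0), 1)
        is_h[head], is_t[tail] = 1.0, 1.0
        r_v = torch.cat([b_v, is_h, is_t], dim=1)  # Eq. (2)
        u, v = edge_ends[:, 0], edge_ends[:, 1]
        return self.f(torch.cat([r_v[u], r_v[v], b_e], dim=1))  # Eq. (3)

# Eq. (6): the edge weight is the sigmoid of a linear layer on the final b_e.
dim = 64  # assumed embedding size
weight_head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
```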
We first define an $L$-iteration aggregation process $P_{a}$, which is utilized in both SA and QA: $a_{v}^{i}(k)=\frac{1}{1+\sum_{e\in N(v)}w_{e}(k)}\sum_{e\in N(v)}b_{e}^{i}(k)\cdot w_{e}(k),$ (7) $\displaystyle b_{v}^{i}(k)=\left\\{\begin{array}[]{ll}T_{SA}(\\{a^{i}_{v}(m)\\}_{m=1}^{K}),&\text{if SA},\\\ T_{QA}(a^{i}_{v}(k),\\{b^{i}_{t}(m)\\}_{m=1}^{K};\lambda),&\text{if QA},\end{array}\right.$ (8) $r_{v}^{i}(k)=b_{v}^{i}(k)\|\mathbbm{1}(v=h)\|\mathbbm{1}(v=t),$ (9) $b_{e}^{i+1}(k)=f(r_{u}^{i}(k)\|r_{v}^{i}(k)\|b_{e}^{i}(k)),u,v\in N(e),$ (10) where $k$ indicates that a term is calculated on the $k$-th support graph, and it can be replaced by $q$ to represent the value on a query graph in _Query Adaptation_ (e.g., $a^{i}_{v}(q)$ and $b^{i}_{v}(q)$). $N(v)$ is the set of all neighboring edges of node $v$. $w_{e}$ is the weight of edge $e$. $a_{v}^{i}$ is the aggregation output of node $v$ at iteration $i$. Here Eq. (7) aggregates the embeddings of all neighboring edges of each node based on edge weights. $b_{v}^{i}$ (or $b_{e}^{i}$) is the learned node (or edge) embedding in iteration $i$. The adaptation steps are $T_{SA}(\cdot)$ (for SA) and $T_{QA}(\cdot)$ (for QA), whose details will be introduced in the following subsections. $f$ is a neural network (NN) consisting of non-linear and linear layers acting in both SA and QA. $\lambda$ is a hyperparameter used in QA, introduced below. Note that we initialize $b_{e}^{0}(k)$ with the pretrained embedding of the relation on edge $e$ to incorporate more information. #### 4.3.1 Support Adaptation To extract valuable information from all support graphs and reduce the omission of information, we propose the _Support Adaptation_ (SA) strategy that enables the incorporation of information from all support graphs when learning the embedding for each support graph. During aggregation on each graph, we average the learned embeddings of the tail entities in all support graphs after each iteration to absorb beneficial information from all other support graphs. In particular, we choose to average the embeddings of tail entities (instead of other entities), because the tail entity preserves the most crucial information for the prediction of the target relation. The averaged embedding will be used to update embeddings of all edges connected to tail entities in all support graphs. This strategy ensures the transfer of relational information from one support graph to various others, thereby enabling adaptation to structures of different support graphs during subsequent aggregation steps. In this way, all edges in the support graph can contribute to SA based on their weights. In SA, we apply $P_{a}$ to all $K$ support graphs for $L$ iterations. $T_{SA}(\cdot)$ is defined as $\displaystyle T_{SA}(\\{a^{i}_{v}(m)\\}_{m=1}^{K})=$ (11) $\displaystyle\left\\{\begin{array}[]{ll}\frac{1}{K}\sum_{m=1}^{K}a^{i}_{t}(m),&\text{if}\ v=t,\\\\[6.0pt] a_{v}^{i}(k),&\text{otherwise}.\end{array}\right.$ Via Eq. (11), we manage to incorporate information from other support graphs when performing aggregation on each support graph. Generally, if the information from a specific relation in a support graph can be easily propagated on another support graph with a different relation, we can infer that these two relations maintain similar meanings. Therefore, our SA strategy allows for extracting relevant relations (e.g., different yet similar relations) among support graphs. 
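A minimal sketch of the SA step in Eq. (11) follows; the list-based inputs and the helper name are illustrative assumptions:

```python
import torch

def t_sa(a_v, tail_idx):
    """Support Adaptation step (Eq. (11)). a_v is a list of K node-embedding
    matrices, one per support graph; tail_idx[k] is the tail node of graph k.
    The K tail embeddings are replaced by their average; every other node
    keeps its own aggregation output."""
    tail_avg = torch.stack(
        [a_v[k][tail_idx[k]] for k in range(len(a_v))]
    ).mean(dim=0)
    b_v = [a.clone() for a in a_v]
    for k in range(len(b_v)):
        b_v[k][tail_idx[k]] = tail_avg
    return b_v
```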
#### 4.3.2 Query Adaptation _Query Adaptation_ (QA) is the subsequent module that can exclude the influence of spurious information extracted by the SA module. Generally, we predict the score of a query triplet by comparing the similarity between information learned from the query graph and the support graphs. To deal with the presence of spurious information across query and support graphs, our QA module adapts the tail node embeddings in support graphs to the structure of the query graph. In this manner, the support information unhelpful for query scoring will be filtered out, due to different structures between support graphs and query graphs. Then we calculate the score of a query triplet by comparing the filtered support embedding with the embedding of the query graph. To perform QA, we apply the aggregation process $P_{a}$ to the query graph of the query triplet candidate. $T_{QA}(\cdot)$ is defined as follows: $\displaystyle T_{QA}(a_{v}^{i}(q),\\{b^{i}_{t}(m)\\}_{m=1}^{K};\lambda)=$ (12) $\displaystyle\left\\{\begin{array}[]{ll}(1-\lambda)\cdot a_{t}^{i}(q)+\frac{\lambda}{K}\sum_{m=1}^{K}b^{i}_{t}(m),&\text{if}\ v=t,\\\\[6.0pt] a_{v}^{i}(q),&\text{otherwise}.\end{array}\right.$ Here $\lambda\geq 0$ is a hyperparameter of QA, which shows the ratio of incorporation of extracted support information and the information from the query graph. In this manner, we perform aggregation for support information on the query graph. As a result, our QA module can exclude the influence of spurious information in support graphs, thus achieving more precise prediction results. To perform prediction for a query triplet, we compare two embeddings, $E_{s}$ and $E_{q}$, which involve (filtered) support information and query information, respectively. Specifically, we define $E_{s}=T_{QA}(a_{t}^{L}(q),\\{b^{L}_{t}(m)\\}_{m=1}^{K};\lambda)$ (13) as the result of the filtered support information with $\lambda>0$ obtained from Eq. (12). For $E_{q}$, we perform $P_{a}$ with $\lambda=0$ to ensure that there is no incorporation of support information. We define $E_{q}$ as follows: $E_{q}=T_{QA}(a_{t}^{L}(q),\\{b^{L}_{t}(m)\\}_{m=1}^{K};0).$ (14) As the calculation of $E_{q}$ does not involve information from support graphs, $E_{q}$ only contains the query information. Additionally, we concatenate the average of pretrained embeddings of all support and query tail entities to $E_{s}$ and $E_{q}$, respectively, so that the pretrained entity embedding can also contribute to the scoring. In particular, we use the cosine similarity between $E_{s}$ and $E_{q}$ to measure the score of a query candidate, denoted as $s(t_{q})=\text{cos}(E_{s}\|\frac{1}{K}\sum_{k=1}^{K}v_{t_{s,k}},E_{q}\|v_{t_{q}}),$ (15) where $s(t_{q})$ is the score for $t_{q}$, i.e., the tail entity of the query triplet. $t_{s,k}$ is the tail entity of the $k$-th support triplet. We use $v_{t_{s,k}}$ (or $v_{t_{q}}$) to denote the pretrained node embedding of $t_{s,k}$ (or $t_{q}$). Note that both $E_{s}$ and $E_{q}$ are solely acquired via aggregation on the query graph. This ensures exclusion of spurious information in support graphs, thus achieving more precise scoring results. ### 4.4 Training Objective To train the overall SAFER framework, we leverage contrastive learning with positive samples (i.e., same relation in support and query triplets) and negative samples (i.e., different relations in support and query triplets). 
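Before specifying the loss, the QA blending and candidate scoring of Eqs. (12)-(15) can be sketched as follows; the helper names and input conventions are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def t_qa(a_q_tail, support_tails, lam):
    """Query Adaptation at the tail node (Eq. (12)): blend the query-graph
    aggregation with the averaged support tail embeddings b_t(m)."""
    return (1.0 - lam) * a_q_tail + lam * torch.stack(support_tails).mean(dim=0)

def candidate_score(a_q_tail, support_tails, v_ts, v_tq, lam):
    """Eqs. (13)-(15): E_s uses lambda > 0 (filtered support information),
    E_q uses lambda = 0 (query information only); both are concatenated with
    pretrained tail-entity embeddings before taking the cosine similarity."""
    e_s = torch.cat([t_qa(a_q_tail, support_tails, lam),
                     torch.stack(v_ts).mean(dim=0)])
    e_q = torch.cat([t_qa(a_q_tail, support_tails, 0.0), v_tq])
    return F.cosine_similarity(e_s, e_q, dim=0)
```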
Specifically, we use the Margin Ranking Loss: $\mathcal{L}=\max(s_{neg}-s_{pos}+\gamma,0),$ (16) where $s_{pos}$ and $s_{neg}$ are scores of the positive sample and the negative sample, respectively. $\gamma\in\mathbbm{R}$ is a hyperparameter utilized to control the margin that separates positive and negative samples. ## 5 Experiments In this section, we elaborate on the experiments for evaluating our proposed framework. ### 5.1 Experimental Settings #### 5.1.1 Datasets We evaluate our framework and other baselines on three real-world Few-shot KGR datasets, generated based on NELL Mitchell et al. (2018), FB15K-237 Toutanova et al. (2015), and ConceptNet Speer et al. (2017), respectively. The NELL dataset is a subset of NELL-One Chen et al. (2019), obtained by selecting the relations that have between 50 and 500 triplets as few-shot tasks. For FB15K-237 and ConceptNet, we select the 30 and 2 least frequently appearing relations as test few-shot tasks, respectively, following Lv et al. (2019) and Chen et al. (2019). Table 1 lists the statistics of all three datasets. #### 5.1.2 Evaluation Metrics We perform the evaluation for our framework and all baselines by calculating the scores for query candidates of each test instance using the standard ranking metrics. In particular, we utilize the Mean Reciprocal Rank (MRR) and Hits@h. The MRR measures the average reciprocal rank of the correct candidate in the ranking of all candidates, where a higher value indicates better performance. We also compute the Hits@h value, which measures the percentage of the correct candidates ranked within the top $h=\\{1,5,10\\}$ positions. In evaluation, each correct candidate in the test set is paired with 50 negative candidate triplets. #### 5.1.3 Baselines We compare our framework with existing Few-shot KGR methods, including MetaR Chen et al. (2019), FSRL Zhang et al. (2020a), CSR-OPT Huang et al. (2022), CSR-GNN Huang et al. (2022), SARF+Learn Meng et al. (2023), and SARF+Summat Meng et al. (2023). For meta-learning-based methods, the training is achieved by randomly sampling tasks from the KG rather than the meta-training split that is originally provided, to avoid the influence of manually constructed meta-training sets. Table 1: Statistics of the three Few-shot KGR datasets.

Dataset | # Entities | # Relations | # Edges | # Tasks
---|---|---|---|---
NELL | 68,544 | 291 | 181,109 | 11
FB15K-237 | 14,543 | 200 | 268,039 | 30
ConceptNet | 790,703 | 14 | 2,541,996 | 2

Table 2: Performance comparison on different KG datasets. The best and second-best results are shown in bold and underlined, respectively. 
Dataset | Method | MRR | Hits@1 | Hits@5 | Hits@10
---|---|---|---|---|---
NELL | MetaR | 0.471 | 0.322 | 0.647 | 0.763
 | FSRL | 0.490 | 0.327 | 0.695 | 0.853
 | CSR-OPT | 0.463 | 0.321 | 0.629 | 0.760
 | CSR-GNN | 0.577 | 0.442 | 0.746 | 0.858
 | SARF+Learn | 0.627 | 0.493 | 0.798 | 0.877
 | SARF+Summat | 0.626 | 0.493 | 0.797 | 0.875
 | SAFER (ours) | 0.674 | 0.560 | 0.812 | 0.887
FB15K-237 | MetaR | 0.805 | 0.740 | 0.881 | 0.937
 | FSRL | 0.684 | 0.573 | 0.817 | 0.912
 | CSR-OPT | 0.619 | 0.512 | 0.747 | 0.824
 | CSR-GNN | 0.781 | 0.718 | 0.851 | 0.907
 | SARF+Learn | 0.779 | 0.718 | 0.846 | 0.905
 | SARF+Summat | 0.753 | 0.688 | 0.814 | 0.884
 | SAFER (ours) | 0.793 | 0.728 | 0.860 | 0.914
ConceptNet | MetaR | 0.318 | 0.226 | 0.390 | 0.496
 | FSRL | 0.577 | 0.469 | 0.695 | 0.753
 | CSR-OPT | 0.559 | 0.450 | 0.692 | 0.736
 | CSR-GNN | 0.606 | 0.496 | 0.735 | 0.777
 | SARF+Learn | 0.613 | 0.511 | 0.731 | 0.771
 | SARF+Summat | 0.624 | 0.527 | 0.729 | 0.768
 | SAFER (ours) | 0.638 | 0.564 | 0.721 | 0.743

### 5.2 Performance Comparison The detailed settings of our experiments are in Appendix A.2. We evaluate SAFER along with other methods on the three datasets. For baseline performance, we use the experimental results from Huang et al. (2022) and Meng et al. (2023). Table 2 shows that our method outperforms baselines in most cases. On NELL and ConceptNet, the improvements of SAFER in testing MRR are $7.67\%$ and $2.24\%$, and the improvements in Hits@1 are $13.59\%$ and $7.02\%$, respectively. On FB15K-237, our method is the second best, while being very close to MetaR. The reason is that FB15K-237 contains a large number of relations whose contextualized graphs contain only one triplet, and thus the methods based on the subgraphs' structure (i.e., CSR, SARF, SAFER) are limited in performance. Compared to baselines, SAFER shows more significant advantages in MRR and Hits@1. This is because, for the query candidates with high scores, the information provided by the support and query graphs will be similar. Thus, the spurious information in support graphs will more seriously impact the scoring. Nevertheless, our process avoids spurious information in support graphs, which contributes more to the detailed comparison between high-score samples. Thus, SAFER achieves a more precise scoring result. Figure 3: The performance of our proposed method SAFER with different $\lambda$. ### 5.3 Hyperparameter Study The value of $\lambda$ balances the removal of spurious information and the prevention of over-filtering in QA. To study the impact of $\lambda$, we conduct experiments with different values of $\lambda$, ranging from 0.001 to 1. The experimental results are presented in Figure 3. In general, these results indicate that different datasets have different optimal values of $\lambda$. For both MRR and Hits@1, the optimal $\lambda$ is 0.1 for NELL and 0.5 for FB15K-237 and ConceptNet. When $\lambda=1$, the scoring process is actually a direct comparison between the outputs $b_{t}^{L}$ of support graphs and the query graph in $P_{a}$ without any adaptation. In this case, the results are much worse than the optimal results, which demonstrates the strength of our QA module. For the NELL dataset, the optimal value of $\lambda$ is much smaller because the candidates in NELL have more complex subgraphs and thus require a more precise comparison of the detailed local features. ### 5.4 Ablation Study Table 3: Ablation study on three datasets. The best results are shown in bold. 
Dataset | Method | MRR | Hits@1 | Hits@5 | Hits@10
---|---|---|---|---|---
NELL | SAFER | 0.674 | 0.560 | 0.812 | 0.887
 | SAFER$\backslash$W | 0.546 | 0.428 | 0.683 | 0.752
 | SAFER$\backslash$S | 0.575 | 0.434 | 0.753 | 0.832
 | SAFER$\backslash$Q | 0.533 | 0.422 | 0.659 | 0.715
FB15K-237 | SAFER | 0.793 | 0.728 | 0.860 | 0.914
 | SAFER$\backslash$W | 0.761 | 0.689 | 0.840 | 0.901
 | SAFER$\backslash$S | 0.761 | 0.688 | 0.841 | 0.901
 | SAFER$\backslash$Q | 0.778 | 0.713 | 0.846 | 0.905
ConceptNet | SAFER | 0.638 | 0.564 | 0.721 | 0.743
 | SAFER$\backslash$W | 0.474 | 0.331 | 0.632 | 0.729
 | SAFER$\backslash$S | 0.510 | 0.399 | 0.629 | 0.728
 | SAFER$\backslash$Q | 0.533 | 0.404 | 0.710 | 0.742

In this subsection, we conduct an ablation study to evaluate the contributions of the three modules in SAFER: Weight Assignment, Support Adaptation, and Query Adaptation. In particular, we remove one module in SAFER each time and report the performance of the revised model on all three datasets. For SAFER$\backslash$W, we directly set the weight $w_{e}=1$ for all edges to remove the Weight Assignment module. For SAFER$\backslash$S, we remove the SA module by removing the averaging in each iteration of $P_{a}$ and only using the average of its final outputs as the support embedding. For SAFER$\backslash$Q, we set $\lambda=1$ to change the scoring into a direct comparison between the outputs $b_{t}^{L}$ of support graphs and the query graph in $P_{a}$ without QA. The results of the ablation study, presented in Table 3, validate the effectiveness of all modules in SAFER. Removing the Weight Assignment module significantly decreases the MRR metric. This demonstrates the importance of the weights in the data preparation. Furthermore, removing the SA module leads to a decrease in all evaluation metrics. This is because, at each iteration of $P_{a}$, the aggregation of embeddings from other graphs can emphasize relevant relations in the support graphs. Without this module, the adaptation process becomes a simple average of the final outputs of $P_{a}$ of all support graphs, resulting in a loss of emphasis on critical information. Furthermore, the results highlight the importance of the QA module, particularly in terms of MRR and Hits@1, which reflect the similarity between high-score candidates and support samples. By filtering the support information, QA ensures that only relevant and useful information from the support graph is retained. This prevents the inclusion of spurious information within the predefined limits (e.g., the common subgraph), ultimately contributing to improved performance. ### 5.5 Case Study Figure 4: An instance on the ConceptNet dataset using the edge-mask-based method CSR and our method SAFER. The figure shows parts of the support and query graphs and the scores of the top-3 candidates of the two methods. The shown edges illustrate the limitation of extracting common subgraphs in edge-mask-based methods. In this section, we study a case where the masks (common subgraphs) extracted by existing edge-mask-based methods fail to correctly represent the target relation. We use a real example in the ConceptNet test set to demonstrate the limitations of extracting common subgraphs to represent the logical pattern of the target relation. We consider the 2-shot relational reasoning task with two support triplets (art, created_by, artist) and (babies, created_by, humans), along with a query triplet (article, created_by, writer). 
Here we use an example exhibiting both extracted spurious relations and unextracted relevant relations to showcase the two limitations of edge-mask-based methods, as shown in Figure 4. In the observed support graphs, we can identify two edges of relations at_location and related_to as similar but unshared information, and edges of relation action as spurious information. Regarding the prediction results, our approach SAFER ranks the correct tail entity writer first among all candidates, whereas the CSR model ranks it third. In the scoring result of CSR, the incorrect candidates guideline and autism both receive higher scores than writer. This study shows that SAFER can effectively address the two limitations of existing edge-mask-based methods in information extraction and processing. ## 6 Conclusion In this paper, we introduce SAFER, a novel approach designed to address the challenges in Few-shot Knowledge Graph Relational Reasoning (Few-shot KGR). SAFER overcomes the limitations of existing methods by extracting useful information while excluding spurious information. We first generate edge-weighted subgraphs of triplets to retrieve useful information from the knowledge graph. With the generated subgraphs, we perform Support Adaptation, which enables the incorporation of useful information that is difficult to extract (e.g., different yet similar relations). Subsequently, our Query Adaptation module filters out spurious information that is easily extracted (e.g., unhelpful relations that are shared across support graphs). Experimental evaluations on three datasets demonstrate the superiority of SAFER over other state-of-the-art baselines under different evaluation metrics. In summary, our work provides valuable insights into the potential of subgraph adaptation to improve performance on Few-shot KGR tasks. ## 7 Acknowledgement This work is supported in part by the National Science Foundation under grants (IIS-2006844, IIS-2144209, IIS-2223769, CNS2154962, and BCS-2228534), the Commonwealth Cyber Initiative Awards under grants (VV-1Q23-007, HV2Q23-003, and VV-1Q24-011), the JP Morgan Chase Faculty Research Award, and the Cisco Faculty Research Award. ## References * Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. _NeurIPS_. * Chen et al. (2019) Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In _EMNLP-IJCNLP_. * Daiber et al. (2013) Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In _SEMANTICS_. * Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _ICML_. * Galárraga et al. (2013) Luis Antonio Galárraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2013. AMIE: association rule mining under incomplete evidence in ontological knowledge bases. In _WWW_. * Hoffmann et al. (2011) Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In _ACL_. * Hospedales et al. (2021) Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. 2021. Meta-learning in neural networks: A survey. 
_TPAMI_. * Huang et al. (2022) Qian Huang, Hongyu Ren, and Jure Leskovec. 2022. Few-shot relational reasoning via connection subgraph pretraining. In _NeurIPS_. * Huang et al. (2019) Xiao Huang, Jingyuan Zhang, Dingcheng Li, and Ping Li. 2019. Knowledge graph embedding based question answering. In _WSDM_. * Ji et al. (2022) Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2022. A survey on knowledge graphs: Representation, acquisition, and applications. _TNNLS_. * Liang et al. (2022) Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, and Fuchun Sun. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. _arXiv preprint arXiv:2212.05767_. * Liang et al. (2023) Ke Liang, Jim Tan, Dongrui Zeng, Yongzhe Huang, Xiaolei Huang, and Gang Tan. 2023. Abslearn: a gnn-based framework for aliasing and buffer-size information retrieval. _PAA_. * Lin et al. (2018) Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In _EMNLP_. * Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _ICLR_. * Lv et al. (2019) Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2019. Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations. In _EMNLP-IJCNLP_. * Ma and Wang (2023) Haodi Ma and Daisy Zhe Wang. 2023. A survey on few-shot knowledge graph completion with structural and commonsense knowledge. _arXiv preprint arXiv:2301.01172_. * Meng et al. (2023) Lingyuan Meng, Ke Liang, Bin Xiao, Sihang Zhou, Yue Liu, Meng Liu, Xihong Yang, and Xinwang Liu. 2023. Sarf: Aliasing relation assisted self-supervised learning for few-shot relation reasoning. _arXiv preprint arXiv:2304.10297_. * Mitchell et al. (2018) T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2018. Never-ending learning. _Commun. ACM_. * Qu et al. (2021) Meng Qu, Junkun Chen, Louis-Pascal A. C. Xhonneux, Yoshua Bengio, and Jian Tang. 2021. Rnnlogic: Learning logic rules for reasoning on knowledge graphs. In _ICLR_. * Saxena et al. (2020) Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In _ACL_. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _AAAI_. * Spelda (2020) Petr Spelda. 2020. Machine learning, inductive reasoning, and reliability of generalisations. _AI_. * Sun et al. (2022) Jian Sun, Yu Zhou, and Chengqing Zong. 2022. One-shot relation learning for knowledge graphs via neighborhood aggregation and paths encoding. _TALLIP_. * Teru et al. (2020) Komal K. Teru, Etienne G. Denis, and William L. Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In _ICML_. * Toutanova et al. (2015) Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In _EMNLP_. * Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. 
Complex embeddings for simple link prediction. In _ICML_. * Wang et al. (2021a) Hongwei Wang, Hongyu Ren, and Jure Leskovec. 2021a. Relational message passing for knowledge graph completion. In _SIGKDD_. * Wang et al. (2017) Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. _TKDE_. * Wang et al. (2021b) Song Wang, Xiao Huang, Chen Chen, Liang Wu, and Jundong Li. 2021b. Reform: Error-aware few-shot knowledge graph completion. In _CIKM_. * Wang et al. (2023) Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, et al. 2023. Knowledge editing for large language models: A survey. _arXiv preprint arXiv:2310.16218_. * Xie et al. (2022) Qianqian Xie, Jennifer Bishop, Prayag Tiwari, and Sophia Ananiadou. 2022. Pre-trained language models with domain knowledge for biomedical extractive summarization. _KBS_. * Xiong et al. (2018) Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In _EMNLP_. * Yasunaga et al. (2021) Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: reasoning with language models and knowledge graphs for question answering. In _NAACL-HLT_. * Zhang et al. (2020a) Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, and Nitesh V. Chawla. 2020a. Few-shot knowledge graph completion. In _AAAI_. * Zhang et al. (2022) Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, and Hui Xiong. 2022. Learning to walk with dual agents for knowledge graph reasoning. In _AAAI_. * Zhang and Chen (2018) Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. In _NeurIPS_. * Zhang et al. (2020b) Yice Zhang, Jiaxuan Lin, Yang Fan, Peng Jin, Yuanchao Liu, and Bingquan Liu. 2020b. CN-HIT-IT.NLP at semeval-2020 task 4: Enhanced language representation with multiple knowledge triples. In _SemEval_. * Zhu et al. (2021) Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal A. C. Xhonneux, and Jian Tang. 2021. Neural bellman-ford networks: A general graph neural network framework for link prediction. In _NeurIPS_. ## Appendix A Appendix ### A.1 Retrieving Contextualized Graphs In this section, we introduce how we retrieve contextualized graphs from a triplet. Contextualized graphs are generated based on the enclosing subgraph strategy proposed by Zhang and Chen (2018); Teru et al. (2020). Specifically, for a given triplet $(h,r,t)$, we first sample the nodes within $n$-hop undirected neighbors of both the head entity $h$ and the tail entity $t$ from the background KG. To include sufficient nodes for logic extraction, we also perform random sampling from all neighbors of $h$ and $t$. The resulting contextualized graph is induced by all selected nodes and their connections. It should be noted that the specific value of $n$ is determined based on the density of the KG. In particular, these contextualized graphs can capture the local structure and relevant entities surrounding the support and query triplets, thus allowing us to extract valuable information for the relational reasoning task. ### A.2 Experimental Settings In this section, we delve into a more comprehensive exposition of our experimental setups, including detailed parameter settings, as applied to the three distinct real KG datasets. In our experiments, we have employed 3-shot relational reasoning tasks across all three datasets. 
For the NELL dataset, we set $n=2$ hops, whereas for both the FB15K-237 and ConceptNet datasets, we use $n=1$ hop when generating the contextualized graphs of their respective triplets. Regarding the neural network $f$, we have incorporated three distinct neural networks for the first and second steps of weight assignment and the adaptation module. The overall number of iterations of all modules is set to four, and the hidden dimension of all embeddings (excluding the initialization) has been standardized to 128. For the standard model, we choose the hyperparameter $\lambda$ in Query Adaptation as $\lambda=0.1$ for NELL and $\lambda=0.5$ for FB15K-237 and ConceptNet. All methods have utilized 100-dimensional relation and entity embeddings. For pretrained embeddings, we have employed TransE Bordes et al. (2013) for the NELL and FB15K-237 datasets, while ComplEx Trouillon et al. (2016) has been utilized for ConceptNet. In the context of the NELL dataset, the TransE embeddings have been integrated by concatenating $v_{head}-v_{tail}$ to $E_{s}$ and $E_{q}$ within the _Query Adaptation_ phase. Here, $v_{head}$ and $v_{tail}$ signify the pretrained embeddings of the head and tail entities, and an optional neural network ($NN(v_{head}-v_{tail})$) can also be added. For the FB15K-237 dataset, a $BatchNorm$ layer has been introduced within the $Linear$ layer in Eq. (6). Regarding optimization, we have employed AdamW Loshchilov and Hutter (2019) with the learning rate $10^{-5}$, utilizing a linear schedule with 2,000 warm-up steps and a total of 20,000 steps. To ensure robustness and reliability, each reported experimental result is the average of three independent runs. ### A.3 Experimental Details We conduct all our SAFER training and testing procedures using NVIDIA RTX A6000 GPUs with a memory capacity of 48GB. Each training and testing instance is executed on a single GPU using Python 3.10.10. We implement our framework with PyTorch. ### A.4 Limitations In this section, we introduce the limitations of our work in detail. Our SAFER model incorporates the Query Adaptation (QA) module to mitigate the inclusion of spurious information derived from the Support Adaptation (SA) module. For tail candidates with notably high scores, indicating substantial similarity between query and support graphs, the presence of extracted spurious information can severely impact the scoring process. In this way, the model tends to compare the most important and detailed information between support and query. Consequently, this has resulted in a remarkable enhancement in the Mean Reciprocal Rank (MRR) and Hits@1 metrics. However, this adaptation process can still inadvertently lead to the omission of certain global information from the support graph. This is a consequence of transferring all support information for processing onto the query graph. Consequently, the improvements of SAFER in the Hits@5 and Hits@10 metrics are not as pronounced as those observed in MRR and Hits@1. At present, we have yet to devise a solution to effectively integrate global information into predictions. Balancing the incorporation of detailed and global information concurrently presents a challenge that necessitates further investigation and future research endeavors.
# Disentangled Feature Representation for Few-shot Image Classification Hao Cheng1, Yufei Wang1, Haoliang Li2, Alex C. Kot1, Bihan Wen1 Bihan Wen is the corresponding author. ###### Abstract Learning a generalizable feature representation is critical for few-shot image classification. While recent works exploited task-specific feature embedding using meta-tasks for few-shot learning, they are limited in many challenging tasks as they are distracted by excursive features such as the background, domain and style of the image samples. In this work, we propose a novel Disentangled Feature Representation framework, dubbed DFR, for few-shot learning applications. DFR can adaptively decouple the discriminative features that are modeled by the classification branch from the class-irrelevant component of the variation branch. In general, most of the popular deep few-shot learning methods can be plugged in as the classification branch, thus DFR can boost their performance on various few-shot tasks. Furthermore, we propose a novel FS-DomainNet dataset based on DomainNet, for benchmarking the few-shot domain generalization task. We conducted extensive experiments to evaluate the proposed DFR on general and fine-grained few-shot classification, as well as few-shot domain generalization, using the corresponding four benchmarks, i.e., mini-ImageNet, tiered-ImageNet, CUB, as well as the proposed FS-DomainNet. Thanks to the effective feature disentangling, the DFR-based few-shot classifiers achieved state-of-the-art results on all datasets. ## Introduction While deep neural networks achieved superior results on image classification via supervised learning from large-scale datasets, it is challenging to classify a query sample using only a few labelled samples, which is known as few-shot classification (Fei-Fei, Fergus, and Perona 2006). How to learn a discriminative feature representation that can be generalized from the training set to new classes in testing is critical for few-shot tasks. Popular few-shot methods applied _meta-learning_ (Vinyals et al. 2016) by episodic training from a large number of simulated meta-tasks, to obtain a task-specific feature embedding associated with a distance metric (e.g., cosine or Euclidean distance) for classification. Figure 1: Excursive features (highlighted in boxes) that distract few-shot classification for fine-grained and multi-domain tasks. In practice, many excursive features of image data, e.g., style, domain and background, are typically class-irrelevant. Figure 1 shows two such examples in fine-grained and multi-domain classification tasks, respectively, which are challenging for few-shot learning: (1) Only the subtle traits are critical to characterize and differentiate the objects of fine-grained classes; (2) The style and domain information dominates the visual appearance of an image, but is in fact excursive and class-irrelevant. As the subtle traits vary in different simulated meta-tasks, they can hardly be preserved by the learned embedding. On the contrary, the excursive features usually distract the feature embedding (Tokmakov, Wang, and Hebert 2019; Zhang et al. 2020), leading to degraded few-shot classification results. To rectify such limitations, most recent few-shot methods attempted to suppress excursive features or propose proper metrics, e.g., LCR (Tokmakov, Wang, and Hebert 2019), DeepEMD (Zhang et al. 2020), FEAT (Ye et al. 2020) and CNL (Zhao et al. 2021). 
However, none of the existing methods explicitly extract the class-specific representation from the excursive image features. In this paper, we present a novel approach that incorporates deep disentangling for few-shot image classification. Such an approach can selectively extract the subtle traits for each task, while maintaining the model generalization. First, we propose a novel Disentangled Feature Representation (DFR) framework which can be applied to most few-shot learning methods. DFR contains two branches: the classification branch extracts the discriminative features of the image sample, while the variation branch encodes the class-irrelevant information that complements the image representation. A RelationNet (Sung et al. 2018) is applied in the variation branch to measure the feature similarity of each sample pair. A hybrid loss is applied for training DFR, including a reconstruction loss to ensure image information preservation, as well as the translation, discriminative and cross-entropy losses for class-specific feature disentangling. At the inference stage, only the disentangled features from the classification branch are used for class prediction. Second, we integrate the proposed DFR framework into representative baselines for few-shot classification, including the popular ProtoNet (Snell, Swersky, and Zemel 2017) and the state-of-the-art DeepEMD (Zhang et al. 2020) and FEAT (Ye et al. 2020), to carefully investigate the behaviour of DFR with feature visualization and analysis. Extensive experiments are conducted on a set of few-shot tasks, i.e., general image classification, fine-grained image classification, and domain generalization over four benchmarks to demonstrate the effectiveness of our DFR framework. Our main contributions are summarized as follows: * We propose a novel disentangled feature representation (DFR) framework, which can be easily applied to most of the few-shot learning methods to extract class-specific features from excursive information. * We propose a novel benchmark named FS-DomainNet based on DomainNet (Peng et al. 2019) and fully study the few-shot domain generalization task with two evaluation settings. * We evaluate the DFR framework over four few-shot benchmarks, i.e., mini-ImageNet, tiered-ImageNet, CUB-200-2011, and the proposed FS-DomainNet dataset. Results show that incorporating DFR into existing few-shot algorithms, including both baseline and state-of-the-art methods, can generate consistent gains for multiple few-shot classification tasks under both 5-way 1-shot and 5-way 5-shot settings. Figure 2: DFR for few-shot image classification: Given a few-shot meta-task $\mathcal{T}_{FS}$ with support ($\mathcal{S}$) and query ($\mathcal{Q}$) image sets, the encoders ($E_{cls}$ and $E_{var}$) of the classification and variation branches extract the class-specific and class-irrelevant features, respectively. $E_{cls}$ is a classic backbone used in few-shot methods (e.g., ResNet-12 in this work), which follows the blue stream. The output of $E_{cls}$ is used for few-shot classification, and the output of $E_{var}$ is fed to the RelationNet with a gradient reverse layer (GRL) to remove any class-related information. An MLP block extracts the class information from the classification branch to guide the image reconstruction and translation. Specifically, the Decoder D can achieve self-reconstruction, class-reconstruction, or class-translation according to the different inputs of the MLP. 
## Related Work ### Few-shot Learning According to the meta-learning framework (Vinyals et al. 2016), there are mainly three types of few-shot learning methods. Firstly, the gradient-based methods utilize a good model initialization (Finn, Abbeel, and Levine 2017; Nichol, Achiam, and Schulman 2018) or optimization strategy (Ravi and Larochelle 2017; Rusu et al. 2019; Lee et al. 2019; Liu, Schiele, and Sun 2020) to quickly adapt to novel tasks. Secondly, the data augmentation-based methods focus on generating (Gidaris and Komodakis 2019; Li et al. 2020a) or gathering augmented data (Hariharan and Girshick 2017; Wang et al. 2018; Yang, Liu, and Xu 2021) to enable classification from limited samples. In this work, we focus on the third type, namely the metric learning-based methods, i.e., to learn the discriminative feature embedding for distinguishing different image classes. For example, ProtoNet (Snell, Swersky, and Zemel 2017) considered the class-mean representation as the prototype of each class and applied the Euclidean distance metric for classification. LCR (Tokmakov, Wang, and Hebert 2019) applied the subspace-based embedding for each class, and DeepEMD (Zhang et al. 2020) adopted the earth mover's distance as the metric function to compare the similarity between two feature maps in a structured way. FEAT (Ye et al. 2020) defined four kinds of set-to-set transformations, including the self-attention transformer (Jaderberg et al. 2015), to learn a task-specific feature embedding for few-shot learning. Based on prior knowledge, COMET (Cao, Brbic, and Leskovec 2021) mapped some high-level visual concepts into a semi-structured metric space and then learned an ensemble classifier by combining the outputs of independent concept learners. Tang et al. (Tang, Wertheimer, and Hariharan 2020) also used a semi-structured feature space based on independent prior-knowledge concepts to perform pose normalization for fine-grained tasks. Our work does not intend to propose new metrics, but focuses on extracting the class-specific features from the variations that distract the metric learning, thus improving few-shot classification. ### Disentangled Feature Representations Disentangled feature representation aims to learn an interpretable representation for image variants, which has been widely studied in tasks such as face generation (Chen et al. 2016), style translation (Lee et al. 2020; Liu et al. 2019), image restoration (Li et al. 2020b), video prediction (Hsieh et al. 2018) and image classification (Prabhudesai et al. 2021; Li et al. 2021). InfoGAN (Chen et al. 2016) applied an unsupervised method to learn interpretable and disentangled representations by maximizing mutual information. DRIT (Lee et al. 2020) embedded images into a content space and a domain-specific attribute space and applied a cycle consistency loss for style translation. FDR (Li et al. 2020b) applied channel-wise feature disentanglement to reduce the interference between hybrid distortions for hybrid-distorted image restoration. Li et al. (Li et al. 2021) proposed a disentangled-VAE to excavate category-distilling information from visual and semantic features for generalized zero-shot learning. It is noteworthy that the very recent D3DP (Prabhudesai et al. 2021) also adopted a feature disentangling scheme for few-shot detection and VQA, by dividing high-dimensional data (e.g., RGB-D) into individual objects and other attributes. 
However, our DFR significantly differs from D3DP in the following aspects: DFR is a general-purpose feature extractor for image classification, while D3DP only disentangles individual objects to tackle more specific object detection and VQA in 3D scenes. Besides, our DFR works on real images from few-shot benchmarks while D3DP only works with synthetic scenes. Moreover, the DFR framework is much simpler compared to D3DP, with fewer parameters and more efficient algorithms. Thus DFR can serve as an enhanced feature extractor for the classic backbones that are widely used in most of the existing few-shot methods. ## Proposed Method In this section, we start with a brief introduction to few-shot learning. Then the proposed DFR framework is explained in detail, followed by the loss function of our model and a discussion of why DFR works well. ### Problem Definition Given a training image set with the base classes $\mathcal{C}_{train}$, the few-shot image classification task aims to predict the novel classes $\mathcal{C}_{test}$ of the testing set, i.e., $\mathcal{C}_{train}\cap\mathcal{C}_{test}=\emptyset$. Thus, the classifier trained on $\mathcal{C}_{train}$ needs to be generalized to $\mathcal{C}_{test}$ in the testing stage with only a few labeled samples. In this paper, we follow the meta-learning strategy (i.e., the $N$-way $K$-shot setting) (Vinyals et al. 2016) to simulate meta-tasks in the training set that are similar to the few-shot setting at the testing stage, i.e., each meta-task $\mathcal{T}_{FS}$ contains a support set $\mathcal{S}$ and a query set $\mathcal{Q}$. The support set $\mathcal{S}$ contains $N$ classes with $K$ labeled samples each ($N$ and $K$ are both very small), and the query set $\mathcal{Q}$, with unlabeled query samples from the same $N$ classes, is used to evaluate the performance. ### Disentangled Feature Representation Figure 2 is an overview of the proposed DFR framework. With a few-shot task $\mathcal{T}_{FS}$ of support set $\mathcal{S}$ and query set $\mathcal{Q}$, the objective is to extract discriminative features for classification from the excursive information of each image $x_{i}$. The proposed DFR consists of two branches with two encoders, i.e., $E_{cls}$ and $E_{var}$ for the classification and variation branches, respectively, and one decoder $D$, as well as a discriminator with a gradient reverse layer and a relation module. Classification Branch. In principle, any classic metric-based backbone for few-shot learning can be applied as $E_{cls}$ in this branch, to extract the class-specific features of each $x_{i}$. In this work, the commonly-used ResNet-12 backbone is adopted as $E_{cls}$, and the classifier $f(\cdot)$ varies for the different few-shot learning baselines being used (e.g., ProtoNet (Snell, Swersky, and Zemel 2017), DeepEMD (Zhang et al. 2020) and FEAT (Ye et al. 2020) are applied in this work, with the corresponding models denoted as +DFR). Therefore, the query sample $x_{i}^{\mathcal{Q}}$ can be classified based on the support samples $x^{\mathcal{S}}$ as $\hat{y}_{i}=f(E_{cls}(x_{i}^{\mathcal{Q}});\\{E_{cls}(x^{\mathcal{S}}),y^{\mathcal{S}}\\}).$ (1) Variation Branch. The variation branch, which consists of an encoder $E_{var}$ followed by a discriminator, encodes the class-irrelevant information of image samples. The feature map dimension (i.e., $h\times w$) of $E_{var}(x_{i})$ is set to be higher than that of $E_{cls}(x_{i})$, to contain more excursive image features. (A minimal code sketch of the classification in Eq. (1) follows below.) 
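As referenced above, a minimal sketch of Eq. (1) with ProtoNet as the plug-in classifier $f$; the helper below is illustrative, not the authors' implementation:

```python
import torch

def protonet_classify(q_feat, s_feats, s_labels, n_way):
    """Eq. (1) with ProtoNet as f: a query feature is assigned to the class
    whose prototype (the class-mean support feature) is nearest in
    Euclidean distance."""
    protos = torch.stack([s_feats[s_labels == c].mean(dim=0)
                          for c in range(n_way)])
    dists = torch.cdist(q_feat.unsqueeze(0), protos).squeeze(0)
    return int(dists.argmin())
```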
Decoder Module. To preserve the image information and achieve feature disentanglement, a decoder module with an MLP module and a decoder network $D$ combines the classification and variation branches for image reconstruction and translation. Specifically, the output feature of the classification branch is fed to the MLP module $g$ to extract the class-specific information $(\mu,\sigma)$ of each sample, which scales the feature of the variation branch in the follow-up decoder. The decoder can reconstruct or translate the source image based on different sources of $(\mu,\sigma)$, as shown in Figure 2:

$\hat{x}_{i}=D(E_{var}(x_{i}),g(X)),$ (3)

where $X$ can be the feature of the $i$-th sample itself for self-reconstruction, the mean feature of class $y_{i}$ for class-reconstruction, or the feature of another class $y_{j}$ with $j\neq i$ for class-translation.
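To illustrate how the class-specific statistics $(\mu,\sigma)$ produced by $g$ can modulate the variation features, here is a minimal AdaIN-style sketch in PyTorch; the module names and layer sizes are our own illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adain(content, mu, sigma, eps=1e-5):
    """Instance-normalize each channel of `content` (N, C, H, W), then
    rescale with the class-specific statistics (mu, sigma) of shape (N, C)."""
    normalized = F.instance_norm(content, eps=eps)
    return sigma.unsqueeze(-1).unsqueeze(-1) * normalized + mu.unsqueeze(-1).unsqueeze(-1)

class StyleMLP(nn.Module):
    """Hypothetical two-layer MLP g mapping pooled classification features
    to the (mu, sigma) pair consumed by the decoder's AdaIN layers."""
    def __init__(self, in_dim, channels):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU(),
                                 nn.Linear(in_dim, 2 * channels))
        self.channels = channels

    def forward(self, f_cls):
        mu, sigma = self.net(f_cls).split(self.channels, dim=1)
        return mu, sigma
```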
### Loss Function

The objective function consists of the discriminative loss $L_{dis}$, the cross-entropy loss $L_{cls}$, the reconstruction loss $L_{rec}$ and the translation loss $L_{tran}$.

Discriminative Loss. To remove the class-specific information in the variation branch, we incorporate the binary cross-entropy loss to optimize the variation feature maps based on the score of the RelationNet, as

$L_{dis}=-\sum_{i=1}^{P}\left(l_{i}\cdot\log(s_{i})+(1-l_{i})\cdot\log(1-s_{i})\right),$ (4)

where $P$ denotes the number of training pairs, $s_{i}$ is the relation score of the $i$-th pair calculated by (2), and $l_{i}=0$ or $1$ indicates whether the $i$-th training pair is positive. We minimize $L_{dis}$ in training, and apply the GRL to reverse the gradient during back-propagation to achieve feature disentangling, i.e., to minimize the class-specific information captured by the variation feature.

Cross-Entropy Loss. To preserve class-related features for few-shot classification, we minimize the cross-entropy loss $L_{cls}$ of the classification branch over the query samples of all classes, as

$L_{cls}=-\sum_{i=1}^{Q}y_{i}\log P\left(\hat{y}_{i}=y_{i}\mid\mathcal{T}_{FS}\right),$ (5)

where $Q$ is the number of query samples in a meta-task $\mathcal{T}_{FS}$, and $y_{i}$ and $\hat{y}_{i}$ denote the true and predicted class labels of each query sample $x_{i}$, respectively.

Reconstruction and Translation Loss. To ensure that the disentangled classification and variation features can jointly restore the input image, an $\ell_{1}$-norm penalty for image reconstruction and a perceptual loss (Johnson, Alahi, and Fei-Fei 2016) are applied after decoding for self-reconstruction and class-reconstruction, as

$L_{rec}=\frac{1}{M}\sum_{i=1}^{M}\|x_{i}-\hat{x}_{i}\|_{1}+\frac{1}{M}\sum_{i=1}^{M}\|\phi(x_{i})-\phi(\hat{x}_{i}^{c_{i}})\|_{1},$ (6)

where $M$ denotes the number of samples in a meta-task $\mathcal{T}_{FS}$, and $\hat{x}_{i}$ and $\hat{x}_{i}^{c_{i}}$ are the reconstructed images of $x_{i}$ based on the feature of the $i$-th sample itself and the mean feature $c_{i}$ of class $y_{i}$ using (3), respectively. Moreover, the perceptual loss is also adopted to measure perceptual differences between the output image $\hat{x}_{i}^{c_{j}}$ and the support set of the $j$-th class for class-translation, to achieve feature disentanglement:

$L_{tran}=\frac{1}{N}\sum_{i=1}^{M}\sum_{l=1}^{K}\|\phi(x_{l}^{\mathcal{S}_{j}})-\phi(\hat{x}_{i}^{c_{j}})\|_{1}.$ (7)

The total loss for training DFR can be formulated as

$L_{total}=\lambda_{1}\cdot L_{dis}+\lambda_{2}\cdot L_{rec}+\lambda_{3}\cdot L_{tran}+L_{cls},$ (8)

where $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ denote the weights of $L_{dis}$, $L_{rec}$ and $L_{tran}$ relative to $L_{cls}$, respectively.
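Combining the objectives of Eq. (8) is straightforward in code. The sketch below mirrors the equation, with Eq. (4) expressed via the standard binary cross-entropy; the function names are ours:

```python
import torch.nn.functional as F

def discriminative_loss(scores, pair_labels):
    """Eq. (4): binary cross-entropy over the relation scores s_i in [0, 1],
    where pair_labels l_i mark whether the i-th pair shares a class.
    The sum reduction matches the summation over the P pairs in Eq. (4)."""
    return F.binary_cross_entropy(scores, pair_labels, reduction="sum")

def dfr_total_loss(l_dis, l_cls, l_rec, l_tran, lambdas=(1.0, 1.0, 1.0)):
    """Eq. (8): weighted combination of the four training objectives."""
    lambda1, lambda2, lambda3 = lambdas
    return lambda1 * l_dis + lambda2 * l_rec + lambda3 * l_tran + l_cls
```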
Figure 3: The t-SNE visualization of the feature representations: (a) the learned features of the ResNet-12 backbone for methods w/o DFR, (b) the output features of the classification branch, and (c) the output features of the variation branch.

Method | mini-ImageNet 5-way 1-shot | mini-ImageNet 5-way 5-shot | tiered-ImageNet 5-way 1-shot | tiered-ImageNet 5-way 5-shot
---|---|---|---|---
TADAM (Oreshkin, López, and Lacoste 2018) | 58.50 $\pm$ 0.30 | 76.70 $\pm$ 0.30 | - | -
AFHN (Li et al. 2020a) | 62.38 $\pm$ 0.72 | 78.16 $\pm$ 0.56 | - | -
MetaOptNet (Lee et al. 2019) | 62.64 $\pm$ 0.82 | 78.63 $\pm$ 0.46 | 65.99 $\pm$ 0.72 | 81.56 $\pm$ 0.53
DSN (Simon et al. 2020) | 62.64 $\pm$ 0.66 | 78.83 $\pm$ 0.45 | 66.22 $\pm$ 0.75 | 82.79 $\pm$ 0.48
MatchNet (Vinyals et al. 2016) | 63.08 $\pm$ 0.80 | 75.99 $\pm$ 0.60 | 68.50 $\pm$ 0.92 | 80.60 $\pm$ 0.71
E3BM (Liu, Schiele, and Sun 2020) | 63.80 $\pm$ 0.40 | 80.10 $\pm$ 0.30 | 71.20 $\pm$ 0.40 | 85.30 $\pm$ 0.30
CAN (Hou et al. 2019) | 63.85 $\pm$ 0.48 | 79.44 $\pm$ 0.34 | 69.89 $\pm$ 0.51 | 84.23 $\pm$ 0.37
CTM (Li et al. 2019) | 64.12 $\pm$ 0.82 | 80.51 $\pm$ 0.13 | 68.41 $\pm$ 0.39 | 84.28 $\pm$ 1.73
P-Transfer (Shen et al. 2021) | 64.21 $\pm$ 0.77 | 80.38 $\pm$ 0.59 | - | -
RFS (Tian et al. 2020) | 64.82 $\pm$ 0.60 | 82.14 $\pm$ 0.43 | 71.52 $\pm$ 0.69 | 86.03 $\pm$ 0.49
ConstellationNet (Xu et al. 2020) | 64.89 $\pm$ 0.23 | 79.95 $\pm$ 0.17 | - | -
FRN (Wertheimer, Tang, and Hariharan 2021) | 66.45 $\pm$ 0.19 | 82.83 $\pm$ 0.13 | 71.16 $\pm$ 0.22 | 86.01 $\pm$ 0.15
infoPatch (Liu et al. 2021) | 67.67 $\pm$ 0.45 | 82.44 $\pm$ 0.31 | 71.51 $\pm$ 0.52 | 85.44 $\pm$ 0.35
ProtoNet (Snell, Swersky, and Zemel 2017) | 61.83 $\pm$ 0.20 | 79.86 $\pm$ 0.14 | 66.84 $\pm$ 0.23 | 84.54 $\pm$ 0.16
ProtoNet + DFR | 64.84 $\pm$ 0.20 | 81.10 $\pm$ 0.14 | 70.22 $\pm$ 0.23 | 84.74 $\pm$ 0.16
DeepEMD (Zhang et al. 2020) | 64.93 $\pm$ 0.29 | 81.73 $\pm$ 0.57 | 70.47 $\pm$ 0.33 | 84.76 $\pm$ 0.61
DeepEMD + DFR | 65.41 $\pm$ 0.28 | 82.18 $\pm$ 0.55 | 71.56 $\pm$ 0.31 | 86.23 $\pm$ 0.58
FEAT (Ye et al. 2020) | 66.52 $\pm$ 0.20 | 81.46 $\pm$ 0.14 | 70.30 $\pm$ 0.23 | 84.55 $\pm$ 0.16
FEAT + DFR | 67.74 $\pm$ 0.86 | 82.49 $\pm$ 0.57 | 71.31 $\pm$ 0.93 | 85.12 $\pm$ 0.64

Table 1: Few-shot classification accuracy ($\%$) averaged over mini-ImageNet and tiered-ImageNet with the ResNet backbone.

### Why It Works

The DFR framework aims to extract only class-related information for classification. Different from other attempts toward more adaptive embeddings using attention mechanisms (Hou et al. 2019; Li et al. 2019; Ye et al. 2020), our classification and variation branches act as adversaries by minimizing $L_{cls}$ and $L_{dis}$ simultaneously. In practice, the classification and variation features of an image are always complementary, so the image reconstruction quality after fusion is enforced by minimizing $L_{rec}$. It is essential to preserve the image representation in DFR for few-shot classification: as the class-specific features can be task-varying and thus hard to generalize, any information loss throughout the internal flow may limit the model performance. Such a design is in contrast to the classic feature embedding for few-shot learning, in which image features are always projected onto lower-dimensional manifolds (Simon et al. 2020). Our $E_{cls}$ feature has a much lower dimension compared to the $E_{var}$ feature, as the class-irrelevant information (e.g., image style and background) is typically excessive. To this end, a more restrictive classification feature significantly reduces the model bias, and thus enhances its generalizability in few-shot tasks.

We visualize the feature distributions w/o and w/ DFR using t-SNE (Van der Maaten and Hinton 2008) to verify our intuition. Figure 3 (a) shows that the features extracted from the ResNet-12 backbone are less discriminative without the DFR framework. When applying the DFR framework, the classification-branch clusters in Figure 3 (b) are more separable from each other, and the output features of the variation branch in Figure 3 (c) contain more class-irrelevant information, which meets our expectations.

## Experiment

We conduct extensive experiments on two few-shot benchmarks, i.e., mini-ImageNet and tiered-ImageNet, for general few-shot classification tasks to evaluate the performance of our proposed DFR framework. After that, we introduce a novel FS-DomainNet dataset with two proposed evaluation settings for benchmarking the few-shot domain generalization (FS-DG) task. Moreover, we evaluate the performance of DFR on the CUB-200-2011 benchmark for the fine-grained few-shot classification task. (The code of the proposed DFR model and the FS-DomainNet dataset will be available at https://github.com/chengcv/DFRFS.)

### Implementation Details

We use the ResNet-12 network (Lee et al. 2019) as the backbone $E_{cls}$ of the classification branch and set the numbers of channels to $[64,160,320,640]$, similar to the competing methods. The encoder $E_{var}$ consists of four convolutional blocks and two residual blocks. The decoder contains a two-layer MLP block and a decoder network $D$ with residual blocks and upscale convolutional blocks. The I/O channel numbers of the variation encoder and decoder are all set to $128$. The gradient-scaling constant $\lambda$ of the GRL is set to $1$. Data augmentation including resizing, random cropping, color jitter and random flipping, following (Ye et al. 2020), is applied for all methods in training.
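Before detailing the optimization setup, the episodic protocol can be made concrete with a short sketch. The following is our own minimal illustration (with a hypothetical data structure) of how an $N$-way $K$-shot meta-task with 15 query images per class could be assembled:

```python
import random

def sample_episode(indices_by_class, n_way=5, k_shot=1, n_query=15, rng=random):
    """Assemble one N-way K-shot meta-task: for each of N sampled classes,
    draw K support and n_query query sample indices without overlap.
    `indices_by_class` maps each class label to a list of sample indices."""
    classes = rng.sample(sorted(indices_by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = rng.sample(indices_by_class[c], k_shot + n_query)
        support += [(i, c) for i in picked[:k_shot]]
        query += [(i, c) for i in picked[k_shot:]]
    return support, query
```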
Our models are all trained using the SGD optimizer, with weight decay $5\times 10^{-4}$ and momentum $0.9$. We conduct experiments under both the $5$-way $1$-shot and $5$-way $5$-shot settings with $15$ query images per class, i.e., $5\times 1+5\times 15$ and $5\times 5+5\times 15$ samples per meta-task for the $1$-shot and $5$-shot tasks, respectively. We report the mean accuracy over $10$k randomly sampled tasks, as well as the $95\%$ confidence intervals, on the testing set, as in (Ye et al. 2020; Zhang et al. 2020). To verify the effectiveness of our proposed DFR framework, we combine DFR with three few-shot algorithms: the commonly used baseline ProtoNet (Snell, Swersky, and Zemel 2017), and two state-of-the-art methods, DeepEMD (Zhang et al. 2020) and FEAT (Ye et al. 2020). (We utilized the official codes released by the authors for the implementations of ProtoNet, DeepEMD and FEAT and the corresponding DFR models. All results are obtained under this unified setting for fair comparison, and thus may not exactly match the results reported in the original papers.) Note that we only adopt the FCN version of DeepEMD for comparison over all datasets.

### General Few-shot Classification

We first conduct experiments on two general few-shot benchmarks: mini-ImageNet and tiered-ImageNet.

Mini-ImageNet. Mini-ImageNet (Vinyals et al. 2016) is a subset of the ILSVRC-12 challenge (Krizhevsky, Sutskever, and Hinton 2012) proposed for few-shot classification. It contains 100 diverse classes with 600 images of size $84\times 84\times 3$ in each category. Following the class-split setting (Ravi and Larochelle 2017) used in previous works, the 100 classes are divided into 64, 16 and 20 classes for training, validation and testing, respectively.

Tiered-ImageNet. Similar to mini-ImageNet, tiered-ImageNet (Ren et al. 2018) is also a subset of ILSVRC-12; it contains more classes, organized in a hierarchical structure, i.e., 608 classes from 34 top categories. We follow the setup proposed by (Ren et al. 2018) and split the 608 categories into 351, 97 and 160 for training, validation and testing, respectively.

The classification results are shown in Table 1. FEAT+DFR achieves state-of-the-art results on the mini-ImageNet benchmark, while DeepEMD+DFR achieves state-of-the-art results on the tiered-ImageNet benchmark. Moreover, we observe that the improvements from DFR are consistent across all baselines. By adopting the DFR framework, the 5-way 1-shot accuracies of ProtoNet are increased by $3.0\%$ and $3.4\%$ on mini-ImageNet and tiered-ImageNet, respectively, making it comparable even to more sophisticated methods. For the other two methods, DeepEMD and FEAT, which are the current state-of-the-art FS methods, the FS classification results are still boosted by $1\%$ on average after applying the DFR framework.

Data Split | Class (Train) | Class (Test) | Domain (Source) | Domain (Target)
---|---|---|---|---
Classic DG Setting | $-$ | $-$ | $\triangle$ | $\diamondsuit$
Classic FS Setting | $\triangle$ | $\mathcal{S},\mathcal{Q}$ | $-$ | $-$
FS-DG Setting A | $\triangle$ | $\mathcal{S},\mathcal{Q}$ | $\triangle$ | $\mathcal{S},\mathcal{Q}$
FS-DG Setting B | $\triangle$ | $\mathcal{S},\mathcal{Q}$ | $\triangle$ $\mathcal{S}$ | $\mathcal{Q}$

Table 2: Comparison of different settings for DG and FS tasks. $\triangle$: training data selection for DG and FS tasks. $\diamondsuit$: testing data selection for DG tasks. $\mathcal{S},\mathcal{Q}$: FS support and query data selection. The general FS and DG tasks do not split the domain and class sets, respectively.
### Few-shot Domain Generalization

Domain generalization (DG) aims to learn a domain-agnostic model from multiple source domains that can classify data from any target domain. DG tasks become more challenging when there is a class gap (besides the domain gap) between the training and testing sets, i.e., DG under the few-shot setting. General few-shot learning does not consider the influence of the domain gap, so FS models can hardly generalize to unseen domains. In this work, we consider a more challenging Few-Shot Domain Generalization (FS-DG) problem, i.e., both domain and class gaps exist between the training (source) and testing (target) sets.

In our FS-DG experiments (under both Setting A and Setting B), only training samples from the source domains are used in training. Specifically, an $N$-way $K$-shot FS-DG task contains support and query samples from $N$ classes of the source domains in the meta-training step, and the trained model then predicts the query labels over the testing classes of the target domain. Here we propose two FS-DG evaluation settings based on the domain of the support set $\mathcal{S}$: (1) Setting A, where the support set is drawn only from the target domain, and (2) Setting B, where the support set is drawn only from the source domains. Both settings evaluate the generalizability of the model, i.e., its ability to extract domain-invariant and class-specific features. A sketch of the two settings is given after the benchmark description below.

Recent works (Ye et al. 2020; Du et al. 2021) also attempted simple FS-DG tasks to evaluate their proposed FS models. However, only preliminary results were reported, following the simple setting (i.e., Setting B in Table 2), without a comprehensive investigation of the effect of the domain gap on the novel classes (test class set). We conduct experiments with the full evaluation settings to validate the proposed DFR on FS-DG tasks using a novel FS-DomainNet benchmark.

#### FS-DomainNet Benchmark.

We propose FS-DomainNet for benchmarking few-shot domain generalization. Different from the few-shot DomainNet of (Du et al. 2021), which only contains 200 classes with 1000 images per class, FS-DomainNet captures a much larger subset of DomainNet (Peng et al. 2019), i.e., 569,010 images from six distinct domains (Sketch, Quickdraw, Real, Painting, Clipart and Infograph) covering 345 object categories from 24 divisions. We reorganize it for few-shot learning and select all categories (i.e., 527,156 images of 299 classes) that include at least the number of samples (i.e., 20) required by the $5$-shot setting on each domain. We then split the 299 categories into 191, 47 and 61 for training, validation and testing, respectively, while maintaining a consistent class split on each domain. More detailed descriptions and data examples of FS-DomainNet are included in our Supplementary Materials.

Different from existing few-shot benchmarks, FS-DomainNet additionally includes objects collected from multiple domains, considering both the domain and class gaps, and the sample size varies greatly between categories, enabling more challenging FS-DG task settings. Additionally, FS-DomainNet can also be utilized for few-shot domain adaptation and general few-shot classification tasks.
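As an illustration of the two evaluation settings, the sketch below shows how the support set could be drawn at test time under Setting A versus Setting B; the data structure and function are hypothetical, and the domain-selection rules follow the experimental setups described in the next subsection:

```python
import random

def sample_fsdg_support(data, classes, source_domains, target_domain,
                        k_shot=1, setting="B", rng=random):
    """Draw the support set for one FS-DG test task.
    `data[domain][cls]` is assumed to list the sample indices of class
    `cls` within `domain`. Setting A: support only from the target domain.
    Setting B: support only from the source domains."""
    support = []
    for c in classes:
        if setting == "A":
            support += rng.sample(data[target_domain][c], k_shot)
        elif k_shot >= len(source_domains):
            # e.g. 5-shot with five source domains: one sample per domain.
            support += [rng.choice(data[d][c]) for d in source_domains]
        else:
            # e.g. 1-shot: one sample from one randomly chosen source domain.
            support += [rng.choice(data[d][c])
                        for d in rng.sample(source_domains, k_shot)]
    return support
```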
Method | Setting A 1-shot | Setting A 5-shot | Setting B 1-shot | Setting B 5-shot
---|---|---|---|---
MatchNet | 45.23 | 54.92 | 40.61 | 49.09
ProtoNet | 47.96 | 66.64 | 48.70 | 67.96
ProtoNet+DFR | 49.29 | 68.73 | 49.76 | 70.34
DeepEMD | 53.20 | 70.59 | 51.97 | 70.62
DeepEMD+DFR | 54.47 | 71.60 | 54.06 | 72.33
FEAT | 51.83 | 69.26 | 52.46 | 71.54
FEAT+DFR | 52.58 | 69.93 | 54.75 | 71.91

Table 3: FS-DG classification accuracy ($\%$) averaged on FS-DomainNet with two evaluation settings under the 5-way setting.

#### Experimental Setups.

Following the classic DG setting, we choose five of the six domains of FS-DomainNet as the source domains and the remaining one as the target domain. We report the FS-DG accuracy averaged over the splits with each of the six domains as the target domain.

For $1$-shot tasks, we randomly select one support sample from one random domain for each class; for $5$-shot tasks, we select one labeled sample from each source domain for each class, i.e., each meta-task contains the same number of support samples from each domain. For the query samples of each class under both the $1$- and $5$-shot settings, we select the same number of query samples from each domain, i.e., $|\mathcal{Q}|=3\times 5=15$ per class.

#### Results.

Table 3 shows the accuracy averaged over the six target domains on the FS-DomainNet benchmark for the two evaluation settings. DFR provides consistent improvements in classification accuracy for all FS baselines under both settings. Moreover, DFR provides a more significant boost to FS-DG performance under Setting B, thanks to its effective disentanglement of class-specific features. Compared to the 5-shot tests, DFR provides less help in the 1-shot tests, as learning from only one support sample from a random source domain per category is more challenging.

It is worth noting that both ProtoNet and FEAT perform better under Setting B, while DeepEMD generates better results under Setting A. This is due to the unique design of DeepEMD, which adapts a channel-wise EMD metric based on the feature maps and thus implicitly incorporates the similarity of domain information. Under FS-DG Setting A, the support and query data are from the same domain, which is, in fact, advantageous for DeepEMD, while the domain gap between the support and query sets degrades DeepEMD's performance under Setting B. After applying the proposed DFR, the feature map of the classification branch discards this interfering information, which improves DeepEMD under both settings. More experimental results and analysis on the FS-DomainNet dataset can be found in our Supplementary Materials.
Method | CUB 5-way 1-shot | CUB 5-way 5-shot
---|---|---
RelationNet (Sung et al. 2018) | 66.20 $\pm$ 0.99 | 82.30 $\pm$ 0.58
MAML (Finn, Abbeel, and Levine 2017) | 67.28 $\pm$ 1.08 | 83.47 $\pm$ 0.59
MatchNet (Vinyals et al. 2016) | 71.87 $\pm$ 0.85 | 85.08 $\pm$ 0.57
COMET (Cao, Brbic, and Leskovec 2021) | 72.20 $\pm$ 0.90 | 87.60 $\pm$ 0.50
P-Transfer (Shen et al. 2021) | 73.88 $\pm$ 0.92 | 87.81 $\pm$ 0.48
ProtoNet (Snell, Swersky, and Zemel 2017) | 72.25 $\pm$ 0.21 | 87.47 $\pm$ 0.13
ProtoNet+DFR | 73.52 $\pm$ 0.21 | 87.90 $\pm$ 0.13
DeepEMD (Zhang et al. 2020) | 74.88 $\pm$ 0.30 | 88.52 $\pm$ 0.52
DeepEMD+DFR | 76.78 $\pm$ 0.29 | 89.19 $\pm$ 0.52
FEAT (Ye et al. 2020) | 75.68 $\pm$ 0.20 | 87.91 $\pm$ 0.13
FEAT+DFR | 77.14 $\pm$ 0.21 | 88.97 $\pm$ 0.13

Table 4: Fine-grained few-shot classification accuracy ($\%$) averaged on CUB with the ResNet backbone.

### Fine-grained Few-shot Classification

We further evaluate DFR on a fine-grained benchmark, Caltech-UCSD Birds 200-2011 (CUB) (Wah et al. 2011), which was initially proposed for fine-grained image classification and contains 200 bird species with 11,788 images. Following the split in (Chen et al. 2019; Hilliard et al. 2018), the 200 classes are divided into 100, 50 and 50 for training, validation and testing, respectively. We also pre-process the data by cropping each image with the provided bounding box, following prior work (Ye et al. 2020; Wertheimer, Tang, and Hariharan 2021).

Table 4 reports the fine-grained few-shot classification results for both the 5-way 1-shot and 5-way 5-shot tests. Compared to the general and multi-domain few-shot benchmarks, which contain significant differences between categories, fine-grained classification involves only minor differences between classes. Moreover, the domain information in a fine-grained dataset may itself be correlated with the category, making it a challenging task. The proposed DFR significantly and consistently boosts all FS baselines, with $0.5\%$ to $1.9\%$ additional improvement on the CUB dataset. This demonstrates that DFR can effectively remove the excursive features and thus highlight the subtle traits that are critical for fine-grained FS classification.

### Ablation Study

DFR | $\lambda_{1}$ | $\lambda_{2}$ | $\lambda_{3}$ | 1-shot | 5-shot
---|---|---|---|---|---
✗ | - | - | - | 66.52 | 81.46
✓ | 1.0 | - | 1.0 | 66.75 | 81.98
✓ | 1.0 | 1.0 | - | 66.99 | 82.16
✓ | 1.0 | 1.0 | 1.0 | 67.74 | 82.49

Table 5: Ablation study on the mini-ImageNet dataset of FEAT with the proposed DFR framework.

We investigate the loss weights in our formulation, with FEAT as the baseline method. Table 5 shows that FEAT+DFR achieves the best performance when the weighting parameters are all set to $1.0$. Compared with $L_{rec}$ and $L_{tran}$, the discriminative loss $L_{dis}$ has a more significant impact on performance, as it controls how much class-specific information is removed from the variation branch, which is directly related to the classification ability of the classification branch. Overall, we find that the performance is only minimally affected by the loss weights, which also shows the robustness of our framework.

## Conclusion

We propose a novel and effective Disentangled Feature Representation (DFR) framework for few-shot image classification. Unlike feature embeddings that may encode excursive image information, such as background and domain, the proposed DFR aims to extract the class-specific features that are essential in most few-shot learning pipelines. Furthermore, to tackle the challenges of the domain gap in few-shot learning, we propose a novel benchmark dataset (FS-DomainNet) for the few-shot domain generalization task. We have studied the importance of applying DFR in few-shot tasks by visualizing the t-SNE of the extracted features w/o DFR and of the disentangled features from the classification and variation branches. Experimental results on four datasets, covering three tasks (general image classification, fine-grained classification, and domain generalization) under the few-shot setting, demonstrate the effectiveness of the proposed DFR framework.

## References

* Cao, Brbic, and Leskovec (2021) Cao, K.; Brbic, M.; and Leskovec, J. 2021.
Concept Learners for Few-Shot Learning. In _International Conference on Learning Representations_.
* Chen et al. (2019) Chen, W.-Y.; Liu, Y.-C.; Kira, Z.; Wang, Y.-C. F.; and Huang, J.-B. 2019. A Closer Look at Few-shot Classification. In _International Conference on Learning Representations_.
* Chen et al. (2016) Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; and Abbeel, P. 2016. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, 2180–2188.
* Du et al. (2021) Du, Y.; Zhen, X.; Shao, L.; and Snoek, C. G. M. 2021. MetaNorm: Learning to Normalize Few-Shot Batches Across Domains. In _International Conference on Learning Representations_.
* Fei-Fei, Fergus, and Perona (2006) Fei-Fei, L.; Fergus, R.; and Perona, P. 2006. One-shot learning of object categories. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 28(4): 594–611.
* Finn, Abbeel, and Levine (2017) Finn, C.; Abbeel, P.; and Levine, S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _International Conference on Machine Learning_, 1126–1135. PMLR.
* Gidaris and Komodakis (2019) Gidaris, S.; and Komodakis, N. 2019. Generating classification weights with GNN denoising autoencoders for few-shot learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 21–30.
* Hariharan and Girshick (2017) Hariharan, B.; and Girshick, R. 2017. Low-shot visual recognition by shrinking and hallucinating features. In _Proceedings of the IEEE International Conference on Computer Vision_, 3018–3027.
* Hilliard et al. (2018) Hilliard, N.; Phillips, L.; Howland, S.; Yankov, A.; Corley, C. D.; and Hodas, N. O. 2018. Few-shot learning with metric-agnostic conditional embeddings. _arXiv preprint arXiv:1802.04376_.
* Hou et al. (2019) Hou, R.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2019. Cross attention network for few-shot classification. In _Proceedings of the 33rd International Conference on Neural Information Processing Systems_, 4003–4014.
* Hsieh et al. (2018) Hsieh, J.-T.; Liu, B.; Huang, D.-A.; Fei-Fei, L.; and Niebles, J. C. 2018. Learning to decompose and disentangle representations for video prediction. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_, 515–524.
* Jaderberg et al. (2015) Jaderberg, M.; Simonyan, K.; Zisserman, A.; and Kavukcuoglu, K. 2015. Spatial transformer networks. In _Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2_, 2017–2025.
* Johnson, Alahi, and Fei-Fei (2016) Johnson, J.; Alahi, A.; and Fei-Fei, L. 2016. Perceptual losses for real-time style transfer and super-resolution. In _European Conference on Computer Vision_, 694–711. Springer.
* Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. In _Advances in Neural Information Processing Systems_, 1097–1105.
* Lee et al. (2020) Lee, H.-Y.; Tseng, H.-Y.; Mao, Q.; Huang, J.-B.; Lu, Y.-D.; Singh, M.; and Yang, M.-H. 2020. DRIT++: Diverse image-to-image translation via disentangled representations. _International Journal of Computer Vision_, 128(10): 2402–2417.
* Lee et al. (2019) Lee, K.; Maji, S.; Ravichandran, A.; and Soatto, S. 2019. Meta-learning with differentiable convex optimization.
In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 10657–10665.
* Li et al. (2019) Li, H.; Eigen, D.; Dodge, S.; Zeiler, M.; and Wang, X. 2019. Finding task-relevant features for few-shot learning by category traversal. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 1–10.
* Li et al. (2020a) Li, K.; Zhang, Y.; Li, K.; and Fu, Y. 2020a. Adversarial feature hallucination networks for few-shot learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 13470–13479.
* Li et al. (2020b) Li, X.; Jin, X.; Lin, J.; Liu, S.; Wu, Y.; Yu, T.; Zhou, W.; and Chen, Z. 2020b. Learning Disentangled Feature Representation for Hybrid-distorted Image Restoration. In _European Conference on Computer Vision_, 313–329. Springer.
* Li et al. (2021) Li, X.; Xu, Z.; Wei, K.; and Deng, C. 2021. Generalized Zero-Shot Learning via Disentangled Representation. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, 1966–1974.
* Liu et al. (2021) Liu, C.; Fu, Y.; Xu, C.; Yang, S.; Li, J.; Wang, C.; and Zhang, L. 2021. Learning a Few-shot Embedding Model with Contrastive Learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, 8635–8643.
* Liu et al. (2019) Liu, M.-Y.; Huang, X.; Mallya, A.; Karras, T.; Aila, T.; Lehtinen, J.; and Kautz, J. 2019. Few-shot unsupervised image-to-image translation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 10551–10560.
* Liu, Schiele, and Sun (2020) Liu, Y.; Schiele, B.; and Sun, Q. 2020. An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning. In _European Conference on Computer Vision (ECCV)_.
* Nichol, Achiam, and Schulman (2018) Nichol, A.; Achiam, J.; and Schulman, J. 2018. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_.
* Oreshkin, López, and Lacoste (2018) Oreshkin, B. N.; López, P. R.; and Lacoste, A. 2018. TADAM: Task dependent adaptive metric for improved few-shot learning. In _NeurIPS_.
* Peng et al. (2019) Peng, X.; Bai, Q.; Xia, X.; Huang, Z.; Saenko, K.; and Wang, B. 2019. Moment matching for multi-source domain adaptation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 1406–1415.
* Prabhudesai et al. (2021) Prabhudesai, M.; Lal, S.; Patil, D.; Tung, H.-Y.; Harley, A. W.; and Fragkiadaki, K. 2021. Disentangling 3D Prototypical Networks for Few-Shot Concept Learning. In _International Conference on Learning Representations_.
* Ravi and Larochelle (2017) Ravi, S.; and Larochelle, H. 2017. Optimization as a model for few-shot learning. In _International Conference on Learning Representations_.
* Ren et al. (2018) Ren, M.; Triantafillou, E.; Ravi, S.; Snell, J.; Swersky, K.; Tenenbaum, J. B.; Larochelle, H.; and Zemel, R. S. 2018. Meta-learning for semi-supervised few-shot classification. In _ICLR_.
* Rusu et al. (2019) Rusu, A. A.; Rao, D.; Sygnowski, J.; Vinyals, O.; Pascanu, R.; Osindero, S.; and Hadsell, R. 2019. Meta-Learning with Latent Embedding Optimization. In _International Conference on Learning Representations_.
* Shen et al. (2021) Shen, Z.; Liu, Z.; Qin, J.; Savvides, M.; and Cheng, K.-T. 2021. Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, 9594–9602.
* Simon et al. (2020) Simon, C.; Koniusz, P.; Nock, R.; and Harandi, M. 2020. Adaptive subspaces for few-shot learning.
In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 4136–4145.
* Snell, Swersky, and Zemel (2017) Snell, J.; Swersky, K.; and Zemel, R. 2017. Prototypical networks for few-shot learning. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_, 4080–4090.
* Sung et al. (2018) Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P. H.; and Hospedales, T. M. 2018. Learning to Compare: Relation Network for Few-Shot Learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_.
* Tang, Wertheimer, and Hariharan (2020) Tang, L.; Wertheimer, D.; and Hariharan, B. 2020. Revisiting pose-normalization for fine-grained few-shot recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 14352–14361.
* Tian et al. (2020) Tian, Y.; Wang, Y.; Krishnan, D.; Tenenbaum, J. B.; and Isola, P. 2020. Rethinking few-shot image classification: a good embedding is all you need? In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16_, 266–282. Springer.
* Tokmakov, Wang, and Hebert (2019) Tokmakov, P.; Wang, Y.-X.; and Hebert, M. 2019. Learning compositional representations for few-shot recognition. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 6372–6381.
* Van der Maaten and Hinton (2008) Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. _Journal of Machine Learning Research_, 9(11).
* Vinyals et al. (2016) Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; and Wierstra, D. 2016. Matching networks for one shot learning. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, 3637–3645.
* Wah et al. (2011) Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 dataset.
* Wang et al. (2018) Wang, Y.-X.; Girshick, R.; Hebert, M.; and Hariharan, B. 2018. Low-shot learning from imaginary data. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 7278–7286.
* Wertheimer, Tang, and Hariharan (2021) Wertheimer, D.; Tang, L.; and Hariharan, B. 2021. Few-Shot Classification With Feature Map Reconstruction Networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 8012–8021.
* Xu et al. (2020) Xu, W.; Wang, H.; Tu, Z.; et al. 2020. Attentional Constellation Nets for Few-Shot Learning. In _International Conference on Learning Representations_.
* Yang, Liu, and Xu (2021) Yang, S.; Liu, L.; and Xu, M. 2021. Free Lunch for Few-shot Learning: Distribution Calibration. In _International Conference on Learning Representations_.
* Ye et al. (2020) Ye, H.-J.; Hu, H.; Zhan, D.-C.; and Sha, F. 2020. Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 8808–8817.
* Zhang et al. (2020) Zhang, C.; Cai, Y.; Lin, G.; and Shen, C. 2020. DeepEMD: Few-Shot Image Classification With Differentiable Earth Mover's Distance and Structured Classifiers. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Zhao et al. (2021) Zhao, J.; Yang, Y.; Lin, X.; Yang, J.; and He, L. 2021. Looking Wider for Better Adaptive Representation in Few-Shot Learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, 10981–10989.
# Human mobility is well described by closed-form gravity-like models learned automatically from data

Oriol Cabanas-Tirapu Department of Chemical Engineering, Universitat Rovira i Virgili, 43007 Tarragona, Catalonia

Lluís Danús Department of Chemical Engineering, Universitat Rovira i Virgili, 43007 Tarragona, Catalonia

Esteban Moro Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139; Department of Mathematics and GISC, Universidad Carlos III de Madrid, 28911 Leganés, Spain; Network Science Institute, Northeastern University, Boston, MA 02115, United States

Marta Sales-Pardo Department of Chemical Engineering, Universitat Rovira i Virgili, 43007 Tarragona, Catalonia

Roger Guimerà Department of Chemical Engineering, Universitat Rovira i Virgili, 43007 Tarragona, Catalonia; ICREA, 08007 Barcelona, Catalonia

Corresponding authors: Marta Sales-Pardo (E-mail: <EMAIL_ADDRESS>) and Roger Guimerà (E-mail: <EMAIL_ADDRESS>)

###### Abstract

Modeling of human mobility is critical to address questions in urban planning and transportation, as well as global challenges in sustainability, public health, and economic development. However, our understanding and ability to model mobility flows within and between urban areas are still incomplete. At one end of the modeling spectrum we have simple so-called gravity models, which are easy to interpret and provide modestly accurate predictions of mobility flows. At the other end, we have complex machine learning and deep learning models, with tens of features and thousands of parameters, which predict mobility more accurately than gravity models at the cost of not being interpretable and not providing insight on human behavior. Here, we show that simple machine-learned, closed-form models of mobility are able to predict mobility flows more accurately, overall, than either gravity or complex machine and deep learning models. At the same time, these models are simple and gravity-like, and can be interpreted in terms similar to standard gravity models. Furthermore, these models work for different datasets and at different scales, suggesting that they may capture the fundamental universal features of human mobility.

## Introduction

Accurate models of population mobility within and between municipalities are critical to address questions in urban planning and transportation engineering. Additionally, since municipalities are the main ground on which societies and cultures develop today, such mobility models are also instrumental in addressing global challenges in sustainability, public health, and economic development.

Two main factors have driven recent interest in modeling human mobility patterns [1, 2, 3, 4, 5]. First, accurate models of human mobility could help identify transportation needs [6], allocate services and amenities (shopping, health, parks) more efficiently [7], or even understand and eventually alleviate problems like segregation [8] or epidemic spreading [9]. Second, models of human mobility can help identify the main behavioral components driving people to make large displacements to, for example, buy a new product, find a new house, or use physical activity spaces. Better behavioral models can help us implement more efficient policies to change people's behavior, rather than urban environments, in favor of more sustainable attitudes.
Figure 1: Modeling approaches for mobility flows between test municipalities in Texas, US. (A) Real mobility flows between municipalities in the test set in Texas, US (Methods). For each flow, we consider origin $o$ and destination $d$ features, such as population $m_{o/d}$, aggregate statistics about points of interest (POI), and the distance between them. (B) Flows predicted by the deep gravity model [10], which uses a total of 39 features from origin and destination. (C) Flows predicted by the closed-form, median predictive model identified by the Bayesian machine scientist (BMS; see text). This model only uses the population of origin and destination, as well as the distance between them (Fig. 6B).

Despite these considerations, our understanding of the mobility flows within and between urban areas is still incomplete. One of the earliest and most fruitful attempts to model mobility flows between municipalities is the so-called gravity model [1]. This model assumes that mobility flows depend solely on the attractiveness or opportunities of the municipalities of origin and destination (for which population is typically used as a proxy) and the geographical distance between them, in a fashion that is mathematically similar to Newton's law of gravitation. In its different incarnations and refined versions [11, 4, 12], the gravity model provides a simple phenomenological description of a very complex phenomenon. Because of this, while gravity models are not without their limitations, they are very often used in urban design, transportation, or even commercial applications. Recently, deep learning algorithms have been proposed, extending the ideas underlying gravity models; they incorporate many other features besides the populations of the origin and destination municipalities and their distance [10]. Although those sophisticated machine learning tools are more accurate at predicting flows between urban areas, they lack the explanatory power, analytical tractability, adaptability to different contexts, and connection to human decision-making of simple gravity models.

Given the reasonable success of simple gravity models in explaining flows in urban areas, here we investigate the fundamental question of whether we really need non-interpretable models that are much more complex than the gravity law to delve deeper into the essence of urban mobility. Unlike other behavioral models, gravity mobility models are phenomenological. Because of the lack of precise theoretical underpinnings, their predictive ability depends on the exact functional specification of the dependency of the mobility flows on the model features; that is, the mathematical dependency on origin and destination populations and distance. Here, we leverage recent developments in Bayesian symbolic regression to obtain closed-form, interpretable models [13] of mobility from data in a principled and automatic fashion [14, 15, 16].

In particular, we systematically compare the performance at predicting mobility flows of simple gravity models, complex machine learning and deep learning methods, and closed-form, interpretable models obtained through Bayesian symbolic regression (Fig. 1). We find that the Bayesian symbolic regression approach yields simple models that perform better, overall, than any of the other modeling approaches. Our approach is able to learn accurate models that, like gravity models, solely take into account the origin and destination populations and the geographical distance between them. Importantly, the learned models are gravity-like in their mathematical dependencies on populations and distance. Furthermore, exploration of the relationship between the contribution of the populations of municipalities and their relative distance reveals common patterns in all the datasets, which suggests a close-to-universal relationship between mobility flows and these variables.
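As a point of reference for the comparisons that follow, the power-law form of the gravity model can be written in a few lines; the sketch below uses illustrative, not fitted, parameter values and hypothetical input data:

```python
import numpy as np

def gravity_flow(m_o, m_d, d_od, k=1.0, a=1.0, b=1.0, c=2.0):
    """Power-law gravity model: T_od = k * m_o**a * m_d**b / d_od**c.
    All parameter values are illustrative defaults, not fitted values."""
    return k * m_o**a * m_d**b / d_od**c

# Hypothetical populations and distances (km) for two origin-destination pairs.
m_o = np.array([120_000.0, 45_000.0])
m_d = np.array([800_000.0, 9_000.0])
d_od = np.array([35.0, 12.5])
print(gravity_flow(m_o, m_d, d_od))
```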
## Results

### A Bayesian machine scientist learns closed-form mathematical models from mobility data

We aim to determine whether it is possible to model mobility flows by means of closed-form mathematical models that are interpretable like gravity models, and as predictive as (non-interpretable) machine learning models such as the deep gravity model [10]. To automatically learn such closed-form models from data, we use the so-called Bayesian machine scientist (BMS) [14]. Given a dataset $D$, the BMS samples closed-form mathematical models from the posterior distribution $p(M|D)$, which gives the probability that a given model $M$ is the true generating model given the data (Methods). The BMS is guaranteed to asymptotically identify the true generating model, if one exists, and makes quasi-optimal predictions for unobserved data [16].

We consider as our main dataset $D$ the set of flows $T_{od}$ between origin $o$ and destination $d$ municipalities in six states in the USA (New York, Massachusetts, California, Florida, Washington, and Texas; see Data). The BMS is fed with $D$, and samples closed-form models from $p(M|D)$ using Markov chain Monte Carlo [14] (Methods). This sampling yields an ensemble of hundreds of different closed-form models for the flows $T_{od}$, such as, for example,

$\log T_{od}=A\left(1+\frac{B\left(\left(m_{d}+C\right)\left(m_{o}+D\right)\right)^{\beta}}{d_{od}}\right)^{\xi}\quad{\rm or}\quad\log T_{od}=\log\left[A\left(\frac{B\left(m_{d}m_{o}+Cm_{d}+D\right)}{d_{od}^{\alpha}}+1\right)^{\gamma}\right]\;,$ (1)

where $m_{o/d}$ is the population of the origin/destination municipality, $d_{od}$ is the distance between them, and $A$, $B$, $C$, $D$, $\alpha$, $\beta$, $\gamma$ and $\xi$ are model parameters. These models are able to make predictions of test flows (not seen by the BMS during training) that follow real values over several orders of magnitude (Figs. 1 and 2M-R). In what follows, we analyze in more depth this ensemble of models and its predictive abilities, vis-à-vis gravity models and machine learning models such as the deep gravity model.
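The two example expressions in Eq. (1) translate directly into code. The sketch below implements them as plain functions; the parameter values that would be plugged in are state-specific fitted constants that we do not reproduce here:

```python
import numpy as np

def log_flow_example_1(m_o, m_d, d_od, A, B, C, D, beta, xi):
    """First example model in Eq. (1); returns log T_od."""
    return A * (1.0 + B * ((m_d + C) * (m_o + D))**beta / d_od)**xi

def log_flow_example_2(m_o, m_d, d_od, A, B, C, D, alpha, gamma):
    """Second example model in Eq. (1); returns log T_od."""
    return np.log(A * (B * (m_d * m_o + C * m_d + D) / d_od**alpha + 1.0)**gamma)
```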
### Different models capture flows at different scales

Figure 2: Model predictions of flows between municipalities. Each panel shows, in logarithmic scale, the scatter plot of predicted flows between municipalities versus the corresponding real flows, for different states in the US (columns). Plots show results for test data for different models (rows): (A-F) the deep gravity model, (G-L) a random forest regressor, (M-R) the most plausible model sampled by the Bayesian machine scientist, and (S-X) a gravity model in its power-law version. Supplementary Fig. S1 shows scatter plots for the full set of models we consider (see Methods for a complete description of the models and their parameters).

In order to compare the ability of modeling approaches to describe mobility data, one needs a model selection criterion. In probabilistic terms, selecting the best model amounts to selecting the most plausible model, that is, the model that has the highest probability $p(M|D)$ of being the true generating model given the observed data; or, equivalently, the model with the shortest description length (Eqs. (2) and (3)). However, this criterion is not always applicable in practice, because often it is not possible to compute the description length of a model, as happens for deep learning and most other machine learning models. Alternatively, one can measure performance at certain predictive tasks [17], which is the approach typically taken in mobility modeling studies, and the one we take here. Specifically, for each state for which we have data, we split municipalities into two sets. Flows between municipalities in the first set comprise the training set, and flows between municipalities in the second set comprise the test set. By building the training and test sets in this way, all the information about the municipalities in the test set, their characteristics, and the distances between them is completely new to the trained algorithm.
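A minimal sketch of this municipality-level split (our own illustration; note that flows with one endpoint in each set belong to neither the training nor the test set):

```python
import random

def split_flows_by_municipality(flows, test_fraction=0.5, rng=random):
    """Split a list of (origin, destination, T_od) flows so that the
    train and test sets share no municipalities. The fraction is illustrative."""
    municipalities = sorted({m for o, d, _ in flows for m in (o, d)})
    test_munis = set(rng.sample(municipalities,
                                int(test_fraction * len(municipalities))))
    train = [f for f in flows if f[0] not in test_munis and f[1] not in test_munis]
    test = [f for f in flows if f[0] in test_munis and f[1] in test_munis]
    return train, test
```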
We compare the closed-form mobility models identified by the BMS to two alternative approaches. First, we consider gravity models, in which mobility flows are directly proportional to the product of masses (that is, populations) at the origin and destination, and inversely proportional to the distance between them. These approaches include traditional gravity models [1], as well as the closely related radiation model [4]. Second, we consider machine learning approaches that, besides the population of and distance between municipalities, also consider additional characteristics of municipalities such as the density of shops, entertainment venues, or educational facilities (Methods). Specifically, we consider a random forest regression model [18] and the deep gravity model [10]. From the ensemble of closed-form models sampled by the BMS, we analyze (Methods): (i) the most plausible (minimum description length) model found by the BMS; (ii) the median of the ensemble of models sampled by the BMS (which is the optimal predictor); and (iii) what we call the median predictive model, that is, the single model in the ensemble of sampled models whose predictions are closest to the ensemble median.

In Fig. 2, we show the predicted flows versus the real flows in the test set (see Supplementary Fig. S1 for results for additional models). Whereas all models are predictive, we find that different models differ in their ability to describe mobility flows of different orders of magnitude. For instance, gravity-like models (Fig. 2 S-X and Supplementary Figs. S1 and S2) are typically good at capturing the behavior of large flows, but not of small flows. Indeed, for small flows (fewer than around 100 commuters) these approaches tend to underestimate flows, in some cases by several orders of magnitude, and even predict flows smaller than 1 person (Supplementary Fig. S2). This is also the case for the deep gravity model, which again under-predicts small flows (Figs. 1 and 2 A-F). By contrast, neither the random forest nor the BMS suffers from this caveat, and both capture the whole range of flows more consistently and without large systematic deviations (Figs. 1 and 2 G-R).

### Simple closed-form models are overall more accurate than gravity and non-interpretable machine learning models

Next, we quantify the performance of the models at the task of predicting unobserved flows. To that end, and considering the qualitative results in the previous section, we compute several complementary performance metrics. First, we consider the common part of commuters (CPC; see Methods), which is a usual choice in the mobility literature[10]. The CPC measures the overlap between predicted and observed flows, and can take values from 0 to 1; the larger the CPC, the better the predictions. Despite its popularity, this metric favors models that predict the larger flows well, but overlooks errors in small flows (Fig. 3A-F). Since mobility flows typically span several orders of magnitude (Fig. 2), the models with the largest CPC are not necessarily the best models for the whole range of flows.

To have performance metrics that cover the whole range of flows, we consider, in addition and complementary to the CPC, the absolute error, the absolute relative error, and the absolute log-ratio (Fig. 3). For each of these metrics, and to avoid the disproportionate influence of singular large errors (especially for non-relative quantities such as the absolute error), we always show the whole distribution of error values (as a boxplot), and use the median value to compare models (Fig. 3); the lower the median, the better the performance of the model. Note that these metrics highlight different aspects of the prediction. Absolute errors are correlated with the magnitude of the flow we are trying to predict, so that errors are typically larger for larger flows. Because of this, and similar to the CPC, average absolute errors are very sensitive to the errors in predicting large flows but not to errors in small flows. For the same reason, median values of the absolute error typically reflect errors for typical flow values, and do not reflect the ability of a model to predict values in the whole range of flows. The absolute relative error and the absolute log-ratio do take into account the effect of the magnitude of the flow, and therefore are more informative of the global behavior of a model when the range of flows spans several orders of magnitude (Fig. 3M-X). An issue with the relative error is that, while it penalizes over-prediction, it does not penalize under-prediction; in the extreme case in which the predicted flow equals zero and the real flow is larger than zero, the relative error is equal to one. As a result, distributions of relative errors for gravity-like models and the deep gravity model, in which small flows are under-predicted, are centered around 1 (Fig. 3S-X; Supplementary Fig. S2). By contrast, the absolute log-ratio has the property that over- and under-prediction are equally penalized (in a logarithmic scale); that is, predicting the real flow multiplied or divided by the same factor results in the same absolute log-ratio. This metric therefore captures the ability of a model to predict flows in any range of values.
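For reference, the four metrics can be computed in a few lines; the CPC formula below follows its standard definition in the mobility literature (the paper's own definition is in its Methods, not reproduced here):

```python
import numpy as np

def cpc(t_pred, t_real):
    """Common part of commuters:
    2 * sum(min(pred, real)) / (sum(pred) + sum(real)); 1 means perfect overlap."""
    return 2.0 * np.minimum(t_pred, t_real).sum() / (t_pred.sum() + t_real.sum())

def error_distributions(t_pred, t_real):
    """Per-flow errors whose full distributions (and medians) are compared in Fig. 3."""
    abs_err = np.abs(t_pred - t_real)
    # Bounded by 1 for under-prediction, unbounded for over-prediction.
    rel_err = abs_err / t_real
    # Symmetric: over- and under-prediction by the same factor score equally.
    log_ratio = np.abs(np.log(t_pred / t_real))
    return abs_err, rel_err, log_ratio
```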
Figure 3: Model performance at predicting flows between municipalities. For each one of the model predictions shown in Fig. 2 and Supplementary Fig. S1, we assess model performance using four different metrics: A-F, common part of commuters; G-L, absolute error; M-R, absolute relative error; S-X, absolute log-ratio. Each row corresponds to a different US state, as indicated. The common part of commuters is a global metric, so we have a single value for each model. For the other three metrics, we show the median, the 50% confidence interval (box) and the 95% confidence interval (whiskers). Triangles ($\blacktriangleleft$) indicate the best performing model for each metric (largest CPC or lowest median). See Methods and text for the definition and discussion of the different metrics. See Supplementary Table S3 for numerical values.

Figure 4: Performance for different flow ranges. The observed flows between municipalities span six orders of magnitude and are distributed, over the six states we consider, as shown by the histogram. We measure the performance of the different models at predicting flows within each bin of flows: A, CPC; B, absolute error; C, absolute relative error; D, absolute log-ratio.

Using these metrics, we compare the different modeling approaches (Fig. 3). The first conclusion from this comparison is that gravity models, including the radiation model, are never optimal; for all states and metrics we consider, there is always at least one other model that performs better. This is not surprising, since these models are simple and highly stylized, and they have already been shown to make less accurate predictions than deep gravity models [10].

Perhaps more surprisingly, we find that the closed-form mathematical models obtained by the BMS perform, overall, at least as well as or even slightly better than machine learning models, even when the latter are much more complex and can use many features other than population and distance. For the CPC, machine learning models are the best performing models for three out of six states (the random forest is best in two states, and deep gravity in one), whereas the closed-form models identified as most plausible by the BMS are optimal in the three remaining states. In the case of California, the BMS overestimates a few extremely large flows (hence the CPC $\simeq 0$), but is able to describe all other flows well. Similarly, for the relative error, random forest models are optimal in half of the states, whereas closed-form BMS models are optimal in the other half. When models are compared in terms of both the absolute error and the log-ratio, we find that closed-form BMS models are optimal in four of the states, whereas random forest models are optimal in the other two. Remarkably, deep gravity models are never optimal according to the absolute error, relative error or log-ratio. When considering how each model performs for flows in specific ranges (Fig. 4 and Supplementary Figure S3), we find that BMS models, similar to random forest models, are particularly good at modeling flows in the range that is most common in the data. Taking into account that both random forest and deep gravity models use many more features for their predictions (39 features in total, in contrast to the three features used by gravity models and the closed-form models identified by the BMS, namely origin and destination population, and origin-destination distance), we conclude that the symbolic regression approach using the BMS yields more parsimonious models of human mobility flows between municipalities.

Closed-form models obtained by the BMS also compare well to the alternatives in terms of the fairness of their predictions [19] (Supplementary Fig. S4). In particular, we find that the random forest and the models identified by the BMS are, overall, the most consistent models across states in terms of the fairness of their predictions.
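A minimal sketch of the per-range evaluation underlying Fig. 4, binning flows by order of magnitude and reporting the median absolute log-ratio per bin; the function and bin edges are our own illustration:

```python
import numpy as np

def median_log_ratio_by_bin(t_pred, t_real, n_bins=6):
    """Median absolute log-ratio within logarithmic bins of the real flow,
    mirroring the per-range comparison of Fig. 4."""
    edges = np.logspace(0.0, np.log10(t_real.max()), n_bins + 1)
    bin_idx = np.digitize(t_real, edges)
    log_ratio = np.abs(np.log(t_pred / t_real))
    return {b: np.median(log_ratio[bin_idx == b])
            for b in range(1, n_bins + 1) if np.any(bin_idx == b)}
```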
Figure 5: Model performance at predicting flows at small distances. We consider the data used by Simini et al. [10] about mobility flows between census tracts within small geographical (25 km$^{2}$) regions in New York state. We evaluate predictions over test data using the same metrics as in Fig. 3. Different columns correspond to different metrics, and triangles ($\blacktriangleleft$) indicate the best performing model according to each metric. See Methods and text for a complete description of the metrics, the models, and the data.

### Closed-form models also describe flows at shorter scales

So far, we have analyzed within-state flows between municipalities, at any range of distances and of any size. However, the geographical, economic and demographic characteristics of smaller areas may become relevant when modeling flows at shorter distances (for example, within neighborhoods or adjacent towns in large metropolitan areas). To elucidate to what extent different modeling approaches can accommodate such short-distance flows, we adopt the framework used in Ref. [10]: we divide the state of New York into small tiles of 25 km$^{2}$, and consider the flows between census tracts within each tile. We use 50% of the tiles for training the different models, and then test on the flows within the remaining 50% of the tiles. For this experiment we find that, regardless of the metric used, the closed-form models identified by the BMS are always more accurate than machine learning and gravity models (Fig. 5). Our results thus indicate that simple closed-form models that consider only the populations of municipalities/census tracts and the distances between them provide better descriptions of mobility flows than complex models that take many more features into account and have many more parameters, also for flows at short distances.

### The Bayesian machine scientist finds gravity-like models to describe mobility flows

Our analysis indicates that the BMS is able to find closed-form mathematical models that consider only the populations of the origin and destination and the geographic distance between them, and that these models provide predictions of mobility flows that are even more accurate than those of complex models such as the deep gravity model. Here, we investigate whether, besides being predictive, these closed-form models are also interpretable and insightful.

We start by noting, once more, that the BMS samples hundreds of models and that, overall, they all perform well, which shows that there are many different models that can describe the data. Such a set of models is sometimes called a Rashomon set [13]. The relevant question, then, is whether these models share any common defining properties that could explain why they describe mobility flows accurately. To elucidate this question, we analyze two particularly relevant closed-form models identified by the BMS (Fig. 6): (i) the most plausible model, that is, the model that has the highest probability $p(M|D)$ given the data (or, equivalently, the shortest description length ${\mathcal{L}}(M,D)$) among all those sampled by the BMS (Methods); and (ii) the median predictive model, that is, the closed-form model whose predictions for unobserved data are closest to the median prediction of the whole ensemble of sampled models (Methods). Formally, there are marked mathematical differences between the two models. In particular, the most plausible model is an exponential model for the flows, while the median predictive model is a power-law model for the flows. However, the two models have relevant properties in common.
First, both models are gravity-like models, that is, they depend on a product of the origin and destination populations (shifted by a certain amount), and inversely on a function of the geographic distance between origin and destination. This is remarkable because the BMS has not received any input about the particular shape that models should take, which suggests that the regularities in the data are well-described by this general class of models and justifies the historical use of gravity models. Second, we find that origin and destination do not necessarily play a symmetric role, which allows the model to accommodate non-symmetric flows, in contrast to typical gravity models, which do not allow for this possibility. Indeed, an inspection of the parameters shows that in some states, such as New York or Texas, flows are much more symmetric than in others, such as Florida and Massachusetts. Finally, we also find that the relative contribution of the mass product with respect to the geographical distance is consistent across models. In both models, we find a mathematically equivalent dependency on the ratio $(m_{d}m_{o})/d^{e}$, where $e=1/\beta$ in the most plausible model (Fig. 6A), and $e=\alpha$ in the median predictive model (Fig. 6B). We find that, for a given state, $\alpha\approx 1/\beta$, suggesting that this relationship is to a large extent model-independent (Fig. 6C). Furthermore, we find that the state-to-state variability is small, since all exponents fall within the range $[1,2]$ (most within the range $[1.5,2]$), which suggests that reasonable models for mobility flows are gravity-like models with specific constraints on the relationship between the contributions of the mass product and the geographical distance.

Figure 6: Closed-form models for mobility flows. We ran the Bayesian machine scientist (BMS) with a training set of 1000 points and three features: origin and destination populations and distance between them. We used 5 independent Markov chains of 12,000 Monte Carlo steps each. (A) Minimum description length model (Methods) for the logarithm of the data, where $d$ is the inter-municipality distance, $m_{o}$ is the origin population, and $m_{d}$ is the destination population. In the table we show the fitting parameters for each state of the training data. (B) Median predictive model (Methods) for the logarithm of the data. As before, $d$ is the inter-municipality distance, $m_{o}$ is the origin population, and $m_{d}$ is the destination population. In the table we show the fitting parameters for each state of the training data. (C) Ratio between the distance exponent and the population exponent. Blue points are obtained from the most plausible model and orange points from the median predictive model.

## Discussion

Understanding human mobility is critical to address questions in urban planning and transportation, as well as global challenges in sustainability, public health, and economic development. Traditionally, mobility flows have been modeled using simple gravity models, which are conceptually simple and easy to interpret, but have limited predictive power. Recently, deep learning models have been proposed as an alternative; while these models are consistently and significantly more predictive than gravity models, they are not interpretable and provide little insight into human behavior.
Here, we have shown that automated equation discovery approaches lead to parsimonious models that combine the most desirable aspects of both approaches: the simplicity of gravity models and a predictive power that is even better, overall, than that of deep learning models. Remarkably, the models we identified are gravity-like in that they are increasing functions of a certain product of the populations of origin and destination, and decreasing functions of the distance between them. While the ratio between the population and the distance terms is, in principle, model- and data-dependent, in the datasets we explore the ratio is roughly constant and dataset-independent. Individual mobility depends critically on the urban environment, personal preferences, commuting patterns, and accessibility to transportation and amenities. Thus, modeling mobility at an individual or small spatial scale might require more complicated models that account for routes, the purpose of the trip, points of interest, or even the demographic traits of individuals [19, 10, 20]. However, our results show that by aggregating mobility at a larger spatial scale, the movements of millions of people can be described by simple, fully explainable models that do not depend on the microscopic characteristics of the origin, destination, or route taken. This is because the randomness and variability inherent in individual behaviors tend to cancel out when looked at collectively, revealing underlying trends and movements that are driven by the shared needs of large populations and the structure of the built environment. Our results show, therefore, that the aggregated flows in human mobility can be seen as an emergent and universal property of the complex system of individual movements. More broadly, our results showcase the potential of using machine scientists[21, 22, 14] to automate the process of finding similar phenomenological closed-form models from data, and to use these models to gain insight into the relevant variables and mechanisms to describe other complex phenomena.

## Methods

### Mobility data between municipalities in the US

We collected weekly flows between United States census tracts [23] for the first week of January 2019 (2019/01/07 to 2019/01/13) and the first week of March 2019 (2019/03/04 to 2019/03/09). The data consist of anonymous mobile-device trajectories, regardless of the transportation method. The data set contains the geographical identifier (GEO ID) for both origin and destination census tracts as well as their corresponding geographical coordinates, the estimated number of visitors detected by SafeGraph, and the estimated population flows inferred from the number of visitors. The datasets are available at GeoDS.

#### Data processing

In order to obtain the flows between municipalities from the census-tract data, we first match municipalities (cities, towns and villages) with their corresponding census tracts. Then, the total flow between two municipalities A and B is calculated as the aggregate of the flows between the census tracts that comprise municipality A and those that comprise municipality B. Specifically, we consider mobility data sets within six states in the US: New York, Massachusetts, California, Florida, Washington, and Texas. For each state, our data consist of the origin-destination municipality names, the flow, the distance, and the origin-destination populations and POI categories.
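As an illustration of this aggregation step, the following minimal pandas sketch maps census tracts to municipalities and sums the tract-level flows. All column names, identifiers, and values are hypothetical, not those of the actual SafeGraph/GeoDS files.

```python
import pandas as pd

# Hypothetical tract-level flows and a tract-to-municipality lookup.
flows = pd.DataFrame({
    "origin_tract": ["36001000100", "36001000200", "36001000200"],
    "dest_tract":   ["36061000100", "36061000100", "36061000200"],
    "flow":         [120, 80, 45],
})
tract_to_muni = pd.Series({
    "36001000100": "Albany", "36001000200": "Albany",
    "36061000100": "New York", "36061000200": "New York",
})

# Map each census tract to its municipality, then sum flows over all
# tract pairs belonging to the same (origin, destination) municipality pair.
flows["origin_muni"] = flows["origin_tract"].map(tract_to_muni)
flows["dest_muni"] = flows["dest_tract"].map(tract_to_muni)
muni_flows = flows.groupby(["origin_muni", "dest_muni"], as_index=False)["flow"].sum()
print(muni_flows)  # one row per municipality pair, with the aggregated flow
```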
We only consider municipalities with a non-zero population and pairs of municipalities with non-zero flow; we do not consider flows within the same municipality (see the Supplementary Information for a summary).

#### Information about municipalities and census tracts

We obtained shapefiles containing the geographical coordinates of the polygons delimiting census tracts and municipalities from the United States Census Bureau. We also retrieved the population of each municipality using its GEO ID from Data Commons. Finally, using a local copy of Open Street Map (OSM) with the Overpass API, we retrieved information about Points of Interest (POI) in each census tract. We selected 18 categories of OSM elements that represent geographical, demographic and socio-economic features of the different municipalities.

#### Construction of train and test datasets

To generate the train and test datasets for each state, we divide municipalities into two approximately equal sets at random. The training fold comprises flows between municipalities in the first set; the test fold comprises flows between municipalities in the second set. By doing so, we ensure that we test the learned models on data never observed before (not only in terms of flows, but in terms of municipality features as well). Tables 1 and 2 show the characteristics of the train and test datasets. To speed up the training process, we train the models with the same random sample of 1000 points of the training fold, except for the Deep Gravity model, for which we have to use a larger number of data points for training.

State | Entries | Municipalities | Min flow | Max flow | Min distance (km) | Max distance (km) | Min population | Max population
---|---|---|---|---|---|---|---|---
New York | 1000 | 217 | 26 | 122991 | 1.83 | 489.80 | 185 | $8.80\cdot 10^{6}$
Massachusetts | 1000 | 75 | 19 | 66946 | 2.34 | 205.51 | 1029 | 675647
California | 1000 | 260 | 14 | 91267 | 2.12 | 1078.21 | 237 | $1.013\cdot 10^{6}$
Florida | 1000 | 223 | 7 | 78316 | 2.54 | 607.50 | 251 | 949611
Washington | 1000 | 107 | 20 | 71796 | 2.17 | 467.16 | 20 | 228989
Texas | 1000 | 177 | 21 | 192660 | 1.84 | 975.16 | 173 | $2.30\cdot 10^{6}$

Table 1: Train dataset. Number of points and municipalities in the train set for each state, obtained from a random sample of 1000 points of the original train fold. We also report the minimum and maximum flow, distance, and municipality population.

State | Entries | Municipalities | Min flow | Max flow | Min distance (km) | Max distance (km) | Min population | Max population
---|---|---|---|---|---|---|---|---
New York | 5952 | 249 | 20 | 35063 | 0.77 | 531.39 | 361 | 211569
Massachusetts | 1180 | 75 | 9 | 55499 | 2.16 | 267.24 | 1517 | 206518
California | 11727 | 319 | 2 | 416696 | 1.09 | 1084.34 | 129 | $3.9\cdot 10^{6}$
Florida | 7092 | 245 | 3 | 79624 | 1.14 | 875.82 | 78 | 442241
Washington | 2083 | 109 | 12 | 54245 | 1.48 | 431.53 | 487 | 737015
Texas | 2782 | 192 | 3 | 112865 | 2.46 | 1196.33 | 106 | $1.43\cdot 10^{6}$

Table 2: Test dataset. Number of points and municipalities in the test set for each state. We also report the minimum and maximum flow, distance, and municipality population.

#### Economic classification of municipalities

We retrieve the median income per capita of each municipality and state as of 2020 from Data Commons. Then we classify each municipality with the label _rich_ if its median income per capita is above that of the state, or _poor_ if its median income per capita is below that of the state.
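A minimal sketch of this labeling step follows. For illustration the state median is computed from the municipality values themselves, whereas the paper retrieves it directly from Data Commons; all names and incomes are invented.

```python
import pandas as pd

# Hypothetical median income per capita (2020) for a few municipalities.
munis = pd.DataFrame({
    "state":  ["NY", "NY", "NY", "MA", "MA"],
    "name":   ["Albany", "Yonkers", "Ithaca", "Boston", "Springfield"],
    "income": [38000, 34000, 23000, 46000, 25000],
})

# Label each municipality relative to the median income of its own state
# (ties, not specified in the text, fall into the "poor" group here).
state_median = munis.groupby("state")["income"].transform("median")
munis["group"] = (munis["income"] > state_median).map({True: "rich", False: "poor"})
print(munis)
```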
### Mobility data at small scales from Simini et al. [10]

The code available for the Deep Gravity model provides data at the level of census tracts for the New York State area; mobility flows are obtained from the same dataset. In order to obtain both the predictions of the Deep Gravity model for each individual trip and the corresponding dataset, we run the program and save, for each trip, the corresponding tile, the observed value, the prediction, and the feature values. We store the train and test sets for the comparison with other methods.

### Bayesian Machine Scientist

The Bayesian machine scientist (BMS) is a Bayesian approach to symbolic regression that estimates the plausibility of a closed-form mathematical model $M$ given the observed data $D$ as the posterior probability $p(M|D)$. Without loss of generality, this posterior can be written [14]

$p(M|D)=\frac{\exp\left[-{\mathcal{L}}(M,D)\right]}{Z}\,,$ (2)

where ${\mathcal{L}}(M,D)$ is the description length[24] of the model (and the data), and $Z=\sum_{M^{\prime}}\exp\left[-{\mathcal{L}}(M^{\prime},D)\right]=p(D)$ is the evidence. Within standard approximations[25, 14], the description length can be computed as

${\mathcal{L}}(M,D)=\frac{B(M,D)}{2}-\log p(M)\,,$ (3)

where $B(M,D)$ is the Bayesian information criterion[25], which is straightforward to calculate from the data, and $p(M)$ is a suitable prior distribution over models[14]. In this work, our goal is to simultaneously model flows from six different states in the US (that is, six different datasets). To this end, we use a multi-dataset approach[15], which consists of finding a unique closed-form model for multiple datasets. For each dataset, we allow model parameters to take different values[15]. For a single dataset $D=\\{(y_{i},{\bf x}_{i})\\}$, where $\\{y_{i}\\}$ is the set of observations and $\\{{\bf x}_{i}\\}$ is the set of feature values associated with each observation, the description length of a closed-form model $M$ and the data is given by Eq. (3). In the case in which our data comprise $K$ independent datasets $D=\\{D_{k},\,k=1,\dots,K\\}=\left\\{\\{(y^{1}_{i},{\bf x}^{1}_{i})\\},\dots,\\{(y^{K}_{i},{\bf x}^{K}_{i})\\}\right\\}$ that we want to model using a single model $M$, the description length is[15]

${\mathcal{L}}(M,D)=\frac{1}{2}\sum_{k}B(M,D_{k})-\log p(M)\,.$ (4)

The BMS represents closed-form models as labeled trees and uses Markov chain Monte Carlo (MCMC) to explore the space of closed-form mathematical models by sampling from the posterior distribution $p(M|D)\propto\exp(-{\mathcal{L}})$. In this work, we consider models for flows with three independent variables (populations at origin and destination and geographical distance) and up to six parameters. The ensemble of sampled models allows us to make predictions on the test set using three different approaches:

1. The most plausible model. This is the model with the shortest description length that the BMS is able to find.

2. The ensemble of models. Ensemble predictions are the optimal predictions since they correspond to fully integrating over model space. We estimate this integral by averaging over the predictions made by each of the models we sample. Specifically, we perform five independent realizations of 12,000 MCMC steps. We then collect a model every 100 steps within the last 2,000 steps of each Markov chain to obtain an ensemble of 100 models.

3. The median predictive model.
This is the model within the ensemble whose predictions are closest to the predictions of the ensemble as a whole.

### Benchmark models

#### Gravity model.

We consider the gravity model in its simplest form [1], in which the observed flow $T_{ij}$ between municipalities $(i,j)$ is a function of the populations $m_{i}$ and $m_{j}$ of the municipalities and the distance $d_{ij}$ between them:

$T_{ij}=C\frac{m_{i}\,m_{j}}{f(d_{ij})}\,.$ (5)

Here $C$ is a scaling parameter and $f(d)$ is a function of the distance. Specifically, we consider two possible choices for $f(d)$: i) a power law $f_{\rm pow}(d)=d^{\alpha}$; and ii) an exponential law $f_{\rm exp}(d)=\exp(\alpha d)$. In both cases, the parameter $\alpha$ is obtained by fitting the model to the data in the training set. Because flows span several orders of magnitude, we find that training the model on the logarithm of the flows, rather than the flows themselves, leads to more predictive models. Therefore, all results reported here for gravity models correspond to this approach.

#### Radiation model.

We consider the original formulation of the model [4], in which the flow $T_{ij}$ is modeled as the total outflow of an origin municipality $T_{i}$ times the probability of going from $i$ to $j$. This probability depends on the populations of the origin ($m_{i}$) and destination ($m_{j}$), as well as the populations of the municipalities within a radius $d_{ij}$ from the municipality at the origin:

$T_{ij}=T_{i}\,p_{i\to j}=T_{i}\frac{m_{i}\,m_{j}}{(m_{i}+s_{ij})\,(m_{i}+m_{j}+s_{ij})},$ (6)

where $s_{ij}=\sum_{k\neq i,j\,:\,d_{ik}<d_{ij}}m_{k}$ is the total population within a radius $d_{ij}$ of the origin, excluding the origin and destination. Recent works introduce modifications to this model for finite-size systems and in order to avoid border effects [26, 27, 4, 28, 29] (see Supplementary Fig. S5). However, we find that these models do not outperform the original formulation.

#### Random Forest.

We implement a Random Forest regressor [30] with 1,000 estimators. As input data we use a total of 39 features for each origin-destination pair, which include: distance, population at origin and destination, and 36 geographical and socio-economic features of the origin and destination areas (see Data and the Supplementary Information).

#### Deep Gravity.

Taking as a baseline the algorithm developed in [10], we modified the model to predict flows between municipalities rather than between small geographic regions resulting from tessellation. The major difference from the original model is that municipalities are now the smallest geographic unit, allowing us to compare with the models evaluated in this study. In all other aspects (features used, pre-processing of data, and model training) the model remains the same. The modified version of the code can be consulted and downloaded at https://github.com/ocabanas/Symbolic_mobility_BMS/DeepGravity.

### Metrics

#### Common Part of Commuters (CPC).

The CPC is a widely used metric for analyzing the performance of mobility models, defined as

$\mathsf{CPC}=\frac{2\sum_{ij}{\rm min}(T_{ij},T_{ij}^{*})}{\sum_{ij}\left(T_{ij}+T_{ij}^{*}\right)}\;,$ (7)

where $T_{ij}$ is the predicted value of the flow from $i$ to $j$ and $T^{*}_{ij}$ is the observed flow. The CPC reaches its maximum value of $1$ when the predictions agree completely with the observed data, and it equals $0$ when all predicted flows are zero. Note that the CPC is biased toward models that make accurate predictions for large flows, since smaller flows have marginal contributions to the sums (see the sketch below).
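As a minimal illustration of the CPC and of this bias, consider the following NumPy sketch; the flow values are invented for the example.

```python
import numpy as np

def cpc(T_pred: np.ndarray, T_obs: np.ndarray) -> float:
    """Common Part of Commuters between predicted and observed flows (Eq. 7)."""
    return 2.0 * np.minimum(T_pred, T_obs).sum() / (T_pred + T_obs).sum()

# One large flow dominates the metric: the predictions below are off by a
# factor of 10 on both small flows, yet the CPC remains high.
T_obs = np.array([10000.0, 10.0, 5.0])
T_pred = np.array([9000.0, 100.0, 50.0])
print(round(cpc(T_pred, T_obs), 2))  # 0.94
```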
This bias is especially critical in mobility data, where flows can span several orders of magnitude (see Tables 1 and 2).

#### Absolute error.

It measures the distance between the real and predicted data:

$E_{ij}=|T_{ij}-T_{ij}^{*}|\,.$ (8)

Note that the absolute error scales with the size of the flows, so that the average absolute error is biased towards the errors on the largest flows. For this reason we represent the whole distribution.

#### Relative error.

It measures the difference between real and predicted flows relative to the observed value of the flow:

$\epsilon_{ij}=\left|\frac{T_{ij}-T_{ij}^{*}}{T_{ij}^{*}}\right|\,.$ (9)

Note that, while over-predicting flows is penalized by the relative error without bound, under-predicting flows is penalized only weakly, since even predicting a zero value for a non-zero flow results in $\epsilon_{ij}=1$. Because this can bias average relative errors, we plot the whole distribution.

#### Absolute log-ratio.

It measures the difference in the logarithms of predicted and real flows:

$LR_{ij}=\left|\log\frac{T_{ij}}{T_{ij}^{*}}\right|\,.$ (10)

Note that for a perfect prediction this metric is equal to zero. Importantly, this metric penalizes over- and under-predictions equally. For instance, a prediction twice as large as the observed flow, $T_{ij}=2T^{*}_{ij}$, has $LR_{ij}=\log 2$, and a prediction $T_{ij}=T^{*}_{ij}/2$ has $LR_{ij}=\log 2$ as well.

#### Proportional Demographic Parity (PDP).

The goal of this metric is to quantify the fairness of a model when predicting flows between different demographic or socio-economic groups $\\{g\in\mathcal{G}\\}$. To do so, it quantifies to what extent the errors of the predictions for flows across pairs of groups $\\{f_{ij}\equiv(g_{i},g_{j})\in\mathcal{G}^{2}\equiv\mathcal{G}\times\mathcal{G}\\}$ are equally distributed for all pairs of (different) groups. Consider that $\bar{l}$ is the median error of all flows, and $\tau$ is a percentile window around the median ($0\leq\tau\leq 100$). For a pair of flow groups $(f_{1},f_{2})$, $\mathsf{PDP}$ estimates the difference between their error distributions as

$\mathsf{PDP}_{f_{1},f_{2}}=\left|P(\overline{l}-\frac{\tau}{2}\leq l\leq\overline{l}+\frac{\tau}{2}|f_{1})-P(\overline{l}-\frac{\tau}{2}\leq l\leq\overline{l}+\frac{\tau}{2}|f_{2})\right|$ (11)

where $P(\cdot|f_{i})$ is the probability that a prediction of a flow in flow group $f_{i}$ has an error $l$ such that $\overline{l}-\frac{\tau}{2}\leq l\leq\overline{l}+\frac{\tau}{2}$. To get an overall estimate, $\mathsf{PDP}$ then uses a weighted average[19]

$\mathsf{PDP}=\sum_{f,h\in\mathcal{G}^{2},f\neq h}w_{f,h}\,\mathsf{PDP}_{f,h}\,,$ (12)

where the weight $w_{f,h}=\sum_{k\in\mathcal{G}^{2}}N_{k}/\left(N_{f}+N_{h}\right)$ enhances the relative contribution of small groups of flows. Note that our approach to measuring $\mathsf{PDP}$ is a generalization of that used in Ref. [19], where, instead of percentile windows, the authors consider $\tau$ to be a standard deviation around the mean. However, because error distributions are not Gaussian in general (see the discussion of the different error metrics), we use a more general definition applicable to any distribution. In our analysis, we consider two groups of municipalities: above (rich) and below (poor) the median income per capita (see Data).
Therefore we have four different flow groups $\mathcal{G}^{2}=\\{{\rm poor\to poor},\,{\rm rich\to poor},\,{\rm poor\to rich},\,{\rm rich\to rich}\\}$.

## References

* [1] Zipf, G. K. The P1 P2/D hypothesis: On the intercity movement of persons. _American Sociological Review_ 11, 677–686 (1946).
* [2] Erlander, S. & Stewart, N. F. _The gravity model in transportation analysis: theory and extensions_, vol. 3 (VSP, 1990).
* [3] Guimerà, R., Mossa, S., Turtschi, A. & Amaral, L. A. N. The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles. _Proc. Natl. Acad. Sci. USA_ 102, 7794–7799, 10.1073/pnas.0407994102 (2005).
* [4] Simini, F., González, M. C., Maritan, A. & Barabási, A.-L. A universal model for mobility and migration patterns. _Nature_ 484, 96–100, 10.1038/nature10856 (2012).
* [5] Schläpfer, M. _et al._ The universal visitation law of human mobility. _Nature_ 593, 522–527, 10.1038/s41586-021-03480-9 (2021).
* [6] Yuan, H. & Li, G. A survey of traffic prediction: from spatio-temporal data to intelligent transportation. _Data Science and Engineering_ 6, 63–85 (2021).
* [7] Haynes, K. E. & Fotheringham, A. S. _Gravity and Spatial Interaction Models_ (Regional Research Institute, West Virginia University, 1985).
* [8] Moro, E., Calacci, D., Dong, X. & Pentland, A. Mobility patterns are associated with experienced income segregation in large US cities. _Nature Communications_ 12, 4633, 10.1038/s41467-021-24899-8 (2021).
* [9] Balcan, D. _et al._ Modeling the spatial spread of infectious diseases: The global epidemic and mobility computational model. _Journal of Computational Science_ 1, 132–145 (2010).
* [10] Simini, F., Barlacchi, G., Luca, M. & Pappalardo, L. A Deep Gravity model for mobility flows generation. _Nature Communications_ 12, 6576, 10.1038/s41467-021-26752-4 (2021).
* [11] Pappalardo, L., Rinzivillo, S. & Simini, F. Human mobility modelling: Exploration and preferential return meet the gravity model. _Procedia Computer Science_ 83, 934–939, https://doi.org/10.1016/j.procs.2016.04.188 (2016).
* [12] Chen, Y. The distance-decay function of geographical gravity model: Power law or exponential law? _Chaos, Solitons & Fractals_ 77, 174–189, https://doi.org/10.1016/j.chaos.2015.05.022 (2015).
* [13] Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. _Nat. Mach. Intell._ 1, 206–215 (2019).
* [14] Guimerà, R. _et al._ A Bayesian machine scientist to aid in the solution of challenging scientific problems. _Science Advances_ 6, eaav6971, 10.1126/sciadv.aav6971 (2020).
* [15] Reichardt, I., Pallarès, J., Sales-Pardo, M. & Guimerà, R. Bayesian machine scientist to compare data collapses for the Nikuradse dataset. _Phys. Rev. Lett._ 124, 084503, 10.1103/PhysRevLett.124.084503 (2020).
* [16] Fajardo-Fontiveros, O. _et al._ Fundamental limits to learning closed-form mathematical models from data. _Nature Communications_ 14, 1043, 10.1038/s41467-023-36657-z (2023).
* [17] Vallès-Català, T., Peixoto, T. P., Sales-Pardo, M. & Guimerà, R. Consistencies and inconsistencies between model selection and link prediction in networks. _Phys. Rev. E_ 97, 062316, 10.1103/PhysRevE.97.062316 (2018).
* [18] Ho, T. K. Random decision forests. In _Proceedings of 3rd International Conference on Document Analysis and Recognition_, vol. 1, 278–282 (IEEE, 1995).
* [19] Liu, Z., Huang, L., Fan, C. & Mostafavi, A. Fairmobi-net: A fairness-aware deep learning model for urban mobility flow generation. Preprint at arXiv:2307.11214 (2023).
* [20] Feng, J. _et al._ DeepMove: Predicting human mobility with attentional recurrent networks. _Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW ’18_ 1459–1468, 10.1145/3178876.3186058 (2018).
* [21] Džeroski, S. & Todorovski, L. (eds.) _Computational Discovery of Scientific Knowledge_. Lecture Notes in Artificial Intelligence (Springer, 2007).
* [22] Evans, J. & Rzhetsky, A. Machine science. _Science_ 329, 399–400 (2010).
* [23] Kang, Y., Gao, S., Liang, Y., Li, M. & Kruse, J. Multiscale dynamic human mobility flow dataset in the U.S. during the COVID-19 epidemic. _Scientific Data_ 1–13 (2020).
* [24] Grünwald, P. D. _The Minimum Description Length Principle_, vol. 1 of _MIT Press Books_ (The MIT Press, 2007).
* [25] Schwarz, G. Estimating the dimension of a model. _The Annals of Statistics_ 6, 461–464, 10.1214/aos/1176344136 (1978).
* [26] Masucci, A. P., Serras, J., Johansson, A. & Batty, M. Gravity versus radiation models: On the importance of scale and heterogeneity in commuting flows. _Phys. Rev. E_ 88, 022812, 10.1103/PhysRevE.88.022812 (2013).
* [27] Yang, Y., Herrera, C., Eagle, N. & González, M. C. Limits of predictability in commuting flows in the absence of data for calibration. _Scientific Reports_ 4, 5662, 10.1038/srep05662 (2014).
* [28] Lenormand, M., Huet, S., Gargiulo, F. & Deffuant, G. A universal model of commuting networks. _PLOS ONE_ 7, 1–7, 10.1371/journal.pone.0045985 (2012).
* [29] Lenormand, M., Bassolas, A. & Ramasco, J. J. Systematic comparison of trip distribution laws and models. _Journal of Transport Geography_ 51, 158–169, https://doi.org/10.1016/j.jtrangeo.2015.12.008 (2016).
* [30] Pedregosa, F. _et al._ Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ 12, 2825–2830 (2011).

## Acknowledgements

We thank L. Pappalardo and M. Luca for help with the Deep Gravity algorithm, and M. Luca for comments and suggestions on the manuscript. This research was supported by projects PID2019-106811GB-C32 (E.M.), PID2019-106811GB-C31 and PID2022-142600NB-I00 (M.S.-P. and R.G.), and FPI grant PRE2020-095552 (O.C.-T.) from MCIN/AEI/10.13039/501100011033, and by the Government of Catalonia (2021SGR-633) (M.S.-P. and R.G.). E.M. acknowledges support from the National Science Foundation under Grant No. 2218748.

## Author contributions statement

O.C. collected data. O.C. and L.D. wrote code and performed experiments. All authors designed research, analyzed results, discussed results, and wrote the paper.

### Data and code availability

All data are available as described in the Methods section. Data used for the evaluation of the Deep Gravity model are available at GitHub. The code for the BMS is available from https://bitbucket.org/rguimera/machine-scientist.

## Competing interests

The authors declare no conflict of interest.
In particle simulations, the dynamics often results from the summation of all pair-wise forces in the ensemble of particles. Such situations arise in astrophysics, molecular dynamics, plasma physics, and certain formulations of fluid dynamics problems, for example. The total field of interest (gravitational, electrostatic, etc.) at one evaluation point requires adding the contribution of all source points or particles, and so if both evaluation points and particles number $N$, a total of $N^{2}$ operations is needed. This fact was for a long time an impediment to the wider use of particle simulations, as the computational effort becomes prohibitive for large numbers of particles. The above scenario changed dramatically with the introduction of tree-codes and the fast multipole method (FMM), which appeared in the late 1980s for the accelerated evaluation of $N$-body problems. Tree-codes (Appel 1985; Barnes and Hut 1986) are generally perceived to be easier to grasp and program, and provide a complexity of $\mathcal{O}(N\log N)$. The FMM was introduced as an algorithm for the rapid evaluation of gravitational or Coulombic interactions (Greengard and Rokhlin 1987) and promises a reduction in computational complexity to $\mathcal{O}(N)$. It has, since its dissemination, been adapted for many applications: for fast evaluation of boundary elements (Gaspar 1998), for vortex sheet problems with desingularized equations (Hamilton and Majda 1995), for long-range electrostatics in DNA simulations (Fenley et al. 1996), and many others. The impact of the FMM has been undeniably huge, resulting in it being chosen as one of the Top 10 Algorithms of the 20th Century (Dongarra and Sullivan 2000). Despite the great volume of work using and adapting the FMM in many application areas, there remains some lack of insight regarding how the algorithm can be efficiently used to obtain an accurate representation of the field of interest. The error of the FMM approximation is estimated by theoretical bounds, which, as could be expected, reveal a trade-off between accuracy and efficiency of the computation. However, there is not much literature providing measurements of the accuracy of the approximation in practice. One may often find in published works such assertions as “only the first three moments of the expansion were used”, or something to that effect. But just as often there is no information provided about the actual errors that are observed. Of course, it is not easy to provide such measures of observed error, as this would require additional computations using the direct $\mathcal{O}(N^{2})$ method for comparison purposes. Nevertheless, it is important for users of the algorithm to know what to expect in terms of accuracy and efficiency, depending on the choice of algorithm parameters. We aim to fill this gap in understanding by presenting a methodical investigation into the errors of the approximation used by the FMM, when the underlying ‘client’ application is the calculation of the velocity field induced by $N$ regularized particles of vorticity. This application is rather more demanding than the Newtonian force calculation, because in the latter case the gravitational interaction is dominated by the first moment, due to the fact that all mass is positive. Therefore, keeping only the first three moments could easily give the desired accuracy.
On the other hand, as in Coulomb electrostatic calculations, the vortex particles can be both positive and negative, and thus an acceptable accuracy may require that more terms in the expansion be kept. For the purposes of this study, a prototype code for the FMM computation of the velocity induced by $N$ vortex particles was implemented in the Python language (http://www.python.org/). The nice features of Python, such as dynamic typing, extensive numerical libraries, and high programmer productivity, helped us produce a piece of software which is easy to use and easy to understand. We are currently using this Python code as a starting point for a parallel version which, in collaboration with members of the developer team, will be incorporated into the PETSc library for scientific computing (see the PETSc users manual). This project will be reported elsewhere, but preliminary results are being presented at the upcoming Parallel CFD meeting (Cruz, Barba and Knepley 2008). Our final aim is to contribute to the community of particle simulations with an open-source FMM implementation which is parallel and portable. For the time being, the Python code is being made publicly available and we welcome correspondence from interested researchers (http://www.maths.bris.ac.uk/~aelab/research/pyFMM.html). Using the Python FMM code, more than 900 calculations were performed, varying the numerical parameters: $N$, the number of particles; $l$, the number of levels in the tree; and $p$, the truncation level of the multipole expansion. We looked not only at the maximum error in the domain, which would be the conventional approach; we also present results showing how the error varies in space, revealing some interesting features of the method. Through this presentation of the results, we believe a clear characterization of the nature of the FMM approximation is obtained. The paper is organized as follows. The next section presents an outline of the vortex particle method, for completeness. We then offer an overview of the FMM, with some details of our implementation, and briefly discuss the sources of errors in the FMM algorithm. Finally, we report the detailed experiments using the FMM for the evaluation of the velocity of $N$ vortex particles; the behavior of the method is illustrated for varying parameters, as well as the impact on the efficiency of the calculation for different problem sizes.
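For reference, a minimal sketch of the direct $\mathcal{O}(N^{2})$ evaluation against which such errors can be measured is given below. It assumes 2D vortex blobs with a Gaussian-regularized Biot-Savart kernel, which is one common choice and not necessarily the exact cutoff used in this work.

```python
import numpy as np

def direct_velocity(x, y, gamma, sigma):
    """Direct O(N^2) evaluation of the velocity induced at every particle by
    N 2D vortex blobs with circulations gamma and core size sigma, using a
    Gaussian-regularized Biot-Savart kernel (an assumed regularization)."""
    n = len(x)
    u = np.zeros(n)
    v = np.zeros(n)
    for i in range(n):
        dx = x[i] - x                  # x-distances to every source particle
        dy = y[i] - y
        r2 = dx * dx + dy * dy
        r2[i] = 1.0                    # dummy value to avoid dividing by zero
        # Regularization factor removes the point-vortex singularity.
        factor = (1.0 - np.exp(-r2 / (2.0 * sigma**2))) / (2.0 * np.pi * r2)
        factor[i] = 0.0                # a particle induces no velocity on itself
        u[i] = np.sum(-gamma * dy * factor)
        v[i] = np.sum(gamma * dx * factor)
    return u, v

# Toy use: 1,000 random blobs; this reference result is what the FMM output
# would be compared against when measuring the approximation error.
rng = np.random.default_rng(0)
N = 1000
x, y = rng.random(N), rng.random(N)
gamma = rng.standard_normal(N)         # both positive and negative circulations
u, v = direct_velocity(x, y, gamma, sigma=0.01)
```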
* Agarwal et al. (2017) Agarwal, Sumit, Itzhak Ben-David, and Vincent Yao, 2017, Systematic mistakes in the mortgage market and lack of financial sophistication, Journal of Financial Economics 123, 42–58. * Agarwal et al. (2014) Agarwal, Sumit, Souphala Chomsisengphet, Neale Mahoney, and Johannes Stroebel, 2014, Regulating consumer financial products: Evidence from credit cards, The Quarterly Journal of Economics 130, 111–164. * Alan and Loranth (2013) Alan, Sule, and Gyongyi Loranth, 2013, Subprime consumer credit demand: Evidence from a lender’s pricing experiment, The Review of Financial Studies 26, 2353–2374. * Allcott et al. (2021) Allcott, Hunt, Joshua Kim, Dmitry Taubinsky, and Jonathan Zinman, 2021, Are high-interest loans predatory? Theory and evidence from payday lending, The Review of Economic Studies 89, 1041–1084. * Ally (2017) Ally, 2017, Ally Auto Receivables Trust, Prospectus 2017-3, Ally Auto Assets LLC. * Ally (2019) Ally, 2019, Ally Auto Receivables Trust, Prospectus 2019-3, Ally Auto Assets LLC. * Andersen et al. (1993) Andersen, Per Kragh, Ørnulf Borgan, Richard D. Gill, and Niels Keiding, 1993, Statistical Models Based on Counting Processes (Springer). * Assunção et al. (2013) Assunção, Juliano J., Efraim Benmelech, and Fernando S. S. Silva, 2013, Repossession and the democratization of credit, The Review of Financial Studies 27, 2661–2689. * Ausubel (1991) Ausubel, Lawrence M., 1991, The failure of competition in the credit card market, The American Economic Review 81, 50–81. * Ayres and Siegelman (1995) Ayres, Ian, and Peter Siegelman, 1995, Race and gender discrimination in bargaining for a new car, The American Economic Review 85, 304–321. * Banasik et al. (1999) Banasik, John, Jonathan N. Crook, and L. C. Thomas, 1999, Not if but when will borrowers default, Journal of the Operational Research Society 50, 1185–1190. * Bertrand and Morse (2011) Bertrand, Marianne, and Adair Morse, 2011, Information disclosure, cognitive biases, and payday borrowing, The Journal of Finance 66, 1865–1893. * Beyersmann et al. (2009) Beyersmann, Jan, Aurélien Latouche, Anika Buchholz, and Martin Schumacher, 2009, Simulating competing risks data in survival analysis, Statistics in Medicine 28, 956–971. * Blumenstock et al. (2022) Blumenstock, Gabriel, Stefan Lessmann, and Hsin-Vonn Seow, 2022, Deep learning for survival and competing risk modelling, Journal of the Operational Research Society 73, 26–38. * Butler et al. (2022) Butler, Alexander W, Erik J Mayer, and James P Weston, 2022, Racial disparities in the auto loan market, The Review of Financial Studies hhac029. * Calem and Mester (1995) Calem, Paul S., and Loretta J. Mester, 1995, Consumer behavior and the stickiness of credit-card interest rates, The American Economic Review 85, 1327–1336. * Campbell (2016) Campbell, John Y., 2016, Restoring rational choice: The challenge of consumer financial regulation, American Economic Review 106, 1–30. * CarMax (2017) CarMax, 2017, CarMax Auto Owner Trust, Prospectus 2017-2, CarMax Business Services LLC. * CarMax (2019) CarMax, 2019, CarMax Auto Owner Trust, Prospectus 2019-4, CarMax Business Services LLC. * Cohen (1998) Cohen, Lloyd, 1998, The puzzling phenomenon of interest‐rate discounts on auto loans, The Journal of Legal Studies 27, 483–501. * Consumer Financial Protection Bureau (2019) Consumer Financial Protection Bureau, 2019, Borrower risk profiles, url: https://www.consumerfinance.gov/data-research/consumer-credit-trends/auto-loans/borrower-risk-profiles/ (Accessed: 2022-06-15). 
* Crowder (2001) Crowder, Martin J, 2001, Classical Competing Risks (Chapman and Hall/CRC).
* Dai et al. (2016) Dai, Hongsheng, Marialuisa Restaino, and Huan Wang, 2016, A class of nonparametric bivariate survival function estimators for randomly censored and truncated data, Journal of Nonparametric Statistics 28, 736–751.
* De Leonardis and Rocci (2008) De Leonardis, Daniele, and Roberto Rocci, 2008, Assessing the default risk by means of a discrete-time survival analysis approach, Appl. Stoch. Model. Bus. Ind. 24, 291–306.
* Dirick et al. (2017) Dirick, Lore, Gerda Claeskens, and Bart Baesens, 2017, Time to default in credit scoring using survival analysis: A benchmark study, Journal of the Operational Research Society 68, 652–665.
* Dobbie et al. (2021) Dobbie, Will, Andres Liberman, Daniel Paravisini, and Vikram Pathania, 2021, Measuring bias in consumer lending, The Review of Economic Studies 88, 2799–2832.
* Edelberg (2006) Edelberg, Wendy, 2006, Risk-based pricing of interest rates for consumer loans, Journal of Monetary Economics 53, 2283–2298.
* Edelberg (2007) Edelberg, Wendy, 2007, Racial dispersion in consumer credit interest rates, Finance and Economics Discussion Series 2007-28, Board of Governors of the Federal Reserve System (U.S.).
* Einav et al. (2012) Einav, Liran, Mark Jenkins, and Jonathan Levin, 2012, Contract pricing in consumer credit markets, Econometrica 80, 1387–1432.
* Fine and Gray (1999) Fine, Jason P, and Robert J Gray, 1999, A proportional hazards model for the subdistribution of a competing risk, Journal of the American Statistical Association 94, 496–509.
* Frydman and Matuszyk (2022) Frydman, Halina, and Anna Matuszyk, 2022, Random survival forest for competing credit risks, Journal of the Operational Research Society 73, 15–25.
* Fulford (2015) Fulford, Scott L., 2015, How important is variability in consumer credit limits?, Journal of Monetary Economics 72, 42–63.
* Geskus (2011) Geskus, Ronald B., 2011, Cause-specific cumulative incidence estimation and the Fine and Gray model under both left truncation and right censoring, Biometrics 67, 39–49.
* Gross and Souleles (2002) Gross, David B., and Nicholas S. Souleles, 2002, Do liquidity constraints and interest rates matter for consumer behavior? Evidence from credit card data, The Quarterly Journal of Economics 117, 149–185.
* Grunewald et al. (2020) Grunewald, Andreas, Jonathan A Lanning, David C Low, and Tobias Salz, 2020, Auto dealer loan intermediation: Consumer behavior and competitive effects, Working Paper 28136, National Bureau of Economic Research.
* Heidhues and Kőszegi (2016) Heidhues, Paul, and Botond Kőszegi, 2016, Naïveté-based discrimination, The Quarterly Journal of Economics 132, 1019–1054.
* Huang and Wang (1995) Huang, Ying, and Mei-Cheng Wang, 1995, Estimating the occurrence rate for prevalent survival data in competing risks models, Journal of the American Statistical Association 90, 1406–1415.
* Ishwaran et al. (2014) Ishwaran, Hemant, Thomas A. Gerds, Udaya B. Kogalur, Richard D. Moore, Stephen J. Gange, and Bryan M. Lau, 2014, Random survival forests for competing risks, Biostatistics 15, 757–773.
* Kalbfleisch and Prentice (2011) Kalbfleisch, John D, and Ross L Prentice, 2011, The Statistical Analysis of Failure Time Data (John Wiley & Sons).
* Karger (2003) Karger, Howard Jacob, 2003, No deals on wheels: How and why the poor pay more for basic transportation, Journal of Poverty 7, 93–112.
* Keys et al. (2016) Keys, Benjamin J., Devin G. Pope, and Jaren C.
Pope, 2016, Failure to refinance, Journal of Financial Economics 122, 482–499.
* Klugman et al. (2012) Klugman, Stuart A., Harry H. Panjer, and Gordon E. Willmot, 2012, Loss Models: From Data to Decisions, Fourth Edition (John Wiley & Sons, Inc., Hoboken, New Jersey).
* Lautier et al. (2021) Lautier, Jackson, Vladimir Pozdnyakov, and Jun Yan, 2021, Estimating a distribution function for discrete data subject to random truncation with an application to structured finance, arXiv 1, 1–26.
* Lautier et al. (2023) Lautier, Jackson P., Vladimir Pozdnyakov, and Jun Yan, 2023, Pricing time-to-event contingent cash flows: A discrete-time survival analysis approach, Insurance: Mathematics and Economics 110, 53–71.
* Lee et al. (2018a) Lee, Changhee, William Zame, Jinsung Yoon, and Mihaela van der Schaar, 2018a, DeepHit: A deep learning approach to survival analysis with competing risks, Proceedings of the AAAI Conference on Artificial Intelligence 32.
* Lee et al. (2018b) Lee, Minjung, Eric J. Feuer, and Jason P. Fine, 2018b, On the analysis of discrete time competing risks data, Biometrics 74, 1468–1481.
* Lehmann and Casella (1998) Lehmann, E.L., and George Casella, 1998, Theory of Point Estimation, 2nd Edition (Springer).
* Lim et al. (2014) Lim, Younghee, Trey Bickham, Cassie M. Dinecola, Julia Broussard, Brittany E. Weber, and Alethia Gregory, 2014, Payday loan use and consumer well-being: What consumers and social workers need to know about payday loans, Journal of Poverty 18, 379–398.
* Livshits (2015) Livshits, Igor, 2015, Recent developments in consumer credit and default literature, Journal of Economic Surveys 29, 594–613.
* Lusardi and de Bassa Scheresberg (2013) Lusardi, Annamaria, and Carlo de Bassa Scheresberg, 2013, Financial literacy and high-cost borrowing in the United States, Working Paper 18969, National Bureau of Economic Research.
* Manheim (2023) Manheim, 2023, Used vehicle value index, url: https://publish.manheim.com/en/services/consulting/used-vehicle-value-index.html (Accessed: 2023-03-14).
* Melzer (2011) Melzer, Brian T., 2011, The real costs of credit access: Evidence from the payday lending market, The Quarterly Journal of Economics 126, 517–555.
* Mian and Sufi (2012) Mian, Atif, and Amir Sufi, 2012, The effects of fiscal stimulus: Evidence from the 2009 cash for clunkers program, The Quarterly Journal of Economics 127, 1107–1142.
* Morse (2011) Morse, Adair, 2011, Payday lenders: Heroes or villains?, Journal of Financial Economics 102, 28–44.
* Mukhopadhyay (2000) Mukhopadhyay, Nitis, 2000, Probability and Statistical Inference (Marcel Dekker, New York, NY).
* Obama (2010) Obama, Barack, 2010, Remarks by the President at Signing of Dodd-Frank Wall Street Reform and Consumer Protection Act, Office of the Press Secretary, The White House.
* Phillips (2013) Phillips, Robert, 2013, Optimizing prices for consumer credit, Journal of Revenue & Pricing Management 12.
* Pintilie (2006) Pintilie, Melania, 2006, Competing Risks: A Practical Perspective (John Wiley & Sons).
* Pollard et al. (2021) Pollard, Jane, Evelyn Blumenberg, and Stephen Brumbaugh, 2021, Driven to debt: Social reproduction and (auto)mobility in Los Angeles, Annals of the American Association of Geographers 111, 1445–1461.
* Prentice et al. (1978) Prentice, R. L., J. D. Kalbfleisch, A. V. Peterson, N. Flournoy, V. T. Farewell, and N. E. Breslow, 1978, The analysis of failure times in the presence of competing risks, Biometrics 34, 541–554.
* Pressman and Scott (2009) Pressman, Steven, and Robert Scott, 2009, Consumer debt and the measurement of poverty and inequality in the US, Review of Social Economy 67, 127–148.
* R Core Team (2022) R Core Team, 2022, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria.
* Robb et al. (2015) Robb, Cliff, Patryk Babiarz, Ann Woodyard, and Martin Seay, 2015, Bounded rationality and use of alternative financial services, Journal of Consumer Affairs 49, 407–435.
* Rosenbaum (2020) Rosenbaum, Eric, 2020, The used car boom is one of the hottest, and trickiest, coronavirus markets for consumers, url: https://www.cnbc.com/2020/10/15/used-car-boom-is-one-of-hottest-coronavirus-markets-for-consumers.html (Accessed: 2023-03-14).
* Sankaran and Antony (2007) Sankaran, P.G., and Ansa Alphonsa Antony, 2007, Bivariate competing risks models under random left truncation and right censoring, Sankhyā: The Indian Journal of Statistics (2003-2007) 69, 425–447.
* Santander (2017a) Santander, 2017a, Drive Auto Receivables Trust, Prospectus 2017-1, Santander Drive Auto Receivables LLC.
* Santander (2017b) Santander, 2017b, Santander Drive Auto Receivables Trust, Prospectus 2017-2, Santander Drive Auto Receivables LLC.
* Santander (2019a) Santander, 2019a, Drive Auto Receivables Trust, Prospectus 2019-4, Santander Drive Auto Receivables LLC.
* Santander (2019b) Santander, 2019b, Santander Drive Auto Receivables Trust, Prospectus 2019-3, Santander Drive Auto Receivables LLC.
* Schmid and Berger (2021) Schmid, Matthias, and Moritz Berger, 2021, Competing risks analysis for discrete time-to-event data, Wiley Interdisciplinary Reviews: Computational Statistics 13, e1529.
* Securities and Exchange Commission (2014) Securities and Exchange Commission, 2014, 17 CFR Parts 229, 230, 232, 239, 240, 243, and 249, Asset-Backed Securities Disclosure and Registration.
* Securities and Exchange Commission (2016) Securities and Exchange Commission, 2016, 17 CFR §229.1125 (Item 1125) Schedule AL - Asset-level information.
* Stango and Zinman (2011) Stango, Victor, and Jonathan Zinman, 2011, Fuzzy math, disclosure regulation, and market outcomes: Evidence from Truth-in-Lending reform, The Review of Financial Studies 24, 506–534.
* Staten (2015) Staten, Michael, 2015, Risk-based pricing in consumer lending, Journal of Law, Economics & Policy 11, 33–58.
* Stepanova and Thomas (2002) Stepanova, Maria, and Lyn Thomas, 2002, Survival analysis methods for personal loan data, Operations Research 50, 277–289.
* Thackham and Ma (2022) Thackham, Mark, and Jun Ma, 2022, On maximum likelihood estimation of competing risks using the cause-specific semi-parametric Cox model with time-varying covariates – an application to credit risk, Journal of the Operational Research Society 73, 5–14.
* Tutz and Schmid (2016) Tutz, Gerhard, and Matthias Schmid, 2016, Modeling Discrete Time-to-Event Data (Springer).
* U.S. Government Accountability Office (2022) U.S. Government Accountability Office, 2022, Stimulus checks: Direct payments to individuals during the COVID-19 pandemic, url: https://www.gao.gov/assets/gao-22-106044.pdf (Accessed: 2023-03-02).
* Wycinka (2019) Wycinka, Ewa, 2019, Competing risk models of default in the presence of early repayments, Econometrics 23, 99–120.
* Zhang et al. (2019) Zhang, Nailong, Qingyu Yang, Aidan Kelleher, and Wujun Si, 2019, A new mixture cure model under competing risks to score online consumer loans, Quantitative Finance 19, 1243–1253.
* Zingales (2015) Zingales, Luigi, 2015, Presidential address: Does finance benefit society?, The Journal of Finance 70, 1327–1363.

Internet Appendix

### F. Proofs: Section II

###### Proof of Proposition 1.

Statement (i) follows from (ii), so it is enough to show (ii). Let $\Delta+1\leq k\leq\xi$ and observe

$\displaystyle\hat{\lambda}_{\tau,n}^{0i}(k)-\lambda_{\tau}^{0i}(k)$ $\displaystyle=\frac{\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}}{\hat{U}_{\tau,n}(k)}-\frac{f_{*,\tau}^{0i}(k)}{U_{\tau}(k)}$ $\displaystyle=\frac{\\{\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\\}U_{\tau}(k)-f_{*,\tau}^{0i}(k)\hat{U}_{\tau,n}(k)}{\hat{U}_{\tau,n}(k)U_{\tau}(k)}$ $\displaystyle=\bigg{[}\frac{1}{\hat{U}_{\tau,n}(k)U_{\tau}(k)}\bigg{]}\frac{1}{n}\sum_{j=1}^{n}\\{\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}U_{\tau}(k)-f_{*,\tau}^{0i}(k)\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}\\}.$

Define

$H^{0i}_{\tau,k(j)}=\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}U_{\tau}(k)-f_{*,\tau}^{0i}(k)\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})},$

for $1\leq j\leq n$, and

$\mathbf{A}_{\tau,n}=\text{diag}([\hat{U}_{\tau,n}(\Delta+1)U_{\tau}(\Delta+1)]^{-1},\ldots,[\hat{U}_{\tau,n}(\xi)U_{\tau}(\xi)]^{-1}).$

Then,

$\hat{\bm{\Lambda}}_{\tau,n}^{0i}-\bm{\Lambda}^{0i}_{\tau}=\mathbf{A}_{\tau,n}\frac{1}{n}\sum_{j=1}^{n}\begin{bmatrix}H^{0i}_{\tau,\Delta+1(j)}\\\ \vdots\\\ H^{0i}_{\tau,\xi(j)}\end{bmatrix},$

or, letting $\mathbf{H}^{0i}_{\tau,(j)}=(H^{0i}_{\tau,\Delta+1(j)},\ldots,H^{0i}_{\tau,\xi(j)})^{\top}$ denote independent and identically distributed random vectors, we have compactly

$\hat{\bm{\Lambda}}_{\tau,n}^{0i}-\bm{\Lambda}^{0i}_{\tau}=\mathbf{A}_{\tau,n}\frac{1}{n}\sum_{j=1}^{n}\mathbf{H}^{0i}_{\tau,(j)}.$

It is noteworthy that the components of $\mathbf{H}^{0i}_{\tau,(j)}$ are uncorrelated. More specifically,

$\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]=\begin{cases}U_{\tau}(k)f^{0i}_{*,\tau}(k)[U_{\tau}(k)-f^{0i}_{*,\tau}(k)],&k=k^{\prime}\\\ 0,&k\neq k^{\prime}.\end{cases}$ (10)

To see this, first notice that the indicator functions $\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}$ and $\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ are Bernoulli random variables with probability parameters $f_{*,\tau}^{0i}(k)$ and $U_{\tau}(k)$, respectively.
Hence, $\displaystyle\mathbf{E}H^{0i}_{\tau,k(j)}$ $\displaystyle=\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}U_{\tau}(k)-f^{0i}_{*,\tau}(k)\mathbf{E}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ $\displaystyle=f^{0i}_{*,\tau}(k)U_{\tau}(k)-f^{0i}_{*,\tau}(k)U_{\tau}(k)$ $\displaystyle=0.$ Therefore, $\displaystyle\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]={}$ $\displaystyle\mathbf{E}H^{0i}_{\tau,k(j)}H^{0i}_{\tau,k^{\prime}(j)}$ $\displaystyle={}$ $\displaystyle\mathbf{E}\\{\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}U_{\tau}(k)-f_{*,\tau}^{0i}(k)\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}\\}$ $\displaystyle\times\\{\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}U_{\tau}(k^{\prime})-f_{*,\tau}^{0i}(k^{\prime})\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}\\}$ $\displaystyle={}$ $\displaystyle U_{\tau}(k)U_{\tau}(k^{\prime})\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}$ $\displaystyle- U_{\tau}(k)f^{0i}_{*,\tau}(k^{\prime})\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}$ $\displaystyle- U_{\tau}(k^{\prime})f^{0i}_{*,\tau}(k)\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ $\displaystyle+f^{0i}_{*,\tau}(k)f^{0i}_{*,\tau}(k^{\prime})\mathbf{E}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}.$ We proceed by cases. Case 1: $k=k^{\prime}$. Working through each expectation in $\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]$, we have $\displaystyle\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}$ $\displaystyle=\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}$ $\displaystyle=f^{0i}_{*,\tau}(k),$ $\displaystyle\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}$ $\displaystyle=\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ $\displaystyle=\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ $\displaystyle=\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}$ $\displaystyle=f^{0i}_{*,\tau}(k),$ and $\mathbf{E}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}=\mathbf{E}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}=U_{\tau}(k).$ Thus, $\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]=U_{\tau}(k)f^{0i}_{*,\tau}(k)[U_{\tau}(k)-f^{0i}_{*,\tau}(k)].$ Case 2: $k\neq k^{\prime}$. 
Working through each expectation in $\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]$, we have $\mathbf{E}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}=0,$ $\displaystyle\mathbf{E}$ $\displaystyle\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}$ $\displaystyle=\begin{cases}\Pr(X_{j}\leq C_{j},Z_{X_{j}}=i,\min(X_{j},C_{j})=k,Y_{j}\leq k^{\prime}),&k>k^{\prime}\\\ 0,&k<k^{\prime},\end{cases}$ $\displaystyle\mathbf{E}$ $\displaystyle\mathbf{1}_{X_{j}\leq C_{j}}\mathbf{1}_{Z_{X_{j}}=i}\mathbf{1}_{\min(X_{j},C_{j})=k^{\prime}}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}$ $\displaystyle=\begin{cases}0,&k>k^{\prime}\\\ \Pr(X_{j}\leq C_{j},Z_{X_{j}}=i,\min(X_{j},C_{j})=k^{\prime},Y_{j}\leq k),&k<k^{\prime},\end{cases}$ and $\mathbf{E}\mathbf{1}_{Y_{j}\leq k\leq\min(X_{j},C_{j})}\mathbf{1}_{Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})}=\Pr(Y_{j}\leq k\leq\min(X_{j},C_{j}),Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})).$ Thus, Cov $\displaystyle[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]=f^{0i}_{*,\tau}(\min(k,k^{\prime}))\bigg{\\{}$ $\displaystyle-U_{\tau}(\max(k,k^{\prime}))\Pr(X_{j}\leq C_{j},Z_{X_{j}}=i,\min(X_{j},C_{j})=\max(k,k^{\prime}),Y_{j}\leq\min(k,k^{\prime}))$ $\displaystyle+f^{0i}_{*,\tau}(\max(k,k^{\prime}))\Pr(Y_{j}\leq k\leq\min(X_{j},C_{j}),Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j}))\bigg{\\}}.$ However, because of the independence between $Y$ and $(X,Z_{X})$, $\displaystyle U_{\tau}(\max(k,k^{\prime}))$ $\displaystyle=\Pr(Y_{j}\leq\max(k,k^{\prime})\leq\min(X_{j},C_{j}))$ $\displaystyle=\Pr(Y\leq\max(k,k^{\prime}),X\geq\max(k,k^{\prime}),C\geq\max(k,k^{\prime})\mid Y\leq X)$ $\displaystyle=\\{\Pr(Y\leq\max(k,k^{\prime})\leq C)\Pr(X\geq\max(k,k^{\prime}))\\}/\alpha,$ $\displaystyle\Pr$ $\displaystyle(X_{j}\leq C_{j},Z_{X_{j}}=i,\min(X_{j},C_{j})=\max(k,k^{\prime}),Y_{j}\leq\min(k,k^{\prime}))$ $\displaystyle=\Pr(C\geq\max(k,k^{\prime}),Z_{X}=i,X=\max(k,k^{\prime}),Y\leq\min(k,k^{\prime})\mid Y\leq X)$ $\displaystyle=\\{\Pr(X=\max(k,k^{\prime}),Z_{X}=i)\Pr(Y\leq\min(k,k^{\prime}),C\geq\max(k,k^{\prime}))\\}/\alpha,$ $\displaystyle f^{0i}_{*,\tau}(\max(k,k^{\prime}))$ $\displaystyle=\Pr(X=\max(k,k^{\prime}),C\geq\max(k,k^{\prime}),Z_{x}=i\mid Y\leq X)$ $\displaystyle=\\{\Pr(X=\max(k,k^{\prime}),Z_{X}=i)\Pr(Y\leq\max(k,k^{\prime})\leq C)\\}/\alpha,$ and $\displaystyle\Pr$ $\displaystyle(Y_{j}\leq k\leq\min(X_{j},C_{j}),Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j}))$ $\displaystyle=\Pr(Y\leq\min(k,k^{\prime}),C\geq\max(k,k^{\prime}),X\geq\max(k,k^{\prime})\mid Y\leq X)$ $\displaystyle=\\{\Pr(Y\leq\min(k,k^{\prime}),C\geq\max(k,k^{\prime}))\Pr(X\geq\max(k,k^{\prime}))\\}/\alpha$ Therefore, $\displaystyle U_{\tau}$ $\displaystyle(\max(k,k^{\prime}))\Pr(X_{j}\leq C_{j},Z_{X_{j}}=i,\min(X_{j},C_{j})=\max(k,k^{\prime}),Y_{j}\leq\min(k,k^{\prime}))$ $\displaystyle=f^{0i}_{*,\tau}(\max(k,k^{\prime}))\Pr(Y_{j}\leq k\leq\min(X_{j},C_{j}),Y_{j}\leq k^{\prime}\leq\min(X_{j},C_{j})),$ and so $\text{Cov}[H^{0i}_{\tau,k(j)},H^{0i}_{\tau,k^{\prime}(j)}]=0$ when $k\neq k^{\prime}$. This confirms (10). 
Now define $\mathbf{D}^{0i}_{\tau}=\text{diag}\big{(}U_{\tau}(\Delta+1)f^{0i}_{*,\tau}(\Delta+1)[U_{\tau}(\Delta+1)-f^{0i}_{*,\tau}(\Delta+1)],\ldots,U_{\tau}(\xi)f^{0i}_{*,\tau}(\xi)[U_{\tau}(\xi)-f^{0i}_{*,\tau}(\xi)]\big{)},$ and $\bar{\mathbf{H}}_{\tau,n}^{0i}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{H}^{0i}_{\tau,(j)}.$ By the multivariate Central Limit Theorem (Lehmann and Casella, 1998, Theorem 8.21, pg. 61), therefore, $\sqrt{n}(\bar{\mathbf{H}}_{\tau,n}^{0i}-\bm{0})\overset{\mathcal{L}}{\longrightarrow}N(\bm{0},\mathbf{D}^{0i}_{\tau}),\text{ as }n\rightarrow\infty.$ Next, define $\mathbf{V}_{\tau}=\text{diag}(U_{\tau}(\Delta+1)^{-2},\ldots,U_{\tau}(\xi)^{-2})$. By Lemma 1 (Lautier et al., 2023), $\mathbf{A}_{\tau,n}\overset{\mathcal{P}}{\longrightarrow}\mathbf{V}_{\tau}$, as $n\rightarrow\infty$. Thus, by multivariate Slutsky’s Theorem (Lehmann and Casella, 1998, Theorem 5.1.6, pg. 283), $\sqrt{n}(\mathbf{A}_{\tau,n}\bar{\mathbf{H}}_{\tau,n}^{0i})\overset{\mathcal{L}}{\longrightarrow}N(\bm{0},\mathbf{V}_{\tau}\mathbf{D}^{0i}_{\tau}\mathbf{V}_{\tau}^{\top}),\text{ as }n\rightarrow\infty.$ We may complete the proof by observing $\mathbf{V}_{\tau}\mathbf{D}^{0i}_{\tau}\mathbf{V}_{\tau}^{\top}=\bm{\Sigma}^{0i}$ and $\mathbf{A}_{\tau,n}\bar{\mathbf{H}}_{\tau,n}^{0i}=\hat{\bm{\Lambda}}^{0i}_{\tau,n}-\bm{\Lambda}_{\tau}^{0i}$. ∎ ###### Proof of Lemma 1. The classical method dictates first finding a $(1-\theta)$% confidence interval on a log-scale and then converting back to a standard-scale to ensure the estimated confidence interval for the hazard rate, which is a probability, remains in the interval $(0,1)$. By an application of the Delta Method (Lehmann and Casella, 1998, Theorem 8.12, pg. 58), we have for $x\in\\{\Delta+1,\ldots,\xi\\}$ and $i=1,2$, $\sqrt{n}\big{(}\ln\hat{\lambda}_{\tau,n}^{0i}(x)-\ln\lambda^{0i}_{\tau}(x)\big{)}\overset{\mathcal{L}}{\longrightarrow}N\bigg{(}0,\frac{f_{*,\tau}^{0i}(x)\\{U_{\tau}(x)-f_{*,\tau}^{0i}(x)\\}}{U_{\tau}(x)^{3}}\frac{1}{\lambda^{0i}_{\tau}(x)^{2}}\bigg{)}.$ The result follows from (4), the Continuous Mapping Theorem (Mukhopadhyay, 2000, Theorem 5.2.5, pg. 249), the pivotal approach (Mukhopadhyay, 2000, §9.2.2), and converting back to the standard scale. ∎ ### G. Proofs: Section E ###### Proof of Proposition 2. For a loan with initial balance, $B$, monthly interest rate, $r_{a}$, and initial term of $\xi$, the monthly payment, $P$, is $P=B\bigg{[}\frac{1-(1+r_{a})^{-\xi}}{r_{a}}\bigg{]}^{-1}.$ Assume $x\in\\{1,\ldots,\xi\\}$. The balance at month $x$, $B_{x}$ is $\displaystyle B_{x}$ $\displaystyle=B(1+r_{a})^{x}-P\bigg{[}\frac{(1+r_{a})^{x}-1}{r_{a}}\bigg{]}$ $\displaystyle=B(1+r_{a})^{x}-B\bigg{[}\frac{1-(1+r_{a})^{-\xi}}{r_{a}}\bigg{]}^{-1}\bigg{[}\frac{(1+r_{a})^{x}-1}{r_{a}}\bigg{]}.$ (11) Thus, $\rho_{a\mid x}$ is the rate such that the expected present value of the future monthly payments equals $B_{x}$. The payment stream is constant, however, and so $\displaystyle B_{x}$ $\displaystyle=P\bigg{[}\frac{1}{(1+\rho_{a\mid x})}+\cdots+\frac{1}{(1+\rho_{a\mid x})^{\xi-x}}\bigg{]}$ $\displaystyle=B\bigg{[}\frac{1-(1+r_{a})^{-\xi}}{r_{a}}\bigg{]}^{-1}\bigg{[}\frac{1-(1+\rho_{a\mid x})^{-(\xi-x)}}{\rho_{a\mid x}}\bigg{]}.$ Use (11) and solve for $\rho_{a\mid x}$ to complete the proof. ∎ ###### Proof of Lemma 2. The result follows by Proposition 1, part $(i)$ and the Continuous Mapping Theorem (Mukhopadhyay, 2000, Theorem 5.2.5, pg. 249). ∎ ### H. Large Sample Simulation Study We present a simulation study in support of Proposition 1 and Lemma 1. 
###### Proof of Lemma 2.

The result follows by Proposition 1, part $(i)$, and the Continuous Mapping Theorem (Mukhopadhyay, 2000, Theorem 5.2.5, pg. 249). ∎

### H. Large Sample Simulation Study

We present a simulation study in support of Proposition 1 and Lemma 1. Let the true distribution for the lifetime random variable $X$ and the bivariate distribution of $(X,Z_{X})$ be as in Table 4. The column $p(x)$ denotes the probability of a type-1 event given an event at time $x$. This allows us to populate the joint distribution $\Pr(X=x,Z_{X}=i)$ for $i=1,2$. The cause-specific hazard rates then follow from (3), and we also report the all-cause hazard rate in the final column. Notice that, for each $x$,

$p(x)=\frac{\lambda^{01}(x)}{\lambda^{01}(x)+\lambda^{02}(x)}.$

Table 4: Simulation Study Lifetime of Interest Probabilities. The true probabilities of the lifetime random variable, $X$, for the simulation study results of Figure 14. The probabilities $p(x)$ and $\Pr(X=x)$ for $x\in\{1,\ldots,10\}$ are selected at onset, and the remaining probabilities in this table may be derived from these quantities. Not summarized here is the truncation random variable, $Y$, which was assumed to be discrete uniform over the integers $\{1,\ldots,5\}$.

$p(x)$ | $X$ | $\Pr(X=x)$ | $\Pr(X=x,Z_{X}=1)$ | $\Pr(X=x,Z_{X}=2)$ | $\lambda^{01}(x)$ | $\lambda^{02}(x)$ | $\lambda(x)$
---|---|---|---|---|---|---|---
0.66 | 1 | 0.04 | 0.026 | 0.014 | 0.026 | 0.014 | 0.04
0.20 | 2 | 0.06 | 0.012 | 0.048 | 0.013 | 0.050 | 0.06
0.45 | 3 | 0.10 | 0.045 | 0.055 | 0.050 | 0.061 | 0.11
0.87 | 4 | 0.14 | 0.122 | 0.018 | 0.152 | 0.023 | 0.18
0.20 | 5 | 0.09 | 0.018 | 0.072 | 0.027 | 0.109 | 0.14
0.81 | 6 | 0.06 | 0.049 | 0.011 | 0.085 | 0.020 | 0.11
0.05 | 7 | 0.14 | 0.007 | 0.133 | 0.014 | 0.261 | 0.27
0.78 | 8 | 0.18 | 0.140 | 0.040 | 0.379 | 0.107 | 0.49
0.25 | 9 | 0.07 | 0.018 | 0.053 | 0.092 | 0.276 | 0.37
0.42 | 10 | 0.12 | 0.050 | 0.070 | 0.420 | 0.580 | 1.00

For the truncation random variable, we assume $Y$ is discrete uniform with sample space $\mathcal{Y}=\{1,2,3,4,5\}$. This results in $\alpha=0.864$. For the purposes of the simulation, we further assume $\tau=5$. We use the simulation procedure of Beyersmann et al. (2009), modified for random truncation. Specifically,

1. Simulate the truncation time, $Y$.
2. Set the censoring time to be $Y+\tau$.
3. Simulate the event time, $X$.
4. Simulate a Bernoulli trial with success probability $p(x)$ to determine whether the event at time $X$ was of type 1 (with probability $p(x)$) or type 2 (with probability $1-p(x)$).

We simulated $n=10{,}000$ lifetimes using the above algorithm. We then discarded any observations that were truncated (i.e., $Y_{j}>X_{j}$, for $j=1,\ldots,n$). This left a sample of competing risk events subject to censoring, which matches the incomplete data conditions of a trust of securitized loans. We then used the results of Section II.A to estimate $\hat{f}^{0i}_{*,\tau,n}(x)$, $\hat{U}_{\tau,n}(x)$, and $\hat{\lambda}^{0i}_{\tau,n}(x)$ for $i=1,2$ and $x\in\{1,\ldots,10\}$ over $r=1{,}000$ replicates. To validate the asymptotic results of Proposition 1, we compare the empirical covariance matrix against the derived asymptotic covariance matrix, $\bm{\Sigma}^{0i}$, by examining estimates of the confidence intervals using Lemma 1. Figure 14 presents the results for the cause-specific hazard rates for causes 01 and 02, respectively. The empirical estimates and 95% confidence intervals are indistinguishable both from the true quantities obtained via Proposition 1 and from the estimated quantities obtained by applying Proposition 1 with all quantities replaced by their respective estimates from Section II.A. Figure 14: Simulation Study Results.
A comparison of true $\lambda^{0i}_{\tau}(x)$ and estimated $\hat{\lambda}^{0i}_{\tau,n}(x)$, including confidence intervals, for the distribution in Table 4 and $i=1,2$. The "true" values are from Proposition 1 and Lemma 1. The "estimate" values use the formulas from Proposition 1 and Lemma 1 but replace the true values with the estimates from Section II.A calculated from the simulated data. The "empirical" values are confidence intervals and means computed directly from the simulated data. All three quantities are indistinguishable for $n=10{,}000$ and 1,000 replicates, which indicates the asymptotic properties hold in this instance.
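The simulation procedure above is compact enough to reproduce directly. The sketch below (variable names are ours) draws truncated competing-risks data from Table 4, discards the truncated observations, and forms estimates together with the log-scale 95% confidence intervals of Lemma 1, assuming the Section II.A estimators reduce to empirical proportions among the retained observations, whose count plays the role of $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 5
# Table 4: p(x) = Pr(type 1 | event at x) and Pr(X = x), for x = 1..10.
px = np.array([0.66, 0.20, 0.45, 0.87, 0.20, 0.81, 0.05, 0.78, 0.25, 0.42])
fx = np.array([0.04, 0.06, 0.10, 0.14, 0.09, 0.06, 0.14, 0.18, 0.07, 0.12])

def simulate(n):
    Y = rng.integers(1, 6, size=n)          # truncation, uniform on {1,...,5}
    C = Y + tau                             # censoring time fixed at Y + tau
    X = rng.choice(np.arange(1, 11), size=n, p=fx)
    Z = np.where(rng.random(n) < px[X - 1], 1, 2)   # Bernoulli(p(x)) event type
    keep = Y <= X                           # discard left-truncated observations
    return Y[keep], C[keep], X[keep], Z[keep]

Y, C, X, Z = simulate(10_000)
m = len(Y)                                  # retained sample size
T = np.minimum(X, C)
for x in range(1, 11):                      # cause-01 hazard shown; cause 02 is analogous
    U_hat = ((Y <= x) & (x <= T)).mean()                 # at-risk proportion
    f_hat = ((X <= C) & (Z == 1) & (T == x)).mean()      # type-1 event at x
    lam_hat = f_hat / U_hat
    # Lemma 1: Var(ln lam_hat) ~ f(U - f) / (U^3 lam^2 m); convert back to standard scale.
    se_log = np.sqrt(f_hat * (U_hat - f_hat) / U_hat**3) / (lam_hat * np.sqrt(m))
    lo, hi = lam_hat * np.exp(-1.96 * se_log), lam_hat * np.exp(1.96 * se_log)
    print(f"x={x:2d}  lambda01_hat={lam_hat:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```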
1: $B\leftarrow\texttt{bond\_data}$ $\triangleright$ bond_data is a row of the loan performance data
2: $\texttt{bal\_vec}\leftarrow$ each month's sequential outstanding principal balance
3: $\texttt{pmt\_vec}\leftarrow$ each month's sequential actual payment
4: $\texttt{prc\_vec}\leftarrow$ each month's sequential payment applied to principal
5: $\texttt{init\_bal}\leftarrow$ current balance as of the first trust month
6: $\texttt{paid\_princ}\leftarrow\texttt{sum}(\texttt{prc\_vec})$ $\triangleright$ plus a \$10 pad to avoid odd tie behavior
7: if $\texttt{paid\_princ}\geq\texttt{init\_bal}$ then
8:  $D=0$
9:  $R=1$
10:  $C=0$
11:  $X\leftarrow$ location of first zero in bal_vec $\triangleright$ loan repaid
12: else
13:  $z\leftarrow$ starting time of three consecutive zero payments in pmt_vec
14:  if $z$ empty then
15:   $D=0$
16:   $R=0$
17:   $C=1$
18:   $X\leftarrow$ length of pmt_vec $\triangleright$ loan censored
19:  else
20:   $D=1$
21:   $R=0$
22:   $C=0$
23:   $X\leftarrow z$ $\triangleright$ loan defaults
24:  end if
25: end if

Figure 15: Determination of Loan Outcome. Pseudo-code for the algorithm of Section I: a loan is marked repaid ($R=1$) if the total principal received (plus a \$10 pad) covers the initial balance, defaulted ($D=1$) at the first month of three consecutive missed payments, and right-censored ($C=1$) otherwise, with the event time $X$ assigned accordingly.
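For concreteness, the pseudo-code of Figure 15 translates directly into a short routine. The sketch below assumes 1-indexed months and includes the \$10 pad noted in line 6:

```python
def loan_outcome(bal_vec, pmt_vec, prc_vec):
    """Classify a loan as repaid, defaulted, or censored, following the
    algorithm in Figure 15. Returns the (D, R, C) flags and event time X
    (months, 1-indexed). The $10 pad mirrors the note in the pseudo-code."""
    init_bal = bal_vec[0]                    # balance as of first trust month
    paid_princ = sum(prc_vec) + 10           # total principal received, padded
    if paid_princ >= init_bal:               # loan repaid
        X = next((t for t, b in enumerate(bal_vec, start=1) if b == 0),
                 len(bal_vec))               # first month with zero balance
        return dict(D=0, R=1, C=0, X=X)
    # otherwise, search for three consecutive months of missed payments
    z = next((t for t in range(len(pmt_vec) - 2)
              if pmt_vec[t] == pmt_vec[t + 1] == pmt_vec[t + 2] == 0), None)
    if z is None:                            # loan censored at last active month
        return dict(D=0, R=0, C=1, X=len(pmt_vec))
    return dict(D=1, R=0, C=0, X=z + 1)      # loan defaults at first missed month

# Example: a toy 3-month history repaid in month 3.
print(loan_outcome([900, 450, 0], [460, 460, 460], [450, 450, 450]))
```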
# Investigating Vision Foundational Models for Tactile Representation Learning Ben Zandonati1 University of Cambridge <EMAIL_ADDRESS>Ruohan Wang1 Institute for Infocomm Research, A*STAR <EMAIL_ADDRESS>Ruihan Gao Carnegie Mellon University <EMAIL_ADDRESS>Yan Wu Institute for Infocomm Research, A*STAR <EMAIL_ADDRESS>

###### Abstract

Tactile representation learning (TRL) equips robots with the ability to leverage touch information, boosting performance in tasks such as environment perception and object manipulation. However, the heterogeneity of tactile sensors results in many sensor- and task-specific learning approaches. This limits the efficacy of existing tactile datasets, and the subsequent generalisability of any learning outcome. In this work, we investigate the applicability of vision foundational models to sensor-agnostic TRL, via a simple yet effective transformation technique to feed the heterogeneous sensor readouts into the model. Our approach recasts TRL as a computer vision (CV) problem, which permits the application of various CV techniques for tackling TRL-specific challenges. We evaluate our approach on multiple benchmark tasks, using datasets collected from four different tactile sensors. Empirically, we demonstrate significant improvements in task performance, model robustness, as well as cross-sensor and cross-task knowledge transferability with limited data requirements.

## I Introduction

The sense of touch allows humans to feel, understand and ultimately manipulate through physical interaction. It is vital for exploration, object discrimination and fine-grained control, especially where visual perception lacks the resolution to detect surface changes, or is denied entirely. Inspired by the human sense of touch, robotic tactile learning has improved performance in tasks such as object/environment recognition [1, 2], pick-and-place [3] and in-hand manipulation [4]. Tactile representation learning (TRL) leverages machine learning (ML) to make sense of the rich data generated by specialized tactile sensors. Design choices such as sampling resolution, operating conditions and cost result in different tactile sensors adopting distinct sensing mechanisms (e.g. visual signals [5] and barometric signals [6]). Ideally, TRL should be sensor-agnostic, accommodating various data formats of different sensors and able to construct consistent representations of objects and environments. In practice, however, most methods developed are sensor-specific with tailored architectures and data processing routines [e.g. 5, 7, 8, 9]. This siloed approach has multiple limitations. First, individual tactile datasets are usually small due to the high cost of data collection. The tactile representations derived from such small datasets often generalize less well, especially for out-of-distribution data [e.g., 10, 8]. Even calibration differences and expected wear from regular usage present domain shifts detrimental to model performance. Furthermore, the lack of a unifying data format for different tactile sensors makes it difficult to reuse knowledge captured in learned representations. For a new sensor design, the accompanying tactile representation model has to be learned from scratch, along with expensive data collection. All these limit the effectiveness and efficiency of TRL. The above limitations are further highlighted when we contrast TRL with other application domains like computer vision (CV) and natural language processing (NLP).
Both CV and NLP benefit from a unifying input format (images and text respectively), which permits fully shared model architectures for convenient knowledge transfer. In particular, foundational models [11] are trained on massive datasets such as ImageNet [12] and CommonCrawl [13] to derive general representational knowledge, which can be specialized to diverse downstream tasks, such as semantic segmentation [14] in CV, and sentiment analysis [15] in NLP. Foundational models improve the learning efficiency and model robustness of downstream tasks, especially for limited training data [15]. Biologically, the human somatosensory system shares similar neural mechanisms with the visual cortex responsible for processing spatial features [16]. This implies that tactile properties such as texture are largely descriptions of surface spatial properties [17], motivating the question of whether a vision foundational model could be exploited to tackle the aforementioned challenges in TRL. Specifically, we investigate the following:

* • Can vision models be agnostic to data from heterogeneous tactile sensors?
* • Can vision foundational models improve model performance and robustness for TRL?
* • Can a vision architecture facilitate efficient knowledge transfer between downstream learning tasks and models trained on different sensor data?

In this work, we present a unified approach to address the above questions. We first present the use of tactile images as a simple unifying data format for heterogeneous tactile sensory outputs, to encode them as spatial features. This recasts TRL as a vision task, but with different input image sizes for different sensors. We adopt convolutional models [18] as the fully shared architecture for all sensors, exploiting convolution's agnosticity to image sizes. The above construct enables efficient knowledge transfer in multiple ways. First, we show that a foundational vision model pre-trained on natural images can be directly applied to tactile learning tasks by simply performing least-squares regression on the last layer, providing evidence of the connection between visual and tactile perception in a non-biological system. Second, the foundational model can also be fine-tuned into tactile representation models with improved performance and robustness. In particular, we leverage data augmentation to counteract the limited tactile data during fine-tuning. Lastly, we demonstrate that the fine-tuned tactile representation model retains general features to allow cross-task and cross-sensor transfer. To evaluate our proposed approach, we consider multiple benchmark tasks including standard material classification, continual learning for material classification, and detection of fabric composition. We specifically test on data collected from four different sensors, with different data collection procedures, to demonstrate the general applicability of our approach.

#### Contributions

Our key contributions are summarized below:

* • We extensively investigate the feasibility, effectiveness, efficiency and robustness of using a vision foundational model for TRL. We use tactile images as a unified model input, transformed from the output of any tactile sensor.
* • We introduce a new evaluation benchmark for tactile learning, namely fabric composition detection.
* • We contribute two new tactile datasets, including a material classification dataset using the GelSight sensor and a fabric composition dataset using the Contactile sensor.
* • Empirically, we demonstrate that our proposed approach learns robust models for all sensors evaluated and outperforms baseline models tailored to specific sensors.

## II Preliminaries and Related Work

We present three task settings to support the comprehensive evaluation of our proposed approach. The first two tasks are standard benchmarks for TRL, while the third is a novel composition detection task. We also review relevant works.

### II-A Tactile Representation Learning Tasks

#### Material Classification

This is a common benchmark for TRL [e.g. 19, 20, 21, 22, 23, 8, 24]. Similar to image classification, material classification determines the source material measured by a tactile sensor, from a finite number of classes. For example, early research involved classification of the textural information gathered via sliding an electret microphone across the surface of materials [25]. The task remains a standard benchmark amid the rapid development of different sensor designs. A natural extension to standard material classification investigates the learned model's robustness to out-of-distribution data. This includes varying the data length and the moving speed of the tactile sensor (as controlled by a robot). For example, [26] achieved improved robustness to the sensor's movement speed via additional sensing modalities. [8] also proposed a customized spiking neural network to reduce the data length needed for classification.

#### Continual Learning for New Materials

For real-world applications, robots are expected to continuously learn and adapt to novel environments. This also applies to TRL and was investigated in [27, 28], where robots learn new objects continuously by touch. In this work, we similarly extend material classification to the continual learning (CL) [29] setting. Formally, let $\mathrm{D}=\{B_{1},B_{2},\dots,B_{T}\}$ be a data sequence with $B_{t}$ denoting the data for material $t$. We wish to design a CL algorithm $\textrm{Alg}(\cdot)$ in the form of

$(f_{t},M_{t})=\textrm{Alg}(B_{t},f_{t-1},M_{t-1}),$ (1)

where $f_{t}$ is the current classification model after learning the novel material $t$. $f_{t}$ should be capable of classifying all materials observed so far (i.e., $B_{1}$ through $B_{t}$). A small memory buffer $M$ is allowed to store data about previous materials to mitigate model forgetting. $M_{t}$ denotes the current content of the memory buffer. Intuitively, the CL algorithm $\textrm{Alg}(\cdot)$ must learn each material sequentially. It also cannot access training data for previous materials except for those stored in the memory buffer. The algorithm is thus forced to learn new materials on the fly without forgetting its existing knowledge. In contrast, standard material classification learns all materials in $\mathrm{D}$ concurrently and with unlimited access to all data. CL thus represents a more challenging and realistic benchmark.
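A minimal sketch of the CL interface in Eq. (1) is given below; the names are illustrative (not from the paper), and `alg` stands in for $\textrm{Alg}(\cdot)$:

```python
from typing import Any, Callable, List, Tuple

Batch = List[Tuple[Any, int]]       # (tactile image, material label) pairs
Model = Callable[[Any], int]        # classifier over all materials seen so far

def run_continual(alg, batches: List[Batch], f0: Model, M0: Batch) -> Model:
    """Drive the CL protocol of Eq. (1): (f_t, M_t) = Alg(B_t, f_{t-1}, M_{t-1}).
    Alg sees one material's batch at a time and may only retain past data
    through the small memory buffer M."""
    f, M = f0, M0
    for B_t in batches:             # materials arrive sequentially
        f, M = alg(B_t, f, M)       # no access to earlier batches
    return f                        # must classify every material seen
```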
#### Fabric Composition Detection

We introduce a new evaluation benchmark for TRL. Concretely, we design a fine-grained fabric composition detection task, in which the learned tactile model must predict the constituents of a specific fabric material, instead of simply identifying it. This task serves as a more challenging benchmark compared to standard material classification. It also allows us to investigate knowledge transfer between sensors and tasks (e.g., from material classification to constituent detection). We will describe the new dataset collected for this task in Sec. III.

### II-B Existing Methods

There exists a wide range of tactile sensor designs leveraging various sensing modalities, including strain gauges [24], piezo-resistive layers [30], accelerometers [31], capacitive [32], optical [33, 5] and those combining multiple sensing mechanisms [34, 35]. Most tactile learning methods tailor their respective model architectures and learning algorithms to the specific sensors used [e.g., 8, 23, 36, 24]. These existing approaches learn sensor-specific mappings from raw sensor output to some latent representation, and adjust the model size based on the size of the sensor output. These tailored decisions inevitably lead to a siloed state for TRL: the developed models cannot be easily reused for different sensors, even when the desired ML task remains identical. [10] partially addresses the above issues by learning a shared latent representation for two different sensors. This approach demonstrates improved performance compared to independently learning each sensor's data. However, it must still learn sensor-specific mappings from raw data to the shared representation, thus limiting its reuse potential for additional sensors. In contrast, our proposed approach standardises the transformation to map any raw sensor data to tactile images, to be processed by a fully shared ML model. As we will demonstrate in our experiments, our approach grants more flexibility towards knowledge transfer.

## III Sensors and Datasets

We present the sensors and the associated datasets considered in this work. They are intended to validate the general applicability of our approach, and to contextualize the challenge posed by heterogeneous sensors. Each dataset is used for one or more learning tasks described in Sec. II-A.

#### RoboSkin

RoboSkin is a capacitive sensor designed for the iCub [32]. Taunyazov et al. [36] collected a material classification dataset using the RoboSkin on the iCub robot forearm, sweeping across multiple materials without strict control of velocity and exerted forces. This public dataset contains 20 different materials with 50 samples in each class. Each sample contains 75 sensor readings.

#### BioTac

SynTouch BioTac® is a multi-modal tactile sensor using a fluid pressure sensor and a thermistor [37]. Gao et al. [10] released a material classification dataset using the BioTac sensor fitted as an extended end-effector on a KUKA LBR iiwa 14 robot arm, sliding laterally across different materials with controlled speed and contact force. The BioTac-20 dataset contains the same 20 materials as the RoboSkin dataset with 50 samples in each class. Each sample contains 400 readings. A larger BioTac-50 dataset was later released. We contribute two new datasets using alternative sensors. We will release both datasets publicly to support future research in the community.

#### GelSight

GelSight is a camera-based sensor producing images of the contact surface, showing surface geometry and deformation with a soft elastomer [5]. Each reading is an image of $480\times 640$. A material classification dataset consists of 45 materials with 50 samples in each class. As the elastomer is vulnerable to abrasion from sliding motion, data is collected by rolling the sensor locally on material surfaces. The sensor, mounted on a KUKA LBR iiwa 14 robot arm, touches the material surface from above with a 1N force threshold. The sensor is then rotated clockwise by 1 degree, anticlockwise by 2 degrees, and finally clockwise by 1 degree back to the centre position (illustrated in Fig. 2(a)).
(a) GelSight (b) BioTac, RoboSkin, Contactile (c) Setup for Contactile Protocol 1

Figure 2: (a) and (b) are illustrations of the tactile data collection process for different sensors. (c) is the robot setup for Protocol 1 in Contactile data collection.

#### Contactile

The Contactile® sensor uses a soft silicone array based on PapillArray [7]. The sensor measures deflection, force and vibration. We collect the data using two protocols. Protocol 1 is identical to that of the BioTac dataset. In Protocol 2, the sensor is handheld and slid across materials casually, with different contact forces and speeds and along different directions, to mimic more realistic and natural movements. The dataset contains samples collected from 32 fabrics, each composed of up to 6 constituent materials: Linen, Viscose, Cotton, Wool, Polyester and Elastane (see Tab. I for examples). 40 and 10 samples per material are collected for Protocols 1 and 2, respectively. The collection setup is illustrated in Figs. 2(b) and 2(c).

TABLE I: Fabric examples and their composition materials (% by mass)

Material | Image | Linen | Viscose | Cotton | Wool | Polyester
---|---|---|---|---|---|---
Cotton-Linen | | 45 | 0 | 55 | 0 | 0
Poplin | | 0 | 0 | 20 | 0 | 80
Drill Stretch | | 0 | 0 | 100 | 0 | 0
Felt | | 0 | 65 | 0 | 35 | 0

## IV Method

We present a unified approach to tackle heterogeneous sensors and efficient knowledge transfer in TRL. Our approach relies on a unifying format for different sensor data, and exploits convolution's agnosticity to input size to enable fully shared models. These fully shared models in turn enable convenient knowledge transfer. We also discuss data augmentations to counteract limited tactile training data. Lastly, we discuss a continual tactile learning approach as a direct application of knowledge transfer.

### IV-A Tactile Images and Convolutional Architectures

We use simple transformations to convert data generated by various sensors into 2D images, which serve as the unified input format for the subsequent ML models. Specifically, a tactile image transforms tactile sensory output into an encoding of the global geometry of the contact surface. This transformation is inspired by the processing similarities between the human visual cortex and somatosensory system [16], and captures the intuition that significant tactile properties are fundamentally spatial [17]. Camera-based sensors such as GelSight directly capture global surface geometry as images, which can be used as model input directly. However, non-camera-based sensors typically have sparse sensing points that only produce localized signals about the contact surface. To better encode the global surface geometry, we thus require more local samples that span across the contact surface. This can be conveniently achieved by concatenating consecutive vectors from the tactile data stream as the sensor slides over the contact surface. Formally, let $S=\{s_{1},s_{2},\dots,s_{T}\}$ be the data stream produced by a sensor sliding across a surface, where $s_{t}\in\mathbb{R}^{n}$ is a single reading from the sensor. We define a tactile image as a matrix $\textrm{Im}(S)=[s_{j},s_{j+1},\dots,s_{k}]$ for some constants $j,k$. Intuitively, $\textrm{Im}(S)$ leverages the temporal dimension of the tactile data stream to better encode global surface properties (see also Fig. 3 for an illustration).

Figure 3: Tactile image processing for non-camera-based sensors. We note that tactile images of different sensors still have different dimensions.
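A minimal sketch of this transformation is given below; the window bounds $j,k$ and the array layout (rows as sensor channels, columns as time) are our assumptions:

```python
import numpy as np

def tactile_image(stream: np.ndarray, j: int, k: int) -> np.ndarray:
    """Im(S) = [s_j, s_{j+1}, ..., s_k]: stack consecutive sensor readings
    (each a length-n vector) column-wise into an n x (k - j + 1) image."""
    window = stream[j:k + 1]        # shape (k - j + 1, n)
    return window.T                 # shape (n, k - j + 1): one column per reading

# Example: a BioTac-style stream of 19-dimensional readings over 500 steps;
# taking 400 consecutive readings yields a 19 x 400 tactile image, matching
# the pre-processing described in Sec. V.
stream = np.random.randn(500, 19)
img = tactile_image(stream, 0, 399)
assert img.shape == (19, 400)
```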
To achieve fully shared models for knowledge transfer, we thus adopt convolutional architectures such as ResNet [38], since convolution does not require a fixed input size. ResNet is also a representative state-of-the-art model for processing spatial input, including the surface geometry encoded in tactile images.

Figure 4: Tactile image representations for the BioTac, RoboSkin and GelSight sensors for two material classes.

### IV-B Model Training

With tactile images and our chosen model architecture, we effectively recast TRL as a vision task. For training, we minimise the empirical cross-entropy loss

$\operatorname*{arg\,min}_{f}\sum_{(x,y)\in\mathrm{D}}\ell_{ce}(f(x),y)$ (2)

where $f$ is the model and $\ell_{ce}$ is the cross-entropy loss. $\mathrm{D}$ denotes the dataset containing labeled tactile images $(x,y)$. Crucially, we can initialize $f$ with a pre-trained model to enable knowledge transfer. In particular, we may interpret TRL as a downstream task for a vision foundational model on general spatial features. In our experiments, we will demonstrate that a foundational model trained on natural images already robustly encodes the general features required for tactile images.

#### Data Augmentation

As discussed earlier, tactile datasets are typically small due to the high cost of data collection, which stems from the interactivity of the modality and significant sensor wear and tear. Data augmentation is therefore important to mitigate model overfitting, especially for larger architectures like ResNet. We propose to directly apply standard CV augmentations: resizing, cropping, flipping and jittering. We observe that each of these augmentations encodes a meaningful variation to the data collection process, even for non-camera-based sensors. For instance, cropping the tactile images encodes varying the duration of robot motion during data collection. Tab. II lists all chosen augmentations and their interpretation.

TABLE II: Tactile image augmentations and their physical interpretation

Augmentation Technique | Physical Interpretation
---|---
Flipping (along data axis) | Reversing the direction of robot motion.
Resizing (along temporal axis) | Varying the speed of robot motion.
Cropping (along temporal axis) | Varying the duration of robot motion.
Jittering | Simulating sensor noise and drift.

The chosen augmentations are readily accessible from common deep learning frameworks [39] and may be directly applied, as sketched below. We will demonstrate empirically that the augmentations are crucial to model robustness.
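One possible pipeline is shown below, assuming tactile images are laid out with time as the image width (so a horizontal flip reverses the motion direction, and cropping/resizing along the width varies duration and speed); the specific parameter values are illustrative, not the paper's:

```python
import torch
from torchvision import transforms

# Jitter as additive noise, standing in for sensor noise and drift.
add_jitter = transforms.Lambda(lambda x: x + 0.05 * torch.randn_like(x))

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),        # reverse robot motion
    # Random crop + resize: varies both duration (crop length) and speed
    # (rescaling back to a fixed temporal size). The wide aspect-ratio
    # range reflects the elongated 19 x 400 tactile images.
    transforms.RandomResizedCrop(size=(19, 400), scale=(0.5, 1.0),
                                 ratio=(10.0, 30.0)),
    add_jitter,
])

x = torch.randn(3, 19, 400)    # single channel repeated to 3, as in Sec. V
x_aug = augment(x)
```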
### IV-C Continual Tactile Learning

As robots are increasingly expected to work in unstructured environments, continual learning of unordered new percepts is important. Sec. II-A introduced continual learning (CL) of new materials as a natural extension to standard material classification. The two key challenges for CL are: 1) whether robots can learn about new materials on the fly, and 2) whether continual learning can proceed without catastrophic forgetting of current knowledge [40, 41]. We adopt schedule-robust online continual learning (SCROLL) [42] to tackle CL of new materials. We choose SCROLL because the method leverages pre-trained models for efficient knowledge transfer, thus allowing new materials to be learned with limited interaction. In addition, SCROLL is robust to the schedule under which the data is presented (e.g., the order in which each material is learned), a crucial property to ensure model reliability in real-world situations. Using the notation introduced in Eq. (1), we characterize SCROLL as a two-phase process. Given a suitable pre-trained embedding model $\psi$, we first learn an online linear classifier $\phi_{t}$ via recursive least squares [43] as novel material data $B_{t}$ is observed. We then fine-tune the composite model $f_{t}=\phi_{t}\circ\psi$ using the current memory buffer $M_{t}$ to yield $f_{t}^{*}$. Both $f_{t}$ and $f_{t}^{*}$ are valid CL models for all data observed so far, with $f_{t}^{*}$ having a fine-tuned representation based on the observed data. SCROLL uses exemplar selection [44] for updating $M_{t}$. The overall algorithm is presented in Alg. 1.

Algorithm 1 SCROLL (incremental)
Initialization: buffer $M_{0}=\varnothing$, data statistics $c_{y}^{0}=0$, $A_{0}=0$
Input: embedding model $\psi$, next data batch $B_{t}$, current buffer $M_{t-1}$, current data statistics $c_{y}^{t-1},A_{t-1}$
1: $c_{y}^{t},A_{t}=\textrm{RecursiveLeastSquare}(c_{y}^{t-1},A_{t-1})$
2: $\phi_{t}=\textrm{RidgeRegressor}(c_{y}^{t},A_{t})$
3: $f_{t}=\phi_{t}\circ\psi$
4: $M_{t}=\textrm{SelectExemplar}(M_{t-1},B_{t},\psi)$
5: $f_{t}^{*}=\textrm{FineTune}(f_{t},M_{t})$
Return: $c_{y}^{t},A_{t},M_{t},f_{t}$ and $f_{t}^{*}$

where $c_{y},A$ are the necessary data statistics for recursive least squares (see [42] for further details on SCROLL).
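To make the first phase concrete, the sketch below implements an online ridge classifier from running statistics, which is one way to realize the RecursiveLeastSquare and RidgeRegressor steps of Alg. 1 (class and method names are ours; see [42] for the exact formulation):

```python
import numpy as np

class RidgeClassifierRLS:
    """Online ridge classifier from running statistics. Maintaining
    A = sum(phi phi^T) and per-class embedding sums c_y is equivalent to
    ridge regression onto one-hot targets, so new materials can be absorbed
    without revisiting old data."""
    def __init__(self, dim, n_classes, ridge=1e-3):
        self.A = np.zeros((dim, dim))        # second-moment statistics
        self.c = np.zeros((dim, n_classes))  # per-class embedding sums
        self.ridge = ridge

    def update(self, Phi, y):
        """Phi: (batch, dim) embeddings psi(x); y: (batch,) integer labels."""
        self.A += Phi.T @ Phi
        np.add.at(self.c.T, y, Phi)          # c_y += phi for each sample
        return self

    def head(self):
        """Closed-form ridge solution for the linear classifier phi_t."""
        return np.linalg.solve(self.A + self.ridge * np.eye(len(self.A)),
                               self.c)

    def predict(self, Phi):
        return (Phi @ self.head()).argmax(axis=1)
```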
## V Experiments

We evaluate our approach extensively across a wide variety of sensors and tasks, as introduced in Sec. II and III. Our experiments address the following questions:

* • Is our approach generally applicable to heterogeneous tasks and sensors? How does it compare to sensor-specific methods?
* • What are the effects of tactile image augmentation?
* • Does our approach allow efficient knowledge transfer? What are the effects of knowledge transfer?

#### Data Pre-Processing

Following Sec. IV-A, we transform BioTac data into $19\times 400$ images by stacking 400 consecutive vectors. This corresponds to 4 seconds of data. RoboSkin data is transformed into $60\times 75$ images, corresponding to 1.5 seconds of data. Lastly, Contactile data is transformed into $27\times 599$ images, which is 6 seconds of data. We note that the exact size of the temporal dimension is not crucial, since we also leverage random cropping and resizing along the temporal dimension for data augmentation. Since these tactile images only have a single channel, the channel is repeated three times to match the input dimension of the vision foundational model used in the experiments. All tactile images and GelSight data are normalized to the range of $[-1,1]$.

#### Model Architecture and Pre-training

We choose a ResNet-18 pre-trained on MetaDataset [45] as our foundational vision model. It is chosen for its balanced accuracy and computational efficiency. We emphasize that other foundational models may be easily chosen given the trade-off between accuracy and efficiency. We also highlight that all experiments use the identical foundational model without any modification, as our approach allows fully shared models.

### V-A Standard Material Classification

We compare our approach with baseline methods on standard material classification using the BioTac-20, RoboSkin and GelSight datasets. We highlight that the baselines are specifically tailored to the BioTac or RoboSkin sensors, whilst our model is generic.

#### Model Details

Our model is trained for 100 epochs using stochastic gradient descent (SGD). A validation set is employed to schedule the learning rate, mitigating performance plateaus. An initial learning rate of $0.01$ is chosen empirically, with a momentum of $0.9$ and a weight decay of $0.0001$. Five-fold cross-validation is performed for all experiments.

#### Baseline Methods

We compare our approach to a diverse set of methods investigated in [8], including a spiking neural network (SNN), LSTM, regular support vector machine (SVM) and spike-encoded SVM (SVM Spike). Tab. III reports the classification accuracy for all evaluated methods. Our generic ResNet outperforms the baselines by more than 4%, suggesting the viability of our tactile image approach. In addition, the results clearly show that fine-tuning from the foundational model is more advantageous than random initialization. This indicates positive knowledge transfer from the pre-trained model and improved generalization. This is especially visible for the GelSight dataset, owing to the imbalance between the small size of the dataset and the large input dimension. Pre-training also noticeably improves learning efficiency, as reported in Fig. 5. For both the BioTac-20 and RoboSkin datasets, transferring from the foundational model (i.e., with pre-training) achieves higher accuracy with fewer iterations over the training data. Learning efficiency is a desirable property for robots requiring fast adaptation to novel environments.

TABLE III: Material Classification Accuracy (%). Numbers for baseline methods are originally reported in [8]. Pre-train denotes initialization with the foundational vision model.

Method | BioTac-20 | RoboSkin | GelSight
---|---|---|---
SVM | $94.2\pm 0.7$ | $50.5\pm 5.6$ | n.a.
SVM (spikes) | $93.5\pm 1.5$ | $63.3\pm 1.8$ | n.a.
Conv-LSTM | $94.5\pm 1.5$ | $93.5\pm 0.5$ | n.a.
SNN | $94.6\pm 1.3$ | $92.2\pm 0.5$ | n.a.
Least Square w/ Pre-train | $93.8\pm 1.2$ | $84.8\pm 1.3$ | $67.1\pm 0.8$
ResNet (ours) | $98.0\pm 0.3$ | $95.0\pm 0.6$ | $92.9\pm 0.3$
ResNet w/ Pre-train (ours) | $\mathbf{98.9\pm 0.2}$ | $\mathbf{96.0\pm 0.5}$ | $\mathbf{95.1\pm 0.3}$

Figure 5: Test accuracy over the first 20 epochs for both BioTac-20 (red) and RoboSkin-20 (blue), with (solid) and without (dashed) pre-training.

#### Foundational Models and Tactile Images

To better understand the connection between our foundational model and tactile images, we introduce another baseline in Tab. III denoted by “Least Square”. This baseline encodes all tactile images into fixed representations using the pre-trained ResNet, and only learns a least-squares classifier over the fixed representation. The accuracy of this baseline thus directly reflects the usefulness of the pre-trained model for tactile images. Surprisingly, the results show that the foundational vision model trained on natural images already contains the general features required for tactile texture representation, despite the apparent distributional shift. This provides direct support for the connection between visual and tactile perception, resembling the similarities between the human visual cortex and somatosensory system. The results also provide empirical justification for our choice of tactile images as model input.
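This baseline amounts to ridge-regularized least squares on frozen embeddings; a minimal sketch (assuming features have already been extracted with the frozen pre-trained ResNet) follows:

```python
import numpy as np

def least_squares_baseline(feats_train, y_train, feats_test, ridge=1e-3):
    """The 'Least Square' baseline: embeddings from the frozen pre-trained
    model are fixed, and only a linear classifier is fit on top. feats_*:
    (n, dim) arrays; y_train: (n,) integer labels."""
    n_classes = int(y_train.max()) + 1
    Y = np.eye(n_classes)[y_train]                 # one-hot targets
    A = feats_train.T @ feats_train + ridge * np.eye(feats_train.shape[1])
    W = np.linalg.solve(A, feats_train.T @ Y)      # closed-form solution
    return (feats_test @ W).argmax(axis=1)         # predicted labels
```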
### V-B Augmentation and Model Robustness

As noted in Sec. IV-B, data augmentations applied to tactile images may be interpreted as diversifying the conditions of data collection. This is crucial for tactile datasets as they are generally expensive to collect. We investigate the effects of augmentation in the following experiments.

#### Robustness to Sampling Length

For material classification, it is desirable to shorten the sampling length without sacrificing accuracy. This corresponds to classifying randomly cropped tactile images in our formulation. It was also investigated in [8] as a strength of the spiking neural architecture. In Fig. 6, we investigate how random cropping affects classification accuracy over varying data length, and compare our approach to previous methods.

(a) BioTac-20 (b) RoboSkin-20

Figure 6: Test accuracies of our approach with and without random cropping augmentations for varying data length. Baseline methods included for comparison.

The results clearly show that our model with augmentation outperforms the previous methods, achieving higher test accuracy with less data required. For both datasets, ResNet with augmentation is able to accurately classify the materials with about 0.3 seconds of sensor data. As the data length increases, the test accuracy rapidly increases and remains high, suggesting that our model can efficiently accumulate information over short durations while maintaining robustness over longer ones. In addition, Fig. 6 shows that augmentation is crucial for robust performance. The same model trained without augmentation performed the worst among all methods, suggesting overfitting to the original data length and less robust learned features.

#### Robustness to Movement Speed

While some tactile datasets are collected under tightly controlled robot motion, it is preferable that the learned model generalizes to more varied motions. We simulate different speeds of the robot's sliding motion during tactile sensing by sub-sampling the test set data along the temporal axis, and investigate the effects of augmentation on this out-of-distribution test set. Fig. 7 shows that the model trained with random resizing augmentation is robust against varying robot speed, achieving consistent accuracy across different movement speeds. In contrast, the model with no augmentation generalized poorly even with slight speed deviation. The figure also shows that random cropping improves model robustness against varying movement speed.

(a) BioTac-50: Speed (b) RoboSkin-20: Speed

Figure 7: The effects of augmentation with respect to varying robot movement speed during tactile sensing. The x-axis denotes the multiples of the original robot speed.

#### Robustness to Sensor Noise

Similar to the previous experiment, we construct another out-of-distribution test set by injecting random sensor noise. Fig. 8 evaluates the effects of augmentations.

(a) BioTac-50: Noise (b) RoboSkin-20: Noise

Figure 8: The effect on test accuracy with respect to sensor noise. The x-axis denotes the maximum noise level added to tactile images.

(a) BioTac-50 (b) RoboSkin-20 (c) GelSight-45

Figure 9: CL performance across the BioTac-50, RoboSkin-20, and GelSight-45 datasets, for varying buffer sizes. Accuracies from the supervised upper bound and ridge regression are shown to illustrate the performance changes associated with adaptation. With an increasing memory buffer, CL achieves better test accuracy and narrows the gap against standard supervised learning.

Fig. 8 shows that the model trained without random jittering augmentation generalizes poorly to noisy data, especially on the BioTac dataset. This is because the BioTac data was collected under strict conditions, including fixed force and movement speed. The model trained on non-augmented BioTac data thus overfits to the homogeneous data and lacks robustness.
In contrast, RoboSkin data contains more diverse samples since it is collected without strict speed or force control. As reflected in Fig. 8, the non-augmented model trained on RoboSkin data is therefore naturally robust to a low level of sensor noise. However, as the noise level increases, the test accuracy of all non-augmented models still deteriorates rapidly. Fig. 8 also indicates that the model trained with augmentation can withstand significant sensor noise, with the noise level of 0.5 representing a potentially 50% deviation from the intended value range. At this level, the augmented model still retains a test accuracy of 80% for RoboSkin and 73% for BioTac-50. Lastly, we observe that even for the original test set (i.e., noise level = 0), the augmented model still outperforms the non-augmented version, suggesting more robust features learned with augmentation. Overall, we have demonstrated that standard CV augmentations can be directly applied to tactile images to appreciably boost model robustness in various aspects, including sampling length, movement speed and sensor noise. As several of our experiments relied on simulated test data, we will further demonstrate the usefulness of augmentation with real out-of-distribution data in Sec. V-D.

### V-C Continual Tactile Representation Learning

As described in Sec. II-A, we cast material classification in a CL setting, which requires our model to learn each material sequentially. CL enables robots to continuously acquire new tactile experiences, without having to perform expensive retraining from scratch.

#### Model Details

The same foundational vision model is used as the embedding model for Alg. 1. During fine-tuning with memory buffer $M_{t}$, we adopt data augmentation and a cosine learning rate schedule [46] to mitigate overfitting. For all experiments, we perform a 5-fold cross-validation. Fig. 9 shows the CL performance for each dataset over different memory buffer sizes. We report the performance of $f_{t}$ and the fine-tuned $f_{t}^{*}$. We also include the test accuracy of standard material classification as a performance reference. Note that $f_{t}$, obtained via recursive least squares, is equivalent to the least-squares baseline discussed in Sec. V-A. Thanks to the foundational vision model, $f_{t}$ thus guarantees a robust minimum performance level for CL (see red lines in Fig. 9). $f_{t}^{*}$ is obtained by adapting $f_{t}$ with the memory buffer. Its performance improves with larger memory buffers, closing the gap with standard material classification. For BioTac and RoboSkin particularly, the CL performance is comparable with standard supervised learning, using moderate memory buffers of 1500 and 600 samples, respectively. The required memory buffer represents only a fraction of the original datasets, suggesting that our approach also allows efficient and accurate CL of new materials with limited memory requirements.

### V-D Fabric Composition Detection

Introduced in Sec. II-A, fabric composition detection involves predicting the presence of six constituent materials, namely Linen, Viscose, Cotton, Wool, Polyester and Elastane, in different fabrics. A single model is learned to detect the presence of all constituents concurrently, with one prediction head for each constituent. This task is more challenging than standard material classification, due to the "similar feels" of different fabrics. The physical weave of a fabric also contributes to its feel, adding a potential confounding factor for the task.
For this task, the data is collected using the Contactile sensor. As discussed in Sec. III, we deliberately used two protocols for data collection. The training set is collected using strict force and velocity control, while the test set is collected with more natural movements. The test set thus presents a more realistic setting and a clear domain shift with respect to the training data.

#### Model Details

The training procedure is similar to that used for standard material classification. The only change is that the number of training epochs is reduced from 100 to 50. Data augmentations are applied to model training when specified. For evaluation, we consider the average classification score over all constituent materials. For instance, Felt contains Viscose and Wool. The learned model only achieves a score of 1 for predicting precisely these two constituents; any false positive or false negative detection decreases the score by $\frac{1}{6}$. Tab. IV reports the average classification score for different model setups. We investigate knowledge transfer both from the foundational vision model and from models pre-trained on other sensors' data. We also study the effects of data augmentation.

TABLE IV: Fabric Composition Detection Accuracy (%)

Model | Test Accuracy Score
---|---
Least Squares w/ vision Pre-train | $74.2$
Least Squares w/ BioTac Pre-train | $76.1$
ResNet | $76.3$
ResNet + Augmentation | $78.9$
ResNet + Augmentation (BioTac Pre-train) | $80.6$

In Tab. IV, we again leverage a least-squares classifier over a fixed representation to quantify the effectiveness of a pre-trained model. We see that directly applying the foundational vision model achieves 74.2%, while applying the BioTac model obtained in Sec. V-A achieves 76.1%. The result is our first demonstration of successful cross-task and cross-sensor transfer: the BioTac model trained on standard material classification can be directly applied to Contactile data for fabric composition detection. This result demonstrates the general applicability of our approach, and its ability for robust and flexible knowledge transfer. Tab. IV further demonstrates the usefulness of data augmentations on real out-of-distribution data, with augmentation contributing over 2% in test accuracy compared to the non-augmented model. The results validate our physical interpretations of the applied augmentations, showing that the augmented model is indeed more robust against more varied motions. From another perspective, we may also leverage the synthetic data produced by augmentation to reduce the data collection load. This is important if a robot is only allowed limited (exploratory) interaction with environments. Lastly, we remark that the best model is obtained by combining both knowledge transfer and augmentation, achieving 80.6% in test accuracy.
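For clarity, the evaluation score described above can be written as a simple metric over binary constituent vectors; a minimal sketch using the Felt example from Tab. I:

```python
import numpy as np

def composition_score(pred, true):
    """Fabric composition score: pred and true are binary vectors over the
    6 constituents; each false positive or false negative costs 1/6."""
    pred, true = np.asarray(pred), np.asarray(true)
    return 1.0 - np.sum(pred != true) / 6.0

# Constituent order: Linen, Viscose, Cotton, Wool, Polyester, Elastane.
true = [0, 1, 0, 1, 0, 0]                            # Felt: Viscose + Wool
print(composition_score([0, 1, 0, 1, 0, 0], true))   # 1.0: exact prediction
print(composition_score([0, 1, 1, 1, 0, 0], true))   # ~0.833: one false positive
```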
### V-E Observations on the Learned Representation

Results from previous sections suggest robust knowledge transfer across sensors despite the varied sensing mechanisms and data formats. We hypothesize that this could be the result of a learned invariant descriptor of the tactile properties of the contact surfaces. Since the processing of texture in the human somatosensory cortex is a relatively lower-level function, we are interested in understanding whether the lower-level abstractions in the learned model recover similar latent representations for diverse sensor data. Fig. 10 shows the feature activation for different sensors using the Deep Dream technique [47]. This qualitative visualization shows the feature activation generated right after the first block for three ResNets, each fine-tuned on a separate tactile dataset for standard material classification. All three feature activation maps closely resemble one another, suggesting that the learned models indeed recover a consistent representation of tactile properties despite diverse sensing mechanisms. This further supports knowledge transferability between different sensors and related tasks.

(a) Foundational Model (b) GelSight (c) BioTac-50 (d) RoboSkin

Figure 10: Feature activation after block 1 of ResNet. (a) Original feature activation from the foundational CV model. (b), (c), (d) Feature activation after fine-tuning with specific sensor data.

## VI Conclusion

In this work, we presented a foundational model approach to tactile representation learning. In contrast to sensor-specific tactile models, our approach is characterized by a standardized ML pipeline, including a unifying data format for diverse tactile data and fully shared model architecture and learning techniques, all of which are key requirements for foundational models. Further, the experimental results suggest that our approach not only outperforms sensor-specific models, but crucially allows efficient knowledge transfer between models trained on different sensors and tasks, satisfying the remaining property for foundational models. In particular, we demonstrated the connection between visual and tactile perception, showing that foundational vision models trained on natural images can be a readily accessible source of knowledge for tactile representation learning. This also allows us to effectively perform, with the same unified model, downstream tasks which were previously achieved with an array of methods in the literature. We believe that this investigation thus contributes a robust and general approach to tactile representation learning and provides a strong baseline for future research.

## References

* Liu et al. [2017] Huaping Liu, Yupei Wu, Fuchun Sun, and Di Guo. Recent progress on tactile object recognition. _International Journal of Advanced Robotic Systems_ , 14(4):1729881417717056, 2017. doi: 10.1177/1729881417717056. URL https://doi.org/10.1177/1729881417717056. * Luo et al. [2015] Shan Luo, Xiaozhou Liu, Kaspar Althoefer, and Hongbin Liu. Tactile object recognition with semi-supervised learning. In _Intelligent Robotics and Applications: 8th International Conference, ICIRA 2015, Portsmouth, UK, August 24-27, 2015, Proceedings, Part II 8_ , pages 15–26. Springer, 2015. * Costanzo et al. [2020] Marco Costanzo, Giuseppe De Maria, and Ciro Natale. Two-fingered in-hand object handling based on force/tactile feedback. _IEEE Transactions on Robotics_ , 36(1):157–173, 2020. doi: 10.1109/TRO.2019.2944130. * Azulay et al. [2022] Osher Azulay, Inbar Ben-David, and Avishai Sintov. Learning haptic-based object pose estimation for in-hand manipulation with underactuated robotic hands, 2022. URL https://arxiv.org/abs/2207.02843. * Yuan et al. [2017] Wenzhen Yuan, Siyuan Dong, and Edward H. Adelson. Gelsight: High-resolution robot tactile sensors for estimating geometry and force. _Sensors_ , 17(12), 2017. ISSN 1424-8220. doi: 10.3390/s17122762. URL https://www.mdpi.com/1424-8220/17/12/2762. * Fishel and Loeb [2012a] Jeremy A Fishel and Gerald E Loeb. Sensing tactile microvibrations with the biotac—comparison with human sensitivity.
In _2012 4th IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob)_, pages 1122–1127. IEEE, 2012a. * Khamis et al. [2018] Heba Khamis, Raquel Izquierdo Albero, Matteo Salerno, Ahmad Shah Idil, Andrew Loizou, and Stephen J Redmond. Papillarray: An incipient slip sensor for dexterous robotic or prosthetic manipulation–design and prototype validation. _Sensors and Actuators A: Physical_ , 270:195–204, 2018. * Taunyazov et al. [2020] Tasbolat Taunyazov, Yansong Chua, Ruihan Gao, Harold Soh, and Yan Wu. Fast texture classification using tactile neural coding and spiking neural network. In _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 9890–9895, 2020. doi: 10.1109/IROS45743.2020.9340693. * Gupta et al. [2021] Anupam Kumar Gupta, Andrei Nakagawa-Silva, Nathan F. Lepora, and Nitish V. Thakor. Spatio-temporal encoding improves neuromorphic tactile texture classification. _IEEE Sensors Journal_ , 21(17):19038–19046, 2021. doi: 10.1109/JSEN.2021.3087511. * Gao et al. [2020] Ruihan Gao, Tasbolat Taunyazov, Zhiping Lin, and Yan Wu. Supervised autoencoder joint learning on heterogeneous tactile sensory data: Improving material classification performance. In _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 10907–10913, 2020. doi: 10.1109/IROS45743.2020.9341111. * Bommasani et al. [2021] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_ , 2021. * Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In _CVPR09_ , 2009. * Raffel et al. [2020] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551, 2020. * Bardes et al. [2022] Adrien Bardes, Jean Ponce, and Yann LeCun. Vicregl: Self-supervised learning of local visual features. _arXiv preprint arXiv:2210.01571_ , 2022. * Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020. * Hsiao and Gomez-Ramirez [2012] Steven S. Hsiao and Manuel Gomez-Ramirez. _Neural Mechanisms of Tactile Perception_ , chapter 8. 2012. ISBN 9781118133880. doi: https://doi.org/10.1002/9781118133880.hop203008. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118133880.hop203008. * Haindl and Filip [2013] Michal Haindl and Jiří Filip. Visual texture: Accurate material appearance measurement, representation and modeling. 2013. * Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. _Communications of the ACM_ , 60(6):84–90, 2017. * Fishel and Loeb [2012b] Jeremy A Fishel and Gerald E Loeb. Bayesian exploration for intelligent identification of textures. _Frontiers in neurorobotics_ , 6:4, 2012b. * Hoelscher et al. [2015] Janine Hoelscher, Jan Peters, and Tucker Hermans.
Microcanonical and Canonical Fluctuations in Bose-Einstein Condensates – Fock state sampling approach

Maciej Bartłomiej Kruk1,2,*, Dawid Hryniuk1,3, Mick Kristensen4, Toke Vibel4, Krzysztof Pawłowski1, Jan Arlt4, Kazimierz Rzążewski1

1 Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland

*<EMAIL_ADDRESS>

2 Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland

3 Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT, United Kingdom

4 Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark

## Abstract

The fluctuations of the atom number between a Bose-Einstein condensate and the surrounding thermal gas have been the subject of a long-standing theoretical debate. This discussion is centered around the appropriate thermodynamic ensemble to be used for theoretical predictions and the effect of interactions on the observed fluctuations. Here we introduce the so-called Fock state sampling method to solve this classic problem of current experimental interest for weakly interacting gases. A suppression of the predicted peak fluctuations is observed when using a microcanonical rather than a canonical ensemble. Moreover, interactions lead to a shift of the temperature of peak fluctuations for harmonically trapped gases. The absolute size of the fluctuations furthermore depends on the total number of atoms and the aspect ratio of the trapping potential. Due to the interplay of these effects, there is no universal suppression or enhancement of fluctuations.

###### Contents

1. Introduction
2. Statistical description of a Bose gas
3. Fock state sampling method
4. Fluctuations in 1D Bose gases
   4.1 1D box with periodic boundary conditions (ring trap)
   4.2 1D harmonic trap
   4.3 Comparison of canonical and microcanonical fluctuations in 1D
5. Fluctuations in 3D systems
   5.1 Comparison of canonical and microcanonical fluctuations in 3D
   5.2 Microcanonical fluctuations in an interacting 3D gas
6. Conclusion
A. Technical details of the algorithm
B. Recurrence relations used to compute the partition functions
   B.1 Recurrences for the microcanonical partition function in a harmonic trap
   B.2 Recurrence for the canonical partition function in a harmonic trap

## 1 Introduction

Bose-Einstein condensates are at the heart of current efforts to understand complex many-body quantum systems. Typically, investigations of these systems evaluate mean values of a given observable, such as the number of atoms in a given state. Further important insights are often offered by higher moments of the relevant probability distribution, and thus a full statistical description of such systems would be highly desirable. However, the statistics of complex systems is one of the fundamental problems in many areas of physics, and a full description of a gas of bosonic particles, called a Bose gas in this context, is not available. In this paper, we present a new method to study statistical properties of the ideal and weakly interacting Bose gas at equilibrium. This problem has been studied since the 1940s, when E. Schrödinger noticed that the commonly used grand canonical ensemble description of the non-interacting Bose gas leads to absurdly large fluctuations if applied to an isolated system [1]. The question was further examined in the work of R. Ziff, G. Uhlenbeck, and M. Kac [2].
They concluded that the canonical fluctuations do not suffer from the problem noticed by E. Schrödinger and thus showed that the commonly used statistical ensembles are not equivalent with respect to condensate fluctuations. The statistical problem gained renewed interest [3, 4, 5, 6, 7, 8, 9, 10] after 1995, when the first Bose-Einstein condensate in a dilute gas was produced [11, 12]. To reach the necessary temperatures for atomic Bose-Einstein condensation, the gas is isolated as much as possible from its environment. Thus, its statistics should be close to the microcanonical ensemble predictions, and it was shown that the microcanonical fluctuations of the three-dimensional (3D) non-interacting gas are expected to be significantly lower than the canonical fluctuations [8, 9]. Figure 1 outlines the expected atom number fluctuations of a Bose gas calculated using the statistical ensembles discussed above.

For a long time, the problem of condensate fluctuations was an academic one. The first observations were performed in an exotic non-isolated condensate made of light [13], showing grand canonical fluctuations [14]. For atomic condensates, such measurements turned out to be more difficult, since the technical fluctuations of the total number of atoms, due to noise in the cooling process, were much larger than the equilibrium fluctuations of the condensed gas. The situation changed with recent experiments that allowed for unprecedented control of the number of atoms [15, 16]. This enabled the first measurements of the condensate fluctuations [17] and confirmed that the observed fluctuations are closer to the microcanonical predictions [18]. Despite using as many as $5\times 10^{5}$ atoms, the experiments are conducted far from the known asymptotic ($N\to\infty$) predictions for the fluctuations [7, 9]. Moreover, for such a large number of atoms, no exact result for microcanonical fluctuations is available.

Figure 1: Illustration of the fluctuations of the number of ground state atoms in three statistical ensembles as a function of temperature $T$ in units of the canonical critical temperature $T_{c}$. When the temperature of an ultracold Bose gas is lowered towards $T_{c}$, a grand canonical ensemble calculation predicts unphysically large fluctuations (green line). A canonical ensemble calculation does not suffer from this problem and predicts a peak in fluctuations below $T_{c}$ (orange line). These fluctuations decrease as the number of atoms in the thermal states declines for lower temperatures. A microcanonical calculation (blue line) shows the same features but predicts quantitatively lower peak fluctuations, which can be shifted in temperature with respect to the canonical result.

Moreover, the effect of interactions on condensate fluctuations remains a controversial problem. The only solid results are offered by the Bogoliubov approximation. It has been shown that the fluctuations of an interacting condensate, in the limit of large atom numbers, are up to two times smaller than those of the non-interacting gas [19]. The Bogoliubov approximation has been applied to the problem of fluctuations in the canonical [19, 20, 21, 22, 23, 24] and then in the microcanonical ensemble [25, 26]. However, this approach only holds for low temperatures and weak interactions. In particular, it does not apply in the vicinity of the critical temperature, where the maximal fluctuations of the number of condensed atoms are best determined in the experiment.
All other results are either limited to 1D [27] or are sensitive to poorly controlled approximations [10, 28]. Various methods give qualitatively different results, as summarized in Fig. 4 of [17]. In the case of 3D interacting systems, methods capable of studying the statistics at all temperatures exist only for the canonical and grand canonical ensembles. In this paper we introduce a new numerical method, called Fock state sampling, to efficiently study the statistical properties of non-interacting and weakly interacting systems in the canonical ensemble and in the most restrictive, microcanonical ensemble. The method generates a set of representative configurations in a well-chosen configurational space. By appropriate post-selection, we interpolate smoothly between the microcanonical and canonical ensembles. For the non-interacting gas, microcanonical results are obtained for as many as $10^{5}$ atoms in a spherically symmetric trap.

The paper is organized as follows. In Sec. 2 we recall the basics of statistical ensembles and formulate our problem. The Fock state sampling method is described in Sec. 3. Section 4 presents our results in one-dimensional systems, mostly to benchmark our findings with known results. We study the Bose gas in a box potential with periodic boundary conditions and in a harmonic trap. In particular, comparisons with exact findings and with other approximate methods (such as the Bogoliubov and classical field approximations) are presented. The results obtained in three-dimensional systems are discussed in Sec. 5. This includes the regime between the canonical and the microcanonical ensemble and the interaction-induced shift of the condensate fluctuations in both ensembles. We summarize our findings in Sec. 6.

## 2 Statistical description of a Bose gas

In statistical mechanics, the systems of interest at finite temperature are described by statistical ensembles, following Gibbs [29]. Such an ensemble is just a collection of copies of the system that fulfills the appropriate conditions. The statistical ensembles are defined by a set of control parameters that correspond to physical constraints imposed on the system. Conceptually, the simplest is the microcanonical ensemble, for which the control parameters are the total number of particles $N$, the total energy $E$, and the volume $V$. For the commonly used harmonic trap, this volume is replaced by the trap frequency. This ensemble describes the statistical properties of a fully isolated system, both in terms of exchange of particles and of energy. In its quantum version, the properties of the system in a microcanonical ensemble are determined by the partition function $\Gamma(E,N)$, equal to the number of states of $N$ atoms with total energy $E$. Next in complexity is the canonical ensemble, which also describes a fixed number of particles $N$ but assumes contact with a thermal reservoir imposing a temperature $T$. As a consequence, the energy of the system fluctuates. The least constraining is the grand canonical ensemble, which assumes contact not only with a heat bath but also with a reservoir of particles. The corresponding control parameter $\mu$ is called the chemical potential and plays a role analogous to temperature with respect to particle number. Paradoxically, from the point of view of computational complexity, the order of the ensembles appears inverted, with the grand canonical being the easiest and the microcanonical the most difficult to use.
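To make the microcanonical counting concrete, the sketch below (our own illustration in Python, not the implementation used for the results in this paper) enumerates $\Gamma(N,E)$ for the 1D harmonic trap by brute force, with energies in units of $\hbar\omega$ and the ground-state energy shifted to zero; the function name `gamma_micro` and the recursive strategy are ours. For realistic system sizes, the recurrence relations of Appendix B are used instead.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def gamma_micro(n_atoms: int, energy: int, max_level: int) -> int:
    """Count the occupation configurations of n_atoms indistinguishable
    bosons on the 1D harmonic-trap levels 0..max_level (unit spacing,
    ground state at 0) whose total energy equals `energy`."""
    if n_atoms == 0:
        return 1 if energy == 0 else 0
    if max_level < 0:
        return 0
    total = 0
    for occ in range(n_atoms + 1):        # occupation of the top level
        cost = occ * max_level
        if cost > energy:
            break
        total += gamma_micro(n_atoms - occ, energy - cost, max_level - 1)
    return total

# Gamma(N, E): no level above E can be occupied, so truncate there.
print(gamma_micro(5, 8, 8))   # 18 microstates for N = 5 atoms at E = 8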
In this paper, we consider statistical properties of $N$ ultracold trapped bosonic atoms. The Hamiltonian is $\hat{H}=\int\,{\rm d}^{3}r\,\hat{\Psi}^{\dagger}(\bm{r})\hat{h}\hat{\Psi}(\bm{r})+\frac{g}{2}\int\,{\rm d}^{3}r\,\hat{\Psi}^{\dagger}(\bm{r})\hat{\Psi}^{\dagger}(\bm{r})\hat{\Psi}(\bm{r})\hat{\Psi}(\bm{r}),$ (1) where $\hat{\Psi}(\bm{r})$ is a bosonic annihilation operator, $\hat{h}=-\frac{\hbar^{2}\Delta}{2m}+V(\bm{r})$ is the single-particle Hamiltonian of atoms placed in a trapping potential $V(\bm{r})$, and $g$ is a coupling constant related to short-range interactions. Note that we consider only repulsive contact interactions, that is $g>0$. We consider both a box potential of length $L$ with periodic boundary conditions, i.e. $V=0$, and the more common harmonic potential $V(x,\,y,\,z)=\frac{1}{2}m\left(\omega_{\perp}^{2}(x^{2}+y^{2})+\omega_{z}^{2}z^{2}\right)$, where we limit our considerations to at most two different frequencies, $\omega_{\perp}$ and $\omega_{z}$, responsible for radial and longitudinal confinement, respectively. In the following we refer to $\lambda=\omega_{\perp}/\omega_{z}$ as the aspect ratio. In particular, the results for a non-interacting trapped Bose gas do not depend on the explicit values of the trap frequencies, but solely on the parameter $\lambda$. In what follows, whenever we discuss a gas in a harmonic trap we shift the spectrum such that the ground state energy of the non-interacting gas equals zero. A convenient basis is spanned by the Fock states $|\bm{N}\rangle:=|N_{0},\,N_{1}\ldots\rangle$, where $N_{j}$ are the occupations of the orbitals $\phi_{j}(\bm{r})$, and we choose $\phi_{j}(\bm{r})$ as the eigenstates of a single particle trapped in the potential $V(\bm{r})$. In this basis, the field operator $\hat{\Psi}(\bm{r})$ is $\hat{\Psi}(\bm{r})=\sum_{j=0}^{\infty}\phi_{j}(\bm{r})\,\hat{a}_{j},$ (2) where $\hat{a}_{j}$ ($\hat{a}_{j}^{\dagger}$) are the bosonic annihilation (creation) operators of atoms in the $j$-th orbital. The lowest energy state, denoted with $\phi_{0}(\bm{r})$, is the ground state in the non-interacting case. Its occupation $\hat{N}_{0}=\hat{a}_{0}^{\dagger}\hat{a}_{0}$ fluctuates in all ensembles. The fluctuations are given by $\Delta^{2}N_{0}={\rm Tr}\left\{\hat{N}_{0}^{2}\hat{\rho}\right\}-\left({\rm Tr}\left\{\hat{N}_{0}\hat{\rho}\right\}\right)^{2},$ (3) where $\hat{\rho}$ is the density matrix of the gas at equilibrium. The definition of the density matrix depends on the chosen ensemble, as given below.

* The microcanonical density matrix is $\hat{\rho}_{\rm micro}=\frac{1}{\Gamma\left(N,E\right)}\,\,\delta\left(\hat{H}-E\right),$ (4) where $\delta$ is the Dirac delta function and $\Gamma\left(N,E\right)$ is the microcanonical partition function, namely the number of ways to distribute $N$ atoms between energy levels such that the total energy is $E$.

* In the canonical ensemble, the density matrix is defined as $\hat{\rho}_{\rm cano}=\frac{1}{Z\left(N,\beta\right)}e^{-\beta\hat{H}},$ (5) where $Z$ is the canonical partition function $Z(N,\beta):={\rm Tr}\left\{e^{-\beta\hat{H}}\right\}$, $\beta:=1/(k_{\rm B}T)$, and $k_{\rm B}$ is the Boltzmann constant.

The partition function restricted to excited states and excited atoms only is very useful for the computation of the statistics of a condensate. It is denoted with $\Gamma_{\rm ex}\left(N_{\rm ex},E\right)$ and $Z_{\rm ex}\left(N_{\rm ex},\beta\right)$ in the microcanonical and canonical ensembles, respectively.
In particular, $\Gamma_{\rm ex}\left(N_{\rm ex},E\right)$ is the number of ways to distribute $N_{\rm ex}$ atoms between excited energy levels such that the total energy still equals $E$. In the microcanonical ensemble, the probability that there are $N_{0}$ atoms in the condensate is thus $p_{\rm micro}\left(N_{0},\,N,\,E\right):=\Gamma_{\rm ex}\left(N-N_{0},E\right)/\Gamma\left(N,E\right).$ (6) Analogously, in the canonical ensemble we define $Z_{\rm ex}\left(N_{\rm ex},\beta\right):=\sum_{N_{1}+N_{2}+\ldots=N_{\rm ex}}\langle N_{0},\,N_{1},\,\ldots|e^{-\beta\hat{H}}|N_{0},\,N_{1},\,\ldots\rangle,$ (7) where the sum is over all Fock states with exactly $N_{\rm ex}$ atoms in all excited energy levels together. The probability of having $N_{0}$ atoms in the canonical ensemble is then $p_{\rm cano}(N_{0},\,N,\,\beta):=Z_{\rm ex}\left(N-N_{0},\,\beta\right)/Z\left(N,\,\beta\right).$ (8) In principle, given the probabilities $p_{\rm micro}$ and $p_{\rm cano}$ one could compute the average number of ground state atoms and their fluctuations using $\displaystyle\langle N_{0}\rangle_{\rm ens}=\sum_{N_{0}=0}^{N}p_{\rm ens}\left(N_{0},N,E\right)\,N_{0}$ (9) $\displaystyle\left(\Delta^{2}N_{0}\right)_{\rm ens}=\sum_{N_{0}=0}^{N}p_{\rm ens}\left(N_{0},N,E\right)\,N_{0}^{2}-\langle N_{0}\rangle_{\rm ens}^{2}$ (10) where the subscript ${\rm ens}$ indicates the ensemble, labeled ${\rm micro}$ and ${\rm cano}$ in the following. Computations in the canonical and the microcanonical ensembles are thus conceptually the same. However, while there are efficient methods for calculating the probabilities $p_{\rm cano}$ in the canonical ensemble [6, 5], the analogous calculations are much more demanding for the microcanonical ensemble. Apart from a few exceptional cases, calculations in the microcanonical ensemble are hence restricted to small systems. Accurate modelling of experiments therefore requires the development of new computational methods. Below we describe a new numerical method that allows us to perform reliable calculations in both statistical ensembles with up to $10^{5}$ atoms, without reconstructing the probability distributions $p_{\rm micro}$ and $p_{\rm cano}$.

## 3 Fock state sampling method

In practice, the Fock state sampling (FSS) method is a realization of the Metropolis algorithm, widely used in Monte Carlo simulations [30]. The algorithm generates a Boltzmann-distributed random walk in the space of possible configurations of the system. The set of visited points is interpreted as a collection of microstates of the system representing the canonical ensemble. Importantly, we can also generate points representing the microcanonical ensemble by post-selection from the collection of states. The final collection can be used to compute average values of physical quantities, as is done in the statistical ensembles. Moreover, the method is applicable to weakly interacting Bose gases in the canonical and microcanonical ensembles, as described below. Our earlier application of the Metropolis algorithm was performed in the framework of the classical field approximation (CFA) [31]. However, in the CFA a fraction of an atom can flow from orbital to orbital due to the lack of discretization. This leads to an unavoidable dependence of the results on the energy cut-off, which is analogous to the ultraviolet catastrophe of black body radiation prior to M. Planck's introduction of photons. In our novel FSS method, the algorithm walks among all the Fock states satisfying $\sum N_{j}=N$.
In practice, we also truncate the orbitals at some high value, but now the results are cut-off independent, provided that the cut-off is sufficiently high. The novelty, apart from the choice of the underlying set of states, rests in the definition of the steps of the random walk. In this respect, the Metropolis algorithm is very flexible: one obtains the correct results provided that the whole space of states is accessible and that the steps satisfy the condition of detailed balance [30]. The FSS method is described in detail below, starting with the non-interacting gas case. Our walk prescription is physically motivated and quickly provides representative collections of the ensemble copies. In this algorithm every atom has an equal chance of jumping away from its occupied mode, that is, the probability of a jump from a given orbital is proportional to its occupation. The orbital which the atom jumps to is drawn in proportion to its occupation (the well-known Bose-enhancement factor) plus one, which represents spontaneous transitions. This is analogous to A. Einstein's discussion of the $\mathcal{A}$ and $\mathcal{B}$ coefficients for the spontaneous and stimulated emission processes. The probability that an atom jumps from the $j$-th orbital to the $k$-th orbital is thus proportional to $N_{j}(N_{k}+1)$. (Notice that if $j=k$, the probability is proportional to $N_{j}^{2}$, i.e. we account for the fact that the atom was first removed and then returns.) The new configuration is accepted and added to the set of copies representing the canonical ensemble if a drawn random number $r\in[0,1]$ is smaller than $\exp\left(-\beta(E_{\rm f}-E_{\rm i})\right)$, where $E_{\rm i}$ and $E_{\rm f}$ denote the energy of the Fock state at the beginning of the step and the energy of the candidate Fock state, respectively. A further acceleration of the algorithm is described in Appendix A. Since the initial state can be arbitrary, a number of steps must be made before the random walk begins to represent the ensemble. This initial part of the walk, during which the system "thermalizes" (also referred to as "burn-in" in the Markov chain Monte Carlo literature), is rejected from the analysis for practical reasons.

We extend this method beyond the non-interacting gas by considering weak contact interactions. In this case, the energy in the Boltzmann factor includes not only the kinetic and potential energy in the trap but also the contribution of the interaction energy. The mean value of the interaction energy, given by the last term in Eq. (1), in a Fock state is $E_{\rm int}=\frac{g}{2}\int\,{\rm d}^{3}r\,\langle N_{0},\,N_{1}\ldots|\hat{\Psi}^{\dagger}(\bm{r})\hat{\Psi}^{\dagger}(\bm{r})\hat{\Psi}(\bm{r})\hat{\Psi}(\bm{r})|N_{0},\,N_{1}\ldots\rangle,$ (11) and resembles the lowest-order perturbation term. In the case of a harmonic trap, the condensate wave function of the interacting gas differs from the simple Gaussian solution of the single-particle ground state. Nonetheless, we only consider the statistics of the population in the lowest orbital in the present version of the method, even in the interacting case. Therefore, we restrict our consideration to very weak interactions. Note, however, that for a box with periodic boundary conditions this additional complication does not arise, and $\phi_{0}(\bm{r})=1/\sqrt{V}$ remains the condensate wave function even in the presence of interactions.
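To illustrate a single step of this walk for the non-interacting gas, here is a minimal NumPy sketch (our own illustration; the variable names are ours). It uses the plain Metropolis acceptance rule, which suffices because the weights $N_{i}$ and $(N_{k}+1)$ make this proposal symmetric; the $\gamma$-weighted proposal and the corresponding Hastings correction of Appendix A are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fss_step(occ, energies, beta):
    """One Fock-state sampling move for the non-interacting gas.
    occ      -- integer occupations N_j of the single-particle orbitals
    energies -- single-particle energies E_j (cut off at some high level)
    beta     -- inverse temperature 1/(k_B T)
    """
    n = occ.sum()
    i = rng.choice(occ.size, p=occ / n)        # leave orbital i with prob ~ N_i
    w = occ + 1.0                              # Bose enhancement + spontaneous term
    j = rng.choice(occ.size, p=w / w.sum())    # land on orbital j with prob ~ N_j + 1
    if rng.random() < np.exp(-beta * (energies[j] - energies[i])):
        occ[i] -= 1                            # accept: move one atom i -> j
        occ[j] += 1
    return occ

# Example: N = 100 atoms on 1D harmonic levels 0..199; the first part of
# the walk ("burn-in") would be discarded before collecting statistics.
levels = np.arange(200, dtype=float)
occ = np.zeros(200, dtype=int)
occ[0] = 100
for _ in range(50_000):
    occ = fss_step(occ, levels, beta=0.2)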
Once a set of states representing the canonical ensemble is generated, we can perform a post-selection routine by choosing a subset that satisfies additional constraints. In particular, one can reduce the set all the way to the microcanonical ensemble by restricting the spread of energies. This is done by introducing a shrinking energy window around the mean energy and removing the states with energies outside this window. In the limiting case where the width of the energy window approaches zero, typically only a few states remain. To avoid a large statistical error in this case, we determine the microcanonical values by extrapolating the results from wider windows. Further details of this procedure are given in Sec. 5 for an experimentally relevant case. We extract the important quantities for the discussion of the fluctuations in a Bose-Einstein condensate as follows. The mean number of condensed atoms is given by $\bar{N}_{0}:=\frac{1}{W}\sum_{i=1}^{W}N_{0,\,i},$ (12) and its variance is $\Delta^{2}N_{0}:=\frac{1}{W}\sum_{i=1}^{W}\left(N_{0,\,i}\right)^{2}-\bar{N}_{0}^{2},$ (13) where $W$ is the number of states generated by our algorithm and $N_{0,\,i}$ is the occupation of the ground state in the $i$-th copy. To obtain results for the microcanonical ensemble we use the same formulas (12) and (13), but we restrict the sum to the Fock states that remain after the post-selection described above and in Sec. 5. In relation to recent experiments it is furthermore of great interest to evaluate the ratio $S:=\frac{\max_{E}\left(\Delta^{2}N_{0}\right)_{\rm micro}}{\max_{T}\left(\Delta^{2}N_{0}\right)_{\rm cano}},$ (14) between the peak fluctuations in the microcanonical and canonical ensembles. In particular, when $S$ differs significantly from unity, measurements can be used to identify the appropriate ensemble, an approach that was recently used to demonstrate the microcanonical nature of Bose-Einstein condensation in ultracold gases [18]. Figure 1 illustrates these peak fluctuations in both ensembles.

## 4 Fluctuations in 1D Bose gases

In a first step we apply the Fock state sampling method in a one-dimensional setting. This has the advantage that a number of available solutions allow for a more comprehensive benchmarking of the numerical method than in the three-dimensional case. In particular, we analyze two distinct trapping geometries in 1D. First a ring trap corresponding to a 1D box with periodic boundary conditions is analysed, and then a harmonic trap is discussed. In both cases we compare the fluctuations with and without interactions. After benchmarking our method in the canonical ensemble, we use it to discuss the microcanonical case.

### 4.1 1D box with periodic boundary conditions (ring trap)

Figure 2 shows the fluctuations of a Bose gas in a ring trap obtained from canonical ensemble calculations using several different approaches. This includes results based on exact counting statistics [5], the classical field approximation [31], the Bogoliubov approximation, and the Fock state sampling method presented here.

Figure 2: Fluctuations of a non-interacting Bose gas containing $N=100$ atoms in a 1D ring trap. The variance of $N_{0}$ as a function of temperature in 1D in the canonical ensemble is obtained from several different approaches: the FSS method, the classical field approximation, the Bogoliubov approach (BOA) and an exact method (see text). In addition a microcanonical calculation using the FSS method is presented, showing the clear reduction of the expected fluctuations in this ensemble.
The temperature $T$ is given in units of $2\pi^{2}\hbar^{2}/(mk_{\rm B}L^{2})$.

For the non-interacting gas, our present FSS method, as well as the classical field approximation with a well-chosen cut-off [32, 33], perfectly reproduce the exact result, which is known analytically in this case [33]. Moreover, we also find good agreement with the result based on the Bogoliubov approximation within its range of validity at low temperatures. This validates our method for the case of the non-interacting Bose gas in 1D. Moreover, we apply the post-selection process to the FSS method to evaluate the fluctuations in the microcanonical ensemble. This shows a clear reduction of the fluctuations with respect to the canonical expectation.

Figure 3: Fluctuations of a weakly-interacting Bose gas containing $N=100$ atoms in a 1D ring trap. The variance of $N_{0}$ as a function of temperature is obtained from several different approaches: the FSS method, the classical field approximation, and the Bogoliubov approach. A microcanonical calculation shows a significant suppression of the fluctuations. The exact canonical result from Fig. 2 is shown for comparison. (inset) Variance at a low temperature $T=5$ (orange, axis on the right) and at the temperature of maximal fluctuations (blue, axis on the left) as a function of the interaction strength $g$, obtained with the FSS method in the canonical ensemble. The arrows indicate the appropriate axis. The interaction strength $g$ and temperature $T$ are given in units of $2\hbar^{2}\pi^{2}/(mL)$ and $2\pi^{2}\hbar^{2}/(mk_{\rm B}L^{2})$, respectively.

We now include weak interactions, as shown in Fig. 3. No exact canonical results are available for this case, but reliable results can be obtained within the Bogoliubov approximation, which is valid for low temperatures and weak interactions. The results based on the Bogoliubov approximation show the expected suppression of fluctuations at low temperature, in good agreement with our FSS method computation and earlier results based on the classical field approximation [33]. However, the FSS results show that this suppression is not general: the fluctuations surpass the non-interacting case at higher temperature. Note that this result is a significant change of paradigm. The reduction of fluctuations due to interactions for a Bose gas in a box was stressed in a number of papers that relied on the Bogoliubov approximation [19]. On the basis of Fig. 3 it is now clear that this prediction is limited to low temperatures only. In particular, Fig. 3 shows that the fluctuations of an interacting gas are larger near their maximal value, as our canonical results exceed the exact non-interacting prediction. This is further corroborated in Fig. 3 (inset), which shows the fluctuations as a function of the interaction strength $g$ in two regimes. At a low temperature $T=5$ (orange) the fluctuations decrease as a function of interaction strength, as predicted within the Bogoliubov approximation. However, at the temperature of the peak fluctuations they increase as a function of interaction strength. This effect is even more pronounced in a microcanonical calculation, which can be seen by comparing the microcanonical results in Fig. 2 and Fig. 3. Both results are strongly reduced with respect to their corresponding canonical results. In addition, the weakly interacting microcanonical result lies considerably below the non-interacting one at low temperatures.
At higher temperatures the effect reverses and the weakly-interacting microcanonical variance is larger. This closely resembles the canonical case, which is discussed further in Sec. 4.3.

### 4.2 1D harmonic trap

As a next step we extend our analysis to the experimentally more relevant case of a 1D harmonic trap. For the non-interacting gas, the probability distribution of finding $N_{0}$ atoms in the ground state in the canonical ensemble is given by $p_{\rm cano}(N_{0},\,N,\,\beta)=e^{-\beta(N-N_{0})\,\hbar\omega}\prod_{n=N-N_{0}+1}^{N}\left(1-e^{-\beta\,n\,\hbar\omega}\right).$ (15) The exact result for the fluctuations based on this distribution is shown in Fig. 4. Moreover, we show the result based on the Bogoliubov approximation, which again agrees with the exact result within its range of validity at low temperatures. Importantly, the FSS method perfectly reproduces the exact result at all temperatures in the 1D harmonic trap. Based on this agreement, we again include interactions in our analysis. Figure 4 includes the fluctuations of the interacting gas based on the Bogoliubov approximation, which is valid for low temperatures at weak interactions. In clear contrast with the previous case, this analysis shows that the fluctuations increase at low temperatures due to the interactions. The results from our FSS method analysis provide an explanation for this behaviour. At low temperature it agrees well with the results based on the Bogoliubov approximation. However, for higher temperatures the primary effect of interactions is a shift of the temperature of peak fluctuations. This indeed qualitatively corresponds to an expected shift of the critical temperature due to interactions in the system. The peak value of the fluctuations remains almost unchanged, staying within the range of statistical errors of the FSS method results. Thus the apparent increase of fluctuations at low temperature is primarily caused by the shift of the critical temperature.

Figure 4: Fluctuations of a Bose gas containing $N=100$ atoms as a function of temperature in a 1D harmonic trap. The fluctuations with and without interactions are computed with several methods: the FSS method, the Bogoliubov approach, and exact methods for the non-interacting gas. Interactions lead to a shift of the temperature of peak fluctuations, resulting in increased fluctuations at low temperature. In addition, calculations using a microcanonical ensemble show the significantly lower fluctuations expected in this case. The interaction strength $g$ and temperature $T$ are given in units of $\sqrt{\hbar^{3}\omega/m}$ and $\hbar\omega/k_{\rm B}$, respectively.

Now we turn our attention to the microcanonical ensemble. In this case, there are no closed formulas for the condensate fluctuations, even for the non-interacting gas. Instead, one may find the ground state statistics using recurrence relations. Here, one can benefit from the fact that the statistical problem for the 1D harmonic trap is directly related to the classic combinatorial problem of the number of partitions of an integer. The relevant combinatorial quantity in this problem is the number of partitions of an integer $E$ into a sum of $N_{\rm ex}\leq N$ strictly positive numbers. For the 1D harmonic trap this number is nothing else but $\Gamma_{\rm ex}(N_{\rm ex},\,E)$ introduced above, where the integer $E$ is the energy expressed in harmonic oscillator units.
For example, $\Gamma_{\rm ex}(N_{\rm ex},\,N_{\rm ex}+1)$ equals $1$, as there is only one way to write the number $N_{\rm ex}+1$ as a sum of $N_{\rm ex}$ positive integers: a single integer equal to $2$ and $N_{\rm ex}-1$ integers equal to $1$. In general, the value $\Gamma_{\rm ex}(N_{\rm ex},\,E)$ may be obtained using the recurrence relation $\Gamma_{\rm ex}(N_{\rm ex},\,E)=\Gamma_{\rm ex}(N_{\rm ex},\,E-N_{\rm ex})+\Gamma_{\rm ex}(N_{\rm ex}-1,\,{E}-1).$ (16) The first term in this relation, $\Gamma_{\rm ex}(N_{\rm ex},\,E-N_{\rm ex})$, is the number of partitions in which the number $1$ does not appear. Indeed, if we subtract $1$ from each element of such a partition, there still remain $N_{\rm ex}$ non-zero integers, only now their sum is ${E}-N_{\rm ex}$. In turn, for every partition of $E$ in which the number $1$ appears, one can take this $1$ away. Such a new set will contain $N_{\rm ex}-1$ non-zero elements, summing up to $E-1$. There are $\Gamma_{\rm ex}(N_{\rm ex}-1,\,E-1)$ such partitions. The full partition function in the microcanonical ensemble is $\Gamma(N,\,E)=\sum_{N_{\rm ex}=0}^{N}\,\Gamma_{\rm ex}(N_{\rm ex},\,E),$ (17) and one can easily find the probability distribution of finding $N_{0}$ atoms in the single-particle ground state, $p_{\rm micro}\left(N_{0},\,N,\,E\right)=\Gamma_{\rm ex}(N-N_{0},\,E)/\Gamma(N,\,E)$. Given $p_{\rm micro}\left(N_{0},\,N,\,E\right)$ we invoke formulas (9) and (10) to find the average value and fluctuations of $N_{0}$. This exact microcanonical result provides a benchmark for the post-selection process based on the FSS method analysis. As outlined above, the fluctuations in a microcanonical ensemble are calculated by post-selection from a set of states obtained using the FSS method in the canonical ensemble, which amounts to a reduction of the energy spread. Figure 4 includes a comparison of our numerical result with the exact calculation. The two are in excellent agreement for the entire temperature range and show that the fluctuations in the microcanonical ensemble are always smaller than in the canonical one. The post-selection works equally well for interacting systems, and Fig. 4 also shows the microcanonical ground state atom number fluctuations in a weakly interacting Bose gas. Similar to the canonical interacting result, the maximal fluctuations are shifted to lower temperature (energy), in qualitative agreement with the expected shift of the critical temperature due to interactions. Once again, the microcanonical fluctuations are also reduced with respect to the canonical result. Note that there is no exact calculation which could serve as a benchmark for this case, since, to our knowledge, there exists no other method for the weakly interacting gas in the microcanonical ensemble.

### 4.3 Comparison of canonical and microcanonical fluctuations in 1D

Equipped with the analysis above, it is possible to address the natural question of whether the fluctuations in a 1D non-interacting gas depend on the choice of the statistical ensemble in the limit of large atom number $N\to\infty$. To this end we first compare the results for the canonical and the microcanonical ensemble using the exact results for the 1D harmonic oscillator. They allow us to evaluate the ratio $S$ of the peak variance in both ensembles as introduced in Eq. (14), as sketched below.
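For the non-interacting 1D harmonic trap, this comparison can be reproduced exactly in a few lines. The following is a minimal sketch (our own illustration, assuming $\hbar\omega=k_{\rm B}=1$; the names and scan ranges are ours): it builds $\Gamma_{\rm ex}$ from recurrence (16), evaluates the microcanonical variance via Eqs. (9)-(10), evaluates the canonical variance via Eq. (15), and forms the ratio $S$ of Eq. (14).

```python
import numpy as np

def gamma_ex_table(n_max, e_max):
    """g[m][e] = Gamma_ex(m, e): partitions of e into exactly m positive
    parts, filled bottom-up from recurrence (16). Python integers keep
    the (potentially huge) counts exact."""
    g = [[0] * (e_max + 1) for _ in range(n_max + 1)]
    g[0][0] = 1
    for m in range(1, n_max + 1):
        for e in range(m, e_max + 1):
            g[m][e] = g[m][e - m] + g[m - 1][e - 1]
    return g

def micro_var(g, n, e):
    """Microcanonical variance of N0 from p_micro = Gamma_ex(N-N0,E)/Gamma(N,E)."""
    w = [g[n - n0][e] for n0 in range(n + 1)]
    z = sum(w)                                   # Gamma(N, E), Eq. (17)
    mean = sum(n0 * wi for n0, wi in enumerate(w)) / z
    mean_sq = sum(n0 * n0 * wi for n0, wi in enumerate(w)) / z
    return mean_sq - mean**2

def cano_var(n, beta):
    """Canonical variance of N0 from the closed form of Eq. (15)."""
    x = np.exp(-beta)
    p = np.array([x ** (n - n0) *
                  np.prod(1.0 - x ** np.arange(n - n0 + 1, n + 1))
                  for n0 in range(n + 1)])
    p /= p.sum()                                 # guard against rounding
    n0 = np.arange(n + 1)
    return float(p @ n0**2 - (p @ n0) ** 2)

N, E_MAX = 100, 1500
g = gamma_ex_table(N, E_MAX)
peak_micro = max(micro_var(g, N, e) for e in range(1, E_MAX + 1))
peak_cano = max(cano_var(N, 1.0 / t) for t in np.linspace(2.0, 40.0, 77))
print(peak_micro / peak_cano)                    # the ratio S of Eq. (14)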
Figure 5 (blue points) shows the result, where $S$ tends to $1$, indicating that the microcanonical and canonical fluctuations in the 1D harmonic oscillator asymptotically become equal, in agreement with previous discussions [28].

Figure 5: Ratio $S$ between the peak fluctuations in a microcanonical and a canonical ensemble calculation as a function of atom number for a 1D harmonic trap and a 1D box with periodic boundary conditions. The results for the 1D harmonic trap are obtained from an exact calculation (see text). In the case of the 1D box potential, the FSS method was employed.

We also study the same problem for non-interacting atoms in a 1D box potential with periodic boundary conditions, as discussed in Sec. 4.1. In this case, no method of finding the exact values of the fluctuations in the microcanonical ensemble is available. Instead, we use our numerical FSS method analysis to obtain the ratio $S$, as shown in Fig. 5 (orange points). Again, we observe that $S$ tends to $1$, indicating that the microcanonical and canonical fluctuations in the 1D box potential become equal in the limit of large atom number. This finding supports the typical notion that results in the thermodynamic limit should be independent of the thermodynamic ensemble. Note, however, that this equivalence between the ensembles is not universal, as pointed out previously [9, 28]. In particular we show in the following section, devoted to the experimentally relevant 3D case, that the microcanonical and canonical fluctuations differ even in the limit of large atom numbers.

## 5 Fluctuations in 3D systems

Despite its fundamental importance, the question of atom number fluctuations in Bose-Einstein condensates [28] remained largely an academic problem until a few years ago. Recently, however, the situation has changed, since improved control of the total number of atoms in ultracold gases has allowed for first measurements of the fluctuations [17, 18]. In the limit of very large atom numbers in a 3D system, the asymptotic atom number fluctuations in a microcanonical ensemble calculation [9] are given by $\lim_{N\to\infty}\frac{\left(\Delta^{2}N_{0}\right)_{\rm micro}}{N}=\left(\frac{\zeta(2)}{\zeta(3)}-\frac{3}{4}\frac{\zeta(3)}{\zeta(4)}\right),$ (18) where $\zeta$ denotes the Riemann zeta function. However, in the following we show that despite the large number of up to $10^{5}$ atoms, the asymptotic value in Eq. (18) is still not applicable. On the other hand, these experimentally relevant atom numbers are so large that one cannot compute the expected microcanonical fluctuations using previously existing methods, even in the non-interacting case. Here we show how this problem can be solved for large atom numbers by computing the relevant microcanonical fluctuations with the FSS method after appropriate post-selection. As outlined above, we first generate a set of states representing the non-interacting gas at thermal equilibrium in the canonical ensemble. The atom number variance in such a set is compared with the results obtained from the recurrence relations [5], which ensures that the resulting set is indeed a good representation of the canonical ensemble.

Figure 6: Histograms of the distribution of energies of sets of states generated with the FSS method. The sets represent $N=10^{4}$ atoms in a 3D harmonic trap with aspect ratio $\lambda=4$ at temperatures, from left (blue) to right (green), $k_{\rm B}T/(\hbar\omega_{z})=47.5,47.75,48,48.25,48.5$.
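The post-selection described in Sec. 3 can be sketched as follows (our own illustration; the stand-in arrays merely mimic correlated FSS output, and the quadratic extrapolation order is our choice): the energy window is shrunk symmetrically around the mean energy, the variance of $N_{0}$ is recorded for each retained fraction of samples, and the microcanonical value is read off at zero window width.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for FSS output: per-state total energy and ground-state
# occupation (in real use these come from the recorded Fock states).
E_samples = rng.normal(4800.0, 60.0, size=200_000)
N0_samples = rng.poisson(np.maximum(5000.0 - 0.9 * (E_samples - 4800.0), 1.0))

def postselected_variance(energies, n0, keep_fraction):
    """Variance of N0 over the samples whose energy lies in a symmetric
    window around the mean energy, the window width chosen so that
    `keep_fraction` of the samples survive."""
    dev = np.abs(energies - energies.mean())
    kept = n0[dev <= np.quantile(dev, keep_fraction)]
    return kept.var()

# Shrink the window and extrapolate the variance to f -> 0 (microcanonical).
fractions = np.linspace(0.05, 1.0, 20)
variances = [postselected_variance(E_samples, N0_samples, f) for f in fractions]
micro_estimate = np.polyval(np.polyfit(fractions, variances, 2), 0.0)
print(micro_estimate)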
The set of states representing the system at a given temperature has a distribution of energies dictated by the statistics of the canonical ensemble. Figure 6 shows histograms of these energies for $N=10^{4}$ atoms in a 3D harmonic trap with aspect ratio $\lambda=4$ at various temperatures.

Figure 7: Fluctuations of the atom number obtained after post-selection. The atom number variance $\Delta^{2}N_{0}$ for different aspect ratios $\lambda$ is shown as a function of the fraction $f$ after post-selection. Thus the figure represents the transition from a canonical to a microcanonical result from left to right. The initial sets represent $N=10^{4}$ atoms at the temperature $T_{\rm max}$ at which the canonical variance is maximal.

To obtain a set of states representative of the microcanonical ensemble we perform the post-selection analysis also outlined above. In practice, we post-select a set of states at a given temperature to reduce the variance of energies by successive removal of the states with the highest and lowest energies, symmetrically with respect to the mean energy. The final set thus has an energy in the interval $[E_{\rm mean}-\Delta E/2,E_{\rm mean}+\Delta E/2]$, where $E_{\rm mean}$ is the mean energy and the energy window is $\Delta E$. By reducing the energy interval, the microcanonical ensemble, i.e. the limit $\Delta E\to 0$, is approached. The resulting variance of the condensate atom number is shown in Fig. 7, for $N=10^{4}$ atoms in a 3D harmonic trap with different aspect ratios at the temperatures $T_{\rm max}$ for which the canonical fluctuations are maximal. The variances are presented as a function of the fraction $f$, defined as the number of remaining states relative to the initial number of states. We repeat our analysis several times, starting from different sets of states in the canonical ensemble, and the error bars in Fig. 7 correspond to our statistical uncertainty. For significant post-selection and small resulting sets of states (small $f$), the uncertainty of our result can become significant. Therefore, a polynomial fit is used to find the asymptotic value of the variance for $\Delta E\to 0$, which corresponds to the size of the fluctuations in a microcanonical ensemble. Note that experiments typically suffer from atom loss and technical heating. Moreover, the experimental results are extracted from multiple experimental realizations by using a correlation technique [17, 18]. Hence, even in the absence of interactions the experiment would not correspond to this ideal $\Delta E\to 0$ limit, and we therefore expect the measured atom number variance in the Bose-Einstein condensate to lie somewhere between its extremal values, i.e. between the microcanonical and canonical fluctuations. Figure 7 shows that the atom number variance $\Delta^{2}N_{0}$ is significantly reduced in the transition from the canonical ($f=1$) to the microcanonical ensemble (limit $f\to 0$). While the results are qualitatively similar, the quantitative reduction depends on the aspect ratio of the trapping potential $\lambda$. Importantly, the fluctuations differ significantly in a canonical and a microcanonical calculation even in a very elongated trap ($\lambda=10$) for $N=10^{4}$ atoms.

### 5.1 Comparison of canonical and microcanonical fluctuations in 3D

Based on the analysis in a 3D harmonic trap, once again the question arises whether the fluctuations depend on the choice of the statistical ensemble in the limit of large atom number $N\to\infty$.
We study this problem for the non-interacting 3D harmonically trapped gas by evaluating the ratio $S$ between microcanonical and canonical fluctuations for experimentally relevant aspect ratios $\lambda$ from $1$ to $20$ and up to $10^{5}$ atoms.

Figure 8: Ratio between the microcanonical and the canonical fluctuations in a non-interacting harmonically trapped 3D gas. The coefficients $S$ (see Eq. (14)) and $\tilde{S}$ (see text) characterizing this ratio are shown as a function of the number of atoms $N$ for different aspect ratios. Data points indicate solutions of the FSS method analysis. Solid lines represent exact results obtained using the recurrence relations given in Appendix B.

Figure 8 shows the ratio between the microcanonical and the canonical fluctuations evaluated with an FSS method analysis and using exact recurrence relations for small atom numbers. Note that the FSS method does not provide the ratio $S$ directly, and we therefore take the following approach. In the analysis, the set of states representing the canonical ensemble for which $\left(\Delta^{2}N_{0}\right)_{\rm cano}$ is maximal is reduced by post-selection and converges to a set consistent with the microcanonical ensemble. However, this is not necessarily the set for which the fluctuations $\left(\Delta^{2}N_{0}\right)_{\rm micro}$ reach their maximum, and we therefore denote the resulting ratio of the fluctuations by $\tilde{S}$. It may differ from $S$ for two reasons. Firstly, the microcanonical fluctuations should be labelled by the proper control parameter, i.e. energy; however, in the FSS method we assume that the temperature is inherited from the canonical ensemble. Secondly, for a small number of atoms, the maximal fluctuations are reached at slightly different temperatures in the two ensembles. To evaluate the effect, we compute $\tilde{S}$ and $S$ using an exact method (see Appendix B), as shown in Fig. 8. This shows that the two ratios agree for atom numbers above a few hundred atoms and validates the use of $\tilde{S}$ obtained from our FSS method analysis for larger atom numbers. Overall, Fig. 8 shows that the ratio between the microcanonical and the canonical fluctuations first grows and then starts to decrease. This growth for small numbers of atoms is easily understood. In an elongated trap, the atoms easily populate the low-lying energy levels associated with the longitudinal direction, and thus the system becomes effectively one-dimensional, as discussed in the previous section. For $N$ sufficiently large, however, the 3D character of the trap becomes relevant, since the levels in the transverse directions also become populated. This is in striking contradiction with the result in the strict 1D case, where $S$ approaches $1$ (compare with Fig. 5), and clearly breaks with the notion that results in the large-atom-number limit should be independent of the thermodynamic ensemble. Note that the expected asymptotic value of $S$ is given by $S_{3D}=\left(1-\frac{3\zeta(3)^{2}}{4\zeta(4)\zeta(2)}\right)\approx 0.39$ (19) which is the ratio between the asymptotically known microcanonical fluctuations, Eq. (18), and the fluctuations in the canonical ensemble [7] $\lim_{N\to\infty}\frac{\left(\Delta^{2}N_{0}\right)_{\rm cano}}{N}=\frac{\zeta(2)}{\zeta(3)}.$ (20) The FSS method analysis for $10^{5}$ atoms in a spherical trap results in $S\approx 0.45$, which approaches the asymptotic value $0.39$.
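These asymptotic constants are easy to check numerically; a minimal sketch using SciPy's Riemann zeta function (any implementation of $\zeta$ would do):

```python
from scipy.special import zeta

micro = zeta(2) / zeta(3) - 0.75 * zeta(3) / zeta(4)        # Eq. (18), ~0.535
cano = zeta(2) / zeta(3)                                    # Eq. (20), ~1.368
s_3d = 1.0 - 3.0 * zeta(3)**2 / (4.0 * zeta(4) * zeta(2))   # Eq. (19)

print(s_3d, micro / cano)   # both ~0.391, consistent with S_3D ~ 0.39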
### 5.2 Microcanonical fluctuations in an interacting 3D gas

Finally, the Fock state sampling method allows for the investigation of weak repulsive interactions in a 3D harmonically trapped gas, which remains a controversial problem. In particular, different theoretical approaches do not even agree on whether interactions lead to an increase or a decrease of the fluctuations (as summarized in Fig. 4 of [17]). For the case of 1D confinement this was discussed in Sec. 4. There it was shown that the direction of the shift due to interactions is temperature dependent, and thus theories valid at different temperatures come to different conclusions. Moreover, the direction of the shift depends on the particular system: e.g., at low temperatures interactions lead to a decrease of fluctuations in the 1D ring trap, while an increase is observed for the 1D harmonic potential. In this sense, it is of particular importance to evaluate the effect of interactions in the experimentally relevant 3D harmonic potential. The FSS method allows for such an analysis for very weak interactions, where the state of the Bose-Einstein condensate does not differ significantly from the lowest harmonic oscillator state. Moreover, the post-selection process introduced above allows for the calculation of the fluctuations in the microcanonical ensemble.

Figure 9: Atom number fluctuations with and without interactions in a 3D harmonically trapped gas containing $N=100$ atoms, computed in the canonical (top) and microcanonical (bottom) ensemble. The variance of the atom number is shown as a function of temperature for a spherically symmetric, $\lambda=1$ (blue), and an elongated, $\lambda=7$ (orange), trap with and without interactions. The horizontal dashed lines correspond to the maximal condensate atom number fluctuations for the non-interacting gas in each case. The reference temperature $T_{\rm max}$ is the temperature of maximal fluctuations of the non-interacting gas in the canonical ensemble. The units of the interaction strength $g$ are $(m\omega_{z})^{\frac{3}{2}}\omega_{\perp}/\sqrt{\hbar}$.

Figure 9 shows the atom number fluctuations for a non-interacting and an interacting gas, in the canonical (upper panel) and the microcanonical (lower panel) ensembles. A comparison between the upper and lower panels shows the clear overall suppression of fluctuations in the microcanonical case, as discussed in Sec. 5.1. Moreover, each panel shows that the fluctuations in a spherical trap are generally lower than the ones observed in an elongated geometry, in accordance with the previous findings of Fig. 8. To discuss the temperature dependence, the temperature is normalized to that of the maximal fluctuations of the non-interacting gas in the canonical ensemble, $T_{\rm max}$. This avoids an ambiguity in the definition of the critical temperature for finite systems. In the canonical case the interactions lead to a shift of the peak fluctuations to lower temperatures, as expected for a shift of the critical temperature due to interactions. This is reminiscent of the effect observed in the 1D harmonic configuration of Fig. 4. The shift leads to increased fluctuations at low temperature and a decrease at $T_{\rm max}$. Importantly, the value of the peak fluctuations depends on the aspect ratio $\lambda$ and can either increase or decrease, as shown in Fig. 9 (upper panel). Finally, the experimentally most relevant case is the fluctuations in the microcanonical ensemble, as shown in Fig. 9 (lower panel).
Similar to the previous case, interactions lead to a shift of the peak fluctuations and a resulting increase at low temperature. The peak fluctuations do not show a uniform behaviour, as they increase in an elongated trap but remain constant in a spherical geometry. Our results clearly show that the effect of interactions on the atom number fluctuations in a Bose-Einstein condensate depends on both the temperature and the trapping geometry of the gas. In that sense one cannot expect a single answer to the question whether interactions increase or decrease the fluctuations in a Bose-Einstein condensate.

## 6 Conclusion

In this paper the fluctuations of a Bose-Einstein condensate were investigated for the non-interacting gas and in the case of very weak interactions. Using the Fock state sampling method we studied the fluctuations in different trap geometries, and in the canonical and microcanonical ensembles, as a function of temperature. In a 1D system with periodic boundary conditions our results agree with the classic result of S. Giorgini, L. Pitaevskii and S. Stringari [19] based on the Bogoliubov approximation for low temperatures. However, the fluctuations increase for higher temperatures, and the maximal fluctuations are indeed larger than the non-interacting result. Thus, we resolved a long-standing controversy by showing that the Bogoliubov approximation only leads to reliable results at low temperatures. For the case of the 1D harmonically trapped gas we showed that interactions lead to a clear shift of the peak fluctuations to lower temperatures. This is in general agreement with the expected shift of the critical temperature and leads to increased fluctuations at low temperatures. Moreover, we employed a post-selection process in the FSS method to evaluate the fluctuations in the microcanonical ensemble. This showed a clear reduction of the fluctuations with respect to the canonical expectation for a gas containing 100 atoms. Nonetheless, a general analysis showed that the canonical and microcanonical results agree in the limit of large atom numbers in 1D. Finally, we investigated the experimentally most relevant case of a 3D harmonically trapped gas. In this case the microcanonical calculation yields a reduction of the peak fluctuations which depends on both the aspect ratio and the atom number. Importantly, our FSS method results for large atom numbers slowly approach the expected asymptotic value and thus confirm that the canonical and microcanonical fluctuations do not agree in the limit of large atom numbers in 3D. The weakly interacting 3D harmonically trapped gas shows fluctuations which are both shifted in temperature and altered in amplitude, depending on temperature and trapping geometry. Thus it is clearly not possible to provide a universal rule for the effect of interactions on the fluctuations of a Bose-Einstein condensate. In view of these results, recent experiments [18] should be compared to calculations in the microcanonical ensemble. In future work we will extend the FSS method to include the realistic condensate wave function. This will allow us to make quantitative predictions for the fluctuations in realistic experimental systems. Moreover, we will be able to map out the effect of the ensemble choice and the interactions in a large parameter space.

## Acknowledgements

We thank P. Deuar for fruitful discussions and Laurits Nikolaj Stokholm for valuable comments on the manuscript.

#### Funding information

M. B. K.
acknowledges support from the (Polish) National Science Center Grant No. 2018/31/B/ST2/01871. K. P. acknowledges support from the (Polish) National Science Center Grant No. 2019/34/E/ST2/00289. K. R. and M. B. K. acknowledge support from the (Polish) National Science Center Grant No. 2021/43/B/ST2/01426. This research was supported in part by the PLGrid Infrastructure. The Center for Theoretical Physics of the Polish Academy of Sciences is a member of the National Laboratory of Atomic, Molecular and Optical Physics (KL FAMO). This work has been supported by the Danish National Research Foundation through the Center of Excellence "CCQ" (Grant agreement No. DNRF156) and by the Independent Research Fund Denmark - Natural Sciences via Grants No. 8021-00233B and 0135-00205B.

## Appendix A Technical details of the algorithm

The FSS method is a realization of the Metropolis algorithm that samples multimode Fock state configurations in the canonical ensemble. Let $|\theta\rangle=|N_{0},N_{1},\ldots\rangle$ and $|\theta^{\prime}\rangle=|N^{\prime}_{0},N^{\prime}_{1},\ldots\rangle$ be the Fock states representing the initial state and a slightly modified candidate, respectively. In the FSS method, we restrict the generation of $\theta^{\prime}$ to states that amount to moving a single particle from one orbital $\phi_{i}$ to another $\phi_{j}$, compared to the original state $\theta$, that is $N^{\prime}_{i}=N_{i}-1$, $N^{\prime}_{j}=N_{j}+1$ and $N^{\prime}_{k}=N_{k}$ for $k\neq i\land k\neq j$. Let $q_{A}(\theta,i)$ and $q_{B}(\theta,j)$ be the probabilities of randomly selecting orbitals $\phi_{i}$ and $\phi_{j}$ (note the explicit dependence on the state $\theta$). We define the proposal distribution

$Q(\theta^{\prime}|\theta)=q_{A}(\theta,i)\,q_{B}(\theta,j)$ (21)

with

$q_{A}(i)\propto\exp(-\gamma E_{i})\,N_{i},\qquad q_{B}(j)\propto\exp(-\gamma E_{j})\,(N_{j}+1),$ (22)

where $\gamma>0$ is a parameter of the method and $E_{i}$, $E_{j}$ are the single-particle energies of the respective orbitals. The exponential factors are introduced to suppress wasteful jumps into high-energy orbitals. The parameter $\gamma$ allows for tuning the acceptance rate of the algorithm and thus for optimizing its convergence rate. The proposal distribution is not symmetric for $\gamma\neq 0$, that is $Q(\theta^{\prime}|\theta)\neq Q(\theta|\theta^{\prime})$; this asymmetry is taken into account in the acceptance step, although the ratio $Q(\theta^{\prime}|\theta)/Q(\theta|\theta^{\prime})\approx 1$ when the number of particles is large (on the order of 100 and above). The tuning parameter $\gamma$ takes values of about $0.2$ and $0.1$ for 100 and 1000 particles, respectively, in a spherical harmonic trap. In our implementation of the algorithm, the complexity of calculating the energy difference between states $\theta$ and $\theta^{\prime}$ is $O(\log^{2}N)$ in the non-interacting case and $O(N^{2})$ in the interacting case, where $N$ is the total number of particles. The algorithm scales very well on large compute clusters or many-core CPUs, as multiple independent instances can be launched simultaneously and, after thermalization, each instance produces independent samples from the same ensemble.
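To make the sampling step concrete, a minimal Python sketch of a single Metropolis-Hastings update for the non-interacting gas is given below. It is an illustration rather than the production implementation: the orbital energies, the inverse temperature $\beta$ and the value of $\gamma$ are assumed inputs, and the state energy is simply $\sum_{i}N_{i}E_{i}$.

```python
import math
import random

def fss_step(N, E, beta, gamma, rng=random):
    """One Metropolis-Hastings step of the FSS method (non-interacting gas).

    N : list of occupations N_i (modified in place if the move is accepted)
    E : list of single-particle energies E_i, in the same units as 1/beta
    """
    # Proposal weights of Eq. (22): q_A picks the source orbital i,
    # q_B picks the target orbital j.
    wA = [math.exp(-gamma * e) * n for n, e in zip(N, E)]
    wB = [math.exp(-gamma * e) * (n + 1) for n, e in zip(N, E)]
    i = rng.choices(range(len(N)), weights=wA)[0]
    j = rng.choices(range(len(N)), weights=wB)[0]
    if i == j:
        return  # moving a particle onto its own orbital changes nothing

    # Candidate state theta': one particle moved from orbital i to j.
    Np = list(N)
    Np[i] -= 1
    Np[j] += 1

    # Forward and reverse proposal probabilities, Q(theta'|theta) and
    # Q(theta|theta'); the reverse move takes the particle from j back to i.
    q_fwd = (wA[i] / sum(wA)) * (wB[j] / sum(wB))
    wA2 = [math.exp(-gamma * e) * n for n, e in zip(Np, E)]
    wB2 = [math.exp(-gamma * e) * (n + 1) for n, e in zip(Np, E)]
    q_rev = (wA2[j] / sum(wA2)) * (wB2[i] / sum(wB2))

    # Metropolis-Hastings acceptance with Boltzmann weight exp(-beta*dE).
    ratio = math.exp(-beta * (E[j] - E[i])) * q_rev / q_fwd
    if rng.random() < min(1.0, ratio):
        N[i], N[j] = Np[i], Np[j]
```

In the interacting case only the energy difference changes: it must include the interaction contribution, which is what raises the cost per step from $O(\log^{2}N)$ to $O(N^{2})$ as stated above.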
## Appendix B Recurrence relations used to compute the partition functions

### B.1 Recurrences for the microcanonical partition function in a harmonic trap

Following [34] we introduce a new function of three arguments, $\tilde{\Gamma}_{\rm ex}(N,E,\epsilon)$. This function is the number of ways to distribute $N$ particles between excited energy levels such that the total energy is $E$, but with the restriction that energy levels above $\epsilon$ are empty. With this definition we have the relation

$\tilde{\Gamma}_{\rm ex}(N,E,\epsilon=0)=\delta_{N,0}\,\delta_{E,0},$ (23)

where, as before, we assume that the energy levels are written in oscillator units and are therefore given by integers. The values of $\tilde{\Gamma}_{\rm ex}(N,E,\epsilon)$ for other energy thresholds $\epsilon$ follow the recurrence

$\tilde{\Gamma}_{\rm ex}(N,E,\epsilon_{j})=\sum_{n_{j}=0}^{N}\tilde{\Gamma}_{\rm ex}(N-n_{j},E-n_{j}\,\epsilon_{j},\epsilon_{j}-1)\binom{\mathcal{D}(\epsilon_{j})+n_{j}-1}{\mathcal{D}(\epsilon_{j})-1},$ (24)

where $\mathcal{D}(\epsilon_{j})$ is the degeneracy of the energy level $\epsilon_{j}$. The label $n_{j}$ in the recurrence (24) has the meaning of the number of atoms occupying the energy level $\epsilon_{j}$. There are $\binom{\mathcal{D}(\epsilon_{j})+n_{j}-1}{\mathcal{D}(\epsilon_{j})-1}$ ways of distributing $n_{j}$ indistinguishable bosons between the $\mathcal{D}(\epsilon_{j})$ degenerate states. If exactly $n_{j}$ atoms sit in the energy level $\epsilon_{j}$, then the remaining $N-n_{j}$ atoms occupy energy levels up to $\epsilon_{j}-1$, such that their total energy is $E-n_{j}\,\epsilon_{j}$. The microcanonical partition function for the excited atoms, i.e. $\Gamma_{\rm ex}(N_{\rm ex},E)$, can be obtained from the auxiliary function $\tilde{\Gamma}_{\rm ex}(N,E,\epsilon)$ using the relation

$\Gamma_{\rm ex}(N,E)=\tilde{\Gamma}_{\rm ex}(N,E,E),$ (25)

which expresses the fact that there is no partition leading to the total energy $E$ which would involve energy levels higher than $E$. As described in the main text, the microcanonical partition function is given by

$\Gamma(N,E)=\sum_{N_{\rm ex}=0}^{N}\Gamma_{\rm ex}(N_{\rm ex},E),$ (26)

and the probability of finding $N_{0}$ atoms in the ground state is

$p_{\rm micro}\left(N_{0},\,N,\,E\right)=\Gamma_{\rm ex}(N-N_{0},E)/\Gamma(N,E).$ (27)

Alternatively, one can directly use a recurrence for an auxiliary partition function $\tilde{\Gamma}(N,E,k)$, defined similarly to the previous one, but including atoms in the ground state. It still obeys the recurrence relation (24), but with a slightly different initial condition, $\tilde{\Gamma}(N,E,k=0)=\delta_{E,\,0}$. In this implementation the probability of finding exactly $N_{\rm ex}=N-N_{0}$ excited atoms is

$p_{\rm micro}\left(N_{0},\,N,\,E\right)=\frac{1}{\tilde{\Gamma}(N,E,E)}\left(\tilde{\Gamma}(N-N_{0},E,E)-\tilde{\Gamma}(N-N_{0}-1,E,E)\right).$ (28)

There are other recurrence relations, see for instance [5], but (24) turned out to be easy to program and relatively efficient.

### B.2 Recurrence for the canonical partition function in a harmonic trap

To compute the canonical partition function $Z(N,\beta)$ we invoke the recurrence relation

$Z(N,\beta)=\frac{1}{N}\sum_{n=1}^{N}Z(1,n\beta)\,Z(N-n,\beta).$ (29)

This relation appears in many contexts and is nicely described in [5].
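For illustration, both recurrences are easy to implement with memoization. The sketch below is only an example, not the code used for this paper: it evaluates $p_{\rm micro}$ for an isotropic 3D harmonic trap, where the level with $\epsilon$ quanta has degeneracy $\mathcal{D}(\epsilon)=(\epsilon+1)(\epsilon+2)/2$ (set $\mathcal{D}=1$ for the 1D trap), and includes the canonical recurrence (29) for completeness.

```python
from functools import lru_cache
from math import comb, exp

def D(eps):
    # Degeneracy of the level with eps quanta in an isotropic 3D
    # harmonic trap; replace with D(eps) = 1 for the 1D trap.
    return (eps + 1) * (eps + 2) // 2

@lru_cache(maxsize=None)
def gamma_ex(N, E, eps):
    # Eq. (24) with the initial condition (23): number of ways to place
    # N atoms in the excited levels 1..eps with total energy E.
    if eps == 0:
        return 1 if N == 0 and E == 0 else 0
    total = 0
    for n in range(N + 1):  # atoms occupying the level eps
        if n * eps > E:
            break
        total += gamma_ex(N - n, E - n * eps, eps - 1) * comb(D(eps) + n - 1, D(eps) - 1)
    return total

def p_micro(N0, N, E):
    # Eqs. (25)-(27): probability of N0 ground-state atoms at fixed N and E.
    Gamma = sum(gamma_ex(m, E, E) for m in range(N + 1))  # Eq. (26)
    return gamma_ex(N - N0, E, E) / Gamma

def z1(beta):
    # Single-particle partition function of the isotropic 3D trap in
    # oscillator units, with energies measured from the ground state.
    return (1.0 - exp(-beta)) ** -3

@lru_cache(maxsize=None)
def Z(N, beta):
    # Canonical partition function via the recurrence (29), with Z(0) = 1.
    if N == 0:
        return 1.0
    return sum(z1(n * beta) * Z(N - n, beta) for n in range(1, N + 1)) / N
```

For large $E$ the recursion should be replaced by an iterative table fill to avoid deep recursion, but the logic is unchanged.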
## References

* [1] Erwin Schrödinger "Statistical Thermodynamics" New York: Dover Publications, 1989
* [2] Robert M. Ziff, George E. Uhlenbeck and Mark Kac "The Ideal Bose-Einstein Gas, Revisited" In _Phys. Rep._ 32.4, 1977, pp. 169–248 DOI: 10.1016/0370-1573(77)90052-7
* [3] Martin Holthaus, Eva Kalinowski and Klaus Kirsten "Condensate Fluctuations in Trapped Bose Gases: Canonical Vs. Microcanonical Ensemble" In _Ann. Phys._ 270.1, 1998, pp. 198–230 DOI: 10.1006/aphy.1998.5852
* [4] Martin Holthaus and Eva Kalinowski "The Saddle-point Method for Condensed Bose Gases" In _Ann. Phys._ 276.2, 1999, pp. 321–360 DOI: 10.1006/aphy.1999.5950
* [5] C. Weiss and M. Wilkens "Particle Number Counting Statistics in Ideal Bose Gases" In _Opt. Express_ 1.10, 1997, pp. 272–283 DOI: 10.1364/OE.1.000272
* [6] Martin Wilkens and Christoph Weiss "Particle Number Fluctuations in an Ideal Bose Gas" In _J. Mod. Opt._ 44.10, 1997, pp. 1801–1814 DOI: 10.1080/09500349708231847
* [7] H. D. Politzer "Condensate Fluctuations of a Trapped, Ideal Bose Gas" In _Phys. Rev. A_ 54 American Physical Society, 1996, pp. 5048–5054 DOI: 10.1103/PhysRevA.54.5048
* [8] Mariusz Gajda and Kazimierz Rzążewski "Fluctuations of Bose-Einstein Condensate" In _Phys. Rev. Lett._ 78 American Physical Society, 1997, pp. 2686–2689 DOI: 10.1103/PhysRevLett.78.2686
* [9] Patrick Navez et al. "Fourth Statistical Ensemble for the Bose-Einstein Condensate" In _Phys. Rev. Lett._ 79.10, 1997, pp. 1789–1792 DOI: 10.1103/PhysRevLett.79.1789
* [10] Zbigniew Idziaszek et al. "Fluctuations of the Weakly Interacting Bose-Einstein Condensate" In _Phys. Rev. Lett._ 82 American Physical Society, 1999, pp. 4376–4379 DOI: 10.1103/PhysRevLett.82.4376
* [11] M. H. Anderson et al. "Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor" In _Science_ 269.5221 American Association for the Advancement of Science, 1995, pp. 198–201 DOI: 10.1126/science.269.5221.198
* [12] K. B. Davis et al. "Bose-Einstein Condensation in a Gas of Sodium Atoms" In _Phys. Rev. Lett._ 75 American Physical Society, 1995, pp. 3969–3973 DOI: 10.1103/PhysRevLett.75.3969
* [13] Julian Schmitt et al. "Observation of Grand-canonical Number Statistics in a Photon Bose-Einstein Condensate" In _Phys. Rev. Lett._ 112.3, 2014, pp. 030401 DOI: 10.1103/PhysRevLett.112.030401
* [14] Christoph Weiss and Jacques Tempere "Grand-canonical condensate fluctuations in weakly interacting Bose-Einstein condensates of light" In _Phys. Rev. E_ 94 American Physical Society, 2016, pp. 042124 DOI: 10.1103/PhysRevE.94.042124
* [15] M. Gajdacz et al. "Preparation of Ultracold Atom Clouds at the Shot Noise Level" In _Phys. Rev. Lett._ 117 American Physical Society, 2016, pp. 073604 DOI: 10.1103/PhysRevLett.117.073604
* [16] Mick Kristensen et al. "Sub-atom Shot Noise Faraday Imaging of Ultracold Atom Clouds" In _J. Phys. B: At., Mol. Opt. Phys._ 50.3 Institute of Physics Publishing Ltd., 2017, pp. 034004 DOI: 10.1088/1361-6455/50/3/034004
* [17] M. A. Kristensen et al. "Observation of Atom Number Fluctuations in a Bose-Einstein Condensate" In _Phys. Rev. Lett._ 122 American Physical Society, 2019, pp. 163601 DOI: 10.1103/PhysRevLett.122.163601
* [18] M. B. Christensen et al. "Observation of Microcanonical Atom Number Fluctuations in a Bose-Einstein Condensate" In _Phys. Rev. Lett._ 126 American Physical Society, 2021, pp. 153601 DOI: 10.1103/PhysRevLett.126.153601
* [19] S. Giorgini, L. P. Pitaevskii and S. Stringari "Anomalous Fluctuations of the Condensate in Interacting Bose Gases" In _Phys. Rev. Lett._ 80 American Physical Society, 1998, pp. 5040–5043 DOI: 10.1103/PhysRevLett.80.5040
* [20] V. V. Kocharovsky, Vl. V. Kocharovsky and Marlan O. Scully "Condensate Statistics in Interacting and Ideal Dilute Bose Gases" In _Phys. Rev. Lett._ 84 American Physical Society, 2000, pp. 2306–2309 DOI: 10.1103/PhysRevLett.84.2306
* [21] V. V. Kocharovsky, Vl. V. Kocharovsky and Marlan O. Scully "Condensation of $N$ Bosons. III. Analytical Results for All Higher Moments of Condensate Fluctuations in Interacting and Ideal Dilute Bose Gases Via the Canonical Ensemble Quasiparticle Formulation" In _Phys. Rev. A_ 61 American Physical Society, 2000, pp. 053606 DOI: 10.1103/PhysRevA.61.053606
* [22] Hongwei Xiong et al. "Fluctuations of the condensate in ideal and interacting Bose gases" In _J. Phys. B: At., Mol. Opt. Phys._ 34.21 IOP Publishing, 2001, pp. 4203–4215 DOI: 10.1088/0953-4075/34/21/310
* [23] Hongwei Xiong, Shujuan Liu, Guoxiang Huang and Zaixin Xu "Canonical statistics of trapped ideal and interacting Bose gases" In _Phys. Rev. A_ 65 American Physical Society, 2002, pp. 033609 DOI: 10.1103/PhysRevA.65.033609
* [24] Jian-hui Wang and Yong-li Ma "Thermodynamics and finite-size scaling of homogeneous weakly interacting Bose gases within an exact canonical statistics" In _Phys. Rev. A_ 79 American Physical Society, 2009, pp. 033604 DOI: 10.1103/PhysRevA.79.033604
* [25] Zbigniew Idziaszek "Microcanonical fluctuations of the condensate in weakly interacting Bose gases" In _Phys. Rev. A_ 71 American Physical Society, 2005, pp. 053604 DOI: 10.1103/PhysRevA.71.053604
* [26] Jianhui Wang, Jizhou He and Yongli Ma "Condensate fluctuations of interacting Bose gases within a microcanonical ensemble" In _Phys. Rev. E_ 83 American Physical Society, 2011, pp. 051132 DOI: 10.1103/PhysRevE.83.051132
* [27] Iacopo Carusotto and Yvan Castin "Condensate Statistics in One-Dimensional Interacting Bose Gases: Exact Results" In _Phys. Rev. Lett._ 90 American Physical Society, 2003, pp. 030401 DOI: 10.1103/PhysRevLett.90.030401
* [28] Vitaly V. Kocharovsky et al. "Fluctuations in Ideal and Interacting Bose-Einstein Condensates: From the Laser Phase Transition Analogy to Squeezed States and Bogoliubov Quasiparticles" In _Adv. At., Mol., Opt. Phys._ 53, 2006, pp. 291–411 DOI: 10.1016/S1049-250X(06)53010-1
* [29] Josiah Willard Gibbs "Elementary Principles in Statistical Mechanics: Developed with Especial Reference to the Rational Foundation of Thermodynamics", Cambridge Library Collection - Mathematics, Cambridge University Press, 2010 DOI: 10.1017/CBO9780511686948
* [30] Nicholas Metropolis et al. "Equation of State Calculations by Fast Computing Machines" In _The Journal of Chemical Physics_ 21.6 AIP Publishing, 1953, pp. 1087–1092 DOI: 10.1063/1.1699114
* [31] Mirosław Brewczyk, Mariusz Gajda and Kazimierz Rzążewski "Classical Fields Approximation for Bosons at Nonzero Temperatures" In _J. Phys. B: At., Mol. Opt. Phys._ 40.2, 2007, pp. R1 URL: http://stacks.iop.org/0953-4075/40/i=2/a=R01
* [32] Emilia Witkowska, Mariusz Gajda and Kazimierz Rzążewski "Bose Statistics and Classical Fields" In _Phys. Rev. A_ 79 American Physical Society, 2009, pp. 033631 DOI: 10.1103/PhysRevA.79.033631
* [33] Maciej Kruk, Maciej Łebek and Kazimierz Rzążewski "Statistical properties of cold bosons in a ring trap" In _Phys. Rev. A_ 101 American Physical Society, 2020, pp. 023622 DOI: 10.1103/PhysRevA.101.023622
* [34] Z. Idziaszek "Kwantowe fluktuacje zimnych gazów atomowych" ["Quantum fluctuations of cold atomic gases"], 2001
# Seeing the Bigger Picture: The _Rosetta_ Mission Amateur Observing Campaign and Lessons for the Future

Helen Usher The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK Colin Snodgrass University of Edinburgh, Institute for Astronomy, Royal Observatory Edinburgh, Blackford Hill, Edinburgh, EH9 3HJ, UK Simon F. Green The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK Andrew Norton The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK Paul Roche School of Physics and Astronomy, Cardiff University, McKenzie House, Newport Road, Cardiff, CF24 0DE, UK

(Received 29 September 2020; Revised 6 November 2020; Accepted 7 November 2020)

###### Abstract

Amateur astronomers can make useful contributions to the study of comets. They add temporal coverage and multi-scale observations which can aid the study of fast-changing and large-scale comet features. We document and review the amateur observing campaign set up to complement the _Rosetta_ space mission, including the data submitted to date, and consider the campaign's effectiveness in the light of experience from previous amateur comet campaigns. We report the results of surveys of campaign participants, the amateur astronomy community, and schools who participated in a comet 46P observing campaign. We draw lessons for future campaigns, which include the need for: clarity of objectives; recognising the wider impact campaigns can have on increasing science capital; clear, consistent, timely and tailored guidance; easy upload procedures with in-built quality control; and regular communication, feedback and recognition.

Keywords: Comets (280), short period comets, amateur astronomy, astrophotography

Software: The data analyses were undertaken using Astropy (Robitaille et al., 2013; Price-Whelan et al., 2018) and Numpy (Harris et al., 2020). The plots were generated with Matplotlib (Caswell et al., 2020). The pixel scales were extracted using astrometry.net (http://astrometry.net/). The Google Map was produced using jupyter-gmaps (https://github.com/pbugnion/gmaps).

## 1 Introduction

### 1.1 Background

Comets are small, active, volatile-rich, solar system bodies. Each comet is unique, but a consistent feature is their unpredictability. Their appearance can change dramatically over very short timescales: brightening or fading, rapidly or slowly, breaking apart, exhibiting spectacular tails, or not. Their constituent parts (nucleus, coma, tails and trails) are on significantly different physical scales: from a nucleus at <10 km to tails and trails which can extend many AU. Comets have a wide range of orbital elements, which can change over time due to gravitational perturbations from solar system bodies, and non-gravitational forces from comet activity (e.g., outgassing). Their position in the sky can change rapidly. These diverse characteristics make studying comets exciting, but challenging. Despite being observed and studied over millennia, comets are still not well understood (A'Hearn, 2004; Meech, 2017). Understanding comet formation and evolution is important in informing and constraining theories of solar system formation and evolution (A'Hearn, 2017, 2011). Observing and characterising comet activity requires observations over many different time periods and intervals, and at different image scales. Observing a comet over its different apparitions across many years allows its long-term evolution to be monitored.
Multiple observations in a single night can pinpoint the start of outburst events, while monitoring over subsequent days and weeks allows morphological changes in the coma, tails, or trails to be analysed. Between these extremes, short regular observations allow the comet position to be measured, refining its orbit and non-gravitational forces, and monitoring over different time intervals allows analysis of changes due to rotation or seasons to be undertaken. As it is impossible to resolve a comet nucleus from Earth, space missions are required for close-up observations. The Halley missions and the _Deep Space 1_, _Stardust_ and _Deep Impact_ close fly-bys provided snapshot views of the inner coma and nucleus of five comets. The European Space Agency (ESA) _Rosetta_ mission to 67P/Churyumov–Gerasimenko, which orbited the comet and placed the _Philae_ lander on its surface, provided the first opportunity to observe surface activity and evolution of a comet for over two years around its perihelion passage. These missions have been essential to add a ground-truth element to comet observations, and are resulting in new insights into comet formation and evolution. Each has been supported and complemented by ground-based observing campaigns.

### 1.2 Observational Constraints and Opportunities

It can be difficult for professional observers to cover the wide range of observations needed for analysis of all comets' dynamic features. Professional telescope resources are scarce. While the proposal method of allocating resources is good for long-term regular monitoring, it can be too rigid when a rapid response is needed to observe short-term changes such as outbursts. Even long-term monitoring is constrained by over-demand for professional facilities. The best observing locations are at altitude and away from light pollution, and they are clustered rather than well spread in longitude. This is problematic when comet visibility windows are short, or in periods of bad observing weather. Large telescopes often cannot image below 20–25° elevation due to their enclosures, and many have a minimum solar elongation for safety, but comets are often brightest and most interesting to study while close to the Sun, and often at low altitude. The image scale of large telescopes produces high resolution, but with relatively narrow fields of view. Imaging large-scale features, such as large comae, tails and trails, requires mosaics, taking significant extra telescope time. As a result, cometary science is an area where amateurs can still make important contributions, supplementing observations from professionals. Amateurs literally observe for love, being able to choose what, when, where and how to observe. For many, as their interest and expertise deepen, they look for more rewarding targets, transient events, longer-term monitoring and/or scientific projects (Bowler, 2009). They are free to monitor and observe comets whenever they are visible and weather conditions allow, and can respond quickly to alerts when changes in a comet are noted. Subject to visibility, multiple images can be taken over a long period during a night, allowing stacking of data to improve signal-to-noise ratios and very faint features to be detected. Amateurs are spread all across the world, which is particularly useful when observability windows are small in any one location due to altitude and hours of darkness. Good longitudinal coverage allows continuous monitoring for studying rotation effects and transient features.
Some amateurs have excellent unobstructed horizons or have mobile equipment and can travel to find suitable observing situations. Small telescopes can safely image closer to the Sun. Finally, comets have large-scale features (particularly tails and trails) which are well suited to smaller amateur telescopes with wider fields of view. Recently, the greatly reduced cost of high-quality camera technology, telescopes and related equipment, along with sophisticated software, has meant that many more amateur astronomers can now make high-quality, robust observations and undertake complex astrometric, photometric and morphological analyses. The growth of internet technologies and social media has meant that it is much easier for amateurs to: access databases such as JPL HORIZONS (https://ssd.jpl.nasa.gov/horizons.cgi), giving accurate ephemerides for planetarium and mount control programs; be alerted to new comets or activity in known comets; share software and techniques; work to consistent standards; share observations in active groups; and upload to data archives. Additionally, amateurs and students now have real-time access to high-quality, shared telescope facilities (such as iTelescope (https://itelescope.net/), Las Cumbres Observatory (LCO, https://lco.global/), Slooh (https://slooh.com/about/about-slooh-education), the Open University's PIRATE and COAST (http://pirate.open.ac.uk/), MicroObservatory (https://mo-www.cfa.harvard.edu/MicroObservatory/), and other education-orientated telescope networks). These facilities are located in favourable locations, at altitude, chosen for good observing and weather conditions (much better than most observers' home locations), and have robust calibration processes. Robotic scheduling allows for observing even at inconvenient times.

### 1.3 Pro-Am Comet Campaigns

Amateurs have participated in professionally coordinated observing campaigns in support of space missions, including the _Halley Watch_ and _Deep Impact/EPOXI_ campaigns, as well as for particularly interesting or well-placed comets such as C/2012 S1 ISON and the "4*P" Campaign covering the close approaches to Earth of comets 41P/Tuttle-Giacobini-Kresak, 45P/Honda-Mrkos-Pajdusakova, and comet 46P/Wirtanen. The _Rosetta_ mission included a ground-based observation campaign to support and provide context for the in situ activity (Snodgrass et al., 2017). This campaign included encouraging amateur astronomers across the world to make and submit observations. There are lessons which can be learnt from a review of the organisation and outputs of these campaigns, to inform future campaigns (e.g., for comet 67P in 2021) and future comet missions such as Comet Interceptor (Snodgrass & Jones, 2019). The _Rosetta_ amateur campaign has not previously been formally documented and reviewed. This paper presents details of the data currently available from the campaign. It documents the results of surveys of _Rosetta_ campaign participants, the amateur astronomer community, and some schools who participated in a 46P observing campaign, to inform a discussion on good practice and lessons for future campaigns. While the details may differ, many of the lessons from this campaign are also relevant to other, non-comet observing campaigns.

## 2 Previous Comet Campaigns

### 2.1 Halley

The ground-based _International Halley Watch_ campaign in 1986 was a major undertaking, with a budget of $10 million, thoroughly planned and implemented.
The involvement of amateurs was an important element, but was challenging as electronic communication was in its infancy. Details of positions, requirements, results, etc., all needed to be communicated in hard copy. Observations were made either visually or with film cameras, and the results were posted back to the campaign (Edberg, 1988; Sekanina & Fry, 1991; Dunlop, 2003). The campaign received much publicity and 1,575 people registered, of whom 873 submitted observations (Sekanina & Fry, 1991). To ensure consistency, very detailed guidance was provided. This proved effective, with 90% of astrometric submissions being used to determine the orbit; these were thus important for determining the spacecraft's trajectory. All the observations were published in hard copy, digitized and released on CD in the 1990s, and then made available online (https://pdssbn.astro.umd.edu/data_sb/missions/ihw/index.shtml). The images have also been subjected to modern filtering techniques to draw out more coma features. The final report on the amateur involvement (Sekanina & Fry, 1991) noted that:

* astronomers worldwide contributed useful data;
* not all observations made were reported;
* the majority of observers took their efforts seriously enough to submit data; and
* new observers complied with requirements to a greater extent than experienced observers.

"Do not expect even the most careful and lucid instructions to be followed rigorously. Even professionals can be wilful on occasion and amateurs additionally lack the insight to appreciate the importance of standardising observing technique."

### 2.2 _Deep Impact/EPOXI_

The _Deep Impact/EPOXI_ mission was designed so that most mission-critical science was undertaken from Earth, to enable a wider range of observations (A'Hearn et al., 2005). A worldwide ground campaign was needed (Meech et al., 2005). For the _Deep Impact_ stage (9P/Tempel 1), the observations covered the full time range, from pre-mission characterisation through impact and post-impact. A Small Telescope Science Program was established to complement the professional observatories. For the follow-on mission (_EPOXI_) to 103P/Hartley 2, the amateur data contributed significantly to a multi-wavelength program of near-continuous observations from August 2010 through encounter on 4 November 2010. The brightness measurements (a key output from amateur data) allowed the development of an ice sublimation model to estimate dust emissions (Meech et al., 2005). Initially the program requested amateur measurements based on their observations; later there was a call for submission of raw data sets for further analysis to photometric standards. The CARA (Cometary Archive for Amateur Astronomers, http://cara.uai.it/home) group was very active in observing Tempel 1 around impact (Milani et al., 2007). Their observations covered nearly every clear night over 10 months, and resulted in 800 photometric observations. The group chose to use the $Af\rho$ measure (A'Hearn et al., 1984), allowing comparison of data from different telescopes, photometric apertures, epochs, and geometrical positions. CARA members used a consistent set of filters (R and I), took many dozens of images per observation date (and calibration frames), and checked quality. They followed a standardised data processing recipe. CARA also provided software to observers to allow them to analyse and calculate the $Af\rho$ value in a consistent way (Milani et al., 2007).
The measurements showed that the $Af\rho$ value increased by 60% following impact and took two days to return to the previous level.

### 2.3 ISON Morphology Campaign

This 2013 global campaign involved professionals and amateurs, who obtained mostly continuum images to help characterise dust in the coma of comet C/2012 S1 ISON. ISON was an unusually well-placed and bright comet on a sungrazing orbit, discovered more than a year before its exceptionally close perihelion passage, and consequently well studied over a wide range of wavelengths at professional observatories (Knight et al., 2017; Moreno et al., 2014). The morphology campaign comprised many hundreds of observations made by nearly two dozen groups (Samarasinha et al., 2015). When at its brightest, the comet was only visible for a short period each night due to its small solar elongation. The distribution of amateur observers across the world meant that good temporal coverage could be achieved. The data were used to constrain the duration of coma features, look for diurnal changes, constrain grain velocities, and determine the approximate time grains spent in the sunward side of the coma. The campaign was managed online (www.psi.edu/ison). Observers were asked to reduce the data before submitting. The campaign team then enhanced images to look for coma features. The results were: the data were far from uniform; few observers had access to narrowband filters (used to separate gas and dust signatures in the coma); the low altitude of observing meant high air mass; and no features were visible when the comet was at its brightest, although features were seen earlier in the period. While the challenge was to deal with the non-uniformity of the data set, the temporal coverage was of value. The overall conclusion on the usefulness of the amateur data was: "These campaigns may be most valuable in situations where any single observer can only obtain data during a small window of time, but contributions from many such observers…leads to a more complete understanding of the spatial and temporal evolution of the comet." (Samarasinha et al., 2015).

### 2.4 4*P Campaign

The Planetary Science Institute ran the 4*P campaign (https://www.psi.edu/41P45P46P), starting observations in 2017 for comets 41P and 45P, and in 2018 for comet 46P. The 46P element was supplemented by a campaign organised by the University of Maryland (https://wirtanen.astro.umd.edu). For 46P, 18 amateur observers submitted observations. These campaigns comprised both professional and amateur observations.

### 2.5 Other Comet Campaigns

In addition to formal campaigns organised by professional astronomers, there are 'informal' campaigns that are self-organised within the active amateur comet observing community whenever a particularly bright or interesting comet appears. It has often been the case that such monitoring discovers interesting behaviour and triggers observations by professionals with access to larger facilities, e.g., in the case of a major outburst, such as that of comet 17P/Holmes in 2007 (Miles, 2010). A recent example of an informal amateur-led campaign of observations was that for C/2019 Y4 ATLAS. The comet brightened significantly through the early part of 2020, with predictions of possible naked-eye visibility. It was well placed for observing for large parts of the night from northern latitudes, close to the zenith, and its appearance coincided with good weather and the COVID-19 lockdown.
Multiple observers across the world monitored its development, sharing their observations and analysis primarily via a simple comet mailing list and some Facebook groups (notably Comet Watch). Observations were submitted to the Comet Observations Database (COBS, https://www.cobs.si/), the International Comet Quarterly (ICQ, http://www.icq.eps.harvard.edu/), the Minor Planet Center (MPC, https://www.minorplanetcenter.net/) and the British Astronomical Association (BAA, https://britastro.org/cometobs/) archives, among others. The comet became very interesting on 19 March 2020 when it started to fragment. Professional astronomers were alerted to the dramatic changes and were successful in applying for _Hubble_ observations (http://tiny.cc/HubbleCometAtlas). There is now a rich, high-cadence archive available for detailed analysis: 740 and 789 observations in the BAA and COBS archives respectively (as of 2020-08-18; note the overlap of the datasets).

## 3 _Rosetta_ Campaign

The most ambitious comet mission to date is ESA's _Rosetta_ mission (https://www.esa.int/Science_Exploration/Space_Science/Rosetta) to comet 67P/Churyumov–Gerasimenko, with aims to contribute to the study of comet and solar system origins, and the relationship between cometary and interstellar material. It was the first long-term mission to orbit, land on, and 'live with' a comet, making multi-instrument observations for over two years. The orbiter instruments included remote sensors (such as cameras and radio receivers) and direct sensors (such as dust and particle analysers) (Glassmeier et al., 2007). The orbiter's cameras made observations of the comet from distances ranging from 672 million km (when waking from hibernation) to just 2.7 km at closest orbit (additionally it observed whilst descending to the comet's surface for its 'hard landing'). Larger orbits (e.g., at 1500 km) were used to study the plasma environment and the wider coma. At perihelion the orbiter was at a distance of $\sim$300 km.

### 3.1 67P Ground-based Campaign Logistics

A ground-based campaign was part of the mission, including both professional and amateur observations, and coordinated with the planning of spacecraft operations (Snodgrass et al., 2017). The ground- and space-based observations combined to serve three key purposes:

* monitoring the overall behaviour and activity of the comet in support of the mission;
* providing a basis for multi-scale studies – e.g., how does the composition of the coma vary from 10 to 10,000 km from the nucleus? What are the chemical reactions behind this variation?
* allowing comparison between 67P and other comets, and therefore application of the _Rosetta_ results to the larger population.

Unfortunately, during the active phase of the mission at the comet (January 2014 – September 2016), 67P was not very favourably positioned for Earth-based observations. The next apparition, with perihelion in November 2021, is much more favourable. Observations with large professional telescopes were possible from late February 2014 until shortly after the _Philae_ landing in November 2014 (Snodgrass et al., 2016), after which the comet was at low solar elongation for many months. The comet passed through perihelion in August 2015 and was reasonably well placed for observations during the second half of 2015 and the first half of 2016. The amateur campaign organisation was funded by JPL as part of the NASA contribution to the ESA-led mission. A website was established by JPL to hold the main campaign information.
This was a static site, with real-time interactions taking place via a Facebook group, PACA_Rosetta67P (https://www.facebook.com/groups/paca.rosetta67p, launched in January 2014 and archived in November 2019). This was used for communication, sharing guidance, discussions and sharing images. When it was archived it had 203 members. The amateur campaign was formally launched in April 2015, following approximately one year of preparation work in parallel with the early part of the _Rosetta_ mission (when the comet was still too far from the Sun to be observable by most amateurs), but initial plans to include amateur astronomers were already discussed as early as 2011, at the beginning of coordination efforts for professional observations. The invitation to contribute stated that 'All formats of data will be acceptable and encouraged. … CCD, DSLR images, spectra, sketches, visible observations. …most helpful will be raw, unprocessed and in FITS format'. Further, more detailed guidance was issued on 5 June 2015, with guidelines on what observations were required, including filters, orientation and format. On filters: 'at a minimum, continuum images (UBVRI), but LRGB, or specific narrow band filters (e.g. OIII) are also acceptable, for studying colours of the comet. We recommend Sloan r' and g' filters for a consistent set of data on dust and gas.' It was stated that submissions should include unenhanced images (targets, darks and flats, if any). The need for accurate time information was stressed. Each observer was asked to complete a user agreement form, which collected contact details and some basic information on the telescope(s) to which they had access. The data format and filename requirements were set out in detail, along with a request for supplementary information regarding the observations (context information including date/time, location, camera, filter, exposure times, position angle, plate scale – but not telescope details). Of the 327 people who registered, only 26 individuals or collaborations are known to have submitted data sets in FITS format. This is a relatively low number, and it is likely that more amateurs hold observations of comet 67P which could usefully be added to the data set for the next analytical stage of this research. Observers are encouraged to contact the lead author if they wish to contribute observations. The ESA Planetary Science Archive (PSA) set up registered user accounts for FTP upload, which were used by some observers. Although this was intended to be the single route for all data collection, delays in setting it up (it was not available until late September 2015) and an initial lack of clear instructions and/or assistance in using the FTP protocol meant that most users did not use it (a campaign member later documented the process for her fellow observers). Apparent confusion between the requirements for this temporary collection FTP site and the more complicated rules for permanently archived data at the PSA also appeared to put off some users. The JPL project manager set up a Dropbox alternative, and most users submitted this way. These files were renamed to a standard file-naming convention, which included the date and time of observation, the filter, the exposure length and the initials of the observer. The intention was for a subset of the observations to be permanently archived and made publicly available, following some quality assessment.
When funding ceased, the personnel involved moved on to other projects, and work on collation and archiving effectively ceased. At the time of writing there are still two separate locations holding data (with overlap). The PSA data have not been renamed to match the JPL conventions. Table 1 contains data from both repositories. As well as the science data in FITS format uploaded to servers, other images and observations were uploaded to the Flickr (https://www.flickr.com/groups/paca_67p/) and/or Facebook (https://www.facebook.com/groups/paca.rosetta67p) 67P PACA groups. Certificates of appreciation were made available to those who took part in the Facebook group, and these were well received.

### 3.2 Data submitted

#### 3.2.1 FITS Data

With so many different observers, using such a wide range of equipment and workflows, and of different experience levels, it is inevitable that the data set and the associated metadata vary widely in quantity and quality. It is not always clear whether/how observations have been calibrated/reduced. The lack of robust metadata was potentially particularly problematic for detailed analysis - filter and sensor details in particular. Given the relatively small number of observers, it has been possible to contact most observers and ask for data and FITS header information to be verified and supplemented (subsection 4.1). It has not been possible to reach all observers though, as some email addresses appear to be no longer valid, and contact details are not available for those who did not register initially. An analysis of the data set (Table 1) shows:

* 10,432 observation files are known to have been submitted by 26 observers/observing groups covering 284 dates (48 dates and 308 observations were during the previous perihelion passage in 2008-2009). Figure 1 shows observations over the main 2015-2016 observing period;

Table 1: Observations Submitted by Amateur Astronomers - FITS Format

| Observers | Period | Dates (no) | Obs (no) | Filters | Locations | Obs Code^a | Ap^b (mm) | FL^c (mm) | FOV^d (arcmin) | Scale (arcsec/pix) |
|---|---|---|---|---|---|---|---|---|---|---|
| T Angel and C Harlingten | 2014-07-08 to 2016-03-15 | 89 | 5659 | C | Spain | Z85 | 100 | 400 | 11.7x7.79 | 0.92 |
| J Loum | 2015-10-21 to 2016-04-04 | 40 | 1545 | C,Q,R,G | USA | W14 | 254 | 1194 | 25.2x18.8 | 1.10 |
| Slooh^e | 2014-11-12 to 2015-07-31 | 46 | 645 | L,R,G,B | Tenerife, Chile | G40, W88 | 350, 355, 432 | 3850, 3904, 2929 | 12.8x8.61, 31.3x20.8, 43.4x43.4 | 0.70, 1.41, 1.91 |
| E Bryssinck^e, F-J Hambsch | 2014-06-27 to 2016-01-15 | 81 | 469 | C, R | Chile, Belgium, Australia | G39, B96, Q62 | 400, 400, 700 | 2700, 1520, 4531 | 47.0x47.0, 36.5x54.7, 27.8x27.8 | 1.38, 0.82, 1.09 |
| N Hidenori | 2015-11-30 to 2016-04-09 | 21 | 423 | L | Japan | Q21 | 400 | 1520 | 84.0x84.0 | 1.23 |
| F Garcia | 2008-08-23 to 2016-04-03 | 52 | 329 | Clear | Spain | J38 | 250 | 2030 | 17.1x17.1 | 2.00 |
| P Carson | 2015-07-18 to 2016-04-30 | 34 | 324 | L | UK | K02 | 315 | 1656 | 36.7x27.7 | 1.32 |
| A Chapman | 2016-01-09 to 2016-03-13 | 4 | 245 | r' | Argentina | | 203 | 807 | 38.2x28.6 | 1.65 |
| A Diepvens | 2015-08-11 to 2016-01-11 | 17 | 207 | L,R | Belgium | C23 | 200 | 1350 | 34.5x23.2 | 1.89 |
| M Tsumara^f | 2015-07-14 to 2016-04-11 | 13 | 120 | C,R,G,B | ? | ? | ? | ? | 249.0x165.6 | 3.72 |
| P Lake^e | 2014-03-07 to 2015-08-08 | 5 | 88 | L,V,R,I | USA | H06 | 510, 508 | 2260, 2280 | 55.7x55.7, 54.7x36.5 | 1.09, 0.82 |
| W Clark^e | 2015-08-13 to 2015-12-02 | 6 | 60 | L,R,G,B | USA | H06, U69, H06 | 508, 610, 430 | 2280, 3962, 1939 | 32.1x32.1, 32.1x32.1, 49.0x32.7 | 0.63, 1.26, 0.96 |
| R Castillo | 2015-08-13 to 2016-03-13 | 3 | 54 | L | Spain | | 254 | 1194 | 39.6x26.4 | 3.10 |
| J-P Nougayrede, G Arlic, F Metz and C Andre | 2016-03-01 | 1 | 52 | L | France | 586 | 600 | 2002 | 47.5x31.7 | 0.93 |
| Northolt Branch Obs | 2016-01-15 to 2016-03-05 | 3 | 51 | None | UK | Z80 | 71 | 418 | 73.3x54.7 | 3.16 |
| N Howes^e | 2012-04-25 to 2013-07-05 | 3 | 41 | R | Australia, USA | E10, F65 | 2000 | 20000 | 10.2x10.2 | 0.3 |
| T Traub^f | 2014-07-22 to 2016-03-29 | 8 | 26 | L,R,G,B | USA | | 610 | 7788 | 16.4x16.4 | 0.32 |
| J Chambo^e | 2015-06-24 to 2015-11-19 | 4 | 17 | L,R,G,B | Australia, USA, USA | Q62, H06, U69 | 510, 508, 610 | 2260, 2280, 3962 | 32.1x32.1, 32.1x32.1, 32.1x32.1 | 0.63, 0.63, 1.26 |
| J L Maestre^f | 2015-11-18 | 1 | 15 | | Spain | | 406 | 3900 | 21.7x21.7 | 1.27 |
| P Brlas^e | 2014-06-17 to 2015-12-19 | 12 | 14 | L,V,R | Australia, Australia, USA, Spain, USA | Q62, Q62, U69, I89, H06 | 700, 430, 610, 318, 106 | 4527, 2912, 3962, 2541, 530 | 27.8x27.8, 43.9x43.9, 32.1x32.1, 37.3x24.9, 234x156 | 0.55, 0.64, 0.63, 0.73, 3.51 |
| M Tissington (SARAS) | 2015-10-17 to 2016-01-10 | 5 | 13 | C | Tenerife | J54 | 355 | 1877 | 24.4x24.4 | 1.43 |
| R Nicollerat | 2015-10-10 | 1 | 12 | C | Switzerland | K17 | 354 | 2937 | 13.5x13.6 | 0.79 |
| T Zwach^f | 2016-04-07 to 2016-04-08 | 1 | 11 | L,R,V,B | Spain | I89 | 150 | 1095 | 111.6x74.4 | 1.67 |
| K Yoshimoto | 2015-07-25 to 2016-03-07 | 5 | 9 | C,V,I | Japan | | 160 | 1000 | 35.2x35.2 | 4.1 |
| Isle of Man Observatory | 2016-02-18 to 2016-03-15 | 2 | 2 | None | Isle of Man | 987 | 406 | 4064 | 9.0x9.0 | 1.06 |
| P Detterline | 2016-01-06 | 1 | 1 | C | Australia | | 356 | 1914 | 24.1x16.2 | 0.66 |

Note. — The table is sorted by the number of images submitted by each observer. ^a MPC/IAU Observatory code, if applicable. ^b Telescope aperture. ^c Telescope focal length. ^d Image field of view. ^e These observers used multiple telescope configurations, either locally or remotely. ^f It has not yet been possible to confirm these data with the observer.

Figure 1: Number of Amateur Observations 2015-2016

* there is good geographical coverage (Figure 2);
* there is good temporal coverage around perihelion on 13 August 2015, and around the dates of particular interest identified so far, when outbursts were noted by spacecraft instruments in July, August and September 2015 (Vincent et al., 2016) (Figure 3);
* there are wide ranges of apertures, fields of view, and pixel scales used for the observations (Table 1);
* some observers made just a small number of observations each night; others acquired multiple images in different filters;
* Tony Angel and Caisey Harlingten's data set is by far the largest in number, with a large number of images per night;
* only 8 observers provided calibration/reduction files (578 files) as requested in the guidance, although others submitted calibrated images. Some submitted stacked images rather than unprocessed images;
* 993 observations were undertaken with remote telescopes, which have standard pipeline calibration processes;
* the information in the FITS header does not always conform to the guidance or to FITS standards;

Figure 2: Observing locations (Blue=Professional, Red=Remote, Green=Local)

* a variety of filters have been used, but primarily standard imaging filters (Clear, Luminance, Red, Green, Blue) rather than scientific filters (UBVRI or Sloan r', g'). In some cases there is no filter data in the FITS header, so follow-up with observers has been needed before analysis;
* the guidance asked for a narrative file providing extra details of the observations, but these were not generally provided. For some observers, who did not initially register, this has meant no contact details are available either.

Figure 3: Number of Amateur Observations Around Perihelion (13 August 2015)

Details of the professional observations submitted to the _Rosetta_ campaign were obtained, and a comparative analysis of the dates of observations was undertaken. This showed that there were 58 days, during the period 2013-04-17 to 2016-04-30, when amateur observations were available but no professional data were available. In the 3 months around perihelion (2015-07-01 to 2015-10-01) there were 15 days when only amateur observations were available (Figure 4). The aim of using amateur observations to improve temporal coverage has been achieved.

Figure 4: Professional and Amateur Observations Around Perihelion (13 August 2015). The figure shows how amateur observations supplemented the professional observations, with 15 days during the period 2015-07-01 to 2015-09-30, around perihelion, when only amateur observations are available.

#### 3.2.2 Images

In addition to the submission of FITS data, members uploaded JPEG images to Flickr and Facebook. The PACA 67P/Churyumov-Gerasimenko Flickr group (https://tinyurl.com/Paca-67P-Flickr) has 272 ground-based images (1 July 2020), uploaded by 47 observers. Of these, 36 uploaded $\leq$5 images, 9 uploaded between 6 and 25, and the remaining two uploaded 36 and 56.
The majority (77%) also included scientific analysis, primarily photometric measurements (Figure 5), but also morphology (Figure 6) and screenshots from Astrometrica (http://www.astrometrica.at/). Images at key points in the comet's orbit, or at significant milestones in the mission, were often uploaded (Figure 7). Some members also processed data from the mission instruments. It is much more difficult to catalogue the uploads to Facebook, as the discussions and uploads relate not only to science data, but also to the mission more generally, and to social and conference elements too. Facebook does not lend itself to effective cataloguing and archiving of content.

Figure 5: Example of Flickr Upload - scientific analysis of observation on 18 September 2015, wide field showing tail. Credit: Tony Angel and Caisey Harlingten.

Figure 6: Example of Flickr Upload - scientific analysis of observation of coma on 18 September 2015, including coma morphology. Credit: Erik Bryssinck.

Figure 7: Example of Flickr Upload - image representing a significant milestone in the mission - here the last observations on 2016 July 28 and 30. Credit: Wendy Clark/Slooh

### 3.3 Potential Uses for the Amateur Data Set

Amateur data can be used for astrometry, photometry and morphology. Astrometry measures the comet's position, which allows study of changes due to non-gravitational forces caused by comet activity. Characterisation of comet orbits is important for ensuring effective in-situ measurements, for predicting possible stellar occultations, and also for monitoring any potential hazards for Earth. Photometric studies allow the measurement of total brightness, which allows monitoring of dust and gas production rates, and of how they vary through the orbital/rotational cycles. Coma morphology, monitoring outbursts and jets from the nucleus, also gives insights into rotation and pole orientation. Such measurements can be compared with in-situ data to verify correlations between large-scale and local structures that could allow interpretation of events in comets not visited by spacecraft. Photometry can be performed automatically using different apertures to correspond with different scales at the comet (with the pixel scale, and therefore the aperture radius, calculated automatically by querying the HORIZONS (https://ssd.jpl.nasa.gov/horizons.cgi) database for the comet's distance at each observation time). Differential photometry techniques rely on comparisons with stars in the same frame as the comet. For amateur data there are two potential challenges to this approach: the robustness of calibration (particularly flat fielding), which could result in inconsistencies across the frame; and the need for knowledge of the filter and CCD response to ensure a colour match to catalogue objects. The $Af\rho$ parameter can also be calculated as a way of comparing results across different telescope apertures and systems, as is already done under the CARA project. For morphological study the challenge is obtaining sufficient resolution and the use of the most appropriate specialist filters (e.g., CN), which are not generally used by amateurs. Larger amateur telescopes, and the public and schools access telescopes, such as Slooh and the Faulkes telescopes, are capable of discerning fine transient features. Where there are multiple frames from one night it is possible to co-add/stack images to improve resolution and signal-to-noise ratios.
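As an illustration of this kind of automated measurement, the sketch below queries HORIZONS (via astroquery) for the observing geometry, converts a fixed projected radius at the comet into an aperture radius in pixels, and evaluates $Af\rho$ following A'Hearn et al. (1984). It is a simplified outline rather than a campaign pipeline: the target designation, observatory code, solar magnitude and pixel scale are example values only, and comet queries to HORIZONS may need an apparition qualifier to resolve a unique record.

```python
from astroquery.jplhorizons import Horizons

AU_KM = 1.495978707e8   # astronomical unit in km
AU_CM = 1.495978707e13  # astronomical unit in cm

def comet_geometry(jd, target="67P", site="500"):
    # Heliocentric distance r and geocentric distance delta (both in AU)
    # at Julian date jd; "500" (the geocentre) is used here as an example.
    eph = Horizons(id=target, location=site, epochs=jd).ephemerides()
    return float(eph["r"][0]), float(eph["delta"][0])

def aperture_radius_pix(delta_au, rho_km=10000.0, pix_scale=1.0):
    # Aperture radius in pixels subtending rho_km at the comet, for a
    # detector with pix_scale arcsec/pixel.
    theta_arcsec = 206265.0 * rho_km / (delta_au * AU_KM)
    return theta_arcsec / pix_scale

def afrho_cm(m_comet, r_au, delta_au, rho_km=10000.0, m_sun=-27.11):
    # Afrho = 4 r^2 Delta^2 F_com / (rho F_sun), with r in AU and
    # Delta, rho in cm; m_sun is the apparent solar R-band magnitude
    # (an assumed value; it must match the filter of m_comet).
    flux_ratio = 10.0 ** (-0.4 * (m_comet - m_sun))
    return 4.0 * r_au**2 * (delta_au * AU_CM) ** 2 * flux_ratio / (rho_km * 1e5)
```

Because only $r$, $\Delta$ and a flux ratio enter, measurements from very different telescopes and apertures reduce to a single comparable quantity, which is what made $Af\rho$ attractive for the CARA observations described in Section 2.2.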
## 4 Surveys

### 4.1 Registered _Rosetta_ Campaign Observers

To improve the robustness of the metadata, and to understand the pre-processing of submitted data, each amateur observer was contacted directly where possible. Feedback was sought on their experience of the campaign and its processes, and suggestions for future campaigns. Of particular interest were the reasons why such a small percentage of those who signed up to the campaign actually submitted data. The responses were gathered through a Google Forms survey (Appendix A) which was sent to all those who signed up for the campaign and who had previously agreed to be contacted (to meet data protection regulations). Thirty participants completed the survey, of whom 20 (out of 26) were observers who had submitted FITS data. This unfortunately meant that the survey produced little useful data on why observations were not made or submitted, but it was possible to gather some information from responses to initial emails. Some people signed up for the campaign because they were interested in the mission and wanted to be kept informed, so there was never an intention to submit data. Others suffered from poor observing conditions: weather and observability (the comet was often poorly placed and visible only during the early hours of the morning). Some could not meet the requirement to submit FITS files, having used capture methods such as DSLRs, although some of these images were uploaded to the Facebook or Flickr sites. Some observers struggled with the technical requirements, including upload. The main results of the survey are:

* Observers heard about the campaign from a wide range of sources - the official website, group websites (forums, Facebook, societies), email groups, conferences, articles in the astronomy and general press, personal recommendation (particularly Padma Yanamandra-Fisher), and inspiration from members of the _Rosetta_ team giving talks to local astronomical societies.
* The reasons for sign-up were related to wanting to be part of the _Rosetta_ mission and to contribute to the scientific study of comets.
* While many were experienced observers who had engaged in campaigns before (including some involved in _Halley Watch_), some were new to scientific observing and were looking to enhance their skills and enjoyment.
* Over half (59%) of observations were made primarily for the campaign and 26% primarily for personal use, with the remainder being mixed use (including submitting to other data collection organisations such as the BAA and COBS, and to forums and magazines).

Figure 8: Survey Results from Campaign Participants (Rating - 1:Low, 6:High)

Generally, observers were happy with the guidance provided (Figure 8), although some commented that publicity for the guidance could have been better. It became clear later in the process that some of the terminology in the guidance was interpreted differently by amateurs and professionals (e.g., professionals refer to the process of applying calibration frames as 'reduction', amateurs refer to it as 'calibration', and to an amateur using Astrometrica 'reduction' means analysing the data using catalogue matching and producing measurements of position and brightness; see http://tiny.cc/TAngelSeggauPresentation). For those who were members, most found the Facebook page a very useful source of advice and discussion. The group was a closed group, and this limited wider sharing of images and engagement.
Not all observers got the advice they needed, and not all of the guidance was implemented by all observers. There was wide variation in both the software used and the workflows. Just over half (56%) of observers said they submitted their observations as they made them, with 44% submitting as a block at the end of the campaign. Some observers found the upload process difficult (Figure 8). Additionally, it was suggested that FTP was an unsophisticated approach, and that the need to manually rename files (in some cases thousands of them) was onerous. Observers suggested it would be useful to have verification processes in place at the start of a campaign to ensure compliance with FITS header requirements and to highlight any quality or compliance issues for timely resolution. Tools for determining optimum observations (e.g., exposure times, number of frames, filters) based on each observer's equipment, location, and mount characteristics would be welcomed. For less experienced users, more detailed guidance (including walk-through and video guides) would be helpful. Effective communication is critical to an effective campaign. The survey results for communication methods show that most amateur astronomers can be traditional in their preferences for modes of communication, and many do not use social media. The preference for email lists was common to almost all (90%) respondents. A clearer understanding at the outset of the use to which the observations were to be put would have helped observers make the most useful observations. Most contributors would welcome more information on the progress of the campaign, the analysis and the results. A very encouraging finding was that all observers said they enjoyed being part of the campaign and were likely or very likely to participate in further campaigns.

### 4.2 Amateur Astronomy Community

A more general survey of the amateur astronomer community was also undertaken (Appendix B). This was to gauge knowledge of the original campaign and determine what might encourage greater participation in future campaigns. This survey was widely disseminated through societies such as the BAA, astronomy forums, comet mailing lists, Facebook (including the PACA page), Twitter, and via the Royal Astronomical Society's Specialist Discussion meeting on comets in December 2019. Forty-four people responded, from 8 different countries (although 72% were from the UK, reflecting the distribution methods). Only 2 had submitted data for the campaign. Fifty-five percent had heard of the _Rosetta_ campaign, from a range of sources. The main sources cited were: forums (5), the BAA (4), and the PACA Facebook group, magazines, professional conferences/mailings, personal contact, and web and mailing lists (all 2 each). The survey asked a general question about where observers got their information on comets. (This was designed to capture data on sources for publicising future campaigns.)
Again there was a wide range: specialist mailing lists (e.g., comets-ml), forums, newsletters, national associations (such as the BAA, the Society for Popular Astronomy (SPA), the American Astronomical Society (AAS), and Astronomy Ireland), local societies, specialist comet websites (MPC, JPL Horizons, COBS, CARA), personal websites of specialist comet observers, general astronomy websites (Astronomy Picture of the Day (APOD), weekly/monthly skyguides), social media groups (Comet Watch and PACA on Facebook), magazines, news organisations, remote telescope operators (e.g., iTelescope), YouTube channels, planetarium software and word of mouth.

For future campaign communication there was a clear split over social media, with 47% not wishing to use it. There was a strong preference for a dedicated website and/or forum to host all the information for the campaign and allow discussion, supplemented by a mailing list and regular newsletters.

On guidance, respondents felt that availability, consistency, and detail were important. Guides should include details of comet observability based on location, charts on how to find the comet, the best equipment to use, and observing techniques. The level of detail should be tailored to different observing cohorts (general public, schools, general observers, specialist and experienced comet observers). The science observations’ guidance should cover the purpose of the observations, ensuring accurate timing, requirements for FITS headers, and the provision of calibration files or evidence of appropriate pipeline processes. All terminology should be clearly explained to ensure consistency and avoid confusion. Tools could be developed for planning observations (e.g., to calculate optimum exposure times based on equipment, the movement of the comet, and the purpose). The upload process should be simple, incorporate a compliance check for FITS header information, and automatically generate filenames following the naming convention. It should be made easy to provide brief context data (e.g., weather conditions, any issues during the observations). Where initial analysis was to be undertaken by observers (particularly novice observers and schools), detailed walk-through guides and video tutorials should be prepared. Two particularly interesting ideas were: first, to work with mobile app providers (e.g., developers of planetarium tools such as SkySafari and Stellarium) to provide both publicity and guidance as part of the app (e.g., inclusion in ‘Tonight’s best’ recommendations, alerts for observing opportunities); and second, to set up a mentoring scheme to provide detailed help and guidance. A dedicated forum would help the community share knowledge and experience and keep up enthusiasm - as well as disseminating and showcasing results.

To encourage involvement in future campaigns, respondents said that a clear statement of the scientific value amateurs can add, tied to the campaign’s aims, objectives and outcomes, was the first priority. All possible communication channels should be used for initial and ongoing communication – one size does not fit all. Ideally, a ‘buzz’ should be created around the campaign: in the mainstream media if possible, and through magazines, astronomy societies, videos, the web and social media. Outreach and schools’ events would also bring the campaign to new audiences. The campaign could set up some student projects, which could report through teacher and learning networks – possibly linked to societies or academics.
Competitions could be organised to generate wider interest (e.g., first sighting of the comet, best images, best sketch, best image with a smartphone). These images (rather than science data) in a gallery could be a rich source for publicity and illustration purposes. The outputs from the campaign, in terms of both scientific output (posters, conference presentations, research papers, press releases, with amateurs as co-authors or cited data submitters) and a data archive for future use, should be regularly reported.

### 4.3 Faulkes Telescope Project Comet 46P/Wirtanen Schools Campaign

A campaign of observations of comet 46P (http://resources.faulkes-telescope.com/course/view.php?id=150) during its close approach to Earth in 2018/9 was set up to test the feasibility of running a campaign aimed at schools (through the Faulkes Telescope Project/Las Cumbres Observatory (Brown et al., 2013)) and to test processes and guidance. The campaign included developing: background materials on comets; details on observing 46P, including finder charts; walk-through guides for setting up observations; details of the observations required; and detailed guides for astrometric, photometric and morphological analysis. The project also provided some hands-on support for teachers. In total 2,638 observations were made during the period 1 June 2018 to 30 April 2019 (not all directly from the campaign).

To assess the effectiveness, and to learn lessons, a third survey (Appendix C) was undertaken of the UK schools which had participated. All three schools submitted feedback - see Acknowledgements. Schools said they chose to participate to inspire their pupils in science and astronomy, using real research. They heard about the campaign through an astronomy forum and the Faulkes Telescope Project mailing list. Sixty-two pupils participated, 30 in primary (state school) and 32 in secondary (private schools). There was a mix of whole-class participation, astronomy clubs and individual pupils. All pupils were involved in scheduling observations on the LCO telescope network and in processing and analysing the data.

All schools said their pupils enjoyed being part of the campaign, and that the enthusiasm was maintained through the three months of the campaign. The guidance was considered useful, but more detailed guidance on processing would have been helpful, perhaps in the form of videos. All felt that a forum for discussion with other educators would be a useful addition for future campaigns. Those leading the work in their school said the project was engaging: it allowed them to share their love of astronomy and engage their pupils (and their parents) in comet observations. It provided a catalyst for developing after-school astronomy observing sessions, and for science activities around solar observing (during the school day). The educational value was considered to be broad. One school was a girls’ school, and this project inspired the pupils to be more involved in physics and science. Others said the combination of astronomy, physics, chemistry, maths, geography, and planning (including dealing with different time zones) made for a rich educational experience. All would like to be involved in future campaigns.

## 5 Discussion

Pro-Am campaigns have demonstrated that amateurs can add value, particularly by providing better temporal coverage. What can be learnt from the effectiveness of these campaigns, and the _Rosetta_ campaign in particular, to inform future amateur campaigns?
Older campaigns had greater logistical challenges due to the lack of modern communication methods. More recent ones have potentially better communications and better-equipped amateurs. Ensuring adequate mission/campaign resources to actively manage the planning, implementation and follow-up is always a challenge. Process and cost efficiency is essential, and this means effective planning, clear guidance, tools for observers, effective initial quality control, and realistic and robust plans for collecting and archiving the data.

### 5.1 Campaign Objectives

It is important to be clear about the goals of the amateur elements of a campaign. Obtaining high-quality science data is usually the primary goal, to allow long-term analysis and short-term alerting of the professionals to significant changes in the comet. But to look only for the best scientific data risks missing many other potential campaign benefits, for example:

* • increasing science capital by raising awareness of comets, and astronomy, for the general public. This is particularly important for campaigns in support of space missions, with their associated large publicly-funded costs.
* • deepening the skills, interest and knowledge of amateur observers - adding a new dimension to their ‘hobby’ (although for many it is a very serious affair).
* • involving schools can increase the interest in astronomy, science and other related disciplines. It can also widen horizons on career choices. Observing and studying comets can be a fun vehicle for teaching a wide range of subjects – as the survey from the 46P campaign noted, students practised their maths, geography, physics, biology, chemistry, planning, cooperation and analysis skills. They gained insights into the way real research is undertaken, including the challenges of equipment failure, software problems, and weather.

### 5.2 Data Collection

What, where, how and when data should be submitted can be difficult to optimise. Amateurs do not have to submit their data, and are less likely to do so if the requirements are perceived to be too onerous; but without compliance with appropriate standards, submitted data can be almost useless.

#### 5.2.1 What to Collect?

If the campaign is looking to analyse morphological changes in the comet over a long period, then multiple stacked images, repeated over multiple nights, will give the good SNR needed to tease out faint detail. If looking to constrain the start of outbursts, then the submission of individual, accurately timed, high-cadence images is important (even though these might be low SNR). Larger-aperture telescopes will provide the best resolution, although tracking is more of a constraint. For large-scale features such as large comae and dust and gas tails, smaller telescopes with wider fields of view will be most suited. Longer exposures are also possible before star or comet trailing becomes an issue.

For some purposes it may be useful to receive the results of analysis, rather than raw data. An example would be astrometry measurements from standard software packages such as the widely-used Astrometrica. The CARA project provided observers with its own software (http://cara.uai.it/soft_list) to measure Af$\rho$ in a consistent way, and the results, rather than raw data, were collected and collated. For the _Deep Impact_ mission, photometric measurements were requested, with raw FITS data files only submitted later.
The _ExoClock_ project (https://www.exoclock.space/) also provides software and an agreed methodology (for measuring exoplanet transit lightcurves), as does the Lunar Impact Flash project (https://www.nasa.gov/centers/marshall/news/lunar/observing_schedule.html) for detecting and measuring lunar meteor strikes. Robust guidelines, good tutorials, and, ideally, provided software are key requirements for making these types of submission useful. The _GAIA_ alert follow-up project (http://gsaweb.ast.cam.ac.uk/alerts/home) takes a slightly different approach, with observers asked to do some initial data analysis (with Astrometry.net, http://astrometry.net/, and Sextractor, https://sextractor.readthedocs.io/en/latest/) before uploading the results to a calibration server (http://gsaweb.ast.cam.ac.uk/followup). This server calculates the magnitude, without needing knowledge of the filter used, and populates a live lightcurve for each _GAIA_ alert object with data points credited to the observer.

In presenting science results, particularly when engaging with the media and with schools, it is very helpful to have good-quality colour images of the comet. Producing colour images from multiple science filters is tricky - not least because the comet may move significantly between the images taken in subsequent filters. So a single (or, better, stacked) colour image taken with a standard digital camera or a one-shot-colour astronomy camera can really add value for publicity and public engagement purposes. For these, precise timing is not important, nor are many details of the capture and processing. This opens up the campaign to a much broader group of astronomers and even the general public (as demonstrated by the multiple images of C/2020 F3 NEOWISE posted on social media and websites).

Clear guidance on what files (images, calibration files), what format (FITS and which FITS headers, JPEGs, other pictures) and what processing can or cannot be done is critical. This must be available before the start of the campaign, and stressed during the campaign - allowing observers to decide whether they are prepared to spend the time and effort needed for science observations. Science data should be unprocessed, and to be useful must be accompanied by specific metadata (e.g., accurate timing, exposure length, filter, sensor details). Other metadata (e.g., context data) are useful but not essential. For publicity or educational purposes, JPEGs are acceptable, and enhancement techniques are useful, while details such as precise timing are less critical. Given the different levels of rigour needed, it would be advisable to set up different, clearly differentiated, channels for submission. The process for pictures could be much simpler.

#### 5.2.2 Where to Upload?

The decision on where to upload and how to archive is difficult, particularly for smaller campaigns. For _Rosetta_, the ESA’s PSA archive was planned as the repository. Late set-up, a lack of clear guidance to users, and confusion over necessary filename conventions meant that the data and observations were split between uploading via FTP to ESA storage, a Dropbox facility, FLICKR and Facebook pages. While the FLICKR site currently houses a very useful archive of images, the absence of cataloguing makes it difficult and time-consuming to locate any specific observations. For Facebook it is even more difficult, and now that the group (which was members-only) has been archived, the images are not publicly available.
Both FLICKR and Facebook rely on private companies for their existence, and their future cannot be guaranteed. ESA’s PSA archive standards are stringent, to ensure long-term accessibility and compatibility. The time and cost of converting all the amateur data to a consistent format is unlikely to be a priority for ESA or another agency. For _Halley Watch_, all data were initially held in hardcopy, before being digitised on CD and made available online at NASA’s PDS: Small Bodies Node. The _Rosetta_ archive could similarly be stored but not converted into a future-proof format or catalogued in detail. The filename convention adopted for upload to the ESA PSA FTP site (Observation date_UTC time_Object_Filter_Exposure in seconds_Observer initials.FITS) is good and would be sufficient for any future researchers to at least identify the date of observation, filter and observer. With an index (of observers and their equipment and locations), this would allow quick filtering of observations for any purpose, and this method may be appropriate for future campaigns too. The challenge is to decide who will provide the storage and the accessibility. It is also worth noting that even conversion of files to a standard naming convention appeared to be a barrier to participation for some observers (given the large number of observations they made), with most of the files uploaded via Dropbox eventually being renamed by a JPL intern.

#### 5.2.3 How and When to Upload?

The key is simplicity combined with robustness. With modern large-chip, high-resolution images, file sizes are large. If multiple observations are made over a night, then the amount of data needing to be uploaded becomes multiple GB. In some parts of the world this is not an issue, but in remote locations internet speeds are slow and connections costly. A way of compressing data for upload is important. Ideally, a web-based interface (rather than FTP or a similar system) should be provided, with a built-in compression tool to save bandwidth. Quality control should be built in - verifying FITS headers and highlighting non-compliance early enough for corrections to be made (a minimal automation sketch is given below). The system should generate consistent filenames to be used as an access tool. A log should be kept of all observations uploaded, by observer, with context and contact details, and this should form a key part of the archive. Ideally, observations should be uploaded as soon after they are made as possible, along with a short covering narrative.

#### 5.2.4 A Long-term Collaborative Comet Campaign Website and Archive?

The _Halley Watch_ project has demonstrated that having access to a digital archive can result in extra analysis long after the campaign - data analysis tools and techniques improve over time. There are currently a number of organisations who take either observation files or observational data from amateurs (e.g., the BAA takes JPEG images; COBS and the MPC take astrometric and photometric results). For the latter, consistency of measurement technique (particularly the apertures used) is challenging, and this constrains the robustness of the data. If a longer-term, more generic solution is considered (potentially including professional data too), there are many practical questions to be addressed: who should host the website and upload facilities, who should store comet data, how would it be quality-controlled, how long should it be kept, with what access, and how could the management and support costs be funded.
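The header-verification and renaming steps recommended above are straightforward to automate. The following is a minimal sketch rather than the campaign's actual tooling: it assumes the astropy package is available, and the list of required keywords, the default object name and the observer initials are illustrative assumptions only.

```python
from pathlib import Path
from astropy.io import fits

# Illustrative required keywords; a real campaign would publish its own list.
REQUIRED_KEYS = ["DATE-OBS", "EXPTIME", "FILTER", "INSTRUME"]

def missing_keys(path):
    """Return the required keywords absent from the primary FITS header."""
    header = fits.getheader(path)
    return [key for key in REQUIRED_KEYS if key not in header]

def campaign_filename(path, observer_initials):
    """Build a name following the convention
    Date_UTCtime_Object_Filter_ExposureSeconds_Initials.FITS."""
    h = fits.getheader(path)
    date, _, time = h["DATE-OBS"].partition("T")  # e.g. '2015-08-13T02:41:00'
    return "_".join([date, time.replace(":", ""), str(h.get("OBJECT", "67P")),
                     str(h["FILTER"]), str(int(round(h["EXPTIME"]))),
                     observer_initials]) + ".FITS"

for f in sorted(Path("incoming").glob("*.fits")):
    absent = missing_keys(f)
    if absent:
        print(f"{f.name}: missing {absent}")            # flag for timely correction
    else:
        print(f"{f.name} -> {campaign_filename(f, 'HU')}")
```

Checks of this kind address compliance at the point of submission; who hosts the upload facility and the long-term archive remains an organisational question.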
In the short term, in Europe, the Europlanet VESPA programme (http://www.europlanet-vespa.eu/) may be able to help. The Planetary Virtual Observatory and Laboratory (PVOL) database (http://pvol2.ehu.eus/pvol2/; Hueso et al., 2018) is an example of a VESPA-funded project. It makes available planetary images taken by amateurs across the world, with consistent metadata. Unfortunately the Europlanet programme is funded in short-term blocks by the European Commission, so its long-term future cannot be guaranteed.

### 5.3 Effective Communication

Modern communication methods should make effective communication much easier than in earlier campaigns - although the existence of multiple channels adds complexity. There is a split between observers who use social media and those who do not, and this needs to be factored in. A website to hold all the guidance and tools (including upload), live updates, feedback to observers, and a discussion forum is the foundation. There are established interactive mailing lists with a wide membership, such as Comets-ml. There are also a few core comet and Pro-Am Facebook groups. These should be used. Traditional print media (magazines, newspapers) may be reducing in number, but still have a place, along with their digital arms, for getting messages out to observers and the general public. Local and national societies provide good access to traditional (and often highly-skilled) observers, and internet forums provide access to active communities too. The personal touch should not be forgotten - some observers in the _Rosetta_ campaign became involved after a talk to their astronomy society by the mission scientist Matt Taylor. Two schools were involved in the 46P campaign due to personal contact with the organiser. Personal requests from Padma Yanamandra-Fisher also led to experienced observers joining the campaign. Core messages, information and guidance need to be consistent however they are communicated, but tailored to specific audiences. Regular communication, during both the data gathering and subsequent analysis stages, is key to keeping observers engaged and enthusiastic, as is recognition and credit in publications.

## 6 Conclusions

### 6.1 Campaign Summary

The comet 67P amateur campaign certainly created interest in the _Rosetta_ mission: 10,432 observations were submitted by 26 observers/groups, covering 284 dates. This compares with 17,352 observations over 463 dates by professionals. There are 58 days during the main observing period (2013-04-17 to 2016-04-30), 15 of them in the 3-month period around perihelion in August 2015, on which amateur but no professional data are available. So amateurs have added significantly to the observational coverage. There is good longitudinal coverage (Figure 2), and wide variations in scale (Table 1).

### 6.2 Surveys Summary

In total 77 people responded to the surveys:

* • Observers and the wider astronomy community felt clarity of purpose and guidance, and regular communication, were the most important elements of a campaign. Data submission should be made straightforward, with tools to ensure compliance with standards. There was clearly room for improvement in both of these areas in the _Rosetta_ campaign.
* • Useful metadata were collected as part of the survey to supplement/correct data from FITS headers. Having these data submitted in a consistent format with the observations would have been better, and should be implemented for the future.
* • Observers really enjoyed being part of the 67P campaign and would wish to be involved in future.
* • Educators said the schools campaign had wide educational benefits, as well as being enjoyable and inspiring for pupils, staff and parents.

### 6.3 Elements of an Ideal Campaign

The survey results, along with analysis of previous and current amateur observing campaigns, have informed the following suggested elements of an ideal campaign. While these are framed in terms of a comet campaign, many of the principles and actions would also be applicable to non-comet campaigns.

1. Agree clear aims and objectives for both science outcomes and wider benefits.
2. Agree the observations and other data/images to be collected.
3. Be realistic, given the resources available to run the campaign, and the uncertainty of comet brightness.
4. Prepare well in advance and learn from other campaigns (re-using material where appropriate). Involve the amateur community, and the professionals who will use the data, in the planning.
5. Build in a test phase well before the campaign is due to start. This should include sample observations, by a range of observers, to test the processes, systems and guidance. The feedback from both observers and researchers will allow refinement and streamlining (e.g., minimum metadata required, ease of upload, clarity of guidance), so that the actual campaign data are not compromised. It will also establish a set of experienced super-users who may support the community and act as mentors.
6. Set up a campaign website to be the information hub: a repository for guidance (at various levels), tools, feedback, a forum for discussion, and uploading data. (In the longer term this could become an overarching website covering many campaigns.)
7. Carefully consider the launch elements so that the momentum can be maintained. This may mean launching different elements, for different cohorts, at different times.
8. Use a wide variety of communication routes: press releases, the astronomy press, societies of all sizes, mailing lists, forums and social media. But keep everything consistent, and try to draft once and then disseminate, rather than covering everything individually. Create a buzz around the campaign by running competitions (e.g., first sighting, first image with different sizes of telescope, art competitions). Contact the main software providers, particularly app developers, and engage them to include the campaign in bulletins, highlights lists and observing alerts. (This will be dependent on the expected brightness and observability of the comet.)
9. Provide tools for observing: guides to position, optimum observing and exposure times; ideally these should be tailored for each observer’s location and equipment (as with the _ExoClock_ project, https://www.exoworldsspies.com/en/observers/). There should be more general information for novice observers and more technical detail for experienced observers, including details of the ideal filter specification. Develop tools and guidance to allow DSLR users to submit scientific observations, if the comet is expected to be bright enough, e.g., to ensure proper timing, as this will open up the campaign to many more observers (see the deluge of DSLR images of comet C/2020 F3 NEOWISE) and be particularly useful where viewing conditions are difficult due to low altitude and/or short observing windows.
10. Where practical, the guidance should include multimedia, e.g., short video tutorials and walk-through guides (particularly for the educational aspects). Consider setting up a mentoring scheme using experienced amateurs to guide other amateurs and schools.
11. Use the website forum to allow real-time discussion and provision of advice. Encourage participants to share their observing experiences as well as data. For educators, encourage them to share how they are using the campaign in classes and activities.
12. For upload, make it easy, ideally with compression to save bandwidth. Keep it to one location, with timely verification of the submitted data via a FITS-checking tool, plus a short narrative for context information. Use a naming convention which can be used to search for data, but automate file naming on collection rather than introducing additional complications for observers. Remember that analysis techniques will improve over time, so having an archive will be a legacy for future astronomers.
13. Ideally, following upload there should be a pipeline process to quickly measure magnitude and position (if the observer has not already reported to the MPC). The magnitude should be logged on a real-time lightcurve, with data points credited to observers (like _GAIA_). This should be on the front page of the campaign website.
14. Provide regular updates on what is happening with the campaign and what research is being undertaken; this is key to keeping observers engaged and valued, for both the current and future campaigns.
15. Recognise all submissions as adding value (e.g., produce certificates of contribution to the campaign).
16. Make the final data set freely available, and accessible, using the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) (Wilkinson et al., 2016).
17. Undertake a post-campaign evaluation to learn and disseminate lessons for future campaigns.
18. Celebrate success.

Comet 67P returns to perihelion in November 2021, and is favourably placed for observation from ground-based telescopes. This apparition will provide an excellent opportunity to test the observing campaign principles and good practice set out in this paper. The resultant data can be analysed alongside the earlier campaign data to learn more about the evolution of this favourite comet.

We would like to acknowledge the inspiration of Mike A’Hearn. We hope we can continue his legacy by supporting Pro-Am comet observing campaigns in the future. We are grateful to Dr Padma Yanamandra-Fisher for providing insight into the campaign, and support and encouragement to the amateur observers during the campaign. Tony Angel and Wendy Clark have freely shared their practical experiences of participation in the campaign, and their experiences of observing comets more generally. We thank them. Mary Abgarian provided access to the amateur data held by JPL, and Nicolas Ligier kindly provided data on the observations from the professional campaign. The Faulkes Telescope Project provided access to the Las Cumbres Observatory telescope network for the 46P schools’ campaign. The main contributors to the campaign were St Mary’s Catholic Primary School, Bridgend; RGS Dodderhill School, Droitwich Spa; and Marlborough College, Marlborough. We are grateful for the enthusiasm of their pupils and staff, particularly Ben Wooding, John McGrath and Gavin James, and hope we have inspired some future comet scientists.
We would like to thank Elizabeth Warner, University of Maryland, for providing information on the practical administration of previous amateur campaigns and on gathering observer feedback. This was an invaluable starting point in the design of the surveys for this work. We thank the reviewers for their helpful and constructive comments. Last, but certainly not least, we are grateful for the time, skill and enthusiasm of the observers who have submitted data and images. In addition to those who submitted FITS data (shown in Table 1), the following observers submitted images to the PACA Flickr group: P Yanamandra-Fisher, V Agnihotri, B Backman, A Baransky, J G Bosch, D Buczynski, M Bunnell, P Camilleri, K Churyumov, G Conzo, P Cox, D Eagle, G Fagiolo, C Feliciano, F Garcia, J Gonzalez, N James, M Kardasis, R Kaufman, S Kunihiro, D Lovro, A Maury, G Masi, R Miles, E Morales, R Naves, T Noel, A Novichonok, A Oksanen, D Peach, T Prystavski, D Romeu, K Sugawara, K Takeshita, J Tillbrook, A Tough, J Tuten, S White, A Yoneda.

## Appendix A Survey Questions: Registered _Rosetta_ Campaign Observers

This questionnaire seeks your experience of the Amateur Observing Campaign in support of ESA’s _Rosetta_ mission to comet 67P. It also invites you to submit details of any observations, and your opinions on how future campaigns could build on the _Rosetta_ campaign. This is part of a PhD research project being undertaken by Helen Usher at the Open University, UK, under the supervision of Dr Colin Snodgrass. Personal details provided will only be used for the purposes of this research (at the Open University). No personal details will be released, except to give you credit for the observations you made (and you will be informed beforehand). If you have any questions on this research please feel free to contact Helen Usher directly: <EMAIL_ADDRESS>

1. What sources do you use for information on comet observing (please give as many details as possible, e.g., which websites, magazines)?
2. Membership of Astronomy Groups
3. How did you hear about the amateur campaign?
4. What sources do you use for information on comet observing (please give as many details as possible, e.g., which websites, magazines)?
5. Why did you sign up?
6. Are you an observer, or someone just interested in the campaign?
7. Did you make observations?
8. If you didn’t make observations, could you briefly explain why not?
9. Were your observations primarily for personal use, or primarily to submit to the campaign?
10. Dates of observations
11. What guidance did you refer to before making observations?
12. Where did you access the guidance? (JPL/ESA/Facebook/Other)
13. How easy was it to find the guidance? (1-6)
14. How clear and useful was this guidance? (1-6)
15. What factors led you to give the score above?
16. Did you use a remote shared facility? (iTelescope/Slooh/FT/Other)
17. Did you use your own equipment? Location of telescope, description, aperture, focal length, camera type, make and model, make and type of filters used.
18. What software (if any) did you use for acquisition?
19. Could you provide details of your acquisition workflow?
20. If you calibrated your images before submission, what software did you use?
21. What was your calibration workflow?
22. What software did you use for any processing?
23. What was your processing workflow?
24. Did you submit your observations? (Y/N)
25. If you did not submit, could you tell us why not?
26. When did you submit observations? (As I made them/All at once at the end of the campaign)
27. Did you submit to (ESA FTP/via P Yanamandra-Fisher/Facebook/Flickr)?
28. Did you submit (Calibrated FITS/RAW FITS/JPEGs/Calibration files/Context info)?
29. How straightforward did you find the upload process? (1-6)
30. What factors led you to give the score above?
31. If you uploaded FITS files, did you ensure the FITS headers contained all the required observation data?
32. How could we help you to easily provide these FITS header data in future? (accurate FITS header data makes analysis much easier and more robust)
33. When you registered, what were you expecting (including support, guidance, ongoing communication)?
34. What sources did you use to obtain the information and guidance on the campaign (please be as explicit as possible)?
35. How well were your expectations and needs met? (1-6)
36. What factors led you to give the score above?
37. Did you join the Facebook Group? (Y/N)
38. If no, could you give details of why not, and what you would have preferred instead?
39. If yes, how useful did you find the Facebook group? (1-6)
40. What factors led you to give the score above?
41. Did you post images and/or comments? (Y/N)
42. If there was a future similar campaign (e.g., 67P at its next apparition) would you be likely to participate? (Definitely/Probably/Possibly/No)
43. Was there any information (or were there any tools) which would have made observation and upload easier for you?
44. What are your preferred methods of communication? (Email mailing list/Website/Social media/Dedicated forum/Dedicated group message board/Regular online newsletters/Magazines/Microsoft Teams or similar/Other)
45. Is there anything you feel should be done differently for future campaigns?
46. Are you aware of any other professional-amateur collaborations and observing campaigns which are particularly effective, and from which we might draw good practice lessons?
47. How should other observers be encouraged to be part of future campaigns?
48. Any other comments/suggestions/complaints/kudos/answers to unasked questions?
49. Finally, did you have fun?

## Appendix B Survey Questions: Amateur Astronomers

The ESA _Rosetta_ mission to comet 67P included an amateur observing campaign. The aim was to encourage amateurs across the world to submit observations of the comet, which could then be used to supplement professional observations. Amateur data can add greater temporal sampling and wider fields of view. This questionnaire, which forms part of a PhD study by Helen Usher at the Open University, seeks information on the effectiveness of the awareness-raising methods used, and seeks views on how future observing campaigns could most effectively reach comet observers worldwide. The personal details provided will be used only for the purposes of this research (at the Open University). No personal details will be released. If you have any questions on this research please feel free to contact Helen Usher directly: <EMAIL_ADDRESS>

1. Country
2. What sources do you use for information on comet observing (please give as many details as possible, e.g., which websites, magazines)?
3. Membership of Astronomy Groups
4. Did you know that there was an official amateur astronomer campaign in support of the _Rosetta_ space mission to comet 67P? If so, can you remember where you heard about it?
5. Did you participate in the campaign?
If you participated in the campaign, have you received the more detailed survey for participants from Helen Usher? (If not, it is available here: https://forms.gle/iUMeLYMu5SVguAqVA)
6. How should observers be encouraged to be part of future campaigns?
7. What publicity should be used?
8. What guidance and tools should be provided?
9. How should the guidance and tools be made available?
10. How should ongoing communication be handled?
11. Any other comments?

## Appendix C Survey Questions: 46P Schools’ Campaign Observers

Thank you for participating in the campaign. We hope you enjoyed being part of it, and that it provided good learning opportunities for (you and) your pupils. This was the first time we have really attempted a comet observing campaign, but we hope to do more in the future! We would therefore be very grateful if you could fill in this short questionnaire to let us know what was good and useful, and what could be improved. As well as informing future FT/LCO campaigns, Helen Usher will be drawing out more general lessons as part of her PhD studies with the Open University, UK. If you are happy for Helen to follow up then please include your name and contact details. The data will be kept securely and used purely for the purposes of this research. No names will be released without prior approval. Thank you! Helen Usher and the FT team

1. Your name, role, school
2. School type (Primary/Secondary)
3. Why did you decide to observe/be part of the 46P observing campaign?
4. How did you hear about the FT campaign?
5. How many pupils were involved? (age range)
6. Did you use the activities with (whole class/astronomy group/individual or selected pupils)?
7. What activities did you undertake?
8. How much did your pupils enjoy being part of the campaign? (1-6)
9. What factors led you to give the score above?
10. How much did you enjoy being part of the campaign? (1-6)
11. What factors led you to give the score above?
12. What do you consider to be the educational value of the campaign?
13. How useful was the guidance? [Listed]
14. Was there any guidance missing?
15. How do you think the guidance could be improved for future campaigns (particularly any that you rated not useful)?
16. What would be your preferred method of communication with the FT campaign team for guidance etc.? (FT website/FT Facebook/Twitter/Email/Discussion forum/Microsoft Teams/In person/Other)
17. Would it be useful to be able to discuss and share with other schools, and if so how?
18. Would you like to be involved in future campaigns?
19. How would you encourage other schools to be involved in future?
20. Any other comments?

## References

* A’Hearn (2004) A’Hearn, M. F. 2004, Comets II
* A’Hearn (2011) —. 2011, Annu. Rev. Astron. Astrophys., 49, 281, doi: 10.1146/annurev-astro-081710-102506
* A’Hearn (2017) —. 2017, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., 375, 20160261, doi: 10.1098/rsta.2016.0261
* A’Hearn et al. (2005) A’Hearn, M. F., Belton, M. J., Delamere, A., & Blume, W. H. 2005, Space Sci. Rev., 117, 1, doi: 10.1007/s11214-005-3387-3
* A’Hearn et al. (1984) A’Hearn, M. F., Schleicher, D. G., Millis, R. L., Feldman, P. D., & Thompson, D. T. 1984, Astron. J., 89, 579, doi: 10.1086/113552
* Bowler (2009) Bowler, S. 2009, Astron. Geophys., 50, 2.10, doi: 10.1111/j.1468-4004.2009.50210.x
* Brown et al. (2013) Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, Publ. Astron. Soc.
Pacific, 125, 1031, doi: 10.1086/673168
* Caswell et al. (2020) Caswell, T. A., Droettboom, M., Lee, A., et al. 2020, matplotlib/matplotlib: REL: v3.3.1, doi: 10.5281/ZENODO.3984190
* Dunlop (2003) Dunlop, S. 2003, in Inf. Handl. Astron. - Hist. Vistas (Dordrecht: Springer Netherlands), 275–294, doi: 10.1007/0-306-48080-8_16
* Edberg (1988) Edberg, S. J. 1988, Int. Astron. Union Colloq., 98, 95, doi: 10.1017/S0252921100092307
* Glassmeier et al. (2007) Glassmeier, K.-H., Boehnhardt, H., Koschny, D., et al. 2007, Space Sci. Rev., 128, 1, doi: 10.1007/s11214-006-9140-8
* Harris et al. (2020) Harris, C. R., Jarrod Millman, K., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hueso et al. (2018) Hueso, R., Juaristi, J., Legarreta, J., et al. 2018, Planet. Space Sci., 150, 22, doi: 10.1016/j.pss.2017.03.014
* Knight et al. (2017) Knight, M. M., Snodgrass, C., Vincent, J.-B., et al. 2017, Mon. Not. R. Astron. Soc., 469, S661, doi: 10.1093/mnras/stx2472
* Meech (2017) Meech, K. J. 2017, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., 375, 20160247, doi: 10.1098/rsta.2016.0247
* Meech et al. (2005) Meech, K. J., Ageorges, N., A’Hearn, M. F., et al. 2005, Science, 310, 265
* Milani et al. (2007) Milani, G. A., Szabó, G. M., Sostero, G., et al. 2007, Icarus, 187, 276, doi: 10.1016/j.icarus.2006.10.014
* Miles (2010) Miles, R. 2010, Soc. Astron. Sci. 28th Annu. Symp. Telesc. Sci., 51. https://arxiv.org/abs/1006.4019
* Moreno et al. (2014) Moreno, F., Pozuelos, F., Aceituno, F., et al. 2014, Astrophys. J., 791, 118, doi: 10.1088/0004-637X/791/2/118
* Price-Whelan et al. (2018) Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, Astron. J., 156, 123, doi: 10.3847/1538-3881/aabc4f
* Robitaille et al. (2013) Robitaille, T. P., Tollerud, E. J., Greenfield, P., et al. 2013, Astron. Astrophys., 558, 33, doi: 10.1051/0004-6361/201322068
* Samarasinha et al. (2015) Samarasinha, N. H., Mueller, B. E., Knight, M. M., et al. 2015, Planet. Space Sci., 118, 127, doi: 10.1016/j.pss.2015.10.006
* Sekanina & Fry (1991) Sekanina, Z., & Fry, L. 1991, The Comet Halley Archive: Summary Volume, Tech. rep., NASA: JPL. https://ntrs.nasa.gov/citations/19930002055
* Snodgrass & Jones (2019) Snodgrass, C., & Jones, G. H. 2019, Nat. Commun., 10, 1, doi: 10.1038/s41467-019-13470-1
* Snodgrass et al. (2016) Snodgrass, C., Jehin, E., Manfroid, J., et al. 2016, Astron. Astrophys., 588, A80, doi: 10.1051/0004-6361/201527834
* Snodgrass et al. (2017) Snodgrass, C., A’Hearn, M. F., Aceituno, F., et al. 2017, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., 375, 20160249, doi: 10.1098/rsta.2016.0249
* Vincent et al. (2016) Vincent, J.-B., A’Hearn, M. F., Lin, Z.-Y., et al. 2016, Mon. Not. R. Astron. Soc., 462, S184, doi: 10.1093/mnras/stw2409
* Wilkinson et al. (2016) Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., et al. 2016, Sci. Data, 3, 1, doi: 10.1038/sdata.2016.18
# Quantization of Length in Spaces with Position-Dependent Noncommutativity

Jishnu Aryampilly <EMAIL_ADDRESS>, Department of Physics, Pondicherry University, Puducherry 605014, India

Muthukumar Balasundaram <EMAIL_ADDRESS>, Department of Physics, Pondicherry University, Puducherry 605014, India

Aamir Rashid <EMAIL_ADDRESS>, Department of Physics, Pondicherry University, Puducherry 605014, India

(Dated: )

###### Abstract

We present a novel approach to quantizing the length in noncommutative spaces with position-dependent noncommutativity. The method involves constructing ladder operators that change the length not only within a plane but also along the third direction, due to a noncommutative parameter that is a combination of the canonical/Weyl-Moyal type and the Lie algebraic type. The primary quantization of length in canonical-type noncommutative space takes place only on a plane, while in the present case it happens in all three directions. We establish an operator algebra that allows for the raising or lowering of eigenvalues of the operator corresponding to the square of the length. We also determine how the obtained ladder operators act on different states and work out the eigenvalues of the square of the length operator in terms of eigenvalues corresponding to the ladder operators. We conclude by discussing the results obtained.

## 1 Introduction

In the expansive realm of fundamental physics, the notion of spacetime stands as a bedrock upon which our understanding of the universe is constructed. Conventionally, the principles of classical physics have offered a framework to describe spacetime in which space and time coordinates exhibit commutativity, enabling precise measurements of position and duration. Nevertheless, emerging theories and frameworks have heralded a departure from this classical paradigm, revealing the intriguing prospect of a noncommutative spacetime. The idea of noncommutativity in the coordinates of spacetime can be attributed to Heisenberg [1], and it was subsequently developed further by Snyder [2] as a means to address the issue of divergences in quantum field theory. Noncommutative spacetime theory suggests that space and time coordinates do not commute but instead exhibit a fundamental uncertainty or noncommutativity relation. It follows that measurements of both position and time are inherently uncertain at infinitesimal scales. This departure from classical notions has sparked considerable interest and led to the formulation of various theoretical models attempting to capture the essence of noncommutative spacetime [3, 4, 5]. The growing interest in noncommutative spaces is connected to the prediction of noncommutative structures in string theory and loop quantum gravity [6, 7, 8, 9]. The literature on noncommutative theories is replete in the contexts of quantum field theories [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], quantum mechanics [21, 22, 23, 24, 25, 26, 27, 28, 29, 30] and gravity theories [31, 32, 33, 34, 35, 36]. In the noncommutative geometry of Alain Connes [37, 38], the spectral manifold is shown to admit a geometric analog of the Heisenberg commutation relation involving the Dirac operator and the Feynman slash of real scalar fields, leading to the quantization of volume [39]. The idea of length as an operator has already been discussed in canonical quantum gravity [40].
Loop quantum gravity has developed the rigorous construction of (spatial) geometrical operators, such as the area and the volume [41, 42]. Within the context of noncommutative spaces, the idea of length as an operator was proposed in [43], where it was shown to result in the quantization of length in the noncommutative space with the canonical/Weyl-Moyal type noncommutativity $[\hat{x}^{\mu},\hat{x}^{\nu}]=i\theta^{\mu\nu}$ (1) among the coordinates, where $\theta^{\mu\nu}$ is a constant and real antisymmetric matrix. The motivation behind such a proposal was the following. The noncommutativity of spatial coordinates is closely linked to the existence of a minimum length within a system. This minimum length is commonly associated with inherent uncertainties in distance measurements. Rather than associating the minimum length with uncertainties, it can be attributed to the minimum value of the quantized length. If length is taken as an operator with a spectrum of eigenvalues, then a set of ladder operators may exist to go from one eigenstate to another such that $[\hat{L}^{2},\hat{a}]=\lambda\,\hat{a},$ (2) where $\hat{L}^{2}$ is the operator corresponding to the square of the length, and $\hat{a}$ and $\hat{a}^{\dagger}$ are the ladder operators. If $\hat{a}=\sum_{\mu}\alpha_{\mu}\,\hat{x}^{\mu}$, where $\alpha_{\mu}$’s are complex constants, then Eq.(2) leads to an eigenvalue equation which, in the case of 3-D with $\theta^{12}=\theta^{13}=\theta^{23}=\theta$, gives three real eigenvalues and their corresponding eigenvectors. Two of the eigenvectors give the required ladder operators that change the length in the plane formed by them. The third eigenvector points in the direction normal to that plane, and the length is not quantized along this normal direction [43]. In this paper, we follow an approach similar to the operator methods in the quantum harmonic oscillator and angular momentum problems and apply it to the case of position-dependent noncommutativity. Position-dependent noncommutativity has been discussed before in [44, 45, 47, 46, 48]. In particular, the noncommutative parameter used in our approach is the following combination of the canonical/Weyl-Moyal type and Lie algebraic type: $[\hat{x}^{\mu},\hat{x}^{\nu}]=i(\theta^{\mu\nu}+B^{\mu\nu}_{\phantom{~{}~{}}\rho}\,\hat{x}^{\rho}),$ (3) where $\theta^{\mu\nu}$ corresponds to a constant and real antisymmetric matrix and $B^{\mu\nu\rho}$ is real and completely antisymmetric. The paper is organized as follows. In Section 2, we establish an operator algebra of the length-square operator that allows for the raising and lowering of eigenvalues of $\hat{L}^{2}$. In Section 3, we apply this approach in a 3-dimensional space. We construct a commutation relation between the operator $\hat{L}^{2}$ corresponding to the square of the length and the ladder operators, analogous to the commutation relation between the Hamiltonian of the harmonic oscillator and its raising/lowering operator. We work with this commutation relation to obtain an eigenvalue equation and consequently construct a set of operators $\hat{a}_{-}$, $\hat{a}_{+}$ and $\hat{b}$. Once we have obtained the operators, we use them to construct a ladder of states that constitute the eigenstates of the $\hat{L}^{2}$ operator. In Section 4, we work out the eigenvalues of $\hat{L}^{2}$ in terms of eigenvalues corresponding to the operators $\hat{a}_{+}\hat{a}_{-}$ and $\hat{b}$.
The actions of $\hat{a}_{-}$ and $\hat{a}_{+}$ change not only the eigenvalues of $\hat{L}^{2}$ but also the eigenvalues of $\hat{b}$, and in this way, the length is quantized along the direction of $\hat{b}$ also. In Section 5, by introducing another operator $\hat{K}$ that commutes with the ladder operators, we investigate the system’s degeneracy further. We conclude in Section 6.

## 2 Construction of Ladder Operators

We establish an operator algebra in a manner that allows for the raising or lowering of eigenvalues of the operator $\hat{L}^{2}$. We define the operator corresponding to the square of the distance as $\hat{L}^{2}=g_{\mu\nu}\hat{x}^{\mu}\hat{x}^{\nu},$ (4) where $g_{\mu\nu}$ is a constant symmetric metric of $D$-dimensional spacetime and Einstein’s summation convention is used over the repeated indices $\mu$ and $\nu$, which take the values $1,2,\ldots,D$. The prescription to construct a set of ladder operators $\{\hat{a}^{\mu}\}$ is that they satisfy the following commutation relation: $[\hat{L}^{2},\hat{a}^{\mu}]=\lambda\,\hat{a}^{\mu},$ (5) where $\lambda$ is a constant to be determined and $\hat{a}^{\mu}$ is linearly related to $\hat{x}^{\mu}$ as $\hat{a}^{\mu}=U^{\mu}_{\phantom{\mu}\nu}\,\hat{x}^{\nu},$ (6) where $U^{\mu}_{\phantom{\mu}\nu}$ is the transformation matrix. Substituting Eq.(4) and Eq.(6) into Eq.(5) and using Eq.(3) leads to the following operator equation: $2i\,\theta_{\mu}^{\phantom{\mu}\rho}\,U^{\sigma}_{\phantom{\mu}\rho}\,\hat{x}^{\mu}\,+\,i\,U^{\sigma}_{\phantom{\mu}\rho}\,B_{\nu\phantom{\rho}\kappa}^{\phantom{\nu}\rho}\,(\hat{x}^{\nu}\,\hat{x}^{\kappa}\,+\,\hat{x}^{\kappa}\,\hat{x}^{\nu})=\lambda\,U^{\sigma}_{\phantom{\mu}\mu}\,\hat{x}^{\mu}.$ (7) Since $(\hat{x}^{\nu}\,\hat{x}^{\kappa}\,+\,\hat{x}^{\kappa}\,\hat{x}^{\nu})$ is symmetric and $B_{\nu\phantom{\rho}\kappa}^{\phantom{\nu}\rho}$ is antisymmetric under the exchange of $\nu$ and $\kappa$, the second term is zero, which leads to the eigenvalue equation for the transformation matrix: $2i\,\theta_{\mu}^{\phantom{\mu}\rho}\,U^{\sigma}_{\phantom{\mu}\rho}\,=\lambda\,U^{\sigma}_{\phantom{\mu}\mu}\,.$ (8) If $\hat{X}^{\dagger}=\begin{pmatrix}\hat{x}^{1},\hat{x}^{2},\ldots\hat{x}^{D}\end{pmatrix}$ and $g$ is the matrix form of the metric tensor, then $\hat{L}^{2}=\hat{X}^{\dagger}\,g\,\hat{X}.$ (9) To relate $\hat{L}^{2}$ to the ladder operators we define $\hat{A}^{\dagger}=\begin{pmatrix}\hat{a}^{1\dagger},\hat{a}^{2\dagger},\ldots\hat{a}^{D\dagger}\end{pmatrix}$. Going by the analogy with the harmonic oscillator and angular momentum problems, the ladder operators will be useful only if the length operator is related to the number operators $(\hat{a}^{1})^{\dagger}\hat{a}^{1},\,(\hat{a}^{2})^{\dagger}\hat{a}^{2},\ldots$. Therefore, we require $\hat{L}^{2}$ in the following form: $\hat{L}^{2}=\frac{1}{\gamma}\,\hat{A}^{\dagger}g\hat{A}=\frac{1}{\gamma}\,\hat{X}^{\dagger}U^{\dagger}\,g\,U\hat{X},$ (10) where $\gamma$ is a constant. Comparing Eq.(9) and Eq.(10), we get the following condition for the transformation matrix: $U^{\dagger}\,g\,U=\gamma\,g.$ (11)

## 3 The Length Operator in 3-D Space

We now apply our approach to a 3-dimensional space. For this case, the commutation relation that we use is $[\hat{x}^{i},\hat{x}^{j}]=i(\theta^{ij}+B^{ij}_{\phantom{ij}k}\,\hat{x}_{k}).$ (12) We define the ladder operator as $\hat{a}=\alpha_{k}\hat{x}^{k},$ (13) where Einstein’s summation convention is implied and the $\alpha_{k}$’s are complex constants to be determined.
The operator corresponding to the square of the length, in this case, thus becomes $\hat{L}^{2}=g_{ij}\hat{x}^{i}\hat{x}^{j}.$ (14) As discussed, we assume the following relation in analogy with the angular momentum operator in quantum mechanics: $[\hat{L}^{2},\hat{a}]=\lambda\hat{a}.$ (15) Then, substituting Eq.(14) and Eq.(13) in Eq.(15) and using the commutator in Eq.(12) gives the relation $2i\theta^{k}_{\phantom{k}m}\alpha_{k}=\lambda\alpha_{m},$ (16) where $\theta^{km}$ is a constant antisymmetric matrix and $g_{ij}$ is assumed to be diag(1,1,1). We assume the entries of $\theta^{km}$ to be $\theta^{12}=\theta^{13}=\theta^{23}=\theta$, which leads Eq.(16) to give the nontrivial eigenvalues $\lambda=\pm{2\sqrt{3}\theta}$; the third, trivial, solution is $\lambda=0$. The set of values for $\alpha_{i}$ corresponding to $\lambda=-{2\sqrt{3}\theta}$ is worked out to be $(\alpha_{1},\,\alpha_{2},\,\alpha_{3})=(\rho\sigma,-\rho\sigma^{*},\rho)$, where $\rho=\displaystyle e^{i\delta_{1}}$ and $\sigma=-e^{i\pi/3}$. The operator $\hat{a}$ corresponding to this negative $\lambda$ is denoted by $\hat{a}_{-}$. It is identified with the lowering operation by comparing Eq.(15) with a negative $\lambda$ to the analogous relation in the harmonic oscillator problem. The lowering operator can then be expressed as $\displaystyle\hat{a}_{-}=\rho\left[\sigma\hat{x}^{1}-\sigma^{*}\hat{x}^{2}+\hat{x}^{3}\right].$ (17) The eigenvector for the positive value $\lambda={2\sqrt{3}\theta}$ leads to the raising operator $\hat{a}_{+}=(\hat{a}_{-})^{\dagger}$. The eigenvector corresponding to the third eigenvalue, that is, the trivial solution $\lambda=0$, leads to the operator $\hat{b}=\beta_{i}\,\hat{x}^{i}$ with $(\beta_{1},\beta_{2},\beta_{3})=(1,-1,1)$. With this notation, we write the Hermitian conjugate of the basis $\hat{A}$ in Eq.(10) as $\hat{A}^{\dagger}=(\hat{a}_{+},\hat{a}_{-},\hat{b})$. The explicit values of $\alpha_{i}$ in Eq.(16) and $\beta_{i}$ have properties such as $\alpha_{i}\alpha^{i}=0$, $\alpha_{i}\,\beta^{i}=\alpha_{i}^{*}\,\beta^{i}=0$ and $\theta^{ij}\beta_{j}=0$. Essentially, the three eigenvalues for $\lambda$ in Eq.(15) lead to the following commutators: $[\hat{L}^{2},\hat{a}_{\pm}]=\pm 2\sqrt{3}\theta\hat{a}_{\pm},$ (18) and $[\hat{L}^{2},\hat{b}]=0.$ (19) Also, the commutation relations among the ladder operators $\hat{a}_{-}$, $\hat{a}_{+}$ and $\hat{b}$ are obtained as $\displaystyle[\hat{a}_{-},\hat{a}_{+}]$ $\displaystyle=\sqrt{3}(3\theta+B\hat{b}),$ (20) $\displaystyle[\hat{b},\hat{a}_{\pm}]$ $\displaystyle=\mp\sqrt{3}\,B\,\hat{a}_{\pm},$ (21) $\displaystyle[\hat{a}_{+}\hat{a}_{-},\hat{a}_{+}]$ $\displaystyle=\sqrt{3}(3\theta+B\hat{b}+\sqrt{3}B)\hat{a}_{+},$ (22) $\displaystyle[\hat{a}_{+}\hat{a}_{-},\hat{a}_{-}]$ $\displaystyle=-\sqrt{3}(3\theta+B\hat{b})\hat{a}_{-}.$ (23) With these commutators, the operator form of the square of the length is expressed as $\hat{L}^{2}=\frac{1}{3}[2\hat{a}_{+}\hat{a}_{-}+\sqrt{3}(3\theta+B\hat{b})+\hat{b}^{2}].$ (24) Eq.(18) leads to $[\hat{L}^{2},\hat{a}_{+}\hat{a}_{-}]=0$, and since $[\hat{L}^{2},\hat{b}]=0$, it is possible to construct a complete set of simultaneous eigenstates of $\hat{L}^{2}$, $\hat{a}_{+}\hat{a}_{-}$ and $\hat{b}$.

## 4 Eigenvalues of the Length-Square Operator

Let us start with an eigenstate $|n\rangle$ of the number operator $\hat{a}_{+}\hat{a}_{-}$ and consider the action of the ladder operators on it.
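As a quick numerical cross-check of the spectrum $\lambda\in\{0,\pm 2\sqrt{3}\theta\}$ obtained from Eq.(16), one can diagonalize the matrix $2i\theta^{km}$ directly. The following is a minimal sketch, not part of the original derivation, assuming numpy and an illustrative value $\theta=1$:

```python
import numpy as np

theta = 1.0  # illustrative value of the noncommutativity scale
# Antisymmetric matrix with theta^{12} = theta^{13} = theta^{23} = theta
Theta = theta * np.array([[ 0.0,  1.0, 1.0],
                          [-1.0,  0.0, 1.0],
                          [-1.0, -1.0, 0.0]])

# Eq.(16) is an eigenvalue problem for 2i*Theta (up to a transpose,
# which leaves the spectrum unchanged)
lam = np.linalg.eigvals(2j * Theta)
print(np.round(sorted(lam.real), 6))   # [-3.464102, 0.0, 3.464102]
print(2 * np.sqrt(3) * theta)          # 3.464102..., the nonzero pair
```

The zero eigenvalue corresponds to the direction of $\hat{b}$, along which Eq.(15) is trivially satisfied.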
From Eq.(22) and Eq.(23), it is clear that the action of $\hat{a}_{\pm}$ on $|n\rangle$ is to raise/lower the eigenvalue of $\hat{a}_{+}\hat{a}_{-}$. So, we define the actions of $\hat{a}_{-}$, $\hat{a}_{+}$ and $\hat{b}$ respectively on the normalized state $|n\rangle$ as $\displaystyle\hat{a}_{-}|n\rangle$ $\displaystyle=h_{1}(n)|n-1\rangle,$ (25) $\displaystyle\hat{a}_{+}|n\rangle$ $\displaystyle=h_{2}(n)|n+1\rangle,$ (26) $\displaystyle\hat{b}|n\rangle$ $\displaystyle=g(n)|n\rangle.$ (27) Considering the Hermitian conjugate of Eq.(25), we obtain $\langle n|\hat{a}_{+}=\langle n-1|h_{1}^{*}(n).$ (28) Thus, it can easily be shown that $\langle n|\hat{a}_{+}\hat{a}_{-}|n\rangle=\langle n-1|h_{1}^{*}(n)h_{1}(n)|n-1\rangle={|{h_{1}(n)}|}^{2}.$ (29) Similarly, upon taking the Hermitian conjugate of Eq.(26), we obtain $\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle={|{h_{2}(n)}|}^{2}.$ (30) This can be rewritten in terms of the commutator, $\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle=\langle n|(\hat{a}_{+}\hat{a}_{-}+[\hat{a}_{-},\hat{a}_{+}])|n\rangle$, and using the commutator in Eq.(20), we obtain the relation $\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle=\langle n|(\hat{a}_{+}\hat{a}_{-}+3\sqrt{3}\theta+\sqrt{3}B\hat{b})|n\rangle.$ (31) The above relation can be re-expressed using Eq.(29), Eq.(30) and Eq.(27) as $|h_{2}(n)|^{2}=|h_{1}(n)|^{2}+3\sqrt{3}\theta+\sqrt{3}Bg(n).$ (32) Considering the Hermitian conjugate of Eq.(25) with $n+1$ in place of $n$, we have $\langle n+1|\hat{a}_{+}=\langle n|{h_{1}}^{*}(n+1)$. Therefore, $\langle n+1|\hat{a}_{+}|n\rangle$ gives ${h_{1}}^{*}(n+1)$ on the one hand and ${h_{2}}(n)$ on the other, which leads to the relation $|h_{2}(n)|^{2}=|h_{1}(n+1)|^{2}.$ (33) Also, considering the action of $\hat{b}$ on the states $\hat{a}_{\pm}|n\rangle$ and using the commutator in Eq.(21), we can find the relation between $g(n+p)$ and $g(n)$ for any integer $p$: $g(n+p)=g(n)-pB\sqrt{3}.$ (34) Eq.(18) implies that $\hat{a}_{-}$ decreases the eigenvalues of $\hat{L}^{2}$ by the amount $2\sqrt{3}\theta$. This decrease cannot go on forever, since otherwise $\hat{L}^{2}$ would take negative values, which is unphysical. So, let us define a ground state of the system $|\overline{n}\rangle$ such that the action of the lowering operator $\hat{a}_{-}$ on it gives $\hat{a}_{-}|\overline{n}\rangle=0.$ (35) Here, we have $h_{1}(\overline{n})=0$ or $|h_{1}(\overline{n})|^{2}=0.$ (36) We can further extend our analysis using Eq.(32), Eq.(33) and Eq.(34) to compute values of $h_{1}$ as $\displaystyle|h_{1}(\overline{n}+1)|^{2}$ $\displaystyle=[3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})],$ (37) $\displaystyle|h_{1}(\overline{n}+2)|^{2}$ $\displaystyle=2[3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})]-3B^{2},$ (38) and so on. The analysis can be further extended to obtain $|h_{1}(\overline{n}+m)|^{2}=m[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})-\frac{(m-1)}{2}\sqrt{3}B)].$ (39) For a general $n$, where $n=\overline{n}+m$ and $m\geq 0$, the above equation becomes a function of $n$ and $\overline{n}$.
In a similar manner, the general form of Eq.(34) is better expressed as a function of $n$ and $\overline{n}$: $g(n)=g(\overline{n})-(n-\overline{n})\sqrt{3}B,$ (40) where $g(\overline{n})$ comes from the operator $\hat{b}$ acting on the ground state, that is, $\hat{b}|\overline{n}\rangle=g(\overline{n})|\overline{n}\rangle.$ (41) Accordingly, Eq.(39) can be rewritten as $|h_{1}(n)|^{2}=(n-\overline{n})[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})-\tfrac{(n-\overline{n}-1)}{2}\sqrt{3}B)].$ (42) We can also find, using Eq.(33), that ${|h_{2}(n)|}^{2}=(n-\overline{n}+1)[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})-\tfrac{(n-\overline{n})}{2}\sqrt{3}B)].$ (43) Since $|h_{1}(n)|^{2}$ and $|h_{2}(n)|^{2}$ cannot take negative values for any $n$, we can infer that $3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})\geq\frac{(n-\overline{n})}{2}3B^{2}.$ (44) Suppose $\tilde{n}$ is the maximum value $n$ can take such that the above inequality remains valid. Then, we can fix a top-most state $|\tilde{n}\rangle$ such that $\hat{a}_{+}|\tilde{n}\rangle=0.$ (45) Now, since $h_{2}(\tilde{n})=0$ and $\tilde{n}\geq\overline{n}$, using Eq.(43) we can express $g(\overline{n})$ in terms of $\tilde{n}$ and $\overline{n}$. Thus, we obtain $g(\overline{n})=\frac{(\tilde{n}-\overline{n})}{2}\sqrt{3}B-\frac{3\theta}{B}.$ (46) Putting Eq.(46) back into Eq.(44), we get $\tilde{n}\geq\overline{n}+m$, i.e., $\tilde{n}\geq n$. So essentially $n=\overline{n},\,\overline{n}+1,\,\ldots,\tilde{n}$ and $\tilde{n}=\overline{n},\,\overline{n}+1,\ldots$. But there is no restriction on $\overline{n}$, and it can take both positive and negative integer values. We may now proceed to solve for the eigenvalues of $\hat{L}^{2}$. The length-square operator in Eq.(24) leads to the following eigenvalue equation $\displaystyle\hat{L}^{2}|n\rangle=\frac{1}{3}\big{(}2|h_{1}(n)|^{2}+3\sqrt{3}\theta+\sqrt{3}Bg(n)+g^{2}(n)\big{)}|n\rangle.$ (47) Using Eq.(42) and Eq.(40), we can re-express the eigenvalue of the length-square operator as $\displaystyle\hat{L}^{2}|n\rangle=\frac{1}{3}\big{(}[2(n-\overline{n})+1]3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})+g^{2}(\overline{n})\big{)}|n\rangle.$ (48) From the expression for $g(\overline{n})$ deduced in Eq.(46), it is evident that $g(\overline{n})$ depends on both $\tilde{n}$ and $\overline{n}$. Consequently, we also find that both $|h_{1}(n)|^{2}$ and $|h_{2}(n)|^{2}$ now depend on $\overline{n}$ and $\tilde{n}$ in addition to $n$. But $\tilde{n}$ and $\overline{n}$ do not necessarily take fixed values and could assume different values such that Eq.(44) is obeyed. In light of this, we see that the state of the system should depend on $n$, $\tilde{n}$ and $\overline{n}$. As a result, we require three indices to express the system's state, although the eigenvalues of $\hat{L}^{2}$ depend only on $n-\overline{n}$ and $\tilde{n}-\overline{n}$. The state of the system is then better represented as $|\tilde{n},\overline{n},n\rangle$. Now, employing Eq.(46) in Eq.(48) and simplifying, the eigenvalue of the length-square operator emerges as $\displaystyle\hat{L}^{2}|\tilde{n},\overline{n},n\rangle=\big{(}[2n-\tilde{n}-\overline{n}]\sqrt{3}\theta+\tfrac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\tfrac{3{\theta}^{2}}{B^{2}}\big{)}|\tilde{n},\overline{n},n\rangle.$ (49) Fundamentally, we have derived the eigenvalues of $\hat{L}^{2}$ by expressing them in relation to the eigenvalues associated with $\hat{a}_{+}\hat{a}_{-}$ and $\hat{b}$.
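The spectral claims above are also easy to spot-check. The sketch below (ours, not from the paper) verifies that the matrix eigenproblem of Eq.(16) gives $\lambda\in\\{\pm 2\sqrt{3}\theta,0\\}$ and that the eigenvalues of Eq.(49), scanned over admissible $(\tilde{n},\overline{n},n)$, are bounded below by $3\theta^{2}/B^{2}$, anticipating the discussion of the minimum that follows; the matrix convention $M_{mk}=2i\,\theta^{k}_{\phantom{k}m}$ and the test values (chosen with $\theta/B^{2}\leq\frac{1}{2\sqrt{3}}$) are assumptions of the check, not of the paper.

```python
# Numerical spot-check (ours) of Eq.(16) and Eq.(49).
import numpy as np

theta, B = 0.3, 1.3            # arbitrary values with theta/B**2 <= 1/(2*sqrt(3))
s3 = np.sqrt(3)

# Eq.(16): 2i * theta^k_m alpha_k = lambda alpha_m, with g_ij = diag(1,1,1)
th = theta * np.array([[0, 1, 1],
                       [-1, 0, 1],
                       [-1, -1, 0]])   # theta^{12} = theta^{13} = theta^{23} = theta
lam = np.sort(np.linalg.eigvals(2j * th.T).real)
assert np.allclose(lam, [-2*s3*theta, 0.0, 2*s3*theta])

def L2(ntil, nbar, n):
    # Eigenvalue of the length-square operator, Eq.(49)
    return ((2*n - ntil - nbar) * s3 * theta
            + B**2 / 4 * (ntil - nbar) * (ntil - nbar + 2)
            + 3 * theta**2 / B**2)

spec = [L2(t, b, n) for b in range(-5, 6)        # nbar: any integer
        for t in range(b, b + 10)                # ntil >= nbar
        for n in range(b, t + 1)]                # nbar <= n <= ntil
assert np.isclose(min(spec), 3 * theta**2 / B**2)   # minimum at ntil = nbar = n
print("Eq.(16) eigenvalues and the minimum of Eq.(49) check out")
```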
The length is quantized in all directions, in contrast to [43], where the length was quantized only in a plane. It can be seen that if $\theta$ is set to 0 in Eq.(49), the length-square equation reduces to the form of the angular momentum problem, with the angular momentum quantum number $l$ analogous to the value $\frac{\tilde{n}-\overline{n}}{2}$ and $\hbar$ analogous to $B$. Note that $n$ appears only in the square-bracketed first term of Eq.(49). Since $n$ takes values from $\overline{n}$ to $\tilde{n}$, this term involves $-[\tilde{n}-\overline{n}]$ when $n=\overline{n}$ and $[\tilde{n}-\overline{n}]$ when $n=\tilde{n}$. Therefore, since $\tilde{n}\geq\overline{n}$, the minimum of $\hat{L}^{2}$ occurs when $n=\overline{n}$, and this minimum is given by the equation $\displaystyle\hat{L}^{2}|\tilde{n},\overline{n},\overline{n}\rangle=\big{(}-[\tilde{n}-\overline{n}]\sqrt{3}\theta+\tfrac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\tfrac{3{\theta}^{2}}{B^{2}}\big{)}|\tilde{n},\overline{n},\overline{n}\rangle.$ (50) If $\tilde{n}=\overline{n}$, the eigenvalue of $\hat{L}^{2}$ is $\tfrac{3{\theta}^{2}}{B^{2}}$. But the eigenvalue of $\hat{L}^{2}$ can be lower than $\tfrac{3{\theta}^{2}}{B^{2}}$ if $\big{(}-[\tilde{n}-\overline{n}]\sqrt{3}\theta+\tfrac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)\big{)}<0$, i.e., if $\tilde{n}<\overline{n}+(\tfrac{4\sqrt{3}\theta}{B^{2}}-2).$ (51) It is clear from Eq.(51) that if $(\frac{4\sqrt{3}\theta}{B^{2}}-2)\leq 0$, then the inequality is violated and there cannot be a minimum lower than $\tfrac{3{\theta}^{2}}{B^{2}}$; i.e., if $\frac{\theta}{B^{2}}\leq\frac{1}{2\sqrt{3}}$, then the minimum eigenvalue of $\hat{L}^{2}$ is $\tfrac{3{\theta}^{2}}{B^{2}}$. On the other hand, when $\tilde{n}=\overline{n}+m$ with $m>0$, the condition that the eigenvalue of $\hat{L}^{2}$ be greater than or equal to zero leads to a real inequality relationship between $\tilde{n}$ and $\overline{n}$ only if $\frac{\theta}{B^{2}}\leq\frac{1}{4\sqrt{3}}$. This makes Eq.(51) inconsistent with $\tilde{n}=\overline{n}+m$ for positive $m$, so $\big{(}-[\tilde{n}-\overline{n}]\sqrt{3}\theta+\tfrac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)\big{)}$ cannot consistently be less than zero. Therefore, the minimum eigenvalue of $\hat{L}^{2}$ is $\tfrac{3{\theta}^{2}}{B^{2}}$. This is also clear from Eq.(48), since the minimum of Eq.(46) is $-\frac{3\theta}{B}$ and the minimum of $n-\overline{n}$ is 0.

## 5 Degeneracy of States

The dependence of the state of the system on more than one index can be better understood by constructing another operator $\hat{K}$ that commutes with $\hat{a}_{\pm}$ and $\hat{b}$. Since $[\hat{L}^{2},\hat{a}_{\pm}]=\pm 2\sqrt{3}\theta\hat{a}_{\pm}$ and $[\hat{b},\hat{a}_{\pm}]=\mp\sqrt{3}\,B\,\hat{a}_{\pm}$, the linear combination $B\hat{L}^{2}+2\theta\hat{b}$ of $\hat{L}^{2}$ and $\hat{b}$ commutes with $\hat{a}_{\pm}$. With slight changes, we construct the following linear combination, $\hat{K}=\hat{L}^{2}+\frac{2\theta}{B}\hat{b}+\sqrt{3}\theta+\frac{3\theta^{2}}{B^{2}},$ (52) which also commutes with $\hat{a}_{\pm}$.
Since $\hat{b}$ commutes with $\hat{L}^{2}$, we can summarise the commutator relations as $\displaystyle[\hat{K},\hat{a}_{\pm}]$ $\displaystyle=0,$ (53) $\displaystyle[\hat{K},\hat{b}]$ $\displaystyle=0.$ (54) These equations, along with Eq.(21), form a set of equations analogous to the operator algebra involving $\hat{\mathcal{L}}^{2}$, $\hat{\mathcal{L}}_{\pm}$ and $\hat{\mathcal{L}}_{z}$ in the angular momentum problem in quantum mechanics. The eigenvalue of the operator $\hat{K}$ acting on the state $|\tilde{n},\overline{n},n\rangle$ can be evaluated from Eq.(49) and Eq.(52) as $\hat{K}|\tilde{n},\overline{n},n\rangle=\left(\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\sqrt{3}\,\theta\right)|\tilde{n},\overline{n},n\rangle.$ (55) It is evident that the eigenvalue of the operator $\hat{K}$ is independent of $n$. Moreover, its eigenvalues do not change if $\tilde{n}$ and $\overline{n}$ are changed while keeping $\tilde{n}-\overline{n}$ fixed, resulting in a large degeneracy.

Figure 1: Degenerate states of the operator $\hat{K}$ lying on the shaded sphere

The physical meaning of $\hat{K}$ can be described in the following way. Eq.(10) represents the invariance of the length operator in going from the basis $\hat{X}$ to the basis $\hat{A}$. Any constant eigenvalue of $\hat{L}^{2}$ in Eq.(4) in the 3-D case would correspond to a sphere centered at the origin. In the basis $\hat{A}$, the constant eigenvalue of $\hat{L}^{2}$ in Eq.(24) would represent the same sphere centered at the origin. Writing $\hat{K}$ in the form of Eq.(24) leads to the expression $\displaystyle\hat{K}=\frac{1}{3}[2\hat{a}_{+}\hat{a}_{-}+3\sqrt{3}\theta+\sqrt{3}B\hat{b}^{\prime}+{\hat{b}}^{\prime\,2}],$ (56) where $\hat{b}^{\prime}=\hat{b}+\tfrac{3\theta}{B}$. If the constant eigenvalue of Eq.(24) represents a sphere centered at the origin, then the constant eigenvalue of Eq.(56) represents a sphere shifted along the $\hat{b}$ axis by the amount $-3\theta/B$. In Figure 1, the sphere corresponding to a constant eigenvalue of $\hat{L}^{2}$ is represented by the dashed circle centered at $O$, and the shifted sphere is represented as the shaded sphere centered at $C$. The degenerate eigenstates of $\hat{K}$ lie on the surface of this shifted sphere, but these states will have different eigenvalues for $\hat{L}^{2}$, since $\hat{L}^{2}$ is measured from the origin $O$: while $CA$ and $CB$, corresponding to the eigenvalues of $\hat{K}$, are equal, $OA$ and $OB$, corresponding to the eigenvalues of $\hat{L}^{2}$, are not. For example, the states $|\tilde{n},\overline{n},n\rangle=|8,2,n\rangle$ with different $n$'s have different eigenvalues for $\hat{L}^{2}$ in Eq.(49), but they have the same eigenvalue for $\hat{K}$ in Eq.(55). But the states $|8,2,5\rangle$ and $|9,3,6\rangle$ have the same eigenvalue for $\hat{L}^{2}$ and the same eigenvalue for $\hat{K}$. Such a common set of degenerate states of $\hat{L}^{2}$ and $\hat{K}$ lies on a circle perpendicular to the $\hat{b}$ axis, since the shift happens along the $\hat{b}$ axis. In the basis $(\hat{x}^{1},\hat{x}^{2},\hat{x}^{3})$, the shifted sphere corresponds to a shift along all three directions. In other words, if $d^{i}$ denotes the shift along the $x^{i}$ direction, then defining the operator $\hat{y}^{i}=\hat{x}^{i}+d^{i}$ such that $B^{ij}_{\phantom{ij}k}d^{k}=\theta^{ij}$ in Eq.(12) would lead to the Lie structure $[\hat{y}^{i},\hat{y}^{j}]=iB^{ij}_{\phantom{ij}k}\hat{y}^{k}$.
Although this structure would lead to the quantization of $(\hat{y}^{1})^{2}+(\hat{y}^{2})^{2}+(\hat{y}^{3})^{2}$, our length operator is different: in terms of $\hat{y}^{i}$, it is $(\hat{y}^{1}-d^{1})^{2}+(\hat{y}^{2}-d^{2})^{2}+(\hat{y}^{3}-d^{3})^{2}$. While the quantization method using $\hat{y}^{i}$ would employ raising and lowering operations along the $\hat{y}^{3}$ direction and keep the eigenvalues of $(\hat{y}^{1})^{2}+(\hat{y}^{2})^{2}+(\hat{y}^{3})^{2}$ fixed, it can easily be shown that $\hat{y}_{\pm}=\hat{y}^{1}\pm i\hat{y}^{2}$ neither raises/lowers the eigenstates of $\hat{L}^{2}$ nor commutes with $\hat{L}^{2}$, calling for the approach that starts with Eq.(5) to look for the properly oriented $\hat{a}_{\pm}$ in place of $\hat{y}_{\pm}$.

## 6 Conclusion

In conclusion, we have explored length quantization in the context of noncommutative spaces with position-dependent noncommutativity. Building upon a formalism similar to those of the quantum harmonic oscillator and angular momentum problems, we have constructed ladder operators and derived the operator corresponding to the length square in terms of these ladder operators. This investigation has been applied specifically to the case of a 3-dimensional space where the noncommutativity parameter involves a combination of the canonical/Weyl-Moyal type and the Lie type. We have found that length quantization in this scenario leads to distinct, discrete eigenvalues for the length-square operator. The ladder operators drive the behavior of the eigenvalues, resulting in the quantization of length not only within a plane but also along a direction normal to that plane. The derived ladder operators and their commutation relations have enabled us to construct a comprehensive operator for the square of the length. This operator yields a structured ladder of eigenstates that are simultaneously eigenstates of both the length-square operator and certain combinations of ladder operators. Through this approach, we have laid out the formalism for how quantization occurs within spaces with position-dependent noncommutativity. Furthermore, we have identified the ground state and explored the maximum and minimum possible values of the quantum numbers associated with the ladder operators, thereby defining the range of valid eigenstates. The study of degenerate states constitutes a significant facet of this work. In our analysis, we have examined the implications of position-dependent noncommutativity on the degeneracy of states within the framework of the length quantization problem. By introducing an operator $\hat{K}$, we have unveiled the distinct conditions under which degenerate states emerge, shedding light on the intricate balance between spatial geometry and quantum behavior. The quantization of length in noncommutative spaces with position-dependent noncommutativity opens up new avenues for exploring the fundamental nature of spacetime and its implications for physical theories. Our findings may inspire further investigations into the interplay between geometry and quantum algebra in noncommutative spacetime settings. We have presented only the quantization of a bare fundamental geometric element, i.e., length. The construction of a field theory or mechanics with the underlying quantized geometric element is beyond the scope of this work.

## References

* [1] R. Jackiw, Nucl. Phys. B Proc. Suppl. 108 (2002), 30-36 doi:10.1016/S0920-5632(02)01302-6 [arXiv:hep-th/0110057 [hep-th]]. * [2] H. S. Snyder, Phys. Rev.
71, 38-41 (1947) doi:10.1103/PhysRev.71.38 * [3] R. Banerjee, B. Chakraborty, S. Ghosh, P. Mukherjee and S. Samanta, Found. Phys. 39, 1297-1345 (2009) doi:10.1007/s10701-009-9349-y [arXiv:0909.1000 [hep-th]]. * [4] P. Padmanabhan, "Physics on Noncommutative Spacetimes" (2012), Physics Dissertations, Paper 122. * [5] J. Madore, [arXiv:gr-qc/9906059 [gr-qc]]. * [6] N. Seiberg and E. Witten, JHEP 09, 032 (1999) doi:10.1088/1126-6708/1999/09/032 [arXiv:hep-th/9908142 [hep-th]]. * [7] A. Addazi, P. Belli, R. Bernabei and A. Marciano, Chin. Phys. C 42, no.9, 094001 (2018) doi:10.1088/1674-1137/42/9/094001 [arXiv:1712.08082 [hep-th]]. * [8] J. Frohlich and K. Gawedzki, [arXiv:hep-th/9310187 [hep-th]]. * [9] J. Frohlich, O. Grandjean and A. Recknagel, [arXiv:hep-th/9706132 [hep-th]]. * [10] R. J. Szabo, Phys. Rept. 378, 207-299 (2003) doi:10.1016/S0370-1573(03)00059-0 [arXiv:hep-th/0109162 [hep-th]]. * [11] R. Gopakumar, S. Minwalla and A. Strominger, JHEP 05, 020 (2000) doi:10.1088/1126-6708/2000/05/020 [arXiv:hep-th/0003160 [hep-th]]. * [12] A. Joseph, Phys. Rev. D 79, 096004 (2009) doi:10.1103/PhysRevD.79.096004 [arXiv:0811.3972 [hep-ph]]. * [13] G. Mandal, S. J. Rey and S. R. Wadia, Eur. Phys. J. C 24, 495-514 (2002) doi:10.1007/s100520200939 [arXiv:hep-th/0111059 [hep-th]]. * [14] H. O. Girotti, Am. J. Phys. 72, 608 (2004) doi:10.1119/1.1624116 [arXiv:hep-th/0301237 [hep-th]]. * [15] M. R. Douglas and N. A. Nekrasov, Rev. Mod. Phys. 73, 977-1029 (2001) doi:10.1103/RevModPhys.73.977 [arXiv:hep-th/0106048 [hep-th]]. * [16] G. Grensing, World Scientific, 2013, ISBN 978-981-4472-69-2, 978-981-4472-71-5 doi:10.1142/8771 * [17] A. P. Balachandran, A. Ibort, G. Marmo and M. Martone, SIGMA 6, 052 (2010) doi:10.3842/SIGMA.2010.052 [arXiv:1003.4356 [hep-th]]. * [18] S. Doplicher, K. Fredenhagen and J. E. Roberts, Commun. Math. Phys. 172 (1995), 187-220 doi:10.1007/BF02104515 [arXiv:hep-th/0303037 [hep-th]]. * [19] L. Alvarez-Gaume, F. Meyer and M. A. Vazquez-Mozo, Nucl. Phys. B 753 (2006), 92-127 doi:10.1016/j.nuclphysb.2006.07.009 [arXiv:hep-th/0605113 [hep-th]]. * [20] T. Filk, Phys. Lett. B 376, 53-58 (1996) doi:10.1016/0370-2693(96)00024-X * [21] A. Saha, S. Gangopadhyay and S. Saha, Phys. Rev. D 83, 025004 (2011) doi:10.1103/PhysRevD.83.025004 [arXiv:1005.3373 [hep-th]]. * [22] V. P. Nair and A. P. Polychronakos, Phys. Lett. B 505, 267-274 (2001) doi:10.1016/S0370-2693(01)00339-2 [arXiv:hep-th/0011172 [hep-th]]. * [23] B. Muthukumar, JHEP 01, 073 (2007) doi:10.1088/1126-6708/2007/01/073 [arXiv:hep-th/0609117 [hep-th]]. * [24] B. Muthukumar, AIP Conf. Proc. 939 (2007) no.1, 359-362 doi:10.1063/1.2803828 * [25] D. Sinha, B. Chakraborty and F. G. Scholtz, J. Phys. A 45, 105308 (2012) doi:10.1088/1751-8113/45/10/105308 [arXiv:1108.2569 [hep-th]]. * [26] S. Biswas, P. Nandi and B. Chakraborty, Phys. Rev. A 102 (2020) no.2, 022231 doi:10.1103/PhysRevA.102.022231 [arXiv:1911.03196 [hep-th]]. * [27] K. Bolonek and P. Kosinski, Phys. Lett. B 547 (2002), 51-54 doi:10.1016/S0370-2693(02)02731-4 [arXiv:hep-th/0208162 [hep-th]]. * [28] P. Aschieri, C. Blohmann, M. Dimitrijevic, F. Meyer, P. Schupp and J. Wess, Class. Quant. Grav. 22, 3511-3532 (2005) doi:10.1088/0264-9381/22/17/011 [arXiv:hep-th/0504183 [hep-th]]. * [29] A. Muhuri, D. Sinha and S. Ghosh, Eur. Phys. J. Plus 136, no.1, 35 (2021) [arXiv:2006.16528 [quant-ph]]. * [30] E. Harikumar, V. S. Kumar and A. Khare, Phys. Lett. B 589, 155-161 (2004) [arXiv:hep-th/0402064 [hep-th]]. * [31] X. Calmet and A. Kobakhidze, Phys. Rev.
D 72 (2005), 045010 doi:10.1103/PhysRevD.72.045010 [arXiv:hep-th/0506157 [hep-th]]. * [32] R. Banerjee, P. Mukherjee and S. Samanta, Phys. Rev. D 75, 125020 (2007) doi:10.1103/PhysRevD.75.125020 [arXiv:hep-th/0703128 [hep-th]]. * [33] A. P. Balachandran, T. R. Govindarajan, K. S. Gupta and S. Kurkcuoglu, Class. Quant. Grav. 23 (2006), 5799-5810 doi:10.1088/0264-9381/23/20/003 [arXiv:hep-th/0602265 [hep-th]]. * [34] E. Harikumar and V. O. Rivelles, Class. Quant. Grav. 23 (2006), 7551-7560 doi:10.1088/0264-9381/23/24/024 [arXiv:hep-th/0607115 [hep-th]]. * [35] M. Roy and B. Muthukumar, [arXiv:2205.02479 [hep-th]]. * [36] M. Chaichian, A. Tureanu and G. Zet, Phys. Lett. B 660, 573-578 (2008) doi:10.1016/j.physletb.2008.01.029 [arXiv:0710.2075 [hep-th]]. * [37] A. Connes, M. R. Douglas and A. S. Schwarz, JHEP 02, 003 (1998) doi:10.1088/1126-6708/1998/02/003 [arXiv:hep-th/9711162 [hep-th]]. * [38] A. Connes, [arXiv:math/0011193 [math.QA]]. * [39] A. H. Chamseddine, A. Connes and V. Mukhanov, Phys. Rev. Lett. 114, no.9, 091302 (2015) doi:10.1103/PhysRevLett.114.091302 [arXiv:1409.2471 [hep-th]]. * [40] T. Thiemann, J. Math. Phys. 39, 3372-3392 (1998) [arXiv:gr-qc/9606092 [gr-qc]]. * [41] C. Rovelli and L. Smolin, Nucl. Phys. B 442 (1995), 593-622 [erratum: Nucl. Phys. B 456 (1995), 753-754] doi:10.1016/0550-3213(95)00150-Q [arXiv:gr-qc/9411005 [gr-qc]]. * [42] A. Ashtekar and J. Lewandowski, Adv. Theor. Math. Phys. 1 (1998), 388-429 doi:10.4310/ATMP.1997.v1.n2.a8 [arXiv:gr-qc/9711031 [gr-qc]]. * [43] M. Balasundaram and A. Rashid, Adv. High Energy Phys. 2022 (2022), 8009789 doi:10.1155/2022/8009789 [arXiv:2206.07972 [hep-th]]. * [44] L. M. Lawson, J. Phys. A 53, no.11, 115303 (2020) doi:10.1088/1751-8121/ab7497 [arXiv:2012.06906 [hep-th]]. * [45] D. N. Blaschke, F. Gieres, S. Hohenegger, M. Schweda and M. Wohlgenannt, SIGMA 14, 133 (2018) doi:10.3842/SIGMA.2018.133 [arXiv:1806.02131 [hep-th]]. * [46] M. Gomes and V. G. Kupriyanov, Phys. Rev. D 79, 125011 (2009) doi:10.1103/PhysRevD.79.125011 [arXiv:0902.3252 [math-ph]]. * [47] V. Gayral, J. M. Gracia-Bondia and F. Ruiz Ruiz, Nucl. Phys. B 727, 513-536 (2005) doi:10.1016/j.nuclphysb.2005.08.016 [arXiv:hep-th/0504022 [hep-th]]. * [48] A. Fring, L. Gouba and F. G. Scholtz, J. Phys. A 43, 345401 (2010) doi:10.1088/1751-8113/43/34/345401 [arXiv:1003.3025 [hep-th]].
# GreaseLM: Graph REASoning Enhanced Language Models for Question Answering

Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec
Stanford University
<EMAIL_ADDRESS>

###### Abstract

Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it. However, pretrained language models (LM), the foundation of most modern QA systems, do not robustly represent latent relationships between concepts, which is necessary for reasoning. While knowledge graphs (KG) are often used to augment LMs with structured representations of world knowledge, it remains an open question how to effectively fuse and reason over the KG representations and the language context, which provides situational constraints and nuances. In this work, we propose GreaseLM, a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations. Information from both modalities propagates to the other, allowing language context representations to be grounded by structured world knowledge, and allowing linguistic nuances (e.g., negation, hedging) in the context to inform the graph representations of knowledge. Our results on three benchmarks in the commonsense reasoning (i.e., CommonsenseQA, OpenbookQA) and medical question answering (i.e., MedQA-USMLE) domains demonstrate that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8$\times$ larger. (All code, data and pretrained models are available at https://github.com/snap-stanford/GreaseLM.)

## 1 Introduction

Question answering is a challenging task that requires complex reasoning over both explicit constraints described in the textual context of the question, as well as unstated, relevant knowledge about the world (i.e., knowledge about the domain of interest). Recently, large pretrained language models fine-tuned on QA datasets have become the dominant paradigm in NLP for question answering tasks (Khashabi et al., 2020). After pretraining on an extreme-scale collection of general text corpora, these language models learn to implicitly encode broad knowledge about the world, which they are able to leverage when fine-tuned on a domain-specific downstream QA task. However, despite the strong performance of this two-stage learning procedure on common benchmarks, these models struggle when given examples that are distributionally different from examples seen during fine-tuning (McCoy et al., 2019). Their learned behavior often relies on simple (at times spurious) patterns to offer shortcuts to an answer, rather than robust, structured reasoning that effectively fuses the explicit information provided by the context and implicit external knowledge (Marcus, 2018). On the other hand, massive knowledge graphs (KG), such as Freebase (Bollacker et al., 2008), Wikidata (Vrandečić & Krötzsch, 2014), ConceptNet (Speer et al., 2017), and Yago (Suchanek et al., 2007) capture such external knowledge explicitly, using triplets that encode relationships between entities. Previous research has demonstrated the significant role KGs can play in structured reasoning and query answering (Ren et al., 2020; 2021; Ren & Leskovec, 2020).
However, extending these reasoning advantages to general QA (where questions and answers are expressed in natural language and not easily mapped to strict logical queries) requires finding the right integration of knowledge from the KG with the information and constraints provided by the QA example. Prior methods propose various ways to leverage both modalities (i.e., expressive large language models and structured KGs) for improved reasoning (Mihaylov & Frank, 2018; Lin et al., 2019; Feng et al., 2020). However, these methods typically fuse the two modalities in a shallow and non-interactive manner, encoding both separately and fusing them at the output for a prediction, or using one to augment the input of the other. Consequently, previous methods demonstrate restricted capacity to exchange useful information between the two modalities. It remains an open question how to effectively fuse the KG and LM representations in a truly unified manner, where the two representations can interact in a non-shallow way to simulate structured, situational reasoning. Figure 1: GreaseLM Architecture. The textual context is appended with a special interaction token and passed through $N$ LM-based unimodal encoding layers. Simultaneously, a local KG of relevant knowledge is extracted and connected to an interaction node. In the later GreaseLM layers, the language representation continues to be updated through LM layers and the KG is processed using a GNN, simulating reasoning over its knowledge. In each layer, after each modality’s representation is updated, the representations of the interaction token and node are pulled, concatenated, and passed through a modality interaction (MInt) unit to mix their representations. In subsequent layers, the mixed information from the interaction elements mixes with their respective modalities, allowing knowledge from the KG to affect the representations of individual tokens, and context from language to affect fine-grained entity knowledge representations in the GNN. In this work, we present GreaseLM, a new model that enables fusion and exchange of information from both the LM and KG in multiple layers of its architecture (see Figure 1). Our proposed GreaseLM consists of an LM that takes as input the natural language context, as well as a graph neural network (GNN) that reasons over the KG. After each layer of the LM and GNN, we design an interactive scheme to bidirectionally transfer the information from each modality to the other through specially initialized interaction representations (i.e., interaction token for the LM; interaction node for the GNN). In such a way, all the tokens in the language context receive information from the KG entities through the interaction token and the KG entities indirectly interact with the tokens through the interaction node. By such a deep integration across all layers, GreaseLM enables joint reasoning over both the language context and the KG entities under a unified framework agnostic to the specific language model or graph neural network, so that both modalities can be contextualized by the other. GreaseLM demonstrates significant performance gains across different LM architectures. We perform experiments on several standard QA benchmarks: CommonsenseQA, OpenbookQA and MedQA-USMLE, which require external knowledge across different domains (commonsense reasoning and medical reasoning) and use different KGs (ConceptNet and Disease Database). 
Across both domains, GreaseLM outperforms comparably-sized prior QA models, including strong fine-tuned LM baselines (by 5.5%, 6.6%, and 1.3%, respectively) and state-of-the-art KG+LM models (by 0.9%, 1.8%, and 0.5%, respectively) on the three competitive benchmarks. Furthermore, with the deep fusion of both modalities, GreaseLM exhibits strong performance over baselines on questions that exhibit textual nuance, such as resolving multiple constraints, negation, and hedges, and which require effective reasoning over both language context and KG.

## 2 Related Work

Integrating KG information has become a popular research area for improving neural QA systems. Some works explore using two-tower models to answer questions, where a graph representation of knowledge and a language representation are fused with no interaction between them (Wang et al., 2019). Other works seek to use one modality to ground the other, such as using an encoded representation of a linked KG to augment the textual representation of a QA example (e.g., Knowledgeable Reader, Mihaylov & Frank, 2018; KagNet, Lin et al., 2019; KT-NET, Yang et al., 2019). Others reverse the flow of information and use a representation of the text (e.g., the final layer of the LM) to provide an augmentation to a graph reasoning model over an extracted KG for the example (e.g., MHGRN, Feng et al., 2020; Lv et al., 2020). In all of these settings, however, the interaction between both modalities is limited, as information between them only flows one way. More recent approaches explore deeper integrations of both modalities. Certain approaches learn to access implicit knowledge encoded in LMs (Bosselut et al., 2019; Petroni et al., 2019; Hwang et al., 2021) by training on structured KG data, and then use the LM to generate local KGs that can be used for QA (Wang et al., 2020; Bosselut et al., 2021). However, these approaches discard the static KG once they train the LM on its facts, losing important structure that can guide reasoning. More recently, QA-GNN (Yasunaga et al., 2021) proposed to jointly update the LM and GNN representations via message passing. However, they use a single pooled representation of the LM to seed the textual component of this joint structure, limiting the updates that can be made to the textual representation. In contrast to prior works, we propose to let individual token representations in the LM and node representations in the GNN mix across multiple layers, enabling representations of both modalities to reflect particularities of the other (e.g., knowledge grounds language; language nuance specifies which knowledge is important). Simultaneously, we retain the individual structure of both modalities, which we demonstrate improves QA performance substantially (§5). Additionally, some works explore integrating knowledge graphs with language models in the pretraining stage. However, much like for QA, the modality interaction is typically limited to knowledge feeding language (Zhang et al., 2019; Shen et al., 2020; Yu et al., 2020), rather than designing interactions across multiple layers. The work of Sun et al. (2020) is perhaps most similar to ours, but their approach lacks a comparable interaction bottleneck, requires high-precision entity mention spans for linking, and limits expressivity by sharing modality parameters between the LM and KG.

## 3 Proposed Approach: GreaseLM

In this work, we augment large-scale language models (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020; Liu et al., 2021) with graph reasoning modules over KGs.
Our method, GreaseLM (depicted in Figure 1), consists of two stacked components: (1) a set of unimodal LM layers which learn an initial representation of the input tokens, and (2) a set of upper cross-modal GreaseLM layers which learn to jointly represent the language sequence and the linked knowledge graph, allowing textual representations formed by the underlying LM layers and a graph representation of the KG to mix with one another. We denote the number of LM layers as $N$, and the number of GreaseLM layers as $M$. The total number of layers in our model is $N+M$. Notation. In the task of multiple choice question answering (MCQA), a generic MCQA-type dataset consists of examples with a context paragraph $c$, a question $q$ and a candidate answer set $\mathcal{A}$, all expressed in text. In this work, we also assume access to an external knowledge graph (KG) ${\mathcal{G}}$ that provides background knowledge relevant to the content of the multiple choice questions. Given a QA example $(c,q,\mathcal{A})$ and the KG ${\mathcal{G}}$ as input, our goal is to identify which answer $a\in\mathcal{A}$ is correct. Without loss of generality, when an operation is applied to an arbitrary answer, we refer to that answer as $a$. We denote a sequence of tokens in natural language as $\\{w_{1},\dots,w_{T}\\}$, where $T$ is the total number of tokens, and the representation of a token $w_{t}$ from the $\ell$-th layer of the model as ${\bm{h}}_{t}^{(\ell)}$. We denote a set of nodes from the KG as $\\{e_{1},\dots,e_{J}\\}$, where $J$ is the total number of nodes, and the representation of a node $e_{j}$ in the $\ell$-th layer of the model as ${\bm{e}}_{j}^{(\ell)}$.

### 3.1 Input representation

First, we concatenate our context paragraph $c$, question $q$, and candidate answer $a$ with separator tokens to get our model input $[c;q;a]$ and tokenize the combined sequence into $\\{w_{1},\dots,w_{T}\\}$. Second, we use the input sequence to retrieve a subgraph of the KG ${\mathcal{G}}$ (denoted ${\mathcal{G}}_{\text{sub}}$), which provides knowledge from the KG that is relevant to this QA example. We denote the set of nodes in ${\mathcal{G}}_{\text{sub}}$ as $\\{e_{1},\dots,e_{J}\\}$. KG Retrieval. Given each QA context, we follow the procedure from Yasunaga et al. (2021) to retrieve the subgraph ${\mathcal{G}}_{\text{sub}}$ from ${\mathcal{G}}$. We describe this procedure in Appendix B.1. Each node in ${\mathcal{G}}_{\text{sub}}$ is assigned a type based on whether its corresponding entity was linked from the context $c$, the question $q$, the answer $a$, or as a neighbor of these nodes. In the rest of the paper, we use “KG” to refer to ${\mathcal{G}}_{\text{sub}}$. Interaction Bottlenecks. In the cross-modal GreaseLM layers, information is fused between both modalities, for which we define a special interaction token $w_{int}$ and a special interaction node $e_{int}$ whose representations serve as the bottlenecks through which the two modalities interact (§3.3). We prepend $w_{int}$ to the token sequence and connect $e_{int}$ to all the linked nodes ${\mathcal{V}}_{\text{linked}}$ in ${\mathcal{G}}_{\text{sub}}$.
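As a concrete illustration of §3.1, the following sketch (ours, not the released implementation linked in the abstract) assembles one training input; the whitespace tokenizer, the `retrieve_subgraph` helper, and all names are hypothetical stand-ins for the retrieval procedure of Yasunaga et al. (2021) described above.

```python
# Minimal sketch (ours) of GreaseLM input construction: prepend the
# interaction token w_int to the tokenized [c; q; a] sequence, and wire an
# interaction node e_int to every linked node of the retrieved subgraph.
from dataclasses import dataclass, field

INT_TOKEN = "[INT]"     # w_int (hypothetical special-token name)
SEP = "[SEP]"

@dataclass
class GreaseLMInput:
    tokens: list        # [w_int, w_1, ..., w_T]
    node_ids: list      # [e_int, e_1, ..., e_J]
    node_types: list    # interaction / context / question / answer / neighbor
    edges: list = field(default_factory=list)   # (head, relation, tail) triples

def build_input(context, question, answer, kg):
    text = f"{context} {SEP} {question} {SEP} {answer}"            # [c; q; a]
    tokens = [INT_TOKEN] + text.split()  # split() stands in for subword tokenization

    # `retrieve_subgraph` is a placeholder for the procedure of Appendix B.1;
    # it returns G_sub's nodes, their types, and its relational edges.
    nodes, types, edges = kg.retrieve_subgraph(context, question, answer)
    linked = [n for n, t in zip(nodes, types) if t != "neighbor"]  # V_linked
    edges = list(edges) + [("e_int", "interacts", n) for n in linked]

    return GreaseLMInput(tokens=tokens,
                         node_ids=["e_int"] + list(nodes),
                         node_types=["interaction"] + list(types),
                         edges=edges)
```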
### 3.2 Language Pre-encoding

In the unimodal encoding component, given the sequence of tokens $\\{w_{int},w_{1},\dots,w_{T}\\}$, we first sum the token, segment, and positional embeddings for each token to compute its $\ell=0$ input representation $\\{{\bm{h}}_{int}^{(0)},{\bm{h}}_{1}^{(0)},\dots,{\bm{h}}_{T}^{(0)}\\}$, and then compute an output representation for each layer $\ell$: $\displaystyle\\{{\bm{h}}_{int}^{(\ell)},{\bm{h}}_{1}^{(\ell)},\dots,{\bm{h}}_{T}^{(\ell)}\\}=$ $\displaystyle\text{ LM-Layer}(\\{{\bm{h}}_{int}^{(\ell-1)},{\bm{h}}_{1}^{(\ell-1)},\dots,{\bm{h}}_{T}^{(\ell-1)}\\})$ (1) $\displaystyle\text{ for }\ell=1,\dots,N$ where LM-Layer($\cdot$) is a single LM encoder layer, whose parameters are initialized using a pretrained model (§4.1). We refer readers to Vaswani et al. (2017) for technical details of these layers.

### 3.3 GreaseLM

GreaseLM uses a cross-modal fusion component to inject information from the KG into language representations and information from language into KG representations. The GreaseLM layer is designed to separately encode information from both modalities and fuse their representations through the bottleneck of the special interaction token and node. It comprises three components: (1) a transformer LM encoder block which continues to encode the language context, (2) a GNN layer that reasons over KG entities and relations, and (3) a modality interaction layer that takes the unimodal representations of the interaction token and interaction node and exchanges information through them. We discuss these three components below. Language Representation. In the $\ell$-th GreaseLM layer, the input token embeddings $\\{{\bm{h}}_{int}^{(N+\ell-1)},{\bm{h}}_{1}^{(N+\ell-1)},\dots,{\bm{h}}_{T}^{(N+\ell-1)}\\}$ are fed into additional transformer LM encoder blocks that continue to encode the textual context based on the LM's pretrained representations: $\displaystyle\\{\tilde{{\bm{h}}}_{int}^{(N+\ell)},\tilde{{\bm{h}}}_{1}^{(N+\ell)},\dots,\tilde{{\bm{h}}}_{T}^{(N+\ell)}\\}=$ $\displaystyle\text{ LM-Layer}(\\{{\bm{h}}_{int}^{(N+\ell-1)},{\bm{h}}_{1}^{(N+\ell-1)},\dots,{\bm{h}}_{T}^{(N+\ell-1)}\\})$ (2) $\displaystyle\text{ for }\ell=1,\dots,M$ where $\tilde{{\bm{h}}}$ corresponds to pre-fused embeddings of the language modality. As we will discuss below, because $\bm{h}_{int}^{(N+\ell-1)}$ encodes information received from the knowledge graph representation, these late language encoding layers also allow the token representations to mix with KG knowledge. Graph Representation. The GreaseLM layers also encode a representation of the local KG ${\mathcal{G}}_{\text{sub}}$ linked from the QA example. To represent the graph, we first compute initial node embeddings $\\{{\bm{e}}_{1}^{(0)},\dots,{\bm{e}}_{J}^{(0)}\\}$ for the retrieved entities using pretrained KG embeddings for these nodes (§4.1). The embedding of the interaction node, ${\bm{e}}_{int}^{(0)}$, is initialized randomly.
Then, in each layer of the GNN, the current node embeddings $\\{{\bm{e}}_{int}^{(\ell-1)},{\bm{e}}_{1}^{(\ell-1)},\dots,{\bm{e}}_{J}^{(\ell-1)}\\}$ are fed into the layer to perform a round of information propagation between nodes in the graph and yield pre-fused node embeddings for each entity: $\displaystyle\\{\tilde{{\bm{e}}}_{int}^{(\ell)},\tilde{{\bm{e}}}_{1}^{(\ell)},\dots,\tilde{{\bm{e}}}_{J}^{(\ell)}\\}=$ $\displaystyle\text{ GNN}(\\{{\bm{e}}_{int}^{(\ell-1)},{\bm{e}}_{1}^{(\ell-1)},\dots,{\bm{e}}_{J}^{(\ell-1)}\\})$ (3) $\displaystyle\text{ for }\ell=1,\dots,M$ where GNN corresponds to a variant of graph attention networks (Veličković et al., 2018) that is a simplification of the method of Yasunaga et al. (2021). The GNN computes node representations $\tilde{{\bm{e}}}_{j}^{(\ell)}$ for each node $e_{j}\in\\{e_{1},\dots,e_{J}\\}$ via message passing between neighbors on the graph: $\displaystyle\tilde{{\bm{e}}}_{j}^{(\ell)}=f_{n}\Biggl{(}\sum_{e_{s}\in\mathcal{N}_{e_{j}}\cup\\{e_{j}\\}}\alpha_{sj}{\bm{m}}_{sj}\Biggr{)}+{\bm{e}}_{j}^{(\ell-1)}$ (4) where ${\mathcal{N}}_{e_{j}}$ represents the neighborhood of an arbitrary node $e_{j}$, ${\bm{m}}_{sj}$ denotes the message one of its neighbors $e_{s}$ passes to $e_{j}$, $\alpha_{sj}$ is an attention weight that scales the message ${\bm{m}}_{sj}$, and $f_{n}$ is a 2-layer MLP. The messages ${\bm{m}}_{sj}$ between nodes allow entity information from a node to affect the model's representation of its neighbors, and are computed in the following manner: $\bm{r}_{sj}={f}_{r}(\tilde{\bm{r}}_{sj},~{}\bm{u}_{s},~{}\bm{u}_{j})$ (5) $\bm{m}_{sj}={f}_{m}(\bm{e}_{s}^{(\ell-1)},~{}\bm{u}_{s},~{}\bm{r}_{sj})$ (6) where $\bm{u}_{s},\bm{u}_{j}$ are node type embeddings, $\tilde{\bm{r}}_{sj}$ is a relation embedding for the relation connecting $e_{s}$ and $e_{j}$, ${f}_{r}$ is a 2-layer MLP, and ${f}_{m}$ is a linear transformation. The attention weights $\alpha_{sj}$ scale the contribution of each neighbor's message by its importance, and are computed as follows: $\bm{q}_{s}={f}_{q}(\bm{e}_{s}^{(\ell-1)},~{}\bm{u}_{s})$ (7) $\bm{k}_{j}={f}_{k}(\bm{e}_{j}^{(\ell-1)},~{}\bm{u}_{j},~{}\bm{r}_{sj})$ (8) $\gamma_{sj}=\frac{\bm{q}_{s}^{\top}\bm{k}_{j}}{\sqrt{D}}$ (9) $\alpha_{sj}=\frac{\exp(\gamma_{sj})}{\sum_{e_{s}\in\mathcal{N}_{e_{j}}\cup\\{e_{j}\\}}\exp(\gamma_{sj})}$ (10) where ${f}_{q}$ and ${f}_{k}$ are linear transformations, $D$ is the dimensionality of the node embeddings, and $\bm{u}_{s},\bm{u}_{j},\bm{r}_{sj}$ are defined the same as above. As discussed in the following paragraph, message passing between the interaction node $e_{int}$ and the nodes from the retrieved subgraph allows information from text that $e_{int}$ receives from $w_{int}$ to propagate to the other nodes in the graph. Modality Interaction. Finally, after using a transformer LM layer and a GNN layer to update the token embeddings and node embeddings respectively, we use a modality interaction layer (MInt) to let the two modalities fuse information through the bottleneck of the interaction token $w_{int}$ and the interaction node $e_{int}$.
We concatenate the pre-fused embeddings of the interaction token $\tilde{{\bm{h}}}_{int}^{(\ell)}$ and interaction node $\tilde{{\bm{e}}}_{int}^{(\ell)}$, pass the joint representation through a mixing operation (MInt), and then split the output post-fused embeddings into ${\bm{h}}_{int}^{(\ell)}$ and ${\bm{e}}_{int}^{(\ell)}$: $\displaystyle[{\bm{h}}_{int}^{(\ell)};{\bm{e}}_{int}^{(\ell)}]=\text{ MInt}([\tilde{{\bm{h}}}_{int}^{(\ell)};\tilde{{\bm{e}}}_{int}^{(\ell)}]).$ (11) We use a two-layer MLP as our MInt operation, though other fusion operators could be used to mix the representations. All the tokens other than the interaction token $w_{int}$ and all the nodes other than the interaction node $e_{int}$ are not involved in the modality interaction process: ${\bm{h}}_{t}^{(\ell)}=\tilde{{\bm{h}}}_{t}^{(\ell)}$ for $t\in\\{1,\dots,T\\}$ and ${\bm{e}}_{j}^{(\ell)}=\tilde{{\bm{e}}}_{j}^{(\ell)}$ for $j\in\\{1,\dots,J\\}$. However, they receive information from the interaction representations ${\bm{h}}_{int}^{(\ell)}$ and ${\bm{e}}_{int}^{(\ell)}$ in the next layers of their respective modal propagation (i.e., Eqs. 2, 3). Consequently, across multiple GreaseLM layers, information propagates between both modalities (see Fig. 1 for a visual depiction), grounding language representations in KG knowledge, and knowledge representations in contextual constraints.

#### Learning & Inference.

For the MCQA task, given a question $q$ and an answer $a$ from all the candidates $\mathcal{A}$, we compute the probability of $a$ being the correct answer as $p(a\mid q,c)\propto\exp(\text{MLP}({\bm{h}}_{int}^{(N+M)},~{}{\bm{e}}_{int}^{(M)},~{}{\bm{g}}))$, where ${\bm{g}}$ denotes attention-based pooling of $\\{{\bm{e}}_{j}^{(M)}\mid e_{j}\in\\{e_{1},\dots,e_{J}\\}\\}$ using ${\bm{h}}_{int}^{(N+M)}$ as a query. We optimize the whole model end-to-end using the cross entropy loss. At inference time, we predict the most plausible answer as $\operatorname*{arg\,max}_{a\in\mathcal{A}}~{}p(a\mid q,c)$.

Dataset | Example
---|---
CommonsenseQA | A weasel has a thin body and short legs to easier burrow after prey in a what? (A) tree (B) mulberry bush (C) chicken coop (D) viking ship (E) rabbit warren
OpenbookQA | Which of these would let the most heat travel through? (A) a new pair of jeans (B) a steel spoon in a cafeteria (C) a cotton candy at a store (D) a calvin klein cotton hat
MedQA-USMLE | A 57-year-old man presents to his primary care physician with a 2-month history of right upper and lower extremity weakness. He noticed the weakness when he started falling far more frequently while running errands. Since then, he has had increasing difficulty with walking and lifting objects. His past medical history is significant only for well-controlled hypertension, but he says that some members of his family have had musculoskeletal problems. His right upper extremity shows forearm atrophy and depressed reflexes while his right lower extremity is hypertonic with a positive Babinski sign. Which of the following is most likely associated with the cause of this patient's symptoms? (A) HLA-B8 haplotype (B) HLA-DR2 haplotype (C) Mutation in SOD1 (D) Mutation in SMN1

Table 1: Examples of the MCQA task for each of the datasets evaluated in this work.
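To make the per-layer computation of §3.3 concrete, here is a condensed PyTorch sketch (ours, not the reference implementation linked in the abstract) of a single GreaseLM layer. It simplifies Eqs.(2)-(11): attention is single-headed, the graph is encoded with a dense adjacency mask, and the relation and node-type embeddings of Eqs.(5)-(8) are folded into a precomputed edge-feature tensor `edge_feat`; all dimensions are illustrative.

```python
# Condensed sketch (ours) of one GreaseLM layer: LM block + GAT-style GNN
# + MInt mixing of the interaction token/node, cf. Eqs.(2)-(11).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GreaseLMLayer(nn.Module):
    def __init__(self, d_lm=1024, d_gnn=200):
        super().__init__()
        self.lm_block = nn.TransformerEncoderLayer(d_lm, nhead=8, batch_first=True)  # Eq.(2)
        self.f_m = nn.Linear(2 * d_gnn, d_gnn)                     # messages, Eq.(6)
        self.f_q = nn.Linear(d_gnn, d_gnn)                         # queries, Eq.(7)
        self.f_k = nn.Linear(2 * d_gnn, d_gnn)                     # keys, Eq.(8)
        self.f_n = nn.Sequential(nn.Linear(d_gnn, d_gnn), nn.GELU(),
                                 nn.Linear(d_gnn, d_gnn))          # node update, Eq.(4)
        d = d_lm + d_gnn
        self.mint = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))  # Eq.(11)

    def gnn(self, e, edge_feat, adj):
        # e: (J, d) node states; edge_feat: (J, J, d) stand-in for r_sj;
        # adj: (J, J) 0/1 mask with self-loops, adj[s, j] = 1 iff s may message j
        J, d = e.shape
        src = e.unsqueeze(1).expand(J, J, d)                        # e_s broadcast over j
        m = self.f_m(torch.cat([src, edge_feat], dim=-1))           # m_sj, Eq.(6)
        q = self.f_q(e)                                             # q_s, Eq.(7)
        k = self.f_k(torch.cat([e, edge_feat.mean(0)], dim=-1))     # k_j, Eq.(8), simplified
        gamma = (q.unsqueeze(1) * k.unsqueeze(0)).sum(-1) / d**0.5  # gamma_sj, Eq.(9)
        alpha = F.softmax(gamma.masked_fill(adj == 0, float("-inf")), dim=0)  # Eq.(10)
        agg = (alpha.unsqueeze(-1) * m).sum(0)                      # sum_s alpha_sj * m_sj
        return self.f_n(agg) + e                                    # residual update, Eq.(4)

    def forward(self, h, e, edge_feat, adj):
        # h: (T, d_lm) token states with h[0] = h_int; e: (J, d_gnn) with e[0] = e_int
        h = self.lm_block(h.unsqueeze(0)).squeeze(0)                # Eq.(2)
        e = self.gnn(e, edge_feat, adj)                             # Eqs.(3)-(10)
        mixed = self.mint(torch.cat([h[0], e[0]], dim=-1))          # MInt, Eq.(11)
        h = torch.cat([mixed[: h.shape[-1]].unsqueeze(0), h[1:]])   # write back h_int
        e = torch.cat([mixed[h.shape[-1]:].unsqueeze(0), e[1:]])    # write back e_int
        return h, e
```

A full model would stack $N$ ordinary LM layers followed by $M$ such layers, and score each candidate answer from $[{\bm{h}}_{int}^{(N+M)};{\bm{e}}_{int}^{(M)};{\bm{g}}]$ with an MLP as described above.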
## 4 Experimental Setup

We evaluate GreaseLM on three diverse multiple-choice question answering datasets across two domains: CommonsenseQA (Talmor et al., 2019) and OpenBookQA (Mihaylov et al., 2018) as commonsense reasoning benchmarks, and MedQA-USMLE (Jin et al., 2021) as a clinical QA task. CommonsenseQA is a 5-way multiple-choice question answering dataset of 12,102 questions that require background commonsense knowledge beyond surface language understanding. We perform our experiments using the in-house data split of Lin et al. (2019) to compare to baseline methods. OpenbookQA is a 4-way multiple-choice question answering dataset that tests elementary scientific knowledge. It contains 5,957 questions along with an open book of scientific facts. We use the official data splits from Mihaylov et al. (2018). MedQA-USMLE is a 4-way multiple-choice question answering dataset which requires biomedical and clinical knowledge. The questions are originally from practice tests for the United States Medical Licensing Examination (USMLE). The dataset contains 12,723 questions. We use the original data splits from Jin et al. (2021).

### 4.1 Implementation & training details

#### Language Models.

We seed GreaseLM with RoBERTa-Large (Liu et al., 2019) for our experiments on CommonsenseQA, AristoRoBERTa (Clark et al., 2019) for our experiments on OpenbookQA, and SapBERT (Liu et al., 2021) for our experiments on MedQA-USMLE, demonstrating GreaseLM's generality with respect to language model initializations. Hyperparameters for training these models can be found in Appendix Table 7. Knowledge Graphs. We use ConceptNet (Speer et al., 2017), a general-domain knowledge graph, as our external knowledge source $\mathcal{G}$ for both CommonsenseQA and OpenbookQA. It has 799,273 nodes and 2,487,810 edges in total. For MedQA-USMLE, we use a self-constructed knowledge graph that integrates the Disease Database portion of the Unified Medical Language System (UMLS; Bodenreider, 2004) and DrugBank (Wishart et al., 2018). The knowledge graph contains 9,958 nodes and 44,561 edges. Additional information about node initialization and hyperparameters for preprocessing these KGs can be found in Appendix B.2.

### 4.2 Baseline methods

#### Fine-tuned LMs.

To study the effect of using KGs as external knowledge sources, we compare our method with vanilla fine-tuned LMs, which are knowledge-agnostic. We fine-tune RoBERTa-Large (Liu et al., 2019) for CommonsenseQA, and AristoRoBERTa (Clark et al., 2019) for OpenbookQA. (OpenbookQA provides an extra corpus of scientific facts in textual form; AristoRoBERTa is based on RoBERTa-Large, but uses the facts corresponding to each question, prepared by Clark et al. (2019), as an additional input along with the QA context.) For MedQA-USMLE, we use a state-of-the-art biomedical language model, SapBERT (Liu et al., 2021), which is an augmentation of PubmedBERT (Gu et al., 2022) that is trained with entity disambiguation objectives to allow the model to better understand entity knowledge. LM+KG models. We also evaluate GreaseLM's ability to exploit its knowledge graph augmentation by comparing with existing LM+KG methods: (1) Relation Network (RN; Santoro et al., 2017), (2) RGCN (Schlichtkrull et al., 2018), (3) GconAttn (Wang et al., 2019), (4) KagNet (Lin et al., 2019), (5) MHGRN (Feng et al., 2020), and (6) QA-GNN (Yasunaga et al., 2021). QA-GNN is the existing top-performing model under this LM+KG paradigm.
The key difference between GreaseLM and these baseline methods is that they do not fuse the representations of both modalities across multiple interaction layers, which is what allows the representation of each modality to affect the other (§3.3). For fair comparison, we use the same LM to initialize these baselines as for our model.

## 5 Experimental Results

Table 2: Performance comparison on CommonsenseQA in-house split (controlled experiments). As the official test is hidden, here we report the in-house Dev (IHdev) and Test (IHtest) accuracy, following the data split of Lin et al. (2019). Experiments are controlled using the same seed LM.

Methods | IHdev-Acc. (%) | IHtest-Acc. (%)
---|---|---
RoBERTa-Large (w/o KG) | 73.1 ($\pm$0.5) | 68.7 ($\pm$0.6)
RGCN (Schlichtkrull et al., 2018) | 72.7 ($\pm$0.2) | 68.4 ($\pm$0.7)
GconAttn (Wang et al., 2019) | 72.6 ($\pm$0.4) | 68.6 ($\pm$1.0)
KagNet (Lin et al., 2019) | 73.5 ($\pm$0.2) | 69.0 ($\pm$0.8)
RN (Santoro et al., 2017) | 74.6 ($\pm$0.9) | 69.1 ($\pm$0.2)
MHGRN (Feng et al., 2020) | 74.5 ($\pm$0.1) | 71.1 ($\pm$0.8)
QA-GNN (Yasunaga et al., 2021) | 76.5 ($\pm$0.2) | 73.4 ($\pm$0.9)
GreaseLM (Ours) | 78.5 ($\pm$0.5) | 74.2 ($\pm$0.4)

Table 3: Test accuracy comparison on OpenBookQA. Experiments are controlled using the same seed LM for all LM+KG methods.

Model | Acc.
---|---
AristoRoBERTa (no KG) | 78.4
\+ RGCN | 74.6
\+ GconAttn | 71.8
\+ RN | 75.4
\+ MHGRN | 80.6
\+ QA-GNN | 82.8
GreaseLM (Ours) | 84.8

Table 4: Test accuracy comparison to public OpenBookQA model implementations. ∗UnifiedQA (11B params) and T5 (3B) are 30x and 8x larger than our model.

Model | Acc. | # Params
---|---|---
ALBERT (Lan et al., 2020) + KB | 81.0 | $\sim$235M
HGN (Yan et al., 2020) | 81.4 | $\geq$355M
AMR-SG (Xu et al., 2021) | 81.6 | $\sim$361M
ALBERT + KPG (Wang et al., 2020) | 81.8 | $\geq$235M
QA-GNN (Yasunaga et al., 2021) | 82.8 | $\sim$360M
T5* (Raffel et al., 2020) | 83.2 | $\sim$3B
T5 + KB (Pirtoaca) | 85.4 | $\geq$11B
UnifiedQA* (Khashabi et al., 2020) | 87.2 | $\sim$11B
GreaseLM (Ours) | 84.8 | $\sim$359M

Our results in Tables 2 and 3 demonstrate a consistent improvement on the CommonsenseQA and OpenbookQA datasets. On CommonsenseQA, our model's test performance improves by 5.5% over fine-tuned LMs and 0.9% over existing LM+KG models. On OpenbookQA, these improvements are magnified, with 6.4% over raw LMs and 2.0% over the prior best LM+KG system, QA-GNN. The boost over QA-GNN suggests that GreaseLM's multi-layer fusion component, which passes information between the text and KG representations, is more expressive than LM+KG methods which do not integrate such sustained interaction between both modalities. We also achieve competitive results to other systems on the leaderboard of OpenbookQA (Table 4), posting the third highest score. However, we note that the T5 (Raffel et al., 2020) and UnifiedQA (Khashabi et al., 2020) models are pretrained models with 8$\times$ and $30\times$ more parameters, respectively, than our model. Among models with comparable parameter counts, GreaseLM achieves the highest score. An ablation study on different model components and hyperparameters is reported in Appendix C.1. Quantitative Analysis. Given these overall performance improvements, we investigated whether GreaseLM's improvements were reflected in questions that required more complex reasoning.
Because we had no gold structures from these datasets to categorize the reasoning complexity of different questions, we defined three proxies: the number of prepositional phrases in the question, the presence of negation terms, and the presence of hedging terms. We use the number of prepositional phrases as a proxy for the number of explicit reasoning constraints set in the question. For example, the CommonsenseQA question in Table 1, “A weasel has a thin body and short legs to easier burrow after prey in a what?” has three prepositional phrases: to easier burrow, after prey, in a what, each of which provides an additional search constraint for the answer (n.b., in certain cases, the prepositional phrases do not provide constraints that are needed for selecting the correct answer). The presence of negation and hedging terms lets us stratify our evaluation over questions that have explicit negation mentions (e.g., no, never) and terms indicating uncertainty (e.g., sometimes; maybe). Our results in Table 5 demonstrate that GreaseLM generally outperforms RoBERTa-Large and QA-GNN on both questions with negation terms and questions with hedge terms, indicating that GreaseLM better handles contexts with nuanced constraints. Furthermore, we also note that GreaseLM performs better than the baselines across all questions with prepositional phrases, our measure of reasoning complexity. QA-GNN and GreaseLM perform comparably on questions with no prepositional phrases, but as the number of prepositional phrases grows, GreaseLM's deeper cross-modal fusion between language and knowledge representations pays off. While QA-GNN's end-fusion strategy of initializing a node in the GNN from the LM's final representation of the context is effective, it compresses the language context to a single vector before allowing interaction with the KG, potentially limiting the cross-relationships between language and knowledge that can be captured (see the example in Figure 2). Interestingly, we note that both GreaseLM and QA-GNN significantly outperform RoBERTa-Large even when no prepositional phrases are in the question. We hypothesize that some of these questions may require less reasoning, but require specific commonsense knowledge that RoBERTa may not have learned during pretraining (e.g., “What is a person considered a bully known for?”).

Table 5: Performance of GreaseLM on the CommonsenseQA IH-dev set on complex questions with semantic nuance such as prepositional phrases (PPs), negation terms, and hedge terms; the row n gives the number of questions in each category.

Model | 0 PPs | 1 PP | 2 PPs | 3 PPs | 4 PPs | Negation term | Hedge term
---|---|---|---|---|---|---|---
n | 210 | 429 | 316 | 171 | 59 | 83 | 167
RoBERTa-Large | 66.7 | 72.3 | 76.3 | 74.3 | 69.5 | 63.8 | 70.7
QA-GNN | 76.7 | 76.2 | 79.1 | 74.9 | 81.4 | 66.2 | 76.0
GreaseLM (Ours) | 75.7 | 79.3 | 80.4 | 77.2 | 84.7 | 69.9 | 78.4

Qualitative Analysis. In Figure 2, we examine GreaseLM's node-to-node attention weights induced by the GNN layers of the model, and analyze whether they reflect more expressive reasoning steps compared to QA-GNN. Figure 2 shows an example from the CommonsenseQA IH-dev set. In this example, GreaseLM correctly predicts that the answer is “airplane” while QA-GNN makes an incorrect prediction, “motor vehicle”. For both models, we perform best-first search (BFS) on the retrieved KG subgraph ${\mathcal{G}}_{\text{sub}}$ to trace high attention weights from the interaction node (purple).
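The tracing step can be written in a few lines; the sketch below (ours, not the authors' analysis code) runs a best-first expansion over one layer's attention weights starting from the interaction node. The `alpha` map of pairwise attention weights and the `neighbors` adjacency are hypothetical stand-ins for quantities extracted from a trained model.

```python
# Best-first search (ours) over GNN attention weights from the interaction node.
import heapq

def trace_high_attention(alpha, neighbors, start="e_int", max_nodes=10):
    """alpha: {(src, dst): attention weight}; neighbors: {node: [node, ...]}."""
    visited, frontier, order = set(), [(-1.0, start)], []
    while frontier and len(order) < max_nodes:
        neg_w, node = heapq.heappop(frontier)      # most-attended node first
        if node in visited:
            continue
        visited.add(node)
        order.append((node, -neg_w))
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (-alpha.get((node, nxt), 0.0), nxt))
    return order   # trace of nodes, each with the attention weight that reached it
```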
For GreaseLM, we observe that the attention by the interaction node increases on the “bug” entity in the intermediate GNN layers, but drops again by the final layer, matching the intuition suggested by the hedge term “unlikely”. Meanwhile, the attention on “windshield” consistently increases across all layers. For QA-GNN, the attention on “bug” increases over multiple layers. As “bug” is mentioned multiple times in the context, it may be well-represented in QA-GNN's context node initialization, which is never reformulated by language representations, unlike in GreaseLM.

Figure 2: Qualitative analysis of GreaseLM's graph attention weight changes across multiple layers of message passing compared with QA-GNN. GreaseLM demonstrates attention change patterns that more closely resemble the expected change in focus on the “bug” entity.

#### Domain generality

Table 6: Performance on MedQA-USMLE

Methods | Acc. (%)
---|---
*Baselines (Jin et al., 2021)* |
Chance | 25.0
PMI | 31.1
IR-ES | 35.5
IR-Custom | 36.1
clinicalBERT-Base | 32.4
BioRoBERTa-Base | 36.1
BioBERT-Base | 34.1
BioBERT-Large | 36.7
*Baselines (our implementation)* |
SapBERT-Base (w/o KG) | 37.2
QA-GNN | 38.0
GreaseLM (Ours) | 38.5

Our reported results thus far demonstrate the viability of our method in the general commonsense reasoning domain. In this section, we explore whether GreaseLM can be adapted to other domains by evaluating on the MedQA-USMLE dataset. Our results in Table 6 demonstrate that GreaseLM outperforms state-of-the-art fine-tuned LMs (e.g., SapBERT; Liu et al., 2021) and a QA-GNN augmentation of SapBERT. Additionally, we note the improved performance over all classical methods and LM methods first reported in Jin et al. (2021). Additional results in Appendix C show that our approach is also agnostic to the language model used, with improvements recorded by GreaseLM when it is seeded with other LMs, such as PubmedBERT (Gu et al., 2022) and BioBERT (Lee et al., 2020). While these results are promising, as they suggest that GreaseLM is an effective augmentation of pretrained LMs for different domains and KGs (i.e., the medical domain with the DDB + DrugBank KG), there is still ample room for improvement on this task.

## 6 Conclusion

In this paper, we introduce GreaseLM, a new model that enables interactive fusion through joint information exchange between knowledge from language models and knowledge graphs. Experimental results demonstrate superior performance compared to prior KG+LM and LM-only baselines across standard datasets from multiple domains (commonsense and medical). Our analysis shows an improved ability to model questions exhibiting textual nuances, such as negation and hedging.

## Acknowledgment

We thank Rok Sosic, Maria Brbic, Jordan Troutman, Rajas Bansal, and our anonymous reviewers for discussions and for providing feedback on our manuscript. We thank Xiaomeng Jin for help with data preprocessing. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID), NIH under No. R56LM013365; Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, Intel, JD.com, KDDI, Toshiba, NEC, and UnitedHealth Group. J. L. is a Chan Zuckerberg Biohub investigator.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities. ## References * Bodenreider (2004) Olivier Bodenreider. The unified medical language system (UMLS): Integrating biomedical terminology. _Nucleic acids research_ , 2004. * Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In _SIGMOD_ , 2008. * Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2013. * Bosselut et al. (2019) Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Çelikyilmaz, and Yejin Choi. Comet: Commonsense transformers for automatic knowledge graph construction. In _Association for Computational Linguistics (ACL)_ , 2019. * Bosselut et al. (2021) Antoine Bosselut, Ronan Le Bras, and Yejin Choi. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2021. * Clark et al. (2019) Peter Clark, Oren Etzioni, Daniel Khashabi, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, et al. From ‘f’ to ‘a’on the NY Regents science exams: An overview of the Aristo project. _arXiv preprint arXiv:1909.01958_ , 2019. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In _NAACL_ , 2019. * Feng et al. (2020) Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. Scalable multi-hop relational reasoning for knowledge-aware question answering. In _Empirical Methods in Natural Language Processing (EMNLP)_ , 2020\. * Gu et al. (2022) Yuxian Gu, Robert Tinn, Hao Cheng, Michael R. Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing. _ACM Transactions on Computing for Healthcare (HEALTH)_ , 3:1 – 23, 2022. * Hwang et al. (2021) Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs. In _AAAI_ , 2021. * Jin et al. (2021) Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. _Applied Sciences_ , 2021. * Khashabi et al. (2020) Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system. In _Findings of EMNLP_ , 2020. * Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. In _International Conference on Learning Representations (ICLR)_ , 2020. * Lee et al. (2020) Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert: a pre-trained biomedical language representation model for biomedical text mining. _Bioinformatics_ , 36:1234 – 1240, 2020. * Lin et al. 
(2019) Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In _Empirical Methods in Natural Language Processing (EMNLP)_ , 2019. * Liu et al. (2021) Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. Self-alignment pretraining for biomedical entity representations. In _NAACL_ , 2021. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ , 2019. * Lv et al. (2020) Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2020. * Marcus (2018) G. Marcus. Deep learning: A critical appraisal. _ArXiv_ , abs/1801.00631, 2018. * McCoy et al. (2019) R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In _ACL_ , 2019. * Mehrabi et al. (2021) Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, and A. G. Galstyan. Lawyers are dishonest? quantifying representational harms in commonsense knowledge resources. _ArXiv_ , abs/2103.11320, 2021. * Mihaylov & Frank (2018) Todor Mihaylov and Anette Frank. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In _Association for Computational Linguistics (ACL)_ , 2018. * Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In _Empirical Methods in Natural Language Processing (EMNLP)_ , 2018. * Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. Language models as knowledge bases? In _Empirical Methods in Natural Language Processing (EMNLP)_ , 2019. * (25) George Sebastian Pirtoaca. Ai2 leaderboard. URL https://leaderboard.allenai.org/open_book_qa/submission/brhieieqaupc4cnddfg0. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research (JMLR)_ , 2020. * Ren & Leskovec (2020) Hongyu Ren and Jure Leskovec. Beta embeddings for multi-hop logical reasoning in knowledge graphs. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2020. * Ren et al. (2020) Hongyu Ren, Weihua Hu, and Jure Leskovec. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. In _International Conference on Learning Representations (ICLR)_ , 2020. * Ren et al. (2021) Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Michihiro Yasunaga, Haitian Sun, Dale Schuurmans, Jure Leskovec, and Denny Zhou. Lego: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs. In _International Conference on Machine Learning (ICML)_ , 2021. * Santoro et al. (2017) Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning.
In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2017. * Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In _European Semantic Web Conference_ , 2018. * Shen et al. (2020) Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, and Weizhu Chen. Exploiting structured knowledge in text via graph-guided representation learning. In _EMNLP_ , 2020. * Sheng et al. (2020) Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. Towards controllable biases in language generation. In _Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , 2020. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2017. * Suchanek et al. (2007) Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: A Core of Semantic Knowledge. In _16th International Conference on the World Wide Web_ , pp. 697–706, 2007. * Sun et al. (2020) Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, and Zheng Zhang. Colake: Contextualized language and knowledge embedding. In _COLING_ , 2020. * Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In _North American Chapter of the Association for Computational Linguistics (NAACL)_ , 2019. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_ , pp. 5998–6008, 2017. * Veličković et al. (2018) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=rJXMpikCZ. * Vrandečić & Krötzsch (2014) Denny Vrandečić and Markus Krötzsch. Wikidata: A free collaborative knowledgebase. _Commun. ACM_ , 57(10):78–85, September 2014. ISSN 0001-0782. doi: 10.1145/2629489. URL https://doi.org/10.1145/2629489. * Wang et al. (2020) Peifeng Wang, Nanyun Peng, Pedro Szekely, and Xiang Ren. Connecting the dots: A knowledgeable path generator for commonsense question answering. _arXiv preprint arXiv:2005.00691_ , 2020. * Wang et al. (2019) Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. Improving natural language inference using external knowledge in the science questions domain. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2019. * Wishart et al. (2018) David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, et al. Drugbank 5.0: a major update to the drugbank database for 2018. _Nucleic acids research_ , 2018. * Xu et al. (2021) Weiwen Xu, Huihui Zhang, Deng Cai, and Wai Lam. Dynamic semantic graph construction and reasoning for explainable multi-hop science question answering. _arXiv preprint arXiv:2105.11776_ , 2021. * Yan et al. (2020) Jun Yan, Mrigank Raman, Aaron Chan, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, and Xiang Ren.
Learning contextualized knowledge structures for commonsense reasoning. _arXiv preprint arXiv:2010.12873_ , 2020. * Yang et al. (2019) An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In _Association for Computational Linguistics (ACL)_ , 2019. * Yasunaga et al. (2021) Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. QA-GNN: Reasoning with language models and knowledge graphs for question answering. _ArXiv_ , abs/2104.06378, 2021. * Yu et al. (2020) Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. Jaket: Joint pre-training of knowledge graph and language understanding. _ArXiv_ , abs/2010.00796, 2020. * Zhang et al. (2019) Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. Ernie: Enhanced language representation with informative entities. In _ACL_ , 2019. ## Appendix A Ethics Statement We outline potential ethical issues with our work below. First, GreaseLM is a method to fuse language representations and knowledge graph representations for effective reasoning about textual situations. Consequently, GreaseLM could reflect many of the same biases and toxic behaviors exhibited by language models and knowledge graphs that are used to initialize it. For example, prior large-scale language models have been shown to encode biases about race, gender, and other demographic attributes (Sheng et al., 2020). Because GreaseLM is seeded with pretrained language models that often learn these patterns, it is possible to reflect them in open-world settings. Second, the ConceptNet knowledge graph (Speer et al., 2017) used in this work has been shown to encode stereotypes (Mehrabi et al., 2021), rather than completely clean commonsense knowledge. If GreaseLM were used outside these standard benchmarks in conjunction with ConceptNet as a KG, it might rely on unethical relationships in its knowledge resource to arrive at conclusions. Consequently, while GreaseLM could be used for applications outside these standard benchmarks, we would encourage implementers to use the same precautions they would apply to other language models and methods that use noisy knowledge sources. Another source of ethical concern is the use of the MedQA-USMLE evaluation. While we find clinical reasoning using language models and knowledge graphs to be an interesting testbed for GreaseLM and for joint language and reasoning models in general, we do not encourage users to use these models for real world clinical prediction, particularly at these performance levels. ## Appendix B Experimental Setup Details ### B.1 Entity Linking Given each QA context, we follow the procedure from Yasunaga et al. (2021) to retrieve the subgraph ${\mathcal{G}}_{\text{sub}}$ from ${\mathcal{G}}$. First, we perform entity linking to ${\mathcal{G}}$ to retrieve an initial set of nodes ${\mathcal{V}}_{\text{linked}}$. Second, we add any bridge entities that are in a 2-hop path between any pair of linked entities in ${\mathcal{V}}_{\text{linked}}$ to get the set of retrieved entities ${\mathcal{V}}_{\text{retrieved}}$. Then we prune the set of nodes ${\mathcal{V}}_{\text{retrieved}}$ using a relevance score computed for each node. To compute the relevance score, we follow the procedure of Yasunaga et al. 
(2021): we concatenate the node name with the context of the QA example, pass it through a pre-trained LM, and use the output score of the node name as the relevance score. We only retain the top-200 scoring nodes and prune the remaining ones. Finally, we retrieve all the edges that connect any two nodes in ${\mathcal{V}}_{\text{sub}}$, forming the retrieved subgraph ${\mathcal{G}}_{\text{sub}}$. Each node in ${\mathcal{G}}_{\text{sub}}$ is assigned a type according to whether its corresponding entity was linked from the context $c$, question $q$, answer $a$, or from a bridge path.

### B.2 Graph Initialization

To compute initial node embeddings (§3.3) for entities retrieved in ${\mathcal{G}}_{\text{sub}}$ from ConceptNet, we follow the method of MHGRN (Feng et al., 2020). We convert knowledge triples in the KG into sentences using pre-defined templates for each relation. Then, these sentences are fed into a BERT-large LM to compute embeddings for each sentence. Finally, for all sentences containing an entity, we extract all token representations of the entity’s mention spans in these sentences, mean pool over these representations, and project the mean-pooled representation. For MedQA-USMLE, node embeddings are initialized similarly using the pooled token output embeddings of the entity name from the SapBERT model (described in §4.2; Liu et al., 2021). For MedQA, 5% of examples do not yield a retrieved entity. In these cases, we represent the graph using a dummy node initialized with 0. In essence, GreaseLM backs off to only using LM representations, as the graph propagates no information.

### B.3 Hyperparameters

Table 7: Hyperparameter settings for models and experiments

| Category | Hyperparameter | CommonsenseQA | OpenbookQA | MedQA-USMLE |
|---|---|---|---|---|
| Model architecture | Number of GreaseLM layers $M$ | 5 | 6 | 3 |
| | Number of unimodal LM layers $N$ | 19 | 18 | 9 |
| | Number of attention heads in GNN | 2 | 2 | 2 |
| | Dimension of node embeddings and the messages in GNN | 200 | 200 | 200 |
| | Dimension of MLP hidden layers (except MInt operator) | 200 | 200 | 200 |
| | Number of hidden layers of MLPs | 1 | 1 | 1 |
| | Dimension of MInt operator hidden layer | 400 | 200 | 400 |
| Regularization | Dropout rate of the embedding layer, GNN layers and fully-connected layers | 0.2 | 0.2 | 0.2 |
| Optimization | Learning rate of parameters in LM | 1.00E-05 | 1.00E-05 | 5.00E-05 |
| | Learning rate of parameters not in LM | 1.00E-03 | 1.00E-03 | 1.00E-03 |
| | Number of epochs in which LM’s parameters are kept frozen | 4 | 4 | 0 |
| | Optimizer | RAdam | RAdam | RAdam |
| | Learning rate schedule | constant | constant | constant |
| | Batch size | 128 | 128 | 128 |
| | Number of epochs | 30 | 70 | 20 |
| | Max gradient norm (gradient clipping) | 1.0 | 1.0 | 1.0 |
| Data | Max number of nodes | 200 | 200 | 200 |
| | Max number of tokens | 100 | 100 | 512 |

## Appendix C Additional Experimental Results

### C.1 Ablation studies

In Table 8, we summarize an ablation study conducted using the CommonsenseQA IHdev set.

Modality interaction. A key component of GreaseLM is the connection of the LM to the GNN via the modality interaction module (Eq. 11). If we remove modality interaction, the performance drops significantly, from 78.5% to 76.5% (approximately the performance of QA-GNN). Integrating the modality interaction in every other layer instead of consecutive layers also hurts performance.
A possible explanation is that skipping layers could impede learning consistent representations across layers for both the LM and the GNN, a property which may be desirable given that we initialize the model using a pretrained LM’s weights (e.g., RoBERTa). We also find that sharing parameters between modality interaction layers (Eq. 11) outperforms not sharing, possibly because our datasets are not very large (e.g., 10k for CommonsenseQA), and sharing parameters helps prevent overfitting.

Table 8: Ablation study of our model components, using the CommonsenseQA IH-dev set.

| Ablation Type | Ablation | Dev Acc. |
|---|---|---|
| GreaseLM | - | 78.5 |
| Modality Interaction | No interaction | 76.5 |
| | Interaction in every other layer | 76.3 |
| Interaction Layer Parameter Sharing | No parameter sharing | 77.1 |
| Number of GreaseLM layers ($M$) | $M=4$ | 77.7 |
| | $M=6$ | 78.0 |
| | $M=7$ | 76.2 |
| Graph Connectivity | Interaction node connected to all nodes in ${\mathcal{V}}_{\text{sub}}$, not only ${\mathcal{V}}_{\text{linked}}$ | 77.6 |
| Node Initialization | Random | 60.8 |
| | TransE (Bordes et al., 2013) | 77.7 |

Number of GreaseLM layers. We find that $M=5$ GreaseLM layers achieves the highest performance. However, the results for both $M=4$ and $M=6$ are relatively close to the top performance, indicating that our method is not overly sensitive to this hyperparameter.

Graph connectivity. The interaction node $e_{int}$ is a key component of GreaseLM that bridges the interaction between the KG and the text. Selecting which nodes in the KG are directly connected to $e_{int}$ affects the rate at which information from different portions of the KG can reach the text representations. We find that connecting $e_{int}$ to KG nodes explicitly linked to the input text performs best. Connecting $e_{int}$ to all nodes in the subgraph (e.g., bridge entities) hurts performance (-0.9%), possibly because the interaction node is overloaded by having to attend to all nodes in the graph (up to 200). By connecting the interaction node only to linked entities, each linked entity serves as a filter for relevant information that reaches the interaction node.

KG node embedding initialization. Effectively initializing KG node representations is critical. When we initialize nodes randomly instead of using the BERT-based initialization method from Feng et al. (2020), the performance drops significantly (78.5% $\rightarrow$ 60.8%). While using standard KG embeddings (e.g., TransE; Bordes et al., 2013) recovers much of the performance drop (77.7%), we still find that using BERT-based entity embeddings performs best.

### C.2 Effect of LM Initialization on GreaseLM

Table 9: Performance on the in-house splits of CommonsenseQA for different LM initializations of our method, GreaseLM.

| Methods | IHdev-Acc. | IHtest-Acc. |
|---|---|---|
| RoBERTa-Large | 73.1 | 68.7 |
| + GreaseLM (Ours) | 78.5 | 74.2 |
| RoBERTa-Base | 65.1 | 59.8 |
| + GreaseLM (Ours) | 69.3 | 65.0 |

Table 10: Initialization on MedQA-USMLE

| Methods | Acc. (%) |
|---|---|
| SapBERT-Base | 37.2 |
| + GreaseLM (Ours) | 38.5 |
| BioBERT-Base | 34.1 |
| + GreaseLM (Ours) | 34.6 |
| PubmedBERT-Base | 38.0 |
| + GreaseLM (Ours) | 38.7 |

To evaluate whether our method is agnostic to the LM used to seed the GreaseLM layers, we replace the LMs used in previous experiments (RoBERTa-large for CommonsenseQA and SapBERT for MedQA-USMLE) with RoBERTa-base for CommonsenseQA, and with BioBERT and PubmedBERT for MedQA-USMLE.
Across multiple LM initializations in two domains, our results demonstrate that GreaseLM can provide a consistent improvement for multiple LMs when used as a modality junction between KGs and language.
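To make the seeding concrete, the sketch below shows how the text encoder could be initialized from the different pretrained LMs compared above, assuming the HuggingFace `transformers` API. The checkpoint identifiers are the commonly used public releases and are assumptions for illustration, not GreaseLM's actual configuration.

```python
# Minimal sketch of seeding the unimodal encoder with different pretrained
# LMs (Appendix C.2). Checkpoint names below are assumed public HuggingFace
# releases, not necessarily the ones used in the paper's codebase.
from transformers import AutoModel, AutoTokenizer

SEED_LMS = {
    "csqa-large": "roberta-large",                 # general-domain seeds
    "csqa-base": "roberta-base",
    "medqa-biobert": "dmis-lab/biobert-v1.1",      # biomedical seeds
    "medqa-sapbert": "cambridgeltl/SapBERT-from-PubMedBERT-fulltext",
}

def load_seed_lm(config: str):
    """Load the pretrained LM whose weights initialize the LM layers."""
    name = SEED_LMS[config]
    tokenizer = AutoTokenizer.from_pretrained(name)
    lm = AutoModel.from_pretrained(name)
    return tokenizer, lm

tokenizer, lm = load_seed_lm("csqa-large")
```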
# CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation Lianyu Hu, Wei Feng†, Liqing Gao, Zekang Liu, Liang Wan† Lianyu Hu, Wei Feng, Liqing Gao, Zekang Liu, Liang Wan are with the College of Intelligence and Computing, Tianjin University, Tianjin 300350, China (e-mail: <EMAIL_ADDRESS><EMAIL_ADDRESS>[email protected]). $\dagger$ Wei Feng and Liang Wan are the corresponding authors. ###### Abstract In sign language, the conveyance of human body trajectories predominantly relies upon the coordinated movements of hands and facial expressions across successive frames. Despite the recent impressive advancements of sign language understanding methods, they often focus solely on individual frames, inevitably overlooking the inter-frame correlations that are essential for effectively modeling human body trajectories. To address this limitation, this paper introduces a spatial-temporal correlation network, denoted as CorrNet+, which explicitly identifies and captures body trajectories across multiple frames. Specifically, CorrNet+ employs two parallel modules to build human body trajectories: a correlation module and an identification module. The former captures the cross-spacetime correlations in local spatial-temporal neighborhoods, while the latter dynamically constructs human body trajectories by distinguishing informative spatial regions. A temporal attention module then adaptively evaluates the contributions of different frames in the whole video. The resultant features offer a holistic perspective on human body movements, facilitating a deeper understanding of sign language. As a unified model, CorrNet+ achieves new state-of-the-art performance on two extensive sign language understanding tasks, continuous sign language recognition (CSLR) and sign language translation (SLT). Notably, CorrNet+ surpasses previous methods equipped with resource-intensive pose-estimation networks or pre-extracted heatmaps for hand and facial feature extraction. Compared with CorrNet, CorrNet+ achieves a significant performance boost across all benchmarks while halving the computational overhead, achieving a better computation-accuracy trade-off. A comprehensive comparison with previous spatial-temporal reasoning methods verifies the superiority of CorrNet+. Code is available at https://github.com/hulianyuyy/CorrNet_Plus. ###### Index Terms: Continuous sign language recognition, Sign language translation, Spatial-temporal correlation, Model efficiency. ## I Introduction Sign language is one of the most widely used communication tools for the deaf community in their daily life, mainly conveying its meaning through facial expressions, head movements, hand gestures and body postures [1, 2]. However, mastering this language remains an overwhelming challenge for hearing people, thus hindering direct interactions between the two groups.
To alleviate this barrier, recent strides in automatic sign language understanding techniques [3, 4] have emerged, broadly categorized into three distinct domains: (1) isolated sign language recognition (ISLR), which aims to classify a video segment into an independent gloss (the gloss is the atomic lexical unit used to annotate sign languages); (2) continuous sign language recognition (CSLR), which classifies the input sign videos into a series of glosses to express sentences, instead of recognizing a single gloss only; (3) sign language translation (SLT), which directly translates the input sign videos into spoken texts that can be naturally understood by hearing people. The differences among these tasks are illustrated in Fig. 1(a). To help bridge the communication gap between the two groups, this paper focuses on CSLR and SLT, as they hold greater promise for real-life applications in sign language systems. Evidently, human body trajectories serve as prominent cues for understanding actions in human-centric video comprehension, and have gained substantial attention across various tasks [5, 6, 7, 8, 9, 10]. In sign language, these trajectories are mainly conveyed by both manual components (hand/arm gestures) and non-manual components (facial expressions, head movements, and body postures) [1, 2]. In particular, the coordinated horizontal and vertical movements of the face and both hands, coupled with adjoint actions like finger twisting and facial expressions, play a major role in expressing sign language. Tracking and leveraging the trajectories of these crucial body parts is of great benefit to understanding sign language. Figure 1: (a) Illustration of the differences among the isolated sign language recognition (ISLR), continuous sign language recognition (CSLR) and sign language translation (SLT) tasks. (b) Visualization of correlation maps with Grad-CAM [11] between the current frame and the two adjacent frames on its left/right side. Without extra supervision, our method attends well to informative regions in adjacent frames to identify human body trajectories. However, current sign language methods [12, 13, 14, 15, 16, 17, 18, 19, 20, 21] usually treat each frame equally, overlooking cross-frame interactions and thereby failing to leverage human body trajectories. In particular, they usually adopt a shared 2D CNN to independently extract spatial features for each frame [21, 12, 15, 20, 18]. Consequently, frames are processed individually without considering their interactions, leaving the potential of cross-frame trajectories for sign comprehension untapped. Some methods propose to use a 3D or (2+1)D CNN [13, 22] to capture local cross-spacetime features. However, their fixed design and limited spatial-temporal receptive fields hinder the establishment of spatial relationships across distant regions. Moreover, these methods incur substantial computational costs compared to their 2D counterparts. Alternative temporal techniques, such as temporal shift [23] or temporal convolutions [24], can address short-term temporal dynamics, but they struggle to aggregate information from distant spatial regions due to their limited spatial-temporal receptive fields. Besides, with a fixed structure during inference, they may fail to dynamically model human body movements across different samples.
With the above considerations, it’s necessary to develop an effective and efficient method for capturing human body trajectories to advance sign language comprehension. To address these challenges, we introduce CorrNet+, a novel framework explicitly designed to model human body trajectories across adjacent frames. As depicted in Fig. 1(b), our approach dynamically attends to the movements of informative regions across wide spatial distances. Unlike certain prior methods [16, 25, 22, 26] that rely on expensive supervision such as pose estimation techniques or body heatmaps, our method alleviates the need for such resource-intensive guidance and can be trained in a self-motivated manner. Notably, our approach achieves superior performance compared to previous methods while significantly reducing the required computational demands. CorrNet+ employs two parallel modules to build human body trajectories: a correlation module and an identification module. The former computes correlation maps within a local spatial-temporal region to identify human body trajectories. The latter dynamically emphasizes the informative regions that convey critical information. Besides these two components, considering that human body trajectories are unevenly distributed across the video, a temporal attention module is introduced to highlight the critical human body movements. The generated features provide a comprehensive perspective on human body movements, thereby enhancing the comprehension of sign language. Remarkably, CorrNet+ achieves new state-of-the-art performance on three large-scale CSLR benchmarks (PHOENIX2014 [27], PHOENIX2014-T [28] and CSL-Daily [29]) and two widely used SLT benchmarks (PHOENIX2014-T [28] and CSL-Daily [29]). In particular, CorrNet+ largely outperforms previous methods equipped with resource-intensive pose-estimation networks or pre-extracted heatmaps for hand and facial feature extraction [16, 25, 22, 26]. Compared with CorrNet [30], CorrNet+ brings a notable performance gain across all benchmarks and drastically reduces the consumed computations by half, achieving a better computation-accuracy trade-off. A comprehensive comparison with other spatial-temporal reasoning methods demonstrates the superiority of CorrNet+. Visualizations further verify the efficacy of CorrNet+ in emphasizing human body trajectories across adjacent frames. Extensive ablations demonstrate the effects of each component within CorrNet+. This paper is a substantial extension of a preliminary conference version [30] with a number of major changes. First, we reformulate the design of the correlation module in Section 3.2 to make it more lightweight and powerful; it is a key component for effectively modeling human body trajectories. Second, a new temporal attention module is introduced in Section 3.4 to dynamically emphasize the critical body trajectories. Finally, we incorporate new results on the SLT benchmarks and significantly extend the experimental results on the CSLR benchmarks in Section 4. We additionally append new visualizations to clearly show the effects of our proposed method. The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 elaborates the proposed method. Section 4 reports the experimental results, followed by a brief conclusion in Section 5.
## II Related Work ### II-A Continuous Sign Language Recognition Continuous sign language recognition aims to translate image frames into corresponding glosses in a weakly supervised way: only sentence-level labels are provided. Earlier methods [31, 32] in CSLR typically employ hand-crafted features or HMM-based systems [33, 34, 35, 27] to perform temporal modeling and translate sentences step by step. Hand-crafted features [31, 32] are carefully selected to provide better visual information. HMM-based systems [33, 34, 35, 27] first employ a feature extractor to capture visual features and then adopt an HMM to perform long-term temporal modeling. The recent success of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) has brought huge progress to CSLR. The widely used CTC loss [36] in recent CSLR methods [37, 38, 12, 13, 14, 15] enables training deep networks in an end-to-end manner by sequentially aligning target sentences with input frames. These CTC-based methods first rely on a feature extractor, i.e., 3D or 2D&1D CNN hybrids, to extract frame-wise features, and then adopt an LSTM to capture long-term temporal dependencies. However, several methods [37, 13] found that under such conditions the feature extractor is not well trained, and presented an iterative training strategy to relieve this problem, at the cost of much more computation. Some recent studies [15, 12, 17, 39] try to directly enhance the feature extractor by adding alignment losses [15, 17, 39] or adopting pseudo labels [12] in a lightweight way, alleviating the heavy computational burden. TLP [40] proposes to enhance the temporal information extraction process by designing advanced temporal pooling methods. SEN [18] tries to locate the informative spatial regions in sign videos in a self-supervised way. CVT-SLR [21] employs a contrastive visual-textual transformation to tackle the insufficient training problem in CSLR. CTCA [20] designs a cross-temporal context aggregation module to enhance both local and global temporal context. Our method is designed to explicitly incorporate body trajectories to identify a sign, especially those of the hands and face. Some previous methods have also explicitly leveraged hand and face features for better recognition. For example, CNN-LSTM-HMM [26] employs a multi-stream HMM (including hands and face) to integrate multiple visual inputs to improve recognition accuracy. STMC [25] first utilizes a pose-estimation network to estimate human body keypoints and then sends cropped appearance regions (including hands and face) for information integration. C2SLR [16] leverages pre-extracted pose keypoints as supervision to guide the model to explicitly focus on hand and face regions. TwoStream Network [22] builds two branches, a visual branch and a pose branch, to fuse beneficial information from complementary modalities. Our method doesn’t rely on additional cues like heavy pose estimation networks [16, 25, 22] or multiple streams [26], which consume much more computation to leverage hand and face information. Instead, our model can be trained end-to-end to dynamically attend to body trajectories in a self-motivated and lightweight way. ### II-B Sign Language Translation Camgoz et al. [28] pioneered the neural SLT task and published the PHOENIX2014-T dataset [28], which casts SLT as a sequence-to-sequence problem. They implement the neural SLT system using the encoder-decoder paradigm [41].
This paradigm is adopted by subsequent studies, which focus on addressing the challenges of data scarcity and domain gap. SLRT [42] first introduces a Transformer-based encoder-decoder framework to perform end-to-end SLT, with a Connectionist Temporal Classification (CTC) loss [36] to soft-match sign representations and gloss sequences. STMC-T [25] improves sign language translation by introducing multiple cues aided by a pose estimation network. SignBack [29] tries to handle the insufficient training data problem by introducing back-translation techniques to generate new pseudo samples. Motivated by the progress of neural machine translation (NMT), several methods attempt to introduce these advanced techniques into SLT. For example, Chen et al. [43, 22] made the first attempt to introduce large language models into SLT with carefully designed pretraining strategies. XmDA [44] presents two new data augmentation methods, namely cross-modality mix-up and cross-modality knowledge distillation, to expand the training samples. Zhu et al. [45] verify the effectiveness of several NMT techniques, including data augmentation, transfer learning and multilingual NMT, on SLT. Most existing methods adopt gloss representations as an intermediate state to promote translation accuracy. Some methods [46, 19] propose to eliminate the need for labor-intensive gloss annotations and design gloss-free SLT methods. GloFE [46] presents an end-to-end sign language translation framework that exploits the shared underlying semantics of signs and the corresponding spoken translation. GFSLT-VLP [19] improves SLT by inheriting language-oriented prior knowledge from pretrained models, without any gloss annotation assistance. ### II-C Applications of Correlation Operation The correlation operation has been widely used in various domains, especially optical flow estimation and video action recognition. Rocco et al. [47] used it to estimate the geometric transformation between two images, and Feichtenhofer et al. [48] applied it to capture object co-occurrences across time in tracking. For optical flow estimation, Deep matching [49] computes correlation maps between image patches to find their dense correspondences. CNN-based methods like FlowNet [50] and PWC-Net [51] design a correlation layer to help perform multiplicative patch comparisons between two feature maps. More recently, VideoFlow [52] proposes to propagate motion correlations between adjacent frames for multi-frame optical flow estimation. FlowFormer++ [53] introduces a masked autoencoding pretraining strategy and encodes the cross-frame correlations to help optical flow estimation. For video action recognition, Zhao et al. [54] first employ a correlation layer to compute a cost volume to estimate motion information. STCNet [55] models spatial correlations and temporal correlations separately, inspired by SENet [56]. MFNet [57] explicitly estimates an approximation of optical flow based on fixed motion filters. Wang et al. [58] design a learnable correlation filter and replace 3D convolutions with the proposed filter to capture spatial-temporal information. PCD [59] proposes to minimize the distribution gap of correlation information in videos for domain adaptation. Different from these methods that explicitly or implicitly estimate optical flow, the correlation operator in our method is used in combination with other operations to identify and track body trajectories across frames. ## III Method ### III-A Overview
As shown in Fig. 2, our model comprises a foundational base model, followed by different task-specific heads to support various sign language understanding tasks. Given a sign video with $T$ input frames $\bm{x}=\{\bm{x}^{0}_{t}\}_{t=1}^{T}\in\mathcal{R}^{T\times 3\times H_{0}\times W_{0}}$ with spatial size $H_{0}\times W_{0}$, the base model first uses a feature extractor instantiated as a 2D CNN (we only consider feature extractors based on 2D CNNs, because recent findings [3, 16] show that 3D CNNs cannot provide gloss boundaries as precise as those of 2D CNNs and lead to lower accuracy) to extract spatial-wise features $\bm{v}=\{\bm{v}_{t}\}_{t=1}^{T}\in\mathcal{R}^{T\times d}$, with $d$ representing the number of channels. It further incorporates a 1D CNN and a BiLSTM to perform short-term and long-term temporal modeling, respectively. Various task-specific heads are attached to support different sign language understanding tasks. For the CSLR task, we attach a classifier instantiated as a fully connected layer to recognize the input video as a series of glosses $\bm{g}=\{\bm{g}_{i}\}_{i=1}^{N}$, where $N$ denotes the length of the label sequence. This process is supervised by the widely used CTC loss [36] $\mathcal{L}_{\rm CTC}$ to align input video frames with target gloss sequences. For the SLT task, we attach a visual-language (VL) mapper instantiated as an MLP and a translation network to translate the gloss-wise features $\bm{v}$ into spoken texts $\bm{s}=\{\bm{s}_{i}\}_{i=1}^{H}$, where $H$ denotes the length of the output text sequence. This procedure is supervised by the standard sequence-to-sequence cross-entropy loss [60] $\mathcal{L}_{\rm CE}$. Figure 2: An overview of our CorrNet+, which supports both the CSLR task and the SLT task with a common base model. The base model first employs a feature extractor (2D CNN) to capture frame-wise features, and then adopts a 1D CNN and a BiLSTM to perform short-term and long-term temporal modeling, respectively. For the CSLR task, we attach a classifier instantiated as a fully connected layer to perform classification. For the SLT task, we attach a VL-mapper instantiated as an MLP and a translation network to predict sentences. The feature extractor consists of multiple stages that extract spatial-wise features for each frame independently. After each stage of the feature extractor, we insert a correlation stage to capture cross-frame interactions. An identification module and a correlation module are first placed concurrently to identify body trajectories across adjacent frames; their outputs are multiplied element-wise and fed into the temporal attention module to dynamically emphasize the key human body trajectories in the whole video. Despite the recent advancements in sign language understanding methods, they usually treat each frame equally by using a common 2D CNN to extract spatial-wise features, and thus fail to capture cross-frame interactions. While some methods propose to model local spatial-temporal information with spatial-temporal reasoning methods like 3D CNNs [13, 22] and temporal convolutions, they suffer from excessive computations and limited spatial-temporal receptive fields. Consequently, they struggle to effectively capture human body movements across a broader spatial-temporal region. To address these limitations, we design a spatial-temporal correlation network (CorrNet+), as shown in Fig. 2.
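To make the pipeline concrete, below is a minimal PyTorch sketch of the base model described above. The ResNet18 backbone with ImageNet weights, the {K5, P2, K5, P2} 1D CNN, the two-layer BiLSTM with hidden size 1024, and the fully connected classifier follow the training details given in Section 4; the class name, exact wiring, and the gloss vocabulary size (1295 signs plus a CTC blank, per the PHOENIX2014 statistics) are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torchvision

class BaseModel(nn.Module):
    """Sketch of the base pipeline: 2D CNN -> 1D CNN -> BiLSTM -> gloss head."""
    def __init__(self, d=512, num_glosses=1296):  # 1295 signs + CTC blank
        super().__init__()
        resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop fc
        # Short-term temporal modeling: {K5, P2, K5, P2}
        self.tconv = nn.Sequential(
            nn.Conv1d(d, d, kernel_size=5, padding=2), nn.MaxPool1d(2),
            nn.Conv1d(d, d, kernel_size=5, padding=2), nn.MaxPool1d(2))
        # Long-term temporal modeling: two-layer BiLSTM, hidden size 1024
        self.bilstm = nn.LSTM(d, 1024, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2048, num_glosses)  # CSLR head (CTC)

    def forward(self, x):                              # x: (B, T, 3, H, W)
        B, T = x.shape[:2]
        v = self.backbone(x.flatten(0, 1)).flatten(1)  # frame-wise: (B*T, d)
        v = v.view(B, T, -1).transpose(1, 2)           # (B, d, T)
        v = self.tconv(v).transpose(1, 2)              # (B, T/4, d)
        h, _ = self.bilstm(v)                          # (B, T/4, 2048)
        return self.classifier(h)                      # logits for the CTC loss
```

For SLT, the same gloss-wise features would instead be passed through the VL-mapper and a translation network (mBART in Section 4) rather than the classifier.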
We seamlessly insert a spatial-temporal correlation (ST-correlation) stage after each stage of the feature extractor to capture the local spatial-temporal correlations for each frame. Specifically, we simultaneously deploy two critical components, a correlation module and an identification module, to capture cross-frame interactions and identify informative spatial regions. The outputs $\bm{E}$ and $\bm{M}$ from the two modules are multiplied element-wise and then added to the input via a residual connection, yielding intermediate representations $\bm{y}$. We then feed $\bm{y}$ into a temporal attention module to dynamically evaluate the contributions of different frames in the whole video, emphasizing keyframes and suppressing meaningless ones. We next introduce each component in detail. ### III-B Correlation Module As a rich and expressive communication protocol, sign language is mainly conveyed by both manual components (hand/arm gestures) and non-manual components (facial expressions, head movements, and body postures) [1, 2]. However, these informative body parts, e.g., hands and face, often exhibit misalignment across adjacent frames. To address this spatial discrepancy and establish connections between distant spatial regions, we propose to compute correlation maps between neighboring frames to identify and track human body trajectories. We first briefly recap the solution of CorrNet [30] and then introduce our solution that overcomes its inherent limitations. Formally, each frame can be represented as a 3D tensor $\bm{x}_{t}\in\mathcal{R}^{C\times H\times W}$, where $C$ represents the number of channels and $H\times W$ denotes the spatial size. In CorrNet [30], we compute the affinities between all patches in the current frame $\bm{x}_{t}$ and patches in adjacent frames to model human body trajectories. Taking a feature patch $\bm{p}_{t}(i,j)$ at spatial location $(i,j)$ in the current frame $\bm{x}_{t}$ as an example, its affinity $\bm{A}(i,j,i^{\prime},j^{\prime})$ with another patch $\bm{p}_{t+1}(i^{\prime},j^{\prime})$ in $\bm{x}_{t+1}$ is computed in a dot-product way as: $\bm{A}(i,j,i^{\prime},j^{\prime})=\frac{1}{C}\sum_{c=1}^{C}{\bm{p}^{c}_{t}(i,j)\times\bm{p}^{c}_{t+1}(i^{\prime},j^{\prime}).}$ (1) Fig. 3(a) illustrates this process. However, the computed correlation maps yield a tensor of size $H\times W\times H\times W$, resulting in an overall computation complexity of $O(H^{2}W^{2})$, quadratic in the number of patches. Though this operation can effectively build cross-frame interactions to handle the spatial misalignment, it imposes a substantial computational burden. Moreover, the high computational costs restrict the spatial-temporal interactions to neighboring frames, hindering our ability to consecutively capture human body trajectories across a broader temporal context. Figure 3: Illustration of the difference between the correlation operator in CorrNet [30] and CorrNet+. (a) CorrNet [30]. It computes correlation maps between a spatial patch $p_{t}(i,j)$ in $x_{t}$ and all other patches in the adjacent frames $x_{t+1}$ and $x_{t-1}$. The overall computation complexity is $O(H^{2}W^{2})$, quadratic in the number of spatial patches in each frame, which incurs heavy extra computations. (b) To reduce computations, we condense the features of $x_{t}$ into several compact representations, which are then used to compute correlation maps with adjacent frames on behalf of $x_{t}$.
In this case, as the number of selected patches for $x_{t}$ is reduced from $H\times W$ to $O(1)$, the computation complexity is drastically decreased from $O(H^{2}W^{2})$ to $O(HW)$. It also enables us to compute correlation maps with neighbors over a larger temporal duration to more effectively capture the whole human body movements in expressing a sign. Figure 4: A framework overview of our proposed correlation module. It first condenses each frame into a compact representation, and then uses it to compute correlation maps with adjacent frames within a predefined range of $L$ to model human body trajectories. To handle these limitations, we reformulate the correlation module to make it more lightweight and powerful; its framework is shown in Fig. 4. Specifically, we compress all patches in $\bm{x}_{t}$ into a compact tensor to compute the correlation maps with significantly reduced computational overhead. We further extend the spatial-temporal neighborhood of the correlation operator to capture the trajectories of the signer over a large temporal duration. In particular, we use three different ways to compress the features of each frame from various views. For simplicity, we choose the average aggregation, maximum aggregation and attention aggregation functions as our protocols. For average aggregation, given the input feature $\bm{x}\in\mathcal{R}^{T\times C\times H\times W}$, we perform average pooling along the spatial dimension to transform it into a representation $\bm{x}^{\rm avg}\in\mathcal{R}^{T\times C\times 1\times 1}$ as: $\bm{x}^{\rm avg}={\rm AvgPool}(\bm{x}).$ (2) For maximum aggregation, we perform max pooling to compress $\bm{x}$ into a representation $\bm{x}^{\rm max}\in\mathcal{R}^{T\times C\times 1\times 1}$ as: $\bm{x}^{\rm max}={\rm MaxPool}(\bm{x}).$ (3) For attention aggregation, we randomly initialize a tensor $\bm{q}\in\mathcal{R}^{1\times C\times 1\times 1}$ acting as a query. It is then used to compute affinities $\bm{A}\in\mathcal{R}^{T\times 1\times H\times W}$ with patches in each frame following the multi-head attention (MHA) [60] process, whose features are fed into a Multi-Layer Perceptron (MLP) module [60] to obtain the output $\bm{x}^{\rm att}\in\mathcal{R}^{T\times C\times 1\times 1}$ as: $\bm{x}^{\rm att}={\rm MLP}({\rm MHA}({\rm query}=\bm{q},{\rm key}=\bm{x},{\rm value}=\bm{x})).$ (4) In this procedure, the number of heads is set to 1 for the MHA process, and the dimension expansion factor is set to 1 for the MLP module to minimize computations. After obtaining the condensed features $\bm{x}^{\rm avg}$, $\bm{x}^{\rm max}$ and $\bm{x}^{\rm att}$, we combine them into a compact representation. Practically, we multiply these features with a learnable coefficient $\bm{\gamma}\in\mathcal{R}^{3}$ to control their importance for fusion and obtain $\bm{x}^{p}\in\mathcal{R}^{T\times C\times 1\times 1}$ as: $\bm{x}^{p}=\bm{x}^{\rm avg}\times\bm{\gamma}_{1}+\bm{x}^{\rm max}\times\bm{\gamma}_{2}+\bm{x}^{\rm att}\times\bm{\gamma}_{3}.$ (5) Here, $\bm{\gamma}$ is initialized as a tensor filled with values of $\frac{1}{3}$, and then updated via gradient-based backward propagation during training. Notably, as only one compact representation is used on behalf of the current frame, the computation complexity of calculating correlation maps between adjacent frames is drastically reduced to only $O(HW)$, in contrast to $O(H^{2}W^{2})$ in CorrNet [30].
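Below is a minimal PyTorch sketch of the frame condensation defined in Eqs. (2)-(5). The single attention head and the MLP expansion factor of 1 follow the text above; the class name and exact layer choices are illustrative assumptions rather than the released code.

```python
import torch
import torch.nn as nn

class FrameCondenser(nn.Module):
    """Sketch of Eqs. (2)-(5): condense each frame's H x W patches into one
    C-dim vector via average, max, and attention aggregation, fused with a
    learnable coefficient gamma initialized to 1/3."""
    def __init__(self, C):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, C))         # q in Eq. (4)
        self.mha = nn.MultiheadAttention(C, num_heads=1, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(C, C), nn.GELU(), nn.Linear(C, C))
        self.gamma = nn.Parameter(torch.full((3,), 1.0 / 3))    # Eq. (5)

    def forward(self, x):                        # x: (T, C, H, W)
        T, C, H, W = x.shape
        x_avg = x.mean(dim=(2, 3))               # Eq. (2): (T, C)
        x_max = x.amax(dim=(2, 3))               # Eq. (3): (T, C)
        tokens = x.flatten(2).transpose(1, 2)    # (T, H*W, C) patch tokens
        q = self.query.expand(T, -1, -1)         # one shared query per frame
        att, _ = self.mha(q, tokens, tokens)     # Eq. (4): (T, 1, C)
        x_att = self.mlp(att).squeeze(1)         # (T, C)
        # Eq. (5): weighted fusion into one compact vector per frame
        return (self.gamma[0] * x_avg + self.gamma[1] * x_max
                + self.gamma[2] * x_att)         # x^p: (T, C)
```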
In practice, the computations are notably decreased from 3.64 GFLOPs (FLOPs denotes the number of multiply-add operations; GFLOPs measures FLOPs in units of giga) to 0.01 GFLOPs, bringing only a few extra computations. Considering that sign language is mainly conveyed by consecutive human body motion like hand and arm movements, it’s necessary to identify and track the body trajectories in a large temporal neighborhood to understand signs. We strategically enlarge the temporal receptive field of the correlation module to achieve this goal. Specifically, for an input video $\bm{x}\in\mathcal{R}^{T\times C\times H\times W}$, we sample $L$ neighboring frames for each frame to formulate a neighboring frame set $\bm{x}^{L}\in\mathcal{R}^{T\times C\times L\times H\times W}$. We then recover the last dimension of $\bm{x}^{p}$ as $\bm{x}^{p}\in\mathcal{R}^{T\times C\times 1\times 1\times 1}$ and use it to compute affinities with $\bm{x}^{L}$ to obtain the local spatial-temporal correlation maps $\bm{A}^{L}\in\mathcal{R}^{T\times L\times H\times W}$ as: $\displaystyle\bm{A}^{L}=\sum_{i=1}^{C}\bm{x}^{p}_{:i}\times\bm{x}^{L}_{:i}$ (6) where $:$ denotes taking all elements in the corresponding dimension. $L$ can be set to various values in different network stages to capture information at different temporal scales. Given the spatial-temporal correlation maps $\bm{A}^{L}$, we constrain the values in $\bm{A}^{L}$ to the range (0,1) by passing them through a sigmoid function. We further subtract 0.5 from the results to emphasize informative regions with positive values and suppress redundant areas with negative values as: $\hat{\bm{A}}^{L}={\rm sigmoid}(\bm{A}^{L})-0.5.$ (7) After identifying the correlations between adjacent frames, we incorporate them back into each frame to reason about the local human body movements. Specifically, we recover the second dimension of the cross-frame correlations $\hat{\bm{A}}^{L}$ and repeat it $C$ times to obtain $\hat{\bm{A}}^{L}\in\mathcal{R}^{T\times C\times L\times H\times W}$. We then multiply $\hat{\bm{A}}^{L}$ with the features of the neighboring frame set $\bm{x}^{L}$ to obtain the local human body trajectories $\bm{E}\in\mathcal{R}^{T\times C\times 1\times 1}$ as: $\bm{E}=\sum_{l=1}^{L}\sum_{i^{\prime},j^{\prime}}{\hat{\bm{A}}^{L}_{::l}(i^{\prime},j^{\prime})\times\bm{x}^{L}_{::l}(i^{\prime},j^{\prime})\times\bm{\beta}_{::l}}$ (8) where a learnable coefficient $\bm{\beta}\in\mathcal{R}^{1\times 1\times L\times 1\times 1}$ is attached to measure the importance of different neighboring frames. $\bm{\beta}$ is initialized as a tensor filled with values of $\frac{1}{L}$, and updated via gradient-based backward propagation during training. This correlation calculation is repeated for each frame to track body trajectories across the video. Figure 5: Illustration of our identification module. To avoid heavy computations in identifying informative spatial regions when modeling local spatial-temporal information, we decompose the spatial-temporal modeling structure along the spatial and temporal dimensions simultaneously to form a multiscale architecture, enlarging the model capacity. ### III-C Identification Module The correlation module computes correlation maps among spatial-temporal neighboring patches to model cross-frame interactions. However, not all regions play an equal role in sign expression.
Therefore, it’s critical to selectively emphasize informative regions that carry essential body trajectories within the current frame $x_{t}$ and suppress background noise and non-critical elements. To achieve this goal, we present an identification module to dynamically emphasize these informative spatial regions. Specifically, as informative regions like hands and face are misaligned in adjacent frames, the identification module leverages the closely correlated local spatial-temporal features to tackle the misalignment and locate informative spatial regions. As shown in Fig. 5, the identification module first projects the input features $\bm{x}\in\mathcal{R}^{T\times C\times H\times W}$ into $\bm{x}_{r}\in\mathcal{R}^{T\times C/r\times H\times W}$ with a $1\times 1\times 1$ convolution to decrease the computations, with the channel reduction factor $r$ set to 16 by default. As the informative regions, e.g., hands and face, are not exactly aligned in adjacent frames, it’s necessary to consider a large spatial-temporal neighborhood to identify these features. Instead of directly employing a large 3D spatial-temporal kernel, we present a multi-scale paradigm that decomposes it into parallel branches of progressive dilation rates to reduce the required computations and increase the model capacity. Specifically, as shown in Fig. 5, with the same small base convolution kernel of $K_{t}\times K_{s}\times K_{s}$, we employ multiple convolutions whose dilation rates increase along the spatial and temporal dimensions concurrently. The spatial and temporal dilation rates range within (1, $N_{s}$) and (1, $N_{t}$), respectively, resulting in $N_{s}\times N_{t}$ branches in total. Group-wise convolutions are employed for each branch to reduce parameters and computations. Features from different branches are multiplied with learnable coefficients {$\bm{\sigma}_{1},\dots,\bm{\sigma}_{N_{s}\times N_{t}}$} to control their importance, and then added to mix information from branches of various spatial-temporal receptive fields as: $\bm{x}_{m}=\sum_{i=1}^{N_{s}}\sum_{j=1}^{N_{t}}{\bm{\sigma}_{ij}\times{\rm conv}^{s}_{ij}(\bm{x}_{r})}$ (9) where the group-wise convolution ${\rm conv}^{s}_{ij}$ of each branch receives features from a different spatial-temporal neighborhood, with dilation rate $(j,i,i)$. After receiving features from a large spatial-temporal neighborhood, $x_{m}$ passes through a convolution with kernel size 1 to project the features into $\bm{x}_{b}\in\mathcal{R}^{T\times C\times H\times W}$, recovering the channels from $C/r$ to $C$. We then pass $\bm{x}_{b}$ through a sigmoid function to generate attention maps with values in the range (0,1), from which 0.5 is further subtracted to obtain $\bm{M}\in\mathcal{R}^{T\times C\times H\times W}$, emphasizing informative regions with positive values and suppressing redundant areas with negative values as: $\bm{M}={\rm sigmoid}({\rm conv}_{1\times 1\times 1}(\bm{x}_{m}))-0.5.$ (10) Given the attention maps $\bm{M}$ identifying informative regions, they are multiplied with the cross-frame interactions $\bm{E}$ computed by the correlation module to emphasize the critical spatial regions that convey body trajectories and suppress others like background or noise. This refined trajectory information is finally incorporated into the original spatial features $\bm{x}$ via a residual connection as: $\bm{y}=\bm{x}+\bm{\alpha}\bm{E}\times\bm{M}.$ (11) Here, $\bm{\alpha}$ is initialized as zero so that the model initially keeps the original spatial features and thus its original behavior.
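A minimal PyTorch sketch of the identification module of Eqs. (9)-(10) follows. The branch layout ($N_{t}\times N_{s}$ group-wise dilated convolutions over reduced channels) and the channel reduction factor $r$=16 follow the description above; the base kernel sizes and the initialization of the branch coefficients $\sigma$ are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IdentificationModule(nn.Module):
    """Sketch of Eqs. (9)-(10): multi-scale dilated 3D convolutions that
    produce spatial attention maps M in (-0.5, 0.5)."""
    def __init__(self, C, r=16, Kt=3, Ks=3, Nt=4, Ns=3):  # Kt, Ks assumed
        super().__init__()
        Cr = C // r
        self.reduce = nn.Conv3d(C, Cr, kernel_size=1)      # channel reduction
        self.branches = nn.ModuleList()
        for j in range(1, Nt + 1):           # temporal dilation rates
            for i in range(1, Ns + 1):       # spatial dilation rates
                pad = (j * (Kt - 1) // 2, i * (Ks - 1) // 2, i * (Ks - 1) // 2)
                self.branches.append(nn.Conv3d(
                    Cr, Cr, (Kt, Ks, Ks), padding=pad,
                    dilation=(j, i, i), groups=Cr))        # group-wise conv
        # Branch coefficients sigma; uniform init is an assumption
        self.sigma = nn.Parameter(torch.full((Nt * Ns,), 1.0 / (Nt * Ns)))
        self.expand = nn.Conv3d(Cr, C, kernel_size=1)      # recover channels

    def forward(self, x):                    # x: (B, C, T, H, W)
        xr = self.reduce(x)
        xm = sum(s * b(xr) for s, b in zip(self.sigma, self.branches))  # Eq. (9)
        return torch.sigmoid(self.expand(xm)) - 0.5        # Eq. (10): M
```

In the full ST-correlation stage, the returned M would be multiplied element-wise with the trajectory features E from the correlation module and added back to x with the zero-initialized coefficient alpha, as in Eq. (11).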
Figure 6: Illustration of our temporal attention module. We employ a temporal multiscale architecture to aggregate local temporal information and dynamically evaluate the contribution of each frame in a lightweight manner. ### III-D Temporal Attention Module The above modules effectively identify the critical cross-frame interactions within informative spatial regions. However, across the entire video, not all frames are equally important in expressing sign language: some frames carry crucial information while others merely convey idle meanings. To address this, we introduce a temporal attention module. Drawing inspiration from the design principles of the identification module, we dynamically consider the importance of different frames to adaptively emphasize keyframes and suppress others. Fig. 6 gives an overview of the temporal attention module. Given the input features $\bm{y}\in\mathcal{R}^{T\times C\times H\times W}$ generated by the correlation module and the identification module, we first perform spatial pooling to eliminate the spatial dimensions, and then project the features into $\bm{y}_{r}\in\mathcal{R}^{T\times C/r}$ with a convolution of kernel size 1 to decrease the computations. To sufficiently evaluate the contributions of different frames, we propose a multiscale architecture to leverage the local information in a large temporal neighborhood. Specifically, as shown in Fig. 6, with the same small temporal kernel of size $P_{t}$, multiple parallel depth-wise convolutions are concurrently employed with different dilation rates ranging from 1 to $M_{t}$ to model information from various temporal receptive fields. Features from different branches are multiplied with learnable coefficients {$\bm{\delta}_{1}$, $\dots$, $\bm{\delta}_{M_{t}}$} to adjust their importance and added to fuse complementary information from different temporal ranges as: $\bm{y}_{m}=\sum_{i=1}^{M_{t}}{\bm{\delta}_{i}\times{\rm conv}^{t}_{i}(\bm{y}_{r})}$ (12) where ${\rm conv}^{t}_{i}$ denotes the group-wise convolution of the $i$-th branch with dilation rate $i$. After receiving the closely correlated spatial-temporal information, $y_{m}$ passes through a convolution with kernel size 1 to project the features into $\bm{y}_{b}\in\mathcal{R}^{T\times C}$, recovering the channels from $C/r$ to $C$. We then pass $\bm{y}_{b}$ through a sigmoid function to generate temporal attention maps with values in the range (0,1), from which 0.5 is further subtracted to obtain $\bm{U}\in\mathcal{R}^{T\times C}$, emphasizing keyframes with positive values and suppressing others with negative values as: $\bm{U}={\rm sigmoid}({\rm conv}_{1}(\bm{y}_{m}))-0.5.$ (13) We then recover the spatial dimensions of $\bm{U}$ by broadcasting to obtain $\bm{U}\in\mathcal{R}^{T\times C\times H\times W}$, and multiply it with the input features $\bm{y}$ to dynamically adjust the weights of the input frames according to their contributions. These augmented representations are further incorporated into the input features $\bm{y}$ via a residual connection as: $\bm{z}=\bm{y}+\bm{\lambda}\bm{y}\times\bm{U}.$ (14) Here, $\bm{\lambda}$ is initialized as zero during training to avoid hurting the original temporal features. ## IV Experiments ### IV-A Experimental Setup #### IV-A1 Datasets. PHOENIX2014 [27] is recorded from a German weather forecast broadcast with nine actors in front of a clean background, at a resolution of 210 $\times$ 260.
It contains 6841 sentences with a vocabulary of 1295 signs, divided into 5672 training samples, 540 development (Dev) samples and 629 testing (Test) samples. PHOENIX2014-T [28] contains 8247 sentences with a vocabulary of 1085 signs, split into 7096 training instances, 519 development (Dev) instances and 642 testing (Test) instances. It can be used for both the CSLR and SLT tasks. CSL-Daily [29] revolves around daily life and is recorded indoors at 30 fps by 10 signers. It contains 20654 sentences, divided into 18401 training samples, 1077 development (Dev) samples and 1176 testing (Test) samples. It can also be used for both the CSLR and SLT tasks. CSL [61] is collected in the laboratory by fifty signers, with a vocabulary of 178 signs over 100 sentences. It contains 25000 videos, divided into training and testing sets by a ratio of 8:2. #### IV-A2 Training details. For fair comparisons, we follow the same setting as state-of-the-art methods [15, 16] to prepare our model. We adopt ResNet18 [62] as the 2D CNN backbone with ImageNet [63] pretrained weights. Following state-of-the-art methods, the 1D CNN is set as a sequence of {K5, P2, K5, P2} layers, where K$\theta$ and P$\theta$ denote a 1D convolutional layer and a pooling layer with kernel size $\theta$, respectively. A two-layer BiLSTM with hidden size 1024 is attached for long-term temporal modeling, followed by a fully connected layer for sentence prediction. We train our models for 80 epochs with an initial learning rate of 0.001, which is divided by 5 at epochs 40 and 60. The Adam [64] optimizer is adopted by default with weight decay 0.0001 and batch size 2. All input frames are first resized to 256$\times$256, and then randomly cropped to 224$\times$224 with 50% horizontal flipping and 20% temporal rescaling during training. During inference, a 224$\times$224 center crop is simply adopted. Following VAC [15], we employ the VE loss and VA loss for visual supervision, with weights 1.0 and 25.0, respectively. We adopt the TLP loss [40] to extract more powerful representations. Our model is trained and evaluated on a 3090 graphics card. For the SLT task, the translation network is instantiated as an mBART model [65]. In practice, we found that gloss labels are beneficial for SLT. Thus we additionally supervise the translation process with the recognition loss $\mathcal{L}_{\rm CTC}$; the final loss can be expressed as $\mathcal{L}_{\rm T}=\mathcal{L}_{\rm CTC}+\mathcal{L}_{\rm CE}$. We set the learning rates of the visual mapper and the translation network to 0.0002 and 1e-6, respectively. We train our models for 40 epochs with learning rates divided by 5 at epochs 20 and 30.

TABLE I: Ablations for the effectiveness of the proposed correlation module, identification module and temporal attention module on the PHOENIX2014 dataset.

| Correlation | Identification | Temporal Weighting | Dev(%) | Test(%) |
|---|---|---|---|---|
| ✘ | ✘ | ✘ | 20.2 | 21.0 |
| ✓ | ✘ | ✘ | 19.2 | 19.7 |
| ✘ | ✓ | ✘ | 19.5 | 20.1 |
| ✘ | ✘ | ✓ | 19.6 | 20.2 |
| ✓ | ✓ | ✘ | 18.4 | 18.7 |
| ✓ | ✘ | ✓ | 18.8 | 19.2 |
| ✘ | ✓ | ✓ | 19.0 | 19.3 |
| ✓ | ✓ | ✓ | 18.0 | 18.2 |

#### IV-A3 Evaluation Metric. For the CSLR task, we use Word Error Rate (WER) as the evaluation metric, defined as the minimal summation of the substitution, insertion, and deletion operations needed to convert the predicted sentence into the reference sentence: $\rm WER=\frac{\#sub+\#ins+\#del}{\#reference}.$ (15) Note that a lower WER indicates better accuracy.
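Since WER in Eq. (15) is simply a length-normalized edit distance between predicted and reference gloss sequences, it can be computed with standard dynamic programming; a minimal sketch follows (the example glosses are invented for illustration).

```python
def wer(reference: list[str], hypothesis: list[str]) -> float:
    """Word Error Rate (Eq. 15): minimal substitutions, insertions, and
    deletions to turn the hypothesis into the reference, normalized by
    reference length."""
    R, H = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        dp[i][0] = i                       # i deletions
    for j in range(H + 1):
        dp[0][j] = j                       # j insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = dp[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[R][H] / max(R, 1)

# One substitution over a four-gloss reference -> WER = 0.25
print(wer("REGEN WIND STARK MORGEN".split(), "REGEN WIND STARK HEUTE".split()))
```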
For the SLT task, following previous studies [22, 29], we use metrics common in machine translation, including tokenized BLEU [66] with n-grams from 1 to 4 (BLEU@1–BLEU@4) and Rouge-L F1 (Rouge) [67] to evaluate the performance of SLT. The higher the value, the better the performance.

### IV-B Ablation Study

We report ablative results on both the development (Dev) and testing (Test) sets of the PHOENIX2014 dataset to test the effectiveness of each component in our CorrNet+.

TABLE II: Ablations for the locations of CorrNet+ on the PHOENIX2014 dataset.

Stage 2 | Stage 3 | Stage 4 | Dev(%) | Test(%)
---|---|---|---|---
✘ | ✘ | ✘ | 20.2 | 21.0
✓ | ✘ | ✘ | 19.3 | 19.9
✘ | ✓ | ✘ | 19.2 | 19.7
✘ | ✘ | ✓ | 19.0 | 19.5
✓ | ✓ | ✘ | 18.5 | 18.8
✓ | ✓ | ✓ | 18.0 | 18.2

TABLE III: Ablations for the effectiveness of the correlation module on the PHOENIX2014 dataset.

Configurations | Dev(%) | Test(%) | Extra GFLOPs / Original GFLOPs
---|---|---|---
CorrNet [30] | 18.8 | 19.4 | 3.600 / 3.640
CorrNet+ | 18.0 | 18.2 | 0.010 / 3.640
- | 20.2 | 21.0 | -
$L$=[2,2,2] | 19.0 | 19.0 | 0.007 / 3.640
$L$=[6,6,6] | 18.5 | 18.8 | 0.010 / 3.640
$L$=[10,10,10] | 18.4 | 18.7 | 0.012 / 3.640
$L$=[2,6,10] | 18.0 | 18.2 | 0.010 / 3.640
$L$=[10,6,2] | 18.6 | 18.8 | 0.012 / 3.640
$L$=[6,10,14] | 18.3 | 18.4 | 0.012 / 3.640

Effectiveness of the proposed modules. Tab. I provides a comprehensive analysis of the effectiveness of the proposed modules. Using any one of the three proposed modules yields a notable accuracy boost, with 19.2% & 19.7%, 19.5% & 20.1%, and 19.6% & 20.2% WER on the Dev and Test sets, respectively. Notably, the correlation module offers the most substantial accuracy improvement. Combining any two modules brings further gains, with 18.4% & 18.7%, 18.8% & 19.2% and 19.0% & 19.3% WER on the Dev and Test sets, respectively. We notice that combining the correlation module and the identification module gives the largest performance promotion. When all proposed modules are employed, the accuracy peaks at 18.0% & 18.2% WER, an absolute boost of +2.2% & +2.8%.

Effects of locations for CorrNet+. Tab. II ablates the locations of our proposed modules after Stage 2, 3 or 4. Choosing any one of these locations brings a notable accuracy boost, with 19.3% & 19.9%, 19.2% & 19.7% and 19.0% & 19.5% WER. When combining two or more locations, a larger accuracy gain is observed. The accuracy peaks when the proposed modules are placed after Stages 2, 3 and 4, with 18.0% & 18.2% WER, which is adopted by default.

TABLE IV: Ablations for the effectiveness of the aggregation functions in the correlation module on the PHOENIX2014 dataset.

Aggregation function | Dev(%) | Test(%)
---|---|---
AvgPool | 18.5 | 18.8
AvgPool & MaxPool | 18.3 | 18.5
AvgPool & MaxPool & AttPool | 18.0 | 18.2

TABLE V: Ablations for the multi-scale architecture of the identification module on the PHOENIX2014 dataset.

Configuration | Dev(%) | Test(%)
---|---|---
- | 20.2 | 21.0
$N_{t}$=4, $N_{s}$=1 | 18.8 | 18.9
$N_{t}$=4, $N_{s}$=2 | 18.4 | 18.6
$N_{t}$=4, $N_{s}$=3 | 18.0 | 18.2
$N_{t}$=4, $N_{s}$=4 | 18.3 | 18.5
$N_{t}$=2, $N_{s}$=3 | 18.6 | 18.7
$N_{t}$=3, $N_{s}$=3 | 18.3 | 18.5
$N_{t}$=4, $N_{s}$=3 | 18.0 | 18.2
$N_{t}$=5, $N_{s}$=3 | 18.5 | 18.6
$K_{t}$=9, $K_{s}$=7 | 19.1 | 19.2

Study on the effectiveness of the correlation module. In the upper part of Tab. III, we first verify the effectiveness of CorrNet+ by comparing it to CorrNet [30].
By computing correlation maps between all spatial patches of consecutive frames, CorrNet lowers the WER to 18.8% & 19.4% on the Dev and Test sets, respectively. However, it incurs substantial computational overhead (3.60 GFLOPs), nearly equivalent to the entire model’s computation (3.64 GFLOPs). Instead, by compressing the features of each frame, CorrNet+ notably decreases the incurred computations from 3.60 GFLOPs to 0.01 GFLOPs and brings a further +0.8% & +1.2% accuracy boost, achieving a better accuracy-computation trade-off.

In the lower part of Tab. III, we investigate the effects of the temporal receptive field $L=\{L_{1},L_{2},L_{3}\}$ across the three network stages for the correlation module. When disabling $L$, the model degenerates into our baseline. We observe that when setting $L=[2,2,2]$ (focusing solely on adjacent frames), CorrNet+ outperforms the baseline by 1.2% & 2.0% on the Dev and Test sets, respectively. Gradually increasing $L$ from [2,2,2] to [10,10,10] consistently improves accuracy at similar computational cost. We then investigate different configurations of the temporal receptive fields as the network stages progress. We notice that $L=[2,6,10]$ yields the peak accuracy, and either reversing the order of $L$ or further increasing $L$ degrades the performance.

Study on the effectiveness of aggregation functions in the correlation module. We verify the effectiveness of the aggregation functions of the correlation module in Tab. IV. Using the average aggregation function alone, CorrNet+ already achieves better results (18.5% & 18.8%) than CorrNet (18.8% & 19.4%). When incorporating both the maximum and attention aggregation functions, the performance is further promoted to 18.3% & 18.5% and 18.0% & 18.2%, underscoring the complementarity of the proposed aggregation functions.

TABLE VI: Ablations for the configurations of the temporal attention module on the PHOENIX2014 dataset.

Configuration | Dev(%) | Test(%)
---|---|---
- | 20.2 | 21.0
$M_{t}$=1 | 18.6 | 18.7
$M_{t}$=2 | 18.3 | 18.5
$M_{t}$=3 | 18.0 | 18.2
$M_{t}$=4 | 18.2 | 18.4
$M_{t}$=5 | 18.3 | 18.5
$P_{t}$=5 | 19.1 | 19.2
$\bm{U}\odot\bm{y}$ | 21.2 | 22.1
$\bm{U}\odot\bm{y}+\bm{y}$ | 19.6 | 20.3
$(\bm{U}-0.5)\odot\bm{y}$ | 18.5 | 18.8
$(\bm{U}-0.5)\odot\bm{y}+\bm{y}$ | 18.0 | 18.2

TABLE VII: Ablations for the generalizability of CorrNet+ over multiple backbones on the PHOENIX2014 dataset.

Configuration | Dev(%) | Test(%)
---|---|---
SqueezeNet [56] | 22.2 | 22.6
w/ CorrNet+ | 19.4 | 19.6
ShuffleNet V2 [68] | 21.7 | 22.2
w/ CorrNet+ | 19.1 | 19.5
GoogLeNet [69] | 21.4 | 21.5
w/ CorrNet+ | 18.9 | 19.0
RegNetX-800mf [70] | 20.4 | 21.2
w/ CorrNet+ | 18.3 | 18.4
RegNetY-800mf [70] | 20.1 | 20.8
w/ CorrNet+ | 17.8 | 18.0

TABLE VIII: Comparison with other methods of spatial-temporal attention or temporal reasoning on the PHOENIX2014 dataset.

Method | Dev(%) | Test(%)
---|---|---
- | 20.2 | 21.0
w/ SENet [56] | 19.8 | 20.4
w/ CBAM [71] | 19.7 | 20.2
w/ NLNet [72] | - | -
I3D [5] | 22.6 | 22.9
R(2+1)D [73] | 22.4 | 22.3
TSM [23] | 19.9 | 20.5
CorrNet+ | 18.0 | 18.2

TABLE IX: Comparison with other methods that explicitly exploit hand and face features on the PHOENIX2014 dataset.

Method | Dev(%) | Test(%)
---|---|---
CNN+HMM+LSTM [26] | 26.0 | 26.0
DNF [13] | 23.1 | 22.9
STMC [25] | 21.1 | 20.7
C2SLR [16] | 20.5 | 20.4
CorrNet+ | 18.0 | 18.2

Study on the multi-scale architecture of the identification module.
In Tab. V, without the identification module, our baseline achieves 20.2% and 21.0% WER on the Dev and Test sets, respectively. The base kernel size is set as $3\times 3\times 3$ for $K_{t}\times K_{s}\times K_{s}$. When fixing $N_{t}$=4 and varying the spatial dilation rates to expand the spatial receptive fields, a larger $N_{s}$ consistently brings better accuracy until $N_{s}$ reaches 3; beyond that, no further gain is observed. Consequently, we set $N_{s}$ to 3 by default and investigate the impact of $N_{t}$. Notably, increasing $N_{t}$ to 5 or decreasing $N_{t}$ to 2 or 3 yields worse accuracy. We thus adopt $N_{t}$=4 by default. We also compare our proposed multi-scale architecture with a plain convolution with more parameters. The receptive field of the identification module with $N_{t}$=4, $N_{s}$=3 is identical to that of a plain convolution with $K_{t}$=9 and $K_{s}$=7. As shown at the bottom of Tab. V, although such a convolution has more parameters and computations than ours, it performs worse than our method, which verifies the effectiveness of our proposed architecture.

Study on the configurations of the temporal attention module. In the upper part of Tab. VI, we investigate the effect of the number of branches $M_{t}$ in the temporal attention module. As $M_{t}$ increases, the performance consistently improves until $M_{t}$ reaches 3; a larger $M_{t}$ brings no further gain. We thus set $M_{t}=3$ by default. We then investigate the efficacy of the multiscale architecture by comparing it against a single large convolution with kernel size $P_{t}$=5, which has the same temporal receptive field. Our design outperforms it by a large margin at lower computational cost. In the lower part of Tab. VI, we explore different implementations of the temporal attention module for augmenting the original features. A direct multiplication of the attention maps $\bm{U}$ with the input features $\bm{y}$ severely degrades performance due to the disruption of the input feature distributions. When implemented residually by adding $\bm{y}$, the expression $\bm{U}\odot\bm{y}+\bm{y}$ notably mitigates this effect, resulting in performance gains of +0.6% and +0.7% on the Dev and Test sets, respectively. We further subtract 0.5 from the attention maps $\bm{U}$ to emphasize or suppress certain positions, and then element-wise multiply the result with $\bm{y}$. This refined implementation brings a +1.1% & +1.5% performance boost. Finally, we make this implementation residual by adding the input features $\bm{y}$, i.e., $(\bm{U}-0.5)\odot\bm{y}+\bm{y}$, achieving a notable performance boost of +2.2% & +2.8%.

Generalizability of CorrNet+. We deploy CorrNet+ on multiple backbones, including SqueezeNet [56], ShuffleNet V2 [68], GoogLeNet [69], RegNetX-800mf [70] and RegNetY-800mf [70], to validate its generalizability in Tab. VII. Our proposed model generalizes well across different backbones, bringing +2.8% & +3.0%, +2.6% & +2.7%, +2.5% & +2.5%, +2.1% & +2.8% and +2.3% & +2.8% accuracy boosts on the Dev and Test sets, respectively.

Comparisons with other spatial-temporal reasoning methods. Tab. VIII compares our approach with other methods with spatial-temporal reasoning ability. SENet [56] and CBAM [71] perform channel attention to emphasize key information. NLNet [72] employs non-local means to aggregate spatial-temporal information from other frames. I3D [5] and R(2+1)D [73] deploy 3D or 2D+1D convolutions to capture spatial-temporal features.
TSM [23] adopts a temporal shift operation to obtain features from adjacent frames. In the upper part of Tab. VIII, one can see that CorrNet+ largely outperforms the other attention-based methods, i.e., SENet, CBAM and NLNet, owing to its superior ability to identify and aggregate body trajectories. NLNet runs out of memory due to its computational complexity, which is quadratic in the spatial-temporal size. In the bottom part of Tab. VIII, we observe that I3D and R(2+1)D show degraded accuracy, which may be attributed to their limited spatial-temporal receptive fields and increased training complexity. TSM brings a slight 0.3% & 0.3% accuracy boost. Our proposed approach significantly outperforms these methods, affirming its efficacy in aggregating salient spatial-temporal information from even distant spatial neighbors.

TABLE X: Comparison with state-of-the-art methods on the PHOENIX2014 and PHOENIX2014-T datasets over the CSLR setting. $*$ indicates extra cues such as face or hand features are included via additional networks or pre-extracted heatmaps.

Method | PHOENIX2014 Dev del/ins | PHOENIX2014 Dev WER(%) | PHOENIX2014 Test del/ins | PHOENIX2014 Test WER(%) | PHOENIX2014-T Dev WER(%) | PHOENIX2014-T Test WER(%)
---|---|---|---|---|---|---
SFL [14] | 7.9/6.5 | 26.2 | 7.5/6.3 | 26.8 | 25.1 | 26.1
FCN [12] | - | 23.7 | - | 23.9 | 23.3 | 25.1
CMA [38] | 7.3/2.7 | 21.3 | 7.3/2.4 | 21.9 | - | -
VAC [15] | 7.9/2.5 | 21.2 | 8.4/2.6 | 22.3 | - | -
SMKD [17] | 6.8/2.5 | 20.8 | 6.3/2.3 | 21.0 | 20.8 | 22.4
CVT-SLR [21] | 6.4/2.6 | 19.8 | 6.1/2.3 | 20.1 | 19.4 | 20.3
TLP [40] | 6.3/2.8 | 19.7 | 6.1/2.9 | 20.8 | 19.4 | 21.2
CoSign-2s [74] | - | 19.7 | - | 20.1 | 19.5 | 20.1
AdaSize [75] | 7.0/2.6 | 19.7 | 7.2/3.1 | 20.9 | 19.7 | 21.2
AdaBrowse+ [76] | 6.0/2.5 | 19.6 | 5.9/2.6 | 20.7 | 19.5 | 20.6
SEN [18] | 5.8/2.6 | 19.5 | 7.3/4.0 | 21.0 | 19.3 | 20.7
CTCA [20] | 6.2/2.9 | 19.5 | 6.1/2.6 | 20.1 | 19.3 | 20.3
RadialCTC [39] | 6.5/2.7 | 19.4 | 6.1/2.6 | 20.2 | - | -
SLT∗ [28] | - | - | - | - | 24.5 | 24.6
C+L+H∗ [26] | - | 26.0 | - | 26.0 | 22.1 | 24.1
DNF∗ [13] | 7.3/3.3 | 23.1 | 6.7/3.3 | 22.9 | - | -
STMC∗ [25] | 7.7/3.4 | 21.1 | 7.4/2.6 | 20.7 | 19.6 | 21.0
C2SLR∗ [16] | - | 20.5 | - | 20.4 | 20.2 | 20.4
CorrNet+ | 5.3/2.7 | 18.0 | 5.6/2.4 | 18.2 | 17.2 | 19.1

TABLE XI: Comparison with state-of-the-art methods on the CSL-Daily dataset [29] over the CSLR setting.

Method | Dev(%) | Test(%)
---|---|---
BN-TIN [29] | 33.6 | 33.1
FCN [12] | 33.2 | 32.5
Joint-SLRT [42] | 33.1 | 32.0
TIN-Iterative [13] | 32.8 | 32.4
CTCA [20] | 31.3 | 29.4
AdaSize [75] | 31.3 | 30.9
AdaBrowse+ [76] | 31.2 | 30.7
SEN [18] | 31.1 | 30.7
CorrNet+ | 28.6 | 28.2

TABLE XII: Comparison with state-of-the-art methods on the CSL dataset [61] over the CSLR setting.

Method | WER(%)
---|---
LS-HAN [61] | 17.3
SubUNet [77] | 11.0
SF-Net [78] | 3.8
FCN [12] | 3.0
STMC [25] | 2.1
VAC [15] | 1.6
C2SLR [16] | 0.9
SEN [18] | 0.8
CorrNet+ | 0.7

Comparisons with previous methods equipped with hand or face features. Many previous CSLR methods explicitly leverage hand and face features for better recognition by employing multiple input streams [26], human body keypoints [25, 16] or pre-extracted hand patches [13]. They require extra resource-intensive pose-estimation networks like HRNet [79] or additional multiple training stages. Our approach does not rely on such extra supervision and can be trained end-to-end to dynamically attend to body trajectories like hand and face actions in a self-motivated way. Tab. IX shows that our method outperforms these methods by a large margin with much fewer computations.
TABLE XIII: Comparison with state-of-the-art methods on the PHOENIX2014-T dataset [28] and CSL-Daily dataset [29] over the SLT setting.

PHOENIX2014-T

Method | Dev Rouge | Dev BLEU1 | Dev BLEU2 | Dev BLEU3 | Dev BLEU4 | Test Rouge | Test BLEU1 | Test BLEU2 | Test BLEU3 | Test BLEU4
---|---|---|---|---|---|---|---|---|---|---
Sign2Gloss2Text: | | | | | | | | | |
SL-Luong [28] | 44.14 | 42.88 | 30.30 | 23.02 | 18.40 | 43.80 | 43.29 | 30.39 | 22.82 | 18.13
SignBT [29] | 49.53 | 49.33 | 36.43 | 28.66 | 23.51 | 49.35 | 48.55 | 36.13 | 28.47 | 23.51
STMC-Transf [80] | 46.31 | 48.27 | 35.20 | 27.47 | 22.47 | 46.77 | 48.73 | 36.53 | 29.03 | 24.00
MMTLB [43] | 50.23 | 50.36 | 37.50 | 29.69 | 24.63 | 49.59 | 49.94 | 37.28 | 29.67 | 24.60
TwoStream-SLT [22] | 52.01 | 52.35 | 39.76 | 31.85 | 26.47 | 51.59 | 52.11 | 39.81 | 32.00 | 26.71
SLTUNET [81] | 49.61 | - | - | - | 25.36 | 49.98 | 50.42 | 39.24 | 31.41 | 26.00
Sign2Text: | | | | | | | | | |
SL-Luong [28] | 31.80 | 31.87 | 19.11 | 13.16 | 9.94 | 31.80 | 32.24 | 19.03 | 12.83 | 9.58
Joint-SLRT [42] | - | 47.26 | 34.40 | 27.05 | 22.38 | - | 46.61 | 33.73 | 26.19 | 21.32
STMC-T [82] | 48.24 | 47.60 | 36.43 | 29.18 | 24.09 | 46.65 | 46.98 | 36.09 | 28.70 | 23.65
SignBT [29] | 50.29 | 51.11 | 37.90 | 29.80 | 24.45 | 49.54 | 50.80 | 37.75 | 29.72 | 24.32
MMTLB [43] | 53.10 | 53.95 | 41.12 | 33.14 | 27.61 | 52.65 | 53.97 | 41.75 | 33.84 | 28.39
SLTUNET [81] | 52.23 | - | - | - | 27.87 | 52.11 | 52.92 | 41.76 | 33.99 | 28.47
TwoStream-SLT [22] | 54.08 | 54.32 | 41.99 | 34.15 | 28.66 | 53.48 | 54.90 | 42.43 | 34.46 | 28.95
CorrNet+ | 54.54 | 54.56 | 42.31 | 34.48 | 29.13 | 53.76 | 55.32 | 42.74 | 34.86 | 29.42

CSL-Daily

Method | Dev Rouge | Dev BLEU1 | Dev BLEU2 | Dev BLEU3 | Dev BLEU4 | Test Rouge | Test BLEU1 | Test BLEU2 | Test BLEU3 | Test BLEU4
---|---|---|---|---|---|---|---|---|---|---
Sign2Gloss2Text: | | | | | | | | | |
SL-Luong [28] | 40.18 | 41.46 | 25.71 | 16.57 | 11.06 | 40.05 | 41.55 | 25.73 | 16.54 | 11.03
SignBT [29] | 48.38 | 50.97 | 36.16 | 26.26 | 19.53 | 48.21 | 50.68 | 36.00 | 26.20 | 19.67
MMTLB [43] | 51.35 | 50.89 | 37.96 | 28.53 | 21.88 | 51.43 | 50.33 | 37.44 | 28.08 | 21.46
SLTUNET [81] | 52.89 | - | - | - | 22.95 | 53.10 | 54.39 | 40.28 | 30.52 | 23.76
TwoStream-SLT [22] | 53.91 | 53.58 | 40.49 | 30.67 | 23.71 | 54.92 | 54.08 | 41.02 | 31.18 | 24.13
Sign2Text: | | | | | | | | | |
SL-Luong [28] | 34.28 | 34.22 | 19.72 | 12.24 | 7.96 | 34.54 | 34.16 | 19.57 | 11.84 | 7.56
SignBT [29] | 49.49 | 51.46 | 37.23 | 27.51 | 20.80 | 49.31 | 51.42 | 37.26 | 27.76 | 21.34
MMTLB [43] | 53.38 | 53.81 | 40.84 | 31.29 | 24.42 | 53.25 | 53.31 | 40.41 | 30.87 | 23.92
SLTUNET [81] | 53.58 | - | - | - | 23.99 | 54.08 | 54.98 | 41.44 | 31.84 | 25.01
TwoStream-SLT [22] | 55.10 | 55.21 | 42.31 | 32.71 | 25.76 | 55.72 | 55.44 | 42.59 | 32.87 | 25.79
CorrNet+ | 55.52 | 55.64 | 42.78 | 33.13 | 26.14 | 55.84 | 55.82 | 42.96 | 33.26 | 26.14

Figure 7: Visualizations of heatmaps by Grad-CAM [11]. Top: raw frames; Bottom: heatmaps of the identification module. Our identification module generally focuses on the human body (light yellow areas) and especially attends to informative regions like the hands and face (dark red areas) to track body trajectories.

### IV-C Comparison with State-of-the-Art Methods

We verify the effectiveness of our proposed method on two sign language understanding tasks, i.e., continuous sign language recognition (CSLR) and sign language translation (SLT). We next present the results of our method in both settings.

Figure 8: Visualizations of correlation maps for the correlation module.
Based on the correlation operators, each frame especially attends to informative regions, like the hands and face (dark red areas), in the adjacent left/right frames.

Figure 9: Visualizations of temporal attention maps for the temporal attention module. The module tends to emphasize frames with rapid movements and suppress frames with static content.

#### IV-C1 Continuous sign language recognition

PHOENIX2014 and PHOENIX2014-T. Tab. X shows a comprehensive comparison between our CorrNet+ and other state-of-the-art methods. The entries marked with $*$ indicate methods that utilize additional cues like face or hand features for better accuracy. We notice that CorrNet+ outperforms other state-of-the-art methods by a large margin on both datasets, thanks to its special attention to body trajectories. In particular, CorrNet+ outperforms previous CSLR methods [26, 25, 16, 13] equipped with hand and face features acquired by heavy pose-estimation networks or pre-extracted heatmaps (marked with *), without such additional expensive supervision.

CSL-Daily. CSL-Daily is a recently released large-scale dataset with the largest vocabulary size (2k) among commonly used CSLR datasets, covering a wide range of content including family life, social contact and so on. Tab. XI shows that our CorrNet+ achieves new state-of-the-art accuracy on this challenging dataset with notable progress, indicating that it generalizes well to real-world scenarios.

CSL. As shown in Tab. XII, our CorrNet+ achieves superior accuracy (0.7% WER) on this well-examined dataset, outperforming existing CSLR methods.

#### IV-C2 Sign language translation

We compare our method with recent methods on two widely used SLT datasets, PHOENIX2014-T and CSL-Daily, in Tab. XIII. These methods are roughly divided into two categories: Sign2Gloss2Text, which first transforms input videos into intermediate gloss representations and then performs translation, and Sign2Text, which directly conducts end-to-end translation from input videos. Our method outperforms previous methods across both datasets, demonstrating its effectiveness in sign language comprehension. Notably, the powerful TwoStream-SLT [22] adopts both RGB videos and skeleton data as inputs to fuse beneficial information from both modalities, which requires more expensive supervision and heavy computations. In contrast, our method achieves better performance with only RGB videos as input, demonstrating a better accuracy-computation trade-off.

### IV-D Visualizations

Visualizations for the identification module. Fig. 7 shows the heatmaps generated by our identification module. The module pays special attention to the human body (light yellow areas), especially informative regions such as the hands and face (dark red areas), to capture human body trajectories. These results verify the effectiveness of our identification module in dynamically emphasizing areas critical for expressing sign language while suppressing background regions to avoid noisy information.

Visualizations for the correlation module. Fig. 8 illustrates the correlation maps generated by our correlation module, showing the computed spatial-temporal correlations between the current frame and its temporal neighbors. Three adjacent frames are shown to visualize the correlation maps. We observe that our correlation module pays major attention to informative regions in adjacent frames, like the hands or the face, enabling precise tracking of body trajectories during sign expression.
In particular, it learns to focus on the moving body parts that play a major role in expressing signs, enhancing sign language comprehension. For example, in the 3rd and 4th rows, the correlation module consistently pays major attention to the quickly moving right hand to capture sign information while overlooking redundant information in the background.

Visualizations for the temporal attention module. Fig. 9 visualizes the temporal attention maps generated by our temporal attention module over some selected frames. The darker the color, the higher the value. We observe that our temporal attention module tends to allocate higher weights to frames with rapid movements (e.g., the last several frames in the first row; the leading frames in the second row). It learns to assign lower weights to static frames with few body movements. This behavior is consistent with human perception, as humans pay more attention to moving objects in the visual field to capture key movements. These observations clearly demonstrate the effectiveness of our temporal attention module in emphasizing the critical segments in the whole sign video.

## V Conclusion

Recent methods for sign language understanding usually focus solely on each frame to extract spatial features and overlook cross-frame interactions, thus failing to capture the key human body movements. To address this problem, this paper introduces an enhanced correlation network (CorrNet+) to capture human body trajectories, comprising a correlation module, an identification module and a temporal attention module. The effectiveness of CorrNet+ is verified on two sign language understanding tasks, continuous sign language recognition (CSLR) and sign language translation (SLT), with new state-of-the-art performance compared to previous methods. Notably, using only RGB videos as input on both tasks, CorrNet+ outperforms previous methods that rely on resource-intensive pose-estimation networks or pre-extracted heatmaps for hand and facial feature extraction, with much fewer computations. Compared to CorrNet [30], CorrNet+ achieves a significant performance boost across multiple benchmarks with drastically reduced computational costs, demonstrating a better accuracy-computation trade-off. Extensive visualizations further verify the effectiveness of CorrNet+ in intelligently emphasizing human body trajectories across adjacent frames in a self-motivated way.

## References

* [1] P. Dreuw, D. Rybach, T. Deselaers, M. Zahedi, and H. Ney, “Speech recognition techniques for a sign language recognition system,” _hand_ , vol. 60, p. 80, 2007.
* [2] S. C. Ong and S. Ranganath, “Automatic sign language analysis: A survey and the future beyond lexical meaning,” _IEEE Transactions on Pattern Analysis & Machine Intelligence_, vol. 27, no. 06, pp. 873–891, 2005.
* [3] N. Adaloglou, T. Chatzis, I. Papastratis, A. Stergioulas, G. T. Papadopoulos, V. Zacharopoulou, G. J. Xydopoulos, K. Atzakas, D. Papazachariou, and P. Daras, “A comprehensive study on deep learning-based methods for sign language recognition,” _IEEE Transactions on Multimedia_ , vol. 24, pp. 1750–1762, 2021.
* [4] R. Rastgoo, K. Kiani, and S. Escalera, “Sign language recognition: A deep survey,” _Expert Systems with Applications_ , vol. 164, p. 113794, 2021.
* [5] J. Carreira and A. Zisserman, “Quo vadis, action recognition? a new model and the kinetics dataset,” in _proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp.
6299–6308. * [6] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks for action recognition in videos,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 41, no. 11, pp. 2740–2755, 2018. * [7] Z. Shou, D. Wang, and S.-F. Chang, “Temporal action localization in untrimmed videos via multi-stage cnns,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 1049–1058. * [8] P. Weinzaepfel, Z. Harchaoui, and C. Schmid, “Learning to track for spatio-temporal action localization,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 3164–3172. * [9] L. Zhu, Z. Xu, Y. Yang, and A. G. Hauptmann, “Uncovering the temporal context for video question answering,” _International Journal of Computer Vision_ , vol. 124, pp. 409–421, 2017. * [10] Y. Li, X. Wang, J. Xiao, W. Ji, and T.-S. Chua, “Invariant grounding for video question answering,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 2928–2937. * [11] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 618–626. * [12] K. L. Cheng, Z. Yang, Q. Chen, and Y.-W. Tai, “Fully convolutional networks for continuous sign language recognition,” in _ECCV_ , 2020. * [13] R. Cui, H. Liu, and C. Zhang, “A deep neural framework for continuous sign language recognition by iterative training,” _TMM_ , vol. 21, no. 7, pp. 1880–1891, 2019. * [14] Z. Niu and B. Mak, “Stochastic fine-grained labeling of multi-state sign glosses for continuous sign language recognition,” in _ECCV_ , 2020. * [15] Y. Min, A. Hao, X. Chai, and X. Chen, “Visual alignment constraint for continuous sign language recognition,” in _ICCV_ , 2021. * [16] R. Zuo and B. Mak, “C2slr: Consistency-enhanced continuous sign language recognition,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 5131–5140. * [17] A. Hao, Y. Min, and X. Chen, “Self-mutual distillation learning for continuous sign language recognition,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 11 303–11 312. * [18] L. Hu, L. Gao, Z. Liu, and W. Feng, “Self-emphasizing network for continuous sign language recognition,” in _Thirty-seventh AAAI conference on artificial intelligence_ , 2023. * [19] B. Zhou, Z. Chen, A. Clapés, J. Wan, Y. Liang, S. Escalera, Z. Lei, and D. Zhang, “Gloss-free sign language translation: Improving from visual-language pretraining,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2023, pp. 20 871–20 881. * [20] L. Guo, W. Xue, Q. Guo, B. Liu, K. Zhang, T. Yuan, and S. Chen, “Distilling cross-temporal contexts for continuous sign language recognition,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 10 771–10 780. * [21] J. Zheng, Y. Wang, C. Tan, S. Li, G. Wang, J. Xia, Y. Chen, and S. Z. Li, “Cvt-slr: Contrastive visual-textual transformation for sign language recognition with variational alignment,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 23 141–23 150. * [22] Y. Chen, R. Zuo, F. Wei, Y. Wu, S. Liu, and B. 
Mak, “Two-stream network for sign language recognition and translation,” _Advances in Neural Information Processing Systems_ , vol. 35, pp. 17 043–17 056, 2022. * [23] J. Lin, C. Gan, and S. Han, “Tsm: Temporal shift module for efficient video understanding,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 7083–7093. * [24] Z. Liu, D. Luo, Y. Wang, L. Wang, Y. Tai, C. Wang, J. Li, F. Huang, and T. Lu, “Teinet: Towards an efficient architecture for video recognition,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 07, 2020, pp. 11 669–11 676. * [25] H. Zhou, W. Zhou, Y. Zhou, and H. Li, “Spatial-temporal multi-cue network for continuous sign language recognition,” in _AAAI_ , 2020. * [26] O. Koller, N. C. Camgoz, H. Ney, and R. Bowden, “Weakly supervised learning with multi-stream cnn-lstm-hmms to discover sequential parallelism in sign language videos,” _PAMI_ , vol. 42, no. 9, pp. 2306–2320, 2019. * [27] O. Koller, J. Forster, and H. Ney, “Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers,” _Computer Vision and Image Understanding_ , vol. 141, pp. 108–125, 2015. * [28] N. C. Camgoz, S. Hadfield, O. Koller, H. Ney, and R. Bowden, “Neural sign language translation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 7784–7793. * [29] H. Zhou, W. Zhou, W. Qi, J. Pu, and H. Li, “Improving sign language translation with monolingual data by sign back-translation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 1316–1325. * [30] L. Hu, L. Gao, Z. Liu, and W. Feng, “Continuous sign language recognition with correlation network,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 2529–2539. * [31] W. Gao, G. Fang, D. Zhao, and Y. Chen, “A chinese sign language recognition system based on sofm/srn/hmm,” _Pattern Recognition_ , vol. 37, no. 12, pp. 2389–2402, 2004. * [32] W. T. Freeman and M. Roth, “Orientation histograms for hand gesture recognition,” in _International workshop on automatic face and gesture recognition_ , vol. 12. Zurich, Switzerland, 1995, pp. 296–301. * [33] O. Koller, O. Zargaran, H. Ney, and R. Bowden, “Deep sign: Hybrid cnn-hmm for continuous sign language recognition,” in _Proceedings of the British Machine Vision Conference 2016_ , 2016. * [34] J. Han, G. Awad, and A. Sutherland, “Modelling and segmenting subunits for sign language recognition based on hand motion analysis,” _Pattern Recognition Letters_ , vol. 30, no. 6, pp. 623–633, 2009. * [35] O. Koller, S. Zargaran, and H. Ney, “Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent cnn-hmms,” in _CVPR_ , 2017. * [36] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in _Proceedings of the 23rd international conference on Machine learning_ , 2006, pp. 369–376. * [37] J. Pu, W. Zhou, and H. Li, “Iterative alignment network for continuous sign language recognition,” in _CVPR_ , 2019. * [38] J. Pu, W. Zhou, H. Hu, and H. Li, “Boosting continuous sign language recognition via cross modality augmentation,” in _ACM MM_ , 2020. * [39] Y. Min, P. Jiao, Y. Li, X. Wang, L. Lei, X. Chai, and X. 
Chen, “Deep radial embedding for visual sequence learning,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VI_. Springer, 2022, pp. 240–256.
* [40] L. Hu, L. Gao, Z. Liu, and W. Feng, “Temporal lift pooling for continuous sign language recognition,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV_. Springer, 2022, pp. 511–527.
* [41] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” _arXiv preprint arXiv:1409.0473_ , 2014.
* [42] N. C. Camgoz, O. Koller, S. Hadfield, and R. Bowden, “Sign language transformers: Joint end-to-end sign language recognition and translation,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 10 023–10 033.
* [43] Y. Chen, F. Wei, X. Sun, Z. Wu, and S. Lin, “A simple multi-modality transfer learning baseline for sign language translation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 5120–5130.
* [44] J. Ye, W. Jiao, X. Wang, Z. Tu, and H. Xiong, “Cross-modality data augmentation for end-to-end sign language translation,” _arXiv preprint arXiv:2305.11096_ , 2023.
* [45] D. Zhu, V. Czehmann, and E. Avramidis, “Neural machine translation methods for translating text to sign language glosses,” in _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 2023, pp. 12 523–12 541.
* [46] K. Lin, X. Wang, L. Zhu, K. Sun, B. Zhang, and Y. Yang, “Gloss-free end-to-end sign language translation,” _arXiv preprint arXiv:2305.12876_ , 2023.
* [47] I. Rocco, R. Arandjelovic, and J. Sivic, “Convolutional neural network architecture for geometric matching,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 6148–6157.
* [48] C. Feichtenhofer, A. Pinz, and A. Zisserman, “Detect to track and track to detect,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 3038–3046.
* [49] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid, “Deepflow: Large displacement optical flow with deep matching,” in _Proceedings of the IEEE international conference on computer vision_ , 2013, pp. 1385–1392.
* [50] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 2758–2766.
* [51] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8934–8943.
* [52] X. Shi, Z. Huang, W. Bian, D. Li, M. Zhang, K. C. Cheung, S. See, H. Qin, J. Dai, and H. Li, “Videoflow: Exploiting temporal cues for multi-frame optical flow estimation,” _arXiv preprint arXiv:2303.08340_ , 2023.
* [53] X. Shi, Z. Huang, D. Li, M. Zhang, K. C. Cheung, S. See, H. Qin, J. Dai, and H. Li, “Flowformer++: Masked cost volume autoencoding for pretraining optical flow estimation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 1599–1610.
* [54] Y. Zhao, Y. Xiong, and D.
Lin, “Recognize actions by disentangling components of dynamics,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 6566–6575. * [55] A. Diba, M. Fayyaz, V. Sharma, M. M. Arzani, R. Yousefzadeh, J. Gall, and L. Van Gool, “Spatio-temporal channel correlation networks for action classification,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 284–299. * [56] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7132–7141. * [57] M. Lee, S. Lee, S. Son, G. Park, and N. Kwak, “Motion feature network: Fixed motion filter for action recognition,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 387–403. * [58] H. Wang, D. Tran, L. Torresani, and M. Feiszli, “Video modeling with correlation networks,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 352–361. * [59] Y. Xu, H. Cao, K. Mao, Z. Chen, L. Xie, and J. Yang, “Aligning correlation information for domain adaptation in action recognition,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2022. * [60] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017. * [61] J. Huang, W. Zhou, Q. Zhang, H. Li, and W. Li, “Video-based sign language recognition without temporal segmentation,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 32, no. 1, 2018. * [62] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778. * [63] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _2009 IEEE conference on computer vision and pattern recognition_. Ieee, 2009, pp. 248–255. * [64] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014. * [65] Y. Liu, J. Gu, N. Goyal, X. Li, S. Edunov, M. Ghazvininejad, M. Lewis, and L. Zettlemoyer, “Multilingual denoising pre-training for neural machine translation,” _Transactions of the Association for Computational Linguistics_ , vol. 8, pp. 726–742, 2020. * [66] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_ , 2002, pp. 311–318. * [67] C.-Y. Lin, “Rouge: A package for automatic evaluation of summaries,” in _Text summarization branches out_ , 2004, pp. 74–81. * [68] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 116–131. * [69] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 1–9. * [70] I. Radosavovic, R. P. Kosaraju, R. Girshick, K. He, and P. Dollár, “Designing network design spaces,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 10 428–10 436. * [71] S. Woo, J. Park, J.-Y. Lee, and I. S. 
Kweon, “Cbam: Convolutional block attention module,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 3–19.
* [72] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7794–7803.
* [73] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri, “A closer look at spatiotemporal convolutions for action recognition,” in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , 2018, pp. 6450–6459.
* [74] P. Jiao, Y. Min, Y. Li, X. Wang, L. Lei, and X. Chen, “Cosign: Exploring co-occurrence signals in skeleton-based continuous sign language recognition,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2023, pp. 20 676–20 686.
* [75] L. Hu, L. Gao, Z. Liu, and W. Feng, “Scalable frame resolution for efficient continuous sign language recognition,” _Pattern Recognition_ , vol. 145, p. 109903, 2024.
* [76] L. Hu, L. Gao, Z. Liu, C.-M. Pun, and W. Feng, “Adabrowse: Adaptive video browser for efficient continuous sign language recognition,” in _Proceedings of the 31st ACM International Conference on Multimedia_ , 2023, pp. 709–718.
* [77] N. Cihan Camgoz, S. Hadfield, O. Koller, and R. Bowden, “Subunets: End-to-end hand shape and continuous sign language recognition,” in _ICCV_ , 2017.
* [78] Z. Yang, Z. Shi, X. Shen, and Y.-W. Tai, “Sf-net: Structured feature network for continuous sign language recognition,” _arXiv preprint arXiv:1908.01341_ , 2019.
* [79] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang _et al._ , “Deep high-resolution representation learning for visual recognition,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 43, no. 10, pp. 3349–3364, 2020.
* [80] K. Yin and J. Read, “Better sign language translation with stmc-transformer,” _arXiv preprint arXiv:2004.00588_ , 2020.
* [81] B. Zhang, M. Müller, and R. Sennrich, “SLTUNET: A simple unified model for sign language translation,” in _The Eleventh International Conference on Learning Representations_ , 2023. [Online]. Available: https://openreview.net/forum?id=EBS4C77p_5S
* [82] H. Zhou, W. Zhou, Y. Zhou, and H. Li, “Spatial-temporal multi-cue network for sign language recognition and translation,” _IEEE Transactions on Multimedia_ , vol. 24, pp. 768–779, 2021.

Lianyu Hu received his bachelor's and master's degrees in computer science and technology from Dalian University of Technology in 2018 and 2021, respectively. He is currently a Ph.D. candidate with the College of Intelligence and Computing at Tianjin University, China. His research interests include video understanding, multimodal understanding and sign language understanding.

Wei Feng is a full Professor at the School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, China. He received the PhD degree in computer science from City University of Hong Kong in 2008. His major research interests are active robotic vision and visual intelligence, specifically including active camera relocalization and lighting recurrence, active 3D scene perception, video analysis, and generic pattern recognition.

Liqing Gao received the BS and MS degrees in Electronic & Information Engineering from Inner Mongolia University, China, in 2015 and 2018. She is working toward the PhD degree in the College of Intelligence and Computing, Tianjin University, China.
Her research interests include sign language recognition and gesture recognition.

Zekang Liu received the BS and ME in Software Engineering from Hebei University of Economics and Business, China, and Tianjin Normal University, China, in 2017 and 2019, respectively. He is studying for an Eng.D in the College of Intelligence and Computing, Tianjin University, China. His research interests include vehicle detection and sign language recognition.

Liang Wan is a full Professor in the College of Intelligence and Computing, and deputy director of the Medical College, Tianjin University, P. R. China. She obtained a Ph.D. degree in computer science and engineering from The Chinese University of Hong Kong in 2007, and worked as a PostDoc Research Associate/Fellow at City University of Hong Kong from 2007 to 2011. Her current research interests focus on image processing and computer vision, including image segmentation, low-level image restoration, and medical image analysis.
# Spectrum formation in X-ray pulsars at very low mass accretion rate: Monte-Carlo approach

Alexander A. Mushtukov,1,2 Valery F. Suleimanov,3,4,2 Sergey S. Tsygankov5,2 and Simon Portegies Zwart1
1 Leiden Observatory, Leiden University, NL-2300RA Leiden, The Netherlands
2 Space Research Institute of the Russian Academy of Sciences, Profsoyuznaya Str. 84/32, Moscow 117997, Russia
3 Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D-72076 Tübingen, Germany
4 Kazan Federal University, Kremlevskaya str. 18, Kazan 42008, Russia
5 Department of Physics and Astronomy, FI-20014 University of Turku, Finland
E-mail: <EMAIL_ADDRESS> (AAM)

###### Abstract

It has been recently discovered that the transition of X-ray pulsars to the low-luminosity state ($L\lesssim 10^{35}\,{\rm erg\ s^{-1}}$) is accompanied by a dramatic spectral change. Namely, the typical power-law-like spectrum with a high-energy cutoff transforms into a two-component structure with a possible cyclotron absorption feature on top of it. It was proposed that these spectral characteristics can be explained qualitatively by the emission of cyclotron photons in the atmosphere of the neutron star, caused by collisional excitation of electrons to upper Landau levels, followed by Comptonization of the photons by the electron gas, which is expected to be overheated in a thin top layer of the atmosphere. In this paper, we perform Monte-Carlo simulations of the radiative transfer in the atmosphere of an accreting neutron star, accounting for resonant scattering of polarized X-ray photons by thermally distributed electrons. The spectral shape is shown to be strongly polarization-dependent in soft X-rays ($\lesssim 10\,{\rm keV}$) and near the cyclotron scattering feature. The results of our numerical simulations are tested against the observational data of the X-ray pulsar A 0535+262 in the low-luminosity state. We show that the spectral shape of the pulsar can be reproduced by the proposed theoretical model. Applications of these results to observational studies of accreting neutron stars are discussed.

###### keywords: X-rays: binaries – stars: neutron – radiative transfer – scattering – stars: magnetic field – polarization

## 1 Introduction

Classical X-ray pulsars (XRPs) are accreting strongly magnetized neutron stars (NSs) orbiting optical stellar companions (for a review see, e.g., Walter et al., 2015). One of the richest families of XRPs comprises systems with a Be star as a companion (BeXRPs; Reig, 2011). These systems exhibit strong transient activity that allows us to study the interaction of radiation with matter under conditions of extremely strong magnetic fields over a wide range of mass accretion rates covering up to six orders of magnitude (Doroshenko, 2020).

From an observational point of view, the spectra of XRPs at high luminosities ($>10^{36}$ erg s$^{-1}$) have similar shapes, which can be well fitted by a power law with an exponential cutoff at high energies (see, e.g., Nagase, 1989; Filippova et al., 2005). However, it was recently discovered that a decrease of the observed luminosity below this value is accompanied by dramatic changes of the energy spectra in several XRPs (Tsygankov et al., 2019a; Tsygankov et al., 2019b; Doroshenko et al., 2021; Lutovinov et al., 2021), pointing to varied physical and geometrical properties of the emission region.
In particular, it was originally shown for the transient XRPs GX 304$-$1 and A 0535+26 that once their luminosity drops to $\sim 10^{34}-10^{35}$ erg s$^{-1}$, the "canonical" spectral shape of their emission undergoes a dramatic transition into a two-component structure consisting of two humps peaking at $\sim 5-7$ keV and $\sim 30-50$ keV (Tsygankov et al., 2019a; Tsygankov et al., 2019b; Lutovinov et al., 2021). The only other source with a similar spectral structure observed earlier is the persistent low-luminosity XRP X Persei (Doroshenko et al., 2012), which, however, never showed spectral transitions as a function of luminosity. In the case of A 0535+26, a cyclotron absorption feature was also detected on top of the high-energy component of the spectrum. The appearance of the high-energy spectral component was interpreted as the result of recombination of electrons collisionally excited to the upper Landau levels in the heated layers of the NS atmosphere (see Fig. 1, Tsygankov et al. 2019b). However, no quantitative description has been given so far.

Dramatic spectral changes in XRPs at very low mass accretion rates can shed light on the different physical processes responsible for spectral formation at different luminosities. Depending on the luminosity, one can distinguish different regimes of accretion, i.e., of the interaction of radiation with the accreting plasma:

(i) At high mass accretion rates, the luminosity and local X-ray energy flux are high enough to stop the accretion flow above the NS surface in a radiation-dominated shock (Lyubarsky & Sunyaev, 1982). Below the shock region, the matter slowly settles down to the stellar surface and forms the so-called accretion column confined by a strong magnetic field (Inoue, 1975; Basko & Sunyaev, 1976; Wang & Frank, 1981; Mushtukov et al., 2015b; Gornostaev, 2021). Because of magnetic confinement and the opacity reduced by a strong magnetic field (Herold, 1979), the accretion column can survive under the extreme conditions of high radiation pressure. The minimal accretion luminosity sufficient to stop the matter in a radiation-pressure-dominated shock is called the critical luminosity $L_{\rm crit}$ (Basko & Sunyaev, 1976). The critical luminosity is of the order of $10^{37}\,{\rm erg\ s^{-1}}$ and depends on the geometry of the accretion flow and the magnetic field strength at the NS surface (Becker et al., 2012; Mushtukov et al., 2015a).

(ii) At low mass accretion rates (the sub-critical regime, when $L<L_{\rm crit}$), the radiation pressure is insufficient to stop the accretion flow above the NS surface, and the final braking of the flow happens in the atmosphere of the NS due to collisions between particles. Sub-critical accretion results in hot spots at the stellar surface located close to the NS’s magnetic poles. Even in the sub-critical regime, the accretion flow dynamics can be affected by radiation pressure, and X-ray spectra can be shaped by the interaction of photons with the accreting material (see, e.g., Becker & Wolff 2007; Mushtukov et al. 2015c). Only in the case of very low mass accretion rates ($\dot{M}\lesssim 10^{15}\,{\rm g\,s^{-1}}$) is the influence of the interaction between accreting matter and radiation above the NS surface negligible. In this regime, almost all the energy of the accretion flow is released in the geometrically thin NS atmosphere, where the accreted protons transfer their kinetic energy into plasma heating.
In this case, the geometry of the emitting region is elementary, and the plane-parallel approximation for the particle-heated atmosphere is reasonable for a complete description of spectral formation.

In this work, based on Monte-Carlo simulations of radiative transfer in a strong magnetic field, we propose the first simple two-slab model describing the spectral formation in XRPs at very low mass accretion rates. In Section 2, we introduce our model setup and the basic assumptions of the radiative transfer. The basic details of our numerical code are given in Section 3. Results of the numerical simulations and their comparison with the observational data are given in Section 4. Summarizing our results in Section 5, we propose applications to observational studies and argue that the proposed concept of spectral formation at extremely low accretion luminosity can be used for detailed diagnostics of the “propeller” state in accretion and as a basis for alternative estimates of the NS magnetic field strength.

## 2 Model setup

Figure 1: Schematic picture of the theoretical model. The X-ray energy spectrum originates from the atmosphere of a NS with the upper layer overheated by low-level accretion. The accretion flow is stopped in the atmosphere due to collisions. Collisions result in the excitation of electrons to upper Landau levels. The subsequent radiative de-excitation of the electrons produces cyclotron photons. The cyclotron photons are partly reprocessed by magnetic Compton scattering and partly absorbed in the atmosphere. The reprocessed photons form the high-energy component of the spectrum, while the absorbed energy is released as thermal emission and forms the low-energy part of the spectrum.

### 2.1 Geometry of emitting region

Sub-critical mass accretion rates (see, e.g., Basko & Sunyaev 1976; Mushtukov et al. 2015a) result in the simplest geometry of the emitting regions: the X-ray photons are produced by hot spots at the NS surface. If the accretion luminosity is well below the critical value, $L\ll L_{\rm crit}$, the influence of the X-ray energy flux on the dynamics of the accretion process is insignificant (Mushtukov et al., 2015c). In the case of a magnetic field dominated by the dipole component, the area of the spots at the NS surface can be roughly estimated as

$S\approx 3\times 10^{9}\,\Lambda^{-7/8}\,m^{-13/20}\,R_{6}^{19/10}\,B_{12}^{-1/2}\,L_{37}^{2/5}\quad\mbox{cm}^{2},$ (1)

where $\Lambda$ is a constant which depends on the accretion flow geometry: $\Lambda<1$ for the case of accretion through the disc, with $\Lambda=0.5$ being a commonly used value (see, e.g., Ghosh & Lamb 1978; Lai 2014), $m$ is the NS mass measured in units of solar masses $M_{\odot}$, $R_{6}$ is the NS radius measured in units of $10^{6}\,{\rm cm}$, $B_{12}$ is the magnetic field strength at the pole of the NS measured in units of $10^{12}\,{\rm G}$, and $L_{37}$ is the accretion luminosity of the XRP measured in units of $10^{37}\,{\rm erg\ s^{-1}}$. The corresponding effective temperature of a hot spot can be estimated as

$T_{\rm eff}=\left(\frac{L}{2\sigma_{\rm SB}S}\right)^{1/4}=6.6\,\Lambda^{7/32}\,m^{13/80}\,R_{6}^{-19/40}\,B_{12}^{1/8}\,L_{37}^{3/20}\quad\mbox{keV},$ (2)

where $\sigma_{\rm SB}$ is the Stefan–Boltzmann constant and the factor of two accounts for the two hot spots. In the case of accretion onto weakly magnetized NSs, the protons of the accretion flow brake in the stellar atmosphere due to Coulomb interactions with atmospheric electrons (see Chapter 3 in Frank et al. 2002).
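Before turning to the magnetized case, a quick numerical illustration of Eqs. (1) and (2) may be useful. The minimal Python sketch below evaluates both scalings and cross-checks the effective temperature against the direct Stefan–Boltzmann form; the fiducial parameter values ($\Lambda=0.5$, $m=1.4$, $R_{6}=1.2$) are assumptions chosen here for illustration only.

```python
# Minimal sketch: evaluate the hot-spot area and effective temperature
# scalings of Eqs. (1) and (2). Parameter values are illustrative assumptions.
SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_PER_K = 8.617e-8   # Boltzmann constant, keV K^-1

def spot_area(Lambda=0.5, m=1.4, R6=1.2, B12=1.0, L37=1.0):
    """Hot-spot area in cm^2, Eq. (1)."""
    return 3e9 * Lambda**(-7/8) * m**(-13/20) * R6**(19/10) \
        * B12**(-1/2) * L37**(2/5)

def t_eff_scaling(Lambda=0.5, m=1.4, R6=1.2, B12=1.0, L37=1.0):
    """Effective temperature in keV, Eq. (2) scaling form."""
    return 6.6 * Lambda**(7/32) * m**(13/80) * R6**(-19/40) \
        * B12**(1/8) * L37**(3/20)

def t_eff_direct(L37=1.0, **kw):
    """Cross-check: T_eff = [L / (2 sigma_SB S)]^(1/4), two hot spots."""
    S = spot_area(L37=L37, **kw)
    T_K = (L37 * 1e37 / (2.0 * SIGMA_SB * S))**0.25
    return KEV_PER_K * T_K

if __name__ == "__main__":
    print(f"S     ~ {spot_area():.2e} cm^2")     # ~6e9 cm^2
    print(f"T_eff ~ {t_eff_scaling():.1f} keV")  # ~5.5 keV (scaling form)
    print(f"T_eff ~ {t_eff_direct():.1f} keV")   # ~5.3 keV (direct form)
```

The two forms agree to within a few per cent at the fiducial parameters, as expected for an approximate scaling relation.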
In the case of highly magnetized atmospheres, the physical picture is similar, but a significant fraction of the proton kinetic energy goes into the excitation of electrons to upper Landau levels (Nelson et al., 1995). Further de-excitation of the electrons results in the production of cyclotron photons at energy $E_{\rm cyc}\approx 11.6\,B_{12}\,{\rm keV}$ (this holds for $B\lesssim 10^{13}\,{\rm G}$; see, e.g., Harding & Preece 1987). The sources of cyclotron photons are expected to decay exponentially with optical depth, $\propto e^{-\tau/\tau_{\rm br}}$, where $\tau_{\rm br}$ is the typical Thomson optical depth at which the accretion flow is braked by collisions. The total number of cyclotron photons emitted in the atmosphere per ${\rm cm}^{2}$ per second due to the braking of the accretion flow can be estimated as

$\frac{1}{S}\frac{{\rm d}N_{\rm cyc}}{{\rm d}t}\approx 6.2\times 10^{35}\frac{L_{37}}{E_{\rm cyc,keV}S_{10}}\quad{\rm cm^{-2}\,s^{-1}},$ (3)

where $S_{10}$ is the area of a hot spot at the NS surface in units of $10^{10}\,{\rm cm^{2}}$.

The temperature structure of an accretion-heated, weakly magnetized atmosphere can be divided into two qualitatively different layers. Most of the energy is released in the deep, optically thick layers (if $\tau_{\rm br}>1$), where the plasma mass density is high enough that cooling by free-free emission can compensate the heating due to the braking of the accretion flow. The deep heated layer is close to isothermal, with a temperature close to the effective one (2). However, the upper rarefied layers of the atmosphere have too low a density to cool by free-free emission. As a result, the upper layers are cooled by Compton scattering and are overheated up to tens and even a hundred keV (see, e.g., Suleimanov et al. 2018). We assume that highly magnetized accretion-heated atmospheres have a qualitatively similar structure.

Thus, we consider a plane-parallel magnetized semi-infinite atmosphere with a thin overheated upper layer and an isothermal deeper layer (see Fig. 1). The temperatures of both parts are free parameters of our model, as is the optical thickness of the overheated slab. The magnetic field is taken to be perpendicular to the NS surface, which is a good approximation for regions located close to the magnetic poles of the NS. The resulting spectrum is computed by solving the radiative transfer equation using the Monte-Carlo method.

### 2.2 Radiative transfer

Both the opacity and the refractive index depend on photon polarization in strongly magnetized plasma and vacuum. As a result, the solution of the radiative transfer problem has to account for the polarization of photons. In the general case, the polarization of X-ray photons can be described in terms of the four Stokes parameters, and the radiative transfer equation turns into a set of four equations, one for each Stokes parameter. Because the plasma in a strong magnetic field is anisotropic and birefringent (Ginzburg, 1970), the polarization of an X-ray photon can vary along its trajectory. This makes the description of polarized radiative transfer by the Stokes parameters even more complicated. However, any X-ray photon can be represented as a linear composition of two orthogonal normal modes, which conserve their polarization state along their trajectories (see, e.g., Zheleznyakov 1977). The linear composition of the normal modes changes its polarization due to the difference in the phase velocities of the modes.
If the difference between the phase velocities of the normal modes is large enough, the radiative transfer problem simplifies and can be effectively solved for the two normal modes, i.e., the radiation can be described by the specific intensities in two normal polarization modes $I_{E}^{(s)}$, where $s=1$ ($s=2$) corresponds to the X-mode (O-mode). The ellipticity of the normal modes is determined by the plasma conditions (temperature, mass density and chemical composition; see, e.g., Ginzburg 1970; Kirk 1980) and by vacuum polarization (see Harding & Lai 2006 for a review). In this paper, we neglect the effects of vacuum polarization (see Appendix B), assume that the dielectric tensor in the atmosphere coincides with the dielectric tensor of cold plasma and that the normal modes are transverse, and use the approximate ellipticity of the modes affected by plasma effects only:

$K_{s}(E,\theta)\equiv-i\left(\frac{E_{x}}{E_{y}}\right)=\frac{2\cos\theta}{\frac{E_{\rm cyc}}{E}\sin^{2}\theta-(-1)^{s}\sqrt{\frac{E^{2}_{\rm cyc}}{E^{2}}\sin^{4}\theta+4\cos^{2}\theta}},$ (4)

where $E_{x}$ ($E_{y}$) is the photon’s electric field component along (perpendicular to) the $\mathbf{k}-\mathbf{B}$ plane, $\mathbf{k}$ is the vector of the photon momentum, $\mathbf{B}$ is the vector of the external magnetic field, and $\theta$ is the angle between the photon momentum and the $B$-field direction (Gnedin & Syunyaev, 1973).

The main processes of interaction between radiation and matter in our model are:

* magnetic Compton scattering,
* cyclotron absorption,
* bremsstrahlung affected by a strong magnetic field.

The radiative transfer equation can be written as

$\cos\theta\frac{{\rm d}I_{E}^{(s)}(\Omega)}{{\rm d}\tau_{\rm T}}=\frac{\alpha_{\rm abs}^{(s)}(\Omega)}{\alpha_{\rm T}}I_{E}^{(s)}(\Omega)-\sum\limits_{j=1}^{2}\int\limits_{0}^{\infty}{\rm d}E^{\prime}\int\limits_{(4\pi)}{\rm d}\Omega^{\prime}\left[R_{sj}(E,\Omega|E^{\prime},\Omega^{\prime})I_{E^{\prime}}^{(j)}(\Omega^{\prime})-R_{js}(E^{\prime},\Omega^{\prime}|E,\Omega)I_{E}^{(s)}(\Omega)\right]-\frac{\alpha_{\rm abs}^{(s)}(\Omega)}{\alpha_{\rm T}}\frac{B_{E}}{2}-S_{\rm ini}^{(s)}(E,\Omega),$ (5)

where $\tau_{\rm T}$ is the optical depth due to non-magnetic Thomson scattering, $\alpha_{\rm abs}^{(s)}$ is the coefficient of true absorption (due to free-free and cyclotron mechanisms), $\alpha_{\rm T}$ is the absorption coefficient due to non-magnetic Thomson scattering, and $B_{E}(T)$ is the Planck function. The redistribution function $R_{s\,s^{\prime}}(E,\Omega|E^{\prime},\Omega^{\prime})$ describes the redistribution of X-ray photons over energy, momentum and polarization state, and is related to the probability of a photon transition from energy $E^{\prime}$, direction given by solid angle $\Omega^{\prime}$ and polarization state $s^{\prime}$ to energy $E$, direction $\Omega$ and polarization state $s$ due to magnetic Compton scattering. The radiative transfer equation (5) neglects non-linear effects like induced scattering because they are expected to be negligible in low-luminosity states. The first term on the right-hand side of equation (5) describes absorption due to the bremsstrahlung and cyclotron mechanisms. With the second term, we account for the photon redistribution due to Compton scattering.
The third term introduces the thermal emission of photons, which is calculated under the assumption of local thermodynamic equilibrium (LTE). The last term gives the primary sources of cyclotron emission, which in our particular case are distributed exponentially, $S_{\rm ini}\propto e^{-\tau/\tau_{\rm br}}$.

Figure 2: The scheme of the Monte Carlo simulations performed in the paper.

Figure 3: $E\,F_{E}$ spectrum (in arbitrary units) of X-ray radiation leaving the atmosphere of a NS at very low mass accretion rates. The fiducial case is given by the red solid line and corresponds to an atmosphere described by the following set of parameters: $E_{\rm cyc}=50\,{\rm keV}$, $T_{\rm bot}=3\,{\rm keV}$, $T_{\rm up}=80\,{\rm keV}$, $\tau_{\rm up}=0.1$, $\tau_{\rm br}=1$, $Z=1$. Different panels show the influence of different parameters on the final X-ray energy spectrum. (a) The effect of the optical thickness $\tau_{\rm up}$ of the overheated upper layer. (b) The influence of the effective depth $\tau_{\rm br}$, where the accretion flow is stopped by collisions. (c) The influence of the temperature $T_{\rm up}$ of the overheated upper layer. (d) The influence of the temperature $T_{\rm bot}$ of the atmosphere below the overheated upper layer. (e) The influence of the chemical composition of the atmosphere. (f) The influence of the angle $\theta_{\rm B}$ between the magnetic field lines and the normal to the NS surface. Each run traces $2\times 10^{6}$ photons leaving the atmosphere.

#### 2.2.1 Compton scattering

The redistribution function due to Compton scattering in (2.2) is given by

$\displaystyle R_{s_{\rm f},s_{\rm i}}(E_{\rm f},\Omega_{\rm f}|E_{\rm i},\Omega_{\rm i})=\frac{1}{\sigma_{\rm T}}\frac{{\rm d}\sigma_{s_{\rm f},s_{\rm i}}}{{\rm d}\Omega_{\rm f}{\rm d}E_{\rm f}}([f_{\rm e}(p_{b})],E_{\rm i},\Omega_{\rm i},E_{\rm f},\Omega_{\rm f}),$ (6)

where the double differential cross section for scattering by an electron gas with momentum distribution $f_{\rm e}(p_{b})$ is

$\displaystyle\frac{{\rm d}\sigma_{s_{\rm f},s_{\rm i}}}{{\rm d}\Omega_{\rm f}{\rm d}E_{\rm f}}([f_{\rm e}(p_{b})],E_{\rm i},\Omega_{\rm i},E_{\rm f},\Omega_{\rm f})=\frac{{\rm d}\sigma_{s_{\rm f},s_{\rm i}}}{{\rm d}\Omega_{\rm f}}(p_{b}^{*},E_{\rm i},\Omega_{\rm i},\Omega_{\rm f})\left(\frac{{\rm d}p_{b}^{*}}{{\rm d}E_{\rm f}}\right)f_{\rm e}(p_{b}^{*}),$ (7)

the solid angles are related to the spherical coordinates describing the direction of photon motion $(\theta_{{\rm i},{\rm f}},\varphi_{{\rm i},{\rm f}})$ as ${\rm d}\Omega_{{\rm i},{\rm f}}=\sin\theta_{{\rm i},{\rm f}}\,{\rm d}\theta_{{\rm i},{\rm f}}\,{\rm d}\varphi_{{\rm i},{\rm f}}$, $p^{*}_{b}$ is the electron momentum required for the photon transition $\{E_{\rm i},\Omega_{\rm i}\}\longrightarrow\{E_{\rm f},\Omega_{\rm f}\}$,

$\displaystyle\frac{{\rm d}\sigma_{s_{\rm f},s_{\rm i}}}{{\rm d}\Omega_{\rm f}}(p_{b},E_{\rm i},\Omega_{\rm i},\Omega_{\rm f})=\frac{{\rm d}\sigma_{s_{\rm f},s_{\rm i}}}{{\rm d}\Omega_{\rm f}}(p_{b}=0,E^{*}_{\rm i},\theta^{*}_{\rm i},\varphi_{\rm i},\theta^{*}_{\rm f},\varphi_{\rm f})\,\frac{(1-\beta^{2})}{(1-\beta\cos\theta_{\rm f})^{2}}$ (8)

is the ordinary scattering cross section for an electron of momentum $p_{b}$ along the magnetic field lines, and

$\displaystyle E^{*}_{\rm i}=E_{\rm i}\gamma(1-\beta\cos\theta_{\rm i}),$ (9)

$\displaystyle\cos\theta^{*}_{\rm i,f}=\frac{\cos\theta_{\rm i,f}+\beta}{1+\beta\cos\theta_{\rm i,f}},$ (10)

$\displaystyle\gamma=[1+p_{b}^{2}/(m_{\rm e}c)^{2}]^{1/2}=(1-\beta^{2})^{-1/2}$ (11)

account for the Doppler effect and aberration due to the transition between different reference frames. Compton scattering is affected by a strong magnetic field in the atmosphere of a NS (Daugherty & Harding, 1986). The scattering cross section strongly depends on the photon energy $E$, momentum, and polarization state. The scattering of photons whose energy is close to the cyclotron energy $E_{\rm cyc}\approx 11.6\,(B/10^{12}\,{\rm G})\,{\rm keV}$ is resonant. In this paper, Compton scattering is considered in a non-relativistic manner (see e.g., Herold 1979). The differential cross sections of Compton scattering by an electron at rest are expressed through the complex scattering amplitudes $a^{\rm(p)}_{s_{\rm f}s_{\rm i}}$:

$\displaystyle\frac{{\rm d}\sigma_{s_{\rm f}s_{\rm i}}}{{\rm d}\Omega_{\rm f}}(p_{b}=0,E_{\rm i},\theta_{\rm i},\varphi_{\rm i},\theta_{\rm f},\varphi_{\rm f})=\frac{3}{32\pi}\sigma_{\rm T}|a^{\rm(p)}_{s_{\rm f}s_{\rm i}}|^{2},$ (12)

where $E_{\rm i}$ is the initial photon energy, and $s_{\rm i}$ and $s_{\rm f}$ denote the photon polarization state before and after the scattering, respectively. The complex amplitudes $a^{\rm(p)}_{s_{\rm f}s_{\rm i}}$ depend on the exact expression for the polarization modes. The amplitudes can be combined in a matrix

$\displaystyle\widehat{a}^{\rm(p)}=\left(\begin{array}{cc}a^{\rm(p)}_{\rm 11}&a^{\rm(p)}_{\rm 12}\\ a^{\rm(p)}_{\rm 21}&a^{\rm(p)}_{\rm 22}\end{array}\right),$ (15)

which is given by

$\displaystyle\widehat{a}^{\rm(p)}=\widehat{M}_{\rm pv}(E_{\rm f},\theta_{\rm f})\,\widehat{a}^{\rm(v)}\,\widehat{M}^{-1}_{\rm pv}(E_{\rm i},\theta_{\rm i}),$ (16)

where the unitary matrix is

$\displaystyle\widehat{M}_{\rm pv}=\frac{1}{\sqrt{1+|K_{2}|^{2}}}\left(\begin{array}{cc}-{\rm i}K_{2}&1\\ {\rm i}&K_{2}\end{array}\right)=\frac{1}{\sqrt{1+|K_{1}|^{2}}}\left(\begin{array}{cc}-{\rm i}&K_{1}\\ {\rm i}K_{1}&1\end{array}\right),$ (19)

and the matrix $\widehat{a}^{\rm(v)}$ is composed of the scattering amplitudes calculated for the linearly polarized vacuum modes (Herold 1979):

$\displaystyle a^{\rm(v)}_{\rm 11}=\frac{E_{\rm i}}{E_{\rm i}+E_{\rm cyc}}e^{{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})}+\frac{E_{\rm i}}{E_{\rm i}-E_{\rm cyc}}e^{-{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})},$ (23)

$\displaystyle a^{\rm(v)}_{\rm 22}=2\sin\theta_{\rm i}\sin\theta_{\rm f}+\cos\theta_{\rm i}\cos\theta_{\rm f}\left(\frac{E_{\rm i}}{E_{\rm i}+E_{\rm cyc}}e^{{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})}+\frac{E_{\rm i}}{E_{\rm i}-E_{\rm cyc}}e^{-{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})}\right),$

$\displaystyle\left(\begin{array}{c}a^{\rm(v)}_{\rm 12}\\ a^{\rm(v)}_{\rm 21}\end{array}\right)=\left(\begin{array}{c}-{\rm i}\cos\theta_{\rm i}\\ {\rm i}\cos\theta_{\rm f}\end{array}\right)\left(\frac{E_{\rm i}}{E_{\rm i}+E_{\rm cyc}}e^{{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})}-\frac{E_{\rm i}}{E_{\rm i}-E_{\rm cyc}}e^{-{\rm i}(\varphi_{\rm i}-\varphi_{\rm f})}\right).$ (28)

The thermal motion of electrons significantly affects the scattering both below the cyclotron resonance and at the resonance itself (the thermal broadening of the resonance is calculated according to the one-dimensional electron distribution, with the natural width of the Landau levels approximated following Pavlov et al. 1991).
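For concreteness, the cold-plasma ellipticity introduced above is straightforward to evaluate numerically. The short Python sketch below (with illustrative photon parameters, not values from our production tables) computes $K_{s}(E,\theta)$ for both normal modes and checks that $K_{1}K_{2}=-1$, as expected for two orthogonal modes:

```python
import numpy as np

def ellipticity(E, theta, E_cyc, s):
    """Cold-plasma ellipticity K_s(E, theta); s = 1 is the X-mode, s = 2 the O-mode."""
    b = (E_cyc / E) * np.sin(theta) ** 2
    r = np.sqrt((E_cyc / E) ** 2 * np.sin(theta) ** 4 + 4.0 * np.cos(theta) ** 2)
    return 2.0 * np.cos(theta) / (b - (-1) ** s * r)

# Illustrative values: E_cyc = 50 keV, photon at 30 keV, 45 degrees to the B-field
K1 = ellipticity(30.0, np.pi / 4, 50.0, s=1)
K2 = ellipticity(30.0, np.pi / 4, 50.0, s=2)
print(f"K_1 = {K1:+.3f}, K_2 = {K2:+.3f}, K_1*K_2 = {K1 * K2:+.3f}")  # K_1*K_2 -> -1
```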
In our Monte-Carlo code, we use pre-calculated tables, which describe the redistribution of photons over energy, momentum, and polarization states.

#### 2.2.2 Cyclotron absorption

Cyclotron absorption at the fundamental depends on the photon polarization state and is calculated according to Zheleznyakov 1977. In the case of $k_{\rm B}T_{\rm e}\ll m_{\rm e}c^{2}$, where $k_{\rm B}$ is the Boltzmann constant, $T_{\rm e}$ is the electron temperature, and $m_{\rm e}$ is the electron mass, the adopted absorption cross section for extraordinary photons is given by

$\displaystyle\sigma_{\rm s}^{(1)}\simeq 5.67\times 10^{3}\frac{\sigma_{\rm T}}{B_{12}}\frac{(1+\cos^{2}\theta)}{|\cos\theta|}\left(\frac{m_{\rm e}c^{2}}{k_{\rm B}T_{\rm e}}\right)^{1/2}\exp\left[-\frac{m_{\rm e}c^{2}}{2k_{\rm B}T_{\rm e}\cos^{2}\theta}\left(\frac{E-E_{\rm cyc}}{E_{\rm cyc}}\right)^{2}\right].$ (29)

The cross section of cyclotron absorption for O-mode photons is smaller: $\sigma_{\rm s}^{(2)}\approx\sigma_{\rm s}^{(1)}(k_{\rm B}T_{\rm e}/m_{\rm e}c^{2})$. In the case of $k_{\rm B}T_{\rm e}\sim m_{\rm e}c^{2}$, the absorption cross section is obtained numerically under the assumption that the distribution of electrons over the momentum $p_{b}$ on the ground Landau level is given by

$\displaystyle f_{\rm e}(p_{b})=\frac{1}{2}\frac{e^{-y\gamma}}{K_{1}(y)},$ (30)

where $\gamma=[1+(p_{b}/(m_{\rm e}c))^{2}]^{1/2}$ is the Lorentz factor, $y=m_{\rm e}c^{2}/(k_{\rm B}T_{\rm e})$ is the inverse dimensionless temperature, and $K_{1}$ is the modified Bessel function of the second kind (Mushtukov et al., in prep.). Cyclotron absorption at the fundamental results in the excitation of an electron from the ground Landau level to the first excited level. Because the de-excitation rate of electrons due to the emission of cyclotron photons is much larger than the de-excitation rate due to collisions between particles $r_{\rm coll}$ (Bonazzola et al., 1979), the majority of cyclotron absorption events are followed by almost immediate photon emission and can be treated as Compton scattering events (see e.g., Harding & Lai 2006). The probability $P_{\rm abs,true}$ that a cyclotron absorption event ends up as a true absorption can be estimated from the cyclotron $r_{\rm cyc}$ and collisional $r_{\rm coll}$ de-excitation rates:

$\displaystyle P_{\rm abs,true}\simeq\frac{r_{\rm coll}}{r_{\rm cyc}}\sim 1.7\times 10^{-7}n_{\rm e,21}B_{12}^{-7/2},$ (31)

where $n_{\rm e,21}=n_{\rm e}/10^{21}\,{\rm cm^{-3}}$ is the local number density of electrons, which in our calculations depends on the optical depth in the atmosphere.

#### 2.2.3 Free-free absorption

Free-free (bremsstrahlung) opacity depends on polarization, direction, and density. We calculate the opacity according to Lai & Ho 2003 (see Appendix A) and under the assumption that the ellipticity of the normal modes is given by (2.2). The mass density $\rho$ in the atmosphere is calculated under the assumption of hydrostatic equilibrium: ${\rm d}P/{\rm d}\xi=-\rho g$, where $P$ is the gas pressure, $\xi$ is the vertical coordinate in the atmosphere, and $g$ is the gravitational acceleration. For an atmosphere composed of a few layers of fixed temperature (here and below, we use the optical depth due to non-magnetic Thomson scattering, assuming an opacity $\kappa_{\rm T}=0.34\,{\rm cm^{2}\,g^{-1}}$), the mass density at optical depth $\tau$ is given by
$\displaystyle\rho(\tau)=\rho(\tau_{i})+40.6\,\frac{m}{T^{(i)}_{\rm keV}R_{6}^{2}}[\tau-\tau_{i}]\quad{\rm g\,cm^{-3}},$ (32)

where $\tau_{i}$ is the optical depth at the upper boundary of layer $i$ and $T^{(i)}_{\rm keV}$ is the temperature of that layer. The initial cyclotron photons are emitted at $\sim E_{\rm cyc}$ with thermal broadening, which depends on the direction with respect to the magnetic field lines. The angular distribution of the cyclotron photons is taken to be isotropic.

## 3 Monte Carlo code

We perform Monte Carlo simulations tracing X-ray photons in the plane-parallel atmosphere of a magnetized NS. The magnetic field strength and direction are fixed and assumed to be constant throughout the atmosphere, which is a reasonable assumption because of the small geometrical thickness of the atmosphere and the small size of the hot spots at the NS surface. Non-linear effects of radiative transfer are not important at low-luminosity states and are neglected in our simulations. By tracing X-ray photons through the atmosphere, we obtain angle-dependent energy spectra of the polarized radiation leaving the atmosphere (see Fig. 2). There are two sources of seed photons in the model: the cyclotron photons due to radiative de-excitation of electrons excited to upper Landau levels by collisions of the accretion flow with the atmosphere, and thermal photons. The sources of cyclotron photons are distributed exponentially in the atmosphere. The cyclotron photons are emitted close to the cyclotron energy within the thermally broadened line. The distribution of initial thermal photons is determined by the absorption coefficients in the atmosphere and the local temperature (see the third term in the radiative transfer equation 2.2). The transfer of X-ray photons originating from different initial sources is considered separately. For each simulation, we trace the histories of $N_{\rm cyc}^{({\rm i})}$ cyclotron photons and $N_{\rm th}^{({\rm i})}$ thermal photons. Because we account for free-free and cyclotron absorption in the atmosphere, the numbers of cyclotron and thermal photons leaving the atmosphere, $N_{\rm cyc}^{({\rm f})}$ and $N_{\rm th}^{({\rm f})}$, are smaller than the numbers of seed photons: $N_{\rm cyc}^{({\rm f})}\leq N_{\rm cyc}^{({\rm i})}$ and $N_{\rm th}^{({\rm f})}\leq N_{\rm th}^{({\rm i})}$. Simulating the radiative transfer, we aim to reach a certain number ($N_{\rm th}^{({\rm f})},N_{\rm cyc}^{({\rm f})}\sim 2\times 10^{6}$) of photons that leave the atmosphere. The results of the separately calculated radiative transfer problems are combined into final angle- and polarization-dependent spectra:

$\displaystyle\frac{{\rm d}N_{\rm cyc}^{({\rm f})}(E,\theta,s)}{{\rm d}E},\quad\quad\frac{{\rm d}N_{\rm th}^{({\rm f})}(E,\theta,s)}{{\rm d}E}.$ (33)

In order to construct the final spectrum accounting for both sources of seed photons, we normalize the contribution of the two sources:

$\displaystyle\frac{{\rm d}N^{({\rm f})}(E,\theta,s)}{{\rm d}E}=A_{1}\,\frac{{\rm d}N_{\rm cyc}^{({\rm f})}(E,\theta,s)}{{\rm d}E}+A_{2}\,\frac{{\rm d}N_{\rm th}^{({\rm f})}(E,\theta,s)}{{\rm d}E},$ (34)

where $A_{1}$ and $A_{2}$ are constants. The normalization is performed on the basis of energy conservation in the atmosphere. We start with the simulation of the radiative transfer of the cyclotron photons.
The accretion luminosity is proportional to the total energy of the seed cyclotron photons in the simulation:

$\displaystyle L_{\rm acc}\propto\sum_{j=1}^{N_{\rm cyc}^{\rm(i)}}E_{j,{\rm cyc}}^{\rm(i)},$ (35)

while the part of the luminosity due to thermal emission of the atmosphere is determined by the difference between the total energy of the seed cyclotron photons and the total energy of the reprocessed cyclotron photons leaving the atmosphere:

$\displaystyle\sum_{j=1}^{N_{\rm th}^{\rm(f)}}E_{j,{\rm th}}^{\rm(f)}=\left[\sum_{j=1}^{N_{\rm cyc}^{\rm(i)}}E_{j,{\rm cyc}}^{\rm(i)}-\sum_{j=1}^{N_{\rm cyc}^{\rm(f)}}E_{j,{\rm cyc}}^{\rm(f)}\right].$ (36)

Using conditions (35) and (36), we obtain the constants $A_{1}$ and $A_{2}$ in (34). In order to perform the Monte Carlo simulations and track the photons, we use a set of pre-calculated tables describing photon redistribution due to magnetic Compton scattering:

* Tables A give the total scattering cross section as a function of photon energy, polarization state before and after the scattering event, the angle between the $B$-field direction and the photon momentum, and the temperature and bulk velocity of the electron gas. For each combination of the initial and final polarization states of a photon, the tables are pre-calculated on a fixed grid in photon energy and angle $\theta_{\rm i}$, and for a fixed temperature and bulk velocity of the gas. To get the scattering cross section for a given photon energy and momentum, we use quadratic interpolation in the photon energy grid and a further quadratic interpolation in the angle grid.
* Tables B give the photon probabilities of being scattered into a certain segment of the solid angle $(\theta_{\rm f}+\Delta\theta_{\rm f},\varphi_{\rm f}+\Delta\varphi_{\rm f})$. The tables are pre-calculated on a grid of initial photon parameters (energy, polarization state, momentum) and for both possible final polarization states.

The steps of tracing a photon history are as follows (a schematic sketch of this loop is given after the list):

1. We make a choice of the seed photon origin: thermal emission of the atmosphere or cyclotron emission. If the photon is due to cyclotron emission, we get the optical depth where the photon is emitted, $\tau_{0}=-\tau_{\rm br}\ln X_{1}$, where $X_{1}\in(0;1)$ is a random number, and the photon energy, assuming that the photon is emitted within a thermally broadened cyclotron line. If the photon is due to thermal emission, we get the optical depth of its emission in the atmosphere from pre-calculated cumulative distribution functions of photon emission, accounting for the assumed temperature structure in the atmosphere and the free-free absorption coefficient.
2. We get the free path of the photon, accounting for the scattering and absorption opacity.
3. Using the initial coordinate of the photon and the free path length, we get the new coordinate of the photon, where it is scattered or absorbed. If the new coordinate is located outside the atmosphere, the photon contributes to the spectrum of X-ray radiation leaving the atmosphere, and we return to step 1. If the photon is still in the atmosphere, we move on to step 4.
4. Comparing the cross sections of Compton scattering, free-free, and cyclotron absorption, we specify the elementary process at the new photon coordinate. If the photon is absorbed, we stop tracing the history of the photon and start tracing a new photon, i.e., return to step 1. If the photon is scattered by electrons, we draw its new energy, momentum direction, and polarization state from the pre-calculated tables, and return to step 2.
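As a structural illustration of steps 1-4, the following minimal Python sketch traces photons through a semi-infinite, plane-parallel medium. It is schematic only: the helper routines (`sample_seed_photon`, `free_path`, `choose_process`) are hypothetical placeholders for the pre-calculated tables and cumulative distribution functions described above, and the branching probabilities are illustrative rather than physical.

```python
import numpy as np

rng = np.random.default_rng(42)
TAU_BR = 1.0        # typical braking depth of the accretion flow (fiducial value)
N_TARGET = 10_000   # number of photons that must leave the atmosphere

def sample_seed_photon(origin):
    """Step 1: draw the seed optical depth and direction (schematic)."""
    if origin == "cyclotron":
        tau0 = -TAU_BR * np.log(rng.uniform())  # tau_0 = -tau_br ln X_1
    else:  # thermal seed: placeholder for sampling the pre-computed CDF
        tau0 = rng.uniform(0.0, 20.0)
    mu = rng.uniform(-1.0, 1.0)                 # cos(theta); isotropic placeholder
    return tau0, mu

def free_path():
    """Step 2: optical-depth path to the next interaction (placeholder)."""
    return -np.log(rng.uniform())

def choose_process():
    """Step 4: pick the elementary process; the weights are illustrative only."""
    return rng.choice(["scatter", "absorb"], p=[0.9, 0.1])

n_escaped = 0
while n_escaped < N_TARGET:
    tau, mu = sample_seed_photon(rng.choice(["cyclotron", "thermal"]))
    while True:
        tau -= free_path() * mu      # step 3: move; mu > 0 heads toward the surface
        if tau <= 0.0:               # photon leaves the atmosphere
            n_escaped += 1           # here it would be binned into the spectrum
            break
        if choose_process() == "absorb":
            break                    # true absorption: start a new photon (step 1)
        mu = rng.uniform(-1.0, 1.0)  # scattering: redraw direction (tables in practice)

print(f"escaped photons: {n_escaped}")
```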
## 4 Results of numerical simulations

### 4.1 Influence of NS atmosphere conditions on the X-ray spectra

For a fixed local magnetic field strength, there are five main parameters of the performed numerical simulations and the resulting spectra of X-ray photons. These are (i) the optical thickness of the overheated upper layer due to Thomson scattering $\tau_{\rm up}$, (ii) the typical length of accretion flow braking in the atmosphere measured in optical depth due to Thomson scattering $\tau_{\rm br}$, (iii) the temperature of the atmosphere under the overheated upper layer $T_{\rm bot}$, (iv) the temperature of the overheated upper layer $T_{\rm up}$, and (v) the chemical composition of the atmosphere given by the atomic number $Z$. Here we investigate how these parameters affect the spectrum. To separate the effects of the different parameters, we compare the results of numerical simulations with the results based on the following set of fiducial parameters: $T_{\rm bot}=3\,{\rm keV}$, $T_{\rm up}=80\,{\rm keV}$, $\tau_{\rm up}=0.1$, $\tau_{\rm br}=1$, $Z=1$, $\theta_{\rm B}=0$ (red line in Fig. 3 a-f). In most cases, the X-ray energy spectrum consists of two components. The low-energy component is a result of black-body radiation comptonized by electrons in the atmosphere. The high-energy component is a result of the initial emission of cyclotron photons and their further comptonization by electrons. The multiple scatterings of cyclotron photons are strongly affected by the resonance at the cyclotron energy, broadened by the thermal motion. X-ray photons hardly escape the atmosphere at energies close to the cyclotron energy, and because of that, the photons tend to escape in the red and blue wings of the cyclotron line. Note that thermal emission also contributes to the initial photons at the cyclotron energy because free-free absorption is resonant at $E_{\rm cyc}$ in the X-mode. Comparing the results of different numerical simulations, we can draw some conclusions: (a) A larger optical thickness of the overheated upper layer $\tau_{\rm up}$ does not affect much the low-energy part of the X-ray spectrum, but it influences the high-energy component (see Fig. 3a) by affecting the photon redistribution around the cyclotron line. (b) A smaller optical depth of accretion flow braking $\tau_{\rm br}$ leads to a stronger high-energy component of the X-ray spectrum (see Fig. 3b). This is natural because at smaller $\tau_{\rm br}$ it becomes easier for cyclotron photons to leave the atmosphere, starting their diffusion from a smaller optical depth. Additionally, a smaller optical depth of the accretion flow braking results in a smaller fraction of absorbed cyclotron photons contributing to the thermal low-energy part of the X-ray spectrum. (c) The overheated upper layer of the atmosphere affects the high-energy end of the X-ray spectrum. A lower temperature of the upper layer $T_{\rm up}$ makes the blue wing of the cyclotron line weaker (see Fig. 3c). If the temperature of the upper layer is much smaller than the cyclotron energy $E_{\rm cyc}$, most of the cyclotron photons are scattered into the red wing of the line and leave the atmosphere at $E\lesssim E_{\rm cyc}$. (d) The temperature $T_{\rm bot}$ of the atmosphere below the overheated upper layer shapes the thermal radiation of the atmosphere and affects the low-energy part of the X-ray spectrum. An increase in the temperature of the bottom atmosphere results in a corresponding shift of the low-energy component (see Fig. 3d).
(e) The chemical composition of the atmosphere affects the cross section of free-free absorption: the larger the atomic number, the larger the cross section of free-free absorption. Because the low-energy component is dominated by the thermal emission of the atmosphere, this component is affected more strongly by variations in the free-free absorption coefficient. Specifically, we see that the energy spectra tend to be slightly suppressed at low energies for larger effective atomic numbers $Z$ (see Fig. 3e). (f) The direction of the magnetic field with respect to the NS surface does not affect much the final spectra integrated over the solid angle (see Fig. 3f). In our simulations, we see only a slight decrease of the width of the absorption feature at $E\sim E_{\rm cyc}$. This decrease is likely due to the fact that the thermal broadening of the cyclotron resonances in the Compton scattering cross section is weaker for photons propagating across the field direction.

Figure 4: Specific intensity $I_{E}$ at the stellar surface in different directions, given by the angle $\theta$ between the local $B$-field direction and the photon momentum. Different lines are given for different angles $\theta=0$ (solid), $0.125\pi$ (dashed), $0.25\pi$ (dotted), $0.375\pi$ (dashed-dotted). Note that the intensity in the red and blue wings of the cyclotron line is strongly variable. Parameters for the simulated spectrum: $E_{\rm cyc}=50\,{\rm keV}$, $T_{\rm bot}=3\,{\rm keV}$, $T_{\rm up}=80\,{\rm keV}$, $\tau_{\rm up}=0.1$, $\tau_{\rm br}=1$, $Z=1$.

The specific intensity of X-ray radiation leaving the atmosphere is angle dependent (see Fig. 4). In particular, a strong angular dependence of the specific intensity in the red and blue wings of the cyclotron line is expected. The intensity integrated over energies forms a pencil-beam diagram, which is slightly suppressed in the direction orthogonal to the stellar surface. This phenomenon is natural for an atmosphere with an overheated upper layer: the contribution of the overheated upper layer to the intensity is smaller in the direction perpendicular to the stellar surface. The X-ray energy flux leaving the atmosphere is polarization dependent (see Fig. 5). The polarization dependence is particularly strong near the cyclotron energy, which is natural because the strength of the cyclotron resonance, and even its existence, depends on the polarization state of a photon. At low energies $E\ll E_{\rm cyc}$, the flux is dominated by X-mode photons because the scattering cross section is smaller for this polarization state (Herold, 1979; Daugherty & Harding, 1986; Mushtukov et al., 2016). However, the difference in the X-ray energy flux between the polarization states is not dramatic because of the inverse temperature profile in the atmosphere: the upper layer is assumed to be much hotter than the underlying atmosphere. Note, however, that exact predictions for the polarization require a detailed analysis of the effects arising from vacuum polarization and the complicated behavior of the plasma dielectric tensor at high temperatures. The detailed analysis of these effects is beyond the scope of this paper and will be discussed in a separate publication.

Figure 5: X-ray energy spectra at the NS surface in the X-mode (blue dashed line) and O-mode (red dashed-dotted line), and the total spectrum (black solid line). Parameters for the simulated spectrum: $E_{\rm cyc}=50\,{\rm keV}$, $T_{\rm bot}=3\,{\rm keV}$, $T_{\rm up}=80\,{\rm keV}$, $\tau_{\rm up}=0.1$, $\tau_{\rm br}=1$, $Z=1$.
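The narrowing noted in point (f) can be quantified directly from the Gaussian factor in the cyclotron absorption cross section (29), whose $1/e$ half-width is $\Delta E\approx E_{\rm cyc}\,|\cos\theta|\,(2k_{\rm B}T_{\rm e}/m_{\rm e}c^{2})^{1/2}$. A minimal sketch for the fiducial overheated layer (illustrative only, not part of the simulation code):

```python
import numpy as np

MEC2_KEV = 511.0  # electron rest energy in keV

def thermal_width_keV(E_cyc_keV, T_e_keV, theta):
    """1/e half-width of the Doppler-broadened cyclotron resonance (from eq. 29)."""
    return E_cyc_keV * abs(np.cos(theta)) * np.sqrt(2.0 * T_e_keV / MEC2_KEV)

# Fiducial overheated layer: E_cyc = 50 keV, T_up = 80 keV
for theta_deg in (0, 45, 80):
    w = thermal_width_keV(50.0, 80.0, np.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg: Delta E ~ {w:4.1f} keV")
```

The width collapses for propagation across the field, consistent with the slight narrowing of the absorption feature seen for inclined magnetic fields.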
### 4.2 Comparison to the observational data

In order to verify our model, we compared the results of the simulations with the data obtained during a low-luminosity state of the transient XRP A 0535+262 with $L_{\rm X}=7\times 10^{34}\,{\rm erg\,s^{-1}}$. The data were adopted from Tsygankov et al. (2019b) and cover a broad energy band from 0.3 to 79 keV using the Swift/XRT and NuSTAR instruments. In Fig. 6 we show the model calculated for the following set of parameters (red line):

$\displaystyle E_{\rm cyc}=48\,{\rm keV},\quad\quad\quad Z=1,\quad\quad\quad\theta_{\rm B}=0,$
$\displaystyle T_{\rm bot}=3.5\,{\rm keV},\quad\quad\quad T_{\rm up}=100\,{\rm keV},$
$\displaystyle\tau_{\rm up}=0.1,\quad\quad\quad\tau_{\rm br}=0.5.$

The theoretical model represents the spectrum integrated over the solid angle at the NS surface and is based on a Monte Carlo simulation with $2\times 10^{6}$ photons leaving the atmosphere. In order to compare the theoretical model with the observed X-ray spectra, we have accounted for the spectral changes due to the gravitational redshift. We assume that the photon energy detected by a distant observer, $E_{\infty}$, is related to the photon energy at the NS surface $E$ as

$\displaystyle E_{\infty}=E(1-u)^{1/2},$ (37)

where $u=R_{\rm S}/R$ and the Schwarzschild radius is $R_{\rm S}\approx 3m\,{\rm km}$, with $m$ the NS mass in solar masses. The mass and radius of the NS were taken to be $M=1.4\,M_{\odot}$ and $R=12\,{\rm km}$. As can be seen, our theoretical predictions are able to describe the complex spectral shape of the source, including all observed features. We note, however, that our theoretical model shows a lack of X-ray photons in the red wing of the cyclotron line at energies $\sim 15-20\,{\rm keV}$. It is hard to eliminate this discrepancy in the energy spectrum integrated over the solid angle. We suppose that this problem can be solved if one accounts for the exact geometry of the NS rotation in the observer's reference frame and the precise process of pulse-profile formation. We also note that the radiative transfer in the high-energy part of the X-ray spectrum can be affected by vacuum polarization (see Appendix B), which was not taken into account in our simulations. This analysis is beyond the scope of the paper and a matter of further investigation, to be discussed in a separate publication.

Figure 6: Observed $E\,F_{E}$ spectrum of A 0535+262 at an accretion luminosity of $\sim 7\times 10^{34}\,{\rm erg\,s^{-1}}$, given by black circles (NuSTAR FPMA and FPMB data) and squares (Swift/XRT). The simulated spectrum is represented by the red line. Parameters for the simulated spectrum: $E^{\infty}_{\rm cyc}=39\,{\rm keV}$, $T_{\rm bot}=2.8\,{\rm keV}$, $T_{\rm up}=100\,{\rm keV}$, $\tau_{\rm up}=0.1$, $\tau_{\rm br}=0.5$, $Z=1$. The run traces $5\times 10^{5}$ photons leaving the atmosphere.

## 5 Summary and Discussion

### 5.1 Spectra formation at low mass accretion rates

We performed numerical simulations of spectra formation in XRPs in the very low-luminosity state, when the interaction between radiation and the accretion flow above the NS surface affects neither the X-ray spectra nor the dynamics of the accretion flow. Our simulations are consistent with a physical model (see Fig. 1) in which the accretion flow is braked in the upper layers of the NS atmosphere due to collisions between particles, and most of the kinetic energy is released initially in the form of cyclotron photons. The spectrum leaving the atmosphere of a NS is shaped by radiative transfer, strongly affected by magnetic Compton scattering.
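For reference, the redshift correction (37) is what connects the surface cyclotron energy of the fit ($E_{\rm cyc}=48\,$keV) to the observed-frame value quoted in the caption of Fig. 6 ($E^{\infty}_{\rm cyc}=39\,$keV); a one-line check, assuming $M=1.4\,M_{\odot}$ and $R=12\,$km:

```python
def observed_energy(E_surface_keV, M_sun=1.4, R_km=12.0):
    """E_inf = E (1 - R_S/R)^{1/2}, eq. (37), with R_S ~ 2.95 (M/M_sun) km."""
    R_S = 2.95 * M_sun
    return E_surface_keV * (1.0 - R_S / R_km) ** 0.5

print(f"E_cyc = 48.0 keV at the surface -> {observed_energy(48.0):.1f} keV at infinity")
# prints ~38.9 keV, i.e., the ~39 keV quoted for the distant observer
```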
The essential ingredient of the model is the overheated upper layer of the NS atmosphere, proposed earlier by Suleimanov et al. 2018 for the case of low-level accretion onto weakly magnetized NSs. The simulated radiative transfer in the atmosphere was performed under the assumption of LTE and accounts for Compton scattering of X-ray photons by thermally distributed electrons, as well as cyclotron and free-free absorption. The two components in the spectrum correspond to comptonized thermal radiation (low-energy hump) and comptonized cyclotron photons (high-energy hump), the latter originating from collisions of accreting particles with the electrons in the NS atmosphere and the subsequent radiative transitions of electrons to the ground Landau level. The absorption feature on top of the high-energy hump is due to the resonant scattering of X-ray photons at the cyclotron energy, which forces cyclotron photons to leave the atmosphere in the wings of the cyclotron line. Using the constructed model, it was possible to reproduce the observed spectrum of the X-ray pulsar A 0535+262 at a very low-luminosity state (see Fig. 6, Tsygankov et al. 2019b). The qualitative agreement between the simulated and observed X-ray spectra confirms the assumptions of the underlying physical model. We argue that two-component spectra should be typical of low-level accretion onto strongly magnetized NSs.

### 5.2 Applications to the observational studies

#### 5.2.1 Investigation of the "propeller" state

A decrease of the mass accretion rate in XRPs down to very low values results in the transition of the source either to (i) the "propeller" state, when the accretion flow cannot penetrate through the centrifugal barrier set up by the rotating magnetosphere of the NS (see, e.g., Illarionov & Sunyaev 1975; Lipunov 1987; Ustyugova et al. 2006), or (ii) to the regime of stable accretion from a cold disc (see Tsygankov et al. 2017). The "propeller" effect was recently detected in a few XRPs (see e.g., Tsygankov et al., 2016; Lutovinov et al., 2017). Moreover, in some sources, transitions into the "propeller" state were discovered to be accompanied by dramatic changes of the X-ray energy spectra (Tsygankov et al., 2016). Specifically, in the energy range below $\sim 10$ keV, the spectra were shown to become significantly softer, with the shape changing from a power law to a black body with a typical temperature of around 0.5 keV. However, it is still unknown whether the centrifugal barrier blocks the accretion process entirely, with the detected soft X-ray spectra being observational evidence of a cooling NS surface, or whether leakage of matter through the barrier is still possible and responsible for some fraction of the observed emission (Wijnands & Degenaar, 2016; Rouco Escorial et al., 2017). Our theoretical model of spectra formation provides a natural way to distinguish low-level accretion from a cooling NS surface. The hard component of the X-ray spectrum is a direct result and a specific feature of the accretion process. Consequently, low-level accretion in the case of leakage through the centrifugal barrier should result in two-component X-ray spectra, while the "propeller" state without penetration of material through the barrier should result in single-hump soft spectra.

#### 5.2.2 Measurements of magnetic field strength

Two-component X-ray energy spectra at low mass accretion rates provide a way to estimate the magnetic field strength at the NS surface.
Indeed, the hard component of the X-ray spectrum is formed around the cyclotron energy, which is directly related to the field strength: $E_{\rm cyc}\approx 11.6\,B_{12}\,$keV. Thus, the detection of the high-energy hump provides a way to estimate the cyclotron energy even when the cyclotron absorption feature is not seen in the source spectrum. For instance, the spectrum of X Persei (Di Salvo et al., 1998), according to our model, implies a cyclotron energy $E_{\rm cyc}\gtrsim 100\,{\rm keV}$ and a corresponding magnetic field strength $B\gtrsim 10^{13}\,{\rm G}$. Note that this estimate is consistent with the results based on timing analyses and torque models applied to this particular source (Doroshenko et al., 2012).

## Acknowledgements

This work was supported by the Netherlands Organization for Scientific Research Veni Fellowship (AAM), the Väisälä Foundation (SST) and the Academy of Finland travel grant 324550. VFS thanks Deutsche Forschungsgemeinschaft (DFG) for financial support (grant WE 1312/51-1). The authors also thank the Russian Science Foundation (grant 19-12-00423) for financial support. We are grateful to an anonymous referee for a number of useful comments and suggestions which helped us improve the paper.

## Data availability

The calculations presented in this paper were performed using a private code developed and owned by the corresponding author; please contact him with any requests or questions about it. Data appearing in the figures are available upon request. The observational data used in the manuscript are adopted from those reported in Tsygankov et al. 2019b.

## References

* Basko M. M., Sunyaev R. A., 1976, MNRAS, 175, 395
* Becker P. A., Wolff M. T., 2007, ApJ, 654, 435
* Becker P. A., et al., 2012, A&A, 544, A123
* Bonazzola S., Heyvaerts J., Puget J. L., 1979, A&A, 78, 53
* Daugherty J. K., Harding A. K., 1986, ApJ, 309, 362
* Di Salvo T., Burderi L., Robba N. R., Guainazzi M., 1998, ApJ, 509, 897
* Doroshenko V., 2020, MNRAS, 491, 1857
* Doroshenko V., Santangelo A., Kreykenbohm I., Doroshenko R., 2012, A&A, 540, L1
* Doroshenko V., Santangelo A., Tsygankov S., Ji L., 2021, arXiv e-prints
* Filippova E. V., Tsygankov S. S., Lutovinov A. A., Sunyaev R. A., 2005, Astronomy Letters, 31, 729
* Frank J., King A., Raine D. J., 2002, Accretion Power in Astrophysics: Third Edition
* Ghosh P., Lamb F. K., 1978, ApJ, 223, L83
* Ginzburg V. L., 1970, The Propagation of Electromagnetic Waves in Plasmas
* Gnedin Y. N., Syunyaev R. A., 1973, Zhurnal Eksperimentalnoi i Teoreticheskoi Fiziki, 65, 102
* Gornostaev M. I., 2021, MNRAS, 501, 564
* Harding A. K., Lai D., 2006, Reports on Progress in Physics, 69, 2631
* Harding A. K., Preece R., 1987, ApJ, 319, 939
* Herold H., 1979, Phys. Rev. D, 19, 2868
* Illarionov A. F., Sunyaev R. A., 1975, A&A, 39, 185
* Inoue H., 1975, PASJ, 27, 311
* Kirk J. G., 1980, Plasma Physics, 22, 639
* Lai D., 2014, in European Physical Journal Web of Conferences, p. 01001 (arXiv:1402.1903), doi:10.1051/epjconf/20136401001
* Lai D., Ho W. C. G., 2003, ApJ, 588, 962
* Lipunov V. M., 1987, Ap&SS, 132, 1
* Lutovinov A. A., Tsygankov S. S., Krivonos R. A., Molkov S. V., Poutanen J., 2017, ApJ, 834, 209
* Lutovinov A., et al., 2021, arXiv e-prints, p. arXiv:2103.05728
* Lyubarsky Y. E., Sunyaev R. A., 1982, Technical report, Comptonization in radiation-dominated shocks and spectra of X-ray pulsars
* Mushtukov A. A., Suleimanov V. F., Tsygankov S. S., Poutanen J., 2015a, MNRAS, 447, 1847
* Mushtukov A. A., Suleimanov V. F., Tsygankov S. S., Poutanen J., 2015b, MNRAS, 454, 2539
* Mushtukov A. A., Tsygankov S. S., Serber A. V., Suleimanov V. F., Poutanen J., 2015c, MNRAS, 454, 2714
* Mushtukov A. A., Nagirner D. I., Poutanen J., 2016, Phys. Rev. D, 93, 105003
* Nagase F., 1989, PASJ, 41, 1
* Nelson R. W., Wang J. C. L., Salpeter E. E., Wasserman I., 1995, ApJ, 438, L99
* Pavlov G. G., Bezchastnov V. G., Meszaros P., Alexander S. G., 1991, ApJ, 380, 541
* Reig P., 2011, Ap&SS, 332, 1
* Rouco Escorial A., Bak Nielsen A. S., Wijnands R., Cavecchi Y., Degenaar N., Patruno A., 2017, MNRAS, 472, 1802
* Suleimanov V. F., Pavlov G. G., Werner K., 2010, ApJ, 714, 630
* Suleimanov V. F., Poutanen J., Werner K., 2018, A&A, 619, A114
* Tsygankov S. S., Lutovinov A. A., Doroshenko V., Mushtukov A. A., Suleimanov V., Poutanen J., 2016, A&A, 593, A16
* Tsygankov S. S., Mushtukov A. A., Suleimanov V. F., Doroshenko V., Abolmasov P. K., Lutovinov A. A., Poutanen J., 2017, A&A, 608, A17
* Tsygankov S. S., Rouco Escorial A., Suleimanov V. F., Mushtukov A. A., Doroshenko V., Lutovinov A. A., Wijnands R., Poutanen J., 2019a, MNRAS, 483, L144
* Tsygankov S. S., Doroshenko V., Mushtukov A. A., Suleimanov V. F., Lutovinov A. A., Poutanen J., 2019b, MNRAS, 487, L30
* Ustyugova G. V., Koldoba A. V., Romanova M. M., Lovelace R. V. E., 2006, ApJ, 646, 304
* Walter R., Lutovinov A. A., Bozzo E., Tsygankov S. S., 2015, A&ARv, 23, 2
* Wang Y. M., Frank J., 1981, A&A, 93, 255
* Wijnands R., Degenaar N., 2016, MNRAS, 463, L46
* Zheleznyakov V. V., 1977, Electromagnetic Waves in Cosmic Plasma: Generation and Propagation

## Appendix A Free-free absorption in a strong magnetic field

The opacity due to free-free absorption is calculated in our simulations according to Lai & Ho 2003. The opacity depends on the magnetic field strength, the polarization of the X-ray photons, their energy, and the direction of their momentum with respect to the local direction of the $B$-field.
The opacity for mode $j$ of ellipticity $K_{j}$ can be written as

$\displaystyle\kappa_{j}^{\rm ff}=\kappa_{+}|e_{+}^{j}|^{2}+\kappa_{-}|e_{-}^{j}|^{2}+\kappa_{0}|e_{0}^{j}|^{2},$ (38)

where $e_{0}\,(=e_{z})$ and $e_{\pm}$ are the spherical components of the photon's unit polarization vector, with

$\displaystyle|e_{\pm}^{j}|^{2}=\left|\frac{1}{\sqrt{2}}(e_{x}^{j}\pm{\rm i}e_{y}^{j})\right|^{2}=\frac{[1\pm(K_{j}\cos\theta+K_{z,j}\sin\theta)]^{2}}{2(1+K_{j}^{2}+K_{z,j}^{2})},$

$\displaystyle|e_{z}^{j}|^{2}=\frac{(K_{j}\sin\theta-K_{z,j}\cos\theta)^{2}}{1+K_{j}^{2}+K_{z,j}^{2}},$ (40)

where the ellipticities $K_{j}$ are given by (2.2) and $K_{z,j}$ is taken to be $0$. The coefficients in (38) are given by

$\displaystyle\kappa_{\pm}=\frac{\omega}{c\rho}v_{e}\Lambda_{\pm}\gamma_{ei}^{\perp},\quad\quad\kappa_{0}=\frac{\omega}{c\rho}v_{e}\gamma_{ei}^{\parallel},$ (41)

where

$\displaystyle\Lambda_{\pm}=[(1\pm u_{e}^{1/2})^{2}(1\mp u_{i}^{1/2})^{2}+\gamma_{\pm}^{2}]^{-1},$ (42)

$\displaystyle\gamma_{\pm}=\gamma_{ei}^{\perp}+(1\pm u_{e}^{1/2})\gamma_{ri}+(1\mp u_{i}^{1/2})\gamma_{re}.$ (43)

The dimensionless quantities are

$\displaystyle u_{e}=\left(\frac{E_{\rm cyc}}{E}\right)^{2},\quad u_{i}=\left(\frac{E_{Bi}}{E}\right)^{2},\quad v_{e}=\left(\frac{E_{pe}}{E}\right)^{2},$ (44)

where the energies corresponding to the electron cyclotron frequency, the ion cyclotron frequency, and the electron plasma frequency are given by

$\displaystyle E_{\rm cyc}=11.58\,B_{12}\,{\rm keV},$ (45)
$\displaystyle E_{Bi}=6.305\times 10^{-3}\,B_{12}\left(\frac{Z}{A}\right)\,{\rm keV},$ (46)
$\displaystyle E_{pe}=2.871\times 10^{-2}\left(\frac{Z}{A}\right)^{1/2}\rho^{1/2}\,{\rm keV}.$ (47)

The dimensionless damping rates due to electron-ion collisions and photon emission by electrons and ions are given by

$\displaystyle\gamma_{ei}^{\perp,\parallel}=4.55\times 10^{-8}\,\frac{Z^{2}n_{i,21}}{T_{\rm keV}^{1/2}E_{\rm keV}^{2}}\left(1-e^{-E/k_{\rm B}T}\right)g^{\rm ff}_{\perp,\parallel},$
$\displaystyle\gamma_{re}=9.52\times 10^{-6}\,E_{\rm keV},$ (48)
$\displaystyle\gamma_{ri}=7.1\times 10^{-7}\,\frac{Z^{2}}{A}\,E_{\rm keV},$

where $g^{\rm ff}_{\perp,\parallel}$ are magnetic Gaunt factors, which in our simulations are calculated according to Suleimanov et al. 2010.

## Appendix B On the influence of vacuum polarization

Assuming a constant temperature $T$ in the atmosphere, one can obtain the dependence of the mass density on the vertical coordinate $\xi$:

$\displaystyle\rho(\xi)=\rho(\xi_{0})\exp\left[\frac{gm_{\rm p}}{k_{\rm B}T}(\xi-\xi_{0})\right],$ (49)

where $g\simeq 1.3\times 10^{14}\,mR_{6}^{-2}\,{\rm cm\,s^{-2}}$ is the acceleration due to gravity at the NS surface. If the opacity is given by the Thomson opacity $\kappa_{\rm T}=0.34\,{\rm cm^{2}\,g^{-1}}$, the local mass density is related to the optical depth in the atmosphere as

$\displaystyle\rho=40.6\,\tau_{\rm T}\frac{m}{T_{\rm keV}R_{6}^{2}}\quad{\rm g\,cm^{-3}}.$ (50)

The critical mass density, at which the contribution of vacuum polarization to the dielectric tensor becomes comparable to that of the plasma, can be estimated as (Lai & Ho, 2003)

$\displaystyle\rho_{\rm V}=9.64\times 10^{-5}\,Y_{\rm e}^{-1}B_{12}^{2}E_{\rm keV}^{2}f^{-2}\quad{\rm g\,cm^{-3}},$ (51)

where $Y_{\rm e}=Z/A$ and $f\sim 1$.
At $\rho\ll\rho_{\rm V}$, the dielectric tensor is dominated by vacuum effects, while at $\rho\gg\rho_{\rm V}$ it is dominated by plasma effects. Using (50) and (51), we get the optical depth due to Thomson scattering corresponding to $\rho_{\rm V}$:

$\displaystyle\tau_{\rm T,V}=2.4\times 10^{-6}\,Y_{\rm e}^{-1}B_{12}^{2}T_{\rm keV}E_{\rm keV}^{2}f^{-2}m^{-1}R_{6}^{2}.$ (52)

Therefore, for the physical conditions expected in XRP A 0535+262, we get $\tau_{\rm T,V}\sim 3\times 10^{-5}\,T_{\rm keV}E_{\rm keV}^{2}$, which means that the polarization of X-ray photons in the low-energy hump is well described by (2.2), while the polarization of X-ray photons in the high-energy hump might be affected by the effects of vacuum polarization; this will be investigated in a separate publication.
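As a numerical illustration of these estimates (a sketch only, with $B_{12}$ inferred from $E_{\rm cyc}=48\,$keV and the remaining parameters taken from the spectral fit of Section 4.2):

```python
import numpy as np

def rho_V(B12, E_keV, Ye=1.0, f=1.0):
    """Critical density of eq. (51) in g/cm^3; plasma dominates for rho >> rho_V."""
    return 9.64e-5 * B12**2 * E_keV**2 / (Ye * f**2)

def tau_TV(B12, E_keV, T_keV, m=1.4, R6=1.2, Ye=1.0, f=1.0):
    """Thomson optical depth of eq. (52) at which rho(tau) = rho_V."""
    return 2.4e-6 * B12**2 * T_keV * E_keV**2 * R6**2 / (Ye * f**2 * m)

B12 = 48.0 / 11.6  # field strength implied by E_cyc = 48 keV
for E in (5.0, 50.0):  # representative of the low- and high-energy humps
    print(f"E = {E:4.1f} keV: rho_V = {rho_V(B12, E):.2e} g/cm^3, "
          f"tau_T,V = {tau_TV(B12, E, T_keV=3.5):.2e}")
```

For the low-energy hump, $\tau_{\rm T,V}\ll 1$, so essentially the whole scattering region is plasma-dominated; for photons near the cyclotron energy, $\tau_{\rm T,V}$ reaches depths comparable to the overheated layer and above, in line with the caveat about the high-energy hump.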
# Quantum effects in the interaction of low-energy electrons with light

Adamantios P. Synanidis ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain P. A. D. Gonçalves ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain Claus Ropers Department of Ultrafast Dynamics, Max Planck Institute for Multidisciplinary Sciences, 37077 Göttingen, Germany 4th Physical Institute - Solids and Nanostructures, University of Göttingen, 37077 Göttingen, Germany F. Javier García de Abajo<EMAIL_ADDRESS>ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain ICREA-Institució Catalana de Recerca i Estudis Avançats, Passeig Lluís Companys 23, 08010 Barcelona, Spain

###### Abstract

The interaction between free electrons and nanoscale optical fields has emerged as a unique platform to investigate ultrafast processes in matter and explore fundamental quantum phenomena. In particular, optically modulated electrons are employed in ultrafast electron microscopy as noninvasive probes that push the limits of spatiotemporal and spectral resolution down to the picometer–attosecond–microelectronvolt range. Electron kinetic energies well above the involved photon energies are commonly employed, rendering the electron–light coupling efficiency low and, thus, only providing limited access to the wealth of quantum nonlinear phenomena underlying the dynamical response of nanostructures. Here, we theoretically investigate electron–light interactions when photons and electrons have comparable energies, revealing strong quantum and recoil effects that include a nonvanishing coupling of surface-scattered electrons to plane waves of light, inelastic electron backscattering from localized optical fields, and strong electron–light coupling under grazing electron diffraction by an illuminated crystal surface. Our results open new vistas in electron–light–matter interactions with promising applications in ultrafast electron microscopy.

## Introduction

The synergetic relation between short light pulses and free electron beams (e-beams) underlies several recent advances in ultrafast electron microscopy toward a combined sub-nm–sub-fs–sub-meV spatiotemporal and spectral resolution, rapidly progressing toward the goal of mapping atomic-scale spatial features and their evolution over unprecedentedly small time scales Barwick _et al._ (2009); Flannigan and Zewail (2012); Feist _et al._ (2015); Barwick and Zewail (2015); Polman _et al._ (2019); García de Abajo and Di Giulio (2021); Roques-Carmes _et al._ (2023); García de Abajo and Ropers (2023). A prominent example is photon-induced near-field electron microscopy Barwick _et al._ (2009); García de Abajo _et al._ (2010); Park _et al._ (2010) (PINEM), which is based on the synchronous arrival of laser and electron femtosecond pulses at a sampled nanostructure, thus enabling optical-pump/electron-probe spectroscopy to be performed with a nanoscale spatial resolution inherited from the use of state-of-the-art electron optics setups.
This approach has been applied to image optical near fields in nanophotonics Piazza _et al._ (2015); Lummen _et al._ (2016); Vanacore _et al._ (2020); Kurman _et al._ (2021), the subcycle evolution of those fields Nabben _et al._ (2023); Gaida _et al._ (2024); Bucher _et al._ (2023), and the nanoscale-resolved fluctuations of the light with which the electron has interacted Di Giulio _et al._ (2019); Dahan _et al._ (2021); Yang _et al._ (2024). PINEM can be regarded as a specific application of the more general concept of stimulated inelastic electron–light scattering (SIELS). The latter has been leveraged to gain control over the free-electron wave function by customizing its interaction with light, including the generation of trains of attosecond electron pulses Feist _et al._ (2015); Priebe _et al._ (2017); Kozák _et al._ (2018); Morimoto and Baum (2018) and the shaping of the transverse e-beam profile Vanacore _et al._ (2019); Konečná and García de Abajo (2020); García de Abajo and Konečná (2021); Madan _et al._ (2022); Mihaila _et al._ (2022). Laser-assisted photoemission Glover _et al._ (1996); Saathoff _et al._ (2008) constitutes another example of SIELS that can be used to probe the ultrafast dynamics of condensed matter systems Arrell _et al._ (2016) and produces interesting effects in the strong-field limit Yalunin _et al._ (2011); Dombi _et al._ (2020). Many of these advances have been accomplished in transmission electron microscopes (TEMs) operating with relatively high e-beam energies ($\gtrsim 30\,$keV), orders of magnitude larger than those of the employed photons (typically in the eV range) and, consequently, rendering the probability that a single electron interacts with a single photon (e.g., in an individual confined optical mode) much smaller than unity. Such a weak electron–light interaction limits applications in metrology, imaging of atomic-scale excitations, and the study of nonlinear phenomena. For robust structures, the problem is circumvented in SIELS by using intense laser pulses Piazza _et al._ (2015); Lummen _et al._ (2016); Vanacore _et al._ (2020), but this approach cannot be extended to sensitive specimens such as biological materials. Phase matching between the electron excitation and the light field can also boost the interaction Bendaña _et al._ (2011); Kfir _et al._ (2020); Wang _et al._ (2020); Dahan _et al._ (2020); Henke _et al._ (2021), although this strategy is only practical in specialized structures that host modes with evanescent optical fields in vacuum. Surprisingly, in the linear electron–photon interaction regime, describing electrons as classical point-like charges produces the same results as a quantum-mechanical treatment in which electron recoil is ignored García de Abajo and Di Giulio (2021). Likewise, the wave function of energetic electrons in PINEM, and more generally in SIELS, is modified by a global factor that encapsulates the interaction with light through a single complex parameter that depends linearly on the optical field Varshalovich and D’Yakonov (1971); Weingartshofer _et al._ (1977); García de Abajo _et al._ (2010); Park _et al._ (2010); Feist _et al._ (2015); García de Abajo and Di Giulio (2021) and is therefore unsuited to access the subcycle nonlinear dynamics of a specimen in general. Consequently, it is highly desirable to achieve strong interaction between single electrons and atomic-scale excitations as a way to access their ultrafast nonlinear dynamics. 
In this context, the use of low-energy electrons with kinetic energies comparable to those of the quanta associated with the employed optical fields opens a plausible avenue to overcome these challenges. Indeed, the electron–light coupling strength increases when the electron energy is reduced, while a deviation from a classical-probe behavior is introduced by electron recoil Talebi (2018, 2020); García de Abajo _et al._ (2022). Low-energy electrons thus hold the potential to reveal new physical phenomena during their interaction with localized optical fields, a possibility that demands the exploration of physically relevant configurations.

Figure 1: Electron–light–matter interaction and recoil effects with low-energy electrons. (a) Illustration of inelastic electron scattering by the evanescent optical field associated with propagating surface polaritons. (b, c) Transmission (b) and reflection (c) electron spectra in the configuration of (a) for electrons of incident energy $\hbar\varepsilon_{0}=10\,$eV and angle $\theta_{\rm e}=45\degree$ combined with polaritons of electric-field amplitude $E_{0}=6\times 10^{8}\,$V/m, effective refractive index $n_{\rm eff}=50$, and energy $\hbar\omega=1\,$eV. For readability, spectral peaks are broadened with a Lorentzian of 0.1 FWHM in $\Delta\varepsilon/\omega$, with $\Delta\varepsilon=\varepsilon-\varepsilon_{0}$. (d) Nonvanishing interaction between a plane wave of light and an electron reflected on a light-transparent surface. (e) Despite the small coupling in (d) ($P_{1}<2\%$), strong recoil effects are observed for $\varepsilon_{0}\sim\omega$ in the asymmetry factor $(P_{1}-P_{-1})/(P_{1}+P_{-1})$, which vanishes in the classical regime ($\varepsilon_{0}\gg\omega$); we take $E_{0}=8\times 10^{7}\,$V/m, $\theta_{\rm e}=45\degree$, $\theta_{\rm l}=90\degree$, and $\hbar\omega=1\,$eV.

Here, we reveal a wealth of previously unexplored quantum and recoil effects taking place during electron–light–matter interactions when the electron and photon energies are comparable. Considering realistic frameworks that admit rigorous semi-analytical treatments, we theoretically explore electron–light interactions mediated by scattering at planar interfaces of either the light, the electrons, or both (Fig. 1). We adopt optical fields in the form of either externally incident plane waves or surface polaritons. Importantly, surface scattering leads to symmetry breaking that enhances the electron–light coupling, even allowing otherwise forbidden electron–photon interactions. In particular, we show that low-energy electrons can be inelastically scattered solely due to an evanescent optical field propagating along an electron-transparent surface (Fig. 1a–c), including the emergence of a back-reflected electron signal (Fig. 1b). In addition, we demonstrate that a plane-wave electron reflected on a light-transparent surface can produce inelastically reflected electrons because of the nonvanishing electron–photon coupling originating in translational symmetry breaking of the out-of-plane electron wave function (Fig. 1d–e). The resulting reflected-electron spectrum exhibits substantial recoil effects (Fig. 1e). We further report on the possibility of reaching the strong electron–light coupling regime with moderate light intensities through Bragg scattering at planar atomic lattices, whereby the interaction is boosted under Rayleigh anomaly conditions Lord Rayleigh (1907a).
Besides its interest from a fundamental viewpoint, the present study unveils exciting opportunities for the exploration of improved microscopy and metrology in the regime of low-energy electrons exposed to optical fields of comparable photon energy. Our results are particularly relevant to the exploration of the rich phase and electronic-structure phenomenology exhibited by material surfaces, which are cornerstones in many technological applications.

## Results

### Theoretical framework for the interaction of low-energy electrons with light

We first consider a planar material acting through a one-dimensional (1D) potential $V(z)$ on the electron, while a generalization to laterally corrugated atomic lattices is presented further below. To study electron scattering by the planar structure in the presence of a classical optical field (e.g., light plane waves or surface polaritons), we write the Hamiltonian $\hat{\mathcal{H}}_{0}({\bf r})+\hat{\mathcal{H}}_{1}({\bf r},t)$, where $\hat{\mathcal{H}}_{0}({\bf r})=-\hbar^{2}\nabla^{2}/2m_{\rm e}+V(z)$ describes the electron–material system and $\hat{\mathcal{H}}_{1}({\bf r},t)=-({\rm i}e\hbar/m_{\rm e}c)\,{\bf A}({\bf r},t)\cdot\bm{\nabla}$ accounts for the electron–light interaction. The latter arises from the minimal coupling prescription applied to a classical vector potential ${\bf A}({\bf r},t)$ after neglecting $A^{2}$ terms and adopting a gauge in which the scalar potential is zero. We focus on monochromatic fields of frequency $\omega$ and in-plane wave vector ${\bf k}_{\parallel}=(k_{x},k_{y})$, characterized by a vector potential ${\bf A}({\bf r},t)={\bf A}(z)\,{\rm e}^{{\rm i}{\bf k}_{\parallel}\cdot{\bf R}-{\rm i}\omega t}+{\rm c.c.}$, where the notation ${\bf R}=(x,y)$ is adopted. Also, in the absence of illumination, the electron wave function is taken to be a solution of the $\hat{\mathcal{H}}_{0}({\bf r})$ Hamiltonian, namely, $\psi_{0}({\bf r},t)=\varphi_{00}(z)\,{\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}-{\rm i}\varepsilon_{0}t}$, having a well-defined energy $\hbar\varepsilon_{0}$ and in-plane wave vector ${\bf q}_{0\parallel}$ ($\perp\hat{\bf z}$). Now, in-plane translation symmetry and energy conservation allow us to write the perturbation series

$\displaystyle\psi({\bf r},t)=\sum_{n=0}^{\infty}\sum_{\ell=-n}^{n}\varphi_{n\ell}(z)\,{\rm e}^{{\rm i}({\bf q}_{0\parallel}+\ell\,{\bf k}_{\parallel})\cdot{\bf R}-{\rm i}(\varepsilon_{0}+\ell\omega)t},$ (1)

where $n$ runs over scattering orders, while $\ell$ denotes the net number of exchanged photons (i.e., absorbed or emitted by the electron for $\ell>0$ and $\ell<0$, respectively). In addition, $\psi({\bf r},t)$ satisfies the Lippmann–Schwinger equation $\psi({\bf r},t)=\psi_{0}({\bf r},t)+\int{\rm d}^{3}{\bf r}^{\prime}\int{\rm d}t^{\prime}\,\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})\,\hat{\mathcal{H}}_{1}({\bf r}^{\prime},t^{\prime})\,\psi({\bf r}^{\prime},t^{\prime})$, where the Green function $\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})$ is defined by $[\hat{\mathcal{H}}_{0}({\bf r})-{\rm i}\hbar\partial_{t}]\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=-\delta({\bf r}-{\bf r}^{\prime})\delta(t-t^{\prime})$.
Combining these elements, we obtain the recurrence relation $\displaystyle\varphi_{n\ell}(z)=\frac{\hbar e}{m_{\rm e}c}\int{\rm d}{z^{\prime}}\,\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ (2) $\displaystyle\times\Big{\\{}{\bf A}(z^{\prime})\cdot\big{[}{\bf q}_{0\parallel}+(\ell-1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\partial_{z^{\prime}}\big{]}\,\varphi_{n-1,\ell-1}(z^{\prime})$ $\displaystyle\;\;\,+{\bf A}^{*}(z^{\prime})\cdot\big{[}{\bf q}_{0\parallel}+(\ell+1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\partial_{z^{\prime}}\big{]}\,\varphi_{n-1,\ell+1}(z^{\prime})\Big{\\}}$ for $n>0$, where $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ is the frequency-domain 1D Green function satisfying $\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=(2\pi)^{-3}\int{\rm d}^{2}{\bf q}_{\parallel}\int{\rm d}\varepsilon\,\mathcal{G}_{0}(z,z^{\prime},\varepsilon)\,{\rm e}^{{\rm i}{\bf q}_{\parallel}\cdot({\bf R}-{\bf R}^{\prime})-{\rm i}\varepsilon(t-t^{\prime})}$, and we define $\varepsilon^{\perp}_{\ell}=\varepsilon_{0}+\ell\omega-\hbar|{\bf q}_{0\parallel}+\ell\,{\bf k}_{\parallel}|^{2}/2m_{\rm e}$ such that $\hbar\varepsilon^{\perp}_{\ell}$ is the out-of-plane electron energy after the exchange of a net number of photons $\ell$ (see Appendix for a self- contained derivation). We consider planar structures of negligible thickness and expand the electron wave function components in Eq. (1) using the ansatz $\displaystyle\varphi_{n\ell}(z)=\sum_{s=\pm}\sum_{j}\alpha_{n\ell}^{js}\,{\rm e}^{\zeta_{n\ell}^{js}z}\,\Theta(sz)$ (3) within the regions above ($s=+$) and below ($s=-$) the material. Here, $j$ labels contributions that can be either evanescent (${\rm Re}\\{\zeta_{n\ell}^{j\pm}\\}\neq 0$) or propagating (imaginary $\zeta_{n\ell}^{j\pm}$). Inserting Eq. (3) into Eq. (2), we find a recursive expression with a unique solution for the coefficients $\alpha_{n\ell}^{j\pm}$ and $\zeta_{n\ell}^{j\pm}$, which automatically satisfy the physical conditions ${\rm Re}\\{\zeta_{n\ell}^{j+}\\}\leq 0$ and ${\rm Re}\\{\zeta_{n\ell}^{j-}\\}\geq 0$ (see SI). In addition, the purely propagating components have exponential coefficients $\zeta_{n\ell}^{j\pm}=\pm{\rm i}q_{\ell z}$ (i.e., imaginary and independent of $n$ and $j$), where $q_{\ell z}=\sqrt{2m_{\rm e}\varepsilon^{\perp}_{\ell}/\hbar}$ is determined by energy conservation for a net number of photon exchanges $\ell$. Finally, the fractions of $\ell$-resolved electrons scattered along the upward ($+$) and downward ($-$) directions are given by $\displaystyle P_{\ell}^{\pm}=\frac{q_{\ell z}}{q_{0z}}\,\Big{|}\sideset{}{{}^{\prime}}{\sum}_{nj}\alpha_{n\ell}^{j\pm}\Big{|}^{2},$ (4) where the primed sum indicates that it is restricted to purely propagating waves. Figure 2: Inelastic scattering of low-energy electrons upon total reflection at a polariton-supporting surface. (a–c,e–g) Probabilities corresponding to a net exchange of $\ell={0,\pm 1}$ quanta calculated without (a–c) and with (e–g) inclusion of quantum recoil as a function of electron incidence angle $\theta_{\rm e}$. (d, h) Electron spectra for $\theta_{\rm e}=86.2\degree$ obtained without (d) and with (h) recoil. We consider a polaritonic electric- field amplitude $E_{0}=5\times 10^{6}\,$V/m, effective refractive index $n_{\rm eff}=250$, and energy $\hbar\omega=0.2\,$eV. 
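To illustrate the kinematics entering Eq. (4), a short numerical sketch (assuming free-space electron dispersion and illustrative beam/polariton parameters; not the full solution of the recurrence) evaluates $q_{\ell z}$ and flags which sideband channels are propagating versus evanescent:

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
ME = 9.1093837015e-31   # kg
EV = 1.602176634e-19    # J
C = 299792458.0         # m/s

def q_ell_z(eps0_eV, omega_eV, theta_e, k_par, ell):
    """Out-of-plane wave vector after a net exchange of ell photons (complex)."""
    q0 = np.sqrt(2.0 * ME * eps0_eV * EV) / HBAR         # incident wave vector
    q_par = q0 * np.sin(theta_e) + ell * k_par           # in-plane component
    eps_perp = (eps0_eV + ell * omega_eV) * EV / HBAR - HBAR * q_par**2 / (2.0 * ME)
    return np.sqrt(2.0 * ME * eps_perp / HBAR + 0j)      # imaginary -> evanescent

# 10 eV electron at grazing incidence, 1 eV polariton with n_eff = 50
k_par_polariton = 50.0 * (1.0 * EV / HBAR) / C
for ell in range(-3, 4):
    q = q_ell_z(10.0, 1.0, np.radians(80.0), k_par_polariton, ell)
    state = "propagating" if q.imag == 0.0 else "evanescent"
    print(f"ell = {ell:+d}: {state}")
```

For these grazing-incidence parameters, the photon-loss channels ($\ell<0$) come out evanescent while the gain channels propagate, anticipating the strong loss–gain asymmetries discussed below.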
### Recoil and quantum effects in surface-scattered electrons

Fully electron-reflecting surface.—As an instructive configuration, we first consider an electron with energy $\hbar\varepsilon_{0}$ and incident angle $\theta_{\rm e}$ (with respect to the surface normal) that undergoes total reflection at a planar surface supporting a surface polariton with in-plane wave vector $k_{\parallel}=|{\bf k}_{\parallel}|=n_{\rm eff}k$, where $k=\omega/c$ is the light wave vector and $n_{\rm eff}>1$ is an effective index of refraction. For a thin metallic film of permittivity $\epsilon<0$ and thickness $d$ embedded in a medium of permittivity $\epsilon_{s}>1$, we have $n_{\rm eff}=\epsilon_{s}\lambda/[\pi(1-\epsilon)d]$ at a light wavelength $\lambda$, so thin films favor large values of $n_{\rm eff}$ (e.g., $n_{\rm eff}=40$ for $\lambda=500$ nm in currently available 10-monolayer crystalline Ag(111) films deposited on Si Abd El-Fattah _et al._ (2019)). Higher values of $n_{\rm eff}$ in the range of several hundred are displayed by infrared graphene plasmons Woessner _et al._ (2015); Ni _et al._ (2018) and phonon-polaritons in few-layer hBN Giles _et al._ (2018); Li _et al._ (2021). For simplicity, we take the electron and the surface polariton to share the same in-plane direction of incidence with ${\bf q}_{0\parallel}\parallel{\bf k}_{\parallel}\parallel\hat{\bf x}$, such that the associated vector potential can be written as ${\bf A}({\bf r})=(E_{0}/k)\,\big{(}\kappa^{2}+k_{\parallel}^{2}\big{)}^{-1/2}\,\big{(}\kappa\,\hat{\bf x}+{\rm i}\,k_{\parallel}\,\operatorname{sign}\\{z\\}\,\hat{\bf z}\big{)}\,{\rm e}^{{\rm i}k_{\parallel}x-\kappa|z|}$, where $E_{0}$ is a global electric-field amplitude (see SI). In addition, the electron–surface interaction is assumed to be elastic, and thus, any inelastic electron signal stems from surface-polariton emission and absorption processes by the electron. In this scenario, we can set the $z$ component of the zeroth-order electron wave function as $\varphi_{00}(z)=\big{[}{\rm e}^{-{\rm i}q_{0z}z}-{\rm e}^{{\rm i}q_{0z}z}\big{]}\Theta(z)$, while the Green function in Eq. (2) reduces to $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})=({\rm i}m_{\rm e}/\hbar^{2}q_{\ell z})\big{[}{\rm e}^{{\rm i}q_{\ell z}(z+z^{\prime})}-{\rm e}^{{\rm i}q_{\ell z}|z-z^{\prime}|}\big{]}\Theta(z)\Theta(z^{\prime})$ (see SI). Inserting these elements into Eq. (2) and noticing that only reflected components need to be considered, we find a set of analytical coefficients $\alpha_{n\ell}^{j+}$ and $\zeta_{n\ell}^{j+}$, from which the reflection probability $P_{\ell}\equiv P^{+}_{\ell}$ for a given $\ell$ channel is obtained via Eq. (4). It is instructive to examine the $\varepsilon_{0}\gg\omega$ limit, where recoil effects should play a minor role. As a direct generalization of the result obtained for an electron moving with constant velocity along a straight-line trajectory García de Abajo and Di Giulio (2021), we approximate the $\ell$-dependent inelastic probability as $P_{\ell}=J_{\ell}^{2}(2|\beta|)$, where $\beta=({\rm i}e/\hbar c)\int{\rm d}{t}\;\dot{{\bf r}}_{\rm e}(t)\cdot{\bf A}[{\bf r}_{\rm e}(t)]\,{\rm e}^{-{\rm i}\omega t}$ is an electron–light coupling parameter obtained by integrating over time the vector potential component parallel to the velocity $\dot{{\bf r}}_{\rm e}(t)$ and evaluated at the electron position ${\bf r}_{\rm e}(t)$.
Taking a specularly reflected trajectory with the surface-polariton field given above, we obtain $\displaystyle\beta=\frac{2\,{\rm i}}{\sqrt{2n_{\rm eff}^{2}-1}}\;\frac{eE_{0}c}{\hbar\omega^{2}}\frac{(n_{\rm eff}c/v-\sin\theta_{\rm e})\cos\theta_{\rm e}}{(c/v-n_{\rm eff}\sin\theta_{\rm e})^{2}+(n_{\rm eff}^{2}-1)\cos^{2}\theta_{\rm e}}.$ (5) For reference, we note that the prefactor $eE_{0}c/\hbar\omega^{2}$ takes a value of $\approx 2$ for $E_{0}=10^{7}$ V/m and $\hbar\omega=1$ eV. This mode energy is characteristic of surface plasmons in ultrathin metal films Abd El-Fattah _et al._ (2019) and exciton-polaritons in transition-metal dichalcogenides Li _et al._ (2014); Epstein _et al._ (2020), while polaritons of lower energy (e.g., $\hbar\omega\sim 0.1$ eV in graphene Woessner _et al._ (2015); Ni _et al._ (2018) and hBN Giles _et al._ (2018); Li _et al._ (2021)) should produce larger coupling for the same field amplitude in accordance with the scaling $\beta\propto 1/\omega^{2}$. Equation (5) reveals the important role of confinement in enhancing the electron–polariton coupling: given a certain electron velocity $v$, the denominator reaches its minimum value under the condition $\displaystyle c/n_{\rm eff}=v\sin\theta_{\rm e}$ (6) (i.e., when the surface-polariton phase velocity matches the in-plane projection of the electron velocity). In addition, Eq. (5) illustrates the well-known linear scaling of the coupling coefficient with the applied electric-field amplitude $E_{0}$. Figure 2 highlights the importance of recoil effects in the interaction between a low-energy electron and a strongly confined surface polariton by contrasting the classical nonrecoil PINEM theory [Fig. 2a–d, based on Eq. (5)] with the rigorous quantum formalism introduced above [Fig. 2e–h, Eq. (4)]. As a first observation, the classical treatment in Eq. (5) provides the necessary conditions for strong coupling and leads to substantial inelastic probabilities (Fig. 2a–d). Focusing for concreteness on the $\ell=\pm 1$ channels (see SI for more sidebands), the parameter space for which electron–light coupling is maximized is well captured by the classical framework, and it corresponds to the phase-matching condition in Eq. (6) (i.e., $\hbar\varepsilon_{0}\approx m_{\rm e}c^{2}/2n_{\rm eff}^{2}\sim 4$ eV with $\theta_{\rm e}=90\degree$), for which the coupling diverges as $\beta\propto 1/\cos\theta_{\rm e}$ near $\theta_{\rm e}=90\degree$. However, both the intensity profile in the $(\varepsilon_{0},\theta_{\rm e})$ phase space and the magnitude of the electron–light coupling strength are markedly different in the classical and quantum theories. Specifically, the incorporation of recoil in the latter leads to asymmetric loss–gain spectra (i.e., $P_{-\ell}\neq P_{\ell}$) as well as abrupt thresholds in $P_{\ell}$ (Fig. 2e–g). These observations can be interpreted by noting that energy–momentum conservation imposes the condition $q_{\ell z}=q_{0}\sqrt{1+\ell\omega/\varepsilon_{0}-|\sin\theta_{\rm e}+\ell k_{\parallel}/q_{0}|^{2}}$, and thus, not only are the emission ($\ell<0$) and absorption ($\ell>0$) probabilities rendered different, but the inelastic signal also vanishes whenever $1+\ell\omega/\varepsilon_{0}<|\sin\theta_{\rm e}+\ell k_{\parallel}/q_{0}|^{2}$, since $q_{\ell z}$ becomes purely imaginary (i.e., the corresponding electron wave is evanescent). Another dramatic consequence of recoil is the redistribution of probability to neighboring $\ell$’s near the aforementioned thresholds (see Fig. 2h), in contrast to the symmetric spectrum produced by the classical description (Fig. 2d).
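For reference, the classical limit is straightforward to evaluate numerically. The sketch below is our own illustration: it merely transcribes Eq. (5) and $P_{\ell}=J_{\ell}^{2}(2|\beta|)$ into Python, with parameters taken from the caption of Fig. 2:

```python
import numpy as np
from scipy.constants import hbar, m_e, e, c
from scipy.special import jv

def beta_polariton(E0, hw_eV, n_eff, E_e_eV, theta_e):
    """Nonrecoil electron-polariton coupling parameter of Eq. (5).

    E0: polariton field amplitude (V/m); hw_eV: photon energy (eV);
    n_eff: effective index; E_e_eV: electron energy (eV); theta_e: angle (rad).
    """
    omega = hw_eV * e / hbar
    v = np.sqrt(2 * E_e_eV * e / m_e)      # nonrelativistic electron speed
    pref = 2j / np.sqrt(2 * n_eff**2 - 1) * e * E0 * c / (hbar * omega**2)
    num = (n_eff * c / v - np.sin(theta_e)) * np.cos(theta_e)
    den = (c / v - n_eff * np.sin(theta_e))**2 + (n_eff**2 - 1) * np.cos(theta_e)**2
    return pref * num / den

# Caption of Fig. 2: E0 = 5e6 V/m, n_eff = 250, hbar*omega = 0.2 eV; we take
# 4 eV electrons at theta_e = 86.2 deg, near the phase-matching condition (6).
beta = beta_polariton(5e6, 0.2, 250, 4.0, np.radians(86.2))
print(f"2|beta| = {2 * abs(beta):.2f}")
for l in (-2, -1, 0, 1, 2):
    print(f"P_{l:+d} = {jv(l, 2 * abs(beta))**2:.4f}")   # P_l = P_-l (no recoil)
print("sum over all l:", sum(jv(l, 2 * abs(beta))**2 for l in range(-60, 61)))
```

By construction, these classical probabilities satisfy $P_{\ell}=P_{-\ell}$ and sum to unity, which is precisely the symmetry that the recoil-corrected treatment breaks.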
Figure 3: Inelastic scattering of low-energy electrons upon partial reflection at a polariton-supporting thin film. We show transmitted (a–d) and reflected (e–h) electron spectra as a function of polaritonic electric-field amplitude $E_{0}$ for fixed effective refractive index ($n_{\rm eff}=50$) and energy ($\hbar\omega=1\,$eV). Selected incident electron energies $\hbar\varepsilon_{0}$ are considered, while the incidence angle is fixed at $\theta_{\rm e}=45\degree$. Electron–surface scattering is modeled through a $\delta$-function potential of amplitude $U_{0}=1\,$eV nm. Spectral features are broadened by a Lorentzian of 0.1 eV FWHM.

Partially electron-reflecting surface.—We expect partial transmission and reflection when the electron is scattered by an atomically thin 2D material, which we describe through a surface potential $V(z)=U_{0}\,\delta(z)$. The parameter $U_{0}$ has units of energy times length, and arguing that an atomic monolayer can be described by a barrier of finite thickness $d\lesssim 1$ nm and internal potential $V_{0}$ in the eV range, we expect $U_{0}\approx V_{0}d$ in the eV$\times$nm range. We also assume the material to support long-lived polaritonic modes (e.g., phonon-polaritons in hBN Giles _et al._ (2018) or plasmons in doped graphene Ni _et al._ (2018)), so that these modes are characterized by a real effective index $n_{\rm eff}$. The calculation of the probabilities associated with the electron wave functions involved in the net exchange of $\ell$ polariton quanta follows the same steps as in the above scenario of full reflection, but now both reflected and transmitted electron components are produced. For an electron prepared with an incident $-q_{0z}$ wave vector in the out-of-plane direction, the wave function in the absence of illumination is given by $\varphi_{00}(z)=\big{[}{\rm e}^{-{\rm i}q_{0z}z}+r_{0}\,{\rm e}^{{\rm i}q_{0z}z}\big{]}\Theta(z)+t_{0}\,{\rm e}^{-{\rm i}q_{0z}z}\,\Theta(-z)$, where we use the transmission and reflection coefficients $t_{\ell}=\big{(}1+{\rm i}m_{\rm e}U_{0}/\hbar^{2}q_{\ell z}\big{)}^{-1}$ and $r_{\ell}=t_{\ell}-1$, respectively. We use this result together with the Green function $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})=-({\rm i}m_{\rm e}/\hbar^{2}q_{\ell z})\Big{[}{\rm e}^{{\rm i}q_{\ell z}|z-z^{\prime}|}+r_{\ell}\,{\rm e}^{{\rm i}q_{\ell z}(|z|+|z^{\prime}|)}\Big{]}$ (see SI) to obtain the reflected and transmitted $\ell$-resolved probabilities $P^{\pm}_{\ell}$ from Eq. (4) following the formalism developed above. The results are plotted as a function of electric-field amplitude for different electron energies in Fig. 3. Again, we find recoil effects emerging through strong asymmetries in the electron spectra, which, as anticipated, become more symmetric as the electron energy is increased toward the $\varepsilon_{0}\gg\omega$ regime. These results remain qualitatively correct even when more involved $z$-dependent potentials are considered (e.g., finite-thickness films) because the Green function outside the material retains the same expression as above de Aguiar (1993), with reflection and transmission coefficients depending on the details of the potential.
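The elastic building blocks of this geometry are simple enough to verify directly. The following Python fragment is our own check (names are ours), assuming $U_{0}=1\,$eV nm as in Fig. 3; it evaluates $t_{\ell}$ and $r_{\ell}$ for the $\delta$-function barrier and confirms flux conservation, $|r_{\ell}|^{2}+|t_{\ell}|^{2}=1$, for propagating channels:

```python
import numpy as np
from scipy.constants import hbar, m_e, e

def delta_barrier_rt(E_perp_eV, U0_eV_nm=1.0):
    """Transmission/reflection of the surface potential V(z) = U0*delta(z).

    E_perp_eV: out-of-plane electron energy (eV); U0_eV_nm: strength (eV*nm).
    Returns (t, r) with t = (1 + i*m_e*U0/(hbar^2 q_z))^-1 and r = t - 1.
    """
    qz = np.sqrt(2 * m_e * E_perp_eV * e) / hbar   # out-of-plane wave vector (1/m)
    U0 = U0_eV_nm * e * 1e-9                       # barrier strength in J*m
    t = 1.0 / (1.0 + 1j * m_e * U0 / (hbar**2 * qz))
    return t, t - 1.0

for E in (1.0, 10.0, 100.0):
    t, r = delta_barrier_rt(E)
    print(f"E_perp = {E:6.1f} eV: |t|^2 = {abs(t)**2:.3f}, |r|^2 = {abs(r)**2:.3f}, "
          f"sum = {abs(t)**2 + abs(r)**2:.6f}")
```

The barrier is strongly reflecting at eV-scale out-of-plane energies and becomes nearly transparent at higher energies, consistent with the partial-reflection regime considered here.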
### Nonvanishing interaction between surface-scattered electrons and unscattered light plane waves

An interesting scenario is presented when a thin film is illuminated from the far field and the scattered optical components are comparatively negligible, so the electron mainly sees an external light plane wave. Electron–light coupling can still take place in this configuration because the free-space energy–momentum mismatch is broken by the fact that the electron is scattered by the material. We regard this situation as complementary to electron shaping mediated by PINEM interaction (i.e., SIELS assisted by electron-transparent, light-reflecting plates, in which the kinematic electron–photon free-space coupling mismatch is circumvented by having light half-plane waves instead of full plane waves Vanacore _et al._ (2018)). We note that 2D monolayers (e.g., graphene and hBN) are nearly transparent to light (e.g., $\sim 2.3\%$ absorption by graphene over a wide spectral range Mak _et al._ (2008); Nair _et al._ (2008)) and can thus be regarded as good candidates to explore the interaction of light plane waves with surface-reflected low-energy electrons.

Figure 4: Inelastic interaction between a light plane wave and surface-scattered electrons. We plot the probability associated with $\ell=\pm 1$ net photon exchanges as a function of the electron ($\theta_{\rm e}$) and photon ($\theta_{\rm l}$) incidence angles without (a,b) and with (c,d) inclusion of recoil for an electron-opaque, light-transparent surface. We assume total electron reflection and take $\hbar\varepsilon_{0}=2\,$eV, $E_{0}=8\times 10^{7}\,$V/m, and $\hbar\omega=1\,$eV. In panel (d), kinematically forbidden regions are shaded in gray.

We explore this idea in Fig. 4, where a fully reflected electron is considered to be interacting with a freely propagating p-polarized light plane wave. The analysis is analogous to that in Fig. 2, but using a plane-wave optical field instead of a surface mode. We present the probabilities $P_{\ell=\pm 1}^{\pm}$ calculated to first order in the light intensity as a function of light and electron incidence angles, comparing classical (nonrecoil) and quantum (Eq. (4), with recoil) descriptions. Incidentally, the coupling parameter in the former is given by $\displaystyle\beta=\frac{2eE_{0}c}{\hbar\omega^{2}}\frac{\cos\theta_{\rm e}(\sin\theta_{\rm l}\;c/v-\sin{\theta_{\rm e}})}{(c/v-\sin\theta_{\rm l}\sin\theta_{\rm e})^{2}-\cos^{2}\theta_{\rm l}\cos^{2}\theta_{\rm e}},$ where $E_{0}$ is the light-plane-wave amplitude. We find again that recoil leads to asymmetric inelastic electron signals, as well as regions of the $(\theta_{\rm l},\theta_{\rm e})$ parameter space in which electron–light coupling becomes kinematically allowed or forbidden, accompanied by a transfer of probability to the symmetric ($\ell\to-\ell$) channel. In contrast to the interaction with surface polaritons, where phase matching at grazing incidence produced the strongest interaction, phase matching is now forbidden (i.e., the in-plane optical wave vector is always smaller than the in-plane electron wave vector), rendering the coupling smaller, so we restrict the analysis to first-order processes. The maximum coupling is observed at grazing light incidence ($\theta_{\rm l}=\pm 90\degree$) and normal electron incidence ($\theta_{\rm e}=0$), which is consistent with the angular scaling of the classical coupling coefficient as $\beta\propto\sin\theta_{\rm l}\cos\theta_{\rm e}$ for $v\ll c$. These conditions guarantee maximum overlap of the light electric field along the electron trajectory.
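The angular dependence just described is easy to reproduce from the classical coupling coefficient. A minimal sketch (ours; it simply scans the $\beta$ expression above over the two angles with the parameters of Fig. 4) locates the maximum coupling:

```python
import numpy as np
from scipy.constants import hbar, m_e, e, c

def beta_plane_wave(E0, hw_eV, E_e_eV, theta_l, theta_e):
    """Nonrecoil coupling for an unscattered light plane wave (see text)."""
    omega = hw_eV * e / hbar
    v = np.sqrt(2 * E_e_eV * e / m_e)     # nonrelativistic electron speed
    num = np.cos(theta_e) * (np.sin(theta_l) * c / v - np.sin(theta_e))
    den = (c / v - np.sin(theta_l) * np.sin(theta_e))**2 \
        - np.cos(theta_l)**2 * np.cos(theta_e)**2
    return 2 * e * E0 * c / (hbar * omega**2) * num / den

# Fig. 4 parameters: 2 eV electrons, E0 = 8e7 V/m, 1 eV photons
tl, te = np.meshgrid(np.radians(np.linspace(-90, 90, 181)),
                     np.radians(np.linspace(0, 89, 90)), indexing="ij")
B = np.abs(beta_plane_wave(8e7, 1.0, 2.0, tl, te))
i, j = np.unravel_index(np.argmax(B), B.shape)
print(f"max |beta| = {B[i, j]:.3f} at theta_l = {np.degrees(tl[i, j]):.0f} deg, "
      f"theta_e = {np.degrees(te[i, j]):.0f} deg")
```

The maximum indeed falls at grazing light incidence and normal electron incidence, in line with the $\beta\propto\sin\theta_{\rm l}\cos\theta_{\rm e}$ scaling.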
Figure 5: Recoil effects in the interaction of light with lattice-diffracted electrons. (a) We consider a normally incident electron undergoing diffraction by an Au(111) monolayer (Au–Au distance $d\approx 0.288$ nm, with atomic bonds along $y$) under grazing plane-wave light irradiation along $x$ (${\bf k}_{\parallel}\parallel\hat{\bf x}$) with linear polarization in the $y-z$ plane as indicated by the angle $\phi_{\rm p}$. Each of the electron Bragg diffraction orders (see yellow arrows for one of them) splits in energy and direction of reflection/transmission upon exchange of a net number of photons $\ell$. (b) Electron isoenergy contours after exchanging $\ell=-1$, 0, and 1 photons (blue, black, and red circles), superimposed on the reciprocal lattice of the atomic monolayer [$4\pi/(\sqrt{3}d)$ distance between sites]. We consider 71.4 eV incident electrons and 1 eV photons. (c–h) Intensity of transmitted (c–e) and reflected (f–h) Bragg peaks for three different incident electron energies (see labels above (c,f), (d,g), and (e,h)) upon exchange of $\ell=-1$, 0, or 1 photons (1 eV energy). The light wave vector ${\bf k}_{\parallel}$ is indicated in (c) and the electric field amplitude is $E_{0}=2.5\times 10^{8}$ V/m with polarization set by $\phi_{\rm p}=45\degree$. The area of the circles gives the fraction of electrons in each Bragg peak (see log-scale legend). (i–k) Same as (e), but for varying polarization angles $\phi_{\rm p}$ (see top labels).

### Light-assisted low-energy electron diffraction

Low-energy electrons with energies $\sim 10-500\,$eV are commonly used to study the atomic structure of crystal surfaces in low-energy electron diffraction Rocca (1995); Pendry (1974, 1984) (LEED) because they penetrate only a few atomic layers and have de Broglie wavelengths commensurate with the atomic spacings. In a related context, energy-resolved inelastic low-energy electron surface scattering is also used to probe surface modes Claus _et al._ (1992); Nagao _et al._ (2001, 2006), while ultrafast LEED grants access to time-resolved structural dynamics Gulde _et al._ (2014); Vogelgesang _et al._ (2018). Here, we theoretically study the interaction of surface-diffracted electrons with light plane waves and show the important role played by recoil in the underlying electron–light coupling, including the presence of lattice resonances that boost the interaction under Rayleigh anomaly conditions Lord Rayleigh (1907a). To illustrate electron–light–matter interactions in the presence of Bragg diffraction, we consider a low-energy electron normally impinging on an illuminated monolayer of gold atoms arranged in a (111) triangular lattice with an Au–Au bond distance of $0.288\,$nm (Fig. 5a). In the absence of external illumination, the diffracted electron wave function consists of components with wave vectors given by ${\bf q}^{\pm}_{{\bf G}0}={\bf G}\pm\sqrt{q_{0}^{2}-G^{2}}\,\hat{\bf z}$, where ${\bf G}$ are 2D reciprocal lattice vectors, while the $+$ ($-$) sign corresponds to upward (downward) electron motion relative to the atomic plane. Lattice scattering is elastic, so all of these wave vectors have a magnitude $q_{0}=\sqrt{2m_{\rm e}\varepsilon_{0}/\hbar}$ determined by the incident electron energy $\hbar\varepsilon_{0}$. Diffracted electron plane waves with $G<q_{0}$ generate observable LEED spots, as determined by an Ewald sphere construction (see Fig. 5), whereas waves with $G>q_{0}$ are evanescent.
The latter do not explicitly contribute to the far-field electron scattering probability, but they have to be retained in the description of dynamical electron diffraction by the atomic layer Rocca (1995); Pendry (1974, 1984). Starting from a normally incident electron, the wave function associated with the incident and scattered waves in the absence of illumination takes the form $\displaystyle\psi_{0}({\bf r},t)=\Big{[}{\rm e}^{-{\rm i}q_{0}z}+\sum_{{\bf G},\pm}B^{\pm}_{{\bf G}}{\rm e}^{{\rm i}{\bf q}^{\pm}_{{\bf G}0}\cdot{\bf r}}\Theta(\pm z)\Big{]}\,{\rm e}^{-{\rm i}\varepsilon_{0}t},$ (7a) where $B^{\pm}_{{\bf G}}$ are scattering amplitudes. Upon interaction with an incident light plane wave of wave vector ${\bf k}$, every diffraction order (either propagating or evanescent) can exchange energy and in-plane momentum with the light field in multiples of the photon energy and in-plane wave vector (i.e., $\ell\hbar\omega$ and $\ell{\bf k}_{\parallel}$, respectively), giving rise to diffracted components with wave vectors ${\bf q}^{\pm}_{{\bf G}\ell}={\bf G}+\ell{\bf k}_{\parallel}\pm q_{{\bf G}\ell z}\,\hat{\bf z}$, where $q_{{\bf G}\ell z}=\sqrt{2m_{\rm e}(\varepsilon_{0}+\ell\omega)/\hbar-|{\bf G}+\ell{\bf k}_{\parallel}|^{2}}$, labeled by the direction of motion ($+/-$ for upward/downward scattering) and the net lattice and photon momentum exchanges ($\hbar{\bf G}$ and $\ell\hbar{\bf k}_{\parallel}$). The total electron wave function takes the form $\displaystyle\psi({\bf r},t)=\Big{[}{\rm e}^{-{\rm i}q_{0}z}+\sum_{{\bf G},\ell,\pm}C^{\pm}_{{\bf G}\ell}{\rm e}^{{\rm i}{\bf q}^{\pm}_{{\bf G}\ell}\cdot{\bf r}-{\rm i}\ell\omega t}\Theta(\pm z)\Big{]}\,{\rm e}^{-{\rm i}\varepsilon_{0}t},$ (7b) where the amplitudes $C_{{\bf G}\ell}^{\pm}$ are self-consistently determined from an extension of LEED theory to incorporate the interaction with both the atomic lattice and the optical field [see SI for details, including analytical expressions for the coefficients $B^{\pm}_{{\bf G}}$ and $C^{\pm}_{{\bf G}\ell}$ in Eqs. (7a) and (7b), respectively]. For simplicity, we limit our analysis to first order in the electron–light interaction, but we incorporate the interaction with the lattice to all orders. As the electron energy increases, the number of diffracted spots also increases because more points are inside the Ewald sphere. In the elastic channel ($\ell=0$), a given diffraction order ${\bf G}$ is observed in the LEED pattern when the electron energy exceeds a threshold energy $\hbar^{2}G^{2}/2m_{\rm e}$. However, in the inelastic components, the changes in electron energy and momentum enter the condition for far-field propagation. For example, after exchanging $\ell$ photons, a previously evanescent order may become propagating if $\hbar|{\bf G}+\ell{\bf k}_{\parallel}|^{2}/2m_{\rm e}<\varepsilon_{0}+\ell\omega$, and likewise, a propagating Bragg-diffracted beam may become evanescent (Fig. 5b). These effects are more relevant when $\varepsilon_{0}\sim\hbar G^{2}/2m_{\rm e}$. In addition, electrons scattered at the onset of a diffraction order move under grazing conditions, so they spend more time near the surface and, therefore, undergo a stronger interaction with light. Consequently, by choosing the electron energy close to the threshold of one of the ${\bf G}$ beams, we expect to increase the coupling of diffracted electrons to light, emphasizing the importance of recoil (see below).
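Before turning to the full simulations, the opening and closing of diffraction channels upon photon exchange can be tabulated from kinematics alone. The sketch below is our own illustration for the Au(111) monolayer of Fig. 5 (the reciprocal-lattice spacing $4\pi/(\sqrt{3}d)$ is taken from the caption of Fig. 5b; names are ours):

```python
import numpy as np
from scipy.constants import hbar, m_e, e, c

d = 0.288e-9                          # Au-Au distance (m)
b = 4 * np.pi / (np.sqrt(3) * d)      # reciprocal-lattice constant (Fig. 5b caption)
b1 = b * np.array([1.0, 0.0])         # reciprocal basis of the triangular lattice
b2 = b * np.array([0.5, np.sqrt(3) / 2])

def propagating_orders(E_e_eV, hw_eV, l, n_max=4):
    """Bragg orders that reach the far field after a net exchange of l photons.

    Normal electron incidence; light impinging along x with k_par = omega/c.
    Far-field condition: hbar^2 |G + l*k_par|^2 / (2 m_e) < E_e + l*hbar*omega.
    """
    k_par = (hw_eV * e / hbar / c) * np.array([1.0, 0.0])
    E_out = (E_e_eV + l * hw_eV) * e
    orders = []
    for m in range(-n_max, n_max + 1):
        for n in range(-n_max, n_max + 1):
            w = m * b1 + n * b2 + l * k_par
            if hbar**2 * np.dot(w, w) / (2 * m_e) < E_out:
                orders.append((m, n))
    return orders

# 73.3 eV electrons (as in Fig. 6b) and 1 eV photons:
for l in (-1, 0, 1):
    print(f"l = {l:+d}: {len(propagating_orders(73.3, 1.0, l))} propagating orders")
```

At 73.3 eV, the $(1,1)$-type shell (threshold near 72.5 eV) is open for $\ell=0$ and $\ell=+1$ but closes upon emission of one photon ($\ell=-1$), illustrating how inelastic exchanges convert propagating orders into evanescent ones.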
This phenomenology is illustrated in the calculations presented in Fig. 5c–h for three different incident electron energies (in separate columns) near the threshold of the $(1,1)$ and its symmetry-equivalent diffraction spots. We show the intensities of transmitted (Fig. 5c–e) and reflected (Fig. 5f–h) beams. Following the conditions under which a maximum electron–light interaction was observed in Fig. 4, we take the electron to be normally impinging on the atomic plane, while a light plane wave is incident parallel to the $x$ surface direction with linear polarization as indicated in Fig. 5a. The atomic lattice is oriented with Au–Au bonds along $y$. Upon inspection of our numerical results, we estimate an overall inelastic scattering probability of $\sim 10-20\%$ for incident electron energies up to 50 eV and optical electric-field strengths $E_{0}\sim 2.5\times 10^{8}\,$V/m. Additionally, we find that the waves corresponding to $\ell=\pm 1$ typically have intensities comparable to that of the transmitted $\ell=0$ beam. In contrast to the amplitude found when adopting the nonrecoil approximation, which only depends on the amplitude of each ${\bf G}$-dependent LEED spot and its multiplexing into different energy sidebands according to the corresponding electron–light coupling coefficient $\beta$ (i.e., considering the time integral of the field along the classical incoming and Bragg-reflected electron paths), a full quantum treatment including recoil reveals that all diffraction orders (propagating and evanescent) can contribute.

Figure 6: Strong coupling between light and diffracted electrons. (a) Probability (blue curve, to first order) and azimuthal outgoing angle (orange curves) of the $(2,1)$ diffraction spot [${\bf G}=(2\pi/d)\,\big{(}\sqrt{3}\hat{\bf x}-\hat{\bf y}\big{)}$] after emitting one photon ($\ell=-1$) as a function of electron energy $\hbar\varepsilon_{0}$ under the configuration of Fig. 5a for a light plane wave impinging along $x$ with polarization along $y$, amplitude $E_{0}=2.5\times 10^{7}$ V/m, and photon energy $\hbar\omega=1$ eV. Rigorous first-order theory (solid-blue curve) is compared with the analytical approximation in Eq. (8) (dashed curve). (b) Same as (a), but shown as a function of photon energy for an incident electron energy $\hbar\varepsilon_{0}=73.3$ eV. A divergent probability is observed when the inelastically diffracted beam becomes grazing at electron and photon energies $\hbar\varepsilon_{0{\rm r}}$ and $\hbar\omega_{\rm r}$ in (a) and (b), respectively. The probability also diverges at low frequencies with the asymptotic behavior $\propto 1/\omega^{4}$ indicated by the dotted-blue curve in (b).

The onset of a new diffraction order during the scattering of waves by a periodic structure causes an anomaly consisting of the depletion of the specularly reflected and directly transmitted beams, as pointed out by Lord Rayleigh in the context of light diffraction by periodic gratings Lord Rayleigh (1907a, b); García de Abajo (2007). In light-assisted inelastic electron diffraction, a related anomaly takes place when a scattered beam becomes grazing (i.e., a vanishing out-of-plane wave vector component $q_{{\bf G}\varepsilon_{\ell}z}=0$ for a combination of reciprocal lattice vector ${\bf G}$ and sideband order $\ell$, or equivalently, $\hbar\varepsilon_{0}+\ell\hbar\omega=(\hbar^{2}/2m_{\rm e})|{\bf G}+\ell{\bf k}_{\parallel}|^{2}$ under normal electron-incidence conditions).
Upon examination of the corresponding coefficients $C_{{\bf G}\ell}^{\pm}$ in Eq. (7b) (see SI), considering $q_{{\bf G}\varepsilon_{\ell}z}\approx 0$ and taking the field amplitude ${\bf E}_{0}\perp\hat{\bf z}$ for simplicity, we can approximate $\displaystyle C_{{\bf G},\ell}^{\pm}\approx$ $\displaystyle\frac{{\rm i}^{\ell}e}{\hbar\omega}\;\frac{{\bf E}_{0}\cdot{\bf G}}{q_{{\bf G}\varepsilon_{\ell}z}\;q_{{\bf G}\varepsilon_{0}z}}\Big{(}B^{+}_{{\bf G}}+B^{-}_{{\bf G}}\Big{)},$ (8) and consequently, the probability $\big{|}C_{{\bf G},\ell}^{\pm}\big{|}^{2}$ diverges as $1/q_{{\bf G}\varepsilon_{\ell}z}^{2}$. An illustrative example is presented in Fig. 6 when varying either the electron energy (Fig. 6a) or the photon energy (Fig. 6b) around the conditions for the grazing emission of an inelastic electron beam. A divergence is observed in the probability calculated to first order in the interaction with light, leading to unphysical values above unity. This indicates that the system enters the nonperturbative regime, requiring higher orders of interaction for an accurate description, but it also reveals that strong electron–light coupling can be reached even for small light intensities. Such a strong interaction results from in-phase scattering by a large number of atoms in the planar lattice, which demands the use of sufficiently broad electron and light beams that can be regarded as plane waves over a wide surface area. This divergence is well captured by Eq. (8) [Fig. 6, dashed-blue curves], in reasonable agreement with our rigorous first-order results [Fig. 6, solid-blue curves, obtained from Eq. (B34) in the Appendix]. We note that, in addition to the explicit $1/\omega$ factor in Eq. (8), the inelastic scattering coefficient $C_{{\bf G},\ell}^{\pm}$ is dominated by a $1/(q_{{\bf G}\varepsilon_{\ell}z}-q_{{\bf G}\varepsilon_{0}z})$ term in Eq. (B34) (see SI), thus resulting in an overall $\propto 1/\omega^{4}$ scaling of the probability with decreasing photon frequency $\omega$, as indicated in Fig. 6b.
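The kinematic origin of this divergence can be isolated with a few lines of code. In the sketch below (ours; smooth prefactors in Eq. (8) are set to unity, so only the scaling with distance to threshold is meaningful), the $1/q_{{\bf G}\varepsilon_{\ell}z}^{2}$ growth is evaluated for one-photon emission ($\ell=-1$) from the $|{\bf G}|=4\pi/d$ shell that contains the $(1,1)$ and $(2,1)$ spots:

```python
import numpy as np
from scipy.constants import hbar, m_e, e

d = 0.288e-9
G = 4 * np.pi / d     # |G| of the shell containing the (1,1) and (2,1) spots
hw = 1.0              # photon energy (eV)

# Grazing condition for l = -1: electron energy at which q_{G,l,z} vanishes
E_thr = (hbar * G)**2 / (2 * m_e * e) + hw
print(f"threshold electron energy for one-photon emission: {E_thr:.2f} eV")

# |C|^2 ~ 1/q_z^2 just above threshold [Eq. (8)], smooth prefactors set to 1:
for dE in (1.0, 0.1, 0.01, 0.001):                 # eV above threshold
    qz = np.sqrt(2 * m_e * dE * e) / hbar          # out-of-plane wave vector of
    print(f"dE = {dE:8.3f} eV -> 1/q_z^2 = {1 / qz**2:.3e} m^2")  # the l=-1 beam
```

Each decade of approach to the threshold gains one decade in $1/q_{z}^{2}$, reproducing the divergence that is regularized only by higher orders of interaction.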
## Discussion

In summary, based on a comprehensive theoretical treatment of the quantum-mechanical interaction between low-energy free electrons and optical fields with comparable photon energies, we have identified a plethora of recoil and quantum effects emerging in the form of dramatic modifications in the energy and angular distributions of electrons undergoing elastic surface scattering and inelastic interaction with optical fields associated with either surface polaritons or propagating light. In particular, we have shown that free electrons interacting with evanescent optical fields can undergo classically forbidden backscattering. Furthermore, the interaction between surface-scattered electrons and unscattered light plane waves renders a nonzero electron–light coupling due to the breaking of translational symmetry in the electron wave function; we propose that suspended atomically thin layers may provide suitable conditions (high transparency to light and large electron scattering) to observe this effect. As a common element in the interaction between free electrons and crystal surfaces, we have incorporated Bragg diffraction, which leads to an interplay between scattering by the atomic lattice and inelastic photon exchanges. The latter can transform propagating diffraction orders into evanescent ones and vice versa. Importantly, strong electron–light coupling is predicted at the onset of an inelastically scattered electron beam, capitalizing on the in-phase interaction with many atoms in the structure. Such a strong coupling regime could be leveraged to optically shape free electrons using moderate light intensities, potentially operating in the continuous-wave regime without damaging the scattering material. Although we have focused on planar surfaces, the concept of combining elastic scattering of low-energy electrons by a material structure with the inelastic interaction with light of comparably low photon energy is more general and could involve the use of non-periodic nanostructures such as holes, tips, and other curved elements to guide and reshape the electron wave function and also increase or spatially modulate its interaction with specific optical modes. In a related context, electron interaction with illuminated atoms in the gas phase has a long tradition Weingartshofer _et al._ (1977, 1983); Francken and Joachain (1990); Arqué López and García de Abajo (2022), which could be revisited as a platform to optically modulate electrons and explore new physics. The novel directions opened by the presented theory and simulations could be experimentally explored in currently available low-energy electron-microscope setups.

## Acknowledgments

The authors acknowledge insightful discussions with V. Di Giulio, S. V. Yalunin, and J. Otto. This work has been supported in part by the European Research Council (Advanced Grants 789104-eNANO and 101055435-ULEEM), the European Commission (Horizon 2020 Grants No. 101017720 FET-Proactive EBEAM and No. 964591-SMART-electron), the Spanish MICINN (PID2020-112625GB-I00 and Severo Ochoa CEX2019-000910-S), the Catalan AGAUR (Grant No. 2023 FI-1 00052) and CERCA Programs, and Fundació Cellex and Fundació Mir-Puig.

## Appendix A Low-energy electron scattering by an illuminated homogeneous surface

We present a self-contained theory of low-energy electron scattering at planar interfaces that either support surface polaritons or are subject to external illumination.

### A.1 Electron scattering by a planar surface

We consider an electron of energy $\hbar\varepsilon_{0}$ elastically scattered by a homogeneous planar structure represented through a one-dimensional (1D) potential energy $V(z)$. Different types of surfaces are discussed below, with the electron impinging from the vacuum region ($z>0$) as a plane wave of well-defined in-plane wave vector ${\bf q}_{0\parallel}$ and total wave vector $q_{0}=\sqrt{2m_{\rm e}\varepsilon_{0}/\hbar}$. The incident out-of-plane wave vector is then $-q_{0z}$ with $q_{0z}=\sqrt{q_{0}^{2}-q_{0\parallel}^{2}}$. The Hamiltonian of the system can be written as $\displaystyle\hat{\mathcal{H}}_{0}({\bf r})=-\frac{\hbar^{2}\nabla^{2}}{2m_{\rm e}}+V(z),$ with $V(z)=0$ in the $z>0$ region. Translational invariance in the $x-y$ plane allows us to factorize the electron wave function as $\displaystyle\psi_{0}({\bf r},t)={\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}-{\rm i}\varepsilon_{0}t}\,\psi_{0}(z),$ (A9) where ${\bf R}=(x,y)$ and the subscript in $\psi_{0}$ indicates that the interaction with light is not yet included.
In the formalism that follows, $\hat{\mathcal{H}}_{0}({\bf r})$ determines $\psi_{0}(z)$ starting from an incident electron plane wave, as well as the noninteracting Green function $\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})$ implicitly defined by Economou (2006) $\displaystyle\big{[}\hat{\mathcal{H}}_{0}({\bf r})-{\rm i}\hbar\partial_{t}\big{]}\,\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=-\delta({\bf r}-{\bf r}^{\prime})\delta(t-t^{\prime}).$ (A10) Time invariance allows us to write the Green function in frequency space as $\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=(2\pi)^{-1}\int{\rm d}\varepsilon\;{\rm e}^{{\rm i}\varepsilon(t^{\prime}-t)}\,\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},\varepsilon)$. Likewise, surface translation symmetry reduces the dependence on in-plane coordinates to the difference ${\bf R}-{\bf R}^{\prime}$. Moving to Fourier space, this dependence can be expressed as a combination of ${\rm e}^{{\rm i}{\bf q}_{\parallel}\cdot({\bf R}-{\bf R}^{\prime})}$ waves with in-plane wave vectors ${\bf q}_{\parallel}$. In addition, the Hamiltonian $\hat{\mathcal{H}}_{0}({\bf r})$ depends on ${\bf R}$ only through the $\nabla^{2}_{\bf R}$ term, so we can separate the frequency into parallel and perpendicular components as $\varepsilon=\hbar q_{\parallel}^{2}/2m_{\rm e}+\varepsilon^{\perp}$. Putting these elements together, we can write $\displaystyle\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=\int\frac{{\rm d}^{2}{\bf q}_{\parallel}}{(2\pi)^{2}}\;{\rm e}^{{\rm i}{\bf q}_{\parallel}\cdot({\bf R}-{\bf R}^{\prime})}\int\frac{{\rm d}\varepsilon}{2\pi}\;{\rm e}^{{\rm i}\varepsilon(t^{\prime}-t)}\;\mathcal{G}_{0}\big{(}z,z^{\prime},\varepsilon-\hbar q_{\parallel}^{2}/2m_{\rm e}\big{)},$ (A11) which, upon insertion into Eq. (A10), leads to the equation $\displaystyle\bigg{[}-\frac{\hbar^{2}}{2m_{\rm e}}\partial_{z}^{2}+V(z)-\hbar\varepsilon^{\perp}\bigg{]}\,\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp})=-\delta(z-z^{\prime})$ (A12) for the 1D Green function $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp})$. We consider three different types of potentials, for which the Green function can be calculated analytically as follows:

* Finite potential barrier. To describe atomically thin two-dimensional (2D) materials, we set $V(z)=U_{0}\delta(z)$. The parameter $U_{0}$ has units of energy times length. Arguing that an atomic monolayer can be described by a barrier of finite thickness $d\lesssim 1$ nm and internal potential $V_{0}$ in the eV range, we expect $U_{0}\approx V_{0}d$ in the eV$\times$nm range when adopting the zero-thickness approximation. The corresponding Green function reads $\displaystyle\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp})$ $\displaystyle=-\frac{{\rm i}m_{\rm e}}{\hbar^{2}q_{z}}\,\Big{[}{\rm e}^{{\rm i}q_{z}|z-z^{\prime}|}+r_{q_{z}}\,{\rm e}^{{\rm i}q_{z}(|z|+|z^{\prime}|)}\Big{]},$ (A13a) where $q_{z}=\sqrt{2m_{\rm e}\varepsilon^{\perp}/\hbar}$ is the out-of-plane wave vector, the first term accounts for free electron propagation, and the second term is proportional to the energy-dependent reflection coefficient of the potential barrier $r_{q_{z}}=\big{(}{\rm i}\hbar^{2}|q_{z}|/m_{\rm e}U_{0}-1\big{)}^{-1}$. Upon inspection, Eq. (A13a) can be readily verified to satisfy Eq. (A12) for the chosen potential.
* Infinite potential barrier. As an instructive limit, we also consider complete surface electron reflection, which can be realized through the potential barrier introduced above by setting $U_{0}=\infty$ (i.e., with $r_{q_{z}}=-1$). From Eq. (A13a), the Green function now reduces to $\displaystyle\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp})$ $\displaystyle=-\frac{{\rm i}m_{\rm e}}{\hbar^{2}q_{z}}\big{(}{\rm e}^{{\rm i}q_{z}|z-z^{\prime}|}-\,{\rm e}^{{\rm i}q_{z}(|z|+|z^{\prime}|)}\big{)}$ (A13b) $\displaystyle=-\frac{{\rm i}m_{\rm e}}{\hbar^{2}q_{z}}\left({\rm e}^{{\rm i}q_{z}|z-z^{\prime}|}-\,{\rm e}^{{\rm i}q_{z}(|z|+|z^{\prime}|)}\right)\,\big{[}\Theta(z)\,\Theta(z^{\prime})+\Theta(-z)\,\Theta(-z^{\prime})\big{]},$ where the second line shows that positive and negative $z$ regions do not mix [i.e., an electron coming from $z>0$ stays fully inside that region after scattering, as one can verify from Eq. (A22) below].

* Free electron. When the interaction with the material can be neglected, we set $V(z)=0$ (i.e., $U_{0}=0$), so we have $r_{q_{z}}=0$, and therefore, Eq. (A13a) reduces to $\displaystyle\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp})$ $\displaystyle=-\frac{{\rm i}m_{\rm e}}{\hbar^{2}q_{z}}{\rm e}^{{\rm i}q_{z}|z-z^{\prime}|}.$ (A13c)

In addition, the initial wave function $\psi_{0}(z)$ produced by an incident wave ${\rm e}^{-{\rm i}q_{0z}z}$ also depends on the type of potential and takes the explicit form $\displaystyle\psi_{0}(z)=\left\\{\begin{array}[]{ll}\big{[}{\rm e}^{-{\rm i}q_{0z}z}+r_{q_{0z}}\,{\rm e}^{{\rm i}q_{0z}z}\big{]}\,\Theta(z)+t_{q_{0z}}\,{\rm e}^{-{\rm i}q_{0z}z}\,\Theta(-z),&\quad\quad\quad\text{(finite potential barrier)}\\\ \big{[}{\rm e}^{-{\rm i}q_{0z}z}-{\rm e}^{{\rm i}q_{0z}z}\big{]}\,\Theta(z),&\quad\quad\quad\text{(infinite potential barrier)}\\\ {\rm e}^{-{\rm i}q_{0z}z},&\quad\quad\quad\text{(free electron)}\end{array}\right.$ (A17) where $t_{q_{z}}=1+r_{q_{z}}$ is the transmission coefficient (evaluated at the incident out-of-plane wave vector $q_{z}=-q_{0z}$), which guarantees the continuity of $\psi_{0}(z)$. In the analysis presented below, we use the 1D Green function and initial wave function for the finite potential barrier, from which the free-electron and infinite-potential-barrier configurations are directly obtained by setting $r_{q_{z}}=0$ and $r_{q_{z}}=-1$, respectively.
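Because Eqs. (A13a)–(A13c) differ only through the reflection coefficient, they can be implemented as a single routine. The following Python fragment is our own transcription of these closed-form expressions (variable names are ours); it evaluates the 1D Green function and recovers the free-electron and infinite-barrier limits by setting $r_{q_{z}}=0$ and $r_{q_{z}}=-1$:

```python
import numpy as np
from scipy.constants import hbar, m_e, e

def green_1d(z, zp, E_perp_eV, U0_eV_nm):
    """1D Green function of Eq. (A13a) for the potential V(z) = U0*delta(z).

    U0_eV_nm = 0 recovers the free case (A13c) via r = 0, and
    U0_eV_nm = np.inf the infinite barrier (A13b) via r = -1.
    """
    qz = np.sqrt(2 * m_e * E_perp_eV * e) / hbar   # out-of-plane wave vector (1/m)
    if U0_eV_nm == 0:
        r = 0.0
    elif np.isinf(U0_eV_nm):
        r = -1.0
    else:
        U0 = U0_eV_nm * e * 1e-9                   # barrier strength in J*m
        r = 1.0 / (1j * hbar**2 * abs(qz) / (m_e * U0) - 1.0)
    pref = -1j * m_e / (hbar**2 * qz)
    return complex(pref * (np.exp(1j * qz * abs(z - zp))
                           + r * np.exp(1j * qz * (abs(z) + abs(zp)))))

z, zp, E = 2e-9, 1e-9, 5.0   # probe points (m) and out-of-plane energy (eV)
for U0 in (0.0, 1.0, np.inf):
    print(f"U0 = {U0:>4} eV*nm: G0 = {green_1d(z, zp, E, U0):.3e} 1/(J m)")
```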
### A.2 Interaction with light

The optical field is introduced through a classical vector potential ${\bf A}({\bf r},t)={\bf A}(z)\,{\rm e}^{{\rm i}{\bf k}_{\parallel}\cdot{\bf R}-{\rm i}\omega t}+{\rm c.c.}$ with well-defined in-plane wave vector ${\bf k}_{\parallel}$ and frequency $\omega$. By working in a gauge with vanishing scalar potential, adopting the minimal-coupling prescription, and neglecting $A^{2}$ terms, the interaction Hamiltonian reduces to $\displaystyle\hat{\mathcal{H}}_{1}({\bf r},t)=\frac{-{\rm i}e\hbar}{m_{\rm e}c}{\bf A}({\bf r},t)\cdot\bm{\nabla},$ (A18) so the total Hamiltonian becomes $\hat{\mathcal{H}}({\bf r},t)=\hat{\mathcal{H}}_{0}({\bf r})+\hat{\mathcal{H}}_{1}({\bf r},t)$. The electron wave function $\psi({\bf r},t)$ is then modified relative to $\psi_{0}({\bf r},t)$ as described by the Lippmann–Schwinger (LS) equation Messiah (1966); Sakurai (1994) $\psi({\bf r},t)=\psi_{0}({\bf r},t)+\int{\rm d}^{3}{\bf r}^{\prime}{\rm d}t^{\prime}\;\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})\,\hat{\mathcal{H}}_{1}({\bf r}^{\prime},t^{\prime})\,\psi({\bf r}^{\prime},t^{\prime})$. At this point, it is convenient to write the perturbation series $\displaystyle\psi({\bf r},t)=\sum_{n=0}^{\infty}\psi^{(n)}({\bf r},t),$ (A19) where the $n$-th term is order $n$ in the vector potential ${\bf A}$. By inserting Eq. (A19) into the LS equation, we find the recursion formula $\displaystyle\psi^{(n)}({\bf r},t)=\int{\rm d}^{3}{\bf r}^{\prime}\int{\rm d}t^{\prime}\,\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})\,\hat{\mathcal{H}}_{1}({\bf r}^{\prime},t^{\prime})\,\psi^{(n-1)}({\bf r}^{\prime},t^{\prime})$ (A20) for $n>0$, starting with $\psi^{(0)}\equiv\psi_{0}$. Because the vector potential depends on time through ${\rm e}^{\pm{\rm i}\omega t}$, each subsequent perturbation order introduces the possibility of absorbing or emitting one additional photon. Hence, at a given order $n$ the electron energy can be modified to $\hbar(\varepsilon_{0}+\ell\omega)$ with $|\ell|\leq n$. Using this and further imposing conservation of in-plane momentum, we can decompose the post-interaction wave function as $\displaystyle\psi({\bf r},t)=\sum_{n=0}^{\infty}\sum_{\ell=-n}^{n}\varphi_{n\ell}(z)\,{\rm e}^{{\rm i}({\bf q}_{0\parallel}+\ell\,{\bf k}_{\parallel})\cdot{\bf R}}\,{\rm e}^{-{\rm i}(\varepsilon_{0}+\ell\omega)t}.$ (A21) We now introduce Eqs. (A11) and (A21) into Eq. (A20), use the factorization in Eq. (A9), and identify terms with the same $({\bf R},t)$ dependence on both sides of the equation. This procedure shows that Eq. (A21) is indeed a solution when the $z$-dependent coefficients are determined by iterating the relation $\displaystyle\varphi_{n\ell}(z)=\frac{\hbar e}{m_{\rm e}c}\int{\rm d}{z^{\prime}}\,\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})\;\Big{\\{}{\bf A}(z^{\prime})$ $\displaystyle\cdot\big{[}{\bf q}_{0\parallel}+(\ell-1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\partial_{z^{\prime}}\big{]}\,\varphi_{n-1,\ell-1}(z^{\prime})$ $\displaystyle+{\bf A}^{*}(z^{\prime})$ $\displaystyle\cdot\big{[}{\bf q}_{0\parallel}+(\ell+1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\partial_{z^{\prime}}\big{]}\,\varphi_{n-1,\ell+1}(z^{\prime})\Big{\\}},$ (A22) where $\varepsilon_{\ell}^{\perp}=\varepsilon_{0}+\ell\omega-\hbar|{\bf q}_{0\parallel}+\ell\,{\bf k}_{\parallel}|^{2}/2m_{\rm e}$. Equation (A22), which is reproduced as Eq. (2) in the main text, applies to $n>0$ with a seeding term $\varphi_{00}(z)=\psi_{0}(z)$ and imposing $\varphi_{n\ell}(z)=0$ for $|\ell|>n$. For simplicity, we assume the in-plane wave vectors of light and electron to both lie along the surface direction $x$ (i.e., ${\bf q}_{0\parallel}=q_{0\parallel}\hat{\bf x}$ and ${\bf k}_{\parallel}=k_{\parallel}\hat{\bf x}$). In what follows, we consider two types of optical fields:

* Unscattered light plane waves. For sufficiently thin films (e.g., self-standing atomic monolayers), light scattering by the material can be approximately neglected. Then, taking an incident wave vector ${\bf k}=k_{\parallel}\hat{\bf x}-k_{z}\hat{\bf z}$ and an electric field amplitude ${\bf E}_{0}$, the $z$ dependence of the vector potential reduces to $\displaystyle{\bf A}(z)=\frac{1}{{\rm i}k}{\bf E}_{0}\,{\rm e}^{{\rm i}k_{z}z}.$ (A23a) We consider propagating light with $k_{\parallel}\leq k=\omega/c$ and $k_{z}=\sqrt{k^{2}-k_{\parallel}^{2}}$.
* Surface polaritons. For films of negligible thickness, the electric field associated with a polariton of in-plane wave vector ${\bf k}_{\parallel}=k_{\parallel}\hat{\bf x}$ and frequency $\omega$ can be written as ${\bf E}({\bf r},t)={\bf E}({\bf r})\,{\rm e}^{-{\rm i}\omega t}+{\rm c.c.}$ with ${\bf E}({\bf r})=E_{0}\;\big{(}k_{\parallel}^{2}+\kappa^{2}\big{)}^{-1/2}\,\big{(}{\rm i}\,\kappa\,\hat{\bf x}-\operatorname{sign}\\{z\\}\,k_{\parallel}\,\hat{\bf z}\,\big{)}\,{\rm e}^{{\rm i}k_{\parallel}x-\kappa|z|}$, where $E_{0}$ is a global amplitude and $\kappa=\sqrt{k_{\parallel}^{2}-k^{2}}$ determines an out-of-plane exponential decay with $k_{\parallel}>k$. This expression is constructed to satisfy $\nabla\cdot{\bf E}({\bf r},t)=0$ at $z\neq 0$. Then, the vector potential reduces to ${\bf A}({\bf r})={\bf E}({\bf r})/{\rm i}k={\bf A}(z)\,{\rm e}^{{\rm i}k_{\parallel}x}$, where the $z$ dependence is given by $\displaystyle{\bf A}(z)=\frac{E_{0}}{k}\frac{1}{\sqrt{\kappa^{2}+k_{\parallel}^{2}}}\;\Big{(}\kappa\hat{\bf x}+{\rm i}\,k_{\parallel}\operatorname{sign}\\{z\\}\,\hat{\bf z}\Big{)}\,{\rm e}^{-\kappa|z|}.$ (A23b) We find it convenient to express the in-plane wave vector as $k_{\parallel}=n_{\rm eff}k$ in terms of an effective refractive index $n_{\rm eff}$. In this work, we take values $n_{\rm eff}\gg 1$, so that $\kappa\approx k_{\parallel}$, and therefore, ${\bf A}(z)\approx\big{(}E_{0}/\sqrt{2}\,k\big{)}\;\big{(}\hat{\bf x}+{\rm i}\,\operatorname{sign}\\{z\\}\,\hat{\bf z}\big{)}\,{\rm e}^{-k_{\parallel}|z|}$. However, we use the fully retarded expressions in our calculations. Incidentally, these expressions can also be used for arbitrary planar material systems contained within the $z<0$ region if we are only interested in the outer $z>0$ half-space (i.e., without electron penetration inside the material).

Next, we solve Eq. (A22) by plugging the expressions for $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ and ${\bf A}(z)$ given in Eqs. (A13) and (A23).

### A.3 Recursive solution for the self-consistent electron wave function

We solve Eq. (A22) by expanding the wave function components $\varphi_{n\ell}(z)$ following a recursive procedure that determines their explicit dependence on $z$, the parameters involved in such dependence, and how these are modified at each iteration step, given the analytical expressions for $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ and ${\bf A}(z)$ in Eqs. (A13) and (A23). Because these quantities depend on $z$ through exponential factors, the $z^{\prime}$ integral at each iteration $n$ in Eq. (A22) produces additional exponential factors consisting of a combination of those at order $n-1$ plus those of $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ and ${\bf A}(z)$, so the wave function components must take the general form $\displaystyle\varphi_{n\ell}(z)=\sum_{s=\pm 1}\sum_{j=1}^{N_{n\ell}}\alpha_{n\ell}^{js}\,{\rm e}^{\zeta_{n\ell}^{js}z}\,\Theta{(sz)}$ (A24) [Eq. (3) in the main text], where the number of terms in this sum ($N_{n\ell}$) increases with $n$ and also depends on $\ell$. Specifically, the vector potentials under consideration can be written as $\displaystyle{\bf A}(z)=\sum_{s=\pm 1}{\bf A}^{s}\,{\rm e}^{\eta^{s}z}\,\Theta{(sz)},$ (A25) where the coefficients ${\bf A}^{\pm 1}$ and $\eta^{\pm 1}$ can be directly identified from Eqs. (A23). Likewise, the Green function $\mathcal{G}_{0}(z,z^{\prime},\varepsilon^{\perp}_{\ell})$ is given by Eq. (A13a) with the reflection coefficient $r_{q_{\ell z}}$ set to different values depending on the choice of potential $V(z)$.
Note that in the $\ell$ channel, the out-of-plane wave vector is given by $q_{\ell z}=\sqrt{2m_{\rm e}\varepsilon^{\perp}_{\ell}/\hbar+{\rm i}0^{+}}=\sqrt{2m_{\rm e}(\varepsilon_{0}+\ell\omega)/\hbar-|{\bf q}_{0\parallel}+\ell\,{\bf k}_{\parallel}|^{2}+{\rm i}0^{+}}$, where the square root is taken to yield a positive imaginary part. By inserting Eqs. (A13a), (A24), and (A25) into Eq. (A22), we obtain the equivalent recursion formula (for $\pm z>0$) $\displaystyle\sum_{j}\alpha_{n\ell}^{j\pm}\,\boxed{{\rm e}^{\zeta_{n\ell}^{j\pm}z}}=$ $\displaystyle-\frac{{\rm i}e}{\hbar cq_{\ell z}}\sum_{s=\pm 1}\sum_{j}\Bigg{\\{}2{\rm i}q_{\ell z}\delta_{s,\pm 1}\Bigg{[}\boxed{{\rm e}^{(\eta^{s}+\zeta_{n-1,\ell-1}^{js})z}}\;{\bf A}^{s}\cdot\big{[}{\bf q}_{0\parallel}+(\ell-1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\zeta_{n-1,\ell-1}^{js}\big{]}\,\frac{\alpha_{n-1,\ell-1}^{js}}{q_{\ell z}^{2}+\big{(}\eta^{s}+\zeta_{n-1,\ell-1}^{js}\big{)}^{2}}$ $\displaystyle\quad\quad\quad+\boxed{{\rm e}^{(\eta^{s*}+\zeta_{n-1,\ell+1}^{js})z}}\;{\bf A}^{s*}\cdot\big{[}{\bf q}_{0\parallel}+(\ell+1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\zeta_{n-1,\ell+1}^{js}\big{]}\,\frac{\alpha_{n-1,\ell+1}^{js}}{q_{\ell z}^{2}+\big{(}\eta^{s*}+\zeta_{n-1,\ell+1}^{js}\big{)}^{2}}\Bigg{]}$ (A26) $\displaystyle+\boxed{{\rm e}^{\pm{\rm i}q_{\ell z}z}}\;\Bigg{[}$ $\displaystyle{\bf A}^{s}\cdot\big{[}{\bf q}_{0\parallel}+(\ell-1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\zeta_{n-1,\ell-1}^{js}\big{]}\,\alpha_{n-1,\ell-1}^{js}\;\Bigg{(}\frac{-r_{q_{\ell z}}}{{\rm i}q_{\ell z}+s\;\big{(}\eta^{s}+\zeta_{n-1,\ell-1}^{js}\big{)}}\pm\frac{s}{{\rm i}q_{\ell z}\mp\big{(}\eta^{s}+\zeta_{n-1,\ell-1}^{js}\big{)}}\Bigg{)}$ $\displaystyle+$ $\displaystyle{\bf A}^{s*}\cdot\big{[}{\bf q}_{0\parallel}+(\ell+1)\,{\bf k}_{\parallel}-{\rm i}\hat{\bf z}\,\zeta_{n-1,\ell+1}^{js}\big{]}\,\alpha_{n-1,\ell+1}^{js}\;\Bigg{(}\frac{-r_{q_{\ell z}}}{{\rm i}q_{\ell z}+s\;\big{(}\eta^{s*}+\zeta_{n-1,\ell+1}^{js}\big{)}}\pm\frac{s}{{\rm i}q_{\ell z}\mp\big{(}\eta^{s*}+\zeta_{n-1,\ell+1}^{js}\big{)}}\Bigg{)}\Bigg{]}\Bigg{\\}},$ where we have performed the $z^{\prime}$ integrals using the identities $\displaystyle\int{\rm d}{z^{\prime}}\,\Theta{(\pm z^{\prime})}\,{\rm e}^{{\rm i}q_{\ell z}|z^{\prime}|+\Delta z^{\prime}}=\frac{-1}{{\rm i}q_{\ell z}\pm\Delta},$ (A27a) $\displaystyle\int{\rm d}{z^{\prime}}\,\Theta{(\pm z^{\prime})}\,{\rm e}^{{\rm i}q_{\ell z}|z-z^{\prime}|+\Delta z^{\prime}}=\frac{\pm\operatorname{sign}\\{z\\}}{{\rm i}q_{\ell z}-\operatorname{sign}\\{z\\}\Delta}\,{\rm e}^{{\rm i}q_{\ell z}|z|}+\frac{2{\rm i}q_{\ell z}}{q_{\ell z}^{2}+\Delta^{2}}\,{\rm e}^{\Delta z}\;\Theta(\pm z).$ (A27b) Incidentally, the exponentials inside the integrals vanish at $|z^{\prime}|\to\infty$ because they contain either evanescent electron/light components or propagating fields in which the electron wave vectors $q_{\ell z}$ possess an arbitrarily small positive imaginary part, consistent with the use of retarded Green functions.
At every iteration step $n>0$, we solve Eq. (A26) by setting the coefficients $\alpha_{n\ell}^{j\pm}$ and $\zeta_{n\ell}^{j\pm}$ on the left-hand side to match the exponential terms on the right-hand side (see boxed expressions), starting with the $n=0$ seeding values $\big{\\{}\alpha_{00}^{1,+1}=1,\,\zeta_{00}^{1,+1}=-{\rm i}q_{0z}\big{\\}}$, $\big{\\{}\alpha_{00}^{2,+1}=r_{q_{0z}},\,\zeta_{00}^{2,+1}={\rm i}q_{0z}\big{\\}}$, and $\big{\\{}\alpha_{00}^{1,-1}=t_{q_{0z}},\,\zeta_{00}^{1,-1}=-{\rm i}q_{0z}\big{\\}}$, as determined from Eq. (A17), and noticing that $|\ell|\leq n$. When considering polaritons, for which $\eta^{s}$ in Eq. (A25) has a finite real part that makes the optical field evanescent, inspection of the numerical solution of Eq. (A26) corroborates that the exponential coefficients of all propagating components (i.e., those with vanishing ${\rm Re}\\{\zeta_{n\ell}^{js}\\}$) satisfy $\zeta_{n\ell}^{js}={\rm i}\,sq_{\ell z}$, as expected from energy conservation after a net exchange of $\ell$ photons. The remaining coefficients are found to possess finite real parts satisfying the condition $s\,{\rm Re}\\{\zeta_{n\ell}^{js}\\}<0$ (i.e., they are evanescent). However, under external illumination, Eq. (A25) comprises propagating components (i.e., imaginary coefficients $\eta^{s}$), so the interaction region extends indefinitely in $z$, affecting the electron in the way described by Volkov Wolkow (1935). A direct solution of Eq. (A26) is then problematic because one needs to separate surface-mediated inelastic transitions from far-field electron–light interactions. In actual experiments, the range of the applied light fields is limited to a finite region near the surface (e.g., when using a laser beam), so we find it more convenient to introduce a finite real part in the exponential coefficients as ${\rm Re}\\{\eta^{s}\\}=-s\gamma$ with $\gamma>0$ to prevent electron exposure to far optical fields. Then, we iterate Eq. (A26) to the desired order $n$, perform the $|z|\to\infty$ limit to calculate the scattered electron intensity, and finally take $\gamma\to 0^{+}$. In practice, this amounts to calculating the electron intensity after eliminating the terms of Eq. (A24) in which $\zeta_{n\ell}^{js}$ depends explicitly on $\gamma$. The remaining far-field terms also satisfy $\zeta_{n\ell}^{js}={\rm i}\,sq_{\ell z}$.
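In practice, the iteration amounts to bookkeeping over exponential terms: each order $n$ maps the set of exponents $\zeta$ at order $n-1$ onto new exponents through the boxed factors in Eq. (A26). The fragment below illustrates the structure only (ours; the $\alpha$ amplitudes and their lengthy prefactors are deliberately omitted, and the numerical values are arbitrary), together with the resulting propagating/evanescent classification for a polaritonic field above the surface:

```python
def iterate_exponents(zetas, eta, q_lz):
    """One bookkeeping step of Eq. (A26) for the upper (s = +1) region.

    Only the exponential coefficients zeta are tracked: each source term at
    order n-1 generates the boxed exponentials e^{(eta+zeta)z}, e^{(eta*+zeta)z},
    and e^{i q_lz z}.  The alpha amplitudes are omitted.
    """
    new = set()
    for zeta in zetas:
        new.add(eta + zeta)                       # field factor A(z')
        new.add(complex(eta).conjugate() + zeta)  # conjugate field factor A*(z')
        new.add(1j * q_lz)                        # generated by the Green function
    return new

# Polaritonic field above the surface: eta = -kappa (evanescent); arbitrary units
kappa, q0z, q1z = 2.0, 1.0, 1.2
seed = {-1j * q0z, 1j * q0z}                      # incident + reflected waves (n = 0)
for zeta in sorted(iterate_exponents(seed, -kappa, q1z),
                   key=lambda w: (w.real, w.imag)):
    kind = "propagating" if abs(zeta.real) < 1e-12 else "evanescent"
    print(f"zeta = {zeta:+.2f} ({kind}; Re(zeta) <= 0: {zeta.real <= 0})")
```

The output reproduces the two families discussed above: a propagating term with $\zeta={\rm i}q_{\ell z}$ and evanescent terms with ${\rm Re}\\{\zeta\\}<0$ in the $z>0$ region.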
## Appendix B Low-energy electron scattering by an illuminated atomic monolayer including diffraction

While in Appendix A we discussed homogeneous surfaces, the in-plane atomic corrugation needs to be incorporated in a more realistic scenario. Next, we extend the theory to incorporate diffraction by an illuminated monoatomic layer.

### B.1 Low-energy electron diffraction by an atomic monolayer

Low-energy electron diffraction (LEED) is a well-established technique for the determination of surface atomic structures based on the measurement, and comparison with theory, of the electron-energy dependence of the intensities associated with different Bragg reflections by periodic atomic lattices Pendry (1974); Van Hove _et al._ (1986). Here, we borrow the theoretical methods developed to simulate LEED and describe electron diffraction by an atomic monolayer Pendry (1974). Following the notation of Appendix A, we consider an electron beam of energy $\hbar\varepsilon_{0}$ impinging on a crystalline atomic monolayer with wave vector ${\bf q}_{0}={\bf q}_{0\parallel}-q_{0z}\,\hat{\bf z}$, where $q_{0z}=\sqrt{q_{0}^{2}-q_{0\parallel}^{2}}$. For simplicity, we assume a simple lattice made of a single atomic species, periodically arranged at positions ${\bf R}_{\alpha}$ in the $z=0$ plane with unit-cell area $A$. The Hamiltonian of the system can be approximated as $\displaystyle\hat{\mathcal{H}}_{0}({\bf r})=-\frac{\hbar^{2}\nabla^{2}}{2m_{\rm e}}+\sum_{\alpha}V_{\rm atom}(|{\bf r}-{\bf R}_{\alpha}|),$ where $V_{\rm atom}(r)$ is the atomic potential, considered to be isotropic. The electron is diffracted by the lattice and exchanges discrete amounts of in-plane momentum $\hbar{\bf G}$ determined by the 2D reciprocal lattice vectors ${\bf G}$, resulting in scattered beams with three-dimensional wave vectors ${\bf q}_{0\parallel}+{\bf G}\pm q_{{\bf G}\varepsilon_{0}z}\hat{\bf z}$ directed along upward ($+$) and downward ($-$) directions. The out-of-plane wave vector component $q_{{\bf G}\varepsilon_{0}z}=\sqrt{q_{0}^{2}-|{\bf q}_{0\parallel}+{\bf G}|^{2}+{\rm i}0^{+}}$ is determined by energy conservation with the square root taken to yield a positive imaginary part. For $|{\bf q}_{0\parallel}+{\bf G}|<q_{0}$, the diffracted waves propagate to the far field, giving rise to observable LEED spots, whereas waves with $|{\bf q}_{0\parallel}+{\bf G}|>q_{0}$ are evanescent. The total electron wave function outside the layer should take the form $\displaystyle\psi_{0}({\bf r},t)=\bigg{[}{\rm e}^{-{\rm i}q_{0z}z}+\sum_{{\bf G},\pm}B^{\pm}_{{\bf G}}\;{\rm e}^{{\rm i}({\bf G}\cdot{\bf R}\pm q_{{\bf G}\varepsilon_{0}z}z)}\;\Theta{(\pm z)}\bigg{]}\;{\rm e}^{{\rm i}({\bf q}_{0\parallel}\cdot{\bf R}-\varepsilon_{0}t)},$ (B28) where $B^{\pm}_{{\bf G}}$ are Bragg scattering coefficients. To determine $B^{\pm}_{{\bf G}}$ in Eq. (B28), we start by expressing the incident wave around each atomic position ${\bf R}_{\alpha}$ Pendry (1974) as $\displaystyle{\rm e}^{{\rm i}{\bf q}_{0}\cdot{\bf r}}={\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}_{\alpha}}\sum_{L}\phi^{\mathrm{inc}}_{L}j_{L}\big{[}q_{0}({\bf r}-{\bf R}_{\alpha})\big{]},$ (B29a) where $\displaystyle\phi^{\mathrm{inc}}_{L}=4\pi\,Y_{L}^{*}\big{(}\Omega_{{\bf q}_{0}}\big{)}$ (B29b) are expansion coefficients involving spherical harmonics $Y_{L}$ labeled by the quantum numbers $L=(l,m)$, while $j_{L}(q_{0}{\bf r})={\rm i}^{l}j_{l}(q_{0}r)Y_{L}\big{(}\Omega_{\hat{\bf r}}\big{)}$ are regular spherical waves with a radial dependence introduced through spherical Bessel functions $j_{l}$. Note that we separate the phase associated with the propagation of the incident wave to each atomic site as a global factor ${\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}_{\alpha}}$ in Eq. (B29a). Incidentally, under normal incidence (${\bf q}_{0}=-q_{0}\hat{\bf z}$) we have $\phi^{\mathrm{inc}}_{L}=\delta_{m0}\,(-1)^{l}\sqrt{4\pi\,(2l+1)}$. In the far field ($q_{0}r\gg 1$), regular waves have the asymptotic behavior $j_{l}(q_{0}r)=\frac{1}{2}\Big{[}h^{(1)}_{l}(q_{0}r)+h^{(2)}_{l}(q_{0}r)\Big{]}\approx\frac{1}{2q_{0}r}\Big{[}{\rm i}^{-(l+1)}{\rm e}^{{\rm i}q_{0}r}+{\rm i}^{l+1}{\rm e}^{-{\rm i}q_{0}r}\Big{]},$ which corresponds to a combination of outgoing and incoming waves expressed in terms of the Hankel functions $h^{(1)}_{l}$ and $h^{(2)}_{l}$, respectively. Scattering by an atom only affects the outgoing wave, which retains its magnitude and quantum numbers $L$ because $V_{\rm atom}(r)$ is a real and isotropic potential.
Therefore, the waves become $j_{l}(q_{0}r)\rightarrow\frac{1}{2}\Big{[}{\rm e}^{2{\rm i}\delta_{l}}h^{(1)}_{l}(q_{0}r)+h^{(2)}_{l}(q_{0}r)\Big{]},$ where $\delta_{l}$ are phase shifts that depend on the orbital quantum number $l$ and the atomic potential (i.e., the type of atom). Here, we calculate $\delta_{l}$ numerically assuming free atoms, for which the potential is obtained by using a Dirac-Fock code Ankudinov _et al._ (1996), while electron scattering is computed by solving the Schrödinger equation following a well-established procedure Salvat and Mayol (1991). For an individual atom, the scattered components can thus be written as García de Abajo _et al._ (2001) $\sum_{L}\phi^{\mathrm{scat}}_{L}h^{(+)}_{L}\big{[}q_{0}({\bf r}-{\bf R}_{\alpha})\big{]}$, where $h^{(+)}_{L}(q_{0}{\bf r})={\rm i}^{l+1}h^{(1)}_{l}(q_{0}r)Y_{L}\big{(}\Omega_{\hat{\bf r}}\big{)}$ are spherical outgoing waves and $\phi^{\mathrm{scat}}_{L}=t_{l}\,\phi^{\mathrm{inc}}_{L}$ are scattering coefficients that involve the scattering matrix elements $t_{l}={\rm e}^{{\rm i}\delta_{l}}\sin\delta_{l}$. In the atomic monolayer, lattice symmetry allows us to write $\displaystyle\psi_{0}({\bf r},t)=\bigg{[}{\rm e}^{{\rm i}{\bf q}_{0}\cdot{\bf r}}+\sum_{\alpha}{\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}_{\alpha}}\sum_{L}\phi^{\mathrm{scat}}_{L}\,h^{(+)}_{L}\big{[}q_{0}({\bf r}-{\bf R}_{\alpha})\big{]}\bigg{]}\;{\rm e}^{-{\rm i}\varepsilon_{0}t},$ (B30) where the propagation phase factor ${\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot{\bf R}_{\alpha}}$ is retained in the contribution of scattering by each lattice site $\alpha$. Now, we incorporate multiple scattering in the monolayer by supplementing the incident wave at each atom with the result of scattering from the rest of the atoms. More precisely, we expand the waves outgoing from each position $\beta$ as a combination of regular waves at each other position $\alpha$ using the identity Pendry (1974) $h^{(+)}_{L^{\prime}}\big{[}q_{0}({\bf r}-{\bf R}_{\beta})\big{]}=\sum_{L}G_{\alpha\beta,LL^{\prime}}\,j_{L}\big{[}q_{0}({\bf r}-{\bf R}_{\alpha})\big{]}$, where $G_{\alpha\beta,LL^{\prime}}=4\pi\sum_{L^{\prime\prime}}h^{(+)}_{L^{\prime\prime}}\big{[}q_{0}({\bf R}_{\alpha}-{\bf R}_{\beta})\big{]}\int{\rm d}\Omega\;Y_{L}(\Omega)Y_{L^{\prime\prime}}(\Omega)Y_{L^{\prime}}^{*}(\Omega)$. From these considerations, we find the self-consistent equation $\phi^{\text{scat}}_{L}=t_{l}\,\big{[}\phi^{\text{inc}}_{L}+\sum_{L^{\prime}}\,G_{LL^{\prime}}({\bf q}_{0\parallel})\,\phi^{\text{scat}}_{L^{\prime}}\big{]}$, or equivalently, $\displaystyle\phi^{\text{scat}}_{L}=\sum_{L^{\prime}}\big{[}S^{-1}\big{]}_{LL^{\prime}}t_{l^{\prime}}\,\phi^{\text{inc}}_{L^{\prime}},$ (B31) where we define the lattice sum $G_{LL^{\prime}}({\bf q}_{0\parallel})=\sum_{\beta\neq\alpha}\,{\rm e}^{{\rm i}{\bf q}_{0\parallel}\cdot({\bf R}_{\beta}-{\bf R}_{\alpha})}\,G_{\alpha\beta,LL^{\prime}}$, which is trivially independent of $\alpha$, and the matrix $S_{LL^{\prime}}=\delta_{LL^{\prime}}-t_{l}\,G_{LL^{\prime}}({\bf q}_{0\parallel})$. In our simulations, we use the efficient numerical methods developed by Kambe Kambe (1967); Pendry (1974) to carry out the sum over $\beta$ and find converged results by truncating the multipolar expansion to $l\leq 5$.
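Numerically, Eq. (B31) is a standard linear solve: with $T={\rm diag}(t_{l})$ and $G$ the matrix of lattice sums, the self-consistent coefficients follow from $(1-TG)\,\phi^{\rm scat}=T\,\phi^{\rm inc}$. The sketch below is our own illustration (random placeholder matrices stand in for the actual phase shifts and Kambe lattice sums, which are not reproduced here); it demonstrates the solve and verifies the fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36  # number of (l, m) channels for a multipolar cutoff l <= 5

# Placeholder inputs (hypothetical values): in an actual LEED code, delta_l
# are atomic phase shifts and G_LL' are Kambe's lattice sums.
delta = rng.uniform(0.0, np.pi, n)
t = np.exp(1j * delta) * np.sin(delta)        # t_l = e^{i delta_l} sin(delta_l)
T = np.diag(t)
G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / n
phi_inc = rng.normal(size=n) + 1j * rng.normal(size=n)

# Self-consistency: phi_scat = T (phi_inc + G phi_scat), i.e. S phi_scat = T phi_inc
S = np.eye(n) - T @ G                          # S = 1 - T G  [cf. Eq. (B31)]
phi_scat = np.linalg.solve(S, T @ phi_inc)

# Verify that the fixed point is satisfied to machine precision:
residual = np.linalg.norm(phi_scat - T @ (phi_inc + G @ phi_scat))
print(f"fixed-point residual: {residual:.2e}")
```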
Pendry (1974)) $h_{L}^{(+)}(q_{0}{\bf r})=-\frac{1}{2\pi^{2}q_{0}}\int{\rm d}^{3}{\bf q}\;Y_{L}\big{(}\Omega_{\bf q}\big{)}\,\frac{{\rm e}^{{\rm i}{\bf q}\cdot{\bf r}}}{q_{0}^{2}-q^{2}+{\rm i}0^{+}}$ and then convert the sum over $\alpha$ into a sum over reciprocal lattice vectors as $\sum_{\alpha}{\rm e}^{{\rm i}({\bf q}_{0\parallel}-{\bf q}_{\parallel})\cdot{\bf R}_{\alpha}}=\frac{(2\pi)^{2}}{A}\sum_{\bf G}\delta({\bf q}_{0\parallel}+{\bf G}-{\bf q}_{\parallel}).$ The ${\bf q}_{\parallel}$ integral is done using these $\delta$-functions, while the integral over $q_{z}$ can be performed by closing the integration contour in the upper (lower) complex $q_{z}$ plane for $z>0$ ($z<0$). Finally, we recover an expression like Eq. (B28), from which we identify $\displaystyle B^{\pm}_{\bf G}$ $\displaystyle=\frac{2\pi{\rm i}}{A\,q_{0}q_{{\bf G}\varepsilon_{0}z}}\sum_{L}Y_{L}\big{(}\Omega_{{\bf q}_{0\parallel}+{\bf G}\pm q_{{\bf G}\varepsilon_{0}z}\hat{\bf z}}\big{)}\,\phi_{L}^{\mathrm{scat}}=\frac{8\pi^{2}{\rm i}}{A\,q_{0}q_{{\bf G}\varepsilon_{0}z}}\sum_{LL^{\prime}}Y_{L}\big{(}\Omega_{{\bf q}_{0\parallel}+{\bf G}\pm q_{{\bf G}\varepsilon_{0}z}\hat{\bf z}}\big{)}\,\big{[}S^{-1}\big{]}_{LL^{\prime}}\,t_{l^{\prime}}\,Y_{L^{\prime}}^{*}\big{(}\Omega_{{\bf q}_{0}}\big{)},$ (B32) where the rightmost identity is obtained by inserting the scattering wave coefficients $\phi_{L}^{\mathrm{scat}}$ obtained from Eqs. (B29b) and (B31). From the above considerations, we can also obtain the Green function $\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},\varepsilon)$ corresponding to an electron energy $\hbar\varepsilon$. In analogy to Eq. (A11), we separate it into the contribution of different in-plane wave vector components ${\bf q}_{\parallel}$, each of them connected to the whole set of in-plane wave vectors $\big\{{\bf q}_{\parallel}+{\bf G}\big\}$ (differing by reciprocal lattice vectors ${\bf G}$) via scattering by the atomic monolayer. We thus restrict ${\bf q}_{\parallel}$ to the first Brillouin zone (1BZ) and write the Green function in terms of its frequency-space components as $\displaystyle\mathcal{G}_{0}({\bf r},{\bf r}^{\prime},t-t^{\prime})=\sum_{{\bf G}{\bf G}^{\prime}}\int_{\rm 1BZ}\frac{{\rm d}^{2}{\bf q}_{\parallel}}{(2\pi)^{2}}\;{\rm e}^{{\rm i}({\bf q}_{\parallel}+{\bf G})\cdot{\bf R}-{\rm i}({\bf q}_{\parallel}+{\bf G}^{\prime})\cdot{\bf R}^{\prime}}\int\frac{{\rm d}\varepsilon}{2\pi}\;{\rm e}^{{\rm i}\varepsilon(t^{\prime}-t)}\;\mathcal{G}_{{\bf G}{\bf G}^{\prime}}\big{(}{\bf q}_{\parallel},z,z^{\prime},\varepsilon\big{)}.$ (B33a) Energy conservation in the regions outside the monolayer (i.e., above and below) implies that $\mathcal{G}_{{\bf G}{\bf G}^{\prime}}\big{(}{\bf q}_{\parallel},z,z^{\prime},\varepsilon\big{)}$ must be made of components with a $(z,z^{\prime})$ dependence given by ${\rm e}^{\pm{\rm i}q_{{\bf G}\varepsilon z}z\pm{\rm i}q_{{\bf G}^{\prime}\varepsilon z}z^{\prime}}$, where $q_{{\bf G}\varepsilon z}=\sqrt{2m_{\rm e}\varepsilon/\hbar-|{\bf q}_{\parallel}+{\bf G}|^{2}+{\rm i}0^{+}}$. Similarly to Eq.
(A13a), we expect the Green function to consist of a free-space term plus contributions arising from wave scattering at the atomic layer (i.e., reflection and transmission), that is, $\displaystyle\mathcal{G}_{{\bf G}{\bf G}^{\prime}}\big{(}{\bf q}_{\parallel},z,z^{\prime},\varepsilon\big{)}=-\frac{{\rm i}m_{\rm e}}{\hbar^{2}q_{{\bf G}\varepsilon z}}\,\Big{[}\delta_{{\bf G}{\bf G}^{\prime}}{\rm e}^{{\rm i}q_{{\bf G}\varepsilon z}|z-z^{\prime}|}+r^{ss^{\prime}}_{{\bf G}{\bf G}^{\prime}}({\bf q}_{\parallel},\varepsilon)\,{\rm e}^{{\rm i}q_{{\bf G}\varepsilon z}|z|+{\rm i}q_{{\bf G}^{\prime}\varepsilon z}|z^{\prime}|}\Big{]},$ (B33b) where $r^{ss^{\prime}}_{{\bf G}{\bf G}^{\prime}}({\bf q}_{\parallel},\varepsilon)$ are Bragg scattering coefficients with $s=\operatorname{sign}\{z\}$ and $s^{\prime}=\operatorname{sign}\{z^{\prime}\}$ denoting the direction of wave propagation (i.e., upward and downward for $s,s^{\prime}=1$ and $-1$, respectively). Following the same steps as in the derivation presented above for Eqs. (B28) and (B32), but now applied to incident waves ${\rm e}^{{\rm i}({\bf q}_{\parallel}+{\bf G}^{\prime})\cdot{\bf R}+{\rm i}s^{\prime}q_{{\bf G}^{\prime}\varepsilon z}z}$ and outgoing waves ${\rm e}^{{\rm i}({\bf q}_{\parallel}+{\bf G})\cdot{\bf R}+{\rm i}sq_{{\bf G}\varepsilon z}z}$ (with $s=\pm 1$ and $s^{\prime}=\pm 1$ indicating upward/downward propagation), we can readily write $\displaystyle r^{ss^{\prime}}_{{\bf G}{\bf G}^{\prime}}({\bf q}_{\parallel},\varepsilon)=\frac{8\pi^{2}{\rm i}}{A\,q_{0}q_{{\bf G}\varepsilon z}}\sum_{LL^{\prime}}Y_{L}\big{(}\Omega_{{\bf q}_{\parallel}+{\bf G}+sq_{{\bf G}\varepsilon z}\hat{\bf z}}\big{)}\,\big{[}S^{-1}\big{]}_{LL^{\prime}}\,t_{l^{\prime}}\,Y_{L^{\prime}}^{*}\big{(}\Omega_{{\bf q}_{\parallel}+{\bf G}^{\prime}+s^{\prime}q_{{\bf G}^{\prime}\varepsilon z}\hat{\bf z}}\big{)}.$ (B33c) We note that Eqs. (B33) are only valid outside the range of the atomic potentials $V_{\rm atom}(|{\bf r}-{\bf R}_{\alpha}|)$, which extends over only a thin region and can therefore be safely neglected when describing the interaction with light via the LS equation [Eq. (A20)]. ### B2.2 Interaction with light The Green function in Eqs. (B33) accounts for electron–lattice interaction to all orders of scattering. Inserting it into Eq. (A20) together with the initial wave function in Eq. (B28), we can obtain a perturbative solution to the wave function up to an arbitrary order $n$ in the interaction with light. After a net exchange of $\ell$ photons (with $|\ell|\leq n$), the in-plane wave vector and frequency of the electron become ${\bf q}_{0\parallel}+\ell{\bf k}_{\parallel}$ and $\varepsilon_{\ell}=\varepsilon_{0}+\ell\omega$, respectively, as deduced from the LS equation by performing the ${\bf R}^{\prime}$ and $t^{\prime}$ integrals, which then select the $\mathcal{G}_{{\bf G}{\bf G}^{\prime}}\big{(}{\bf q}_{0\parallel}+\ell{\bf k}_{\parallel},z,z^{\prime},\varepsilon_{0}+\ell\omega\big{)}$ Green function component. A general analysis is involved and does not provide substantial additional insight. Instead, we present results assuming normally incident electrons (${\bf q}_{0\parallel}=0$ and ${\bf q}_{0}=-q_{0}\hat{\bf z}$) and limit our discussion to first-order interaction with a grazingly incident light plane wave of wave vector ${\bf k}_{\parallel}=k\,\hat{\bf x}$ and real electric field amplitude ${\bf E}_{0}$ in the $y$-$z$ plane (i.e., ${\bf A}({\bf r},t)=({\bf E}_{0}/{\rm i}k)[{\rm e}^{{\rm i}(kx-\omega t)}-{\rm e}^{-{\rm i}(kx-\omega t)}]$).
Then, the $\ell=0$ wave function component remains as in Eq. (B28), while two sidebands $\ell=\pm 1$ emerge at order $n=1$. Inserting the field into Eq. (A18) and combining the result with Eqs. (A20), (B28), and (B33), we obtain $\displaystyle\psi^{(1)}({\bf r},t)=\frac{e}{\hbar\omega}$ $\displaystyle\sum_{\ell=\pm 1}{\rm i}^{\ell-1}\;{\rm e}^{-{\rm i}\varepsilon_{\ell}t}\sum_{{\bf G}{\bf G}^{\prime}}\frac{1}{q_{{\bf G}\varepsilon_{\ell}z}}{\rm e}^{{\rm i}(\ell{\bf k}_{\parallel}+{\bf G})\cdot{\bf R}}$ $\displaystyle\times{\bf E}_{0}\cdot\sum_{\pm}\int{\rm d}z^{\prime}\;\Theta{(\pm z^{\prime})}\;{\rm e}^{-\gamma|z^{\prime}|}\;\Big{[}\delta_{{\bf G}{\bf G}^{\prime}}{\rm e}^{{\rm i}q_{{\bf G}\varepsilon_{\ell}z}|z-z^{\prime}|}+r^{\operatorname{sign}\{z\},\pm}_{{\bf G}{\bf G}^{\prime}}(\ell{\bf k}_{\parallel},\varepsilon_{\ell})\,{\rm e}^{{\rm i}q_{{\bf G}\varepsilon_{\ell}z}|z|+{\rm i}q_{{\bf G}^{\prime}\varepsilon_{\ell}z}|z^{\prime}|}\Big{]}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\;\,\times\Big{[}-\hat{\bf z}\,q_{0}\,\delta_{{\bf G}^{\prime}0}\;{\rm e}^{-{\rm i}q_{0}z^{\prime}}+B^{\pm}_{{\bf G}^{\prime}}\;({\bf G}^{\prime}\pm q_{{\bf G}^{\prime}\varepsilon_{0}z}\hat{\bf z})\;{\rm e}^{{\rm i}q_{{\bf G}^{\prime}\varepsilon_{0}z}|z^{\prime}|}\Big{]},$ where we have inserted the ${\rm e}^{-\gamma|z^{\prime}|}$ factor that we discuss in Sec. A1.3. Finally, performing the $z^{\prime}$ integral with the help of Eqs. (A27) and taking the $|z|\to\infty$ and $\gamma\to 0^{+}$ limits in that order, we obtain $\displaystyle\psi^{(1)}({\bf r},t)=\sum_{{\bf G},\ell,\pm}C_{{\bf G},\ell}^{\pm}\;{\rm e}^{{\rm i}(\ell{\bf k}_{\parallel}+{\bf G})\cdot{\bf R}\pm{\rm i}q_{{\bf G}\varepsilon_{\ell}z}z-{\rm i}\varepsilon_{\ell}t}\;\Theta(\pm z),$ where $\displaystyle C_{{\bf G},\ell}^{s}=$ $\displaystyle-\frac{{\rm i}^{\ell}e}{\hbar\omega}\;\frac{1}{q_{{\bf G}\varepsilon_{\ell}z}}{\bf E}_{0}\cdot\sum_{\pm}\Bigg\{\hat{\bf z}\,r^{s\pm}_{{\bf G}0}(\ell{\bf k}_{\parallel},\varepsilon_{\ell})\,\frac{q_{0}}{q_{{\bf G}\varepsilon_{\ell}z}\mp q_{0}}$ (B34) $\displaystyle+\sum_{{\bf G}^{\prime}}B^{\pm}_{{\bf G}^{\prime}}\;({\bf G}^{\prime}\pm q_{{\bf G}^{\prime}\varepsilon_{0}z}\hat{\bf z})\bigg{[}\delta_{{\bf G}{\bf G}^{\prime}}\frac{\pm s}{q_{{\bf G}\varepsilon_{\ell}z}\mp sq_{{\bf G}^{\prime}\varepsilon_{0}z}}-r^{s\pm}_{{\bf G}{\bf G}^{\prime}}(\ell{\bf k}_{\parallel},\varepsilon_{\ell})\,\frac{1}{q_{{\bf G}\varepsilon_{\ell}z}+q_{{\bf G}^{\prime}\varepsilon_{0}z}}\bigg{]}\Bigg\}$ are the sought-after inelastic scattering coefficients. ## References * Barwick _et al._ (2009) B. Barwick, D. J. Flannigan, and A. H. Zewail, Nature 462, 902 (2009). * Flannigan and Zewail (2012) D. J. Flannigan and A. H. Zewail, Acc. Chem. Res. 45, 1828 (2012). * Feist _et al._ (2015) A. Feist, K. E. Echternkamp, J. Schauss, S. V. Yalunin, S. Schäfer, and C. Ropers, Nature 521, 200 (2015). * Barwick and Zewail (2015) B. Barwick and A. H. Zewail, ACS Photonics 2, 1391 (2015). * Polman _et al._ (2019) A. Polman, M. Kociak, and F. J. García de Abajo, Nat. Mater. 18, 1158 (2019). * García de Abajo and Di Giulio (2021) F. J. García de Abajo and V. Di Giulio, ACS Photonics 8, 945 (2021). * Roques-Carmes _et al._ (2023) C. Roques-Carmes, S. E. Kooi, Y. Yang, N. Rivera, P. D. Keathley, J. D. Joannopoulos, S. G. Johnson, I. Kaminer, K. K. Berggren, and M. Soljačić, Appl. Phys. Rev. 10, 011303 (2023). * García de Abajo and Ropers (2023) F. J. García de Abajo and C. Ropers, Phys. Rev. Lett. 130, 246901 (2023).
* García de Abajo _et al._ (2010) F. J. García de Abajo, A. Asenjo-Garcia, and M. Kociak, Nano Lett. 10, 1859 (2010). * Park _et al._ (2010) S. T. Park, M. Lin, and A. H. Zewail, New J. Phys. 12, 123028 (2010). * Piazza _et al._ (2015) L. Piazza, T. T. A. Lummen, E. Quiñonez, Y. Murooka, B. Reed, B. Barwick, and F. Carbone, Nat. Commun. 6, 6407 (2015). * Lummen _et al._ (2016) T. T. A. Lummen, R. J. Lamb, G. Berruto, T. LaGrange, L. D. Negro, F. J. García de Abajo, D. McGrouther, B. Barwick, and F. Carbone, Nat. Commun. 7, 13156 (2016). * Vanacore _et al._ (2020) G. M. Vanacore, I. Madan, and F. Carbone, Riv. Nuovo Cimento 43, 567 (2020). * Kurman _et al._ (2021) Y. Kurman, R. Dahan, H. H. Sheinfux, K. Wang, M. Yannai, Y. Adiv, O. Reinhardt, L. H. G. Tizei, S. Y. Woo, J. Li, J. H. Edgar, M. Kociak, F. H. L. Koppens, and I. Kaminer, Science 372, 1181 (2021). * Nabben _et al._ (2023) D. Nabben, J. Kuttruff, L. Stolz, A. Ryabov, and P. Baum, Nature 619, 63 (2023). * Gaida _et al._ (2024) J. H. Gaida, H. Lourenço-Martins, M. Sivis, T. Rittmann, A. Feist, F. J. García de Abajo, and C. Ropers, “Attosecond electron microscopy by free-electron homodyne detection,” (2024), Nat. Photon. https://doi.org/10.1038/s41566-024-01380-8, arXiv:2305.03005 . * Bucher _et al._ (2023) T. Bucher, H. Nahari, H. H. Sheinfux, R. Ruimy, A. Niedermayr, R. Dahan, Q. Yan, Y. Adiv, M. Yannai, J. Chen, Y. Kurman, S. T. Park, D. J. Masiel, E. Janzen, J. H. Edgar, F. Carbone, G. Bartal, S. Tsesses, F. H. L. Koppens, G. M. Vanacore, and I. Kaminer, “Coherently amplified ultrafast imaging in a free-electron interferometer,” (2023), arXiv:2305.04877 . * Di Giulio _et al._ (2019) V. Di Giulio, M. Kociak, and F. J. García de Abajo, Optica 6, 1524 (2019). * Dahan _et al._ (2021) R. Dahan, A. Gorlach, U. Haeusler, A. Karnieli, O. Eyal, P. Yousefi, M. Segev, A. Arie, G. Eisenstein, P. Hommelhoff, and I. Kaminer, Science 373, eabj7128 (2021). * Yang _et al._ (2024) Y. Yang, J.-W. Henke, A. S. Raja, F. J. Kappert, G. Huang, G. Arend, Z. Qiu, A. Feist, R. N. Wang, A. Tusnin, A. Tikan, C. Ropers, and T. J. Kippenberg, Science 383, 168 (2024). * Priebe _et al._ (2017) K. E. Priebe, C. Rathje, S. V. Yalunin, T. Hohage, A. Feist, S. Schäfer, and C. Ropers, Nat. Photon. 11, 793 (2017). * Kozák _et al._ (2018) M. Kozák, N. Schönenberger, and P. Hommelhoff, Phys. Rev. Lett. 120, 103203 (2018). * Morimoto and Baum (2018) Y. Morimoto and P. Baum, Nat. Phys. 14, 252 (2018). * Vanacore _et al._ (2019) G. M. Vanacore, G. Berruto, I. Madan, E. Pomarico, P. Biagioni, R. J. Lamb, D. McGrouther, O. Reinhardt, I. Kaminer, B. Barwick, H. Larocque, V. Grillo, E. Karimi, F. J. García de Abajo, and F. Carbone, Nat. Mater. 18, 573 (2019). * Konečná and García de Abajo (2020) A. Konečná and F. J. García de Abajo, Phys. Rev. Lett. 125, 030801 (2020). * García de Abajo and Konečná (2021) F. J. García de Abajo and A. Konečná, Phys. Rev. Lett. 126, 123901 (2021). * Madan _et al._ (2022) I. Madan, V. Leccese, A. Mazur, F. Barantani, T. LaGrange, A. Sapozhnik, P. M. Tengdin, S. Gargiulo, E. Rotunno, J.-C. Olaya, I. Kaminer, V. Grillo, F. J. García de Abajo, F. Carbone, and G. M. Vanacore, ACS Photonics 9, 3215 (2022). * Mihaila _et al._ (2022) M. C. C. Mihaila, P. Weber, M. Schneller, L. Grandits, S. Nimmrichter, and T. Juffmann, Phys. Rev. X 12, 031043 (2022). * Glover _et al._ (1996) T. E. Glover, R. W. Schoenlein, A. H. Chin, and C. V. Shank, Phys. Rev. Lett. 76, 2468 (1996). * Saathoff _et al._ (2008) G. Saathoff, L. Miaja-Avila, M. Aeschlimann, M. M. 
Murnane, and H. C. Kapteyn, Phys. Rev. A 77, 022903 (2008). * Arrell _et al._ (2016) C. A. Arrell, J. Ojeda, L. Mewes, J. Grilj, F. Frassetto, L. Poletto, F. van Mourik, and M. Chergui, Phys. Rev. Lett. 117, 143001 (2016). * Yalunin _et al._ (2011) S. V. Yalunin, M. Gulde, and C. Ropers, Phys. Rev. B 84, 195426 (2011). * Dombi _et al._ (2020) P. Dombi, Z. Pápa, J. Vogelsang, S. V. Yalunin, M. Sivis, G. Herink, S. Schäfer, P. Groß, C. Ropers, and C. Lienau, Rev. Mod. Phys. 92, 025003 (2020). * Bendaña _et al._ (2011) X. M. Bendaña, A. Polman, and F. J. García de Abajo, Nano Lett. 11, 5099 (2011). * Kfir _et al._ (2020) O. Kfir, H. Lourenço-Martins, G. Storeck, M. Sivis, T. R. Harvey, T. J. Kippenberg, A. Feist, and C. Ropers, Nature 582, 46 (2020). * Wang _et al._ (2020) K. Wang, R. Dahan, M. Shentcis, Y. Kauffmann, A. Ben Hayun, O. Reinhardt, S. Tsesses, and I. Kaminer, Nature 582, 50 (2020). * Dahan _et al._ (2020) R. Dahan, S. Nehemia, M. Shentcis, O. Reinhardt, Y. Adiv, X. Shi, O. Be’er, M. H. Lynch, Y. Kurman, K. Wang, and I. Kaminer, Nat. Phys. 16, 1123 (2020). * Henke _et al._ (2021) J.-W. Henke, A. S. Raja, A. Feist, G. Huang, G. Arend, Y. Yang, F. J. Kappert, R. N. Wang, M. Möller, J. Pan, J. Liu, O. Kfir, C. Ropers, and T. J. Kippenberg, Nature 600, 653 (2021). * Varshalovich and D’Yakonov (1971) D. A. Varshalovich and M. A. D’Yakonov, Sov. Phys. JETP 33, 51 (1971). * Weingartshofer _et al._ (1977) A. Weingartshofer, J. K. Holmes, G. Caudle, E. M. Clarke, and H. Krüger, Phys. Rev. Lett. 39, 269 (1977). * Talebi (2018) N. Talebi, Adv. Phys. X 3, 1499438 (2018). * Talebi (2020) N. Talebi, Phys. Rev. Lett. 125, 080401 (2020). * García de Abajo _et al._ (2022) F. J. García de Abajo, E. J. C. Dias, and V. Di Giulio, Phys. Rev. Lett. 129, 093401 (2022). * Lord Rayleigh (1907a) Lord Rayleigh, Proc. R. Soc. Lond. A 79, 399 (1907a). * Abd El-Fattah _et al._ (2019) Z. M. Abd El-Fattah, V. Mkhitaryan, J. Brede, L. Fernández, C. Li, Q. Guo, A. Ghosh, A. Rodríguez Echarri, D. Naveh, F. Xia, J. E. Ortega, and F. J. García de Abajo, ACS Nano 13, 7771 (2019). * Woessner _et al._ (2015) A. Woessner, M. B. Lundeberg, Y. Gao, A. Principi, P. Alonso-González, M. Carrega, K. Watanabe, T. Taniguchi, G. Vignale, M. Polini, J. Hone, R. Hillenbrand, and F. H. Koppens, Nat. Mater. 14, 421 (2015). * Ni _et al._ (2018) G. X. Ni, A. S. McLeod, Z. Sun, L. Wang, L. Xiong, K. W. Post, S. S. Sunku, B.-Y. Jiang, J. Hone, C. R. Dean, M. M. Fogler, and D. N. Basov, Nature 557, 530 (2018). * Giles _et al._ (2018) A. J. Giles, S. Dai, I. Vurgaftman, T. Hoffman, S. Liu, L. Lindsay, C. T. Ellis, N. Assefa, I. Chatzakis, T. L. Reinecke, J. G. Tischler, M. M. Fogler, J. H. Edgar, D. N. Basov, and J. D. Caldwell, Nat. Mater. 17, 134 (2018). * Li _et al._ (2021) N. Li, X. Guo, X. Yang, R. Qi, T. Qiao, Y. Li, R. Shi, Y. Li, K. Liu, Z. Xu, L. Liu, F. J. García de Abajo, Q. Dai, E.-G. Wang, and P. Gao, Nat. Mater. 20, 43 (2021). * Li _et al._ (2014) Y. Li, A. Chernikov, X. Zhang, A. Rigosi, H. M. Hill, A. M. van der Zande, D. A. Chenet, E.-M. Shih, J. Hone, and T. F. Heinz, Phys. Rev. B 90, 205422 (2014). * Epstein _et al._ (2020) I. Epstein, B. Terrés, A. J. Chaves, V.-V. Pusapati, D. A. Rhodes, B. Frank, V. Zimmermann, Y. Qin, K. Watanabe, T. Taniguchi, H. Giessen, S. Tongay, J. C. Hone, N. M. R. Peres, and F. H. L. Koppens, Nano Lett. 20, 3545 (2020). * de Aguiar (1993) M. A. M. de Aguiar, Phys. Rev. A 48, 2567 (1993). * Vanacore _et al._ (2018) G. M. Vanacore, I. Madan, G. Berruto, K. Wang, E. Pomarico, R. J. Lamb, D. 
McGrouther, I. Kaminer, B. Barwick, F. J. García de Abajo, and F. Carbone, Nat. Commun. 9, 2694 (2018). * Mak _et al._ (2008) K. F. Mak, M. Y. Sfeir, Y. Wu, C. H. Lui, J. A. Misewich, and T. F. Heinz, Phys. Rev. Lett. 101, 196405 (2008). * Nair _et al._ (2008) R. R. Nair, P. Blake, A. N. Grigorenko, K. S. Novoselov, T. J. Booth, T. Stauber, N. M. R. Peres, and A. K. Geim, Science 320, 1308 (2008). * Rocca (1995) M. Rocca, Surf. Sci. Rep. 22, 1 (1995). * Pendry (1974) J. B. Pendry, _Low Energy Electron Diffraction_ (Academic Press, London, 1974). * Pendry (1984) J. B. Pendry, in _Determination of Surface Structure by LEED_ , edited by P. M. Marcus and F. Jona (Plenum Press, New York, 1984). * Claus _et al._ (1992) H. Claus, A. Büssenschütt, and M. Henzler, Rev. Sci. Instrum. 63, 2195 (1992). * Nagao _et al._ (2001) T. Nagao, T. Hildebrandt, M. Henzler, and S. Hasegawa, Phys. Rev. Lett. 86, 5747 (2001). * Nagao _et al._ (2006) T. Nagao, S. Yaginuma, T. Inaoka, and T. Sakurai, Phys. Rev. Lett. 97, 116802 (2006). * Gulde _et al._ (2014) M. Gulde, S. Schweda, G. Storeck, M. Maiti, H. K. Yu, A. M. Wodtke, S. Schäfer, and C. Ropers, Science 345, 200 (2014). * Vogelgesang _et al._ (2018) S. Vogelgesang, G. Storeck, J. G. Horstmann, T. Diekmann, M. Sivis, S. Schramm, K. Rossnagel, S. Schäfer, and C. Ropers, Nat. Phys. 14, 184 (2018). * Lord Rayleigh (1907b) Lord Rayleigh, Philos. Mag. 14, 60 (1907b). * García de Abajo (2007) F. J. García de Abajo, Rev. Mod. Phys. 79, 1267 (2007). * Weingartshofer _et al._ (1983) A. Weingartshofer, J. K. Holmes, J. Sabbagh, and S. L. Chin, J. Phys. B 16, 1805 (1983). * Francken and Joachain (1990) P. Francken and C. J. Joachain, J. Opt. Soc. Am. B 7, 554 (1990). * Arqué López _et al._ (2022) E. Arqué López, V. Di Giulio, and F. J. García de Abajo, Phys. Rev. Research 4, 013241 (2022). * Economou (2006) E. N. Economou, _Green’s Functions in Quantum Physics_ (Springer, Heidelberg, 2006). * Messiah (1966) A. Messiah, _Quantum Mechanics_ (North-Holland, New York, 1966). * Sakurai (1994) J. J. Sakurai, _Modern Quantum Mechanics_ (Addison-Wesley, Boston, 1994). * Wolkow (1935) D. M. Wolkow, Z. Phys. 94, 250 (1935). * Van Hove _et al._ (1986) M. A. Van Hove, W. H. Weinberg, and C.-M. Chan, _Low Energy Electron Diffraction_ (Springer-Verlag, Heidelberg, 1986). * Ankudinov _et al._ (1996) A. L. Ankudinov, S. I. Zabinsky, and J. J. Rehr, Comput. Phys. Commun. 98, 359 (1996). * Salvat and Mayol (1991) F. Salvat and R. Mayol, Comput. Phys. Commun. 62, 65 (1991). * García de Abajo _et al._ (2001) F. J. García de Abajo, M. A. Van Hove, and C. S. Fadley, Phys. Rev. B 63, 075404 (2001). * Kambe (1967) K. Kambe, Z. Naturforsch. A 22, 322 (1967). Figure S7: Inelastic scattering of low-energy electrons upon total reflection at a polariton-supporting surface. Panels (b-d,g-i) are reproduced from Fig. 2a-c,e-g in the main text, but now including dashed curves indicating the kinematic threshold beyond which energy–momentum conservation renders $q_{\ell z}$ purely imaginary (i.e., $1+\ell\omega/\varepsilon_{0}=|\sin\theta_{\rm e}+\ell k_{\parallel}/q_{0}|^{2}$). In addition, we present results for $\ell=\pm 2$ in panels (a,e,f,j). Figure S8: Influence of electron incidence angle on inelastic scattering under the conditions of Fig. 2 in the main text. We present results analogous to Fig. 2d,h, but for different values of the electron incidence angle $\theta_{\rm e}$. Figure S9: Inelastic scattering of low-energy electrons at an electron-transparent, polariton-supporting thin film.
Same as Fig. 3 in the main text, but with $U_{0}=0$ (i.e., the electron does not see the material).
# Numerical study of acoustic cell trapping above elastic membrane disks driven in higher-harmonic modes by thin-film transducers with patterned electrodes André G. Steckel<EMAIL_ADDRESS>Department of Physics, Technical University of Denmark, DTU Physics Building 309, DK-2800 Kongens Lyngby, Denmark Henrik Bruus<EMAIL_ADDRESS>Department of Physics, Technical University of Denmark, DTU Physics Building 309, DK-2800 Kongens Lyngby, Denmark (23 December 2021) ###### Abstract Excitations of MHz acoustic modes are studied numerically in 10-µm-thick silicon disk membranes with radii of 100 and 500 µm, actuated by an attached 1-µm-thick (AlSc)N thin-film transducer. It is shown how higher-harmonic membrane modes can be excited selectively and efficiently by appropriate patterning of the transducer electrodes. When filling the half-space above the membrane with a liquid, the higher-harmonic modes induce acoustic pressure fields in the liquid with interference patterns that result in the formation of a single, strong trapping region located 50 - $100~{}\textrm{\textmu{}m}$ above the membrane, where a single suspended cell can be trapped in all three spatial directions. The trapping strength depends on the acoustic contrast between the cell and the liquid, and as a specific example it is shown by numerical simulation that by using a 60% iodixanol solution, a cancer cell can be held in the trap. ## I Introduction Recently, the concept of thin-film-actuated devices has been introduced in the field of microscale acoustofluidics at MHz ultrasound frequencies. In 2018, Reichert et al. demonstrated experimentally and numerically how to generate useful acoustofluidic responses in microchannels with a thin silicon-membrane lid driven by a lead-zirconate-titanate (PZT) thin-film transducer [1]. In this case, the transducer makes up around 15% by volume (v/v) of the actuated membrane, which is excited while leaving the bulk part of the device inert. In 2021, we successfully modeled and experimentally validated the excitation by aluminum scandium nitride (AlSc)N thin-film transducers of MHz modes in millimeter-sized glass-block devices without microchannels [2]. In this system, the transducer is only 0.2% v/v. In a follow-up study [3], which constitutes the theoretical foundation of our present work, we demonstrated by numerical simulation how thin-film transducers can induce acoustofluidic responses in bulk microfluidic devices on par with those obtained by using conventional bulk PZT transducers. In Ref. [3], we also studied the robustness and pointed out several advantages of using thin-film transducers: the acoustofluidic response is only weakly sensitive to the material, the thickness, and the quality factor of the thin-film transducer, and the microfabrication techniques by which the thin-film devices are produced also allow for careful shaping of the transducer electrodes, a design freedom that may be used to boost the acoustofluidic response of the device. In this work, we follow up on the latter idea and present a numerical study of how proper shaping of the electrodes on circular thin-film-driven membranes can increase the excitation amplitudes of the higher-harmonic vibration eigenmodes in the membrane.
When placing such membranes in the bottom wall of a cavity containing a liquid, we show that specific higher harmonics of the order $n\gtrsim 10$ in the membrane induce acoustic pressure fields in the liquid with interference patterns that result in the formation of a single, strong trapping region located 50 - $100~{}\textrm{\textmu{}m}$ above the membrane, where a single suspended cell can be trapped in all three spatial directions. The choice of this model system is motivated by the increasing use of disk-shaped membranes in acoustofluidic applications, such as capacitive micromachined ultrasonic transducers (CMUT) for imaging, inkjet printing, and testing [4], thin-film resonators for mixing and biosensing [5, 6], and silicon-membrane devices for particle manipulation [7]. In the field of acoustofluidics, electrode shaping is of course used extensively when dealing with surface acoustic waves (SAW), as the electrodes directly define these waves [8]. However, when dealing with bulk acoustic waves (BAW), the topic of this work, electrode shaping is rarely used, and not at all for the above-mentioned membrane devices. A simple split-electrode configuration with an applied anti-symmetric driving voltage has been used in experiments on bulk piezoelectric (PZE) transducers [9, 10] and on thin-film transducers [1] to obtain a strong excitation of anti-symmetric modes for optimal particle focusing. Such systems have also been studied in numerical simulations [11, 12, 13, 3]. Furthermore, in a combined experimental and numerical study, Hammarström et al. used a square-shaped back and a full front electrode on a bulk PZE transducer to create a dynamically-defined array of particle traps in the liquid just above the transducer [14]. In the field of microelectromechanical systems (MEMS), shaping of PZE-transducer electrodes has been studied in much more detail. For example, the development of energy-harvesting MEMS devices involving shaping of electrodes is an active research field [15, 16, 17, 18, 19]. A particularly thorough study is the combined experimental and theoretical work on the excitation and detection of more than 50 arbitrary permitted modes in disc-, plate-, ring-, and beam-shaped PZT-on-silicon resonators by electrode shaping [20]. It is this kind of selective excitation of chosen resonance modes that we extend in the present work from solid-state MEMS to acoustofluidic devices for particle handling. The contents of the paper are as follows. In Section II we introduce our model system, a disk-shaped silicon-membrane-based microscale acoustofluidic system driven at MHz frequencies by an (AlSc)N PZE thin-film transducer with patterned electrodes for selective excitation of higher-harmonic resonance modes, a system chosen for its compatibility with standard MEMS fabrication techniques. A brief summary is given of the basic theory and modeling developed in our previous work [3]. It includes time-harmonic perturbation theory, the electromechanical theory of the elastic membrane and the linear PZE transducer, acoustics and time-averaged acoustic streaming of the liquid, boundary conditions, and the numerical implementation in the software COMSOL Multiphysics [21]. In Section III, we present the main result of the paper, the selective enhanced excitation by electrode shaping of higher-harmonic membrane resonance modes for acoustofluidic applications.
In Section IV, we present an application example by showing how specific higher-harmonic membrane modes may induce the above-mentioned trapping of a single suspended cell. In Section V, we discuss the possibility of size-dependent cell trapping and the advantages and disadvantages of the presented method of trapping, and finally in Section VI we present our main conclusions. ## II Model system, theory, and numerical implementation The model system, sketched in Fig. 1, consists of a thin disk-shaped silicon membrane completely covered by an (AlSc)N thin-film PZE transducer on one of its surfaces. The ground electrode always covers the transducer fully, whereas the excitation electrode may be divided into several pieces, each with an individual alternating-current (AC) excitation voltage. Below the membrane is air (treated as vacuum), and above it a liquid. The system is axisymmetric, so we use cylindrical coordinates $(r,\phi,z)$ throughout the paper with the $z$ axis perpendicular to the membrane through its center. The basic theory and modeling for such a thin-film PZE-transducer-driven acoustofluidic system was developed in a perturbation scheme involving the acoustic first-order fields and the steady time-averaged second-order fields in our previous work [3], founded on the theory for bulk PZE-transducer-driven systems [22] taking the acoustic boundary layers into account analytically through effective boundary conditions [23]. Numerical simulations based on this theory have been validated experimentally for several different microscale acoustofluidic systems [22, 9, 10, 2]. In the following we briefly summarize this basic theory and its numerical implementation and adapt the previous cartesian-coordinate formulation to the cylindrical coordinates of the present axisymmetric system. We do not study azimuthal variations, so the full three-dimensional (3D) problem is independent of $\phi$ and is thus reduced to a two-dimensional (2D) problem in the radial and axial coordinates $r$ and $z$, respectively. Figure 1: The axisymmetric model in the $r$-$z$ plane consisting of a thin silicon membrane (gray) clamped for $r>r_{\mathrm{mv}}$ and fully covered with a thin-film (AlSc)N PZE transducer (beige) on its top surface. The grounded transducer electrode (black) covers the entire transducer, but the excitation electrode (red and blue) may be either fully covering or divided into several sections, each with the indicated excitation voltage. Below the membrane is air (white, treated as vacuum) and above is a liquid (cyan). (a) The fundamental resonance mode ($n=0$) of the membrane with a large amplitude, as the fully-covering excitation electrode (red) is compatible with the mode shape. (b) The third-harmonic ($n=3$) resonance mode of the membrane with a small amplitude, as the fully covering excitation electrode (red) is incompatible with the mode shape. (c) The third-harmonic ($n=3$) resonance mode of the membrane with a large amplitude due to the specific patterning of the excitation electrode (red and blue) into four pieces with specific excitation voltages.
### II.1 First-order acoustic fields Using first-order perturbation theory and complex-valued fields [3], the time-harmonic electric potential $\tilde{\varphi}_{1}(\bm{r},t)$ applied to the PZE thin film excites a time-harmonic acoustic displacement field $\tilde{\bm{u}}_{1}(\bm{r},t)$, $\tilde{\varphi}_{1}(\bm{r},t)=\varphi_{1}(\bm{r})\>\mathrm{e}^{-{\mathrm{i}\omega t}},\quad\tilde{\bm{u}}_{1}(\bm{r},t)=\bm{u}_{1}(\bm{r})\>\mathrm{e}^{-{\mathrm{i}\omega t}},$ (1) where $\omega=2\pi f$ is the angular frequency, $f$ the frequency, $\bm{r}$ the spatial coordinate, and $t$ the time. We convert the cartesian coordinates $\bm{r}=(x,y,z)$ of Refs. [22, 3, 2] into cylindrical coordinates $\bm{r}=(r\cos\phi,r\sin\phi,z)$, and assuming $\phi$-independent excitation voltages, we have, $\varphi_{1}(\bm{r})=\varphi_{1}(r,z),\quad\bm{u}_{1}(\bm{r})=u_{r}(r,z)\bm{e}_{r}+u_{z}(r,z)\bm{e}_{z}.$ (2) The governing equations of $\bm{u}_{1}$ and $\varphi_{1}$ are the weakly damped Cauchy elastodynamic equation and the quasi-static Gauss equation for a charge-free linear dielectric, $-\rho\omega^{2}(1+\mathrm{i}\Gamma_{\mathrm{sl}})\>\bm{u}_{1}=\bm{\nabla}\cdot\bm{\sigma}_{\mathrm{sl}},\quad\bm{\nabla}\cdot\bm{D}=0,$ (3) where $\bm{\sigma}_{\mathrm{sl}}$ is the Cauchy stress tensor, $\Gamma_{\mathrm{sl}}$ is the damping coefficient of the solid, and $\bm{D}=-(1+\mathrm{i}\Gamma_{\varepsilon})\bm{\varepsilon}\cdot\bm{\nabla}\varphi_{1}$ is the electric displacement field with $\Gamma_{\varepsilon}$ and $\bm{\varepsilon}$ being the dielectric damping coefficient and tensor, respectively. For a purely mechanical solid, the stress tensor is given in Voigt notation as $\bm{\sigma}_{\mathrm{sl}}=\bm{C}\cdot\bm{S}_{\mathrm{sl}},$ (4) where $\bm{C}$ is the elastic moduli tensor, represented in Voigt notation as a rank-two $6\times 6$ matrix, which relates the strain tensor $\bm{S}_{\mathrm{sl}}$ to the stress tensor $\bm{\sigma}_{\mathrm{sl}}$. In cylindrical coordinates, the Voigt-notation form of $\bm{C}$, $\bm{S}_{\mathrm{sl}}$ and $\bm{\sigma}_{\mathrm{sl}}$ is, $\displaystyle\bm{C}$ $\displaystyle=\left(\begin{array}[]{c@{\:}c@{\:}c@{\:}|c@{\:}c@{\:}c}C_{11}\hfil\>&C_{12}\hfil\>&C_{13}\hfil\>&0\hfil\>&0\hfil\>&0\\ C_{12}\hfil\>&C_{11}\hfil\>&C_{13}\hfil\>&0\hfil\>&0\hfil\>&0\\ C_{13}\hfil\>&C_{13}\hfil\>&C_{33}\hfil\>&0\hfil\>&0\hfil\>&0\\ \hline\cr 0\hfil\>&0\hfil\>&0\hfil\>&C_{44}\hfil\>&0\hfil\>&0\\ 0\hfil\>&0\hfil\>&0\hfil\>&0\hfil\>&C_{44}\hfil\>&0\\ 0\hfil\>&0\hfil\>&0\hfil\>&0\hfil\>&0\hfil\>&C_{66}\\ \end{array}\right),$ (5g) $\displaystyle\bm{S}_{\mathrm{sl}}$ $\displaystyle=\Big{(}\partial_{r}u_{1r},\frac{1}{r}u_{1r},\partial_{z}u_{1z},0,\partial_{r}u_{1z}+\partial_{z}u_{1r},0\Big{)}^{\mathrm{T}},$ (5h) $\displaystyle\bm{\sigma}_{\mathrm{sl}}$ $\displaystyle=\Big{(}\sigma_{rr},\sigma_{\phi\phi},\sigma_{zz},\sigma_{\phi z},\sigma_{rz},\sigma_{r\phi}\Big{)}^{\mathrm{T}}.$ (5i) Here, the column vectors $\bm{S}_{\mathrm{sl}}$ and $\bm{\sigma}_{\mathrm{sl}}$ are written as the transpose “T” of the row vectors. Moreover, $\bm{C}$ is written for the lowest symmetry in our problem, the hexagonal crystal structure of (AlSc)N, and we note that it has the same form in cartesian as in cylindrical coordinates when the polarization axis is parallel to the $z$ axis. Whereas neither the displacement nor the electric field has a component in the azimuthal $\phi$-direction, the strain and stress do have a nonzero $\phi\phi$-component.
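As a concrete illustration of the constitutive relation in Eqs. (4) and (5), the following minimal sketch assembles the hexagonal stiffness matrix from the $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$ values of Table 1 and evaluates $\bm{\sigma}_{\mathrm{sl}}=\bm{C}\cdot\bm{S}_{\mathrm{sl}}$ for an axisymmetric strain; the numerical strain values are arbitrary illustrations:

```python
import numpy as np

GPa = 1e9
C11, C12, C13 = 313.8, 150.0, 139.2    # Al0.6Sc0.4N stiffness, Table 1
C33, C44, C66 = 197.1, 108.6, 81.9

C = GPa * np.array([[C11, C12, C13, 0,   0,   0  ],
                    [C12, C11, C13, 0,   0,   0  ],
                    [C13, C13, C33, 0,   0,   0  ],
                    [0,   0,   0,   C44, 0,   0  ],
                    [0,   0,   0,   0,   C44, 0  ],
                    [0,   0,   0,   0,   0,   C66]])

# axisymmetric strain vector (d_r u_r, u_r/r, d_z u_z, 0, d_r u_z + d_z u_r, 0)
S = np.array([1.0e-6, 0.5e-6, -0.8e-6, 0.0, 0.3e-6, 0.0])

sigma = C @ S   # (s_rr, s_phiphi, s_zz, s_phiz, s_rz, s_rphi) in Pa
print(sigma)
```

Note how the nonzero $\phi\phi$-row of $\bm{C}$ produces a finite hoop stress $\sigma_{\phi\phi}$ even though the displacement itself has no azimuthal component.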
Finally, we note that for silicon (111) the above form for $\bm{C}$ still applies, but with the symmetry constraints $C_{13}=C_{12}$, $C_{33}=C_{11}$, and $C_{66}=C_{44}$. Other orientations of the silicon crystal are not isotropic in the $r$-$\phi$ plane and thus cannot be modeled as an axisymmetric system [24, 25]. In this work, our PZE transducer is taken to be (AlSc)N, which has a hexagonal piezoelectric crystal structure [26] with the material parameters listed in Table 1. For PZE materials, the stress and electric displacement fields are coupled to the electric and strain fields through the piezoelectric coupling tensor, $\displaystyle\bm{\sigma}_{\mathrm{sl}}$ $\displaystyle=\bm{C}\cdot\bm{S}_{\mathrm{sl}}-\bm{e}^{\mathrm{T}}\cdot\bm{E},$ (6a) $\displaystyle\bm{D}$ $\displaystyle=\bm{e}\cdot\bm{S}_{\mathrm{sl}}+\bm{\varepsilon}\cdot\bm{E},$ (6b) where the electric field is defined as $\bm{E}=-\bm{\nabla}\varphi_{1}$, and where in the Voigt notation for the hexagonal crystal structure, $\bm{e}$ and $\bm{\varepsilon}$ are the following tensors, $\displaystyle\bm{e}$ $\displaystyle=\left(\begin{array}[]{ccc|ccc}0&0&0&0&0&e_{15}\\ 0&0&0&0&e_{15}&0\\ e_{31}&e_{31}&e_{33}&0&0&0\end{array}\right)\!,\;\bm{\varepsilon}=\left(\begin{array}[]{ccc}\varepsilon_{11}&0&0\\ 0&\varepsilon_{11}&0\\ 0&0&\varepsilon_{33}\end{array}\right)\!.$ (13) The mechanical displacement in the solids couples into the adjacent fluid as pressure waves. For a fluid with density $\rho_{\mathrm{fl}}$, dynamic viscosity $\eta_{\mathrm{fl}}$, bulk viscosity $\eta_{\mathrm{fl}}^{\mathrm{b}}$, compressibility $\kappa_{\mathrm{fl}}$, sound speed $c_{\mathrm{fl}}$, and damping coefficient $\Gamma_{\mathrm{fl}}$, the acoustic pressure $p_{1}$ is governed by the weakly damped Helmholtz equation, and the acoustic velocity $\bm{v}_{1}^{d}$ is derived from $p_{1}$, $\displaystyle\tilde{p}_{1}$ $\displaystyle=p_{1}(\bm{r})\>\mathrm{e}^{-{\mathrm{i}\omega t}},\quad$ $\displaystyle\tilde{\bm{v}}_{1}^{d}$ $\displaystyle=\bm{v}_{1}^{d}(\bm{r})\>\mathrm{e}^{-{\mathrm{i}\omega t}},$ (14a) $\displaystyle\nabla^{2}p_{1}$ $\displaystyle=-\dfrac{\omega^{2}}{c_{\mathrm{fl}}^{2}}(1+\mathrm{i}\Gamma_{\mathrm{fl}})p_{1},\quad$ $\displaystyle\kappa_{\mathrm{fl}}$ $\displaystyle=(\rho_{\mathrm{fl}}c_{\mathrm{fl}}^{2})^{-1},$ (14b) $\displaystyle\bm{v}_{1}^{d}$ $\displaystyle=-\mathrm{i}\dfrac{1-\mathrm{i}\Gamma_{\mathrm{fl}}}{\omega\rho_{\mathrm{fl}}}\bm{\nabla}p_{1},\quad$ $\displaystyle\Gamma_{\mathrm{fl}}$ $\displaystyle=\bigg{(}\frac{4}{3}\eta_{\mathrm{fl}}+\eta_{\mathrm{fl}}^{\mathrm{b}}\bigg{)}\omega\kappa_{\mathrm{fl}}.$ (14c) In our $\phi$-independent axisymmetric case we have $p_{1}(\bm{r})=p_{1}(r,z),\quad\bm{v}_{1}^{d}(\bm{r})=v_{1r}^{d}(r,z)\>\bm{e}_{r}+v_{1z}^{d}(r,z)\>\bm{e}_{z}.$ (15) ### II.2 Second-order time-averaged fields The nonlinearities in the fluid dynamics induce second-order terms in the perturbation theory from products of first-order terms [3]. Here, we focus on the time-averaged values of such fields, $\bm{F}_{2}=\big{\langle}\tilde{\bm{F}}_{2}(\bm{r},t)\big{\rangle}=\dfrac{\omega}{2\pi}\int_{0}^{\frac{2\pi}{\omega}}\tilde{\bm{F}}_{2}(\bm{r},t)\>\mathrm{d}t$, in particular the steady streaming velocity $\bm{v}_{2}$ and the acoustic radiation force $\bm{F}^{\mathrm{rad}}$ on suspended particles.
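All second-order quantities below are built from time averages of products of time-harmonic fields, which reduce to the half-real-part rule stated next; as a quick numerical sanity check, with illustrative complex amplitudes:

```python
import numpy as np

omega = 2.0 * np.pi * 1.0e6          # 1 MHz, illustrative
A, B = 1.3 - 0.4j, -0.7 + 2.1j       # illustrative complex amplitudes

t = np.linspace(0.0, 2.0 * np.pi / omega, 20000, endpoint=False)
product = np.real(A * np.exp(-1j * omega * t)) * np.real(B * np.exp(-1j * omega * t))

print(product.mean())                 # numerical average over one full period
print(0.5 * np.real(A * np.conj(B)))  # the half-real-part rule: both give -0.875
```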
For the time-averaged product of two first-order fields $A_{1}$ and $B_{1}$ we use the well-known expression $\big{\langle}\operatorname{Re}\big{[}\tilde{A}_{1}(\bm{r},t)\big{]}\operatorname{Re}\big{[}\tilde{B}_{1}(\bm{r},t)\big{]}\big{\rangle}=\frac{1}{2}\operatorname{Re}\big{[}A_{1}(\bm{r})B^{*}_{1}(\bm{r})\big{]}$. The acoustic streaming $\bm{v}_{2}$ is an incompressible Stokes flow with a non-negligible time-averaged Eckart-streaming bulk force $\eta_{\mathrm{fl}}\nabla^{2}\bm{v}_{2}=\bm{\nabla}\mathcal{P}_{2}-\dfrac{\Gamma_{\mathrm{fl}}\omega}{2c_{\mathrm{fl}}^{2}}\operatorname{Re}\big{[}p_{1}^{*}\bm{v}_{1}^{d}\big{]},\quad\bm{\nabla}\cdot\bm{v}_{2}=0,$ (16) where $\mathcal{P}_{2}$ is the pressure redefined to include the excess pressure [23]. The expression for the acoustic radiation force $\bm{F}^{\mathrm{rad}}$ acting on a suspended particle of radius $a$, density $\rho_{\mathrm{pt}}$, and compressibility $\kappa_{\mathrm{pt}}$ is [27], $\displaystyle\bm{F}^{\mathrm{rad}}$ $\displaystyle=-\pi a^{3}\bigg\{\dfrac{2\kappa_{\mathrm{fl}}}{3}\operatorname{Re}\big{[}f_{0}^{*}p_{1}^{*}\bm{\nabla}p_{1}\big{]}-\rho_{\mathrm{fl}}\operatorname{Re}\big{[}f_{1}^{*}\bm{v}_{1}^{d*}\!\cdot\!\bm{\nabla}\bm{v}_{1}^{d}\big{]}\bigg\},$ (17a) $\displaystyle f_{0}$ $\displaystyle=1-\dfrac{\kappa_{\mathrm{pt}}}{\kappa_{\mathrm{fl}}},\quad f_{1}=\dfrac{2(1-\gamma)(\rho_{\mathrm{pt}}-\rho_{\mathrm{fl}})}{2\rho_{\mathrm{pt}}+(1-3\gamma)\rho_{\mathrm{fl}}},$ (17b) where $f_{0}$ and $f_{1}$ are the monopole and dipole scattering coefficients, $\delta$ is the width of the acoustic boundary layer, and $\gamma$ is a correction factor, $\delta=\sqrt{\frac{2\eta_{\mathrm{fl}}}{\omega\rho_{\mathrm{fl}}}},\quad\gamma=-\frac{3}{2}\bigg{[}1+\mathrm{i}\bigg{(}1+\frac{\delta}{a}\bigg{)}\bigg{]}\frac{\delta}{a}.$ (18) The total force $\bm{F}^{\mathrm{pt}}$ acting on a suspended particle moving with velocity $\bm{v}_{\mathrm{pt}}$ at position $\bm{r}$ is the sum of the radiation force $\bm{F}^{\mathrm{rad}}$, the Stokes drag force $\bm{F}^{\mathrm{drag}}$, and the buoyancy-corrected gravitational force $\bm{F}^{\mathrm{grav}}$ from the gravitational acceleration $\bm{g}$, $\displaystyle\bm{F}^{\mathrm{pt}}$ $\displaystyle=\bm{F}^{\mathrm{rad}}+\bm{F}^{\mathrm{drag}}+\bm{F}^{\mathrm{grav}},$ (19a) $\displaystyle\bm{F}^{\mathrm{drag}}$ $\displaystyle=6\pi\eta_{\mathrm{fl}}a(\bm{v}_{2}-\bm{v}_{\mathrm{pt}}),\quad\bm{F}^{\mathrm{grav}}=\frac{4}{3}\pi a^{3}(\rho_{\mathrm{pt}}-\rho_{\mathrm{fl}})\>\bm{g}.$ (19b) ### II.3 Boundary conditions In the following we state the boundary conditions that we apply to the first-order acoustic and time-averaged second-order steady fields. On the axis $r=0$, we have the usual axisymmetry conditions, $\displaystyle\partial_{r}p_{1}$ $\displaystyle=0,\;\;$ $\displaystyle\partial_{r}u_{1z}$ $\displaystyle=0,\;\;$ $\displaystyle u_{1r}$ $\displaystyle=0,\;\;$ $\displaystyle\partial_{r}\varphi_{1}$ $\displaystyle=0,$ (20a) $\displaystyle\partial_{r}p_{2}$ $\displaystyle=0,\;\;$ $\displaystyle\partial_{r}v_{2z}$ $\displaystyle=0,\;\;$ $\displaystyle v_{2r}$ $\displaystyle=0,$ $\displaystyle\text{at }\;r=0.$ (20b) We control the electric potential $\varphi_{1}$ on the transducer electrodes. At the ground electrode, $\varphi_{1}=0$ always, whereas the excited electrodes have a peak-to-peak AC voltage amplitude of $\varphi_{0}$, so there $\varphi_{1}=\frac{1}{2}\varphi_{0}\>\mathrm{e}^{\mathrm{i}\Delta}$ with $\Delta$ being a phase.
Here, we work only with in-phase $\Delta=0$ (positive) and anti-phase $\Delta=\pi$ (negative) excitation voltages, so at the electrode surfaces we have $\displaystyle\varphi_{1}$ $\displaystyle=0,\quad$ $\displaystyle\text{on ground electrodes},$ (21a) $\displaystyle\varphi_{1}$ $\displaystyle=+\frac{1}{2}\varphi_{0},\quad$ $\displaystyle\text{on positive electrodes},$ (21b) $\displaystyle\varphi_{1}$ $\displaystyle=-\frac{1}{2}\varphi_{0},\quad$ $\displaystyle\text{on negative electrodes}.$ (21c) At interfaces away from the electrodes, we assume a zero-free-charge condition on $\varphi_{1}$. On freely vibrating interfaces, we assume a zero-stress condition on $\bm{u}_{1}$, $\bm{D}\cdot\bm{n}=0,\quad\bm{\sigma}_{\mathrm{sl}}\cdot\bm{n}=\bm{0},\quad\text{at the solid-air interface}.$ (22) At the solid-fluid interface, we use the effective boundary conditions for the continuity of the first-order velocity and stress, developed in a coordinate-free form by Bach and Bruus [23], where the acoustic boundary layer in the fluid has been accounted for analytically. Here we state these boundary conditions for our case with $\phi$-independent axisymmetric first-order fields, $\displaystyle\partial_{z}p_{1}$ $\displaystyle=\dfrac{\mathrm{i}\omega\rho_{0}}{1-\mathrm{i}\Gamma_{\mathrm{fl}}}\left[-\mathrm{i}\omega u_{1z}-\dfrac{\mathrm{i}}{k_{\mathrm{s}}r}\partial_{r}\left(-\mathrm{i}\omega ru_{1r}\right)\right]$ $\displaystyle\qquad-\dfrac{\mathrm{i}}{k_{\mathrm{s}}}\big{(}k_{\mathrm{c}}^{2}p_{1}+\partial_{z}^{2}p_{1}\big{)},$ (23a) $\displaystyle\bm{\sigma}_{\mathrm{sl}}\cdot\bm{e}_{z}$ $\displaystyle=-p_{1}\bm{e}_{z}+\mathrm{i}k_{\mathrm{s}}\eta_{0}\Big{(}-\mathrm{i}\omega\bm{u}_{1}+\dfrac{1}{\omega\rho_{0}}\bm{\nabla}p_{1}\Big{)},$ (23b) $\displaystyle k_{\mathrm{c}}^{2}$ $\displaystyle=(1+\mathrm{i}\Gamma_{\mathrm{fl}})\dfrac{\omega^{2}}{c_{\mathrm{fl}}^{2}},\qquad k_{\mathrm{s}}=\dfrac{1+\mathrm{i}}{\delta},$ (23c) as well as for the second-order streaming $\bm{v}_{2}$, $\displaystyle v_{2r}^{d0}$ $\displaystyle=-\dfrac{1}{2\omega}\mathrm{Re}\bigg\{\dfrac{1}{2}v_{1r}^{\delta 0\star}\partial_{r}v_{1r}^{\delta 0}+\dfrac{1}{2}v_{1z}^{\delta 0\star}\partial_{z}v_{1r}^{\delta 0}$ $\displaystyle\qquad\quad+\dfrac{2-\mathrm{i}}{2}\Big{(}\partial_{r}v_{1r}^{\delta 0\star}+\frac{1}{r}v_{1r}^{\delta 0\star}+\partial_{z}v_{1z}^{\delta 0\star}\Big{)}v_{1r}^{\delta 0}$ $\displaystyle\qquad\quad+\mathrm{i}\Big{(}\partial_{r}u^{0\star}_{1r}+\frac{1}{r}u^{0\star}_{1r}+\partial_{z}u^{0\star}_{1z}-\partial_{z}v_{1z}^{d\star}\Big{)}v_{1r}^{\delta 0}$ $\displaystyle\qquad\quad-\mathrm{i}\big{(}v_{1r}^{\delta 0\star}+v_{1z}^{\delta 0\star}\big{)}\partial_{z}u_{1r}-\mathrm{i}\bm{u}_{1}^{0\star}\cdot\bm{\nabla}v_{1r}^{d}\bigg\},$ (24a) $\displaystyle v_{2z}^{d0}$ $\displaystyle=-\dfrac{1}{2\omega}\mathrm{Re}\Big\{\mathrm{i}v_{1r}^{\delta 0\star}\partial_{r}u_{1z}^{0\star}+u_{1r}^{0\star}\partial_{r}\big{(}v_{1z}^{d}+v_{1z}^{\delta 0}\big{)}$ $\displaystyle\qquad\quad+u_{1z}^{0\star}\partial_{z}\big{(}v_{1z}^{d}+v_{1z}^{\delta 0}\big{)}\Big\}.$ (24b) Here, $\bm{v}_{1}^{\delta 0}=\bm{u}_{1}^{0}-\bm{v}_{1}^{d0}$, which together with all terms containing the factor $k_{\mathrm{s}}$ originates from the boundary layer of width $\delta$. Finally, when we consider systems without a lid, the acoustic waves transmitted from the membrane into the fluid will propagate toward infinity ($r\rightarrow\infty$ and $z\rightarrow\infty$) until damped out. To keep a finite-sized computational domain, we therefore follow Refs.
[28, 22] and limit the physical domain of the fluid to the region $0<r<r_{\mathrm{fl}}$ and $0<z<z_{\mathrm{fl}}$, then add so-called perfectly matched layers (PML) around this domain for $r_{\mathrm{fl}}<r<r_{\mathrm{PML}}$ or $z_{\mathrm{fl}}<z<z_{\mathrm{PML}}$, in which all outgoing waves from the membrane are damped out, and from which no incoming waves are sent toward the membrane, $\displaystyle\text{For }\;r_{\mathrm{fl}}<r<r_{\mathrm{PML}}\text{ or }\;z_{\mathrm{fl}}<z<z_{\mathrm{PML}}:$ $\displaystyle\chi(r,z)=$ (25a) $\displaystyle K_{\mathrm{PML}}\bigg{(}\theta[r\!-\!r_{\mathrm{fl}}]\bigg{[}\dfrac{r\!-\!r_{\mathrm{fl}}}{r_{\mathrm{PML}}\!-\!r_{\mathrm{fl}}}\bigg{]}^{2}\!+\theta[z\!-\!z_{\mathrm{fl}}]\bigg{[}\dfrac{z\!-\!z_{\mathrm{fl}}}{z_{\mathrm{PML}}\!-\!z_{\mathrm{fl}}}\bigg{]}^{2}\bigg{)},$ $\displaystyle\partial_{r}\rightarrow[1+\mathrm{i}\chi(r,z)]\partial_{r},\quad\partial_{z}\rightarrow[1+\mathrm{i}\chi(r,z)]\partial_{z},$ (25b) $\displaystyle\mathrm{d}r\rightarrow\frac{\mathrm{d}r}{1+\mathrm{i}\chi(r,z)},\qquad\mathrm{d}z\rightarrow\frac{\mathrm{d}z}{1+\mathrm{i}\chi(r,z)}.$ (25c) Here, $\theta$ is the Heaviside step function, and $K_{\mathrm{PML}}$ is a constant chosen such that the outgoing waves are damped in the PML without reflections at $r=r_{\mathrm{fl}}$ and $z=z_{\mathrm{fl}}$. The choice of boundary conditions at the outer surface of the PML is not crucial due to the strong damping provided in the PML. We use the Dirichlet condition $p_{1}=0$ there, but changing to the Neumann condition $\bm{n}\cdot\bm{\nabla}p_{1}=0$ leads to a negligible relative deviation in the result, less than $10^{-5}$ measured in terms of the L2-norm. ### II.4 Numerical implementation in COMSOL As in Ref. [3], we implement the model in the commercial finite-element software COMSOL 5.5 [21], closely following the implementation method described in Ref. [22]. The fields $p_{1}$, $\bm{u}_{1}$, $\varphi_{1}$, $p_{2}$, and $\bm{v}_{2}$ are all computed with quartic-order polynomial test functions. The mesh is defined to ensure at least 20 mesh elements per wavelength, and typically more, resulting in more than 80 nodal points per wavelength in the physical domain, and 10 elements per wavelength in the PML domain. In the bulk fluid we use a triangular mesh, and in both the membrane and the thin-film transducer, a structured mesh is used with at least four mesh elements in the thickness direction. At the symmetry axis $r=0$, a boundary-layer mesh of 15 elements is added, growing from a smallest element of 0.005 times the bulk mesh size. This mesh ensured a successful mesh-convergence analysis similar to that presented in Ref. [29]. Table 1: Material parameter values used in the model. The iodixanol-solution parameters are computed from the interpolation polynomials given in Ref. [30]. For the MCF-7 cells, the radius is chosen as the center value $10~{}\textrm{\textmu{}m}$ in the observed range of 8.1-12.2 µm given in Ref. [31], and the scattering coefficients $f_{0}$ and $f_{1}$ are computed from Eq. (17b) using data from Ref. [32].
Parameter | Value | | Parameter | Value
---|---|---|---|---
_Thin-film aluminum scandium nitride_ , $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$ [33, 34] | | | |
$\rho_{\mathrm{sl}}$ | 3300 $\textrm{kg}\>\textrm{m}^{-3}$ | | $\Gamma_{\mathrm{sl}}$ | 0.0005
$C_{11}$ | 313.8 GPa | | $C_{33}$ | 197.1 GPa
$C_{12}$ | 150.0 GPa | | $C_{44}$ | 108.6 GPa
$C_{13}$ | 139.2 GPa | | $C_{66}$ | 81.9 GPa
$e_{31,f}$ | $-2.65~{}\textrm{C}\>\textrm{m}^{-2}$ | | $e_{15}$ | $-0.32~{}\textrm{C}\>\textrm{m}^{-2}$
$e_{33}$ | 2.73 $\textrm{C}\>\textrm{m}^{-2}$ | | $\Gamma_{\varepsilon}$ | $0.0005$
$\varepsilon_{11}$ | 22 $\varepsilon_{0}$ | | $\varepsilon_{33}$ | 22 $\varepsilon_{0}$
_Membrane silicon, Si (111)_ [25, 35, 36] | | | |
$\rho_{\mathrm{sl}}$ | 2329 $\textrm{kg}\>\textrm{m}^{-3}$ | | |
$E$ | 168.9 GPa | | $s$ | 0.262
$C_{11}$ | 207.5 GPa | | $C_{44}$ | 73.7 GPa
$C_{12}$ | 66.9 GPa | | $\Gamma_{\mathrm{sl}}$ | 0.0001
$c_{\mathrm{lo}}$ | 5594 $\textrm{m}\,\textrm{s}^{-1}$ | | $c_{\mathrm{tr}}$ | 3425 $\textrm{m}\,\textrm{s}^{-1}$
_Water_ [29] | | | |
$\rho_{\mathrm{fl}}$ | $997~{}\textrm{kg}\>\textrm{m}^{-3}$ | | $\eta_{\mathrm{fl}}$ | $0.890~{}\textrm{mPa}\>\textrm{s}$
$c_{\mathrm{fl}}$ | $1497~{}\textrm{m}\,\textrm{s}^{-1}$ | | $\eta^{\mathrm{b}}_{\mathrm{fl}}$ | $2.485~{}\textrm{mPa}\>\textrm{s}$
$\kappa_{\mathrm{fl}}$ | $448~{}\textrm{TPa}^{-1}$ | | $\Gamma_{\mathrm{fl}}$ | $10.3~{}\mathrm{THz}^{-1}\,f$
_Iodixanol 60% solution_ [30] | | | |
$\rho_{\mathrm{fl}}^{\mathrm{Idx,}60\%}$ | $1320~{}\textrm{kg}\>\textrm{m}^{-3}$ | | $c_{\mathrm{fl}}^{\mathrm{Idx,}60\%}$ | $1498~{}\textrm{m}\,\textrm{s}^{-1}$
$\eta_{\mathrm{fl}}^{\mathrm{Idx,}60\%}$ | $7.690~{}\textrm{mPa}\>\textrm{s}$ | | $\kappa_{\mathrm{fl}}^{\mathrm{Idx,}60\%}$ | $338~{}\textrm{TPa}^{-1}$
_Cancer cell MCF-7_ [31, 32] | | | |
$\rho_{\mathrm{MCF\text{-}7}}$ | $1055~{}\textrm{kg}\>\textrm{m}^{-3}$ | | $\kappa_{\mathrm{MCF\text{-}7}}$ | $373~{}\textrm{TPa}^{-1}$
$a_{\mathrm{MCF\text{-}7}}$ | $10~{}\textrm{\textmu{}m}$ | | |
$f_{0}^{\mathrm{Wa}}$ | $0.167$ | | $f_{1}^{\mathrm{Wa}}$ | $0.037+0.00002\mathrm{i}$
$f_{0}^{\mathrm{Idx,}60\%}$ | $-0.104$ | | $f_{1}^{\mathrm{Idx,}60\%}$ | $-0.153+0.0010\mathrm{i}$

Figure 2: Simulation of the resonance modes $n$ for a 10-µm-thick silicon Si (111) membrane of radius $r_{\mathrm{mv}}=100~{}\textrm{\textmu{}m}$ placed below a half-space filled with a fluid and actuated by an attached 1-µm-thick $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$ thin-film PZE transducer with a peak-to-peak AC voltage $\varphi_{0}=1~{}\textrm{V}_{\textrm{pp}}$. The fluid domain is surrounded by a 100-µm wide PML domain. (a) Color plot of the pressure $\operatorname{Re}[\mathrm{i}p_{1}]$ from $-7.8$ (blue) to $+7.8~{}\textrm{kPa}$ (red) and of the displacement amplitude $|\bm{u}_{1}|$ from 0 (blue) to $0.75~{}\textrm{nm}$ (yellow) for the $n=0$ fundamental mode at 2.16 MHz, excited by the uniform excitation electrode shown in (d) the $r$-$z$ plane and (g) the $x$-$y$ plane. (b) The same as (a) but for the $n=3$ harmonic mode at 52.0 MHz with pressure amplitude $\pm 14~{}\textrm{kPa}$ and displacement amplitude 0.08 nm, excited by the uniform excitation electrode shown in (e) the $r$-$z$ plane and (i) the $x$-$y$ plane. (c) The same as (b) but with pressure amplitude $\pm 513~{}\textrm{kPa}$ and displacement amplitude 0.75 nm, and with the patterned excitation electrode shown in (f) the $r$-$z$ plane and (k) the $x$-$y$ plane.
(h), (j), and (l) Color plot of the out-of-phase in-plane strain $\operatorname{Re}[\mathrm{i}\partial_{r}u_{1r}]$ from min (cyan) to max (magenta) for the membrane modes shown respectively in (a), (b), and (c), for which the vertical displacement $\operatorname{Re}[\mathrm{i}u_{1z}]$ for clarity has been enhanced by a factor of 15000, 30000, and 6000, respectively. Animations of panels (a) and (c) are given in the Supplemental Material [37]. The materials used in the model are as follows: the PZE thin-film transducer is $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$, the silicon membrane is Si-(111), the liquid is either pure water or a 60% aqueous solution of iodixanol, and the cell is an MCF-7 breast-cancer cell. All material parameter values used in the model are listed in Table 1, including references to the literature. We carried out the numerical simulations of the model on a workstation with a 16-core Intel i9-7960X processor at 3.70 GHz boost clock frequency and 128 GB of random access memory (RAM). At any given frequency, a simulation of the first-order fields $p_{1}$, $\bm{u}_{1}$, and $\varphi_{1}$, comprising 1.4 million degrees of freedom, used 14 GB RAM and took 50 s, whereas the second-order fields $p_{2}$ and $\bm{v}_{2}$, comprising 2.7 million degrees of freedom, used 27 GB RAM and took 180 s. ## III Excitation of higher-harmonic membrane modes by electrode patterning We now introduce the excitation of higher-harmonic membrane modes in acoustofluidics. As already indicated in Fig. 1, we show that only by patterning the excitation electrode to be compatible with the shape of the target higher-harmonic membrane mode can this mode be excited with a sufficiently large amplitude. ### III.1 Membrane modes and patterned electrodes In Fig. 2 we show the main effect of patterning the excitation electrodes appropriately for a 10-µm-thick Si-disk membrane covered uniformly by a 1-µm-thick (AlSc)N thin-film transducer. The inner part $r<r_{\mathrm{mv}}=100~{}\textrm{\textmu{}m}$ of the membrane is free to vibrate, whereas the surrounding ring $r_{\mathrm{mv}}<r<r_{\mathrm{mc}}=300~{}\textrm{\textmu{}m}$ is clamped. Above the membrane is placed a cylindrical domain of water of radius $r_{\mathrm{mc}}$ and height $z_{\mathrm{fl}}=200~{}\textrm{\textmu{}m}$, and outside this domain is placed a PML domain of thickness $100~{}\textrm{\textmu{}m}$. In Fig. 2(a), the ground and excitation electrodes are fully covering the transducer, and a peak-to-peak voltage $\varphi_{0}=1~{}\textrm{V}$ is applied to the latter. A frequency sweep reveals the $\phi$-independent vibration resonance modes $n$ of the membrane disk, $n=0,1,2,3,\ldots$, with resonance frequencies $f_{n}=2.16,10.5,19.7,52.0$ MHz and a monotonically decreasing maximum displacement amplitude $u_{1z,n}^{\mathrm{max}}=0.75,0.24,0.08,0.02$ nm for $n=0,1,2,3$. Animations of modes $n=0$ and $n=3$ are given in the Supplemental Material [37] (available at http://bruus-lab.dk/files/Steckel_membrane_Suppl.zip, containing animations of $p_{1}$ and $\bm{u}_{1}$ in Fig. 2(a), Fig. 2(c), Fig. 3(a), Fig. 4, and Fig. 5). This decreasing mode amplitude is partly explained by the uniformity across the thin-film-transducer surface of the applied perpendicular voltage drop $\varphi_{0}$, see Fig. 2(d,g), which by the electromechanical coupling matrix $\bm{e}$ promotes a uniform stretching or compression of the thin film at any given time.
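As a sanity check of the simulated fundamental frequency $f_{0}=2.16$ MHz, the following minimal sketch estimates it from in-vacuo Kirchhoff plate theory for a clamped circular plate plus Lamb's classical added-mass correction for one-sided water loading. The 1-µm PZE film is neglected, the Table 1 values $E$ and $s$ are taken to be Young's modulus and Poisson's ratio of Si (111), and Lamb's added-virtual-mass factor 0.6689 for the fundamental mode is assumed; this is an order-of-magnitude estimate, not the simulation model:

```python
import numpy as np

E, nu = 168.9e9, 0.262            # Si (111): E and s from Table 1
rho_s, h, R = 2329.0, 10e-6, 100e-6
rho_f = 997.0                     # water above the membrane

D = E * h**3 / (12.0 * (1.0 - nu**2))       # flexural rigidity
lam2 = 10.2158                               # (kR)^2 for the clamped fundamental
f_vac = lam2 / (2.0 * np.pi * R**2) * np.sqrt(D / (rho_s * h))

beta = 0.6689 * rho_f * R / (rho_s * h)      # Lamb's added-mass ratio, one-sided water
f_wet = f_vac / np.sqrt(1.0 + beta)
print(f"f_vac = {f_vac/1e6:.2f} MHz, f_wet = {f_wet/1e6:.2f} MHz")
# -> f_vac ~ 4.1 MHz and f_wet ~ 2.1 MHz, close to the simulated 2.16 MHz
```

The factor-of-two reduction from the in-vacuo estimate underlines how strongly the water mass-loads the membrane modes.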
Uniform excitation works well for the fundamental mode $n=0$ at 2.16 MHz, where the film is either stretching across the entire membrane at any given time, or contracting, although not uniformly, resulting in a perpendicular displacement $u_{1z}$ that is either positive across the entire membrane, or negative, as shown in Fig. 2(a). For the third-harmonic mode $n=3$ at 52.0 MHz, the oscillating displacement $u_{1z}$ changes sign 3 times along the radial direction as shown in Fig. 2(b), and this counteracts the uniform stretching/contraction promoted by the uniform potential shown in Fig. 2(e,i), and as a result the excitation amplitude is decreased. Clearly, by arranging for a non-uniform potential following the non-uniform mechanical displacement, we could hope to restore a large excitation amplitude. But how? The clue is the out-of-phase in-plane strain $\mathrm{i}\partial_{r}u_{1r}$ for modes $n=0$ and 3 shown in Fig. 2(h,j), respectively. This field indicates how the membrane would move were it free to vibrate in a given resonance mode. For $n=0$ at 2.16 MHz, the strain indicates a contraction (cyan) in a large center part of the membrane, which the uniform applied potential supports, only counteracting the expansion indicated by the strain in a minor peripheral part (magenta). In contrast, for mode $n=3$ at 52 MHz, the strain indicates four ring-shaped domains respectively exhibiting contraction, expansion, contraction, and expansion, which is not compatible with the applied uniform potential, as each full contraction-expansion period is nearly canceled by the potential. To make the potential for $n=3$ compatible with the strain pattern, we therefore split the excitation electrode into the same 3 ring-shaped domains (plus the central disk) defined by the strain, with alternating excitation voltages $+\frac{1}{2}\varphi_{0}$, $-\frac{1}{2}\varphi_{0}$, $+\frac{1}{2}\varphi_{0}$, and $-\frac{1}{2}\varphi_{0}$, shown in Fig. 2(f,k). Whether these rings should be contacted by wires in the same layer [20] or by conducting vias to wires in a second layer, we leave to the experts of MEMS technology to figure out; here, we just assume it to be possible in practice. The resulting acoustic response for the same applied voltage amplitude is shown in Fig. 2(c). Here we see that the resonance frequency remains $f_{3}=52.0$ MHz, and that the displacement amplitude for the patterned electrode is $u_{1z,3}^{\mathrm{max,p}}=0.75$ nm, one and a half orders of magnitude higher than that of the uniform electrode, $u_{1z,3}^{\mathrm{max,u}}=0.02$ nm, and equal to the uniform-electrode mode-0 amplitude $u_{1z,0}^{\mathrm{max,u}}=0.76$ nm. ### III.2 Traveling and standing pressure waves induced by higher-harmonic membrane modes Let us now turn to the resulting acoustic pressure field $p_{1}$ in the liquid above the membrane vibrating in its resonance modes $n=0$ and 3. In Fig. 2(a) we see that a strong pressure wave is emitted from the antinode part of the vibrating membrane at 2.16 MHz in mode $n=0$ and absorbed in the PML domain. In Fig. 2(b), and more clearly in Fig. 2(c), we also see partial waves being emitted from each of the four antinodes in mode $n=3$ at 52.0 MHz. There is an important qualitative difference between the fundamental mode $n=0$ and any higher-harmonic mode $n$. The former emits only one partial wave, and thus $p_{1}$ is devoid of interference patterns, whereas the latter emits $n+1$ partial waves from $n+1$ ring-shaped parts of the membrane surface.
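Before following these partial waves into the liquid, we illustrate the electrode-patterning recipe itself on an idealized model. The minimal sketch below constructs an axisymmetric mode of an in-vacuo clamped Kirchhoff plate and locates the radii where its bending-strain proxy changes sign; these are the radii at which the excitation electrode would be split into rings of alternating polarity. Since fluid loading, the PZE film, and the full $\partial_{r}u_{1r}$ field of Fig. 2(j) are not included, the resulting mode index and ring radii are illustrative and need not match Fig. 2(f,k):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv, jv

R = 100e-6   # free membrane radius

def freq_eq(x):
    # clamped-plate condition w(R) = w'(R) = 0 for axisymmetric modes
    return jv(1, x) * iv(0, x) + jv(0, x) * iv(1, x)

x0 = brentq(freq_eq, 12.0, 13.0)   # 4th axisymmetric root, kR ~ 12.6
k = x0 / R

r = np.linspace(1e-8, R, 4001)
w = jv(0, k * r) - jv(0, x0) / iv(0, x0) * iv(0, k * r)   # mode shape w(r)
curv = np.gradient(np.gradient(w, r), r)   # film-strain proxy ~ -w''(r)

flips = np.where(np.sign(curv[:-1]) != np.sign(curv[1:]))[0]
print("split the electrode at r/R =", np.round(r[flips] / R, 3))
```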
These partial waves interfere and give rise to a complex interference pattern stretching out in the bulk of the liquid above the membrane, as seen in Fig. 2(c). This has profound implications for the acoustofluidic properties of the system, which we investigate further in Section IV.

Figure 3: (a) 3D contour plot of instantaneous isobars at $-250$, $-125$, $125$, and $250~{}\textrm{kPa}$, equivalent to Fig. 2(c), of the acoustic pressure $p_{1}$ in the liquid above the membrane excited to resonance mode $n=3$ at 52.0 MHz by the patterned 3-ring electrode shown in Fig. 2(f,k). The displacement amplitude is enhanced by a factor of 20,000 to be visible. An animation of $p_{1}$ and $\bm{u}_{1}$ is given in the Supplemental Material [37]. (b) A color plot similar to Fig. 2(c), but for the absolute value $|p_{1}|$.

In Fig. 3(a) is shown a 3D contour plot of instantaneous isobars, equivalent to Fig. 2(c), of the acoustic pressure $p_{1}$ in the liquid above the membrane excited to resonance mode $n=3$ at 52.0 MHz by the patterned electrode shown in Fig. 2(f,k). The pressure pattern consists of ring-shaped waves propagating upward in the $z$ direction, while exhibiting standing waves in the radial direction. This particular mixture of standing radial and traveling axial waves is clearly revealed by studying the color plot in Fig. 3(b) of the absolute value $|p_{1}|$ of the acoustic pressure. Following an outward radial path from $r=0$ at any given fixed height $z=z_{0}$ reveals regular oscillations in $|p_{1}(r,z_{0})|$ between nearly zero minima and large maxima, a characteristic of a standing radial wave. The most pronounced oscillation happens at the height $z_{0}=60~{}\textrm{\textmu{}m}$, and here we find the ratio $\max|p_{1}|/\min|p_{1}|\approx 140$, which corresponds to a nearly ideal standing wave. When inspecting the pressure nodal surfaces (gray) in $|p_{1}|$ in Fig. 3(b), we see that they form two nearly perfect and one deformed coaxial cylinders centered on the $z$-axis, with a tendency to widen out as the distance to the membrane increases. Following any vertical path upward parallel to the $z$-axis, $|p_{1}|$ is mainly monotonically decreasing, which is the characteristic of a nearly ideal damped traveling wave. One exception to this behavior is found along the $z$-axis, where a local maximum in $|p_{1}(0,z)|$ is found for $z\approx 60~{}\textrm{\textmu{}m}$ above the center of the membrane. Here we estimate the ratio $\max|p_{1}|/\min|p_{1}|\approx 1.3$, which corresponds to a 1.0:0.3 mixture of a traveling and a standing wave. We shall see in Section IV that this small standing-wave component along $z$, together with the large standing-wave component along $r$, spawns a particle trap in all three spatial directions at some distance above the center of the vibrating silicon membrane.

### III.3 The effect of introducing a rigid lid

So far, we have assumed ideally absorbing surroundings by introducing the PML. As a result, the pressure waves travel away from the membrane without being reflected back. In actual acoustofluidic systems, such a behavior is approximately realized by using soft PDMS rubber walls, as demonstrated recently by Skov et al. in a combined experimental and numerical study [38]. They also showed how, on replacing the soft lid with the other extreme, a hard glass lid, the traveling wave field is replaced by a standing wave field.
Here, we briefly examine this latter case by replacing the top PML in our model by a rigid lid placed at $z=z_{\mathrm{fl}}$ with a hard-wall boundary condition, $\partial_{z}p_{1}=0$, for the acoustic pressure. We keep the PML at the side to mimic an infinitely broad system. The results of the rigid-lid simulation of $p_{1}$ are shown in Fig. 4, where a vertical standing-wave behavior is clearly seen as horizontal nodal lines in the color plot of $|p_{1}|$ in Fig. 4(b). However, since the PML at the side still allows the waves to propagate away from the membrane, now predominantly in the radial direction, neither the membrane modes nor the maximum pressure amplitudes are much affected by replacing the PML lid with the rigid lid, as seen when comparing Figs. 3(b) and 4(b). The smooth behavior of $|p_{1}(0,z)|$ is now overlaid with minor oscillations, but there is still a global pressure maximum along the $z$-axis. Close to the membrane, the admixture of the vertical standing-wave component is strong for the rigid-lid system, but traveling waves are still seen, as revealed by the animations of Figs. 2(c) and 4(b) given in the Supplemental Material [37]. As the characteristics of the pressure field are not strongly affected by the lid boundary condition, we continue using the previous case of an ideally absorbing lid in the following.

Figure 4: The small membrane of Figs. 2 and 3, but with a hard-wall lid at $z=z_{\mathrm{fl}}$ instead of PML. (a) Same as Fig. 2(c) for the pressure $p_{1}$. An animation of $p_{1}$ and $\bm{u}_{1}$ is given in the Supplemental Material [37]. (b) Same as Fig. 3(b) for the magnitude $|p_{1}|$ of the pressure $p_{1}$.

## IV Cell trapping by higher-harmonic membrane modes

We choose to illustrate the possibilities offered by the use of higher-harmonic membrane modes in acoustofluidic applications with the example of trapping a single suspended biological cell. Particle trapping is a central problem in the field, studied by many groups. Examples are microparticle trapping in acoustic tweezers [39, 40, 41, 42, 43, 44] and in disposable capillary tubes [45, 28], nanoparticle trapping by using seed particles [46, 47, 48], and short- and long-term trapping [49, 50]. Trapping and the associated focusing have been created by various methods, including standing bulk [51, 52], traveling bulk [53], and surface [54, 55, 38] acoustic waves. An important focus area in the field is the trapping and focusing of biological cells in general [56, 57, 58, 50] and of circulating tumor cells in particular [59, 60, 30], in some cases accomplished by tuning the acoustic properties of the suspension medium relative to those of the given cells [30, 61].

Figure 5: A membrane device as in Fig. 2(c), but with $r_{\mathrm{mv}}=500~{}\textrm{\textmu{}m}$, $r_{\mathrm{mc}}=1500~{}\textrm{\textmu{}m}$, and $z_{\mathrm{fl}}=1000~{}\textrm{\textmu{}m}$, and with the excitation electrode divided into 10 (instead of 3) ring-shaped segments, each $44~{}\textrm{\textmu{}m}$ wide, separated by 4-µm-wide gaps (not shown), and excited with alternating voltages $\pm\frac{1}{2}\varphi_{0}$ as in Fig. 2(f,k). Color plots of the pressure $\operatorname{Re}[\mathrm{i}p_{1}]$ from $-235$ (blue) to $+235~{}\textrm{kPa}$ (red) and of $|\bm{u}_{1}|$ from 0 (blue) to $0.49~{}\textrm{nm}$ (yellow) for the $n=10$ higher-harmonic mode at $18.5~{}\textrm{MHz}$. The top inset is a color plot of $|p_{1}|$. The bottom inset shows details of $p_{1}$ and $|\bm{u}_{1}|$.
An animation of $p_{1}$ and $\bm{u}_{1}$ is given in the Supplemental Material [37].

### IV.1 Design of the membrane for cell trapping

In the following numerical simulation analysis of cell trapping, we choose as our model cell the breast-cancer cell MCF-7 with the known acoustic parameters listed in Table 1, here assumed to be spherical with a radius $a_{\mathrm{MCF\text{-}7}}=10~{}\textrm{\textmu{}m}$. To facilitate acoustic trapping of such a cell, we prefer to work with an acoustic wavelength in the fluid larger than the cell, $\lambda_{0}\gtrsim 8a_{\mathrm{MCF\text{-}7}}\approx 80~{}\textrm{\textmu{}m}$, or $f_{0}\lesssim 20~{}\textrm{MHz}$, a limit in which the long-wavelength expression (17a) for the acoustic radiation force on a suspended particle is valid. Furthermore, for membrane mode $n$ with radial wavelength $\lambda_{r}^{(n)}\approx 2r_{\mathrm{mv}}/(n+\frac{1}{2})$, the mode frequency scales as $f^{(n)}\propto\big{[}\lambda_{r}^{(n)}\big{]}^{-2}$, and since $f^{(n)}\propto\lambda_{0}^{-1}$ in the fluid, we can at membrane resonance write $\big{[}\lambda_{r}^{(n)}\big{]}^{2}=\lambda_{0}L$, where $L$ is a parameter of dimension length characteristic for the given system. We find numerically that $L\approx 110~{}\textrm{\textmu{}m}$ in our system with a 10-µm-thick silicon membrane pushing on water. Combined with the long-wavelength criterion, we obtain $\lambda^{(n)}_{r}=\sqrt{\lambda_{0}L}\gtrsim\sqrt{8a_{\mathrm{MCF\text{-}7}}L}\approx 94~{}\textrm{\textmu{}m}$. Finally, to ensure the formation of a trapping point on the $z$-axis, we must demand that $\lambda_{0}<\alpha\lambda_{r}^{(n)}$ (where $\alpha\approx 1$ is a constant we have not been able to compute), because if not, the acoustic wave in the fluid propagates at an angle away from the $z$-axis. We thus conclude that all the above criteria are satisfied if $8a_{\mathrm{MCF\text{-}7}}\lesssim\lambda_{0}\lesssim\alpha^{2}L$ (equivalent to $12\alpha^{-2}~{}\textrm{MHz}\lesssim f_{0}\lesssim 20~{}\textrm{MHz}$) and $\lambda_{r}^{(n)}\gtrsim 94~{}\textrm{\textmu{}m}$. Consequently, as a proof of concept, we use the design shown in Fig. 5, consisting of a 10-µm-thick Si-(111) disk membrane of radius $r_{\mathrm{mv}}=500~{}\textrm{\textmu{}m}$ driven in its $n=10$ harmonic mode with $\lambda_{r}^{(10)}\approx~{}95~{}\textrm{\textmu{}m}$ and an estimated resonance frequency $f^{(10)}\approx Lc_{0}/\big{[}\lambda_{r}^{(10)}\big{]}^{2}\approx 18.2~{}\textrm{MHz}$. Similar to Fig. 2(f,k) for mode $n=3$, the mode $n=10$ is excited by patterning the excitation electrode of the 1-µm-thick $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$ transducer into ten ring-shaped segments. A $1~{}\textrm{V}_{\textrm{pp}}$ AC voltage is applied to these segments with alternating phases $+\frac{1}{2}\varphi_{0}$ and $-\frac{1}{2}\varphi_{0}$, resulting in a strong excitation of resonance mode $n=10$ at $f=18.5~{}\textrm{MHz}$, close to the prediction $f^{(10)}$. All the features seen in Fig. 5 of the disk-membrane displacement field $\bm{u}_{1}$ and of the pressure field $p_{1}$ in the liquid are the same as in Fig. 2(c), and the amplitudes are relatively large, being $|\bm{u}_{1}|=0.49~{}\textrm{nm}$ and $|p_{1}|=235~{}\textrm{kPa}$. Of particular interest for trapping, we notice a local maximum in $|p_{1}(0,z)|$ in the inset of Fig. 5 near $z=100~{}\textrm{\textmu{}m}$, similar to the one at $z=60~{}\textrm{\textmu{}m}$ shown in Fig. 3(b).
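As a cross-check of these design estimates, the scaling relations above can be evaluated directly. The following minimal Python sketch uses the length scale $L$ and cell radius quoted in the text; the speed of sound $c_{0}=1497$ m/s for water is an assumed Table 1 value. It reproduces $\lambda_{r}^{(10)}\approx 95~{}\textrm{\textmu{}m}$, the long-wavelength bound of $94~{}\textrm{\textmu{}m}$, and $f^{(10)}\approx 18.2$ MHz:

```python
import numpy as np

c0 = 1497.0     # speed of sound in water [m/s] (assumed Table 1 value)
L = 110e-6      # numerically determined length scale of the system [m]
a = 10e-6       # MCF-7 cell radius [m]
r_mv = 500e-6   # radius of the vibrating part of the membrane [m]
n = 10          # targeted higher-harmonic mode number

lam_r = 2*r_mv/(n + 0.5)            # radial wavelength of mode n (~95 um)
f_n = L*c0/lam_r**2                 # resonance-frequency estimate (~18.2 MHz)
lam_min = np.sqrt(8*a*L)            # long-wavelength bound on lam_r (~94 um)

print(f"lambda_r^({n}) = {lam_r*1e6:.1f} um (bound: {lam_min*1e6:.0f} um)")
print(f"f^({n}) = {f_n/1e6:.1f} MHz")
```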
### IV.2 Characterizing the cell trap

Figure 6: Contour plots of the simulated components $F^{\mathrm{pt}}_{r}$ and $F^{\mathrm{pt}}_{z}$ of the force $\bm{F}^{\mathrm{pt}}$ acting on MCF-7 cells in the system shown in Fig. 5, but for different liquids and voltage amplitudes: (a,b) Water at $1~{}\textrm{V}_{\textrm{pp}}$, (c,d) 60% iodixanol at $1~{}\textrm{V}_{\textrm{pp}}$, and (e,f) 60% iodixanol at $5~{}\textrm{V}_{\textrm{pp}}$, using the parameters in Table 1. The contour lines for $F^{\mathrm{pt}}_{r}$ are from $-200$ to $100~{}\textrm{pN}$ in steps of $50~{}\textrm{pN}$ in (a,c), but from $-5$ to $2.5~{}\textrm{nN}$ in steps of $1.25~{}\textrm{nN}$ in (e). The contour lines for $F^{\mathrm{pt}}_{z}$ are from $-20$ to $50~{}\textrm{pN}$ in steps of $5~{}\textrm{pN}$ in (b,d), but from $-0.5$ to $1.25~{}\textrm{nN}$ in steps of $0.125~{}\textrm{nN}$ in (f). The black arrows indicate the directions of the components. (g) Line plot of the normalized vertical force component $\hat{F}_{z}^{\mathrm{pt}}$ along the $z$-axis for cases (b), (d), and (f).

To characterize the cell trap, we study the total force $\bm{F}^{\mathrm{pt}}$ (19a) that acts on an MCF-7 cell suspended in water. In Fig. 6(a,b) are shown the contour plots of the radial and axial components $F^{\mathrm{pt}}_{r}$ and $F^{\mathrm{pt}}_{z}$ of $\bm{F}^{\mathrm{pt}}$. To set the scale, we note that in this case the buoyancy-corrected gravitational force is $F^{\mathrm{grav}}_{\mathrm{wa}}=2.3~{}\textrm{pN}$ and the Stokes drag force is $F^{\mathrm{drag}}_{\mathrm{wa}}(10~{}\textrm{\textmu{}m}/\textrm{s})=1.7~{}\textrm{pN}$, assuming that the velocity of the cell relative to the water is $10~{}\textrm{\textmu{}m}/\textrm{s}$. Firstly, we note that $\bm{F}^{\mathrm{pt}}$ with a magnitude above $70~{}\textrm{pN}$ is completely dominated by the radiation force $\bm{F}^{\mathrm{rad}}$, secondly that $F^{\mathrm{pt}}_{r}\gtrsim F^{\mathrm{pt}}_{z}$, and thirdly that, using pure water, $\bm{F}^{\mathrm{pt}}$ expels the cell from the primary nodal plane instead of trapping it, as indicated by the black arrows. This anti-trapping is a well-known result in trapping theory for a particle heavier and more rigid than the fluid in a single traveling wave [44]. To change the system into a trap, we need to tune the acoustic properties of the fluid to reverse the signs of the scattering coefficients $f_{0}$ and $f_{1}$. This can be done by using the density modifier iodixanol, as demonstrated in Refs. [30, 62, 63, 64]. In Fig. 6(c,d) it is shown that, indeed, by using a 60% aqueous iodixanol solution, which is denser than pure water, the particle force $\bm{F}^{\mathrm{pt}}$ is reversed: the orange regions of $F^{\mathrm{pt}}_{r}>0$ in Fig. 6(a) turn into the cyan regions of $F^{\mathrm{pt}}_{r}<0$ in Fig. 6(c), and vice versa. Also, the yellow regions of $F^{\mathrm{pt}}_{z}>0$ in Fig. 6(b) turn into the magenta regions of $F^{\mathrm{pt}}_{z}<0$ in Fig. 6(d), and vice versa. The characteristic forces for the solution are $F^{\mathrm{grav}}_{I60\%}=-11~{}\textrm{pN}$ and $F^{\mathrm{drag}}_{I60\%}(10~{}\textrm{\textmu{}m}/\textrm{s})=15~{}\textrm{pN}$ (these characteristic force scales are checked in the sketch below). In Fig. 6(c) we see that the radial force $F^{\mathrm{pt}}_{r}$ points toward the $z$-axis for $r\lesssim 20~{}\textrm{\textmu{}m}$, reaching a maximum amplitude of 234 pN at the height $z_{\mathrm{trap}}\approx 70~{}\textrm{\textmu{}m}$.
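As a minimal numerical check of the characteristic force scales quoted above, the buoyancy-corrected gravitational force and the Stokes drag can be evaluated directly. The densities below are those given in the text, whereas the dynamic viscosities are assumed values (about 0.89 mPa s for water, and about 8 mPa s for the 60% iodixanol solution, the latter chosen here to be consistent with the quoted 15 pN drag):

```python
import numpy as np

a = 10e-6            # MCF-7 cell radius [m]
v = 10e-6            # cell velocity relative to the fluid [m/s]
g = 9.82             # gravitational acceleration [m/s^2]
rho_cell = 1055.0    # MCF-7 mass density [kg/m^3]

# (fluid, fluid density [kg/m^3], dynamic viscosity [Pa s]; the viscosities
# are assumed values, not taken from the text)
fluids = [("water", 997.0, 0.89e-3), ("60% iodixanol", 1320.0, 8.0e-3)]

for name, rho_fl, eta in fluids:
    F_grav = (rho_cell - rho_fl)*(4/3)*np.pi*a**3*g  # buoyancy-corrected gravity
    F_drag = 6*np.pi*eta*a*v                         # Stokes drag at 10 um/s
    print(f"{name:>14s}: F_grav = {F_grav*1e12:+5.1f} pN, "
          f"F_drag = {F_drag*1e12:4.1f} pN")
# -> water: +2.4 pN and 1.7 pN; 60% iodixanol: -10.9 pN and 15.1 pN,
#    matching the quoted 2.3, 1.7, -11, and 15 pN to within rounding.
```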
However, the axial component $F^{\mathrm{pt}}_{z}$ is an order of magnitude smaller, and when plotting $F^{\mathrm{pt}}_{z}(0,z)$ along the $z$-axis, we find that although it has the desired upward direction $F^{\mathrm{pt}}_{z}(0,z)>0$ for $z<z_{\mathrm{trap}}$, it is still positive at its minimum at $z=z_{\mathrm{trap}}$, $F^{\mathrm{pt}}_{z}(0,z_{\mathrm{trap}})\approx+5~{}\textrm{pN}$. The setup is thus not a trap at all because, as the numerical simulation reveals, at $z_{\mathrm{trap}}$ we have $F_{z}^{\mathrm{rad}}=-20~{}\textrm{pN}$, $F_{z}^{\mathrm{drag}}=+14~{}\textrm{pN}$, and $F_{z}^{\mathrm{grav}}=+11~{}\textrm{pN}$, such that $F^{\mathrm{pt}}_{z}=(-20+14)~{}\textrm{pN}+11~{}\textrm{pN}=5~{}\textrm{pN}>0~{}\textrm{pN}$. A trap is achieved simply by enhancing the AC voltage amplitude from $1~{}\textrm{V}_{\textrm{pp}}$ to $5~{}\textrm{V}_{\textrm{pp}}$. This enhances the magnitude of the voltage-dependent second-order forces $\bm{F}^{\mathrm{rad}}$ and $\bm{F}^{\mathrm{drag}}$ by a factor $5^{2}=25$, leaving the gravitational force unchanged. Changing from $1~{}\textrm{V}_{\textrm{pp}}$ to $5~{}\textrm{V}_{\textrm{pp}}$, we thus expect the weakest point in the trap to attain the value $\bm{F}^{\mathrm{pt}}=\big{(}-234\times 25,(-20+14)\times 25+11\big{)}~{}\textrm{pN}=(-5850,-119)~{}\textrm{pN}$ (the axial value evaluated with the unrounded force contributions), and this is exactly what is obtained, as shown in Fig. 6(e,f). So, using an appropriate tuning of the density of the liquid and of the excitation voltage, our system is able to trap the MCF-7 cell at a trapping point located on the $z$-axis approximately $70~{}\textrm{\textmu{}m}$ above the membrane, as revealed by the thick red line in Fig. 6(g). As shown in Fig. 6(e), radial trapping occurs once the cell enters the cyan region where $F^{\mathrm{pt}}_{r}<0$. However, we notice that this region is surrounded by a barrier-like orange region with $F^{\mathrm{pt}}_{r}>0$, which repels the cell. This barrier could make it difficult for the cell to enter the trapping region in the first place; however, once a cell is trapped, the barrier prevents more particles from entering the trap. This aspect is discussed further in Section V. To gain a better understanding of how the trap is loaded, we study the steady acoustic streaming in the following.

### IV.3 Steady acoustic streaming in the trap

Figure 7: Vector plot (cyan arrows) of the steady streaming $\bm{v}_{2}$ and a color plot of $|\bm{v}_{2}|$ from 0 (black) to $247~{}\textrm{\textmu{}m}/\textrm{s}$ (white) in a 60% iodixanol solution at $f=18.5~{}\textrm{MHz}$ and $5~{}\textrm{V}_{\textrm{pp}}$, corresponding to Fig. 6(e,f) of the membrane driven in mode $n=10$. Bulk-driven Eckart streaming dominates over boundary-driven Rayleigh streaming.

In the system with an acoustic wave propagating upward from a vibrating membrane, the acoustic streaming is dominated by the bulk-driven Eckart streaming due to the force density (16) $\bm{f}=-\dfrac{\Gamma_{\mathrm{fl}}\omega}{2c_{\mathrm{fl}}^{2}}\operatorname{Re}\big{[}p_{1}^{*}\bm{v}_{1}^{d}\big{]}$ from the upward traveling acoustic wave, whereas the Rayleigh boundary-driven streaming is negligible. In Fig. 7 is shown the simulated streaming for the 60% iodixanol solution at 18.5 MHz and $5~{}\textrm{V}_{\textrm{pp}}$, corresponding to Fig. 6(e,f).
The dominant feature is the toroidal vortex centered at $(r,z)=(200,140)~{}\textrm{\textmu{}m}$, which drags particles down toward the membrane near $r=300~{}\textrm{\textmu{}m}$ and then sends them upward along the lower part of the $z$-axis, exactly where the trapping point is located. The flow velocity there is $247~{}\textrm{\textmu{}m}/\textrm{s}$, corresponding to a Stokes drag force on a trapped cell of $F^{\mathrm{drag}}=358~{}\textrm{pN}$, which is a factor of 1.4 lower than the dominant radiation force, $F_{z}^{\mathrm{rad}}=500~{}\textrm{pN}$. The trap thus works well for the large cancer cells, but less so for smaller cells, and we can estimate the critical cell radius $a_{\mathrm{cr}}$, below which the cell is not trapped, as $a_{\mathrm{cr}}=\sqrt{F_{z}^{\mathrm{drag}}(10~{}\textrm{\textmu{}m})/F_{z}^{\mathrm{rad}}(10~{}\textrm{\textmu{}m})}\>10~{}\textrm{\textmu{}m}=8.5~{}\textrm{\textmu{}m}$ [65].

## V Discussion of the results

In the following, we discuss various aspects of the higher-harmonic membrane trap, its advantages and disadvantages. Some of these aspects are particular to this specific type of trap, while others apply to single-cell traps in general.

Arrays of traps. One distinct advantage of the membrane trap is its relatively small size, which permits the fabrication of arrays of single-cell traps for parallel analysis that can be turned on and off in a controlled manner. The design with $(r_{\mathrm{mv}},n)=(500~{}\textrm{\textmu{}m},10)$ analyzed in Fig. 5 allows for placing one trap per mm${}^{2}$, a density that can be increased by identifying membrane modes with the desired properties for smaller values of $r_{\mathrm{mv}}$.

Improved loading by turning the trap upside down. In Section IV.3, it was mentioned how the toroidal streaming vortex of the membrane trap helps to load the trap. Given that $\bm{F}^{\mathrm{grav}}$ points upward, a further improvement of the loading process can be achieved by placing the membrane trap above the liquid instead of below. In this case, cells would sediment upward toward the membrane, even when the acoustics is turned off. By subsequent actuation of the acoustics, the cells would be in a good position to be carried into the trap by the toroidal vortex. A further advantage of the inverted trap geometry is that trapping can be obtained at lower excitation voltages and for smaller particles due to a favorable interplay between $\bm{F}^{\mathrm{grav}}$ and $\bm{F}^{\mathrm{drag}}$ [66]. Turning the trap described in Section IV.2 upside down reduces the critical trapping radius $a_{\mathrm{cr}}$ by 30%.

Reduction of the radial force barrier. If the repelling force barrier surrounding the trap region poses a problem, as mentioned in Section IV.2, it would be possible to reduce and perhaps even remove it by careful further tuning of the liquid. As studied by Qiu et al. [64], a range of carrier fluids with different acoustic properties can be created using solutions of Ficoll or iodixanol. In Fig. 8 we show the result of a purely theoretical computation of $\bm{F}^{\mathrm{pt}}$ for the 60% iodixanol solution of Fig. 6(c,d), where we assume it possible to keep all parameters constant while lowering the density from its actual value, $\rho_{\mathrm{fl}}^{\mathrm{Idx,}60\%}=1320~{}\textrm{kg}\>\textrm{m${}^{-3}$}$, to match that of the MCF-7 cell, $\rho_{\mathrm{MCF\text{-}7}}=1055~{}\textrm{kg}\>\textrm{m${}^{-3}$}$, and even lower to $930~{}\textrm{kg}\>\textrm{m${}^{-3}$}$.
Were it possible to find a liquid with these parameters, we see in Fig. 8(a,c) how the force barrier is strongly reduced in the neutrally buoyant case, and in Fig. 8(b,d) how it nearly vanishes in the low-density case.

Figure 8: Color plots of $F^{\mathrm{pt}}_{r}$ and $F^{\mathrm{pt}}_{z}$ in a 60% iodixanol solution at 18.5 MHz and $1~{}\textrm{V}_{\textrm{pp}}$ as in Fig. 6(c,d), but assuming theoretically a reduction in the density from its actual value $\rho_{\mathrm{fl}}^{\mathrm{Idx,}60\%}=1320~{}\textrm{kg}\>\textrm{m${}^{-3}$}$, while keeping all other parameters constant. (a) The neutrally buoyant case $\rho_{\mathrm{fl}}^{\mathrm{Idx,}60\%}=\rho_{\mathrm{MCF\text{-}7}}=1055~{}\textrm{kg}\>\textrm{m${}^{-3}$}$. (b) A low-density case $\rho_{\mathrm{fl}}^{\mathrm{Idx,}60\%}=930~{}\textrm{kg}\>\textrm{m${}^{-3}$}$.

Creating an axial pressure node by breaking the axisymmetry. The anti-trapping of cells in water shown in Fig. 6(a,b) is removed in Bessel-beam traps by breaking the axisymmetry and creating a pressure node along the $z$-axis [44]. It would be interesting to study whether a similar effect could be achieved in the membrane trap. It may be done by segmenting the excitation electrode in both the radial and azimuthal directions, and then exciting a rotating higher-harmonic mode by running the excitation voltage with appropriate phase shifts in the azimuthal direction, as in Ref. [67].

Sign change of the radiation force for shorter wavelengths. The anti-trapping of the membrane trap may be circumvented in a different way by working in the regime where the acoustic wavelength $\lambda_{0}$ is comparable to the cell radius $a$. In this regime, diffraction effects may change the sign of the radiation force [68]. A recent numerical study on a suspended white blood cell has shown such a sign reversal to occur for $a\approx 0.4\lambda_{0}$ [69]. This points to a way of getting the membrane trap to work without tuning of the suspension medium, and it would perhaps therefore also work for a standard isotonic saline solution.

Trapping of smaller-sized particles. Given the relatively strong bulk-driven Eckart streaming in the membrane trap, it is not easy to reduce the critical radius from the above-mentioned value of $a_{\mathrm{cr}}=8.5~{}\textrm{\textmu{}m}$. Several of the methods proposed in the literature are for systems dominated by boundary-driven Rayleigh streaming [70, 53, 71]. Perhaps only the method of trapping on large seed particles [46] offers a way to trap smaller particles in the membrane trap.

## VI Conclusion

In this paper, we have shown in a numerical study how the concept of selective and efficient excitation of higher-harmonic disk membrane modes can be applied to acoustofluidic systems. The physical mechanism is based on the actuation of 1-µm-thick piezoelectric $\textrm{Al}_{0.6}\textrm{Sc}_{0.4}\textrm{N}$ thin-film transducers with excitation electrodes patterned to match the out-of-phase strain pattern in the transducer at a specific mode. As a proof of concept, we have shown that the $n=3$ higher-harmonic mode in an axisymmetric 200-µm-diameter, 10-µm-thick silicon membrane, actuated at a frequency of 52 MHz, emits pressure waves from each antinode, which results in interference patterns in the pressure field in a liquid placed above the membrane. When the wavelength in the liquid is smaller than the node spacing of the membrane mode, the interference pattern in the liquid can create a global pressure maximum at some distance above the membrane.
Our main example demonstrated how the global pressure maximum can be associated with the appearance of a single-cell trap, if the suspending liquid is tuned such that the cells have a negative acoustic contrast factor. In our model, this tuning was achieved by using a 60% iodixanol solution. To lower the operating frequency and obtain a wavelength larger than the cell, the $n=10$ higher-harmonic mode in a 1000-µm-diameter, 10-µm-thick membrane was excited at 18.5 MHz using appropriately patterned excitation electrodes. We showed numerically that MCF-7 cancer cells could be trapped approximately 70 µm above the center of the membrane. Several aspects of the trap were discussed: arrays of traps, an inverted setup, reduction of the radial force barrier in the trap, the creation of an axial pressure node, and the sign change of the radiation force at shorter wavelengths. Our analysis demonstrates that higher-harmonic MHz acoustic modes can be excited selectively and efficiently in acoustofluidic systems by appropriate patterning of the electrodes of thin-film transducers. Such modes may prove useful in practical applications, such as the single-cell trap analyzed in our main example.

## Acknowledgements

This work was supported by the BioWings project funded by the European Union’s Horizon 2020 Future and Emerging Technologies (FET) programme, grant No. 801267.

## References

* Reichert _et al._ [2018] P. Reichert, D. Deshmukh, L. Lebovitz, and J. Dual, Thin film piezoelectrics for bulk acoustic wave (BAW) acoustophoresis, Lab Chip 18, 3655 (2018).
* Steckel _et al._ [2021] A. G. Steckel, H. Bruus, P. Muralt, and R. Matloub, Fabrication, characterization, and simulation of glass devices with $\mathrm{Al}\mathrm{N}$ thin-film transducers for excitation of ultrasound resonances, Phys. Rev. Applied 16, 014014 (2021).
* Steckel and Bruus [2021] A. G. Steckel and H. Bruus, Numerical study of bulk acoustofluidic devices driven by thin-film transducers and whole-system resonance modes, J. Acoust. Soc. Am. 150, 634 (2021).
* Brenner _et al._ [2019] K. Brenner, A. S. Ergun, K. Firouzi, M. F. Rasmussen, Q. Stedman, and B. P. Khuri-Yakub, Advances in capacitive micromachined ultrasonic transducers, Micromachines 10, 152 (27 pp) (2019).
* Cui _et al._ [2016] W. Cui, H. Zhang, H. Zhang, Y. Yang, M. He, H. Qu, W. Pang, D. Zhang, and X. Duan, Localized ultrahigh frequency acoustic fields induced micro-vortices for submilliseconds microfluidic mixing, Appl. Phys. Lett. 109, 253503 (6 pp) (2016).
* Cui _et al._ [2019] W. Cui, L. Mu, X. Duan, W. Pang, and M. A. Reed, Trapping of sub-100 nm nanoparticles using gigahertz acoustofluidic tweezers for biosensing applications, Nanoscale 11, 14625 (2019).
* Qian _et al._ [2021] J. Qian, R. Yang, H. Begum, and J. E.-Y. Lee, Reconfigurable acoustofluidic manipulation of particles in ring-like rich patterns enabled on a bulk micromachined silicon chip, in _2021 21st International Conference on Solid-State Sensors, Actuators and Microsystems (Transducers)_ (IEEE, 2021) pp. 365–368.
* Delsing _et al._ [2019] P. Delsing, A. N. Cleland, M. J. A. Schuetz, J. Knoerzer, G. Giedke, J. I. Cirac, K. Srinivasan, M. Wu, K. C. Balram, C. Bauerle, T. Meunier, C. J. B. Ford, P. V. Santos, E. Cerda-Mendez, H. Wang, H. J. Krenner, E. D. S. Nysten, M. Weiss, G. R. Nash, L. Thevenard, C. Gourdon, P. Rovillain, M. Marangolo, J.-Y. Duquesne, G. Fischerauer, W. Ruile, A. Reiner, B. Paschke, D. Denysenko, D. Volkmer, A. Wixforth, H. Bruus, M. Wiklund, J. Reboud, J. M. Cooper,
Y. Fu, M. S. Brugger, F. Rehfeldt, and C. Westerhausen, The 2019 surface acoustic waves roadmap, J. Phys. D Appl. Phys. 52, 353001 (40 pp) (2019).
* Bodé _et al._ [2020] W. N. Bodé, L. Jiang, T. Laurell, and H. Bruus, Microparticle acoustophoresis in aluminum-based acoustofluidic devices with PDMS covers, Micromachines 11, 292 (2020).
* Lickert _et al._ [2021] F. Lickert, M. Ohlin, H. Bruus, and P. Ohlsson, Acoustophoresis in polymer-based microfluidic devices: Modeling and experimental validation, J. Acoust. Soc. Am. 149, 4281 (2021).
* Bora and Shusteff [2015] M. Bora and M. Shusteff, Efficient coupling of acoustic modes in microfluidic channel devices, Lab Chip 15, 3192 (2015).
* Moiseyenko and Bruus [2019] R. P. Moiseyenko and H. Bruus, Whole-system ultrasound resonances as the basis for acoustophoresis in all-polymer microfluidic devices, Phys. Rev. Applied 11, 014014 (2019).
* Tahmasebipour _et al._ [2020] A. Tahmasebipour, L. Friedrich, M. Begley, H. Bruus, and C. Meinhart, Toward optimal acoustophoretic microparticle manipulation by exploiting asymmetry, J. Acoust. Soc. Am. 148, 359 (2020).
* Hammarström _et al._ [2021] B. Hammarström, N. R. Skov, K. Olofsson, H. Bruus, and M. Wiklund, Acoustic trapping based on surface displacement of resonance modes, J. Acoust. Soc. Am. 149, 1445 (2021).
* Wein _et al._ [2013] F. Wein, M. Kaltenbacher, and M. Stingl, Topology optimization of a cantilevered piezoelectric energy harvester using stress norm constraints, Struct. Multidiscip. O. 48, 173 (2013).
* Du _et al._ [2017] S. Du, Y. Jia, S.-T. Chen, C. Zhao, B. Sun, E. Arroyo, and A. A. Seshia, A new electrode design method in piezoelectric vibration energy harvesters to maximize output power, Sensor. Actuat. A-Phys. 263, 693 (2017).
* Fu _et al._ [2018] H. Fu, G. Chen, and N. Bai, Electrode coverage optimization for piezoelectric energy harvesting from tip excitation, Sensors 18 (2018), 10.3390/s18030804.
* Yang _et al._ [2018] Z. Yang, S. Zhou, J. Zu, and D. Inman, High-performance piezoelectric energy harvesters and their applications, Joule 2, 642 (2018).
* Luo _et al._ [2021] A. Luo, Y. Zhang, X. Guo, Y. Lu, C. Lee, and F. Wang, Optimization of MEMS vibration energy harvester with perforated electrode, J. Microelectromech. S. 30, 299 (2021).
* Pulskamp _et al._ [2012] J. S. Pulskamp, S. S. Bedair, R. G. Polcawich, G. L. Smith, J. Martin, B. Power, and S. A. Bhave, Electrode-shaping for the excitation and detection of permitted arbitrary modes in arbitrary geometries in piezoelectric resonators, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 59, 1043 (2012).
* Com [2019] COMSOL Multiphysics 5.5 (2019), http://www.comsol.com.
* Skov _et al._ [2019a] N. R. Skov, J. S. Bach, B. G. Winckelmann, and H. Bruus, 3D modeling of acoustofluidics in a liquid-filled cavity including streaming, viscous boundary layers, surrounding solids, and a piezoelectric transducer, AIMS Mathematics 4, 99 (2019a).
* Bach and Bruus [2018] J. S. Bach and H. Bruus, Theory of pressure acoustics with viscous boundary layers and streaming in curved elastic cavities, J. Acoust. Soc. Am. 144, 766 (2018).
* Hopcroft _et al._ [2010] M. A. Hopcroft, W. D. Nix, and T. W. Kenny, What is the Young’s modulus of silicon? J. Microelectromech. Syst. 19, 229 (2010).
* Thomsen _et al._ [2014] E. V. Thomsen, K. Reck, G. Skands, C. Bertelsen, and O. Hansen, Silicon as an anisotropic mechanical material: Deflection of thin crystalline plates, Sensors and Actuators A: Physical 220, 347 (2014).
* Trolier-McKinstry and Muralt [2004] S. Trolier-McKinstry and P. Muralt, Thin film piezoelectrics for MEMS, Journal of Electroceramics 12, 7 (2004).
* Settnes and Bruus [2011] M. Settnes and H. Bruus, Theoretical analysis of viscous corrections to the acoustic radiation force on cells in microchannel acoustophoresis, in _Proc. 15th MicroTAS, 2 - 6 October 2011, Seattle (WA), USA_, edited by J. Landers, A. Herr, D. Juncker, N. Pamme, and J. Bienvenue (CBMS, 2011) pp. 160–162.
* Ley and Bruus [2017] M. W. H. Ley and H. Bruus, Three-dimensional numerical modeling of acoustic trapping in glass capillaries, Phys. Rev. Applied 8, 024020 (2017).
* Muller and Bruus [2014] P. B. Muller and H. Bruus, Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels, Phys. Rev. E 90, 043016 (2014).
* Augustsson _et al._ [2016] P. Augustsson, J. T. Karlsen, H.-W. Su, H. Bruus, and J. Voldman, Iso-acoustic focusing of cells for size-insensitive acousto-mechanical phenotyping, Nat. Commun. 7, 11556 (2016).
* An _et al._ [2009] J. An, J. Lee, S. H. Lee, J. Park, and B. Kim, Separation of malignant human breast cancer epithelial cells from healthy epithelial cells using an advanced dielectrophoresis-activated cell sorter (DACS), Analytical and Bioanalytical Chemistry 394, 801 (2009).
* Cushing _et al._ [2017] K. W. Cushing, F. Garofalo, C. Magnusson, L. Ekblad, H. Bruus, and T. Laurell, Ultrasound characterization of microbead and cell suspensions by speed of sound measurements of neutrally buoyant samples, Anal. Chem. 89, 8917 (2017).
* Caro _et al._ [2015] M. A. Caro, S. Zhang, T. Riekkinen, M. Ylilammi, M. A. Moram, O. Lopez-Acevedo, J. Molarius, and T. Laurila, Piezoelectric coefficients and spontaneous polarization of ScAlN, J. Phys.-Condens. Mat. 27, 245901 (2015).
* Olsson _et al._ [2020] R. H. Olsson, Z. Tang, and M. D’Agati, Doping of aluminum nitride and the impact on thin film piezoelectric and ferroelectric device performance, in _2020 IEEE Custom Integrated Circuits Conference (CICC)_ (2020) pp. 1–6.
* Kim _et al._ [2001] J. Kim, D.-i. D. Cho, and R. S. Muller, Why is (111) silicon a better mechanical material for MEMS? in _Transducers 01 Eurosensors XV_ (Springer, 2001) pp. 662–665.
* Hahn and Dual [2015] P. Hahn and J. Dual, A numerically efficient damping model for acoustic resonances in microfluidic cavities, Physics of Fluids 27, 062005 (2015).
* Note [1] See Supplemental Material at http://bruus-lab.dk/files/Steckel_membrane_Suppl.zip for animations of $p_{1}$ and $\bm{u}_{1}$ in Fig. 2(a), Fig. 2(c), Fig. 3(a), Fig. 4, and Fig. 5.
* Skov _et al._ [2019b] N. R. Skov, P. Sehgal, B. J. Kirby, and H. Bruus, Three-dimensional numerical modeling of surface-acoustic-wave devices: Acoustophoresis of micro- and nanoparticles including streaming, Phys. Rev. Applied 12, 044028 (2019b).
* Baresch _et al._ [2016] D. Baresch, J.-L. Thomas, and R. Marchiano, Observation of a single-beam gradient force acoustical trap for elastic particles: Acoustical tweezers, Phys. Rev. Lett. 116, 024301 (2016).
* Karlsen and Bruus [2017] J. T. Karlsen and H. Bruus, Acoustic tweezing and patterning of concentration fields in microfluidics, Phys. Rev. Applied 7, 034017 (2017).
* Riaud _et al._ [2017] A. Riaud, M. Baudoin, O. Bou Matar, L. Becerra, and J.-L. Thomas, Selective manipulation of microscopic particles with precursor swirling Rayleigh waves, Phys. Rev. Applied 7, 024007 (2017).
* Baudoin _et al._ [2019] M. Baudoin, J.-C. Gerbedoen, A. Riaud, O. B. Matar,
N. Smagin, and J.-L. Thomas, Folding a focalized acoustical vortex on a flat holographic transducer: miniaturized selective acoustical tweezers, Science Advances 5, eaav1967 (2019).
* Gong and Baudoin [2019] Z. Gong and M. Baudoin, Particle assembly with synchronized acoustic tweezers, Phys. Rev. Applied 12, 024045 (2019).
* Baudoin and Thomas [2020] M. Baudoin and J.-L. Thomas, Acoustic tweezers for particle and fluid micromanipulation, Annual Review of Fluid Mechanics 52, 205 (2020).
* Hammarström _et al._ [2010] B. Hammarström, M. Evander, H. Barbeau, M. Bruzelius, J. Larsson, T. Laurell, and J. Nilsson, Non-contact acoustic cell trapping in disposable glass capillaries, Lab Chip 10, 2251 (2010).
* Hammarström _et al._ [2014] B. Hammarström, B. Nilson, T. Laurell, J. Nilsson, and S. Ekström, Acoustic trapping for bacteria identification in positive blood cultures with MALDI-TOF MS, Anal. Chem. 86, 10560 (2014).
* Evander _et al._ [2015] M. Evander, O. Gidlof, B. Olde, D. Erlinge, and T. Laurell, Non-contact acoustic capture of microparticles from small plasma volumes, Lab Chip 15, 2588 (2015).
* Hammarström _et al._ [2012] B. Hammarström, T. Laurell, and J. Nilsson, Seed particle enabled acoustic trapping of bacteria and nanoparticles in continuous flow systems, Lab Chip 12, 4296 (2012).
* Hammarström _et al._ [2014] B. Hammarström, M. Evander, J. Wahlström, and J. Nilsson, Frequency tracking in acoustic trapping for improved performance stability and system surveillance, Lab Chip 14, 1005 (2014).
* Olofsson _et al._ [2021] K. Olofsson, V. Carannante, M. Takai, B. Onfelt, and M. Wiklund, Ultrasound-based scaffold-free core-shell multicellular tumor spheroid formation, Micromachines 12, 329 (2021).
* Hagsäter _et al._ [2008] S. M. Hagsäter, A. Lenshof, P. Skafte-Pedersen, J. P. Kutter, T. Laurell, and H. Bruus, Acoustic resonances in straight micro channels: Beyond the 1D-approximation, Lab Chip 8, 1178 (2008).
* Augustsson _et al._ [2011] P. Augustsson, R. Barnkob, S. T. Wereley, H. Bruus, and T. Laurell, Automated and temperature-controlled micro-PIV measurements enabling long-term-stable microchannel acoustophoresis characterization, Lab Chip 11, 4152 (2011).
* Bach and Bruus [2020] J. S. Bach and H. Bruus, Suppression of acoustic streaming in shape-optimized channels, Phys. Rev. Lett. 124, 214501 (2020).
* Shi _et al._ [2009] J. Shi, D. Ahmed, X. Mao, S.-C. S. Lin, A. Lawit, and T. J. Huang, Acoustic tweezers: patterning cells and microparticles using standing surface acoustic waves (SSAW), Lab Chip 9, 2890 (2009).
* Collins _et al._ [2016] D. J. Collins, A. Neild, and Y. Ai, Highly focused high-frequency travelling surface acoustic waves (SAW) for rapid single-particle sorting, Lab Chip 16, 471 (2016).
* Gascoyne _et al._ [2002] P. Gascoyne, C. Mahidol, M. Ruchirawat, J. Satayavivad, P. Watcharasit, and F. F. Becker, Microsample preparation by dielectrophoresis: isolation of malaria, Lab Chip 2, 70 (2002).
* Nilsson _et al._ [2009] J. Nilsson, M. Evander, B. Hammarström, and T. Laurell, Review of cell and particle trapping in microfluidic systems, Analytica Chimica Acta 649, 141 (2009).
* Gustafson _et al._ [2021] K. T. Gustafson, K. T. Huynh, D. Heineck, J. Bueno, A. Modestino, S. Kim, A. Gower, R. Armstrong, C. E. Schutt, and S. D. Ibsen, Automated fluorescence quantification of extracellular vesicles collected from blood plasma using dielectrophoresis, Lab Chip 21 (2021), 10.1039/d0lc00940g.
* Li _et al._ [2015] P. Li, Z. Mao, Z. Peng, L. Zhou, Y. Chen, P.-H. Huang,
C. I. Truica, J. J. Drabick, W. S. El-Deiry, M. Dao, S. Suresh, and T. J. Huang, Acoustic separation of circulating tumor cells, Proc. Natl. Acad. Sci. U.S.A. 112, 4970 (2015).
* Low and Abas [2015] W. S. Low and W. A. B. W. Abas, Benchtop technologies for circulating tumor cells separation based on biophysical properties, Biomed. Res. Int. 2015, 239362 (2015).
* Olofsson _et al._ [2020] K. Olofsson, B. Hammarstrom, and M. Wiklund, Acoustic separation of living and dead cells using high density medium, Lab Chip 20, 1981 (2020).
* Karlsen _et al._ [2016] J. T. Karlsen, P. Augustsson, and H. Bruus, Acoustic force density acting on inhomogeneous fluids in acoustic fields, Phys. Rev. Lett. 117, 114504 (2016).
* Karlsen _et al._ [2018] J. T. Karlsen, W. Qiu, P. Augustsson, and H. Bruus, Acoustic streaming and its suppression in inhomogeneous fluids, Phys. Rev. Lett. 120, 054501 (2018).
* Qiu _et al._ [2019] W. Qiu, J. T. Karlsen, H. Bruus, and P. Augustsson, Experimental characterization of acoustic streaming in gradients of density and compressibility, Phys. Rev. Appl. 11, 024018 (2019).
* Muller _et al._ [2012] P. B. Muller, R. Barnkob, M. J. H. Jensen, and H. Bruus, A numerical study of microparticle acoustophoresis driven by acoustic radiation forces and streaming-induced drag forces, Lab Chip 12, 4617 (2012).
* Li _et al._ [2021] J. Li, A. Crivoi, X. Peng, L. Shen, Y. Pu, Z. Fan, and S. A. Cummer, Three dimensional acoustic tweezers with vortex streaming, Communications Physics 4, 1 (2021).
* Tran _et al._ [2012] S. B. Q. Tran, P. Marmottant, and P. Thibault, Fast acoustic tweezers for the two-dimensional manipulation of individual particles in microfluidic channels, Appl. Phys. Lett. 101, 114103 (2012).
* Hasegawa [1977] T. Hasegawa, Comparison of two solutions for acoustic radiation pressure on a sphere, J. Acoust. Soc. Am. 61, 1445 (1977).
* Habibi _et al._ [2017] R. Habibi, C. Devendran, and A. Neild, Trapping and patterning of large particles and cells in a 1D ultrasonic standing wave, Lab Chip 17, 3279 (2017).
* Qiu _et al._ [2020] W. Qiu, H. Bruus, and P. Augustsson, Particle-size-dependent acoustophoretic motion and depletion of micro- and nano-particles at long timescales, Phys. Rev. E 102, 013108 (2020).
* Winckelmann and Bruus [2021] B. Winckelmann and H. Bruus, Theory and simulation of electroosmotic suppression of acoustic streaming, J. Acoust. Soc. Am. 149, 3917 (2021).
# Super-Resolved Image of M87 Observed with East Asian VLBI Network

Fumie Tazaki 1, Yuzhu Cui 2,3, Kazuhiro Hada 4, Motoki Kino 4,5, Ilje Cho 6, Guang-Yao Zhao 6, Kazunori Akiyama 7,8,4, Yosuke Mizuno 2,9,10, Hyunwook Ro 11,12, Mareki Honma 4,13,14, Ru-Sen Lu 15,16,17, Zhi-Qiang Shen 15,16, Lang Cui 18, and Yoshinori Yonekura 19

1 Tokyo Electron Technology Solutions Limited, Iwate 023-1101, Japan <EMAIL_ADDRESS>
2 Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 201210, China
3 Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China
4 Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, Iwate 023-0861, Japan
5 Academic Support Center, Kogakuin University of Technology and Engineering, Tokyo 192-0015, Japan
6 Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
7 Massachusetts Institute of Technology Haystack Observatory, Westford, MA 01886, USA
8 Black Hole Initiative, Harvard University, Cambridge, MA 02138, USA
9 School of Physics & Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
10 Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
11 Department of Astronomy, Yonsei University, Seodaemun-gu, Seoul 03722, Republic of Korea
12 Korea Astronomy and Space Science Institute, Yuseong-gu, Daejeon 34055, Republic of Korea
13 Department of Astronomical Science, The Graduate University for Advanced Studies, SOKENDAI, Tokyo 181-8588, Japan
14 Institute of Astronomy, The University of Tokyo, Tokyo 181-0015, Japan
15 Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
16 Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Nanjing 210008, China
17 Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
18 Xinjiang Astronomical Observatory, CAS, 150 Science 1-Street, Urumqi 830011, China
19 Center for Astronomy, Ibaraki University, Ibaraki 310-8512, Japan

###### Abstract

Obtaining high-resolution images at centimeter-or-longer wavelengths is vital for understanding the physics of jets. We reconstructed images from the M87 22 GHz data observed with the East Asian VLBI Network (EAVN) by using the regularized maximum likelihood (RML) method, which is different from the conventional imaging method CLEAN. Consequently, a bright core and a jet extending about 30 mas to the northwest were detected with a higher resolution than in the CLEAN image. The width of the jet was 0.5 mas at 0.3 mas from the core, consistent with the width measured in the 86 GHz image in a previous study. In addition, three ridges could be detected at around 8 mas from the core, even though their peak-to-peak separation was only 1.0 mas. This indicates that the spatial resolution of the RML image is at least 30% higher than that of the CLEAN image. This study is an important step for future multi-frequency and high-cadence observations with the EAVN to discuss the more detailed structure of the jet and its time variability.

_Keywords_ active galactic nuclei; jet; very-long-baseline interferometry; M87; imaging

## 1 Introduction

M87, located at 16.8 Mpc from Earth in the constellation Virgo, is a giant elliptical galaxy with a super-massive black hole of $6.5\times 10^{9}\,M_{\odot}$ at its center (EHTC et al., 2019).
This proximity to such a huge black hole allows for a fine linear resolution: 1 milliarcsecond (mas) = 0.08 pc = 130 Schwarzschild radii ($R_{\rm s}$). The relativistic jet erupts from the central core, emitting radiation ranging from radio to X-rays and gamma rays (EHT MWL Science Working Group et al., 2021). Even considering only the radio band, the jet’s appearance is quite different depending on the wavelength. With VLBI at centimeter-or-longer wavelengths, the M87 jet has been observed extending to $\sim$900 mas at 150 mm (Hada et al., 2012) and $\sim$20 mas at 13 mm (Hada et al., 2017); however, as the wavelength becomes shorter, the downstream part of the jet becomes less visible, extending only to $\sim$10 mas at 7 mm (Walker et al., 2018) and $\sim$3 mas at 3.5 mm (Hada et al., 2016; Kim et al., 2018). The 1.3 mm data observed with the EHT imaged the ring-like structure in the immediate vicinity of the black hole (EHTC et al., 2019). The extended jet at the 1.3 mm waveband is so faint that it was not detected in the 2017 EHT observation. However, by improving short-baseline coverage, the dynamic range of the image can be increased, and the faint, extended jet can be recovered. How far downstream the jet can be detected depends on the sensitivity of the telescope, but there is no doubt that observations at centimeter-or-longer wavelengths are still more suitable for studying the downstream part of the jet. It is, however, more difficult to obtain high-resolution images downstream of a jet than upstream, because the downstream jet is bright only at longer wavelengths, whereas shorter wavelengths give a higher spatial resolution for the same telescope aperture. One method for obtaining high-resolution images of the downstream part of a jet is space VLBI. By connecting ground and space telescopes, a VLBI array with very long baselines can be constructed. This effectively means that it plays the same role as a giant telescope that is larger than the Earth. For example, the VLBI Space Observatory Programme (VSOP) satellite, led by the Institute of Space and Astronautical Science (ISAS) in JAXA in collaboration with the National Astronomical Observatory of Japan (NAOJ), was launched in 1997 and renamed HALCA; it carries a radio telescope with an 8-meter diameter (Hirabayashi et al., 2000). High-spatial-resolution observations at 1.6 GHz and 5 GHz using VSOP successfully resolved the jet into three ridges at the mas scale (Asada et al., 2016). Investigating how the complex internal structure of the jet evolves over time is critical to understanding the jet’s physics. However, as space VLBI observation time is limited, it is difficult to conduct studies that closely monitor the jet and reveal component motions in detail. Therefore, to image the jet’s fine structure further from the central region, it is necessary to observe the jet at centimeter-or-longer wavelengths, where the jet is the brightest, and to reconstruct a high-resolution image. The regularized maximum likelihood (RML) method applied in this study is an imaging technique for interferometric data that was primarily developed for imaging the data obtained with the Event Horizon Telescope (EHT). While CLEAN, a conventional method, creates an image model after constructing a dirty map, resulting in images with resolutions limited by the beam size, RML directly constructs an image model that fits the observed data, yielding resolutions that can exceed the nominal diffraction limit determined by the _uv_ coverage.
As the ideal spatial resolution of VLBI is several times smaller than the beam size (Honma et al., 2014), the use of RML may yield images with a spatial resolution several times higher than that of CLEAN images. The East Asian VLBI Network (EAVN) is a VLBI system in East Asia and currently consists of up to 16 telescopes with a longest baseline of 5078 km (Akiyama et al., 2022). Three science working groups (SWGs) are operated in the EAVN, covering active galactic nuclei (AGNs), evolved stars, and star-forming regions; as a target of the AGN SWG, M87 has been monitored continuously since 2017 at 13 and 7 mm wavelengths (Cui et al., 2021). This is a major monitoring project that has been carried out since the era of the KVN and VERA Array (KaVA; a VLBI network in Japan and Korea; Niinuma et al., 2014), and the results of KaVA’s M87 observations are summarized in Park et al. (2019), which found where the jet accelerated and transitioned from subluminal to superluminal speeds. These high-cadence monitoring observations provide precise measurements of jet motion, which is an important step toward understanding the physics and evolution of jets.

## 2 Observations and Data Reductions

The data treated in this paper were observed with the EAVN on 18 March 2017 and have already been published in Cui et al. (2021). That paper presents observations of four AGNs in two frequency bands of the EAVN, namely, 22 GHz and 43 GHz. The authors used the Astronomical Image Processing System (AIPS; Greisen, 2003) provided by the National Radio Astronomy Observatory (NRAO) for an initial calibration of the complex visibility. After that, CLEAN imaging and self-calibration were performed on the calibrated data using the DIFMAP software (Shepherd et al., 1994). In contrast, the present paper treats only the M87 22 GHz observation, which is imaged with a different imaging method (see Section 3 for the details), starting from the initially calibrated data prior to imaging. In this section, a brief overview of the observational data used in this study is given. The telescopes participating in this observation were the KVN and VERA Array (KaVA), the Tianma 65 m Radio Telescope (TMRT), the Nanshan 26 m Radio Telescope (NSRT), and Hitachi (HIT). VERA consists of four stations in Japan, namely, Mizusawa (MIZ), Iriki (IRK), Ogasawara (OGA), and Ishigakijima (ISG). KVN stands for Korean VLBI Network, which has three stations, namely, Yonsei (KYS), Ulsan (KUS), and Tamna (KTN), though KYS was unable to participate in this observation due to an issue at the site. See Table 1 in Cui et al. (2021) for the specifications of each participating telescope. Figure 1 shows that NSRT contributes to filling the outer part of the _uv_ plane. The longest baseline is 5100 km between NSRT and Ogasawara, which corresponds to the smallest synthesized beam size of 0.6 mas in the northwest–southeast direction (checked in the short sketch below).

Figure 1: The _uv_ coverage for the M87 session. Red curves indicate baselines related to NSRT.

M87 was observed along with 3C 273, 1219+044, and M84. Within the 7-hour session, M87 was observed for 6 scans at 47 min per scan. The recording rate was 1 Gbps (2-bit sampling), where a total bandwidth of 256 MHz was divided into eight 32 MHz intermediate-frequency (IF) bands. Only left-hand circular polarization was received. All the data were correlated at the Daejeon hardware correlator installed at the Korea Astronomy and Space Science Institute (KASI).
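As a quick consistency check of the quoted beam size, the fringe spacing of the longest baseline can be computed directly; the following sketch is simple diffraction-limit arithmetic, not part of the actual data reduction:

```python
# Fringe spacing lambda/D for the longest EAVN baseline at 22 GHz
c = 299792458.0      # speed of light [m/s]
nu = 22e9            # observing frequency [Hz]
D = 5100e3           # longest baseline, NSRT-Ogasawara [m]

theta_rad = (c/nu)/D                              # fringe spacing [rad]
theta_mas = theta_rad*(180/3.141592653589793)*3600*1e3
print(f"lambda/D = {theta_mas:.2f} mas")          # -> 0.55 mas, i.e., ~0.6 mas
```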
The initial calibration of visibility amplitude, bandpass, and phase was performed in the standard manner by using the AIPS software package.

## 3 Imaging

We reconstructed M87 images from the EAVN data set with the regularized maximum likelihood (RML) method implemented in the Sparse Modeling Imaging Library for Interferometry (SMILI; Akiyama et al., 2017a, b), which was developed primarily for imaging EHT data (EHTC et al., 2019). A VLBI observation samples the _uv_ plane only partially, depending on the telescope placement and the position of the source, as shown in Figure 1. To obtain an image from these data, the conventional method CLEAN draws a dirty map by an inverse Fourier transform, with zeros in the empty parts of the _uv_ plane, and reconstructs a set of point-source models from the dirty map. RML, on the other hand, uses regularization to ensure a plausible image consistent with the data without performing an inverse Fourier transform. In this imaging, we use the regularization terms of the weighted-L1 (wL1), total variation (TV), total squared variation (TSV), and maximum entropy method (MEM) (see Appendix A in EHTC et al., 2019, for the definition of each term), which describe assumptions about the source structure, such as that only limited regions in the field of view have brightness or that the source structure is smooth (schematic implementations of these terms are given in the sketch below). Each term contains a hyperparameter, which adjusts the relative weighting of the regularization term to the data. If the weight of the regularization term is too strong, the image will be inconsistent with the data. On the other hand, if the weight of the data is too strong, the image will not reflect the features expressed by the regularization term. This approach allows the production of dirty-beam-free images with RML, which can recover finer structures than CLEAN. We used visibility amplitudes, log closure amplitudes, and closure phases for the image reconstruction. The visibility amplitude of TMRT shows some offsets compared with that of KaVA. Due to the uncertainty of the a priori calibration, the visibility amplitudes of NSRT and HIT are very low compared with KaVA. These systematic errors in visibility amplitude do not affect the log closure amplitudes because they cancel out in the process of calculating the log closure amplitudes. Therefore, the baselines including TMRT, NSRT, and HIT are excluded only from the visibility amplitudes for the first imaging, and they are included again after the visibility amplitudes are corrected by self-calibration. wL1 regularization applies a pixel-intensity penalty so that the dark areas in a prior image are also dark in the restored image, reducing the noise of the background region. The prior image is obtained by convolving the CLEAN map with a 1.5 mas circular Gaussian. Since only loose constraints on the intensity distribution of the core and jet are required, we convolved the map with a large-enough circular Gaussian; therefore, our images are not affected by the detailed CLEAN model. We applied a total flux of 2.0 Jy based on the CLEAN results. The image properties are set to a pixel size of 80 $\mu$as and a field of view of 41 mas $\times$ 20 mas, referring to Figure 9 of Cui et al. (2021). In addition, the image window, in which the intensity is calculated during imaging, was set as shown by the white circles in Figure 2. To remove noise within the image window but outside of the source structure, the brightness outside of the yellow-circle regions in Figure 2 was set to zero at the end of each imaging run.
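To make the role of the four regularizers concrete, the following NumPy sketch implements them in the form given in Appendix A of EHTC et al. (2019). It is a schematic illustration only, not the actual SMILI implementation; the small constant `eps` is an assumption introduced here to keep the weights and logarithms finite.

```python
import numpy as np

def wl1(I, prior, eps=1e-10):
    """Weighted-L1: penalizes brightness where the prior image is dark."""
    return np.sum(np.abs(I)/(np.abs(prior) + eps))

def tv(I):
    """Total variation: favors piecewise-flat images while allowing edges."""
    dx = np.diff(I, axis=1, append=I[:, -1:])
    dy = np.diff(I, axis=0, append=I[-1:, :])
    return np.sum(np.sqrt(dx**2 + dy**2))

def tsv(I):
    """Total squared variation: favors smooth, edge-free images."""
    dx = np.diff(I, axis=1, append=I[:, -1:])
    dy = np.diff(I, axis=0, append=I[-1:, :])
    return np.sum(dx**2 + dy**2)

def mem(I, prior, eps=1e-10):
    """Entropy of the image measured relative to the prior image."""
    p = np.abs(I) + eps
    q = np.abs(prior) + eps
    return np.sum(p*np.log(p/q))

# RML minimizes chi^2(data terms) + sum_k lambda_k * regularizer_k(I) over
# the pixel values I; e.g., with the first-step weights used in this section:
I = np.random.rand(64, 64)       # stand-in image
prior = np.ones((64, 64))        # stand-in prior image
J_reg = 1.0*wl1(I, prior) + 10.0*tv(I) + 10.0*tsv(I) + 1e-4*mem(I, prior)
```

Here the data terms would be the chi-squared statistics of the visibility amplitudes, log closure amplitudes, and closure phases listed above.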
As the jet’s structure is restored in an area much smaller than these windows, it can be assumed that the window settings are appropriate and that the subsequent image reconstruction is successful.

Figure 2: The averaged image of the 48 final images (see Figure 3) and the regions of the window setting. White circles represent areas where pixel values are restored during image reconstruction. Flux densities generated outside the yellow circles by the image reconstruction are considered noise, and their pixel values are set to zero.

Figure 3: The top panel shows the averaged image of the final 48 images. The bottom panel shows 3 images taken from the total of 48 images as examples of how much the selected 48 images differ from one another.

In the first step, after excluding the HIT, NSRT, and TMRT baselines, we added a 200% error to the visibility amplitudes to reduce their weight and reconstruct images with a greater emphasis on the closure-quantity information. We started the first imaging with an initial image of a circular Gaussian with a full width at half maximum of 0.1 mas. The regularization parameters of wL1, TV, TSV, and MEM were set to 1, 10, 10, and 0.0001, respectively. We performed the first self-calibration by using the obtained image as a model and obtained a calibrated _uv_ data set to proceed to the next step of the parameter survey. Self-calibration restored the depressed visibility amplitudes, and all baselines were used hereafter. In the next step, iterative imaging was performed using the self-calibrated _uv_ data. The iterative pipeline of imaging and self-calibration was created to investigate how the image changes, and how consistent it is with the data, depending on the parameter set. The following parameter combinations were prepared: additional errors of the visibility amplitudes (err) = [0.1, 0.01, 0], regularization parameters of MEM ($\lambda_{\rm MEM}$) = [0.01, 0.001, 0.0001], regularization parameters of wL1 ($\lambda_{\rm wL1}$) = [10, 1, 0.1], and regularization parameters of TSV ($\lambda_{\rm TSV}$) = [100, 10, 1] (this survey is summarized in the code sketch below). After the iterative imaging of the 81 combinations of the parameter sets, we selected the final images with good fits to the data, i.e., those with a reduced $\chi^{2}$ around 1. The selection criteria on the reduced $\chi^{2}$ for the closure phases and log closure amplitudes are less than 1.2 and 1.3, respectively.

## 4 Image Properties

In total, 48 final images, which fit the data within the selection criteria, were selected among the 81 images. There are slight differences among the 48 selected images depending on the parameter set. For example, MEM had a particularly pronounced effect on the images, with a larger $\lambda_{\rm MEM}$ resulting in a blurrier image. Ex. 1, 2, and 3 in Figure 3 show images with $\lambda_{\rm MEM}$ set to 0.0001, 0.001, and 0.01, respectively, all with a reduced $\chi^{2}$ near 1. The top image in Figure 3 is the average of 48 such images that are slightly different but meet the reduced $\chi^{2}$ selection criteria. All images have a bright central core from which a fainter jet extends in a northwesterly direction. Furthermore, the jet dims at about 15 mas from the core and brightens again at about 20 mas, which is consistent with the CLEAN image (Cui et al., 2021). A counter-jet-like structure is also visible, though it is very faint and short compared with the approaching jet.
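For concreteness, the 81-combination parameter survey and reduced-$\chi^{2}$ selection described in Section 3 can be summarized in the following schematic loop. The functions `rml_image` and `reduced_chi2` are hypothetical stand-ins for the actual SMILI imaging and fit-statistics steps, stubbed out here so the sketch runs:

```python
from itertools import product
import random

# Hypothetical stand-ins for the SMILI imaging and fit-statistics steps;
# they only illustrate the control flow of the survey, not real imaging.
def rml_image(err, l_mem, l_wl1, l_tsv):
    return {"err": err, "mem": l_mem, "wl1": l_wl1, "tsv": l_tsv}

def reduced_chi2(image, quantity):
    return random.uniform(0.9, 1.5)   # dummy value in place of a real fit

errs       = [0.1, 0.01, 0]           # additional visibility-amplitude errors
lambda_mem = [0.01, 0.001, 0.0001]    # MEM weights
lambda_wl1 = [10, 1, 0.1]             # weighted-L1 weights
lambda_tsv = [100, 10, 1]             # TSV weights

final_images = []
for err, l_mem, l_wl1, l_tsv in product(errs, lambda_mem, lambda_wl1, lambda_tsv):
    img = rml_image(err, l_mem, l_wl1, l_tsv)       # imaging + self-calibration
    # keep only images with reduced chi^2 around unity (criteria of Section 3)
    if (reduced_chi2(img, "closure_phase") < 1.2 and
            reduced_chi2(img, "log_closure_amplitude") < 1.3):
        final_images.append(img)

print(3**4, "combinations surveyed;", len(final_images), "kept")
# in the actual analysis, 48 of the 81 images passed the criteria
```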
## 4 Image Properties

In total, 48 final images, which fit the data within the selection criteria, were selected among the 81 images. There are slight differences among the 48 selected images depending on the parameter set. For example, MEM had a particularly pronounced effect on the images, with a larger $\lambda_{\rm MEM}$ resulting in a blurrier image. Ex. 1, 2, and 3 in Figure 3 show images with $\lambda_{\rm MEM}$ set to 0.0001, 0.001, and 0.01, respectively, all with a reduced $\chi^{2}$ close to 1. The top image in Figure 3 is the average of 48 such images that are slightly different but meet the reduced $\chi^{2}$ selection criteria. All images have a bright central core from which a fainter jet extends in a northwesterly direction. Furthermore, the jet dims at about 15 mas from the core and brightens again at about 20 mas, which is consistent with the CLEAN image (Cui et al., 2021). A counter-jet-like structure is also visible, though it is very faint and short compared with the approaching jet.

To estimate the noise level of the image at a sufficient distance from the source structure, we used DIFMAP, assuming the pixel values of the averaged RML image to be the CLEAN model. Using natural weighting, we estimated rms = 0.3 mJy/beam with a beam size (FWHM) of $1.39\times 0.605$ mas at a position angle of 11.6 degrees. The counter-jet-like structure was about six times brighter than the rms noise and is considered to be significantly detected. The brightest part of the counter feature in the average RML image, which is located 1.7 mas southeast of the core, has about 23% of the brightness of the part of the main jet symmetrically located relative to the core. This means that the brightness ratio (BR) of the main jet to the counter jet is about 4.3. However, there is no conspicuous blob to the northwest of the core that fully corresponds to the blob to the southeast. In previous studies, the BR of the M87 jet at 1 mas from the core was about 5–25 (Hada et al., 2016), and at 0.5–3.1 mas, 10–15 (Kovalev et al., 2007). In Ly et al. (2007), the BR integrated over a region was 14.4, but the peak-to-peak ratio was 3.4. Although this study is roughly consistent with the results of previous studies, more precise BR measurements are needed to estimate the jet velocity.

To evaluate the jet shape, all 48 images were processed as follows. We rotated all images 19 degrees clockwise so that the jet extended horizontally rightward from the bright core. The vertical profile of each image was obtained to analyze the intensity distribution in the direction perpendicular to the jet axis. Profiles were taken at 0.3 mas and 0.5 mas from the core, and then in 0.5 mas increments up to 8.0 mas; each profile was averaged over $\pm$0.2 mas horizontally. The jet width was defined using the full width at half-maximum intensity (FWHM). If there were multiple peaks in a profile, the FWHM of the peaks at both ends and the peak-to-peak distance between them were used to calculate the width of the entire jet. Jet widths were measured in all 48 images, and the average values are plotted in Figure 4 with red diamond marks. The error bars are their standard deviations. The widths at 0.3 mas and 8 mas from the core were 0.5 mas and 2.7 mas, respectively. The jet width's dependence on the core distance can be fitted with a power law with an exponent of $0.54\pm 0.09$ (95% confidence level). These widths are consistent with the jet widths of the 43 and 86 GHz images obtained with the CLEAN method in Hada et al. (2016), shown with green and blue circle marks in Figure 4.

Figure 4: The width of the jet relative to the apparent distance from the core is shown on a double-logarithmic graph. Red diamonds show the result of this study. Green and blue circles are obtained from 43 and 86 GHz images, respectively, observed with VLBA and GBT in 2014 (Hada et al., 2016). The best-fit model ($W\propto r^{0.54\pm 0.09}$) for the relationship between the apparent distance $r$ and jet width $W$, obtained from the images of this study, is shown with an orange dashed line. The observation epochs are noted in the legend.

To see if we can resolve the ridge structure in the images, we investigated the profile perpendicular to the jet axis at a distance of 8 mas along the jet from the core (Figure 5), where we see three peaks in each image. The best-fit parameters were obtained by fitting the profile with three Gaussian components.
The separation between the central and southern peaks was 1.1–1.4 mas, and the separation between the northern and central peaks was even narrower, at 0.9–1.1 mas. On the other hand, no triple ridges were detected in the CLEAN image obtained using the same observation data (Cui et al., 2021). As the beam size used for the convolution of the CLEAN image was 1.35 mas perpendicular to the jet, assuming this to be the spatial resolution of the CLEAN image in this direction, the RML image with a peak separation of 1.0 mas shows at least 30% higher spatial resolution than the CLEAN image. In order of peak intensity, the brightest ridges were the northern, the southern, and the central. The central ridge seems to be systematically thicker than the northern and southern ridges; however, for some images, the difference in width is so small that it is unclear whether there is a significant difference. Previous observations at 1.6 GHz with VSOP (Asada et al., 2016) and an ultra-deep 15 GHz image from a 2 Gbps VLBA observation with the phased VLA (Hada, 2017) detected a triple-ridge structure more than 5 mas downstream from the core. The triple ridge at 8 mas from the core detected in this study is consistent in position and width with these, so it is likely that the same phenomenon was captured.

Figure 5: Left: the average image rotated clockwise by 19 degrees. The lowest level of the contour line is 0.13% of the peak flux. The yellow vertical line is drawn at 8.0 mas from the core, where the slice profile is investigated. Right: slice profile integrated over a region of $8.0\pm 0.2$ mas from the core.

## 5 Summary

We reconstructed M87 images from the EAVN 22 GHz data set with the RML method implemented in the SMILI tool using sparse modeling techniques. Visibility amplitudes, log closure amplitudes, and closure phases were utilized for the image reconstruction. By setting various parameters in an iterative pipeline of imaging and self-calibration, 81 images were obtained. Finally, 48 images with a reduced $\chi^{2}$ of about 1 were selected as the final images. Although there are slight differences among the selected images, depending on the parameter set, all images have a bright central core from which a fainter jet extends in a northwesterly direction. A counter-jet-like structure is also visible, with a detection at 6$\sigma$, although it is very faint and short compared with the approaching jet. We successfully measured the jet width in the region within a few mas of the jet root, which is consistent with the results of previous studies measured at 43 GHz and 86 GHz. We observed three peaks in the profile perpendicular to the jet axis at a distance of 8 mas from the core in each image, which is consistent with previous studies from space VLBI with VSOP and an ultra-deep image from the 2 Gbps VLBA observation with the phased VLA. The image of the M87 jet obtained by RML allows us to examine the differences in the brightness and thickness of the three ridges in detail, as well as their distances from each other, features which could be lost by convolving with nominal beam sizes, as in the CLEAN method. In the future, we must image the M87 jet at centimeter or longer wavelengths with higher resolution and sensitivity to investigate the entire jet from the root to the downstream regions with more precise profiles. This will help us to understand the acceleration and collimation mechanisms of the jet.
If we observe the same region at multiple wavelengths, we should be able to determine physical properties such as the optical depth and magnetic field. Furthermore, if monitoring observations are made at a resolution high enough to distinguish the three ridges, it will be possible to study the time variation of this peculiar structure. Expanding the VLBI array will be one of the keys to achieving these goals. An attempt to create a global array centered on East Asia has already begun with EATING VLBI, a joint effort between the EAVN and the Italian VLBI Network (Hada & Eavn/Eating VLBI Collaboration, 2020). Collaboration between East Asia and Australia has also started as a joint observation between the Tidbinbilla 70 m radio telescope and the EAVN, and it will include other Australian stations in the future. In addition, a radio telescope in Thailand is scheduled to join the EAVN in the future. These array extensions will play a pilot role for the next generation of VLBI, such as SKA-VLBI (Dewdney et al., 2009) and ngVLA (Murphy et al., 2018).

Funding: This research is funded by the following: JSPS (Japan Society for the Promotion of Science) Grant-in-Aid for Scientific Research (KAKENHI) (A) 22H00157 and (B) 18KK0090 (K.H.). K.H. is also funded by the Mitsubishi Foundation (201911019). Y.C. is funded by the China Postdoctoral Science Foundation (2022M712084). K.A. is financially supported by grants from the National Science Foundation (AST-1935980, AST-2034306, AST-2107681, AST-2132700, OMA-2029670). Y.M. is supported by the National Natural Science Foundation of China (12273022) and the Shanghai pilot program of international scientists for basic research (22JC1410600). R.-S.L. is supported by the Max Planck Partner Group of the MPG and the CAS, the Key Program of the NSFC (No. 11933007), the Key Research Program of Frontier Sciences, CAS (No. ZDBS-LY-SLH011), and the Shanghai Pilot Program for Basic Research—CAS, Shanghai Branch (No. JCYJ-SHFY-2022-013). L.C. is supported by the CAS "Light of West China" Program (No. 2021-XBQNXZ-005) and the NSFC (No. U2031212 and 61931002).

Acknowledgments: This work made use of the East Asian VLBI Network (EAVN), which is operated under a cooperative agreement by the National Astronomical Observatory of Japan (NAOJ), Korea Astronomy and Space Science Institute (KASI), Shanghai Astronomical Observatory (SHAO), and Xinjiang Astronomical Observatory (XAO). The operation of the Hitachi 32 m telescope is partially supported by the inter-university collaborative project "Japanese VLBI Network (JVN)" of NAOJ. We acknowledge all staff members and students who supported the operation of the array and the correlation of the data.

## References

* EHTC et al. (2019) Akiyama, K. et al. [Event Horizon Telescope Collaboration]. First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. Astrophys. J. Lett. 2019, 875, L1.
* EHT MWL Science Working Group et al. (2021) EHT MWL Science Working Group et al. Broadband Multi-wavelength Properties of M87 during the 2017 Event Horizon Telescope Campaign. Astrophys. J. Lett. 2021, 911, L11.
* Hada et al. (2012) Hada, K.; Kino, M.; Nagai, H.; Doi, A.; Hagiwara, Y.; Honma, M.; Giroletti, M.; Giovannini, G.; Kawaguchi, N. VLBI Observations of the Jet in M 87 during the Very High Energy $\gamma$-Ray Flare in 2010 April. Astrophys. J. 2012, 760, 52.
* Hada et al. (2017) Hada, K.; Park, J.H.; Kino, M.; Niinuma, K.; Sohn, B.W.; Ro, H.W.; Jung, T.; Algaba, J.C.; Zhao, G.Y.; Lee, S.S.; et al.
Pilot KaVA monitoring on the M 87 jet: Confirming the inner jet structure and superluminal motions at sub-pc scales. Publ. Astron. Soc. Jpn. 2017, 69, 71.
* Walker et al. (2018) Walker, R.C.; Hardee, P.E.; Davies, F.B.; Ly, C.; Junor, W. The Structure and Dynamics of the Subparsec Jet in M87 Based on 50 VLBA Observations over 17 Years at 43 GHz. Astrophys. J. 2018, 855, 128.
* Hada et al. (2016) Hada, K.; Kino, M.; Doi, A.; Nagai, H.; Honma, M.; Akiyama, K.; Tazaki, F.; Lico, R.; Giroletti, M.; Giovannini, G.; et al. High-sensitivity 86 GHz (3.5 mm) VLBI Observations of M87: Deep Imaging of the Jet Base at a Resolution of 10 Schwarzschild Radii. Astrophys. J. 2016, 817, 131.
* Kim et al. (2018) Kim, J.Y.; Krichbaum, T.P.; Lu, R.S.; Ros, E.; Bach, U.; Bremer, M.; de Vicente, P.; Lindqvist, M.; Zensus, J.A. The limb-brightened jet of M87 down to the 7 Schwarzschild radii scale. Astron. Astrophys. 2018, 616, A188.
* Hirabayashi et al. (2000) Hirabayashi, H.; Hirosawa, H.; Kobayashi, H.; Murata, Y.; Asaki, Y.; Avruch, I.M.; Edwards, P.G.; Fomalont, E.B.; Ichikawa, T.; Kii, T.; et al. The VLBI Space Observatory Programme and the Radio-Astronomical Satellite HALCA. Publ. Astron. Soc. Jpn. 2000, 52, 955.
* Asada et al. (2016) Asada, K.; Nakamura, M.; Pu, H.-Y. Indication of the Black Hole Powered Jet in M87 by VSOP Observations. Astrophys. J. 2016, 833, 56.
* Honma et al. (2014) Honma, M.; Akiyama, K.; Uemura, M.; Ikeda, S. Super-resolution imaging with radio interferometry using sparse modeling. Publ. Astron. Soc. Jpn. 2014, 66, 95.
* Akiyama et al. (2022) Akiyama, K.; Algaba, J.C.; An, T.; Asada, K.; Asanok, K.; Byun, D.Y.; Chanapote, T.; Chen, W.; Chen, Z.; Cheng, X.; et al. Overview of the Observing System and Initial Scientific Accomplishments of the East Asian VLBI Network (EAVN). Galaxies 2022, 10, 113.
* Cui et al. (2021) Cui, Y.Z.; Hada, K.; Kino, M.; Sohn, B.W.; Park, J.; Ro, H.W.; Sawada-Satoh, S.; Jiang, W.; Cui, L.; Honma, M.; et al. East Asian VLBI Network observations of active galactic nuclei jets: imaging with KaVA+Tianma+Nanshan. Res. Astron. Astrophys. 2021, 21, 205.
* Niinuma et al. (2014) Niinuma, K.; Lee, S.S.; Kino, M.; Sohn, B.W.; Akiyama, K.; Zhao, G.Y.; Sawada-Satoh, S.; Trippe, S.; Hada, K.; Jung, T.; et al. VLBI observations of bright AGN jets with the KVN and VERA Array (KaVA): Evaluation of imaging capability. Publ. Astron. Soc. Jpn. 2014, 66, 103.
* Park et al. (2019) Park, J.; Hada, K.; Kino, M.; Nakamura, M.; Hodgson, J.; Ro, H.; Cui, Y.; Asada, K.; Algaba, J.C.; Sawada-Satoh, S.; et al. Kinematics of the M87 Jet in the Collimation Zone: Gradual Acceleration and Velocity Stratification. Astrophys. J. 2019, 887, 147.
* Greisen (2003) Greisen, E.W. Information Handling in Astronomy—Historical Vistas; Springer Dordrecht, 2003; Volume 285, p. 109.
* Shepherd et al. (1994) Shepherd, M.C.; Pearson, T.J.; Taylor, G.B. DIFMAP: An interactive program for synthesis imaging. Bull. Am. Astron. Soc. 1994, 26, 987.
* Akiyama et al. (2017a) Akiyama, K.; Ikeda, S.; Pleau, M.; Fish, V.L.; Tazaki, F.; Kuramochi, K.; Broderick, A.E.; Dexter, J.; Mościbrodzka, M.; Gowanlock, M.; et al. Superresolution Full-polarimetric Imaging for Radio Interferometry with Sparse Modeling. Astron. J. 2017, 153, 159.
* Akiyama et al. (2017b) Akiyama, K.; Kuramochi, K.; Ikeda, S.; Fish, V.L.; Tazaki, F.; Honma, M.; Doeleman, S.S.; Broderick, A.E.; Dexter, J.; Mościbrodzka, M.; et al. Imaging the Schwarzschild-radius-scale Structure of M87 with the Event Horizon Telescope Using Sparse Modeling. Astrophys. J.
2017, 838, 1.
* EHTC et al. (2019) Akiyama, K. et al. [Event Horizon Telescope Collaboration]. First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. Astrophys. J. Lett. 2019, 875, L4.
* Kovalev et al. (2007) Kovalev, Y.Y.; Lister, M.L.; Homan, D.C.; Kellermann, K.I. The Inner Jet of the Radio Galaxy M87. Astrophys. J. 2007, 668, 27.
* Ly et al. (2007) Ly, C.; Walker, R.C.; Junor, W. High-frequency VLBI imaging of the jet base of M87. Astrophys. J. 2007, 660, 200.
* Hada (2017) Hada, K. The Structure and Propagation of the Misaligned Jet M87. Galaxies 2017, 5, 2.
* Hada & Eavn/Eating VLBI Collaboration (2020) Hada, K. et al. [Eavn/Eating VLBI Collaboration]. Observations of nearby relativistic jets with EAVN and EATING VLBI. Perseus in Sicily: From Black Hole to Cluster Outskirts, Proc. Int. Astron. Union 2020, 342, 73.
* Dewdney et al. (2009) Dewdney, P.E.; Hall, P.J.; Schilizzi, R.T.; Lazio, T.J.L. The Square Kilometre Array. Proc. IEEE 2009, 97, 1482.
* Murphy et al. (2018) Murphy, E.J.; Bolatto, A.; Chatterjee, S.; Casey, C.M. The ngVLA Science Case and Associated Science Requirements. ASP Conf. Ser. 2018, 517, 3.
# Optimizing Context-Enhanced Relational Joins

Viktor Sanca EPFL Lausanne, Switzerland <EMAIL_ADDRESS>Manos Chatzakis² EPFL Lausanne, Switzerland <EMAIL_ADDRESS>Anastasia Ailamaki¹ EPFL, Google Switzerland, USA <EMAIL_ADDRESS>

² Author contributed during an internship at DIAS lab, EPFL. ¹ Work done in its entirety at EPFL.

###### Abstract

Collecting data, extracting value, and combining insights from relational and context-rich multi-modal sources in data processing pipelines presents a challenge for traditional relational DBMS. While relational operators allow declarative and optimizable query specification, they are limited to data transformations unsuitable for capturing or analyzing context. On the other hand, representation learning models can map context-rich data into embeddings, allowing machine-automated context processing but requiring imperative data transformation integration with the analytical query. To bridge this dichotomy, we present a context-enhanced relational join and introduce an embedding operator composable with relational operators. This enables hybrid relational and context-rich vector data processing, with algebraic equivalences compatible with relational algebra and corresponding logical and physical optimizations. We investigate model-operator interaction with vector data processing and study the characteristics of the $\mathcal{E}$-join operator. Using an example of string embeddings, we demonstrate enabling hybrid context-enhanced processing on relational join operators with vector embeddings. The importance of holistic optimization, from logical to physical, is demonstrated by an order-of-magnitude execution time improvement.

###### Index Terms: analytics, vector embeddings, AI for database systems, query optimization, hardware-conscious processing

## I Introduction

Relational databases allow declarative query specification and abstractions for logical and physical query plan optimizations. These optimizations include operator reordering via algebraic equivalences and heuristics, and instantiating operators for resource-efficient and hardware-conscious execution on modern hardware. This allows ad-hoc query specification, abstracting significant implementation details from the user, and end-to-end optimization. As the main goal of relational analytical databases is to provide abstractions for large-scale processing and extraction of value from the data of interest, relational databases are designed for data types whose processing can be specified precisely and procedurally, such as aggregating numerical values or processing strings with a well-specified pattern. Still, many data sources are unsuitable for processing in a relational database and are typically only stored serialized in binary formats. These include documents, text, images, and other data sources of increasing value, driven by the advent of the Internet and mobile devices and services such as social media. Such data carries a lot of human-understandable context: the contents and number of objects in an image, or the semantics of a string despite alternative spellings, typos, or tenses, all of which make such processing impractical, if not impossible, to specify in traditional relational data analytics. On the other hand, advancements in artificial intelligence and machine learning have allowed increasingly complex machine reasoning and performance in analyzing context-rich data such as images or text.
Models such as BERT [1] and GPT [2] allow natural language processing, and ResNet [3] enables object localization and detection; these are often available as Foundation Models [4] that are trained on web-scale data and can be further customized and re-trained for a particular task. To use those models, often based on the Transformer architecture [5], analysts would instantiate the particular model, input the data, and collect the output, using frameworks such as TensorFlow [6] or PyTorch [7], often in an isolated, task-specific setting. With the proliferation of embedding-based analytics, vector databases have recently gained traction, offering embedding storage and vector search, but with limited integration with traditional relational analytics or the available operations over data.

Figure 1: Problem: Model-RDBMS data analysis requires user expertise, imperative tasks, and data movement specification.

While machine learning models can transform context-rich, multi-modal data into embeddings, coordinating the models and data processing pipelines is manual and imperative. Suppose an analyst wanted to analyze and extract insights from an RDBMS and use some data as input to the models as in Figure 1, whose output may again be input to an analytical query and further models in a more complex analytical processing pipeline. Such a use case could combine data from social media feeds with user reviews, transactions, and analytics in an online retailer scenario, resulting in complex data processing pipelines, as the data of interest and value may not necessarily be tabular only, as in the canonical TPC-H, TPC-DS, or SSB [8] benchmarks.

Figure 2: Enabler: models embed context-rich data into common tensor representation, allowing automated processing.

With two independent components, an RDBMS with relational data and a Model with vector data, the user is back in the center of imperative program specification and data movement. This is not desirable, as a user must be an expert to fine-tune the queries, potentially perform data integration, correctly deploy and scale the queries to the hardware, orchestrate data movement, and specify the correct operator orders to prevent negatively impacting the performance. Decades of research and engineering in query optimization and execution engines allow hiding this complexity and are the key motivation to extend relational algebra and make it the basis of emerging methods in multi-modal and context-rich processing. Tightly integrated, expressive, and optimizable, hybrid vector-relational data management is part of our broader effort toward the next generation of context-rich analytical engines [9]. The key enabler of this integration is that embedding models transform the domain of context-rich data into tensors as a common intermediate data representation, allowing strictly specified operations over high-dimensional vectors, such as similarity or analogies, providing a method to formalize the processing of data while preserving context in neural embedding space (Figure 2). A separation of concerns is established: model selection handles the multi-modality, data context, and semantics; the analytical engine optimizes and processes context-free data and tensors via exposed operators. Thus, traditional relational operators have their counterparts and new physical and logical properties with neural-embedding-based vector processing.
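As a schematic preview (the concrete models, words, and thresholds appear in Sections III and VI), writing $\mu(s)$ for the embedding of a string $s$, a semantic match becomes a cosine test and an analogy becomes vector arithmetic:

$\cos\big(\mu(\text{"barbecue"}),\,\mu(\text{"bbq"})\big)>\tau,\qquad\mu(\text{"Bern"})\approx\mu(\text{"Switzerland"})-\mu(\text{"Greece"})+\mu(\text{"Athens"})$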
In this work, we investigate the case of a context-enhanced join operation with model-operator interactions, which takes place over vector embeddings instead of only traditional relational data, and:

* Motivate and propose the capabilities of a context-enhanced join operator in Section II, and introduce and formalize the relational operator extension in Section III,
* Analyze the suitability of the traditional join operator for the task of vector data processing, and propose a cost model, logical optimizations, and an alternative efficient tensor formulation for parallel execution of a join operator for processing neural embeddings in Section IV,
* Evaluate the physical and hardware optimizations we propose in Section V, and benchmark the operator implementation and characteristics in Section VI, showing the over-an-order-of-magnitude impact on execution time and the importance of both logical and physical optimizations of vector-based join operations.

## II Motivation

Figure 3: Context-enhanced, model-relational analytics.

There have been significant efforts to enable machine-automated understanding of context-rich data. The key idea behind neural embeddings is that the model ($\mu$) learns how to transform the input data domain into a high-dimensional vector space (Figure 2), where relationships between the data can be expressed using linear algebra expressions over vectors. Embedding models ($\mathcal{E}_{\mu}$) take a human-understandable, context-rich domain and map it into a machine-operable high-dimensional vector (tensor) space.

### II-A Extended Functionality: Joins Over Contextual Data

Model-driven embeddings transform data previously opaque to the relational data management system into context-free vectors, as illustrated in Figure 3. The separation of concerns between the model-driven context and the uniform vector data representation enables defining expressions over vectors and tensors, such as semantic similarity using cosine distance, or other vector arithmetic operations, such as finding word analogies. Broadly, this approach enables a fundamentally new way to join context-rich data such as strings, documents, or images by defining a corresponding model and vector expression. This formulation is closer to the traditional relational join, as a similarity predicate between two vectors can be formulated as an expression. We describe particular use cases for such context-extended relational data management systems below.

#### II-A1 Semantic-Based Similarity Operations

Instead of having a human-in-the-loop or an expert system that performs dictionary-based or hard-coded rule-based similarity operations, neural embeddings allow the automation of similarity operations over many data types. A common tensor representation defines similarity joins as join condition expressions, such as cosine distance between the embedded vectors. The models fine-tune the functionality and context. After embedding the data and providing operators and expressions over tensors, such as cosine distance, model-independent operations can be combined with the rest of the relational query plan.

#### II-A2 Online Data Cleaning

Strings or other context-rich data can be dirty or have rich semantics. If we consider words or sentences, they may have misspellings, alternative spellings, synonyms, or different tenses that all have the same meaning. Specifying all the rules to unify context is error-prone and difficult, while word embeddings can encompass such similarity using representation learning.
Therefore, such operators can process dirty data on the fly, without prior cleaning and over only the data of interest, relying on embeddings and specified similarity thresholds for data integration, and potentially performing post-verification steps.

#### II-A3 Multi-Modal Data Processing

The data context is opaque to the execution engine, while the model selection and parameters give context and transform the data. By processing context-free tensors, relational engines provide a common optimization framework for multi-modal data driven by models, not relational engines. Join operations are not all-encompassing but provide a step towards unifying relational with model-driven data processing under a declarative and optimizable model.

### II-B Integrating Vector Embeddings With Relational Operators

Data management systems support and simplify data processing with research and systems contributions and features such as transactions and concurrency control [10, 11], auto-tuning [12], hardware-conscious implementations [13, 14, 15, 16, 17], corresponding data structures [18], and query optimization with declarative interfaces that abstract the system complexity from the end-user.

Figure 4: Goal: Hybrid vector-relational operations are declarative transformation primitives amenable to query optimization.

Instead of manual intermediate orchestration and system integration to combine and analyze multi-modal and context-rich data, involving different systems, data sources, and efficient operator reimplementation, we investigate how to extend traditional relational joins to support model-driven context with minimal system intrusions, building on top of existing, judiciously modified abstractions. In particular, this means that vectors are simply another data type over which expressions and operations can be defined. This makes index structures designed to store, maintain, and perform similarity search over tensors [19] compatible as physical access method options. Similarly, recent work has formulated traditional relational processing over tensors [20], where tensor processing platforms are used as analytical RDBMS to benefit from existing implementations while transforming the relational data and operations into tensor representation. While there has been prior work to integrate model inference and learning with analytical engines [21, 22], our goal is complementary, as we focus on how to extend the relational model functionality with contextual data, as illustrated in Figure 4. Similarly, we expose co-optimization opportunities at logical, physical, and implementation levels, and fine-grained system interactions [9].

### II-C Holistic Optimization

Without loss of generality, suppose the data of interest are strings and dates stored in an RDBMS. Generally, one can consider other context-rich formats stored as binary objects along with other relational data. To allow semantic similarity operations, such as matching strings that are synonyms, have misspellings, or are in different tenses, word embedding models transform strings into vectors, which are then comparable using cosine distance. While an RDBMS could execute regex-like string expressions, mapping strings to embeddings allows capturing broader classes of similarity within a model. Note that the model can be trained and adapted for different datasets to adjust the notion of similarity, which the analyst selects. We are interested in joining two tables over strings, where a condition over dates exists, making the queries selective on both tables.
In a declarative setting, query specification requires the embedding model information and the join condition expression, and the selectivity information from the relational columns needs to propagate before the embeddings are computed. Otherwise, the whole interaction may result in the user eagerly materializing all the data as in Figure 1, performing expensive embedding, and only then filtering. In more complex cases and ad-hoc queries, imperative specification and optimization are increasingly difficult.

Even with a declarative query whose logical plan has correct selectivities and operator orders, the word embedding model must interact with a join operator. Physical optimization must address this interaction and account for the tensor data format and the expressions suitable for comparing high-dimensional data. For example, while an equi-join over tensors could be implemented as a hash-join, more practical embedding comparisons, such as cosine distance, require algorithms such as nested-loop join for pair-wise comparisons, and must consider the join, operation, and model data structure access patterns in the algorithm and cost model.

Finally, from a hardware-conscious perspective, using many-dimensional vectors with relational operators designed and optimized for single-dimensional numerical data, together with judicious use of caches and the memory hierarchy, demands novel tradeoffs. A 100-dimensional tensor embedding will change the caching and execution patterns of traditional algorithms, and model embedding can incur computational or data access costs on the critical path of execution. Designing hardware-conscious algorithms represents a direction driven by novel model-database interactions. We aim to enable holistic optimization (Figure 4), starting from declarative specification through logical and physical optimization, and finally, hardware specialization to allow efficient execution.

Takeaway Neural embedding models transform context-rich, multi-modal data into a common (per-model) tensor representation space. From the perspective of declarative relational processing, models provide separation between data semantics and context-less tensors, acting as a model-parametrized projection operator. From there, relational operators perform operations such as cosine distance or vector transformations over tensors, amenable to query optimization via common abstractions and cost models that include model-operator interactions, and physical optimizations aware of tensors and of the new computation and data access patterns co-designed for hardware. We analyze these behaviors and propose solutions that are aware of the new design space.

## III Context-Enhanced Relational Join Operator

In this section, we formalize the proposal of a relational operator extension to declaratively process context-rich data, such as strings and text, stored alongside traditional relational data such as numerical or date attributes. We call this hybrid model-relational processing. This enhancement stems from the fact that contextual data may need to be transformed and processed differently, while compatibility with relational algebra and optimizations for processing purely relational data is required. Instead of using separate systems and manually orchestrating the data movement for processing using external programs or opaque UDFs, we propose a set of operations needed to express a join based on embedding the original data that is amenable to traditional query optimization.
### III-A Neural Embeddings

Neural embeddings and representation learning are rich and active research fields in machine learning. Images can be embedded with models such as ResNet [3], audio with PANNs [23], and text with BERT [1], word2vec [24], or FastText [25, 26]. Foundation Models [4] offer an increasingly flexible way to specialize large models to a particular use case. It is important to mention that those models can be tuned, as they learn representations from the training dataset through transfer learning [27] (e.g., starting from one of the foundation models) or re-training. In this work, we focus on and experiment with string embedding models. However, as embeddings are generally high-dimensional vectors, once in the embedding domain, the processing of this data is model- and input-data-type-agnostic, and the same principles and optimizations hold.

Processing embedded data allows automating semantic similarity using cosine similarity (or another distance) between the vectors. More complex relationships in the vector space are possible, such as analogies, for example country capitals: (Switzerland $\rightarrow$ Bern), (Greece $\rightarrow$ Athens). The use of vectors necessitates interaction with linear algebra; therefore, the equations below outline definitions of cosine similarity over vectors and matrices. We will use them heavily in the logical (Section IV) and physical (Section V) optimization phases.

$cos(\theta)=\frac{A\cdot B}{\lVert A\rVert\lVert B\rVert}=\frac{\sum\limits_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum\limits_{i=1}^{n}A_{i}^{2}}\sqrt{\sum\limits_{i=1}^{n}B_{i}^{2}}}$ (Cosine Similarity)

$\frac{\mathbf{a}^{(1,d)}\cdot\mathbf{b}^{(1,d)}}{\lVert\mathbf{a}\rVert\lVert\mathbf{b}\rVert}=\frac{\sum\limits_{i=1}^{d}a_{i}b_{i}}{\sqrt{\sum\limits_{i=1}^{d}a_{i}^{2}}\sqrt{\sum\limits_{i=1}^{d}b_{i}^{2}}}$ (Vector-Vector)

$\frac{\mathbf{a}^{(1,d)}\cdot\mathbf{B}^{(m,d)}}{\lVert\mathbf{a}\rVert\lVert\mathbf{B}\rVert}=\left[\frac{\mathbf{a}\cdot\mathbf{b}_{i}}{\lVert\mathbf{a}\rVert\lVert\mathbf{b}_{i}\rVert}\right]_{i=1}^{m}$ (Vector-Matrix)

$\frac{\mathbf{A}^{(n,d)}\cdot\mathbf{B}^{(m,d)}}{\lVert\mathbf{A}\rVert\lVert\mathbf{B}\rVert}=\left[\left[\frac{\mathbf{a}_{i}\cdot\mathbf{b}_{j}}{\lVert\mathbf{a}_{i}\rVert\lVert\mathbf{b}_{j}\rVert}\right]_{i=1}^{n}\right]_{j=1}^{m}$ (Matrix-Matrix)

### III-B Model-Operator Interaction

In our example, we focus on context awareness over strings so that common mistakes or semantically similar words are automatically captured. Rather than requiring the user to strictly specify the rules for string similarity or to clean the data ahead of time, we enable words with similar semantics, such as (barbecue, barbecues, bbq, barbicue, grilling), to be used automatically with relational operator predicates, without prior user intervention. The user should only specify the embedding model and a threshold distance parameter over the cosine similarity calculation (Equation: Cosine Similarity). Instead of comparing two strings in their original domain, they are embedded. If the cosine similarity $cos(\theta)$ is larger than the specified threshold, the two strings are similar and should be matched. This avoids manually combining string processing techniques, such as Locality Sensitive Hashing, which are individually limited to capturing only specific features such as misspellings. A context-aware operator is supplemented with an embedding model ($\mu$). In this case, when an operator receives strings, it embeds them and then performs the requested processing in the vector domain.
Models can be selected based on the analyst's needs, while often having desirable properties such as the capability of training and adapting to the desired similarity context. This interaction opens up design and optimization choices, such as how to mask or minimize the cost of embedding/model access and overlap it with operator execution. We capture this interaction through relational algebra (Subsection III-C) and a cost model (Section IV) to allow holistic integration with the remainder of the query plan.

### III-C Relational Operators and Algebra

We introduce the embedding operator ($\mathcal{E}$) using a model ($\mu$), and relational algebra equivalences over selection ($\sigma$) and $\theta$-join ($\bowtie_{\theta}$) operations, compatible with traditional relational algebra definitions.

#### III-C1 Selection

The selection operation applies predicate $\theta$ over input tuples and returns only the tuples that satisfy the condition.

$\sigma_{\theta}(R)=\{t\in R,\ \theta(t)\}$ (Selection with predicate $\theta$)

To change the domain of input data, we allow mapping the input tuples (or a projection over the tuples, for simple notation) using a model ($\mu$) into vector space using the embedding ($\mathcal{E}$) operation.

$\mathcal{E}_{\mu}(R)=\{t\in R,\ t\mapsto\mu(t)\}$ (Embedding with model $\mu$)

For completeness, and for decoding the embeddings and retrieving the context-rich data, an inverse operation $\mathcal{E}^{-1}$ should also be defined, which is the standard component of encoder-decoder architectures and is semantically correct only for the same model $\mu$. Alternatively, a lookup table mechanism can maintain this object-embedding mapping.

$\mathcal{E}^{-1}_{\mu}(\mathcal{E}_{\mu}(R))=R$ (Decoding with model $\mu$)

Combining embedding with selection allows the processing of tuples with a mixture of data formats. Some attributes may have traditional relational predicates, and some may require embedding and predicates using different metrics (such as cosine distance). Predicate pushdown and operation reordering can happen as soon as the attributes that predicates operate over are available.

$\sigma_{\mathcal{E},\mu,\theta}(R)\Leftrightarrow\sigma_{\theta}(\mathcal{E}_{\mu}(R))\Leftrightarrow\sigma_{\theta_{\mathcal{E}}}(\mathcal{E}_{\mu}(\sigma_{\theta_{R}}(R)))$ ($\mathcal{E}$-Selection)

#### III-C2 Join

The join operation takes two relations and joins them over specified attributes using specified predicate conditions ($\theta$-join).

$R\times S=\{(r,s),\ r\in R\wedge s\in S\}$ (Cartesian Product)

Joins are amenable to predicate pushdowns and reordering.

$R\bowtie_{\theta}S\Leftrightarrow\sigma_{\theta}(R\times S)$ ($\theta$-Join Generalization)

We introduce embeddings to the generalized join definition and provide equivalences. Embeddings can be observed as a special projection operation that changes the domain.

$R\bowtie_{\mathcal{E},\mu,\theta}S\Leftrightarrow\sigma_{\mathcal{E},\mu,\theta}(R\times S)\Leftrightarrow\sigma_{\theta}(\mathcal{E}_{\mu}(R)\times\mathcal{E}_{\mu}(S))\Leftrightarrow\mathcal{E}_{\mu}(R)\bowtie_{\theta}\mathcal{E}_{\mu}(S)$ ($\mathcal{E}$-$\theta$-Join)

Conversely, decoding the embeddings into their original domain is possible across the plan (Equation: Decoding with model $\mu$).

Figure 5: Hybrid vector-relational query example, and the join operator which is the focus of the optimizations in this paper.
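To make the $\mathcal{E}$-$\theta$-Join equivalence concrete, the following is a minimal sketch (not our full system, which is described in Section VI) of its right-hand side: embed each relation once with a caller-supplied model $\mu$, represented by a hypothetical `embed` callable standing in for, e.g., a word-embedding lookup, then evaluate a cosine-threshold predicate pair-wise. Section IV formalizes why embedding once per tuple matters for cost.

```cpp
#include <cmath>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

using Vec = std::vector<float>;

// Cosine similarity between two equal-length embedding vectors
// (Equation: Cosine Similarity).
float cosine(const Vec& a, const Vec& b) {
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// E-theta-join over strings: embed each relation once, then evaluate the
// cosine-threshold predicate pair-wise. `embed` stands in for the model mu.
template <typename Model>
std::vector<std::pair<std::size_t, std::size_t>>
e_join(const std::vector<std::string>& R, const std::vector<std::string>& S,
       Model embed, float tau) {
    std::vector<Vec> eR, eS;                         // prefetched embeddings:
    for (const auto& r : R) eR.push_back(embed(r));  // |R| model calls
    for (const auto& s : S) eS.push_back(embed(s));  // |S| model calls
    std::vector<std::pair<std::size_t, std::size_t>> out;
    for (std::size_t i = 0; i < eR.size(); ++i)      // |R| x |S| comparisons
        for (std::size_t j = 0; j < eS.size(); ++j)
            if (cosine(eR[i], eS[j]) > tau) out.emplace_back(i, j);
    return out;  // tuple offsets into R and S (late materialization)
}
```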
Takeaway We formulate the context-enhanced operators by extending relational operators and algebra to allow declarative integration of embedding models with relational engines and optimizers. A hybrid setting enables declarative and systematic logical and physical optimizations, as depicted in the simple query in Figure 5, while providing semantic awareness using embeddings to separate concerns between models and engines.

## IV Logical Optimization

Figure 6: Matrix formulation of $\mathcal{E}$-join allows scalable and cache-efficient execution over high-dimensional embeddings.

Starting from the extended algebra and operators, we present the logical optimization driven by model-operator interaction and tensors as a common intermediate data representation. In contrast to traditional optimizations and relational operator cost models, two factors are different. First, since models may be on the critical path of execution, model embedding data access or computation time must be considered in addition to the relational operator's data access and processing cost. Second, embeddings change the data domain from traditionally atomic data types (prescribed by the 1st normal form) into dense high-dimensional vectors, which can benefit from optimizations in the domain of linear algebra. Still, with regard to the 1st normal form, embeddings are not structured data but should be observed and processed atomically. This changes the cost model and impacts the known characteristics of algorithms. For example, suppose an embedding is a 100-D vector. In that case, the cost of data movement and the cache locality characteristics (spatial and temporal) change and must be evaluated in conjunction with the model's behavior and internal data structures.

### IV-A Cost Model

We outline the abstract cost model for the context-enhanced selection and join operations. For joins, we start by investigating the first available strategy: nested-loop join (NLJ). It is important to note that we focus on evaluating exact algorithms in our study. Since the distance we use is cosine similarity, hash-based approaches would yield approximate solutions similar to Locality Sensitive Hashing. Of course, if we were to use equi-joins, it would be possible to use traditional hash-joins, but there would be no benefit from using embeddings. Still, nested-loop joins are a good fit: they can be formulated with good cache locality, an important performance factor (Section VI), and they do not incur random access over high-dimensional data, as every vector needs to be pair-wise compared using cosine distance.

We outline the abstract cost model for selection and join below, where $R$ and $S$ are relations, $|R|$ is the cardinality of relation $R$, $A$ represents the data access cost, $M$ represents the model cost, and $C$ is the computation cost. Selection is an operation where input data is scanned, embedded, and the condition is applied over every input tuple, where each tuple incurs access, computation, and model cost:

$Cost(\sigma_{\mathcal{E},\mu,\theta}(R))=|R|\cdot(A+M+C)$ ($\mathcal{E}$-Selection Cost)

A naive extension of the Nested-Loop Join (NLJ) operation would scan both relations and perform pair-wise condition comparisons. In this implementation, without considering the model-relational interaction, model access would be performed per processed tuple, which incurs a quadratic model access cost.
Considering that embedding models are computationally expensive, the following cost model shows the suboptimality:

$Cost(R\bowtie_{\mathcal{E},\mu,\theta}S)=|R|\cdot|S|\cdot(A+M+C)$ ($\mathcal{E}$-NL Join Cost)

Instead, by considering the characteristics of the nested-loop join, we observe that tuple embedding needs to happen only once per tuple from both relations. This can be performed as a precursor to the join operation or as a lazy embedding and data materialization strategy. With this observation, the join incurs only a linear model cost with prefetching:

$Cost(R\bowtie_{\mathcal{E},\mu,\theta}S)=|R|\cdot|S|\cdot(A+C)+(|R|+|S|)\cdot M$ ($\mathcal{E}$-NLJ Prefetch Optimization)

This optimization is significant, as the model cost can span from random access to a lookup table (several times slower than a sequential scan) to expensive computations over deep neural networks (data transfer and computation). From another perspective, if machine learning models are used as-a-service and paid for per embedding, this cost model also results in monetary savings compared to a potentially suboptimal, manual implementation. Expressing embeddings as relational operator extensions allows logical optimization to occur in conjunction with other operators in the hybrid relational-embedding pipeline (selection pushdown, join reordering), such that the cardinality of the most costly part of the query plan is reduced without explicit user intervention or knowledge about the specific interactions of a given relational operator.

### IV-B Tensor Join Formulation

Since the computation of context-enhanced operators happens over dense high-dimensional embedding vectors, following the vector and matrix definitions of cosine distance in Subsection III-A, we present the tensor formulation of the dot product. It is important to highlight that cosine similarity is equivalent to the dot product with normalized input vectors. The tensor formulation allows reasoning about the potential decomposition of the problem for parallel and cache-efficient execution beyond data parallelism, a basis for the physical optimization (Section V). This enables efficient and well-studied matrix-based algorithms for linear algebra in addition to the traditional relational algorithms.
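Concretely, if $\hat{\mathbf{R}}$ ($|R|\times d$) and $\hat{\mathbf{S}}$ ($|S|\times d$) store the row-wise L2-normalized embeddings of the two relations, all pairwise cosine similarities reduce to a single dense matrix product, and the join keeps only the offset pairs above the threshold $\tau$:

$\mathbf{C}=\hat{\mathbf{R}}\hat{\mathbf{S}}^{\top},\qquad\mathbf{C}_{ij}=\cos(\theta_{ij}),\qquad R\bowtie_{\mathcal{E},\mu,\theta}S=\{(i,j)\ :\ \mathbf{C}_{ij}>\tau\}$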
We present the block-matrix decomposition of the problem [28]:

$\mathbf{D}_{tv}=\sum\limits_{i=1}^{d}\mathbf{R}_{ti}\mathbf{S}_{iv},\qquad\mathbf{R}=\begin{bmatrix}\mathbf{R}_{11}&\dots&\mathbf{R}_{1d}\\ \vdots&\ddots&\vdots\\ \mathbf{R}_{t1}&\dots&\mathbf{R}_{td}\end{bmatrix},\quad\mathbf{S}=\begin{bmatrix}\mathbf{S}_{11}&\dots&\mathbf{S}_{1v}\\ \vdots&\ddots&\vdots\\ \mathbf{S}_{d1}&\dots&\mathbf{S}_{dv}\end{bmatrix},\quad\mathbf{D}=\mathbf{R}\mathbf{S}=\begin{bmatrix}\mathbf{D}_{11}&\dots&\mathbf{D}_{1v}\\ \vdots&\ddots&\vdots\\ \mathbf{D}_{t1}&\dots&\mathbf{D}_{tv}\end{bmatrix}$ (Block Matrix Dot Product Decomposition [28])

Given a $(|R|\times dim)$ matrix $\mathbf{R}$ with $t$ row partitions and $d$ column partitions, and a $(dim\times|S|)$ matrix $\mathbf{S}$ with $d$ row partitions and $v$ column partitions that are compatible with the partitions of $\mathbf{R}$, the dot product $\mathbf{D}=\mathbf{R}\mathbf{S}$ can be formed block-wise, yielding $\mathbf{D}$ as a $(|R|\times|S|)$ matrix with $t$ row partitions and $v$ column partitions. We consider $\mathbf{S}$ to be already transposed if the initial data layout is the same as that of $\mathbf{R}$; in other words, matrices $\mathbf{R}$ and $\mathbf{S}$ are compatible. In particular, we partition the data along the tuple lines, not the dimensions, as illustrated in Figure 6 ①.

Transforming the initial Nested-Loop Join into the tensor formulation enables applying linear algebra optimizations, in particular matrix multiplication algorithms, to achieve better cache utilization of high-dimensional data, with well-understood parallelization using block-matrix decomposition. This is compatible with and extends recent research on formulating relational operators for tensor processing runtimes [29]. The cache is utilized better because, in contrast to NLJ, a matrix block (several vectors) can remain in the cache and be reused over many operations. Block-matrix partitioning allows for defining the processing granularity, which is significant when the memory footprint needs to be constrained, allowing fine control of both the transfer and processing granularity of cosine-distance-based similarity operations, all while reducing redundant data transfers. In particular, this is a dense matrix operation, which is highly computationally and data-access optimized.

The next step is to map back to the corresponding tuple pairs that satisfy the threshold condition, as in Figure 6 ②. It is sufficient to maintain the starting offsets of the input relation partitions, so the result set constitutes a potentially sparse matrix of pairs that represent matrix batch offsets, driven by the predicate selectivity. This result can be considered equivalent to using late materialization, and while sparse, it is more compact, as tuples of offsets represent unique tensor identifiers. This is increasingly important when using novel memory hierarchies with fast but limited memory, such as high-bandwidth memory (HBM) [30].

Takeaway Formulating the cost model and alternative equivalent execution plans using linear algebra allows tuning the algorithms to the cost model and execution environment parameters, as high-dimensional vectors and model processing introduce data access, caching, and processing overheads.
This is a mandatory step that enables further logical and physical optimizations, different from the ones suitable for traditional relational operators that process only single-dimensional data.

## V Physical Optimization

Modern data management systems are designed and optimized to efficiently utilize available hardware resources [14, 17, 13]. Equally, machine learning and linear algebra frameworks are designed with physical optimizations to allow fast and efficient execution over vector data [6, 7, 31, 19].

### V-A Data-Parallel Execution

To benefit from many-core architectures, we outline the parallelization and hardware-conscious optimizations of the join algorithm. In contrast to the traditional Nested-Loop Join (NLJ), which allows exact cosine-distance-based joins, high-dimensional embedding vectors take up more space in the cache hierarchy. Consider a 32KB L1 cache operating over 4-byte values: it can hold about 8000 single-dimensional values, but only about 80 100-dimensional embedding vectors. This necessitates a cache-efficient implementation to benefit from the memory hierarchy. Furthermore, computing cosine distance over vectors requires more computation cycles than performing a regular value-based operation. Thus, judicious use of hardware resources is necessary to speed up data access and computation.

#### V-A1 Data-parallelization strategies

Nested-Loop Join can be parallelized by partitioning the input relations and using the heuristic of keeping the smaller relation inside the inner loop to improve data and cache locality. This is a traditional and well-known NLJ algorithm implementation and optimization. In addition, we propose using the matrix (tensor) formulation (Figure 6) based on linear algebra as an alternative physical NLJ implementation. Matrix multiplication to obtain the dot product over normalized vectors is embarrassingly parallel, in addition to offering fine-grained control over partition size. In contrast to NLJ, matrix multiplication over dense vectors is a linear algebra operation with better cache locality [32, 33], improving the use of the memory hierarchy in the presence of high-dimensional data, and benefiting from efficient matrix multiplication algorithms and linear-algebra frameworks.

#### V-A2 CPU Hardware Support

Traditionally, CPUs benefit from main memory access locality. They are general-purpose compute units designed to process full-precision data types (e.g., 32-bit and 64-bit) and support SIMD, such as with Intel AVX instructions. The recent AVX-512 [34] instruction set introduced hardware support for half-precision data types, which allows processing up to 32 16-bit floating-point numbers in a SIMD register. Furthermore, to support typical machine learning workloads, beyond providing hardware support for half-precision data types, CPUs such as 4th-generation Intel Xeon Scalable processors introduced specialized instruction sets (AMX) to accelerate matrix computations, along with limited-capacity high-bandwidth memory [35, 30]. In general, specialized instructions can accelerate dense matrix computations. At the same time, low-latency access to memory enables optimizing the sparse matrix processing when handling the elements that satisfy the join predicate. As even the main memory is often limited or expensive, we will next discuss how to constrain the memory requirements of the tensor-based join formulation.
#### V-A3 SIMD Vectorization

Executing linear algebra operations such as cosine distance over dense vectors is compute-intensive and involves repeated operations over every vector embedding element. Since operations such as the sum are repeated over every element of the logical vector, they are a natural fit for single-instruction, multiple-data (SIMD) instructions. We use hardware-supported SIMD vectorization to speed up the arithmetic operations with fewer processing cycles, using specialized registers and compute units, in conjunction with data-parallel partitioning for multi-core operator execution.

### V-B Constraining the Memory Requirements

Figure 7: Matrix partitioning constrains the memory requirement.

The tensor join formulation given in Figure 6 assumes a dot product operation between two matrices, a dense matrix linear algebra operation. This results in a large intermediate state matrix whose dimensions are the cardinalities of the input relations. Although joins are typically selective, which might reduce the matrix size, as in Figure 7, the intermediate state might still be too big to store and compute at once. Computing this matrix based on relations R and S would yield an |R| $\times$ |S| memory requirement, which for a 100k $\times$ 100k matrix of FP32 floating-point values amounts to 40GB. While this matrix can be preserved to offset future computation, the primary purpose is to compute and return the offsets that satisfy the distance threshold requirement, which is typically a sparse result. Thus, even for modest input relation sizes, this approach, in its current formulation, does not scale well with respect to memory requirements.

To resolve this issue, the previously presented matrix decomposition (Equation: Block Matrix Dot Product Decomposition [28]) allows scheduling the computation in batches and explicitly controlling the memory requirements based on the desired intermediate matrix size. This trades off memory for multiple invocations of the computation algorithm with smaller matrices, effectively computing the large matrix while pruning the intermediate sparse state after each matrix computation and comparison with the similarity threshold condition. We illustrate this in Figure 7, where two relations A and B are joined over vectors. While the required memory can be reduced by the selectivity of other pushed-down relational predicates ($\sigma_{A}$, $\sigma_{B}$), the intermediate state might still not fit the available buffer budget. Thus, based on the available buffer size, the input data can be partitioned arbitrarily by decomposing it along the vector tuple boundaries (not the dimensions). The strict |A| $\times$ |B| memory requirement becomes Buffer = |part(A)| $\times$ |part(B)|, at the cost of several invocations of the algorithm, which might reduce the overall performance through frequent data movement and lower cache locality.

Takeaway The physical operator design landscape encompasses implementation and hardware device characteristics-based decisions. Model-operator interactions only enrich and open a new design space. High-dimensional data reduces the effective capacity of the memory hierarchy in comparison to the common atomic data types found in relational data processing and requires rethinking cache-local implementations. With the increase in per-tuple compute cost, the strain is on both memory and compute resources, which invites the use of specialized hardware-conscious algorithms such as the tensor join.
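As an illustration of Sections IV-B and V-B combined, the following is a minimal sketch of the buffer-constrained tensor join of Figure 7, under the assumptions of row-major, L2-pre-normalized embedding matrices and a CBLAS sgemm implementation (Section VI uses the Intel oneAPI MKL; any BLAS would do). The partition sizes t and v bound the intermediate buffer to t $\times$ v floats.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>
#include <cblas.h>  // CBLAS interface, e.g., Intel oneMKL or OpenBLAS

// Buffer-constrained tensor join over row-major, L2-pre-normalized
// embeddings: R is n x d, S is m x d. Partition sizes t and v bound the
// intermediate similarity buffer to t * v floats (Buffer = |part(A)| x |part(B)|).
std::vector<std::pair<std::size_t, std::size_t>>
tensor_join(const float* R, std::size_t n, const float* S, std::size_t m,
            std::size_t d, float tau, std::size_t t, std::size_t v) {
    std::vector<float> C(t * v);  // reused block of pairwise similarities
    std::vector<std::pair<std::size_t, std::size_t>> out;
    for (std::size_t i0 = 0; i0 < n; i0 += t) {
        const std::size_t bi = std::min(t, n - i0);
        for (std::size_t j0 = 0; j0 < m; j0 += v) {
            const std::size_t bj = std::min(v, m - j0);
            // C = R_block * S_block^T: with normalized rows, the dot
            // product equals cosine similarity (Section IV-B).
            cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                        (int)bi, (int)bj, (int)d, 1.0f,
                        R + i0 * d, (int)d, S + j0 * d, (int)d,
                        0.0f, C.data(), (int)bj);
            // Prune immediately: keep only offset pairs above the threshold,
            // so the sparse result never materializes the full |R| x |S| matrix.
            for (std::size_t i = 0; i < bi; ++i)
                for (std::size_t j = 0; j < bj; ++j)
                    if (C[i * bj + j] > tau) out.emplace_back(i0 + i, j0 + j);
        }
    }
    return out;
}
```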
## VI Evaluation

We start by demonstrating the functionality of using models as a driver of context-enhanced relational operations through the example of word embeddings. We then focus on the main performance evaluation of the proposed logical and physical optimizations, showing that a holistic approach is necessary to obtain a performant join algorithm.

System We implement our prototype operators and evaluation in a standalone system in C++ and use Intel AVX instructions for SIMD execution. Tensor formulation benchmarks use the Intel oneAPI Math Kernel Library for CPU-aware and efficient BLAS-based linear algebra operations.

Hardware Setup We run the end-to-end and scalability experiments on a two-socket Intel Xeon Gold 5118 CPU (2 x 12-core, 48 threads) and 384GB of RAM. All experiments are with in-memory data; experiments with synthetic data use the same random number generator seed for reproducibility.

### VI-A Enhancing Operator Context via Word Embeddings

In our study, we use the example of word embeddings that transform input strings into high-dimensional vectors. We show the context-awareness functionality that word embeddings allow and note that embedding models can be fine-tuned and replaced to support different notions of similarity. Likewise, there are embedding models that support different data modalities. Still, the intermediate data representation of an embedding is a context-free vector that operators process independently of the particular model, on top of which we base our analysis. The proposed optimizations are model-independent by design and principled in approach, due to the separation of concerns between the model, which produces vectors, and the operator performing the join over context-free vectors.

Embedding Model We use FastText [25, 26] as the model ($\mu$) for string embeddings, which has the desirable properties that it can be trained and adapted to the context, supports out-of-vocabulary word embedding, and is resilient to misspellings. A context-aware operator is supplemented with an embedding model. In this case, when an operator receives strings, it embeds them using FastText and then performs the requested processing in the vector domain.

Dataset We train a 100-dimensional embedding model on a subset of the Wikipedia dataset [36], cleaned of stopwords, and use a subset of 1M strings from the dataset to test similarity using the model. We show the nearest vectors to sample words, as the strings are embedded into a high-dimensional vector space. We then decode the vectors back into strings and present sample semantic matches in Table I. The model has learned semantics and context from the Wikipedia dataset. To fine-tune the model, it is possible to specialize the embedding models with other domain-specific datasets.

TABLE I: Semantic Matching using FastText trained on Wikipedia dataset, 100-D embeddings, sample words.
TABLE I: Semantic Matching using FastText trained on the Wikipedia dataset, 100-D embeddings, sample words.

Word | Top-15 Model Matches
---|---
dbms | rdbms, nosql, dbmss, postgresql, rdbmss, sql, dbmses, sqlite, dataflow, ordbms, oodbms, couchdb, mysql, ldap, oltp
postgres | postgre, postgresql, openvt, dbms, rdbmss, sqlite, dbmss, odbc, backend, rdbms, rdbmses, postgis, openvp, couchdb, mysql
animal | animals, felines, human, bovines, equines, dogs, nonhuman, ferrets, rabbits, chickens, anthropods, bovine, anthropod, mammal, equine
dog | dogs, poodle, doberman, sheepdog, puppies, dachshund, hound, bullmastiff, retriever, pinschers, dobermans, puppy, bullmastiffs, chickenhound, dachshunds
clothes | dresses, clothing, garments, underwear, bedclothes, undergarments, towels, underwears, scarves, shoes, nightgowns, clothings, bathrobes, underclothes

Note that the models allow automated semantic matching: the strings are not materialized or retrieved in an intermediate step during operations. The computation happens entirely on embedded data, and only positive matches are retrieved. Embeddings can be decoded, for example, based on their offset in the input relation, or by processing the embedding with a standard encoder-decoder model architecture. This model aims to detect synonyms, semantically related matches, and plural word forms without external user specification. A join with cosine distance then requires only a single threshold parameter. This allows relational operators that would normally operate over the raw strings (the Word column in Table I) to match them against the strings on the right in the embedding domain, without humans in the loop or experts creating and specifying strict rules. Such models can even work from a set of positive match examples from which the correct cosine distance threshold is inferred.

### VI-B NLJ Formulation: Logical Optimization

Figure 8: The impact of logical and physical optimization on the NLJ formulation (NO-SIMD, SIMD, Prefetch NO-SIMD, Prefetch SIMD). 100-D vectors, 48 threads.

As introduced in Section IV, we extend the traditional relational join formulation by embedding vector processing and retrieval. We evaluate the impact of the logical optimization of vector prefetching and the physical optimization using SIMD in Figure 8. This experiment validates the cost difference between the naive join extension (Equation: $\mathcal{E}$-NL Join Cost) and the one aware of the vector retrieval (Equation: $\mathcal{E}$-NLJ Prefetch Optimization). Not prefetching the embeddings incurs quadratic model access costs, validating the cost model and resulting in orders-of-magnitude slower execution. This emphasizes the importance of analyzing, exposing, and optimizing model-operator interactions. Despite using the same hardware resources, including separate experiments with and without SIMD, the main bottleneck is not computational but access-pattern-related and algorithmic. Faster hardware cannot correct a suboptimal holistic operator formulation, as may arise from imperative operator specification by a non-expert user. In this case, the optimal strategy of prefetching the embeddings first and then joining, despite consisting of two separate tasks, yields faster execution (a sketch of this prefetch-then-join strategy follows below). SIMD instructions improve the execution time 2x, indicating a computational bottleneck that additional hardware instructions reduce, while this is not possible in the non-prefetched, suboptimal formulation.
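A minimal sketch of the prefetch-then-join strategy, building on the hypothetical embed and cosine_simd helpers sketched earlier. The naive variant would instead invoke the model inside the join loops, incurring the quadratic access cost.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

using Matches = std::vector<std::pair<std::size_t, std::size_t>>;

// Logical optimization: O(|R| + |S|) model invocations instead of the
// naive O(|R| * |S|) incurred by embedding inside the join loops.
Matches nlj_prefetch(const std::vector<std::string>& R,
                     const std::vector<std::string>& S,
                     fasttext::FastText& model, float theta) {
    std::vector<std::vector<float>> eR, eS;   // prefetched embeddings
    for (const auto& r : R) eR.push_back(embed(model, r));
    for (const auto& s : S) eS.push_back(embed(model, s));

    Matches out;
    for (std::size_t i = 0; i < eR.size(); ++i)
        for (std::size_t j = 0; j < eS.size(); ++j)
            // Join in the vector domain: cosine distance under threshold.
            if (1.0f - cosine_simd(eR[i].data(), eS[j].data(), eR[i].size()) <= theta)
                out.emplace_back(i, j);       // only matching offsets are kept
    return out;
}
```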
Figure 9: Optimized NLJ formulation scalability (10k x 10k and 100k x 100k inputs), 100-D vectors.

In Figure 9, we show the scalability of the proposed logical optimization with prefetching. We compare model prefetching over two input relations sized 10k x 10k and 100k x 100k, which result in $10^{8}$ and $10^{10}$ computations, respectively. Notice the log scale of the figure. Using the improved NLJ cost model formulation and comparing the execution times between the two input sizes, execution time scales linearly with the number of computations, by the expected factor of $10^{2}$, instead of quadratically in model accesses as in the non-optimized case.

Takeaway. Logical operator optimizations and task orchestration are crucial to removing algorithmic bottlenecks. Allocating more resources cannot scale and is wasteful before an algorithm's logical costs and overheads are resolved.

### VI-C NLJ Formulation: Physical Optimization

Figure 10: NLJ formulation scalability and impact of SIMD, 10k x 10k join input relations, 100-D vectors.

We focus next on the physical optimizations and demonstrate the scalability of CPU execution under the physical and logical optimizations of the NLJ formulation presented in Section V. First, we investigate the impact of SIMD vectorization (Figure 10). We enable hyperthreading (24 physical, 48 logical cores), affinitize threads to cores (2 threads run on 1 physical core, 4 on 2, etc.), and run the NLJ formulation on a 10k x 10k input with 100-dimensional embeddings. The processor has AVX-512 registers that fit 16 32-bit floating-point values simultaneously. The average improvement is 5.36x, short of the nominal 16x, indicating non-computational overheads during vectorization but still a substantial gain from the available hardware intrinsics.

Figure 11: NLJ formulation scalability with SIMD, prioritizing physical cores, 10k x 10k input relations, 100-D vectors.

We use two strategies to investigate the effect of affinitizing threads to cores (Figure 11). First, we assign the physical cores to the thread pool, followed by hyper-threads. Second, we affinitize threads and hyper-threads to cores (2 threads run on 1 physical core, 4 on 2, up to 24 cores/48 threads). While the affinitized strategy scales with added cores, the physical-cores-first strategy scales faster, and hyper-threads do not contribute to improved execution time. This corroborates that dense vector linear algebra is computationally heavy, and a physical core cannot simultaneously benefit from scheduling a hyper-thread. At the shift from physical to added logical cores, we see an increase for the physical-cores-first strategy at 28 threads, as 28 data-parallel partitions were assigned to 24 physical cores, creating 4 stragglers. Finally, once all available resources are used (48 threads), the two strategies give the same result (the data-parallel partitioning scheme is sketched below).
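A minimal sketch of the data-parallel partitioning used in the spirit of these experiments: the outer relation is split into contiguous ranges, one per worker thread. Thread-to-core pinning (e.g., via pthread_setaffinity_np on Linux) is omitted; names are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Partition R[0, nR) across nThreads workers; joinRange(lo, hi) joins
// R[lo, hi) against all of S. With 28 partitions on 24 physical cores,
// 4 partitions straggle, as observed in Figure 11.
void parallel_nlj(std::size_t nR, std::size_t nThreads,
                  const std::function<void(std::size_t, std::size_t)>& joinRange) {
    std::vector<std::thread> pool;
    const std::size_t chunk = (nR + nThreads - 1) / nThreads;
    for (std::size_t t = 0; t < nThreads; ++t) {
        const std::size_t lo = t * chunk;
        const std::size_t hi = std::min(nR, lo + chunk);
        if (lo < hi) pool.emplace_back(joinRange, lo, hi);
    }
    for (auto& th : pool) th.join();   // barrier: all partitions complete
}
```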
Figure 12: Optimized NLJ formulation with varying input relation sizes ($10^{8}$, $10^{9}$, and $10^{10}$ operations), 100-D vectors, 48 threads.

Finally, we evaluate the impact of different input relation sizes (in tuples) over 100-D, 32-bit embeddings and 48 threads, and investigate the effect of the physical and logical optimizations on the NLJ formulation. In this experiment (Figure 12), we investigate the effects of input sizes, number of computations, and ordering of input relations in the context-enhanced NLJ implementation. First, the execution time scales linearly with the number of computations/operations performed, in accordance with the cost model (Equation: $\mathcal{E}$-NLJ Prefetch Optimization). Second, we validate that, to benefit from cache locality, the smaller relation should still be the inner loop, as in the traditional nested-loop join, even when 100-D vectors are used for the cosine distance computation. Despite more expensive per-vector computations, data access patterns still play an important role, impacting performance in our experiment by up to $\sim$35% (at $10^{10}$ operations).

Takeaway. Logical and physical optimizations of the NLJ formulation with vectors reduce the overheads by orders of magnitude relative to the initial vector join extension. Still, the approaches proposed until now optimize for vector execution without explicitly considering the high dimensionality and similarity operations over individual tuples. The tensor formulation, which we evaluate next, addresses this issue.

### VI-D Tensor Formulation: The Holistic Vector-Join Optimization

Figure 13: Physical optimization. The tensor strategy (green) pays off in larger inputs compared to NLJ (blue).

Instead of applying optimized computation to individual vector operations in the NLJ, in Figure 6 we propose batching multiple vector tuples together in a tensor join formulation using optimized matrix computation. The key enabler and difference is that BLAS matrix operations are highly optimized for cache locality, which the simple NLJ formulation cannot exploit (a sketch of this BLAS-based formulation follows below).
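To make the tensor formulation concrete, the following is a minimal sketch of the join core, assuming L2-normalized input rows and a linked BLAS implementation; function and parameter names are illustrative, not our system's interface.

```cpp
#include <cblas.h>   // CBLAS interface (e.g., OpenBLAS; Intel MKL ships mkl_cblas.h)
#include <cstddef>
#include <utility>
#include <vector>

// Tensor-join core: with the rows of A (nA x d) and B (nB x d) L2-normalized,
// all pairwise cosine similarities form the dense product C = A * B^T,
// computed by a single cache-optimized SGEMM instead of nA*nB vector kernels.
void tensor_join(const float* A, const float* B, int nA, int nB, int d,
                 float theta, std::vector<std::pair<int, int>>& out) {
    std::vector<float> C(static_cast<std::size_t>(nA) * nB);  // |A| x |B| buffer
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                nA, nB, d, 1.0f, A, d, B, d, 0.0f, C.data(), nB);
    for (int i = 0; i < nA; ++i)
        for (int j = 0; j < nB; ++j)      // prune: keep only offsets whose
            if (1.0f - C[static_cast<std::size_t>(i) * nB + j] <= theta)
                out.emplace_back(i, j);   // cosine distance passes the threshold
}
```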
We evaluate this physical optimization proposed in Section V, asking whether the tensor formulation improves the per-vector-element processing time. We compare two strategies, running the fully optimized NLJ against the tensor formulation. For this, we vary two factors: the total number of floating-point numbers processed (#FP32 Ops) and how many floating-point numbers represent an individual vector (vector dimensionality, Vector #FP32). Figure 13 summarizes the findings, where three data clusters are based on the number of operations, refined by individual vector size. In other words, for the 25600 case with dimensionality 1, there are $25600/1$ tuples joined, equally balanced between the two relations, indicating $\sqrt{25600/1}=160$ tuples per input relation. Similarly, the number of tuples for the case of dimensionality 256 is $\sqrt{25600/256}=10$. We use the per-FP32 breakdown as a unifying metric across input size and dimensionality.

First, we notice the benefit of vectorization with increased vector size, where specialized hardware operations improve per-tuple performance. Second, pushing this boundary beyond a per-tuple vector to a whole batch of tuple vectors (Tensor) significantly improves execution time once there is sufficient computation to benefit from cache locality. In particular, the Tensor approach was slower only when there were merely $\sqrt{25600/64}=20$ or $\sqrt{25600/256}=10$ tuples to join.

Figure 14: The impact of vector batching. Non-batched indicates that one of the join inputs is processed one vector at a time.

Batching the vectors together in the tensor formulation is the key to reducing unnecessary data movement. We demonstrate the impact of batching in Figure 14, where the BLAS matrix operations are used with one fully batched relation while the other is loaded vector-by-vector, repeated as many times as there are tuples; the alternative is to fully batch both relations. While the inefficiencies are not noticeable with very small input sizes, batching becomes increasingly significant for scalability as the input grows.

Figure 15: Batch size impact on memory requirements and execution time. 100k x 100k, 100-D input (No Batch case).

Figure 16: Fine-grained batch size impact on memory requirements and execution time. 10k x 10k, 100-D (No Batch case).

Still, as explained in Subsection V-B, batching too many vectors together into large tensors at once comes at a prohibitive memory cost. We propose using mini-batches partitioned across tuple boundaries (Figure 7) that can still benefit from the improved linear algebra algorithms and data locality (a mini-batching sketch follows below). The impact of batching is presented in Figure 15. We run the tensor join formulation over a 100k x 100k, 100-D input using 48 threads. The No Batch case runs the join on the whole input at once, while the experiment measures the memory footprint reduction and the computational price paid when mini-batches of various sizes are used. While there is a negligible relative slowdown due to some added data movement and repeated operations, there is a significant benefit from the reduction of the necessary memory. However, taking mini-batching to extremely small sizes degenerates to NLJ-formulation performance, where the computation boundary is an individual vector. In such cases, the slowdown scales along with the reduced memory requirement, as shown in Figure 16, since the available computational resources (SIMD, cores) remain idle and explicit data movement becomes predominant.
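A minimal sketch of the mini-batched variant, reusing the hypothetical tensor_join above; the batch bounds bA and bB would be derived from the available buffer budget.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Mini-batched tensor join (cf. Figure 7): partition both inputs along tuple
// boundaries so the intermediate matrix never exceeds bA x bB floats, trading
// a bounded buffer for repeated SGEMM invocations.
void tensor_join_batched(const float* A, const float* B, int nA, int nB, int d,
                         int bA, int bB, float theta,
                         std::vector<std::pair<int, int>>& out) {
    for (int i0 = 0; i0 < nA; i0 += bA)
        for (int j0 = 0; j0 < nB; j0 += bB) {
            const int rows = std::min(bA, nA - i0);
            const int cols = std::min(bB, nB - j0);
            std::vector<std::pair<int, int>> part;
            tensor_join(A + static_cast<std::size_t>(i0) * d,
                        B + static_cast<std::size_t>(j0) * d,
                        rows, cols, d, theta, part);
            for (const auto& p : part)    // translate block-local offsets back
                out.emplace_back(i0 + p.first, j0 + p.second);
        }
}
```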
Figure 17: Tensor join vs. NLJ formulation, 100-D, 48 threads (runs exceeding 40 minutes timed out).

Finally, we compare the NLJ and tensor formulations' end-to-end execution times in Figure 17. While the execution time of both algorithms scales approximately linearly when increasing the input relation size, batching vectors into tensors unlocks linear-algebra-based optimizations, enabling further gains of almost an order of magnitude across various input sizes.

Takeaway. Holistic optimization of the join algorithm with vector inputs is necessary to enable fast and efficient computation. This entails removing model-operator overheads and tuning individual and batched vector computation and access patterns to the underlying hardware capabilities.

## VII Related Work

This section outlines related work and places our approach within the rich design space of prior research.

### VII-A Machine Learning for Databases

Machine learning for databases is a research area in which structural components of a DBMS are enhanced using findings from the ML community. Learned indexes [37] avoid data structure traversal by learning the data distribution and optimizing data access. From a systems perspective, using tensor processing frameworks for traditional relational processing has also recently been proposed [20]. Our approach is similar in spirit: we propose using ML embedding models to provide context for data traditionally opaque to a relational DBMS, together with a general framework for extending and analyzing relational operators through novel model-database interactions.

### VII-B Databases for Machine Learning

On the other hand, databases for machine learning focus on applying or integrating machine learning components within systems. Frameworks such as Tensorflow [6] or Pytorch [7] are efficient, hardware-conscious dataflow engines. There has been recent work on developing vector-specialized databases [19], typically based on indexes for efficient high-dimensional similarity search [31]. Still, such engines often lack a DBMS's expressiveness, functionality, and analytical operations for more complex data analysis, which forces users to write imperative code to integrate siloed system components.

### VII-C String Similarity Joins

We have proposed a join operator that functionally resembles a string similarity join. Traditional string similarity techniques require an exact similarity specification, using edit distance or token-based measures such as q-grams [38]. Similarly, locality-sensitive hashing techniques [39] exist for approximate joins. These approaches are adequate for finding misspellings and limited token-based differences. In contrast, we propose using word embedding models capable of identifying misspellings, different tenses, and semantic similarity, depending on the training dataset and fine-tuning parameters [25, 26]. Through the separation of concerns, the string similarity join appears to the DBMS as a tensor-based input with cosine distance and a threshold as parameters, while the embedding model handles the string semantics and context and transforms the input into context-free embeddings for the RDBMS to process. Furthermore, our approach extends the notion of similarity to context-rich data for which embedding models trained for the desired similarity semantics exist.

### VII-D Representation Learning

The significant body of work in representation learning is the key enabler of context-rich relational operators.
It allows for transforming human-centric, context-rich data representations into machine-centric formats amenable to automated processing. We combine ML-based embedding models with relational operators and analyze the end-to-end interactions, from logical to physical optimizations. Such models allow masking and transforming contextual data into embeddings as a ubiquitous data representation that can be processed by extending the RDBMS with tensor-based operators. A rich research area in machine learning drives embedding models that support context-rich data formats beyond strings, equally transforming the input into context-free embeddings. Enabling multi-modality through model-operator interactions is a topic of future research, where models such as ResNet [3] can be used for images or PANNs [23] for audio processing. Models trained on web-scale data exist as foundation models [4] that can be retrained and adapted to a specific task and dataset.

## VIII Conclusion and Future Directions

Data management systems support analysts with modern data processing tools. Despite strong results from the machine learning community in automating context-rich processing, such individual components would require imperative integration into complex analytical data pipelines. We instead propose a context-rich join operation that integrates embedding processing with relational operators, based on the key observation of the separation of concerns: embedding models are designed to handle context, while the RDBMS provides a declarative, context-free interface based on an extended relational algebra that allows logical and physical optimizations of the interactions between operators and models. With the common tensor data representation, we analyze the behavior of a join operator, propose relational algebra extensions, and introduce logical and physical optimizations for efficient execution. We evaluate our cost model and show the impact of holistic optimizations on execution time, with orders-of-magnitude differences across logical, physical, and hardware optimizations.

## References

* [1] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 4171–4186. [Online]. Available: https://doi.org/10.18653/v1/n19-1423
* [2] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot learners,” in _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_ , H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020.
* [3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016_.
IEEE Computer Society, 2016, pp. 770–778. [Online]. Available: https://doi.org/10.1109/CVPR.2016.90 * [4] R. Bommasani, D. A. Hudson, E. Adeli, R. B. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, E. Brynjolfsson, S. Buch, D. Card, R. Castellon, N. S. Chatterji, A. S. Chen, K. Creel, J. Q. Davis, D. Demszky, C. Donahue, M. Doumbouya, E. Durmus, S. Ermon, J. Etchemendy, K. Ethayarajh, L. Fei-Fei, C. Finn, T. Gale, L. Gillespie, K. Goel, N. D. Goodman, S. Grossman, N. Guha, T. Hashimoto, P. Henderson, J. Hewitt, D. E. Ho, J. Hong, K. Hsu, J. Huang, T. Icard, S. Jain, D. Jurafsky, P. Kalluri, S. Karamcheti, G. Keeling, F. Khani, O. Khattab, P. W. Koh, M. S. Krass, R. Krishna, R. Kuditipudi, and et al., “On the opportunities and risks of foundation models,” _CoRR_ , vol. abs/2108.07258, 2021. [Online]. Available: https://arxiv.org/abs/2108.07258 * [5] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998–6008. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html * [6] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard _et al._ , “Tensorflow: a system for large-scale machine learning.” in _Osdi_ , vol. 16, no. 2016. Savannah, GA, USA, 2016, pp. 265–283. * [7] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga _et al._ , “Pytorch: An imperative style, high-performance deep learning library,” _Advances in neural information processing systems_ , vol. 32, 2019. * [8] P. E. O’Neil, E. J. O’Neil, and X. Chen, “The star schema benchmark (ssb),” _Pat_ , vol. 200, no. 0, p. 50, 2007. * [9] V. Sanca and A. Ailamaki, “Analytical engines with context-rich processing: Towards efficient next-generation analytics,” in _39th IEEE International Conference on Data Engineering, ICDE 2023, Anaheim, CA, USA, April 3-7, 2023_. IEEE, 2023, pp. 3699–3707. [Online]. Available: https://doi.org/10.1109/ICDE55515.2023.00298 * [10] L. Zhang, M. Butrovich, T. Li, A. Pavlo, Y. Nannapaneni, J. Rollinson, H. Zhang, A. Balakumar, D. Biales, Z. Dong, E. J. Eppinger, J. E. Gonzalez, W. S. Lim, J. Liu, L. Ma, P. Menon, S. Mukherjee, T. Nayak, A. Ngom, D. Niu, D. Patra, P. Raj, S. Wang, W. Wang, Y. Yu, and W. Zhang, “Everything is a transaction: Unifying logical concurrency control and physical data structure maintenance in database management systems,” in _CIDR 2021, Conference on Innovative Data Systems Research_ , 2021. [Online]. Available: https://db.cs.cmu.edu/papers/2021/cidr2021_paper06.pdf * [11] A. Kemper and T. Neumann, “Hyper: A hybrid oltp&olap main memory database system based on virtual memory snapshots,” in _Proceedings of the 27th International Conference on Data Engineering, ICDE 2011, April 11-16, 2011, Hannover, Germany_ , S. Abiteboul, K. Böhm, C. Koch, and K. Tan, Eds. IEEE Computer Society, 2011, pp. 195–206. [Online]. Available: https://doi.org/10.1109/ICDE.2011.5767867 * [12] A. Pavlo, G. Angulo, J. Arulraj, H. Lin, J. Lin, L. Ma, P. Menon, T. Mowry, M. Perron, I. Quah, S. Santurkar, A. Tomasic, S. Toor, D. V. Aken, Z. Wang, Y. Wu, R. 
Xian, and T. Zhang, “Self-driving database management systems,” in _CIDR 2017, Conference on Innovative Data Systems Research_ , 2017. [Online]. Available: https://db.cs.cmu.edu/papers/2017/p42-pavlo-cidr17.pdf * [13] T. Neumann, “Efficiently compiling efficient query plans for modern hardware,” _Proc. VLDB Endow._ , vol. 4, no. 9, pp. 539–550, 2011. [Online]. Available: http://www.vldb.org/pvldb/vol4/p539-neumann.pdf * [14] P. Chrysogelos, M. Karpathiotakis, R. Appuswamy, and A. Ailamaki, “Hetexchange: Encapsulating heterogeneous CPU-GPU parallelism in JIT compiled engines,” _Proc. VLDB Endow._ , vol. 12, no. 5, pp. 544–556, 2019. [Online]. Available: http://www.vldb.org/pvldb/vol12/p544-chrysogelos.pdf * [15] T. Neumann and M. J. Freitag, “Umbra: A disk-based system with in-memory performance,” in _10th Conference on Innovative Data Systems Research, CIDR 2020, Amsterdam, The Netherlands, January 12-15, 2020, Online Proceedings_. www.cidrdb.org, 2020. [Online]. Available: http://cidrdb.org/cidr2020/papers/p29-neumann-cidr20.pdf * [16] T. Kersten, V. Leis, A. Kemper, T. Neumann, A. Pavlo, and P. A. Boncz, “Everything you always wanted to know about compiled and vectorized queries but were afraid to ask,” _Proc. VLDB Endow._ , vol. 11, no. 13, pp. 2209–2222, 2018. [Online]. Available: http://www.vldb.org/pvldb/vol11/p2209-kersten.pdf * [17] M. Zukowski, M. van de Wiel, and P. A. Boncz, “Vectorwise: A vectorized analytical DBMS,” in _IEEE 28th International Conference on Data Engineering (ICDE 2012), Washington, DC, USA (Arlington, Virginia), 1-5 April, 2012_ , A. Kementsietsidis and M. A. V. Salles, Eds. IEEE Computer Society, 2012, pp. 1349–1350. [Online]. Available: https://doi.org/10.1109/ICDE.2012.148 * [18] S. Idreos, K. Zoumpatianos, B. Hentschel, M. S. Kester, and D. Guo, “The data calculator: Data structure design and cost synthesis from first principles and learned cost models,” in _Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018_ , G. Das, C. M. Jermaine, and P. A. Bernstein, Eds. ACM, 2018, pp. 535–550. [Online]. Available: https://doi.org/10.1145/3183713.3199671 * [19] J. Johnson, M. Douze, and H. Jégou, “Billion-scale similarity search with gpus,” _IEEE Trans. Big Data_ , vol. 7, no. 3, pp. 535–547, 2021. [Online]. Available: https://doi.org/10.1109/TBDATA.2019.2921572 * [20] A. Gandhi, Y. Asada, V. Fu, A. Gemawat, L. Zhang, R. Sen, C. Curino, J. Camacho-Rodríguez, and M. Interlandi, “The tensor data platform: Towards an ai-centric database system,” 2023. [Online]. Available: https://www.cidrdb.org/cidr2023/papers/p68-gandhi.pdf * [21] Q. Lin, S. Wu, J. Zhao, J. Dai, F. Li, and G. Chen, “A comparative study of in-database inference approaches,” in _38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12, 2022_. IEEE, 2022, pp. 1794–1807. [Online]. Available: https://doi.org/10.1109/ICDE53745.2022.00180 * [22] J. M. Hellerstein, C. Ré, F. Schoppmann, D. Z. Wang, E. Fratkin, A. Gorajek, K. S. Ng, C. Welton, X. Feng, K. Li, and A. Kumar, “The madlib analytics library or MAD skills, the SQL,” _Proc. VLDB Endow._ , vol. 5, no. 12, pp. 1700–1711, 2012. [Online]. Available: http://vldb.org/pvldb/vol5/p1700_joehellerstein_vldb2012.pdf * [23] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley, “Panns: Large-scale pretrained audio neural networks for audio pattern recognition,” _IEEE ACM Trans. Audio Speech Lang. Process._ , vol. 28, pp. 
2880–2894, 2020. [Online]. Available: https://doi.org/10.1109/TASLP.2020.3030497 * [24] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” in _1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings_ , Y. Bengio and Y. LeCun, Eds., 2013. [Online]. Available: http://arxiv.org/abs/1301.3781 * [25] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov, “Enriching word vectors with subword information,” _Trans. Assoc. Comput. Linguistics_ , vol. 5, pp. 135–146, 2017. [Online]. Available: https://doi.org/10.1162/tacl_a_00051 * [26] B. Edizel, A. Piktus, P. Bojanowski, R. Ferreira, E. Grave, and F. Silvestri, “Misspelling oblivious word embeddings,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 3226–3234. [Online]. Available: https://doi.org/10.18653/v1/n19-1326 * [27] Y. Qi, D. S. Sachan, M. Felix, S. Padmanabhan, and G. Neubig, “When and why are pre-trained word embeddings useful for neural machine translation?” in _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers)_ , M. A. Walker, H. Ji, and A. Stent, Eds. Association for Computational Linguistics, 2018, pp. 529–535. [Online]. Available: https://doi.org/10.18653/v1/n18-2084 * [28] K. B. Petersen and M. S. Pedersen, “The matrix cookbook,” nov 2012, version 20121115. [Online]. Available: http://www2.compute.dtu.dk/pubdb/pubs/3274-full.html * [29] D. He, S. C. Nakandala, D. Banda, R. Sen, K. Saur, K. Park, C. Curino, J. Camacho-Rodríguez, K. Karanasos, and M. Interlandi, “Query processing on tensor computation runtimes,” _Proc. VLDB Endow._ , vol. 15, no. 11, pp. 2811–2825, 2022. [Online]. Available: https://www.vldb.org/pvldb/vol15/p2811-he.pdf * [30] V. Sanca and A. Ailamaki, “Post-moore’s law fusion: High-bandwidth memory, accelerators, and native half-precision processing for cpu-local analytics,” in _Joint Proceedings of Workshops at the 49th International Conference on Very Large Data Bases (VLDB 2023), Vancouver, Canada, August 28 - September 1, 2023_ , ser. CEUR Workshop Proceedings, R. Bordawekar, C. Cappiello, V. Efthymiou, L. Ehrlinger, V. Gadepally, S. Galhotra, S. Geisler, S. Groppe, L. Gruenwald, A. Y. Halevy, H. Harmouch, O. Hassanzadeh, I. F. Ilyas, E. Jiménez-Ruiz, S. Krishnan, T. Lahiri, G. Li, J. Lu, W. Mauerer, U. F. Minhas, F. Naumann, M. T. Özsu, E. K. Rezig, K. Srinivas, M. Stonebraker, S. R. Valluri, M. Vidal, H. Wang, J. Wang, Y. Wu, X. Xue, M. Zaït, and K. Zeng, Eds., vol. 3462. CEUR-WS.org, 2023. [Online]. Available: https://ceur-ws.org/Vol-3462/ADMS1.pdf * [31] J. Wang, X. Yi, R. Guo, H. Jin, P. Xu, S. Li, X. Wang, X. Guo, C. Li, X. Xu, K. Yu, Y. Yuan, Y. Zou, J. Long, Y. Cai, Z. Li, Z. Zhang, Y. Mo, J. Gu, R. Jiang, Y. Wei, and C. Xie, “Milvus: A purpose-built vector data management system,” in _SIGMOD ’21: International Conference on Management of Data, Virtual Event, China, June 20-25, 2021_ , G. Li, Z. Li, S. Idreos, and D. Srivastava, Eds. ACM, 2021, pp. 2614–2627. [Online]. 
Available: https://doi.org/10.1145/3448016.3457550 * [32] K. Goto and R. A. van de Geijn, “Anatomy of high-performance matrix multiplication,” _ACM Trans. Math. Softw._ , vol. 34, no. 3, pp. 12:1–12:25, 2008. [Online]. Available: https://doi.org/10.1145/1356052.1356053 * [33] T. M. Smith, R. A. van de Geijn, M. Smelyanskiy, J. R. Hammond, and F. G. V. Zee, “Anatomy of high-performance many-threaded matrix multiplication,” in _2014 IEEE 28th International Parallel and Distributed Processing Symposium, Phoenix, AZ, USA, May 19-23, 2014_. IEEE Computer Society, 2014, pp. 1049–1059. [Online]. Available: https://doi.org/10.1109/IPDPS.2014.110 * [34] “Intel® avx-512 - fp16 instruction set for intel® xeon® processor based products technology guide.” [Online]. Available: https://networkbuilders.intel.com/solutionslibrary/intel-avx-512-fp16-instruction-set-for-intel-xeon-processor-based-products-technology-guide * [35] N. Nassif, A. O. Munch, C. L. Molnar, G. Pasdast, S. V. Lyer, Z. Yang, O. Mendoza, M. Huddart, S. Venkataraman, S. Kandula _et al._ , “Sapphire rapids: The next-generation intel xeon scalable processor,” in _2022 IEEE International Solid-State Circuits Conference (ISSCC)_ , vol. 65. IEEE, 2022, pp. 44–46. * [36] “Wikidata.” [Online]. Available: https://www.wikidata.org/ * [37] T. Kraska, A. Beutel, E. H. Chi, J. Dean, and N. Polyzotis, “The case for learned index structures,” in _Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018_ , G. Das, C. M. Jermaine, and P. A. Bernstein, Eds. ACM, 2018, pp. 489–504. [Online]. Available: https://doi.org/10.1145/3183713.3196909 * [38] L. Gravano, P. G. Ipeirotis, H. V. Jagadish, N. Koudas, S. Muthukrishnan, and D. Srivastava, “Approximate string joins in a database (almost) for free,” in _Proceedings of the 27th International Conference on Very Large Data Bases_ , ser. VLDB ’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2001, p. 491–500. * [39] H. Zhang and Q. Zhang, “Minjoin: Efficient edit similarity joins via local hash minima,” in _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , ser. KDD ’19. New York, NY, USA: Association for Computing Machinery, 2019, p. 1093–1103. [Online]. Available: https://doi.org/10.1145/3292500.3330853
Michiel H.J. Paus, Department of Mathematics and Computer Science, Eindhoven University of Technology, the Netherlands (e-mail: <EMAIL_ADDRESS>)

Edwin R. van den Heuvel, Department of Mathematics and Computer Science, Eindhoven University of Technology, the Netherlands (e-mail: <EMAIL_ADDRESS>)

Marc J.M. Meddens, Brainscan BV, Deventer, the Netherlands (e-mail: <EMAIL_ADDRESS>)

# Binary disease prediction using tail quantiles of the distribution of continuous biomarkers

Michiel H.J. Paus · Edwin R. van den Heuvel · Marc J.M. Meddens

###### Abstract

In the analysis of binary disease classification, single biomarkers might not have significant discriminating power, and multiple biomarkers from a large set of biomarkers should be selected. Many different approaches exist, but they mainly work well for mean differences in biomarkers between cases and controls. Biological processes are, however, much more heterogeneous, and differences between cases and controls could also occur in other distributional characteristics (e.g. variances, skewness). Many machine learning techniques are better capable of utilizing these higher-order distributional differences, sometimes at the cost of explainability. In this study we propose quantile based prediction (QBP), a binary classification method that is based on the selection of multiple continuous biomarkers. It can be considered a hybrid technique, with the flexibility of a machine learning algorithm and the ability to select relevant features like classical statistical techniques. QBP generates a single score using the tails of the biomarker distributions for cases and controls. This single score can then be evaluated by receiver operating characteristic (ROC) analysis to investigate its predictive power. The performance of QBP is compared to supervised learning methods using extensive simulation studies and two case studies: major depressive disorder (MDD) and trisomy. Simultaneously, the classification performance of the existing techniques in relation to each other is assessed. The key strengths of QBP are the opportunity to select relevant biomarkers and the outstanding classification performance when biomarkers predominantly show variance differences between cases and controls, as demonstrated in the simulation study. When only shifts in means were present in the biomarkers, QBP obtained an inferior performance. Lastly, QBP proved to be unbiased in the absence of disease-relevant biomarkers and outperformed the other methods on the MDD case study. More research is needed to further optimize QBP, since it has several opportunities to improve its performance. Here we introduce the principle of QBP and show its potential.

###### Keywords: Quantile based prediction (QBP) · Binary classification · Logistic regression · Random Forest · XGBoost · Regularization · Feature selection · Discriminant analysis

## 1 Introduction

Biomarker research has grown rapidly due to the development of new molecular biotechnologies (27). A biomarker is defined as 'any substance, structure, or process that can be measured in the body or its products and influence or predict the incidence of outcome or disease' (26).
Biomarkers are developed for many different purposes: classification and prediction of diseases, as surrogate outcomes in clinical trials, as measures of toxic or preventive exposures, or as a guide to individual treatment choice (16). For the classification and prediction of diseases, single biomarkers often do not have sufficient discriminating power to separate cases from controls (6, 19, 20). When analyzing multiple biomarkers simultaneously, models might become harder to interpret, but could also face the problem of high dimensionality with respect to the available number of observations. Firstly, to enhance the transparency of classification or prediction models with numerous biomarkers, insight into the selected features and their importance is crucial. Whereas classical statistical techniques hold the possibility to perform in-depth inference on the relations present, many machine learning techniques do not allow a similar degree of interpretability. Secondly, when the number of biomarkers $p$ exceeds the number of observations $n$ ($p>n$ or $p\gg n$), it is key to reduce the dimensionality of the data and to select a sparse set of biomarkers with high discriminant power that can be used to produce reliable predictions.

Binary classification methods that reduce the dimensionality of the input variables can be categorized based on the relations between the original and the new input variables (22): (i) dimension reduction methods, which construct new input variables using linear combinations of all input variables (e.g. partial least squares (PLS) and principal component analysis (PCA)); (ii) feature selection methods, which select a subset of the original input variables; examples include likelihood-based approaches for parametric models, such as penalized logistic regression (PLR) and linear discriminant analysis (LDA) by optimal scoring; and (iii) hybrid methods combining (i) and (ii). These traditional methods focus mainly on mean differences of the biomarker distributions between cases and controls. However, differences may occur elsewhere, since a disease may affect the variation, skewness and kurtosis of the biomarker distribution (21). Over time, a wide array of classification-tree-based techniques has been developed, from individual trees (CART) to ensembles of trees with various modifications, such as different sampling strategies like bootstrapping (Random Forest) or boosting (AdaBoost, XGBoost). Other machine learning techniques for classification include support vector machines (SVM) and the k-nearest neighbors (kNN) algorithm, which does not require a model to be fit (14).

In this paper we introduce a new approach for binary classification that takes advantage of tail differences in the biomarker distributions between cases and controls. The performance of this new method is compared with various traditional binary classification methods and machine learning techniques using simulation studies and two case studies. Logistic regression is applied with and without penalization. The selected penalty functions are the lasso (35), elastic net (41) and ridge (17). Alternatively, to address multicollinearity among the predictors, principal component logistic regression (PCLR) is included in the analysis (1).
Next to these LR-based methods, LDA and PLS combined with LDA (abbreviated PLS-LDA) were used (23). The considered machine learning techniques include SVM, kNN, random forest (RF) and extreme gradient boosting (XGBoost).

The first case study describes data on patients with major depressive disorder (MDD), a disease with a lifetime prevalence of around 15%. It is a major cause of disability in the Western world (5, 33), and the prediction of MDD with biomarkers can help physicians diagnose MDD better. The second case study is an ongoing Dutch population study on the prevalence of trisomy 13, 18 and 21, containing 4894 observations.

In this paper, the receiver operating characteristic (ROC) curve approach is used to derive the classification performance for cases and controls. Specifically, we measure the area under the ROC curve (AUC). The AUC is a variant of the concordance ($c$) statistic for binary outcomes that indicates the discriminative ability of a generalized linear model (34). Advantages of this non-parametric statistic are that it does not depend on a decision threshold and that it indicates how well the negative and positive classes are separated (4) (a direct computation of this statistic is sketched below).

To assess the predictive performance of all methods in terms of AUC, we use different cross-validation strategies. For the simulation scenarios we apply k-fold cross-validation (CV) on the training dataset to determine the set of tunable parameters with the highest average AUC over all k folds. This set of parameters is used on an independently simulated validation dataset with 5000 observations to obtain a reliable estimate of the true prediction performance. In the case studies we apply repeated double cross-validation (rdCV). This strategy, which is suitable for small datasets, selects the optimal parameters based on multiple repetitions instead of a single double cross-validation, which can be optimistic or pessimistic (12). Here, double (k-fold) cross-validation (dCV) is preferred over single k-fold CV, Monte Carlo CV (MCCV) and leave-one-out CV (LOOCV): primarily because dCV can simultaneously provide an estimate of the prediction error and the tunable parameters, whereas single k-fold cross-validation achieves only one of these goals (32), and secondly because dCV has a reduced computational complexity compared to LOOCV.
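As a reference for the evaluation metric used throughout, the following is a minimal sketch of the AUC computed as the Mann-Whitney concordance statistic (in C++ for concreteness); it is an illustration, not the implementation used in our analyses.

```cpp
#include <cstddef>
#include <vector>

// AUC as the Mann-Whitney (concordance) statistic: the proportion of
// (case, control) pairs in which the case receives the higher score;
// ties count one half. O(n^2) for clarity; a rank-based O(n log n)
// variant is preferable for large n.
double auc(const std::vector<double>& score, const std::vector<int>& y) {
    double concordant = 0.0;
    long long pairs = 0;
    for (std::size_t i = 0; i < y.size(); ++i) {
        if (y[i] != 1) continue;                  // i ranges over cases
        for (std::size_t j = 0; j < y.size(); ++j) {
            if (y[j] != 0) continue;              // j ranges over controls
            ++pairs;
            if (score[i] > score[j])       concordant += 1.0;
            else if (score[i] == score[j]) concordant += 0.5;
        }
    }
    return pairs > 0 ? concordant / pairs : 0.5;  // 0.5 = no discrimination
}
```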
The remainder of this article is structured as follows. In the next section, the proposed and the selected traditional classification methods are formulated mathematically, and the applied performance measures and cross-validation techniques are described. The section 'Simulation study' provides a detailed description of the design of the simulation study, followed by the corresponding results. In the section 'Case studies', the major depressive disorder (MDD) dataset and the trisomy dataset are presented; we first describe the design of the study and then present the results of the different prediction methods. The last section contains the discussion.

## 2 Methods

In this section we assume that $y_{i}$ denotes the group (or disease) indicator for subject $i=1,\ldots,n$, with $y_{i}=0$ a healthy control and $y_{i}=1$ a case. The (continuous) value of the $k^{\text{th}}$ biomarker for subject $i$ is denoted by $x_{i,k}$, where $k=1,\ldots,r$ and $r$ is the number of observed biomarkers.

### 2.1 Quantile based prediction

Quantile based prediction (QBP) is a binary prediction method for continuous biomarkers that uses the left and right tails of the empirical biomarker distributions of two groups to discriminate between cases and controls. QBP is able to discriminate when the tails of the two groups are shifted with respect to each other (irrespective of mean differences or the remainder of the distribution). The stronger the shift in the tails of a biomarker, the more likely it is that this shift is due to the disease. By combining multiple biomarkers, a subject's total disease score can be constructed. This disease score represents a likelihood of being a case or a control. The remainder of this section follows the structure of QBP, which distinguishes the definition of its characteristics, the scoring mechanism based on these characteristics, and the attribution of scores to individual subjects. An artificial example of a single biomarker $k$ is presented to illustrate the construction of the QBP characteristics (Figure 1 and Table 1) and the scoring mechanism (Table 2). Lastly, the arbitrary situation in Table 3 exemplifies the attribution of scores to a set of individuals in the case of multiple biomarkers.

#### 2.1.1 QBP characteristics

Figure 1: Illustration of QBP characteristics on data of a single biomarker $k$ (index $k$ suppressed)

Table 1: QBP characteristics on an arbitrary example using three ($m=2$) proportions per tail ($p_{L}=(p_{L_{0}},p_{L_{1}},p_{L_{2}})=(0.1,0.05,0.01)$ and $p_{R}=(p_{R_{0}},p_{R_{1}},p_{R_{2}})=(0.9,0.95,0.99)$) for a single biomarker $k$ (index $k$ suppressed). Note that $D_{L}=1$ and $D_{R}=0$.

| | $q_{p_{L_{2}}}$ | $q_{p_{L_{1}}}$ | $q_{p_{L_{0}}}$ | | $q_{p_{R_{0}}}$ | $q_{p_{R_{1}}}$ | $q_{p_{R_{2}}}$
---|---|---|---|---|---|---|---
Percentiles ($y_{i}=0$) | 273 | 372 | 424 | | 796 | 849 | 947
Percentiles ($y_{i}=1$) | 357 | 380 | 396 | | 644 | 713 | 880
Predominant group | | | $D_{L}=1$ | | $D_{R}=0$ | |
Cutpoints | $C_{p_{L_{2}}}=273$ | $C_{p_{L_{1}}}=372$ | $C_{p_{L_{0}}}=424$ | | $C_{p_{R_{0}}}=644$ | $C_{p_{R_{1}}}=713$ | $C_{p_{R_{2}}}=880$
Tail area ($y_{i}=0$): $F_{(0)}(C_{p_{L_{s}}})$, resp. $1-F_{(0)}(C_{p_{R_{s}}})$ | 0.01 | 0.05 | 0.1 | | 0.407 | 0.240 | 0.03
Tail area ($y_{i}=1$): $F_{(1)}(C_{p_{L_{s}}})$, resp. $1-F_{(1)}(C_{p_{R_{s}}})$ | 0.00 | 0.031 | 0.225 | | 0.1 | 0.05 | 0.01
Exceedratio | $R_{p_{L_{2}}}=0$ | $R_{p_{L_{1}}}=0.62$ | $R_{p_{L_{0}}}=2.25$ | | $R_{p_{R_{0}}}=4.07$ | $R_{p_{R_{1}}}=4.8$ | $R_{p_{R_{2}}}=3$
Intervals | $I_{L_{3}}=(-\infty,273]$ | $I_{L_{2}}=(273,372]$ | $I_{L_{1}}=(372,424]$ | $I_{0}=(424,644)$ | $I_{R_{1}}=[644,713)$ | $I_{R_{2}}=[713,880)$ | $I_{R_{3}}=[880,\infty)$

The first step is to select a quantile (or percentile) $q_{p}$ with corresponding proportion $p$. For the left tail we select a percentile with proportion $p_{L_{0}}<0.5$, and for the right tail a percentile with proportion $p_{R_{0}}>0.5$. Without loss of generality, we select the tail proportion $p_{R_{0}}$ based on symmetry, such that $p_{R_{0}}=1-p_{L_{0}}$. The corresponding percentiles of the controls and the cases for each biomarker $k$ are used to determine the predominant group in the left tail, $D_{L,k}\in\{0,1\}$, and in the right tail, $D_{R,k}\in\{0,1\}$.
For each biomarker this is defined by

$\displaystyle D_{L,k}=\begin{cases}0&\text{ if }q_{p_{L_{0}},k}^{(0)}<q_{p_{L_{0}},k}^{(1)}\\ 1&\text{ if }q_{p_{L_{0}},k}^{(0)}>q_{p_{L_{0}},k}^{(1)}\\ \text{NA}&\text{ if }q_{p_{L_{0}},k}^{(0)}=q_{p_{L_{0}},k}^{(1)}\end{cases},\qquad D_{R,k}=\begin{cases}0&\text{ if }q_{p_{R_{0}},k}^{(0)}>q_{p_{R_{0}},k}^{(1)}\\ 1&\text{ if }q_{p_{R_{0}},k}^{(0)}<q_{p_{R_{0}},k}^{(1)}\\ \text{NA}&\text{ if }q_{p_{R_{0}},k}^{(0)}=q_{p_{R_{0}},k}^{(1)}\end{cases},$ (1)

with $q_{p,k}^{(0)}$ and $q_{p,k}^{(1)}$ the $p^{\text{th}}$ percentile ($p\in\{p_{L_{0}},p_{R_{0}}\}$) of group 0 (healthy controls) and group 1 (cases) of biomarker $k$, respectively. Thus the predominant group has its percentile at proportion $p_{L_{0}}$ or $p_{R_{0}}$ more extreme than the other group. For example, in the illustration of QBP in Figure 1, the control group ($y_{i}=0$) is predominant in the right tail and the case group ($y_{i}=1$) is predominant in the left tail.

In the second step, the tails of the biomarkers that have a predominant group are included in the discrimination of the groups using scores. Tails having no predominant group ($D_{L,k}=\text{NA}$ or $D_{R,k}=\text{NA}$) are eliminated from the discrimination of the groups by attributing a neutral score (value 0).

The third step is to define $m$ additional percentiles that are located further in the tail. The left and right tail now each contain $m+1$ percentiles, with proportions $p_{L}=(p_{L_{0}},p_{L_{1}},\ldots,p_{L_{m}})$ in the left tail ($p_{L_{s-1}}>p_{L_{s}}$) and $p_{R}=(p_{R_{0}},p_{R_{1}},\ldots,p_{R_{m}})$ in the right tail. Again, without loss of generality, we use symmetry of the tails and take $p_{R_{s}}=1-p_{L_{s}}$. The cutpoints $C_{p,k}$ on biomarker $k$ for proportions $p\in\{p_{L},p_{R}\}$ are defined by the quantiles of the non-predominant group. In particular, for $s=0,\ldots,m$

$\displaystyle C_{p_{L_{s}},k}=q_{p_{L_{s}},k}^{(1-D_{L,k})},\qquad C_{p_{R_{s}},k}=q_{p_{R_{s}},k}^{(1-D_{R,k})}.$ (2)

With these cutpoints, we define $m+1$ intervals per tail that will later be used to attribute scores to subjects. We define the intervals as follows

$\displaystyle I_{L_{s},k}=(C_{p_{L_{s}},k},C_{p_{L_{s-1}},k}],\qquad I_{0,k}=(C_{p_{L_{0}},k},C_{p_{R_{0}},k}),\qquad I_{R_{s},k}=[C_{p_{R_{s-1}},k},C_{p_{R_{s}},k})$ (3)

with $s=1,\ldots,m+1$, $C_{p_{L_{m+1}},k}=-\infty$ and $C_{p_{R_{m+1}},k}=\infty$, so that the example in Table 1 yields the intervals $I_{L_{3}},\ldots,I_{L_{1}},I_{0},I_{R_{1}},\ldots,I_{R_{3}}$. In Figure 1, the cutpoints and intervals of QBP are shown for an arbitrary biomarker.
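A minimal sketch of steps 1-3 for a single biomarker (in C++ for concreteness); the quantile estimator and the function names are illustrative choices, as the method does not prescribe a specific estimator.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Empirical p-th percentile via the order statistic x_(ceil(p*n)).
double quantile(std::vector<double> x, double p) {
    std::sort(x.begin(), x.end());
    std::size_t k = static_cast<std::size_t>(std::ceil(p * x.size()));
    k = std::max<std::size_t>(1, std::min(k, x.size()));
    return x[k - 1];
}

// Step 1 (Equation 1), left tail: the predominant group is the one whose
// p_{L0}-percentile lies further left; -1 encodes the NA case, in which the
// tail is excluded from scoring.
int predominant_left(const std::vector<double>& controls,
                     const std::vector<double>& cases, double pL0) {
    const double q0 = quantile(controls, pL0);
    const double q1 = quantile(cases, pL0);
    if (q0 < q1) return 0;   // controls predominant
    if (q0 > q1) return 1;   // cases predominant
    return -1;               // NA
}

// Step 3 (Equation 2): cutpoints come from the NON-predominant group.
double cutpoint_left(const std::vector<double>& controls,
                     const std::vector<double>& cases, double pLs, int DL) {
    return quantile(DL == 1 ? controls : cases, pLs);
}
```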
The fourth step is to determine the exceedratios $R_{p_{s},k}$ based on the cutpoints. An exceedratio is a measure of the relative difference in mass in the tails of the predominant and non-predominant group: the higher the exceedratio at a cutpoint, the higher the probability that a new subject falling in this tail belongs to the predominant group. Note that the predominant group may differ between the left and the right tail, and that the predominant group has more mass in the tail at $C_{p_{L_{0}},k}$ and $C_{p_{R_{0}},k}$ than the non-predominant group. Thus the exceedratio $R_{p_{0},k}$ is greater than 1 at the corresponding quantiles $q_{p_{L_{0}},k}^{(1-D_{L,k})}$ and $q_{p_{R_{0}},k}^{(1-D_{R,k})}$. However, this is not necessarily greater than 1 for the other percentiles further in the tails. For the left and the right tail, the exceedratio is defined by

$\displaystyle R_{p_{L_{s}},k}=F_{(D_{L,k},k)}(C_{p_{L_{s}},k})/p_{L_{s}},\qquad R_{p_{R_{s}},k}=\left(1-F_{(D_{R,k},k)}(C_{p_{R_{s}},k})\right)/(1-p_{R_{s}}),$ (4)

with $F_{(0,k)}$ and $F_{(1,k)}$ the empirical distribution functions of biomarker $k$ for the controls and the cases, respectively.

#### 2.1.2 Scoring mechanism

Aiming to discriminate cases from controls, we attribute the interval scores $V_{s,k}\in\{V_{0,k},V_{L_{s},k},V_{R_{s},k}\}$ to the intervals $I_{s,k}\in\{I_{0,k},I_{L_{s},k},I_{R_{s},k}\}$ defined in (3). The result of the scoring mechanism, as explained below, applied to the artificial example from Figure 1 is shown in Table 2.

Firstly, the predominant group in a tail determines the sign of the interval scores: negative signs correspond to predominance of the healthy control group ($D_{L,k}=0$ or $D_{R,k}=0$), positive signs to predominance of the cases ($D_{L,k}=1$ or $D_{R,k}=1$).

Secondly, to guarantee that the predominant group has more mass in the tail at a certain percentile than the non-predominant group, and therefore a certain discriminating power, we introduce lower boundaries $R^{*}=(R_{0}^{*},\ldots,R_{m}^{*})$ on the exceedratios in (4), with $R_{s}^{*}>1$ for all $s\in\{0,\ldots,m\}$. To indicate whether these lower boundaries, which we can choose ourselves, are met for biomarker $k$, we apply binary exceedscores for the left tail, $e_{L,k}=(e_{L_{0},k},\ldots,e_{L_{m},k})$, and the right tail, $e_{R,k}=(e_{R_{0},k},\ldots,e_{R_{m},k})$. Note that these can vary per tail (percentile) and biomarker, as can be seen in the artificial example in Table 2. The binary exceedscores are defined by

$\displaystyle e_{L_{s},k}=\mathbbm{1}(R_{p_{L_{s}},k}\geq R_{s}^{*}),\qquad e_{R_{s},k}=\mathbbm{1}(R_{p_{R_{s}},k}\geq R_{s}^{*}),$ (5)

for $s=0,\ldots,m$ and with $\mathbbm{1}(A)$ an indicator being $1$ if $A$ is true and zero otherwise. Note that for $s=1,\ldots,m+1$, the binary exceedscores $e_{L_{s-1},k}$ and $e_{R_{s-1},k}$ correspond to the intervals $I_{L_{s},k}$ and $I_{R_{s},k}$, respectively.

Thirdly, intending to put more emphasis on subjects having (extreme) values in the tails, we introduce maximal interval scores $v=(v_{1},\ldots,v_{m+1})$ such that $v_{1}\leq v_{2}\leq\ldots\leq v_{m+1}$. By combining these scores with the binary exceedscores, we ensure that scores are only assigned when a tail has a certain discriminating power. For $s=1,\ldots,m+1$ we obtain the interval scores

$\begin{split}V_{L_{s},k}&=(-1)^{(1-D_{L,k})}\cdot\max\{v_{1}\cdot e_{L_{0},k},\ldots,v_{s}\cdot e_{L_{s-1},k}\},\\ V_{R_{s},k}&=(-1)^{(1-D_{R,k})}\cdot\max\{v_{1}\cdot e_{R_{0},k},\ldots,v_{s}\cdot e_{R_{s-1},k}\}.\end{split}$ (6)

Note that for increasing $s$, the functions $\max\{v_{1}\cdot e_{L_{0},k},\ldots,v_{s}\cdot e_{L_{s-1},k}\}$ and $\max\{v_{1}\cdot e_{R_{0},k},\ldots,v_{s}\cdot e_{R_{s-1},k}\}$ are non-decreasing, and that the central interval $I_{0,k}$ always obtains a neutral score $V_{0,k}=0$.

Table 2: Scoring mechanism for the arbitrary example in Table 1 using lower boundaries for the exceedratios $R^{*}=(2,3,5)$ and maximal interval scores $v=(v_{1},v_{2},v_{3})=(1,2,3)$ for a single biomarker $k$ (index $k$ suppressed). Note that $D_{L}=1$ and $D_{R}=0$.
| | $R_{p_{L_{2}}}$ | $R_{p_{L_{1}}}$ | $R_{p_{L_{0}}}$ | | $R_{p_{R_{0}}}$ | $R_{p_{R_{1}}}$ | $R_{p_{R_{2}}}$
---|---|---|---|---|---|---|---
Exceedratio | 0 | 0.62 | 2.25 | | 4.07 | 4.8 | 3
Lower boundaries on exceedratio ($R^{*}$) | 5 | 3 | 2 | | 2 | 3 | 5
Intervals | $I_{L_{3}}$ | $I_{L_{2}}$ | $I_{L_{1}}$ | $I_{0}$ | $I_{R_{1}}$ | $I_{R_{2}}$ | $I_{R_{3}}$
Binary exceedscores | $e_{L_{2}}=0$ | $e_{L_{1}}=0$ | $e_{L_{0}}=1$ | | $e_{R_{0}}=1$ | $e_{R_{1}}=1$ | $e_{R_{2}}=0$
Maximal interval scores ($v$) | $v_{3}=3$ | $v_{2}=2$ | $v_{1}=1$ | | $v_{1}=1$ | $v_{2}=2$ | $v_{3}=3$
Interval scores | $V_{L_{3}}=1$ | $V_{L_{2}}=1$ | $V_{L_{1}}=1$ | $V_{0}=0$ | $V_{R_{1}}=-1$ | $V_{R_{2}}=-2$ | $V_{R_{3}}=-2$

#### 2.1.3 Scoring individual subjects

Now that all elements of QBP are determined, the disease scores $DS_{i,k}$ can be computed for each subject $i$ per biomarker $k$. The disease score $DS_{i,k}$ is in essence a measure of the position of the biomarker value $x_{i,k}$ with respect to the predominant group. In order to prioritize specific biomarkers above others, biomarker weights $w=(w_{1},\ldots,w_{r})$ are introduced. The disease score $DS_{i,k}$ is defined by

$\displaystyle DS_{i,k}=\begin{cases}V_{L_{s},k}\cdot w_{k}&\text{ if }x_{i,k}\in I_{L_{s},k},\\ 0&\text{ if }x_{i,k}\in I_{0,k},\\ V_{R_{s},k}\cdot w_{k}&\text{ if }x_{i,k}\in I_{R_{s},k},\end{cases}$ (7)

with $s=1,\ldots,m+1$. Note that $x_{i,k}$ always falls in exactly one of the intervals $I_{L_{m+1},k},\ldots,I_{L_{1},k}$, $I_{0,k}$, $I_{R_{1},k},\ldots,I_{R_{m+1},k}$. By summing over all biomarkers, a total disease score $TDS_{i}=\sum_{k=1}^{r}DS_{i,k}$ per subject $i$ can be calculated. An extreme positive value for subject $i$ indicates that the subject is most likely a case, while an extreme negative value means that subject $i$ is most likely a control. A value of zero indicates that the subject is as likely a case as a control. This procedure is applied to an arbitrary example in Table 3.

Table 3: Arbitrary example of the calculation of the disease scores $DS_{i,k}$ and the total disease scores $TDS_{i}$ for subjects $i\in\{a,b,c\}$, with biomarker weights $w=(1,1,1,1,1)$ and maximal interval scores $v=(v_{1},v_{2},v_{3})=(1,2,3)$. A superscript marks the interval in which a subject's biomarker value falls.

Interval | Biomarker $1$ | $2$ | $3$ | $4$ | $5$ | | Subject | TDS
---|---|---|---|---|---|---|---|---
$I_{L_{3},k}$ | $1$ | $-1$ | $2$ | $-3^{c}$ | $-3$ | | a | $3$
$I_{L_{2},k}$ | $1^{a}$ | $-1$ | $2$ | $-1$ | $-2^{c}$ | | b | $0$
$I_{L_{1},k}$ | $1$ | $-1^{c}$ | $1$ | $-1^{b}$ | $0$ | | c | $-7$
$I_{0,k}$ | $0^{b}$ | $0^{a}$ | $0^{a,c}$ | $0$ | $0^{b}$ | | |
$I_{R_{1},k}$ | $-1^{c}$ | $0^{b}$ | $1^{b}$ | $0$ | $0$ | | |
$I_{R_{2},k}$ | $-2$ | $0$ | $2$ | $0^{a}$ | $2^{a}$ | | |
$I_{R_{3},k}$ | $-2$ | $0$ | $3$ | $3$ | $2$ | | |
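A minimal sketch of the score attribution in Equation (7) (in C++ for concreteness), assuming the cutpoints and interval scores have been assembled as in Tables 1 and 2; names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Disease score for one subject on one biomarker (Equation 7). The cutpoints
// (C_{pL_m},...,C_{pL_0}, C_{pR_0},...,C_{pR_m}) are ascending (2m+2 values);
// score holds the 2m+3 interval scores with the neutral 0 for I_0 in the
// middle. Left intervals are right-closed, right intervals left-closed.
double disease_score(double x, const std::vector<double>& cut, int m,
                     const std::vector<double>& score, double w) {
    std::size_t s = 0;
    while (s < cut.size() &&
           (s <= static_cast<std::size_t>(m) ? x > cut[s]     // (.., C]
                                             : x >= cut[s]))  // [C, ..)
        ++s;
    return score[s] * w;   // DS_{i,k} = V_{s,k} * w_k
}
```

With the values from Tables 1 and 2 (cut = {273, 372, 424, 644, 713, 880}, m = 2, scores {1, 1, 1, 0, -1, -2, -2}, w = 1), a subject with biomarker value 700 falls in $I_{R_{1}}$ and receives score $-1$.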
### 2.2 (Penalized) Logistic regression

As described in hosmer2000introduction (18), logistic regression considers $n$ independent observations $\{(y_{i},\bm{x}_{i});i=1,\ldots,n\}$, where $y_{i}$ corresponds to disease ($y_{i}=1$) or no disease ($y_{i}=0$) and $\bm{x}_{i}=(1,x_{i,1},\ldots,x_{i,r})$ is the vector of independent predictor variables, which are the results of the $r$ biomarkers. The logistic regression model assumes that

$\mathbb{P}(Y_{i}=1|\bm{x}_{i})=\pi(\bm{x}_{i})=1-\mathbb{P}(Y_{i}=0|\bm{x}_{i}),$ (8)

with $Y_{i}$ Bernoulli$(\pi(\bm{x}_{i}))$ distributed and $\pi(\bm{x}_{i})$ given by

$\pi(\bm{x}_{i})=\frac{\exp(\beta_{0}+\sum_{j=1}^{r}x_{i,j}\beta_{j})}{1+\exp(\beta_{0}+\sum_{j=1}^{r}x_{i,j}\beta_{j})}.$ (9)

In case the number of events is large enough to estimate all model parameters, maximum likelihood estimation can be used. The log-likelihood function for $\bm{y}=(y_{1},\ldots,y_{n})$ is given by

$l(\beta)=\sum_{i=1}^{n}[y_{i}\cdot\log(\pi(\bm{x}_{i}))+(1-y_{i})\cdot\log(1-\pi(\bm{x}_{i}))].$ (10)

In case events are sparse, a penalized logistic regression can be used to determine the most promising or relevant biomarkers. The penalized logistic regression model maximizes the log-likelihood function $l(\beta)$ in (10) with a penalty term $P(\beta)$, i.e. it maximizes $l_{\lambda}(\beta)=l(\beta)-\lambda P(\beta)$ over $\beta$ for a fixed value of $\lambda$ that determines the strength of the penalty. Three well-known penalty functions are the lasso tibshirani1996regression (35), the elastic net (EN) zou2005regularization (41) and the ridge hoerl1970ridge (17), see (11):

$\displaystyle\text{Lasso: }P(\beta)=\sum_{k=1}^{r}|\beta_{k}|\qquad\text{EN: }P_{\alpha}(\beta)=\sum_{k=1}^{r}\left[\frac{1-\alpha}{2}\beta_{k}^{2}+\alpha|\beta_{k}|\right]\qquad\text{Ridge: }P(\beta)=\sum_{k=1}^{r}\beta_{k}^{2}$ (11)

with $\alpha$ an additional mixing parameter for the elastic net.
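For reference, the three penalties in (11) can be fitted with the glmnet package used in this study (see Subsection 2.12), where $\alpha=1$ gives the lasso, $\alpha=0$ the ridge and $0<\alpha<1$ the elastic net. In this minimal sketch, `X` (the $n\times r$ biomarker matrix), `y` (the 0/1 disease indicator) and `X_val` are assumed to be given.

```r
library(glmnet)

## Penalized logistic regressions of (11); lambda is chosen by the
## automated cross-validation of cv.glmnet
fit_lasso <- cv.glmnet(X, y, family = "binomial", alpha = 1)
fit_ridge <- cv.glmnet(X, y, family = "binomial", alpha = 0)
fit_en    <- cv.glmnet(X, y, family = "binomial", alpha = 0.5)

## Biomarkers retained by the lasso at the cross-validated penalty
selected <- which(coef(fit_lasso, s = "lambda.min")[-1] != 0)

## Estimated class probabilities pi(x_i) of equation (9) on validation data
p_hat <- predict(fit_lasso, newx = X_val, s = "lambda.min", type = "response")
```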
### 2.3 Principal Components Logistic Regression

First, we briefly describe the concept of principal component analysis (PCA), in line with the more comprehensive description of this method in aguilera2006using (1). Let all observations be contained in the matrix $\bm{X}=(x_{i,k})_{n\times r}$, with column vectors $\bm{X}_{1},\bm{X}_{2},\ldots,\bm{X}_{r}$. Furthermore, denote the sample covariance matrix by $\bm{S}=(s_{k,l})_{r\times r}$, with elements $s_{k,l}=\tfrac{1}{n-1}\sum_{i=1}^{n}(x_{i,k}-\bar{x}_{k})(x_{i,l}-\bar{x}_{l})$ and sample means $\bar{x}_{k}=\tfrac{1}{n}\sum_{i=1}^{n}x_{i,k}$, for $k=1,\ldots,r$. To simplify, without loss of generality, the observations are assumed to be centered, so that $\bar{x}_{1}=\ldots=\bar{x}_{r}=0$ and the sample covariance matrix is $\bm{S}=(s_{k,l})_{r\times r}=\tfrac{1}{n-1}\bm{X}^{\prime}\bm{X}$. The sample principal components (pc's) are defined as orthogonal linear combinations of the columns of $\bm{X}$ with maximal variance, denoted by $\bm{Z}_{k}=\bm{X}\bm{V}_{k}$ with $k=1,\ldots,r$. The vectors $\bm{V}_{1},\ldots,\bm{V}_{r}$ that define the pc's are the eigenvectors of the sample covariance matrix $\bm{S}$ associated with the eigenvalues $\lambda_{1}\geq\ldots\geq\lambda_{r}\geq 0$. These eigenvalues equal the variances of the corresponding pc's. If we denote by $\bm{Z}$ the matrix whose columns are the sample pc's, it can be expressed as $\bm{Z}=\bm{XV}$, with $\bm{V}=(v_{k,l})_{r\times r}$ the matrix whose columns are the eigenvectors of the sample covariance matrix. Note that the sample covariance matrix can be decomposed as $\bm{S}=\bm{V\Delta V}^{\prime}$, with $\bm{V}$ orthogonal, $\bm{V}^{\prime}$ the transpose of $\bm{V}$ and $\bm{\Delta}=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{r})$, so the matrix of observations is given by $\bm{X}=\bm{ZV}^{\prime}$.

This pc decomposition gives an approximate reconstruction of each original observation in terms of a reduced number of pc's, selected based on explained variance, namely

$\bm{X}_{k}=\sum_{l=1}^{s}\bm{Z}_{l}v_{k,l},\;k=1,\ldots,r,\;\text{with }s\leq r.$ (12)

The percentage of the variability that is accounted for by the model is given by

$\frac{\sum_{l=1}^{s}\lambda_{l}\cdot 100}{\sum_{l=1}^{r}\lambda_{l}},\;\text{with }s\leq r.$ (13)

Now that the pc's are obtained, the logit model is applied, with (9) replaced by

$\pi_{s}(\bm{Z}_{i})=\frac{\exp\{\beta_{0}+\sum_{k=1}^{r}\sum_{l=1}^{s}z_{i,l}v_{k,l}\beta_{k}\}}{1+\exp\{\beta_{0}+\sum_{k=1}^{r}\sum_{l=1}^{s}z_{i,l}v_{k,l}\beta_{k}\}}=\frac{\exp\{\beta_{0}+\sum_{l=1}^{s}z_{i,l}\gamma_{l}\}}{1+\exp\{\beta_{0}+\sum_{l=1}^{s}z_{i,l}\gamma_{l}\}}$ (14)

with $z_{i,l}$ the elements of the pc matrix $\bm{Z}=\bm{XV}$ and $\gamma_{l}=\sum_{k=1}^{r}v_{k,l}\beta_{k}$, for $l=1,\ldots,s$.
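A minimal [R] sketch of PCLR is given below: the pc's are computed with `prcomp` and the logit model (14) is fitted on the first $s$ scores. Choosing $s$ by the explained-variance criterion (13) with a 90% threshold is our own illustrative assumption; in this study the number of components is selected by cross-validation (Subsection 2.12). `X`, `y` and `X_val` are assumed to be given.

```r
## Principal components of the (centered) biomarker matrix
pca  <- prcomp(X, center = TRUE, scale. = FALSE)
expl <- cumsum(pca$sdev^2) / sum(pca$sdev^2) * 100   # equation (13)
s    <- which(expl >= 90)[1]                         # smallest s explaining >= 90%

## Logit model (14) on the pc scores Z = XV
Z   <- pca$x[, 1:s, drop = FALSE]
fit <- glm(y ~ Z, family = binomial)

## Scores for new observations use the same centering and loadings V
Z_val <- scale(X_val, center = pca$center, scale = FALSE) %*% pca$rotation[, 1:s]
```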
### 2.4 Linear Discriminant Analysis

In the search for a separating hyperplane using linear discriminant analysis (LDA), two approaches can be distinguished, namely LDA based on Bayes' rule and Fisher-LDA. We focus on Bayesian LDA, since it appears to be more suitable with a large number of covariates vera2011discrimination (37). As extensively described in friedman2001elements (14), Bayesian LDA assumes Gaussian class densities with a common covariance matrix for all classes. For the binary case, this comes down to observing the log-ratio of the cases ($y=1$) and the controls ($y=0$), defined by

$\log\frac{P(Y=1|X=x)}{P(Y=0|X=x)}=\log\frac{\pi_{1}}{\pi_{0}}-\frac{1}{2}(\mu_{1}+\mu_{0})^{T}\Sigma^{-1}(\mu_{1}-\mu_{0})+x^{T}\Sigma^{-1}(\mu_{1}-\mu_{0}),$ (15)

with the prior probabilities $\pi_{1}$ and $\pi_{0}$ and the mean vectors $\mu_{1}$ and $\mu_{0}$ of the multivariate Gaussian densities of the cases and controls, respectively. In addition, $\Sigma$ denotes the common covariance matrix and $x$ the vector of biomarker values of a subject.

### 2.5 Partial Least Squares - Linear Discriminant Analysis

Partial least squares (PLS) wold1985partial (39) was first introduced for a continuous response; later, a two-step approach for binary classification was proposed, namely PLS-LDA nguyen2002tumor (24). Here, PLS is used for dimension reduction and (Fisher-)LDA is then applied to the PLS latent variables. The underlying idea of PLS regression is to find uncorrelated linear transformations of the original predictor variables that have high covariance with the response variables. In this case, the classes of cases and controls are represented as binary responses and treated as if they were continuous in the projection on the latent structure of PLS boulesteix2004pls (3). Since the principle of LDA has already been explained in Subsection 2.4, we now explain the PLS dimension reduction using the SIMPLS algorithm de1993simpls (10). Let us first recall that $\bm{X}\in\mathbb{R}^{n\times r}$ denotes the matrix containing all biomarker observations. Then, $\bm{Z}=\bm{XA}\in\mathbb{R}^{n\times s}$ denotes the matrix of linear transformations, with the column vectors $\bm{Z}_{1},\ldots,\bm{Z}_{s}$ representing the PLS latent variables of $\bm{Z}$. Here, the matrix $\bm{A}\in\mathbb{R}^{r\times s}$ defines the linear transformation and contains the vectors $a_{1},\ldots,a_{s}$ as its columns. The SIMPLS algorithm determines the vectors $a_{1},\ldots,a_{s}$ by computing linear transformations of $X$ and linear transformations of $y=(y_{1},\ldots,y_{n})$ that have maximal covariance, under the constraint that the linear transformations of $X$ (the PLS latent variables) are mutually uncorrelated. In particular, we first determine the unit vector $a_{1}$ and scalar $b_{1}$ maximizing the empirical covariance $\hat{COV}(Xa_{1},b_{1}y)$. Then for all $l=2,\ldots,s$, the unit vector $a_{l}$ and scalar $b_{l}$ maximize $\hat{COV}(Xa_{l},b_{l}y)$ subject to $\hat{COV}(Xa_{l},b_{u}y)=0$ for all $u=1,\ldots,l-1$. Note that before applying the SIMPLS algorithm, $y$ and the columns of $X$ need to be centered. Once the matrix $\bm{Z}=\bm{XA}$ is obtained, Fisher-LDA is applied using $Z_{1},\ldots,Z_{s}$ as predictor variables. The optimal number of components $s$, resulting in the best classification performance, is determined by cross-validation (see Subsection 2.11).
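The two-step PLS-LDA procedure can be sketched with the pls and MASS packages, using the SIMPLS algorithm for the latent variables and LDA on the resulting scores. `X`, `y`, `X_val` and the number of components `s` (tuned by cross-validation) are assumed to be given.

```r
library(pls)
library(MASS)

## SIMPLS latent variables Z = XA (y and the columns of X are centered internally)
d   <- data.frame(y = y, X = I(as.matrix(X)))
fit <- plsr(y ~ X, ncomp = s, data = d, method = "simpls")
Z   <- as.matrix(scores(fit))

## LDA on the s latent variables
lda_fit <- lda(Z, grouping = factor(y))

## Posterior probabilities for validation subjects
Z_val <- predict(fit, newdata = data.frame(X = I(as.matrix(X_val))), type = "scores")
post  <- predict(lda_fit, newdata = as.matrix(Z_val))$posterior[, "1"]
```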
### 2.6 Support Vector Machine

The support vector machine (SVM) is a generalization of optimal separating hyperplanes to the non-separable case; it creates non-linear decision boundaries for classification by taking linear combinations of a largely transformed (sometimes infinite-dimensional) version of the feature space boser1992training (2, 9). In both the separable and the non-separable situation, we have $n$ independent observations $\{(y_{i},\bm{x}_{i});i=1,\ldots,n\}$, where $y_{i}$ corresponds to disease ($y_{i}=1$) or no disease ($y_{i}=-1$) and $\bm{x}_{i}=(1,x_{i,1},\ldots,x_{i,r})$. Here, we can define a hyperplane by

$\{\bm{x}:f(\bm{x})=\bm{x}^{T}\beta+\beta_{0}=0\}$ (16)

and the classification rule $\text{sign}[\bm{x}^{T}\beta+\beta_{0}]$ to distinguish between cases and controls. As extensively described in friedman2001elements (14), we can find the optimal separating hyperplane that maximizes the margin $M$ between the cases and controls by solving the optimization problem

$\displaystyle\max_{\beta,\beta_{0},||\beta||=1}\;M\qquad\text{subject to}\quad y_{i}(\bm{x}_{i}^{T}\beta+\beta_{0})>M,\;i=1,\ldots,n.$ (17)

Note that problem (17) cannot be solved for the non-separable case. To allow for overlap in the feature space between cases and controls, a slack variable $\xi_{i}$ that allows for points on the wrong side of the decision boundary was introduced. This concept of accepting errors in the training set is called the soft margin. Extending (17) with this slack variable, which is proportional to the margin, we obtain the optimization problem

$\displaystyle\min_{\beta,\beta_{0}}\;\frac{1}{2}||\beta||^{2}+C\sum_{i=1}^{n}\xi_{i}\qquad\text{subject to}\quad\xi_{i}\geq 0,\;y_{i}(\bm{x}_{i}^{T}\beta+\beta_{0})>1-\xi_{i},\;\forall i,$ (18)

where the parameter $C$ is a cost parameter that can be used for regularization cortes1995support (9). Note that for $C=\infty$, we obtain the separable case again. The quadratic optimization problem can be rewritten as a dual SVM problem, such that it depends only on inner products. We obtain

$\displaystyle\max_{\alpha}\;\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}\bm{x}_{i}^{T}\bm{x}_{j}\qquad\text{subject to}\quad 0\leq\alpha_{i}\leq C.$ (19)

This form makes it possible to apply the kernel trick, in which the inner product $\bm{x}_{i}^{T}\bm{x}_{j}$ is replaced by $K(\bm{x}_{i},\bm{x}_{j})$, a kernel that enlarges the original feature space using, for example, polynomials or splines. The main advantage of this enlarged space is the enhanced separation between the training classes. To avoid over-fitting, one can make a trade-off between model complexity and error frequency by changing the soft-margin cost parameter $C$ cortes1995support (9). In this study, we apply two types of kernels, namely the linear kernel $K(\bm{x}_{i},\bm{x}_{j})=\langle h(\bm{x}_{i}),h(\bm{x}_{j})\rangle$ and the radial basis function (RBF) kernel $K(\bm{x}_{i},\bm{x}_{j})=\exp(-\gamma||\bm{x}_{i}-\bm{x}_{j}||^{2})$, with $\gamma>0$.

### 2.7 Random Forest

A random forest is an ensemble of individual decision (or regression) trees that can be used for both regression and classification problems. Each individual tree is grown by recursively selecting a number of random features from a training set composed of bootstrap samples from the original data, and creating two daughter nodes at the feature that provides the best split. Here, the best split is defined such that the response can be predicted in the best possible way. This partitioning at nodes continues until a stopping criterion has been met. In the end, each tree provides a tree-structured classifier $\hat{C}_{b}(x)$. Since individual trees have a relatively low bias but are noisy, it is beneficial to average individual trees to reduce the variance friedman2001elements (14). The random forest classifier consists of a majority vote over the collection of all individual tree classifiers. Specifically,

$\displaystyle\hat{C}_{\text{rf}}^{B}(x)=\text{majority vote}\{\hat{C}_{b}(x)\}_{1}^{B}.$ (20)

By combining, for each input $x$, the votes of the trees for which $x$ is not contained in the bootstrap training sample, we obtain the out-of-bag classifier of input $x$ friedman2001elements (14). The proportion of these out-of-bag votes is used to determine the classification performance in terms of AUC, as explained in Subsection 2.10.

### 2.8 k-Nearest Neighbors

The philosophy behind k-nearest neighbors (kNN) is that observations that show a high degree of similarity are likely to share the same class label. Here, the distance between data points is considered a measure of similarity. The kNN technique searches, for each point in the validation dataset, the $k$ data points from the training set that are closest in terms of Euclidean distance. The classification is decided by majority vote, with ties broken at random. If there are ties for the $k$th nearest vector, all candidates are included in the vote friedman2001elements (14). In the case of skewed class distributions, this majority voting might be somewhat problematic, since one class is dominant by default coomans1982alternative (8). Generally, larger values of $k$ make the classification less susceptible to the effect of noise everitt2011miscellaneous (11). The value of $k$ is chosen by cross-validation, as explained in Subsection 2.11. Moreover, in this study we normalize all input variables before applying kNN.
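A minimal sketch of the kNN classifier with the class package is shown below; the inputs are normalized first, as in this study, and the proportion of votes (used for the ROC curve in Subsection 2.10) is returned via `prob = TRUE`. `X`, `y`, `X_val` and the number of neighbors `k` are assumed to be given.

```r
library(class)

## Normalize training and validation data with the training statistics
mu   <- colMeans(X)
sdev <- apply(X, 2, sd)
Xn     <- scale(X,     center = mu, scale = sdev)
Xn_val <- scale(X_val, center = mu, scale = sdev)

## Majority vote among the k nearest neighbours (ties broken at random);
## attr(., "prob") holds the proportion of votes for the winning class
pred <- knn(train = Xn, test = Xn_val, cl = factor(y), k = k, prob = TRUE)
vote <- attr(pred, "prob")
```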
### 2.9 XGBoost

EXtreme Gradient Boosting (XGBoost) is a variant of the Gradient Boosting Machines (GBM) algorithm that includes regularization and owes its name to its highly efficient algorithmic implementation chen2016xgboost (7). XGBoost is a machine learning technique that uses the boosting principle by combining weakly performing individual trees into an ensemble of trees representing a strong classifier. The primary purpose of boosting is to reduce bias, but it is also suitable for reducing variance zhou2012ensemble (40). XGBoost evaluates the classification performance in each iteration and aims to correct for the errors in each subsequent step by adding a new tree. This new tree is trained on the negative gradient of the loss function with respect to the current predictions. The algorithm repeats this process for a pre-specified number of iterations. Regularization is applied to avoid overly complex models. The prediction of the final ensemble of trees is the weighted sum of the predictions on the log-odds scale from the individual tree models. As extensively described in chen2016xgboost (7), XGBoost aims to minimize the regularized objective function

$\mathcal{L}^{(t)}=\sum_{i=1}^{n}l\left(y_{i},\hat{y}_{i}^{(t-1)}+f_{t}(x_{i})\right)+\Omega(f_{t}),$ (21)

where $\hat{y}_{i}^{(t-1)}$ is the prediction for the $i$-th instance after $t-1$ iterations and $\Omega(f_{t})$ the regularization term. In each iteration, a new tree $f_{t}$ is added aiming to minimize (21). Given the convex nature of the loss function $l$, a second-order approximation of $\mathcal{L}^{(t)}$ is applied. In addition to the regularization of the leaf weights, shrinkage is implemented in the XGBoost algorithm by scaling newly added weights with a factor $\nu$, with $0<\nu\leq 1$. Here, the lower the value of $\nu$, the higher the computation time. Empirically, it was found that small values ($\nu<0.1$) lead to much better generalization error friedman1999stochastic (15).
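A minimal sketch with the xgboost package is given below, using one combination from the tunable-parameter grid of Subsection 2.12; `X`, `y` and `X_val` are assumed to be given.

```r
library(xgboost)

bst <- xgboost(data = as.matrix(X), label = y,
               objective = "binary:logistic",  # class probabilities (Subsection 2.10)
               nrounds   = 300,                # number of boosting iterations
               eta       = 0.05,               # shrinkage factor nu
               max_depth = 2,
               colsample_bytree = 0.75,
               min_child_weight = 2,
               gamma = 0, subsample = 1,
               verbose = 0)

## P(Y = 1 | x) for the validation subjects
p_hat <- predict(bst, as.matrix(X_val))
```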
### 2.10 Performance measures

To assess the performance of the classification of cases and controls for all methods, a receiver operating characteristic (ROC) curve is constructed by means of the sensitivity (true positive rate) and the specificity ($1-$false positive rate), using different cut-offs for the probability of an outcome steyerberg2010assessing (34). Each method requires its own way to define these cut-offs. For QBP, we use the total disease score of each subject as the cut-off. The logistic regression approaches naturally provide an estimate of the class probabilities. Both LDA and PLS-LDA use the posterior probability that follows from the Bayesian way of modeling. SVM applies Platt scaling to obtain posterior probabilities for the classifier Platt99probabilisticoutputs (28). The proportion of the votes is used for random forest and kNN. Lastly, XGB uses the 'binary:logistic' objective function to define the class probabilities. For each cutpoint the sensitivity and specificity are defined by

$\begin{split}\text{Sensitivity}=\frac{\text{TP}}{\text{TP}+\text{FN}},\qquad\text{Specificity}=\frac{\text{TN}}{\text{TN}+\text{FP}},\end{split}$ (22)

with TP the number of true positives, TN true negatives, FP false positives and FN false negatives. The area under the ROC curve (AUC) represents the probability that a randomly chosen positive example is ranked with greater likelihood than a randomly chosen negative example. Moreover, this probability of correct ranking is the same quantity estimated by the non-parametric Wilcoxon statistic bradley1997use (4). Thus, the higher the AUC, the better the classification. Here, a perfect separation of cases and controls is denoted by $AUC=1$ and a separation that is no better than random by $AUC=0.5$. To determine the AUC, the trapezoidal integration method is used, as implemented in the [R] software package 'ROCR' ROCRpackage (31). The performance of the biomarker inclusion is evaluated with the sensitivity, specificity and accuracy. Here, the accuracy is defined by

$\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}.$ (23)

The closer the accuracy is to one, the better the classification.

### 2.11 Cross-validation

A major difference between the simulation scenarios and the case studies is the (in)ability to generate datasets of an arbitrary size. Therefore, we apply different cross-validation (CV) strategies for the simulation scenarios and the case studies. For all simulation scenarios, we generate a total number of 500 repetitions, each with a separate training set of size $n$ and a new validation set with 5000 subjects. Note that the training set size differs per simulation scenario and is defined in Table 4. For every single repetition, we apply 6-fold CV on the training set to determine the optimal set of tunable parameters for a particular method. Here, the parameter settings with the highest mean AUC over all 6 folds are selected as the optimal set of tunable parameters $t_{opt}$, so $t_{opt}=\operatorname*{\arg\!\max}_{t}\{\text{mean}(\text{AUC}(t))\}$. Then the predictive performance of each method is assessed on the validation set using the optimal parameters obtained via CV on the training data. In the case studies we apply repeated double CV (rdCV) with a total number of 500 repetitions. For each repetition, 6-fold outer CV is applied to assess the predictive performance. Here, the dataset is divided into a training and a validation (also called test) set, according to a 5:1 ratio. For all 6 permutations of the outer-CV training and outer-CV validation sets, $t_{opt}$ is determined using 6-fold inner CV. Consequently, this parameter is applied in the model fit on the full outer-CV training set and used to assess the predictive performance on the outer-CV validation set. Since one particular split of the outer CV could skew the results positively or negatively, we use different splits per repetition to obtain an unbiased estimate of the predictive performance. This way of cross-validation is especially useful when limited data or just one dataset is available filzmoser2009repeated (12). In addition, the prediction error is representative for new samples westerhuis2008assessment (38).
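The 6-fold (inner) CV selection of $t_{opt}$ can be sketched as follows, with the AUC computed by ROCR as in Subsection 2.10. Here `fit_predict(X_tr, y_tr, X_te, t)` is a hypothetical wrapper that fits any of the above classifiers with parameter setting `t` and returns scores for the held-out fold; it and the candidate list `grid` are assumptions of this sketch.

```r
library(ROCR)

## AUC of a score vector against 0/1 labels (Subsection 2.10)
auc <- function(score, label)
  performance(prediction(score, label), "auc")@y.values[[1]]

folds <- sample(rep(1:6, length.out = nrow(X)))   # 6-fold split of the training set

cv_auc <- sapply(grid, function(t) {
  mean(sapply(1:6, function(f) {
    score <- fit_predict(X[folds != f, ], y[folds != f], X[folds == f, ], t)
    auc(score, y[folds == f])
  }))
})

t_opt <- grid[[which.max(cv_auc)]]   # argmax_t of the mean AUC over the 6 folds
```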
### 2.12 Tunable parameters

In this study, the considered methods vary in the number of tunable parameters. Whereas LR and LDA have no tunable parameters, the methods PLR, PLS-LDA, PCLR and kNN have just a single tunable parameter. Lastly, QBP, RF, SVM and XGBoost use numerous tunable parameters. Specifically, we define the penalty term for PLR, the number of principal components for PLS-LDA and PCLR, and the number of neighbors for kNN. Here, the penalty term of PLR was obtained using the automated cross-validation procedure of the $glmnet$ package glmnet (13) of [R]. For both PLS-LDA and PCLR we selected the optimal number of sparse components via CV from $ncomp\in\{1,\ldots,p\}$, with $p$ the number of covariates. For kNN, the optimal number of neighbors was selected from a set of candidates with step size 1 from 1 to 20 and an increasing step size above 20 neighbors. QBP has in principle many tunable parameters, but we made some decisions upfront. We fix both the number of percentiles and the corresponding proportion choice – obtaining $\{q_{1},q_{5},q_{10},q_{90},q_{95},q_{99}\}$ – and keep all biomarker weights equal. The settings that are determined by cross-validation are the lower boundaries of the exceedratios and the maximal interval scores, which are defined by the sets $R^{*}=(R_{1}^{*},R_{2}^{*},R_{3}^{*})\in\{(1.5,2,3)$, $(1.5,2,5)$, $(1.5,2.5,5)$, $(1.4,2.5,8)$, $(2,3,6)$, $(2,3,10)\}$ and $v=(v_{1},v_{2},v_{3})\in\{(1,2,3),(1,4,9)\}$, respectively. Eventually, the optimal setting is selected from $R^{*}\times v$. To reduce the computational complexity for RF, XGBoost and SVM in the final simulation study, we selected a subset of a larger grid of candidate tunable parameters. Each combination of tunable parameters was used to fit a model on a training dataset of 5000 subjects, after which the performance was evaluated on the corresponding validation datasets with 5000 subjects. To determine the final subset, we considered all scenarios and selected the most relevant tuning parameters using a regression approach. For RF, checking the convergence of the out-of-bag error resulted in a total number of trees $ntree$ of 3000. In addition, we chose the number of variables sampled randomly at each split to be $mtry\in\{6,9,12,15,18\}$. For XGBoost, the final set of tunable parameters is $nrounds=300$, $eta\in\{0.05,0.15,0.3\}$, $max\_depth\in\{2,4\}$, $colsample\_bytree=0.75$, $min\_child\_weight=2$, $gamma=0$ and $subsample=1$.

## 3 Simulation study

### 3.1 Model and settings

The group indicator $y_{i}\in\{0,1\}$ was divided such that we obtain $\phi\cdot n$ cases ($y_{i}=1$) and $(1-\phi)\cdot n$ controls ($y_{i}=0$), where $\phi$ denotes the proportion of cases and $n$ the total number of participants. Then the variables $z_{i,1},\ldots,z_{i,r}$ were drawn from a multivariate normal distribution with mean $0$ and variance-covariance matrix R. In the statistical software [R], we used the mvrnorm function of the 'MASS' package to create the variables $z$ MASSpackage (36). Then the variables $v_{i,1},\ldots,v_{i,r}$ were taken equal to

$\displaystyle v_{i,k}=\mu_{i,k}+\sigma_{i,k}z_{i,k},$ (24)

with $\mu_{i,k}=\alpha_{k}+\beta_{k}y_{i}$ and $\sigma_{i,k}=\eta_{k}\cdot(1+\nu_{k}-2\nu_{k}y_{i})$ for all $i=1,\ldots,n$ and $k=1,\ldots,r$. When $\beta_{k}=0$ and $\nu_{k}=0$, cases and controls are drawn from the same distribution and the variable $v_{i,k}$ does not contribute directly to the classification of cases and controls. Moreover, the parameters $\alpha_{k}$ and $\eta_{k}$ differ per dataset and are based on (a transformation of) the MDD case study, corresponding to its mean and standard deviation, respectively. Note that positivity of $\sigma_{i,k}$ is ensured in the simulation study by positivity of $\eta_{k}$ and by selecting $\nu_{k}$ such that $-0.5<\nu_{k}<0.5$ for all $k$. Finally, we take a transformation of the variables $v_{i,k}$ to obtain non-normally distributed variables that can be skewed. Thus, $x_{i,k}=\Psi_{k}(v_{i,k})$, with $\Psi_{k}$ a transformation that can be unique for each variable $k=1,\ldots,r$.
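As an illustration, a minimal [R] sketch of this data-generating model for a log-normally distributed dataset ($\Psi_{k}=\exp$ for all $k$) is given below; the parameter vectors `alpha`, `beta`, `eta`, `nu` (each of length $r$) and the correlation matrix `R` are assumed to be given, cf. Table 4.

```r
library(MASS)

n <- 100; phi <- 1/2; r <- 35
y <- rep(c(1, 0), times = c(phi * n, (1 - phi) * n))  # cases and controls
z <- mvrnorm(n, mu = rep(0, r), Sigma = R)            # correlated N(0, R) draws

## Equation (24) with Psi_k = exp for every biomarker
x <- sapply(1:r, function(k) {
  mu_ik    <- alpha[k] + beta[k] * y                  # mean shift for the cases
  sigma_ik <- eta[k] * (1 + nu[k] - 2 * nu[k] * y)    # standard-deviation shift
  exp(mu_ik + sigma_ik * z[, k])
})
```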
In total, 9 different types of datasets are simulated, with varying transformations, sample sizes and numbers of relevant biomarkers ($\beta_{k}\neq 0$ and/or $\nu_{k}\neq 0$). We distinguish two types of transformations, namely

$\displaystyle\Psi_{k}(x)=x,\qquad\Psi_{k}(x)=\exp(x),$ (25)

where the former results in normally distributed data and the latter in log-normally distributed data. We select only one type of transformation per dataset and biomarker, except for dataset 5, where for some covariates the biomarker distributions of the controls are normally distributed and those of the cases log-normally distributed, to create differences in terms of skewness. Here, the values of $\alpha_{k}$ and $\eta_{k}$ for $\Psi_{k_{0}}=I^{*}$ of the control group are chosen such that its expected mean and variance are similar to those of the distribution of the cases with $\Psi_{k_{1}}=\exp$. The variance-covariance matrix R was always the same and based on the MDD case study in this paper. The full specification of R and the settings for $\alpha_{k}$ and $\eta_{k}$ are given in Table 4. The relevant biomarkers varied in number and in the way they differed between cases and controls: some differed only in mean ($\beta_{k}\neq 0$), some only in variance ($\nu_{k}\neq 0$) and others in both. A full overview of the choices is given in Table 4. Each dataset type is simulated 500 times. Datasets 1, 2 and 3 have the identity biomarker transformation and therefore obey a normal distribution; these datasets differ in the number of relevant biomarkers. Moreover, the applied shift in mean $\beta_{k}$ equals one standard deviation $\sigma_{k}$ of that particular biomarker. Dataset 4 is also normally distributed, with a shift in standard deviation $\nu_{k}$. A difference in skewness for some of the biomarkers is simulated in dataset 5. Whereas datasets 6 and 7 solely have log-normally distributed biomarkers, dataset 8 has a mixture of normally and log-normally distributed biomarkers. Datasets 6 to 8 have a fixed shift in mean $\beta_{k}$ and/or shift in standard deviation $\nu_{k}$. Besides that, these datasets vary in the total number of participants $n$, where both a balanced and an unbalanced number of cases and controls is considered. Except for datasets 6c, 7c and 8c, which consider an unbalanced setting with $\phi=1/5$, all datasets are balanced ($\phi=1/2$).

Table 4: Full design of the simulation study: characteristics of all 9 datasets. Note that the transformation $\Psi_{k}=I$ equals $\Psi_{k}(x)=x$ and $\Psi_{k}=\exp$ equals $\Psi_{k}(x)=\exp(x)$. Moreover, $\alpha_{k}$ and $\eta_{k}$ denote the applied mean and standard deviation derived from the MDD case study, and $\beta_{k}$ and $\nu_{k}$ denote the shift in mean and standard deviation. Empty cells correspond to a value of 0.
Dataset | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
Transformation | $\Psi_{k}=I$ | $\Psi_{k}=I$ | $\Psi_{k}=I$ | $\Psi_{k}=I$ | $\Psi_{k_{y}}\in\{I^{*},\exp\}$ | $\Psi_{k}=\exp$ | $\Psi_{k}=\exp$ | $\Psi_{k}\in\{I,\exp\}$
 | $\beta_{k}=0$ | | | $\beta_{k}=0$ | $\beta_{k}=0$ | | $\beta_{k}=0$ |
 | $\nu_{k}=0$ | $\nu_{k}=0$ | $\nu_{k}=0$ | | $\nu_{k}=0$ | $\nu_{k}=0$ | |
Nr. relevant biomarkers | 0 | 5 | 10 | 9 | 9 | 7 | 9 | 14
Nr. of participants ($n$) | 100 | 100 | 100 | 100 | 100 | a: 100, b: 400, c: 250 | a: 100, b: 400, c: 250 | a: 100, b: 400, c: 250
Proportion of cases ($\phi$) | 1/2 | 1/2 | 1/2 | 1/2 | 1/2 | a: 1/2, b: 1/2, c: 1/5 | a: 1/2, b: 1/2, c: 1/5 | a: 1/2, b: 1/2, c: 1/5

Values of $\alpha_{k}$ and $\eta_{k}$ per transformation, and dataset-specific shifts (dataset numbers in parentheses):

$k$ | $\alpha_{k}$ ($I$) | $\eta_{k}$ ($I$) | $\alpha_{k}$ ($\exp$) | $\eta_{k}$ ($\exp$) | $\alpha_{k}$ ($I^{*}$) | $\eta_{k}$ ($I^{*}$) | $\beta_{k}$ (2) | $\beta_{k}$ (3) | $\nu_{k}$ (4) | $\Psi_{k_{0}}$ (5) | $\Psi_{k_{1}}$ (5) | $\beta_{k}$ (6) | $\nu_{k}$ (7) | $\Psi_{k}$ (8) | $\beta_{k}$ (8) | $\nu_{k}$ (8)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 617.8 | 509.7 | 6.19 | 0.65 | 604.4 | 439.2 | | | | exp | exp | | | exp | |
2 | 276.9 | 296.3 | 5.33 | 0.87 | 301.1 | 322.4 | | | | exp | exp | | | exp | |
3 | 2.61 | 14.94 | -1.86 | 1.53 | 0.50 | 1.55 | | $\sigma_{3}$ | | exp | exp | | | $I$ | |
4 | 6.94 | 4.81 | 1.62 | 0.95 | 7.90 | 9.52 | | | -0.15 | $I^{*}$ | exp | -0.29 | -0.15 | exp | -0.29 | -0.15
5 | 72.08 | 16.72 | 4.25 | 0.23 | 72.13 | 17.02 | | | -0.25 | exp | exp | | -0.25 | $I$ | | -0.25
6 | 16.69 | 17.28 | 2.27 | 1.21 | 20.23 | 36.99 | $\sigma_{6}$ | $\sigma_{6}$ | | $I^{*}$ | exp | | | exp | |
7 | 3.25 | 1.28 | 1.11 | 0.38 | 3.27 | 1.30 | | | 0.15 | exp | exp | -0.44 | 0.15 | exp | -0.44 | 0.15
8 | 5.94 | 2.73 | 1.69 | 0.42 | 5.94 | 2.63 | | | | $I^{*}$ | exp | | | exp | |
9 | 11.66 | 13.59 | 1.84 | 1.22 | 13.29 | 24.78 | | $\sigma_{9}$ | | exp | exp | -0.41 | | exp | -0.41 |
10 | 1.41 | 0.38 | 0.31 | 0.26 | 1.42 | 0.38 | | | | exp | exp | -0.14 | | exp | -0.14 |
11 | 62.29 | 20.64 | 4.07 | 0.37 | 62.73 | 23.78 | | | | exp | exp | | | $I$ | |
12 | 592.1 | 1395 | 5.90 | 0.86 | 526.6 | 549.8 | | | | exp | exp | | | exp | |
13 | 103.1 | 129.9 | 3.88 | 1.36 | 121.7 | 279.9 | $\sigma_{13}$ | $\sigma_{13}$ | 0.15 | $I^{*}$ | exp | | 0.15 | exp | | 0.15
14 | 177.4 | 61.28 | 5.13 | 0.31 | 177.0 | 55.50 | | | | $I^{*}$ | exp | | | exp | |
15 | 53.88 | 29.79 | 3.87 | 0.47 | 53.74 | 26.80 | | | -0.15 | exp | exp | | -0.15 | exp | | -0.15
16 | 8.55 | 0.76 | 2.14 | 0.09 | 8.56 | 0.78 | | | 0.10 | exp | exp | | 0.10 | $I$ | | 0.10
17 | 12.97 | 11.29 | 2.30 | 0.69 | 12.62 | 9.84 | | $\sigma_{17}$ | | exp | exp | | | exp | |
18 | 0.71 | 0.48 | -0.47 | 0.51 | 0.71 | 0.39 | | | | exp | exp | | | exp | |
19 | 0.37 | 1.78 | 1.47 | 0.78 | 5.93 | 5.45 | | | | exp | exp | | | $I$ | |
20 | 0.78 | 1.11 | -1.54 | 2.01 | 1.63 | 12.27 | $\sigma_{20}$ | $\sigma_{20}$ | | exp | exp | | | exp | |
21 | 33.24 | 19.59 | 3.37 | 0.51 | 33.17 | 18.05 | | | 0.20 | exp | exp | | 0.20 | $I$ | | 0.20
22 | 0.31 | 0.20 | -1.30 | 0.58 | 0.32 | 0.21 | | | -0.20 | exp | exp | | -0.20 | $I$ | | -0.20
23 | 0.34 | 0.23 | -1.29 | 0.71 | 0.35 | 0.29 | | $\sigma_{23}$ | | $I^{*}$ | exp | | | exp | |
24 | 0.22 | 0.29 | -1.87 | 0.80 | 0.21 | 0.20 | | | | exp | exp | | | exp | |
25 | 0.07 | 0.10 | -2.82 | 0.64 | 0.07 | 0.05 | | | | exp | exp | | | exp | |
26 | 3.64 | 2.12 | 1.04 | 1.01 | 4.72 | 6.29 | | | | $I^{*}$ | exp | 0.32 | | exp | 0.32 |
27 | 66.95 | 82.64 | 3.37 | 1.82 | 153.1 | 794.2 | $\sigma_{27}$ | $\sigma_{27}$ | | exp | exp | | | exp | |
28 | 4.98 | 2.34 | 1.39 | 0.92 | 6.10 | 7.02 | | | 0.10 | exp | exp | | 0.10 | exp | | 0.10
29 | 21.40 | 29.97 | 2.64 | 0.81 | 19.47 | 18.87 | | | | exp | exp | 0.26 | | exp | 0.26 |
30 | 13.09 | 24.77 | 1.71 | 1.36 | 14.03 | 32.74 | | $\sigma_{30}$ | | $I^{*}$ | exp | | | exp | |
31 | 14.69 | 12.06 | 2.39 | 0.82 | 15.23 | 14.84 | | | | exp | exp | 0.31 | | exp | 0.31 |
32 | 7.28 | 5.65 | 1.77 | 0.65 | 7.25 | 5.26 | | | | exp | exp | | | exp | |
33 | 15.37 | 37.54 | 1.67 | 1.38 | 13.69 | 32.66 | | | | exp | exp | | | exp | |
34 | 0.13 | 0.20 | -2.64 | 1.21 | 0.15 | 0.27 | $\sigma_{34}$ | $\sigma_{34}$ | | exp | exp | | | exp | |
35 | 22.53 | 37.47 | 2.62 | 0.94 | 21.27 | 25.29 | | | | $I^{*}$ | exp | | | exp | |

### 3.2 Results

For all binary classification techniques and datasets, the predictive performance is presented in Table 5. In Figures 2 and 3, we present the density plots and confidence intervals of the predictive performance, respectively. These graphs only contain a subset of the techniques, namely PLR.Lasso, LDA, SVM.Radial, RF, kNN, XGB and QBP. This selection is based on superior performance in at least one of the simulated datasets. Moreover, Table 6 presents the number of biomarkers used in the final model, as well as the applied number of sparse components for PCLR and PLS-LDA and the number of neighbors for kNN. The effect of sample size on the sensitivity, specificity and accuracy of the biomarker selection is presented in Table 7. Here, only the methods PLR.Lasso, PLR.EN and QBP are included, since all other methods always include all biomarkers and therefore apply no selection. Finally, in Table 8 the average computation times are listed for the datasets with sample sizes $n=100$, $n=250$ and $n=400$.
Table 5: Performance (in AUC) of all considered techniques on all simulated datasets (validation data)

Dataset | | LR | PLR.Lasso | PLR.EN | PLR.Ridge | PCLR | PLS-LDA | LDA | SVM.Linear | SVM.Radial | RF | kNN | XGB | QBP
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 ($n=100,\;\phi=1/2$) | mean | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500
 | sd | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008
2 ($n=100,\;\phi=1/2$) | mean | 0.943 | 0.967 | 0.971 | 0.972 | 0.937 | 0.977 | 0.977 | 0.971 | 0.973 | 0.919 | 0.853 | 0.915 | 0.854
 | sd | 0.022 | 0.013 | 0.011 | 0.011 | 0.029 | 0.009 | 0.008 | 0.010 | 0.010 | 0.015 | 0.026 | 0.014 | 0.034
3 ($n=100,\;\phi=1/2$) | mean | 0.984 | 0.988 | 0.992 | 0.992 | 0.980 | 0.995 | 0.996 | 0.994 | 0.993 | 0.968 | 0.957 | 0.953 | 0.948
 | sd | 0.009 | 0.007 | 0.006 | 0.005 | 0.013 | 0.003 | 0.002 | 0.004 | 0.005 | 0.008 | 0.013 | 0.01 | 0.017
4 ($n=100,\;\phi=1/2$) | mean | 0.499 | 0.499 | 0.499 | 0.499 | 0.500 | 0.499 | 0.499 | 0.500 | 0.527 | 0.629 | 0.530 | 0.584 | 0.652
 | sd | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.017 | 0.029 | 0.011 | 0.031 | 0.028
5 ($n=100,\;\phi=1/2$) | mean | 0.502 | 0.504 | 0.505 | 0.502 | 0.501 | 0.502 | 0.503 | 0.503 | 0.568 | 0.963 | 0.552 | 0.917 | 0.860
 | sd | 0.009 | 0.014 | 0.016 | 0.009 | 0.009 | 0.009 | 0.010 | 0.010 | 0.039 | 0.013 | 0.018 | 0.027 | 0.035
6a ($n=100,\;\phi=1/2$) | mean | 0.714 | 0.804 | 0.801 | 0.782 | 0.730 | 0.776 | 0.779 | 0.763 | 0.768 | 0.788 | 0.667 | 0.783 | 0.688
 | sd | 0.039 | 0.031 | 0.027 | 0.031 | 0.043 | 0.033 | 0.029 | 0.033 | 0.034 | 0.021 | 0.037 | 0.023 | 0.036
6b ($n=400,\;\phi=1/2$) | mean | 0.862 | 0.866 | 0.865 | 0.861 | 0.859 | 0.859 | 0.860 | 0.854 | 0.859 | 0.834 | 0.754 | 0.859 | 0.785
 | sd | 0.009 | 0.009 | 0.009 | 0.009 | 0.011 | 0.010 | 0.010 | 0.011 | 0.011 | 0.010 | 0.018 | 0.010 | 0.015
6c ($n=250,\;\phi=1/5$) | mean | 0.824 | 0.832 | 0.831 | 0.826 | 0.816 | 0.815 | 0.818 | 0.780 | 0.792 | 0.781 | 0.688 | 0.815 | 0.724
 | sd | 0.020 | 0.037 | 0.020 | 0.019 | 0.024 | 0.021 | 0.019 | 0.029 | 0.027 | 0.020 | 0.030 | 0.016 | 0.038
7a ($n=100,\;\phi=1/2$) | mean | 0.551 | 0.537 | 0.538 | 0.545 | 0.536 | 0.544 | 0.549 | 0.542 | 0.547 | 0.629 | 0.539 | 0.587 | 0.652
 | sd | 0.023 | 0.027 | 0.027 | 0.020 | 0.024 | 0.021 | 0.022 | 0.023 | 0.019 | 0.029 | 0.017 | 0.029 | 0.029
7b ($n=400,\;\phi=1/2$) | mean | 0.601 | 0.592 | 0.592 | 0.594 | 0.588 | 0.590 | 0.596 | 0.582 | 0.620 | 0.746 | 0.584 | 0.744 | 0.751
 | sd | 0.014 | 0.023 | 0.023 | 0.017 | 0.026 | 0.018 | 0.014 | 0.015 | 0.018 | 0.015 | 0.018 | 0.016 | 0.021
7c ($n=250,\;\phi=1/5$) | mean | 0.574 | 0.552 | 0.552 | 0.561 | 0.548 | 0.560 | 0.568 | 0.563 | 0.598 | 0.672 | 0.559 | 0.653 | 0.674
 | sd | 0.021 | 0.033 | 0.033 | 0.024 | 0.032 | 0.024 | 0.022 | 0.023 | 0.021 | 0.020 | 0.020 | 0.030 | 0.029
8a ($n=100,\;\phi=1/2$) | mean | 0.623 | 0.626 | 0.627 | 0.629 | 0.618 | 0.623 | 0.618 | 0.603 | 0.621 | 0.705 | 0.588 | 0.663 | 0.704
 | sd | 0.032 | 0.038 | 0.036 | 0.030 | 0.038 | 0.030 | 0.031 | 0.032 | 0.03 | 0.028 | 0.026 | 0.029 | 0.029
8b ($n=400,\;\phi=1/2$) | mean | 0.703 | 0.702 | 0.700 | 0.698 | 0.698 | 0.687 | 0.686 | 0.668 | 0.723 | 0.799 | 0.659 | 0.798 | 0.791
 | sd | 0.015 | 0.019 | 0.017 | 0.016 | 0.017 | 0.017 | 0.016 | 0.018 | 0.018 | 0.013 | 0.020 | 0.014 | 0.016
8c ($n=250,\;\phi=1/5$) | mean | 0.663 | 0.652 | 0.651 | 0.650 | 0.651 | 0.641 | 0.643 | 0.628 | 0.670 | 0.730 | 0.610 | 0.725 | 0.726
 | sd | 0.024 | 0.034 | 0.035 | 0.026 | 0.033 | 0.026 | 0.024 | 0.029 | 0.024 | 0.020 | 0.025 | 0.024 | 0.026

Figure 2: Performance (in AUC) of all 500 simulations, each with a different training set to tune the parameters and a validation set of 5000 subjects to assess the performance

Figure 3: Confidence intervals of the classification performances (in AUC) based on 500 simulations per dataset type

Table 6: Number of included biomarkers (PLR.Lasso, PLR.EN, QBP), number of sparse components (PCLR, PLS-LDA) and number of neighbors (kNN), derived from the training data and applied in the final model. Note that LR.LOGIT, PLR.Ridge, PCLR, PLS-LDA, LDA, SVM, RF, kNN & XGB use all biomarkers: $mean=35$ and $sd=0$.

Dataset | Nr. relevant biomarkers | | PLR.Lasso | PLR.EN | QBP | PCLR | PLS-LDA | kNN
---|---|---|---|---|---|---|---|---
1 ($n=100,\;\phi=1/2$) | 0 | mean | 16 | 16.5 | 26 | 15 | 5.4 | 12
 | | sd | 13.6 | 13.8 | 6.9 | 11.5 | 5.6 | 9.2
2 ($n=100,\;\phi=1/2$) | 5 | mean | 18.5 | 23 | 21.8 | 29.1 | 6.1 | 19.8
 | | sd | 5.9 | 7.4 | 5 | 4.6 | 3.0 | 6.6
3 ($n=100,\;\phi=1/2$) | 10 | mean | 18.8 | 23.5 | 25 | 26.3 | 4.7 | 19.4
 | | sd | 4.5 | 6.4 | 4.6 | 6.4 | 3.1 | 6.7
4 ($n=100,\;\phi=1/2$) | 9 | mean | 16.4 | 16.8 | 27.2 | 14.6 | 5.4 | 10.4
 | | sd | 13.6 | 13.9 | 6 | 11.5 | 5.3 | 8.4
5 ($n=100,\;\phi=1/2$) | 9 | mean | 14.2 | 14.8 | 22.9 | 16 | 4.8 | 9.7
 | | sd | 13.4 | 13.6 | 4.8 | 11.6 | 4.6 | 8.2
6a ($n=100,\;\phi=1/2$) | 7 | mean | 11.7 | 12.5 | 24.2 | 24.4 | 5.4 | 18.9
 | | sd | 9.6 | 10.9 | 6.3 | 6.8 | 3.8 | 7.7
6b ($n=400,\;\phi=1/2$) | 7 | mean | 24.3 | 26.2 | 6.8 | 33.6 | 6.4 | 47.7
 | | sd | 5.6 | 5.7 | 4.2 | 2.1 | 2.9 | 9.6
6c ($n=250,\;\phi=1/5$) | 7 | mean | 18.5 | 20.4 | 15.7 | 31.8 | 6.0 | 32.8
 | | sd | 9.2 | 10.3 | 7.3 | 3.1 | 3.0 | 10.9
7a ($n=100,\;\phi=1/2$) | 9 | mean | 20.3 | 20.7 | 27.4 | 20.2 | 5.8 | 11.8
 | | sd | 13.4 | 13.6 | 5.8 | 11.7 | 5.0 | 8.6
7b ($n=400,\;\phi=1/2$) | 9 | mean | 28.1 | 28.6 | 16.9 | 30.2 | 7.2 | 21.9
 | | sd | 9.6 | 9.8 | 4.4 | 9 | 3.9 | 15.1
7c ($n=250,\;\phi=1/5$) | 9 | mean | 21.4 | 22.3 | 23.2 | 21.1 | 5.4 | 19
 | | sd | 13.2 | 13.3 | 7.5 | 12.9 | 4.1 | 13.5
8a ($n=100,\;\phi=1/2$) | 14 | mean | 17.5 | 18.4 | 28.7 | 21.3 | 3.9 | 13.4
 | | sd | 12.3 | 12.5 | 5.3 | 9.1 | 3.6 | 8.5
8b ($n=400,\;\phi=1/2$) | 14 | mean | 20.8 | 21.2 | 19.7 | 29.9 | 3.7 | 34
 | | sd | 10.2 | 10.7 | 3.9 | 4.9 | 2.6 | 14.9
8c ($n=250,\;\phi=1/5$) | 14 | mean | 21.9 | 22.6 | 24.4 | 27 | 3.6 | 22.7
 | | sd | 11.9 | 12.2 | 6.8 | 7.4 | 2.7 | 13.2

Table 7: Effect of sample size on the inclusion performance of the relevant biomarkers. Note that the methods LR.LOGIT, PLR.Ridge, PCLR, PLS-LDA, LDA, SVM, RF, kNN & XGB use all biomarkers and are not included in this overview.
Dataset | Nr. relevant biomarkers | measure | PLR.Lasso | PLR.EN | QBP
---|---|---|---|---|---
6a ($n=100,\;\phi=1/2$) | 7 | accuracy | 0.689 | 0.676 | 0.459
 | | sensitivity | 0.557 | 0.58 | 0.878
 | | specificity | 0.721 | 0.7 | 0.355
6b ($n=400,\;\phi=1/2$) | 7 | accuracy | 0.467 | 0.42 | 0.885
 | | sensitivity | 0.903 | 0.922 | 0.696
 | | specificity | 0.358 | 0.295 | 0.932
6c ($n=250,\;\phi=1/5$) | 7 | accuracy | 0.58 | 0.537 | 0.666
 | | sensitivity | 0.771 | 0.797 | 0.791
 | | specificity | 0.533 | 0.472 | 0.635
7a ($n=100,\;\phi=1/2$) | 9 | accuracy | 0.481 | 0.475 | 0.434
 | | sensitivity | 0.618 | 0.628 | 0.922
 | | specificity | 0.434 | 0.422 | 0.266
7b ($n=400,\;\phi=1/2$) | 9 | accuracy | 0.395 | 0.383 | 0.744
 | | sensitivity | 0.883 | 0.887 | 0.942
 | | specificity | 0.226 | 0.208 | 0.676
7c ($n=250,\;\phi=1/5$) | 9 | accuracy | 0.482 | 0.465 | 0.545
 | | sensitivity | 0.68 | 0.698 | 0.903
 | | specificity | 0.414 | 0.385 | 0.421
8a ($n=100,\;\phi=1/2$) | 14 | accuracy | 0.556 | 0.552 | 0.513
 | | sensitivity | 0.569 | 0.599 | 0.917
 | | specificity | 0.548 | 0.522 | 0.244
8b ($n=400,\;\phi=1/2$) | 14 | accuracy | 0.59 | 0.586 | 0.787
 | | sensitivity | 0.732 | 0.738 | 0.939
 | | specificity | 0.495 | 0.485 | 0.687
8c ($n=250,\;\phi=1/5$) | 14 | accuracy | 0.54 | 0.534 | 0.619
 | | sensitivity | 0.709 | 0.725 | 0.894
 | | specificity | 0.427 | 0.406 | 0.435

Table 8: Average computation times (in seconds) for a training (6-fold inner cross-validation) and validation (model fit and evaluation) cycle for a single dataset with a sample size of $n=100$ (datasets 1, 2, 3, 4, 5, 6a, 7a, 8a), $n=400$ (datasets 6b, 7b, 8b) and $n=250$ (datasets 6c, 7c, 8c). Note that LR.LOGIT and LDA do not include a tunable-parameter optimization.

Sample size | LR | PLR.Lasso | PLR.EN | PLR.Ridge | PCLR | PLS-LDA | LDA | SVM.Linear | SVM.Radial | RF | kNN | XGB | QBP
---|---|---|---|---|---|---|---|---|---|---|---|---|---
100 | 0.06 | 0.81 | 0.46 | 0.98 | 9.09 | 8.03 | 0.01 | 81.1 | 5.64 | 18.7 | 94.5 | 80.5 | 22.6
400 | 0.06 | 0.39 | 0.40 | 0.67 | 9.71 | 9.64 | 0.02 | 363.9 | 31.6 | 94.9 | 135.6 | 86.5 | 25.4
250 | 0.06 | 0.48 | 0.43 | 0.70 | 9.51 | 8.89 | 0.01 | 367.2 | 14.0 | 51.4 | 125.8 | 82.4 | 24.3

## 4 Case study

### 4.1 Major Depression Disorder

#### 4.1.1 Design of the study

The MDD data contains 35 biomarkers, of which 16 are serum-based biomarkers and 19 are urine-based biomarkers. An overview of all biomarker types is presented in Table 9. These serum and first-morning-urine biomarkers were selected based on a thorough literature search, combined with a pilot study in 24 participants (12 MDD patients and their sex-, age- and ethnicity-matched non-MDD controls). The MDD study contains 101 patients in total, of which 4 had missing values. These patients were excluded from the analysis to make a fair comparison between the methods and to avoid the effect of imputations on the performance. The predictive performance of all methods is assessed using rdCV.

Table 9: Included biomarkers in the MDD data, where the numbers correspond to the biomarker numbers $k$.

Serum biomarkers | | Urine biomarkers | |
---|---|---|---|---
1\. BDNF | 9\. Thromboxane | 17\. cAMP | 25\. Endothelin | 33\. Lipocalin
2\. Midkine | 10\. Endothelin | 18\. cGMP | 26\. Aldosteron | 34\. Pregnonelon
3\. Nitrotyrosine | 11\. Lipocalin | 19\. Calprotectin | 27\. Adiponectin | 35\. NPY
4\. EGF | 12\. NPY | 20\. Leptin | 28\. HVEM |
5\. TNFR2 | 13\. Leptin | 21\. LTB4 | 29\. Midkine |
6\. LTB4 | 14\. HVEM | 22\. Cortisol | 30\. EGF |
7\. Cortisol | 15\. Vit-D | 23\. Thromboxane | 31\. SubstanceP |
8\. Calprotectin | 16\. Zonulin | 24\. Isoprostane | 32\. TNFR2 |

#### 4.1.2 Results

For all binary classification techniques, the predictive performance, expressed as the mean and standard deviation (sd), is shown in Table 10. Besides, this table contains the number of biomarkers that were used on the validation dataset. The density plots of the predictive performance of a subset of the techniques (LR, PLR.Lasso, SVM.Radial, RF, kNN, XGB and QBP) are shown in Figure 5.

Table 10: Summary statistics of the performance of all methods on the MDD data: AUC on validation data and included number of components. ∗ For PCLR and PLS-LDA the number of sparse components is given in parentheses; for kNN the number of neighbors $k$ is given in parentheses.

 | | LR | PLR.Lasso | PLR.EN | PLR.Ridge | PCLR | PLS-LDA | LDA | SVM.Linear | SVM.Radial | RF | kNN | XGB | QBP
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
AUC VAL | mean | 0.518 | 0.486 | 0.485 | 0.512 | 0.492 | 0.505 | 0.523 | 0.516 | 0.501 | 0.635 | 0.495 | 0.606 | 0.680
 | sd | 0.130 | 0.131 | 0.130 | 0.135 | 0.131 | 0.131 | 0.134 | 0.133 | 0.134 | 0.136 | 0.131 | 0.150 | 0.132
NCOMP | mean | 35 | 22.2 | 22.7 | 35 | 35 (19.3) | 35 (5.5) | 35 | 35 | 35 | 35 | 35 (9.3) | 35 | 27.6
 | sd | 0 | 13.2 | 13.2 | 0 | 0 (10.9) | 0 (5.4) | 0 | 0 | 0 | 0 | 0 (6.6) | 0 | 4.5

Figure 4: AUC of validation data

Figure 5: Performance (in AUC) measured on the validation dataset: 500 repeats, 6-fold outer CV, 6-fold inner CV

### 4.2 Trisomy

#### 4.2.1 Design of the study

The trisomy dataset is provided by the Foundation of Prenatal Screening of the Northern Netherlands and consists of a first-trimester combined-test screening program in the Netherlands in a multi-centre routine clinical setting. Whereas earlier evaluations were based on data from the period July 2002 to May 2004, as published in schielen2006multi (29), this study only includes subjects after July 1, 2010. From this moment, risks of trisomy were calculated by the Dutch National Institute for Public Health and the Environment (RIVM) according to the Astraia/Fetal Medicine Foundation (FMF) risk software. The first-trimester combined test is composed of three elements: (1) assay of the serum concentrations of pregnancy-associated plasma protein A ($PAPP\text{-}A$) and the free $\beta$ subunit of human chorion gonadotrophin ($f\beta\text{-}hCG$) between 8 and 14 weeks of the pregnancy, (2) ultrasound measurement of the nuchal translucency ($NT$), subcutaneous oedema in the fetal neck, at a gestational age (GA) between 10–11 and 14 weeks, and (3) maternal age. Alongside this test, the crown-rump length ($CRL$) used to determine the GA, the age of the mother, parity and gravidity were recorded. In the late '90s, with the introduction of maternal serum biochemistry and ultrasound screening for chromosomal defects at different stages of pregnancy, it became necessary to establish maternal and gestational age-specific risks for chromosomal defects nicolaides2003screening (25). Since the GA affects the biochemical parameters ($PAPP\text{-}A$ and $f\beta\text{-}hCG$), we use the multiple-of-median (MoM) versions of $PAPP\text{-}A$ and $f\beta\text{-}hCG$ in the analysis. The method that the RIVM uses to determine the risk of trisomy per subject, namely the FMF risk, takes a woman's a priori risk, based on her maternal age and gestational age, and multiplies this by a series of likelihood ratios for $MoM\text{-}f\beta\text{-}hCG$, $MoM\text{-}PAPP\text{-}A$ and $NT$.
This likelihood ratio is obtained by dividing the percentage of cases by the percentage of controls with that measurement. The probability of having Down syndrome is defined in terms of an odds ratio shiefa2013first (30). In the dataset provided by the RIVM, the FMF risk is determined on a dataset with $n=3784$ observations (53 cases and 3731 controls) and derived using the biomarkers maternal age, $NT$, $MoM\text{-}f\beta\text{-}hCG$ and $MoM\text{-}PAPP\text{-}A$. Note that for some subjects in this dataset a single biomarker value is missing. For these missing values of a certain combination of subject and biomarker, QBP imputes a disease score of 0, so that the biomarker distribution remains unaffected. As the classification performance of the FMF risk was assessed by training and validating on the full dataset, we do the same for QBP. For the comparison of QBP with the selected alternative methods we use a smaller dataset with only complete observations, to make sure that the comparison is not influenced by any imputation procedure. This dataset has $n=3514$ observations (48 cases and 3466 controls) and utilizes the biomarkers maternal age, parity, gravidity, $MoM\text{-}f\beta\text{-}hCG$, $MoM\text{-}PAPP\text{-}A$, $NT$ and $CRL$. The predictive performance is assessed using rdCV, where QBP uses the optimal tunable-parameter setting of the maximal interval scores and the lower boundaries on the exceedratios.

#### 4.2.2 Results

The predictive performance and number of biomarkers of all considered techniques are presented in Table 11. In Figure 7, the density plots of the predictive performance are provided for a subset of the techniques, namely LR, PLR.Lasso, SVM.Radial, RF, kNN, XGB and QBP. Regarding the FMF risk, we obtain a classification performance of $AUC=0.9151$. For QBP, we have $AUC=0.9249$, with the maximal interval scores $v=(1,2,3)$ and lower boundaries for the exceedratios $R^{*}=(2,3,6)$ as the optimal tunable-parameter combination.

Table 11: Summary statistics of the performance of all methods on the trisomy data: AUC on validation data and included number of components. ∗ For PCLR and PLS-LDA the number of sparse components is given in parentheses; for kNN the number of neighbors $k$ is given in parentheses.

 | | LR | PLR.Lasso | PLR.EN | PLR.Ridge | PCLR | PLS-LDA | LDA | SVM.Linear | SVM.Radial | RF | kNN | XGB | QBP
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
AUC VAL | mean | 0.914 | 0.834 | 0.855 | 0.728 | 0.909 | 0.881 | 0.886 | 0.909 | 0.886 | 0.898 | 0.896 | 0.908 | 0.908
 | sd | 0.058 | 0.176 | 0.153 | 0.209 | 0.062 | 0.076 | 0.073 | 0.058 | 0.075 | 0.073 | 0.070 | 0.069 | 0.066
NCOMP | mean | 7 | 5.0 | 6.4 | 7 | 7 (6.8) | 7 (4.6) | 7 | 7 | 7 | 7 | 7 (143.2) | 7 | 5.8
 | sd | 0 | 2.1 | 0.8 | 0 | 0 (0.7) | 0 (2.0) | 0 | 0 | 0 | 0 | 0 (24.8) | 0 | 0.6

Figure 6: AUC of validation data

Figure 7: Performance (in AUC) measured on the validation dataset: 500 repeats, 6-fold outer CV, 6-fold inner CV

## 5 Discussion

In this study, we performed an extensive comparison of supervised binary disease prediction methods, focusing on the variety of differences in distributions between cases and controls that appear in reality, caused by biological processes and the complexity of diseases. Inspired by the situation in which simple location measures fail to discriminate between cases and controls, while tail information may better capture differences in biomarker distributions, we proposed a novel method called QBP.
Our method, which uses the quantiles of the continuous biomarker distributions, was compared with traditional statistical classification methods such as LR, PLR, PCLR, LDA and PLS-LDA, as well as more novel machine learning techniques such as kNN, RF, SVM and XGB. We studied the predictive performance of QBP compared to the alternative methods, but also other features, e.g. the effect of sample size and the number of selected biomarkers/components in the final model. In a simulation study, differences in means, variances and skewness between cases and controls were simulated for certain biomarkers. When cases and controls were drawn from the same distribution (dataset 1), it was demonstrated that QBP is unbiased (average $AUC=0.5$), just like all other methods. In the two datasets with biomarkers having only systematic shifts in the mean with a size of one standard deviation, LDA tends to be superior ($AUC=0.977$ in dataset 2 and $AUC=0.996$ in dataset 3). Compared to LDA, QBP has a worse predictive performance in terms of AUC ($AUC=0.854\;(-12.6\%)$ and $AUC=0.948\;(-5\%)$ for datasets 2 and 3, respectively). In contrast to the performance gap with LDA, PLR, PLS-LDA and SVM, QBP performs only slightly worse than RF and XGB. In case of normally distributed data with a shift in standard deviation (dataset 4), QBP is superior to all methods ($AUC=0.652$). Whereas RF and XGB come relatively close ($AUC=0.629$ and $AUC=0.584$, respectively), all logistic regression and LDA based techniques and SVM.Linear fail to discriminate better than random ($AUC=0.5$). In order to create a mixture of skewed and non-skewed biomarker distributions, both normal and log-normal biomarkers were simulated. When simulating a shift in skewness while keeping the mean and variance constant (dataset 5), RF and XGB were superior ($AUC=0.963$ and $AUC=0.917$, respectively), followed by QBP ($AUC=0.860$); all other techniques showed a very weak classification performance ($AUC<0.569$). Compared to datasets 2 and 3, changing the biomarker distribution from normal to log-normal, while maintaining the shifts in the mean parameter for some biomarkers (datasets 6a, 6b and 6c), only slightly changes the relative differences in performance between the techniques. Specifically, QBP demonstrated an inferior performance ($AUC=0.688\;(-14.4\%)$, $AUC=0.785\;(-9.3\%)$ and $AUC=0.724\;(-13\%)$ for datasets 6a, 6b and 6c, respectively) relative to the best-in-class Lasso. Simultaneously, the gap between Lasso and the machine learning techniques RF and XGB has shrunk. In the datasets with only changes in the variances for some biomarkers and log-normal biomarker distributions (datasets 7a, 7b and 7c), the predictive performance of QBP ($AUC=0.652$, $AUC=0.751$ and $AUC=0.674$, respectively) was better than or equal to that of its closest competitor RF. This conclusion is also in line with dataset 4, where the data was normally distributed. Note that the difference in performance between QBP and RF decreased with increasing sample size. In the datasets where biomarkers may change in mean, in variance or in both (datasets 8a, 8b and 8c), QBP performed on par with RF and XGB and was superior to the other methods in terms of prediction. Thus, in the most realistic setting – where cases and controls do not just differ in mean – QBP truly competes with XGB and RF and does substantially better than the more classical methods.
The simulation study also showed, for all methods, that an increase in sample size tends to increase the predictive performance and decrease its standard deviation. In particular, a balanced increase of the number of cases and controls appeared to be most effective. A primary cause of this increased performance is the fact that the standard error of the quantiles decreases with increasing sample size. For QBP, this directly results in more precise estimates of the quantiles and of the exceedratios. As a consequence, the probability of falsely including biomarkers decreases. This sample size effect was mainly visible for QBP in the lower number of selected biomarkers and the increased specificity and accuracy of the biomarker selection for the balanced datasets with $n=400$ compared to $n=100$. For the PLR methods, on the other hand, the specificity and accuracy decreased with increasing sample size, except for datasets 8a, 8b and 8c, where the accuracy increased with increasing sample size. Whenever a relevant number of biomarkers with different variances between cases and controls is involved, QBP has a better sensitivity than the traditional methods, although not always a better specificity when the number of cases and/or controls is low; this was observed in the balanced datasets with $n=100$. Apart from the simulation study, two case studies were analyzed: a major depression disorder dataset and a trisomy dataset. Whereas the traditional methods barely detected any difference between cases and controls in the MDD dataset ($AUC\approx 0.5$), QBP reached an area under the curve of 0.680, which is more than $7.1\%$ and $12.2\%$ higher than the two runners-up RF and XGB, respectively. This superior performance can mainly be ascribed to the fact that most relevant biomarkers in this dataset show differences in distributional characteristics other than just differences in means between cases and controls. When considering the predictive performance of the methods on the trisomy dataset using all biomarkers, it can be concluded that QBP ($AUC=0.908$) performs equally well as LR, PCLR, SVM.Linear and XGB, and significantly better than the other methods. A comparison of QBP and the FMF risk that is used by the RIVM to predict trisomy was performed on a larger dataset with a lower number of biomarkers. It was shown that the classification performance of QBP in terms of the AUC is slightly better than that of the FMF risk ($AUC=0.9249$ and $AUC=0.9151$, respectively). In our simulation study, we only applied normal and log-normal distributions, and did not use other statistical distributions. However, QBP can easily be translated to other continuous statistical distributions, most likely without losing its strength in detecting tail differences. Moreover, note that in the implementation of PCLR, the principal components are selected in the natural order given by their explained variances. Although an alternative method, using a stepwise procedure of selecting principal components based on the conditional likelihood-ratio test, is described as superior aguilera2006using (1), we do not expect the conclusions of this study to change in that case. We did, however, also use PLS-LDA, which creates a sparse representation of the data before applying LDA. Finally, although we currently did not include interactions or other higher-order terms, these could easily be constructed.
Additional research on QBP should be conducted, as the complete set of possible tunable parameters and corresponding settings has not been studied or explored to its full potential. This can be in terms of the number of percentiles and the corresponding proportions, where one could focus on their relation with the sample size. Note that the proportions should be selected with care, especially when dealing with small sample sizes, as extreme proportions will result in less robust percentiles. Furthermore, it could be investigated whether the weights should be equal for all biomarkers or should depend on a certain statistic. For example, biomarkers that differ in variation between cases and controls may receive larger weights, which could be proportional to Levene's test of homogeneity. Thus it is not unlikely that QBP can become even better at predicting cases and selecting relevant biomarkers. Another point of attention is the topic of collinearity, since it could easily inflate the disease scores of QBP. A simple precaution could be to reduce the biomarker weights in case of confounding; however, more sophisticated measures could be developed. At the moment, QBP is limited to binary outcomes and continuous biomarkers. If one wants to include binary covariates such as gender, or use multiple outcome levels, this is not straightforward. For binary covariates, we could for example apply location-scale transformations; especially in datasets that are too small for separate QBP analyses this might be useful. For discrete covariates – which we treated as continuous covariates in the trisomy dataset – a more sophisticated rule based on proportions could be established to improve the performance of QBP. From a computational perspective, the QBP algorithm is currently more computationally intensive than the other classical statistical methods, especially in comparison to (P)LR or LDA. Relative to the machine learning techniques, QBP performs comparably or better. Note that the processing times are particularly high for the techniques that require CV to select the optimal set of tunable parameters. This CV was performed such that each method received exactly the same splits of the training data, ensuring a fair comparison by giving each method the same information to fit a model. Besides the computational efficiency, which could still be improved, a mathematical or theoretical underpinning of QBP is needed to demonstrate its capability. Summarizing, QBP outperforms the considered traditional methods in discriminating cases from controls if the predictor variables show differences in variances between cases and controls. In case only systematic shifts in the mean of normally or log-normally distributed predictor variables are present, QBP is inferior to the traditional methods. For situations with mixtures of shifts in means, variances or other distributional differences, as expected in real life due to complex biological processes, QBP was superior to all methods in the MDD case study and was amongst the best performing methods in the simulation study, together with RF and XGB. There are still numerous settings for which the performance of QBP should be assessed, but we have demonstrated its potential for predicting diseases. Although QBP is currently applied to disease classification, it can be used in all fields involving binary classification with continuous covariates, such as economics, marketing, engineering and the social sciences.
## Acknowledgements The Foundation for Prenatal Screening in Northern Netherlands is gratefully acknowledged for providing the data for the Trisomy case study, enabling us to perform the analysis on a large set of routine clinical screening data. ## References * (1) Ana M Aguilera, Manuel Escabias and Mariano J Valderrama “Using principal components for estimating logistic regression with high-dimensional multicollinear data” In _Computational Statistics & Data Analysis_ 50.8 Elsevier, 2006, pp. 1905–1924 * (2) Bernhard E Boser, Isabelle M Guyon and Vladimir N Vapnik “A training algorithm for optimal margin classifiers” In _Proceedings of the fifth annual workshop on Computational learning theory_ , 1992, pp. 144–152 * (3) Anne-Laure Boulesteix “PLS dimension reduction for classification with microarray data” In _Statistical applications in genetics and molecular biology_ 3.1, 2004, pp. 1075 * (4) Andrew P Bradley “The use of the area under the ROC curve in the evaluation of machine learning algorithms” In _Pattern recognition_ 30.7 Elsevier, 1997, pp. 1145–1159 * (5) Evelyn Bromet et al. “Cross-national epidemiology of DSM-IV major depressive episode” In _BMC medicine_ 9.1 BioMed Central, 2011, pp. 1 * (6) Carolyn S Calfee et al. “Use of risk reclassification with multiple biomarkers improves mortality prediction in acute lung injury” In _Critical care medicine_ 39.4 NIH Public Access, 2011, pp. 711 * (7) Tianqi Chen and Carlos Guestrin “Xgboost: A scalable tree boosting system” In _Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining_ , 2016, pp. 785–794 * (8) Danny Coomans and Désiré Luc Massart “Alternative k-nearest neighbour rules in supervised pattern recognition: Part 1. k-Nearest neighbour classification by using alternative voting rules” In _Analytica Chimica Acta_ 136 Elsevier, 1982, pp. 15–27 * (9) Corinna Cortes and Vladimir Vapnik “Support-vector networks” In _Machine learning_ 20.3 Springer, 1995, pp. 273–297 * (10) Sijmen De Jong “SIMPLS: an alternative approach to partial least squares regression” In _Chemometrics and intelligent laboratory systems_ 18.3 Elsevier, 1993, pp. 251–263 * (11) Brian S Everitt, Sabine Landau, Morven Leese and Daniel Stahl “Miscellaneous clustering methods” In _Cluster analysis_ Wiley, 2011, pp. 215–255 * (12) Peter Filzmoser, Bettina Liebmann and Kurt Varmuza “Repeated double cross validation” In _Journal of Chemometrics_ 23.4 Wiley Online Library, 2009, pp. 160–171 * (13) Jerome Friedman, Trevor Hastie and Robert Tibshirani “Regularization Paths for Generalized Linear Models via Coordinate Descent” In _Journal of Statistical Software_ 33.1, 2010, pp. 1–22 URL: http://www.jstatsoft.org/v33/i01/ * (14) Jerome Friedman, Trevor Hastie and Robert Tibshirani “The elements of statistical learning” Springer series in statistics New York, 2001 * (15) JH Friedman “Stochastic gradient boosting”, Technical report, Department of Statistics, Stanford University, 1999 * (16) Angelos Halaris “Inflammation, heart disease, and depression” In _Current psychiatry reports_ 15.10 Springer, 2013, pp. 1–9 * (17) Arthur E Hoerl and Robert W Kennard “Ridge regression: Biased estimation for nonorthogonal problems” In _Technometrics_ 12.1 Taylor & Francis Group, 1970, pp. 55–67 * (18) David W Hosmer and Stanley Lemeshow “Introduction to the logistic regression model” In _Applied Logistic Regression, Second Edition_ Wiley Online Library, 2000, pp.
1–30 * (19) Man-Jen Hsu, Yuan-Chin Ivan Chang and Huey-Miin Hsueh “Biomarker selection for medical diagnosis using the partial area under the ROC curve” In _BMC research notes_ 7.1 BioMed Central, 2014, pp. 1 * (20) Mike C Jentsch et al. “Biomarker approaches in major depressive disorder evaluated in the context of current hypotheses” In _Biomarkers_ 9.3 Future Medicine, 2015, pp. 277–297 * (21) Nathalie Just “Improving tumour heterogeneity MRI assessment with histograms” In _British journal of cancer_ 111.12 Nature Publishing Group, 2014, pp. 2205–2213 * (22) Shuangge Ma and Jian Huang “Penalized feature selection and classification in bioinformatics” In _Briefings in bioinformatics_ 9.5 Oxford Univ Press, 2008, pp. 392–403 * (23) NA Marigheto, EK Kemsley, M Defernez and RH Wilson “A comparison of mid-infrared and Raman spectroscopies for the authentication of edible oils” In _Journal of the American oil chemists’ society_ 75.8 Springer, 1998, pp. 987–992 * (24) Danh V Nguyen and David M Rocke “Tumor classification by partial least squares using microarray gene expression data” In _Bioinformatics_ 18.1 Oxford Univ Press, 2002, pp. 39–50 * (25) KH Nicolaides “Screening for chromosomal defects” In _Ultrasound in Obstetrics & Gynecology_ 21.4 Wiley Online Library, 2003, pp. 313–321 * (26) World Health Organization “Biomarkers in risk assessment: Validity and validation” WHO, 2001 * (27) Margaret S Pepe et al. “Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design” In _Journal of the National Cancer Institute_ 100.20 Oxford University Press, 2008, pp. 1432–1438 * (28) John C. Platt “Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods” In _ADVANCES IN LARGE MARGIN CLASSIFIERS_ MIT Press, 1999, pp. 61–74 * (29) PCJI Schielen et al. “Multi-centre first-trimester screening for Down syndrome in the Netherlands in routine clinical practice” In _Prenatal diagnosis_ 26.8 Wiley Online Library, 2006, pp. 711–718 * (30) S Shiefa et al. “First trimester maternal serum screening using biochemical markers PAPP-A and free $\beta$-hCG for down syndrome, patau syndrome and edward syndrome” In _Indian Journal of Clinical Biochemistry_ 28.1 Springer, 2013, pp. 3–12 * (31) T. Sing, O. Sander, N. Beerenwinkel and T. Lengauer “ROCR: visualizing classifier performance in R” In _Bioinformatics_ 21.20, 2005, pp. 7881 URL: http://rocr.bioinf.mpi-sb.mpg.de * (32) Suzanne Smit et al. “Assessing the statistical validity of proteomics based biomarkers” In _Analytica Chimica Acta_ 592.2 Elsevier, 2007, pp. 210–217 * (33) Patrik Sobocki, Bengt Jönsson, Jules Angst and Clas Rehnberg “Cost of depression in Europe.” In _The journal of mental health policy and economics_ 9.2, 2006, pp. 87–98 * (34) Ewout W Steyerberg et al. “Assessing the performance of prediction models: a framework for some traditional and novel measures” In _Epidemiology (Cambridge, Mass.)_ 21.1 NIH Public Access, 2010, pp. 128 * (35) Robert Tibshirani “Regression shrinkage and selection via the lasso” In _Journal of the Royal Statistical Society. Series B (Methodological)_ JSTOR, 1996, pp. 267–288 * (36) W. N. Venables and B. D. Ripley “Modern Applied Statistics with S” ISBN 0-387-95457-0 New York: Springer, 2002 URL: http://www.stats.ox.ac.uk/pub/MASS4 * (37) Luciano Vera et al. “Discrimination and sensory description of beers through data fusion” In _Talanta_ 87 Elsevier, 2011, pp. 136–142 * (38) Johan A Westerhuis et al.
“Assessment of PLSDA cross validation” In _Metabolomics_ 4.1 Springer, 2008, pp. 81–89 * (39) Herman Wold “Partial least squares” In _Encyclopedia of statistical sciences_ Wiley Online Library, 1985 * (40) Zhi-Hua Zhou “Ensemble methods: foundations and algorithms” CRC press, 2012 * (41) Hui Zou and Trevor Hastie “Regularization and variable selection via the elastic net” In _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ 67.2 Wiley Online Library, 2005, pp. 301–320
# General Relativity from Einstein-Gauss-Bonnet gravity 1Fabrizio Canfora, 2Adolfo Cisterna, 3Sebastián Fuenzalida, 4Carla Henríquez-Báez, 4Julio Oliva 1Centro de Estudios Científicos (CECs), Casilla 1469, Valdivia, Chile 2Sede Esmeralda, Universidad de Tarapacá, Av. Luis Emilio Recabarren 2477, Iquique, Chile 3Departamento de Física, Universidad Técnica Federico Santa María, Casilla 110-V, Valparaíso, Chile 4Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción, Chile ###### Abstract In this work we show that Einstein gravity in four dimensions can be consistently obtained from the compactification of a generic higher curvature Lovelock theory in dimension $D=4+p$, with $p\geq 1$. The compactification is performed on a direct product space $\mathcal{M}_{D}=\mathcal{M}_{4}\times\mathcal{K}^{p}$, where $\mathcal{K}^{p}$ is a Euclidean internal manifold of constant curvature. The process is carried out in such a way that no fine tuning between the coupling constants is needed. The compactification requires dressing the internal manifold with the flux of suitable $p$-forms whose field strengths are proportional to the volume form of the internal space. We explicitly compactify Einstein-Gauss-Bonnet theory from dimension six to Einstein theory in dimension four and sketch out a similar procedure for this compactification to take place starting from dimension five. Several black string/p-brane solutions are constructed, among which a five dimensional asymptotically flat black string, composed of a Schwarzschild black hole on the brane, is particularly interesting. Finally, the thermodynamics of the solutions is described and we find that the consistent compactification modifies the entropy by including a constant term, which may induce a departure from the usual behavior of the Hawking-Page phase transition. New scenarios are possible in which large black holes dominate the canonical ensemble for all temperatures above the minimal value.††<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ## I Introduction The original Kaluza-Klein scheme Kaluza ; Klein provides, via dimensional reduction to four dimensions, a consistent method to unify gravity and electromagnetism by starting from a purely geometrical higher dimensional theory Overduin:1998pn . In higher dimensions, it is assumed that only gravity exists, described by a five dimensional Einstein-Hilbert action, which gives rise to the electromagnetic force once the spacetime is properly compactified to dimension four. Two fundamental assumptions are made: first, gravity is described by Einstein General Relativity (GR) and, second, the higher dimensional spacetime is empty. Despite the conceptual appeal of the Kaluza-Klein framework, some subtleties emerge from two experimental signatures that are in tension with the simplest compactification approaches: the small, positive value of the four dimensional cosmological constant that we observe today, and the highly accurate particle physics experiments whose consistency requires strict bounds on the size of the extra dimensions. In the cosmological context, this procedure has made it possible to find solutions of the Friedmann-Robertson-Walker class Castillo-Felisola:2016kpe .
In addition, simple compactifications over direct product spaces, namely consistent compactifications with vanishing gauge and dilaton fields, where the internal manifold is of constant curvature, usually suffer from incompatibilities at the level of the field equations Duff:1986hr . A concrete example is the compactification of Einstein theory on a $D=d+p$ dimensional spacetime of the form $\mathcal{M}_{D}=\mathcal{M}_{d}\times\mathcal{K}^{p}$, where $\mathcal{M}_{d}$ is a $d$-dimensional spacetime and $\mathcal{K}^{p}$ an internal $p$-dimensional Euclidean manifold of constant curvature. Compatibility of the field equations implies the vanishing of the Ricci tensor of the internal manifold (for a direct product the Ricci tensor is block diagonal, so the vacuum equations force the internal block to vanish); if the internal manifold is assumed to be of constant curvature, it must then be locally flat and, consequently, its isometry group can only be Abelian. Isometries of the internal manifold govern the symmetry group of the compactified interaction; in consequence, within this ansatz of cylindrical compactification, GR can only be unified with electromagnetism. On the other hand, to accommodate a small non-vanishing four dimensional cosmological constant, it is mandatory to include an internal manifold with a non-vanishing curvature, which forces the radius of the extra dimensions to be large. In consequence, to properly compactify Einstein theory over internal manifolds of small size, with or without a small cosmological constant, it is necessary either to give up on the emptiness of the higher dimensions or to generalize the geometric structure of the theory by including well-posed higher curvature terms in the gravitational action. Including matter fields in higher dimensions seems to go against the very idea of geometrization of interactions behind Kaluza-Klein compactifications. As stated in the 80's, this would still be consistent if there were a guiding principle that fixes the matter content in higher dimensions in a relatively unique manner. Supergravity achieves this goal in eleven dimensions, since its bosonic matter content is uniquely defined by a metric and a fundamental three-form, which cannot be further coupled to other matter multiplets Duff:1986hr . On the other hand, at a practical level, including higher dimensional matter fields has been shown to be fruitful for compactifying Einstein theory. It is known that dressing the internal manifold with fundamental $(p-1)$-forms $A_{[p-1]}$, with a field strength $F_{[p]}=dA_{[p-1]}$ proportional to the volume form of such a space, renders possible compactifications of Einstein gravity Freund:1980xh ; RandjbarDaemi:1982hi . In fact, the $p$-form charge allows for a non-trivial curvature of the internal manifold while at the same time accommodating a small cosmological constant. This dressing protocol is also widely applied in the construction of black holes with $p$-form fields depending on the horizon coordinates, which support a black hole horizon against collapse when deviating from bald solutions Bardoux:2012aw (see also Edery:2015wha , Edery:2018jyp ). In higher dimensions, higher curvature terms, when combined in such a way that the resulting theory of gravity respects general covariance and possesses second order field equations, give rise to Lovelock theory, the natural generalization of Einstein gravity in higher dimensions Lovelock:1971yv . This theory corresponds to an infinite series of higher curvature terms of order $k$, which are non-trivial when $k<D/2$.
In this way, on top of a cosmological term, the Ricci scalar is the first term of the series and the only one that is non-trivial in dimension four. The first dynamical correction of Lovelock gravity to the Einstein-Hilbert action appears in dimension five and is given by a precise combination of quadratic curvature terms known as the Gauss-Bonnet density Garraffo:2008hu , which also appears as an $\alpha^{\prime}$-correction to GR in string theory Zwiebach:1985uq . It is natural to wonder how this model, dubbed Einstein-Gauss-Bonnet (EGB) gravity, compactifies on direct product spacetimes. Einstein-Gauss-Bonnet gravity with a cosmological constant can be easily compactified over an internal manifold of non-vanishing constant curvature of dimension $p$ to a spacetime of dimension $d\geq 5$. The presence of the Gauss-Bonnet density renders the compactification trivial, but at the price of both tuning the cosmological constant with the Gauss-Bonnet coupling and forcing the internal manifold to have a hyperbolic structure, i.e., a negative constant curvature. The compactified theory becomes ill-behaved: the Killing vectors of the hyperbolic internal manifold are not globally defined, and it is then not possible to accommodate the unification of gravity with any other interaction. One might be tempted to dress the internal manifold with $p$-forms; however, we have shown in Cisterna:2020kde that this only eliminates the tuning of the cosmological constant with the Gauss-Bonnet coupling, while the internal manifold remains hyperbolic. To solve these problems it is mandatory to include higher dimensional matter in the shape of $p$-forms non-minimally coupled to the curvature tensor, keeping the second order character of the theory as a guiding principle. Such an interaction, first described by Horndeski in Horndeski:1976gi and later generalized in Feng:2015sbw , provides the only non-minimally coupled gauge invariant electrodynamics with second order field equations that reduces to the Maxwell equations in flat spacetime. This model renders possible a generic compactification of Einstein-Gauss-Bonnet gravity on an internal manifold of positive constant curvature Cisterna:2020kde , a fact that is extendable to any Lovelock theory. Regarding these compactifications, a natural question arises: can a given Lovelock theory be compactified to Einstein gravity in dimension four? In four dimensions Lovelock densities are either topological or identically zero; nevertheless, after the compactification, traces of these geometrical entities survive at the level of the field equations, rendering the resulting field equations incompatible. In Canfora:2008iu it has been shown that Lovelock gravity can be compactified to Einstein theory in dimension four as long as the theory includes at least the cubic term of the Lovelock series, and in consequence with an internal manifold of dimension $p\geq 3$. The price to pay for this spontaneous compactification, beyond the fact that it exists only starting from seven dimensions, is that the cosmological constant and the Gauss-Bonnet and cubic Lovelock couplings are related to each other in a manner that does not lead to a symmetry enhancement, and therefore such a relation is not expected to survive quantum corrections.
In this paper we show how any Lovelock theory, in particular Einstein-Gauss-Bonnet theory, can be compactified to four dimensional Einstein gravity starting from five or six dimensions and without imposing any relation among the model parameters, namely for a generic Lovelock theory. To accomplish this we employ the same strategy as in Cisterna:2020kde and we show that there is no need to consider any interaction of order higher than two in the curvature. The article is organized as follows: Section II introduces our model and the corresponding field equations. In Section III we show how to use our model to compactify Einstein-Gauss-Bonnet theory to Einstein gravity in dimension four by considering a two-dimensional internal manifold of constant curvature. In Section IV we provide a method for this compactification to take place starting from five dimensions. Section V studies the thermodynamic features of the corresponding anti-de Sitter black brane solutions of Sections III and IV. We conclude in Section VI. ## II Theory and field equations We start by considering the Lovelock Lagrangian of order $k$ in $D$ dimensions $\displaystyle\mathcal{L}_{Lovelock}[g]$ $\displaystyle=\sum^{[(D-1)/2]}_{k=0}\alpha_{k}\mathcal{L}^{k},$ (1) where the $\alpha_{k}$ are the Lovelock couplings and $\mathcal{L}^{k}$ the Euler densities defined by $\displaystyle\mathcal{L}^{k}=\frac{1}{2^{k}}\delta^{A_{1}\cdots A_{2k}}_{B_{1}\cdots B_{2k}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}},$ (2) where $\delta^{A_{1}\cdots A_{2k}}_{B_{1}\cdots B_{2k}}$ is the anti-symmetric generalized Kronecker delta. Each Euler density contributes non-trivially to the field equations for $k\leq\left[\frac{(D-1)}{2}\right]$; otherwise it is either topological or identically zero. Notice that we have parameterized the sum in (1) in such a way that in even dimensions the topological Euler density is not present. We will perform compactifications of (1) from a $D=d+p$ dimensional direct product spacetime $\mathcal{M}_{D}$ to a $d$-dimensional spacetime $\mathcal{M}_{d}$, where the internal manifold $\mathcal{K}^{p}$ is a Euclidean $p$-dimensional space of constant curvature. As previously explained, in order to perform the compactification to Einstein gravity in dimension four it is necessary to include suitable non-minimal couplings between curvature tensors and $p$-form fields. The corresponding theory Feng:2015sbw is constructed in complete analogy with the Lovelock Lagrangian, and it is built in terms of a polynomial invariant made of curvature tensors and the field strength of the corresponding $p$-form fields. We start by introducing the bi-linear combination $\displaystyle Z^{A_{1}\cdots A_{p}}{}_{B_{1}\cdots B_{p}}$ $\displaystyle:=F^{A_{1}\cdots A_{p}}F_{B_{1}\cdots B_{p}},$ (3) where $F_{[p]}=dA_{[p-1]}$, so the new interaction can be defined as $\displaystyle\mathcal{L}_{p-forms}\left[g,A_{[p-1]}\right]=\sum^{[(D-1)/p]}_{n=1}\sum^{[(D-np)/2]}_{k=0}\frac{\beta_{k}}{2^{k}(p!)^{n}}$ $\displaystyle\delta^{A_{1}\cdots A_{2k}C_{1}^{1}\cdots C_{1}^{p}\cdots C_{n}^{1}\cdots C_{n}^{p}}_{B_{1}\cdots B_{2k}D_{1}^{1}\cdots D_{1}^{p}\cdots D_{n}^{1}\cdots D_{n}^{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}$ $\displaystyle\times Z^{D_{1}^{1}\cdots D_{1}^{p}}{}_{C_{1}^{1}\cdots C_{1}^{p}}\cdots Z^{D_{n}^{1}\cdots D_{n}^{p}}{}_{C_{n}^{1}\cdots C_{n}^{p}},$ (4) where the $\beta_{k}$ are coupling constants.
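For orientation (a standard evaluation of (2), spelled out here for convenience), the lowest Euler densities are $\mathcal{L}^{0}=1$, $\mathcal{L}^{1}=\frac{1}{2}\delta^{A_{1}A_{2}}_{B_{1}B_{2}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}=R$ and $\mathcal{L}^{2}=\frac{1}{4}\delta^{A_{1}\cdots A_{4}}_{B_{1}\cdots B_{4}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}=R^{2}-4R_{AB}R^{AB}+R_{ABCD}R^{ABCD}$, so that $k=0,1$ reproduce the cosmological and Einstein-Hilbert terms, while $k=2$ is the Gauss-Bonnet density appearing in the action (11) below.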
The field equations are of second order, which can be proved using the Bianchi identity satisfied by the $p$-forms $\displaystyle\nabla_{\left[A_{1}\right.}F_{\left.B_{1}\cdots B_{p}\right]}=\nabla^{\left[A_{1}\right.}F^{\left.B_{1}\cdots B_{p}\right]}=0$ (5) as well as the following identity on the $Z$ tensor $\displaystyle\nabla_{\left[D_{1}\right.}\nabla^{\left[C_{1}\right.}Z^{\left.A_{1}\cdots A_{p}\right]}{}_{\left.B_{1}\cdots B_{p}\right]}$ $\displaystyle=\nabla_{\left[D_{1}\right.}F^{\left[A_{1}\cdots A_{p}\right.}\nabla^{\left.C_{1}\right]}F_{\left.B_{1}\cdots B_{p}\right]}+p(-1)^{p}F^{\left[A_{1}\cdots A_{p}\right.}R^{E}{}_{\left[B_{1}D_{1}\right.}{}^{\left.C_{1}\right]}F_{\left.B_{2}\cdots B_{p}\right]E}.$ (6) Note that the order $k$ in (4) can be set to zero, so that only $Z$ tensors appear in the theory. Such a model, dubbed quasitopological electromagnetism, has recently been explored in the context of spherically symmetric black holes Liu:2019rib ; Cisterna:2020rkc ; Cano:2020qhy . As a further guiding principle, we require the matter field equations to be linear in the $p$-form field, and therefore we restrict to Lagrangians that are quadratic in $F_{[p]}$ in (4) and therefore linear in $Z$; namely, we keep as non-vanishing only the contributions coming from the term with $n=1$ in (4). The metric field equations of (1) supplemented with (4) consequently take the form $\displaystyle\sum^{[(D-1)/2]}_{k=0}\alpha_{k}E^{\left(k\right)}_{AB}-\sum^{[(D-p)/2]}_{k=0}\beta_{k}T^{\left(k,1\right)}_{AB,p}$ $\displaystyle=0,$ (7) where $E^{\left(k\right)}_{AB}$ is the Lovelock tensor of order $k$ in the curvature, $\displaystyle E^{\left(k\right)}_{AB}$ $\displaystyle=-\frac{1}{2^{k+1}}g_{\left(A\right|C}\delta^{CA_{1}\cdots A_{2k}}_{\left|B\right)B_{1}\cdots B_{2k}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}$ (8) and $T^{\left(k,1\right)}_{AB,p}$ is the corresponding energy-momentum tensor associated with (4), $\displaystyle T^{\left(k,1\right)}_{AB,p}=$ $\displaystyle\frac{1}{2^{k+1}p!}g_{AB}\delta^{A_{1}\cdots A_{2k}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{2k}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{k}{2^{k}p!}\delta^{A_{1}A_{2}\cdots A_{2k}C_{1}\cdots C_{p}}_{B_{1}\left(A\right|\cdots B_{2k}D_{1}\cdots D_{p}}R^{B_{1}}{}_{\left|B\right)A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle+\frac{2k}{2^{k}p!}\delta^{A_{1}\cdots A_{2k}C_{1}\cdots C_{p}}_{\left(A\right|\cdots B_{2k}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}\nabla_{A_{1}}F^{D_{1}\cdots D_{p}}\nabla^{B_{2}}F_{C_{1}\cdots C_{p}}$ $\displaystyle+\frac{2pk}{2^{k}p!}\delta^{A_{1}\cdots A_{2k}C_{1}\cdots C_{p}}_{\left(A\right|\cdots B_{2k}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}R^{D_{1}}{}_{E}{}^{B_{2}}{}_{A_{1}}Z^{ED_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{p}{2^{k}p!}\delta^{A_{1}\cdots A_{2k}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{2k}\left(A\right|\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}Z_{\left|B\right)}{}^{D_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}},$ (9) while for the gauge field we have $\displaystyle\sum^{[(D-p)/2]}_{k=0}\frac{\beta_{k}}{2^{k-1}}\delta^{A_{1}\cdots
A_{2k}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{2k}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}\nabla^{D_{1}}F_{C_{1}\cdots C_{p}}=0.$ (10) In the next section we address the compactification of Einstein-Gauss-Bonnet gravity from dimension $D=d+p$ to Einstein theory in dimension four, paying particular attention to the six dimensional case. ## III Compactifying Einstein-Gauss-Bonnet gravity to Einstein theory in dimension four Let us now address the compactification of Einstein-Gauss-Bonnet theory from dimension $d+p$ to Einstein gravity in dimension four, for generic values of the couplings. We will provide the general analysis for an arbitrary dimension $d$ and set $d=4$ when required. For the compactification to exist, our interaction (4) should contain all terms up to quadratic order in the Riemann tensor. This imposes the following action principle $\displaystyle I\left[g,A_{\left[p-1\right]}\right]=$ $\displaystyle\int\sqrt{-g}d^{d+p}x\left(R-2\Lambda+\frac{\alpha_{2}}{4}\delta^{A_{1}\cdots A_{4}}_{B_{1}\cdots B_{4}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}-\frac{1}{2p}Z^{C_{1}\cdots C_{p}}{}_{C_{1}\cdots C_{p}}\right.$ $\displaystyle\left.+\frac{\beta_{1}}{2p!}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{B_{1}B_{2}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}\right.$ $\displaystyle\left.+\frac{\beta_{2}}{4p!}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{4}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}\right).$ (11) Notice that when $d+p=6$, this is the most general combination of the Lovelock-like form leading to a linear matter equation. Here the couplings $\alpha_{2}$ and $\beta_{1}$ have mass dimension $-2$, while $\beta_{2}$ has mass dimension $-4$. Varying the functional (11) with respect to the metric field we obtain $\displaystyle G_{AB}+\Lambda g_{AB}+\alpha_{2}H_{AB}$ $\displaystyle=T^{\left(0,1\right)}_{AB}+\beta_{1}T^{\left(1,1\right)}_{AB}+\beta_{2}T^{\left(2,1\right)}_{AB},$ (12) where $G_{AB}$ and $H_{AB}$ are respectively the Einstein and Gauss-Bonnet tensors $\displaystyle G_{AB}$ $\displaystyle=R_{AB}-\frac{1}{2}g_{AB}R,$ (13) $\displaystyle H_{AB}$ $\displaystyle=2RR_{AB}-4R_{AC}R^{C}{}_{B}-4R^{CD}R_{ACBD}+2R_{ACDE}R_{B}{}^{CDE}-\frac{1}{2}g_{AB}\mathcal{GB},$ (14) with $\mathcal{GB}=R^{2}-4R_{AB}R^{AB}+R_{ABCD}R^{ABCD}$ corresponding to the Gauss-Bonnet density.
In addition, the first energy-momentum tensor $T^{\left(0,1\right)}_{AB}$ gives the dynamics of the minimally coupled contribution $\displaystyle T^{\left(0,1\right)}_{AB,p}=$ $\displaystyle\frac{1}{2}Z_{B}{}{}^{C_{2}\cdots C_{p}}{}_{AC_{2}\cdots C_{p}}-\frac{1}{4p}g_{AB}Z^{C_{1}\cdots C_{p}}{}_{C_{1}\cdots C_{p}},$ (15) while the non-minimally coupled sectors contribute as $\displaystyle T^{\left(1,1\right)}_{AB,p}=$ $\displaystyle\frac{1}{4p!}g_{AB}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{B_{1}B_{2}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}-\frac{p}{2p!}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{B_{1}B_{2}\left(A\right|\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}Z_{\left|B\right)}{}^{D_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{1}{2p!}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{B_{1}\left(A\right|D_{1}\cdots D_{p}}R^{B_{1}}{}_{\left|B\right)A_{1}A_{2}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}+\frac{1}{p!}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{\left(A\right|B_{2}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}\nabla_{A_{1}}F^{D_{1}\cdots D_{p}}\nabla^{B_{2}}F_{C_{1}\cdots C_{p}}$ $\displaystyle+\frac{p}{p!}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{\left(A\right|B_{2}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}R^{D_{1}}{}_{E}{}^{B_{2}}{}_{A_{1}}Z^{ED_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ (16) and $\displaystyle T^{\left(2,1\right)}_{AB,p}=$ $\displaystyle\frac{1}{8p!}g_{AB}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{4}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{p}{4p!}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{4}\left(A\right|\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}Z_{\left|B\right)}{}^{D_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{1}{2p!}\delta^{A_{1}A_{2}\cdots A_{4}C_{1}\cdots C_{p}}_{B_{1}\left(A\right|\cdots B_{4}D_{1}\cdots D_{p}}R^{B_{1}}{}_{\left|B\right)A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}Z^{D_{1}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}$ $\displaystyle+\frac{1}{p!}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{\left(A\right|\cdots B_{4}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}\nabla_{A_{1}}F^{D_{1}\cdots D_{p}}\nabla^{B_{2}}F_{C_{1}\cdots C_{p}}$ $\displaystyle+\frac{p}{p!}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{\left(A\right|\cdots B_{4}D_{1}\cdots D_{p}}g_{A_{2}\left|B\right)}R^{D_{1}}{}_{E}{}^{B_{2}}{}_{A_{1}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}Z^{ED_{2}\cdots D_{p}}{}_{C_{1}\cdots C_{p}}.$ (17) By virtue of the identities (5) and (6) we clearly see that the energy-momentum tensors (16)-(17) are cast in a manifestly second-order fashion.
On the other hand, variations with respect to the gauge field deliver the following second order Maxwell-like equation $\displaystyle(p-1)!\nabla^{D_{1}}F_{D_{1}\cdots D_{p}}-\beta_{1}\delta^{A_{1}A_{2}C_{1}\cdots C_{p}}_{B_{1}B_{2}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\nabla^{D_{1}}F_{C_{1}\cdots C_{p}}$ $\displaystyle-\frac{\beta_{2}}{2}\delta^{A_{1}\cdots A_{4}C_{1}\cdots C_{p}}_{B_{1}\cdots B_{4}D_{1}\cdots D_{p}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}\nabla^{D_{1}}F_{C_{1}\cdots C_{p}}=0.$ (18) In order to proceed with the compactification on $\mathcal{M}_{D}=\mathcal{M}_{d}\times\mathcal{K}^{p}$, we consider the following direct product metric $\displaystyle ds^{2}$ $\displaystyle=g_{AB}dx^{A}dx^{B}=\tilde{g}_{\mu\nu}\left(y\right)dy^{\mu}dy^{\nu}+\hat{g}_{ij}\left(z\right)dz^{i}dz^{j}.$ (19) Here $\tilde{g}_{\mu\nu}dy^{\mu}dy^{\nu}$ stands for the $d$-dimensional spacetime manifold $\mathcal{M}_{d}$ while $\hat{g}_{ij}\left(z\right)dz^{i}dz^{j}$ represents a $p$-dimensional Euclidean manifold $\mathcal{K}^{p}$ $\displaystyle\hat{g}_{ij}\left(z\right)dz^{i}dz^{j}$ $\displaystyle=\frac{d\vec{z}\cdot d\vec{z}}{\left(1+\frac{\gamma}{4}\sum^{p}_{j=1}z^{2}_{j}\right)^{2}}$ (20) of constant curvature, i.e., $\hat{R}_{ijkl}=\gamma(\hat{g}_{ik}\hat{g}_{jl}-\hat{g}_{il}\hat{g}_{jk})$ (21) with $\gamma$ defining the corresponding curvature radius through $R_{0}^{2}=|\gamma|^{-1}$. From now on, quantities with a tilde are intrinsically defined on the $d$-dimensional spacetime (the brane), while quantities with a hat refer to the internal manifold of dimension $p$. In accordance with our dressing approach, the $A_{[p-1]}$ gauge field lives on the internal manifold and in consequence we take its field strength to be proportional to the volume form of $\mathcal{K}^{p}$ $\displaystyle\hat{F}_{i_{1}\cdots i_{p}}$ $\displaystyle=\frac{q_{m}}{\left(1+\frac{\gamma}{4}\sum^{p}_{j=1}z^{2}_{j}\right)^{p}}\hat{\epsilon}_{i_{1}\cdots i_{p}},$ (22) where $q_{m}$ plays the role of a generalized magnetic charge and $\hat{\epsilon}_{i_{1}\cdots i_{p}}=\delta^{z_{1}\ldots z_{p}}_{i_{1}\ldots i_{p}}$ is the antisymmetrized Kronecker delta. This choice immediately provides a solution of the gauge field equation (18): the prefactor in (22) is precisely $\sqrt{\hat{g}}$ for the metric (20), so $\hat{F}_{[p]}$ is $q_{m}$ times the covariantly constant volume form of $\mathcal{K}^{p}$; hence $\nabla_{A}F_{B_{1}\cdots B_{p}}=0$ and every term in (18) vanishes identically, for arbitrary couplings $\beta_{k}$. We are left then with the Einstein equations. As the spacetime is a direct product of two manifolds, the metric field equations split into the field equations on the brane and on the internal manifold, yielding $\displaystyle\left(\alpha_{2}+\beta_{2}q^{2}_{m}p!\right)\tilde{H}_{\mu\nu}+\left[1+2\alpha_{2}\gamma p\left(p-1\right)+\beta_{1}q^{2}_{m}p!\right]\tilde{G}_{\mu\nu}$ $\displaystyle+\left[-\frac{\gamma}{2}p\left(p-1\right)+\Lambda-\frac{\alpha_{2}}{2}\gamma^{2}p\left(p-1\right)\left(p-2\right)\left(p-3\right)+\frac{q^{2}_{m}}{4}\left(p-1\right)!\right]\tilde{g}_{\mu\nu}=0,$ (23) and $\displaystyle\left(-\frac{\alpha_{2}}{2}\hat{g}_{ij}+\frac{\beta_{2}}{2}q^{2}_{m}p!\hat{g}_{ij}\right)\tilde{\mathcal{GB}}_{d}+\left[-\frac{1}{2}\hat{g}_{ij}-\alpha_{2}\gamma\left(p-1\right)\left(p-2\right)\hat{g}_{ij}+\frac{\beta_{1}}{2}q^{2}_{m}p!\hat{g}_{ij}\right]\tilde{R}_{d}$ $\displaystyle+\left[-\frac{\gamma}{2}\left(p-1\right)\left(p-2\right)\hat{g}_{ij}+\Lambda\hat{g}_{ij}-\frac{\alpha_{2}}{2}\gamma^{2}\left(p-1\right)\left(p-2\right)\left(p-3\right)\left(p-4\right)\hat{g}_{ij}-\frac{q^{2}_{m}}{4}\left(p-1\right)!\hat{g}_{ij}\right]=0.$ (24) When compactifying, most incompatibilities emerge from these field equations.
In consequence, to further analyze a possible incompatibility, we trace both equations, obtaining $\displaystyle\left(\alpha_{2}+\beta_{2}q^{2}_{m}p!\right)\left(4-d\right)\tilde{\mathcal{GB}}_{d}+\left[1+2\alpha_{2}\gamma p\left(p-1\right)+\beta_{1}q^{2}_{m}p!\right]\left(2-d\right)\tilde{R}_{d}+\left[-\gamma dp\left(p-1\right)+2d\Lambda\right.$ $\displaystyle\left.-\alpha_{2}\gamma^{2}dp\left(p-1\right)\left(p-2\right)\left(p-3\right)+\frac{q^{2}_{m}}{2}d\left(p-1\right)!\right]=0,$ (25) and $\displaystyle\left(-\alpha_{2}p+\beta_{2}q^{2}_{m}pp!\right)\tilde{\mathcal{GB}}_{d}+\left[-p-2\alpha_{2}\gamma p\left(p-1\right)\left(p-2\right)+\beta_{1}q^{2}_{m}pp!\right]\tilde{R}_{d}+\left[-\gamma p\left(p-1\right)\left(p-2\right)+2p\Lambda\right.$ $\displaystyle\left.-\alpha_{2}\gamma^{2}p\left(p-1\right)\left(p-2\right)\left(p-3\right)\left(p-4\right)-\frac{q^{2}_{m}}{2}p\left(p-1\right)!\right]=0.$ (26) To demonstrate the net effect of the inclusion of our interaction (4), we explicitly compactify from six dimensions to Einstein gravity in dimension four, i.e., $d=4$ and $p=2$. Both traces then become $\displaystyle\left(2\beta_{1}q^{2}_{m}+4\gamma\alpha_{2}+1\right)\tilde{R}_{4}+\left(4\gamma-4\Lambda-q^{2}_{m}\right)$ $\displaystyle=0,$ (27) $\displaystyle\left(4\beta_{2}q^{2}_{m}-2\alpha_{2}\right)\tilde{\mathcal{GB}}_{4}+\left(4\beta_{1}q^{2}_{m}-2\right)\tilde{R}_{4}+\left(4\Lambda-q^{2}_{m}\right)$ $\displaystyle=0,$ (28) whose compatibility is ensured by the relations $\displaystyle q^{2}_{m}=\frac{\alpha_{2}}{2\beta_{2}},\quad\gamma=-\frac{1}{4}\frac{24\Lambda\alpha_{2}\beta_{1}\beta_{2}-8\Lambda\beta_{2}^{2}+\alpha_{2}^{2}\beta_{1}-3\alpha_{2}\beta_{2}}{\beta_{2}\left(8\Lambda\alpha_{2}\beta_{2}-\alpha_{2}^{2}-4\alpha_{2}\beta_{1}+4\beta_{2}\right)},$ (29) which fix the magnetic charge, an integration constant, in terms of the couplings $\alpha_{2}$ and $\beta_{2}$, and which also fix the compactification curvature $\gamma$. A few comments are in order. First, the presence of $\beta_{2}$, which controls the $k=2$ term in (4), is mandatory: although $\mathcal{GB}_{4}$ is topological in $d=4$, its presence at the level of the field equations would otherwise render the compactification impossible; indeed, it constrains the system in such a way that no general relativity solutions are allowed (see also Kastor:2006vw and Giribet:2006ec ). Second, we observe that $\alpha_{2}$ and $\beta_{2}$ are related through the magnetic charge, an integration constant that can take any value, and in consequence there is no tuning among the couplings of the theory. This is in stark contrast to what happens in Canfora:2008iu , where for the compactification to exist a cubic Lovelock interaction must be included, whose coupling $\alpha_{3}$ is directly fixed in terms of $\alpha_{2}$ and $\Lambda$. Third, this compactification can be performed from dimension six due to the fact that the terms in (4) are not topological and in consequence contribute non-trivially in the critical dimension, in contrast to the cubic Lovelock interaction, which is non-topological only starting from dimension seven. Finally, it is interesting to notice that in a scenario in which the higher curvature terms arise as corrections proper to an effective field theory, $\alpha_{2}\sim M^{-2}\sim\beta_{1}$ while $\beta_{2}\sim M^{-4}$, where $M$ is the energy scale that defines the effective approach, namely physical quantities must be expanded in the limit of large $M$.
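The algebra behind (29) is straightforward but tedious; the following sketch (assuming sympy is available, with variable names of our own choosing) verifies symbolically that for $q_{m}^{2}=\alpha_{2}/(2\beta_{2})$ the coefficient of $\tilde{\mathcal{GB}}_{4}$ in (28) cancels, and that the quoted value of $\gamma$ then makes (27) and (28) compatible:

```python
import sympy as sp

a2, b1, b2, Lam, g, qm2, R4, GB4 = sp.symbols('a2 b1 b2 Lam g qm2 R4 GB4')

# Trace equations (27) and (28) for d=4, p=2.
eq27 = (2*b1*qm2 + 4*g*a2 + 1)*R4 + (4*g - 4*Lam - qm2)
eq28 = (4*b2*qm2 - 2*a2)*GB4 + (4*b1*qm2 - 2)*R4 + (4*Lam - qm2)

# Claimed solution (29).
qm2_sol = a2/(2*b2)
g_sol = -sp.Rational(1, 4) * (
    24*Lam*a2*b1*b2 - 8*Lam*b2**2 + a2**2*b1 - 3*a2*b2
) / (b2*(8*Lam*a2*b2 - a2**2 - 4*a2*b1 + 4*b2))

# With q_m^2 fixed, the GB4 coefficient vanishes; solve (28) for R4...
R4_sol = sp.solve(eq28.subs(qm2, qm2_sol), R4)[0]

# ...and check that (27) is then satisfied identically.
print(sp.simplify(eq27.subs({qm2: qm2_sol, g: g_sol, R4: R4_sol})))  # 0
```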
From the second relation in (29), one can show that the compactification radius satisfies $R_{0}^{2}\sim M^{-2}$, and therefore the perturbative scheme is indeed consistent with having a perturbatively small, compact extra dimension. Therefore, after fixing the radius of compactification as well as the value of the magnetic charge as in (29), one consistently obtains an Einstein equation induced on the brane, which can be read from (23), leading to $\frac{1}{16\pi G_{\text{eff}}}\left(G_{\mu\nu}+\Lambda_{\text{eff}}g_{\mu\nu}\right)=0\ ,$ (30) where we have defined the effective Newton and cosmological constants respectively as $\displaystyle G_{\text{eff}}$ $\displaystyle=\frac{1}{32\pi}\frac{\beta_{2}(8\Lambda\alpha_{2}\beta_{2}-\alpha_{2}^{2}-4\alpha_{2}\beta_{1}+4\beta_{2})}{(\beta_{2}-\alpha_{2}\beta_{1})(\alpha_{2}^{2}+2\alpha_{2}\beta_{1}+8\Lambda\alpha_{2}\beta_{2}+2\beta_{2})}\ ,\ \Lambda_{\text{eff}}=\frac{(8\Lambda\beta_{2}-\alpha_{2})}{16(\beta_{2}-\alpha_{2}\beta_{1})}\ .$ (31) The field equations are now solvable, for example, by the following $6$-dimensional black 2-brane (static spherically symmetric case) $ds^{2}=-\left(\frac{r^{2}}{l^{2}_{\text{eff}}}-\frac{\mu}{r}+K\right)dt^{2}+\frac{dr^{2}}{\left(\frac{r^{2}}{l^{2}_{\text{eff}}}-\frac{\mu}{r}+K\right)}+r^{2}d\Sigma^{2}_{K}+\frac{dz_{1}^{2}+dz_{2}^{2}}{[1+\frac{\gamma}{4}(z_{1}^{2}+z_{2}^{2})]^{2}}$ (32) with $K$ the curvature of the transverse manifold $d\Sigma_{K}^{2}$, $\mu$ an integration constant related to the ADM mass, and $l_{\text{eff}}^{2}=-3/\Lambda_{\text{eff}}$ an effective (A)dS radius $l_{\text{eff}}^{-2}=\frac{1}{48}\left(\frac{8\Lambda\beta_{2}-\alpha_{2}}{\alpha_{2}\beta_{1}-\beta_{2}}\right).$ (33) The four dimensional brane is then given by a Schwarzschild (A)dS black hole with an effective cosmological constant, i.e., an Einstein solution. From the latter equation, one can see that for perturbative higher curvature couplings, the effective four-dimensional cosmological constant turns out to be large, which in the case of de Sitter solutions is in tension with a direct application of this scenario to late time cosmology. ## IV The scalar case: One dimensional internal manifold In order to compactify Einstein-Gauss-Bonnet gravity starting from dimension five it is necessary to consider a one dimensional internal manifold, and in consequence the dressing must be given by $0$-forms, i.e., scalar fields. This kind of dressing has been successfully applied to construct homogeneous anti-de Sitter black string/p-brane solutions in general relativity and Lovelock theories Cisterna:2017qrb ; Cisterna:2018jsx ; Cisterna:2018mww ; Arratia:2020hoy ; Cisterna:2019scr . Analogously to the case previously considered, in which the fundamental forms were proportional to the volume form of the internal manifold, the scalars are required to be linear in the internal manifold coordinate, providing in this way an immediate solution to the corresponding Klein-Gordon equation. Our interaction (4) is modified in such a manner that now the curvature tensors are coupled with a specific combination of first derivatives of the scalars.
We then consider $\mathcal{L}_{scalar}[g,\phi]=-\frac{1}{2^{2k+1}}\delta^{CA_{1}\cdots A_{2k}}_{DB_{1}\cdots B_{2k}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}\cdots R^{B_{2k-1}B_{2k}}{}{}_{A_{2k-1}A_{2k}}\nabla_{C}\phi^{i}\nabla^{D}\phi^{i}$ (34) which is equivalent to a generalized non-minimal kinetic coupling controlled by the Lovelock tensor of order $k$, $\mathcal{L}_{scalar}[g,\phi^{i}]=-\frac{1}{2^{2k+1}}E^{CD}_{(k)}\nabla_{C}\phi^{i}\nabla_{D}\phi^{i}.$ (35) In complete analogy with the previous section we truncate our theory up to order $k=2$; our action principle is then written as $\displaystyle I\left[g,\phi,\psi\right]=$ $\displaystyle\int\sqrt{-g}d^{5}x\left[R-2\Lambda+\frac{\alpha_{2}}{4}\delta^{A_{1}\cdots A_{4}}_{B_{1}\cdots B_{4}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}-\frac{1}{2}g^{AB}\nabla_{A}\phi\nabla_{B}\phi\right.$ $\displaystyle\left.+\frac{\beta_{1}}{8}G^{AB}\nabla_{A}\phi\nabla_{B}\phi+\frac{\gamma_{1}}{64}H^{AB}\nabla_{A}\phi\nabla_{B}\phi-\frac{1}{2}g^{AB}\nabla_{A}\psi\nabla_{B}\psi+\frac{\beta_{2}}{8}G^{AB}\nabla_{A}\psi\nabla_{B}\psi\right.$ $\displaystyle\left.+\frac{\gamma_{2}}{64}H^{AB}\nabla_{A}\psi\nabla_{B}\psi\right]$ (36) where, in order to avoid any fine tuning between the couplings, one has to introduce at least two different scalar fields (as the computation below shows). This is not necessary in the previous six dimensional case because the curvature of the internal manifold provides an extra scale that prevents the appearance of such relations. The resulting theory belongs to a sector of Horndeski gravity, the most general scalar-tensor theory with second order field equations Horndeski:1974wa . The kinetic coupling to the Einstein tensor corresponds to a quadratic sector of Horndeski gravity in dimension four, while the coupling with the Gauss-Bonnet tensor appears naturally in dimensions greater than or equal to five.
Taking the variation with respect to the metric, we obtain $\displaystyle G_{AB}+\Lambda g_{AB}+\alpha_{2}H_{AB}$ $\displaystyle=T^{\left(0\right)}_{\phi_{i}AB}+\beta_{i}T^{\left(1\right)}_{\phi_{i}AB}+\gamma_{i}T^{\left(2\right)}_{\phi_{i}AB},$ (37) where $\displaystyle T^{(0)}_{\phi_{i}AB}$ $\displaystyle=\frac{1}{2}\left(\partial_{A}\phi_{i}\partial_{B}\phi_{i}-\frac{1}{2}g_{AB}\partial_{C}\phi_{i}\partial^{C}\phi_{i}\right),$ (38) $\displaystyle T^{\left(1\right)}_{\phi_{i}AB}$ $\displaystyle=\frac{1}{2}\left(\frac{1}{2}\partial_{A}\phi_{i}\partial_{B}\phi_{i}R-2\partial_{C}\partial_{(A}\phi_{i}R_{B)}^{\,\,C}-\partial_{C}\phi_{i}\partial_{D}\phi_{i}R_{A\,\,\,B}^{\,\,\,C\,\,\,D}-\nabla_{A}\nabla^{C}\phi_{i}\nabla_{B}\nabla_{C}\phi_{i}+\nabla_{A}\nabla_{B}\phi_{i}\square\phi_{i}\right.$ $\displaystyle\left.+\frac{1}{2}G_{AB}\left(\partial\phi_{i}\right)^{2}-g_{AB}\left[-\frac{1}{2}\nabla^{C}\nabla^{D}\phi_{i}\nabla_{C}\nabla_{D}\phi_{i}+\frac{1}{2}\left(\square\phi_{i}\right)^{2}-\partial_{C}\phi_{i}\partial_{D}\phi_{i}R^{CD}\right]\right)\ ,$ (39) $\displaystyle T^{(2)}_{\phi_{i}AB}=$ $\displaystyle-\frac{1}{64}g_{AB}\nabla_{C}\phi_{i}\nabla^{D}\phi_{i}\delta^{CA_{1}\cdots A_{4}}_{DB_{1}\cdots B_{4}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}+\frac{1}{32}\nabla_{C}\phi_{i}\nabla_{\left(A\right|}\phi_{i}\delta^{CA_{1}\cdots A_{4}}_{\left|B\right)B_{1}\cdots B_{4}}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}B_{4}}{}{}_{A_{3}A_{4}}$ $\displaystyle+\frac{1}{16}\nabla_{C}\phi_{i}\nabla^{D}\phi_{i}\delta^{CA_{1}\cdots A_{4}}_{DB_{1}\cdots\left(A\right|}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R^{B_{3}}{}_{\left|B\right)A_{3}A_{4}}+\frac{1}{8}\nabla^{B_{3}}\nabla_{C}\phi_{i}\nabla_{A_{3}}\nabla^{D}\phi_{i}\delta^{CA_{1}\cdots A_{4}}_{DB_{1}\cdots\left(B\right|}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}g_{\left|A\right)A_{4}}$ $\displaystyle+\frac{1}{16}\nabla_{C}\phi_{i}\nabla^{E}\phi_{i}\delta^{CA_{1}\cdots A_{4}}_{DB_{1}\cdots\left(B\right|}R^{B_{1}B_{2}}{}{}_{A_{1}A_{2}}R_{A_{3}E}{}{}^{B_{3}D}g_{\left|A\right)A_{4}}\ .$ (40) where $\phi_{i}$ stands for the scalar fields, with $i=1,2$. On the other hand, the scalar fields fulfil $\displaystyle\left(g^{AB}-\frac{\beta_{i}}{4}G^{AB}-\frac{\gamma_{i}}{32}H^{AB}\right)\nabla_{A}\nabla_{B}\phi^{i}$ $\displaystyle=0.$ (41) To start with, we take a five dimensional product space $\mathcal{M}_{D}=\mathcal{M}_{4}\times\mathbb{R}$, whose metric, along the lines of (19), can be written as $\displaystyle ds^{2}=d\tilde{s}_{4}^{2}+dz^{2}\ .$ (42) As anticipated, each scalar field is linear in the internal manifold coordinate $\phi(z)=\lambda_{0}z,\hskip 14.22636pt\psi(z)=\lambda_{1}z,$ (43) where the integration constants $\lambda_{0}$ and $\lambda_{1}$ play the role of scalar charges; these profiles immediately solve the corresponding Klein-Gordon equations, as the short computation below shows.
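The reason the linear profiles (43) solve (41) for arbitrary couplings is worth making explicit (a one-line check of ours): for the direct product metric (42) one has $\Gamma^{z}{}_{AB}=0$, since $g_{zz}=1$ and $\tilde{g}_{\mu\nu}$ does not depend on $z$, and hence $\nabla_{A}\nabla_{B}\phi=\partial_{A}\partial_{B}(\lambda_{0}z)-\Gamma^{C}{}_{AB}\,\lambda_{0}\,\delta^{z}_{C}=-\lambda_{0}\Gamma^{z}{}_{AB}=0$, and likewise for $\psi$. Every term in (41) is therefore annihilated separately, independently of the values of $\beta_{i}$ and $\gamma_{i}$.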
Computing the field equations on the brane and on the extra coordinate and tracing them, we get $\displaystyle\left(4\Lambda+\lambda_{0}^{2}+\lambda_{1}^{2}\right)-\left(1-\frac{\beta_{1}}{4}\lambda_{0}^{2}-\frac{\beta_{2}}{4}\lambda_{1}^{2}\right)\tilde{R}$ $\displaystyle=0$ (44) and $\displaystyle\left(-2\Lambda+\frac{\lambda_{0}^{2}}{2}+\frac{\lambda_{1}^{2}}{2}\right)+\left(1+\frac{\beta_{1}}{4}\lambda^{2}_{0}+\frac{\beta_{2}}{4}\lambda^{2}_{1}\right)\tilde{R}+\left(\alpha_{2}-\frac{\gamma_{1}}{8}\lambda^{2}_{0}-\frac{\gamma_{2}}{8}\lambda^{2}_{1}\right)\tilde{\mathcal{GB}}=0.$ (45) Since $\tilde{R}$ and $\tilde{\mathcal{GB}}$ are independent curvature invariants of a generic four dimensional geometry, compatibility requires the coefficient of $\tilde{\mathcal{GB}}$ in (45) to vanish and the two resulting expressions for $\tilde{R}$ to coincide, which yields $\lambda_{0}^{2}=\frac{8\alpha_{2}-\gamma_{2}\lambda_{1}^{2}}{\gamma_{1}},\hskip 14.22636pt\Lambda=-\frac{1}{4}\frac{\left(\gamma_{1}\lambda_{1}^{2}-\gamma_{2}\lambda_{1}^{2}+8\alpha_{2}\right)\left(-\beta_{1}\gamma_{2}\lambda_{1}^{2}+\beta_{2}\gamma_{1}\lambda_{1}^{2}+8\alpha_{2}\beta_{1}+12\gamma_{1}\right)}{\gamma_{1}\left(-3\beta_{1}\gamma_{2}\lambda_{1}^{2}+3\beta_{2}\gamma_{1}\lambda_{1}^{2}+24\alpha_{2}\beta_{1}+4\gamma_{1}\right)}.$ (46) For a spherically symmetric ansatz on the brane, solving the Einstein equations delivers $ds^{2}=-\left(\frac{r^{2}}{l^{2}_{\text{eff}}}-\frac{\mu}{r}+K\right)dt^{2}+\frac{dr^{2}}{\left(\frac{r^{2}}{l^{2}_{\text{eff}}}-\frac{\mu}{r}+K\right)}+r^{2}d\Sigma_{K}^{2}+dz^{2}$ (47) where the effective cosmological constant, by means of (46), is given by $l^{-2}_{\text{eff}}=\frac{1}{6}\frac{\left(\gamma_{1}-\gamma_{2}\right)\lambda_{1}^{2}+4\Lambda\gamma_{1}+8\alpha_{2}}{\left(-\beta_{1}\gamma_{2}+\beta_{2}\gamma_{1}\right)\lambda_{1}^{2}+8\alpha_{2}\beta_{1}-2\gamma_{1}}\ .$ (48) It is interesting to note that, from here, it is possible to obtain a Schwarzschild black string, i.e., a five dimensional black string solution of (37) in which the four dimensional brane is given by a Schwarzschild black hole. This is done by considering the case $\Lambda=\lambda_{0}=0$. In this case, even the $\beta_{1}$ interaction may be neglected, and the result is nothing other than the standard black string $ds^{2}=-\left(K-\frac{\mu}{r}\right)dt^{2}+\frac{dr^{2}}{\left(K-\frac{\mu}{r}\right)}+r^{2}d\Sigma_{K}^{2}+dz^{2}.$ (49) Here we observe the evident relation between black strings and compactifications on direct product spaces. The original five dimensional Schwarzschild black string exists because general relativity admits cylindrical compactifications of the form $\mathcal{M}_{D}=\mathcal{M}_{4}\times\mathbb{R}$ (or $\mathcal{M}_{4}\times\mathcal{S}^{1}$). This solution is particularly interesting because in five dimensions these configurations suffer from a long wavelength instability, the Gregory-Laflamme instability Gregory:1993vy , revealing the weakness of cosmic censorship in higher dimensions Lehner:2010pn . One might conjecture that this is related to the fact that the Schwarzschild black hole is a solution of general relativity, and that when uplifting the solution to a five dimensional black string we are still within the domain of the same theory, namely, solving the Einstein field equations in dimension five. However, in five dimensions Einstein theory is not the most general gravitational theory at hand; in fact, Einstein-Gauss-Bonnet gravity plays that role. By means of our compactification procedure, we have been able to construct a black string in five dimensional Einstein-Gauss-Bonnet gravity, (47) and (49), which precisely represents an Einstein solution on the four dimensional brane.
In consequence, it would be appealing to study its mechanical stability, at least in the linear regime. ## V Thermodynamic quantities We now address the thermodynamic analysis of the $6$-dimensional black $2$-brane solution (32) (the $5$-dimensional black string (47) follows the same procedure). The black $2$-brane temperature is obtained, as usual, by requiring a smooth Euclidean continuation of the solution, yielding $\displaystyle T$ $\displaystyle=\frac{r_{+}}{2\pi l_{\text{eff}}^{2}}+\frac{\mu(r_{+})}{4\pi r_{+}^{2}}\ ,$ (50) where $r=r_{+}$ denotes the location of the black hole horizon, $\mu(r_{+})$ is determined by the horizon condition, and the effective AdS length is given in (33). On the other hand, by means of Wald's formula Wald:1993nt we obtain that the entropy density is $s=\frac{S}{\mathrm{Vol}\left[\mathcal{K}^{2}\right]}=\frac{\sigma r_{+}^{2}}{4G_{\text{eff}}}+32\pi\sigma K\alpha_{2}\ ,$ (51) with $G_{\text{eff}}$ defined in (31). Here $\mathrm{Vol}[\mathcal{K}^{2}]$ represents the volume of the $2$-dimensional internal manifold $\mathcal{K}^{2}$, while $\sigma$ corresponds to the volume of the transverse manifold of the four dimensional black hole on the brane. It is interesting to notice that there are two types of contributions to the black hole entropy. Since the effective theory that dictates the dynamics of the four dimensional manifold is GR, the entropy of the black hole has a contribution that goes as $r_{+}^{2}$, namely as the area of the event horizon. This contribution acquires corrections from all the higher curvature matter couplings, and can be written as $A/(4G_{\text{eff}})$. It is also worth pointing out that there is an extra, universal constant contribution to the entropy, which comes from the presence of the higher dimensional Gauss-Bonnet term (see also Cai:2001dz ). Using the first law of black hole thermodynamics, $dm=Tds$, we obtain that the constant $\mu$ is identified with the mass of the solution by $m=\frac{\sigma r_{+}(r_{+}^{2}+Kl_{\text{eff}}^{2})}{8G_{\text{eff}}l_{\text{eff}}^{2}\pi}=\frac{\mu\sigma}{8\pi G_{\text{eff}}}\ ,$ (52) where $r_{+}=r_{+}(\mu)$. We observe that there is no gap, i.e., the mass density goes to zero in the limit $r_{+}\rightarrow 0$. As expected, the Kretschmann invariant for (32) as $r\rightarrow 0$ behaves as $\displaystyle R_{ABCD}R^{ABCD}=\mathcal{O}(r^{-6}),$ (53) revealing the existence of a curvature singularity located at the origin. It is very interesting to see the effect of the constant term in the entropy, induced by the higher curvature terms in higher dimensions, which lead to a consistent dimensional reduction thanks to the non-minimally coupled Maxwell field. Considering the temperature, entropy and mass, respectively given in (50), (51) and (52), one finds the following expression for the Helmholtz free energy $\frac{F\left(r_{+}\right)}{\mathrm{Vol}\left[\mathcal{K}^{2}\right]}=\frac{r_{+}}{4G_{\text{eff}}}\left(1-\frac{r_{+}^{2}}{l_{\text{eff}}^{2}}\right)-32\pi\alpha_{2}\left(\frac{3r_{+}}{l_{\text{eff}}^{2}}+\frac{1}{r_{+}}\right)\ ,$ (54) where we have focused on the spherically symmetric case, namely, we have fixed $\sigma=4\pi$ and $K=1$.
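For reference, a short computation (ours, using the horizon condition $\mu=r_{+}^{3}/l_{\text{eff}}^{2}+Kr_{+}$ with $K=1$) puts (50) purely in terms of $r_{+}$: $T(r_{+})=\frac{3r_{+}}{4\pi l_{\text{eff}}^{2}}+\frac{1}{4\pi r_{+}}$, which is minimized at $r_{+}=l_{\text{eff}}/\sqrt{3}$ with $T_{\text{min}}=\frac{\sqrt{3}}{2\pi l_{\text{eff}}}$; evaluating (54) at this radius reproduces the minimal free energy used below.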
Since the temperature is not modified with respect to that of Schwarzschild-AdS, it attains a minimum value, at which the free energy reduces to $\frac{F\left(T_{\text{min}}\right)}{\mathrm{Vol}\left[\mathcal{K}^{2}\right]}=2\sqrt{3}l_{\text{eff}}\left(\frac{1}{36G_{\text{eff}}}-\frac{32\pi\alpha_{2}}{l_{\text{eff}}^{2}}\right)\ .$ (55) Notice that $F(T_{\text{min}})$ can have either sign depending on the precise values of the Gauss-Bonnet coupling, and the presence of the effective cosmological and Newton constants allows the sign to change even when the higher curvature terms are of a perturbative nature. It is also interesting to notice that for arbitrarily small black holes the free energy goes to minus infinity due to the presence of the Gauss-Bonnet coupling $\alpha_{2}$, in stark contrast to the behavior of small black holes in GR, whose free energy asymptotically vanishes. Nevertheless, the presence of the additive term in the entropy sets a lower bound on the mass of the black holes and consequently a lower bound on their radii. This standard bound comes from considering an idealized, quasi-static process of black hole fusion where the initial black holes have masses $M_{1}$ and $M_{2}$ and, disregarding the gravitational radiation, the variation of the entropy reads $\Delta S=8\pi G_{\text{eff}}M_{1}M_{2}-128\pi^{2}\alpha_{2}-\frac{64\pi G_{\text{eff}}^{3}}{l_{\text{eff}}^{2}}M_{1}M_{2}(2M_{1}^{2}+3M_{1}M_{2}+2M_{2}^{2})+\mathcal{O}(l_{\text{eff}}^{-4})\ ,$ (56) where $\Delta S=S_{\text{final}}-S_{\text{initial}}$. Since the expression for finite $l_{\text{eff}}$ is not very illuminating, we have taken the expansion for large $l_{\text{eff}}$. In the asymptotically flat case, the positivity of $\Delta S$ clearly imposes a lower bound on the black hole masses, made explicit below. Our discussion is based on the original interpretation of these types of terms given in Jacobson:1993xs . According to this interpretation, the extra term in the entropy for two far apart black holes must be added, leading to a potential violation of the law of increasing entropy in a process of quasi-static black hole coalescence for $\alpha_{2}>0$, unless the mass is bounded from below. If $\alpha_{2}<0$ no such bound appears using this argument. It is important to remark that when the GB term emerges as a higher curvature correction in string theory, $\alpha_{2}$ is positive. Nevertheless, there are two extra arguments supporting the Jacobson-Myers interpretation. Firstly, this constant term (despite its temperature independence) is actually non-vanishing only in the presence of the black hole horizon, since otherwise the domain of integration is void. Secondly, there are many physical systems which possess non-zero extensive entropy at zero temperature, as for instance the antiferromagnetic Potts model with not too large $q$ ($q=2$ for the Ising model). Figure 1 depicts the free energy curves for different values of the parameters. Since no integration constant beyond the mass is present, the appropriate ensemble is the canonical ensemble and therefore the statistically most favoured configuration is the one that minimizes the Helmholtz free energy for a given fixed temperature. It is also important to notice that the vacuum configuration, namely the solution with vanishing $F(T)$, is thermal AdS${}_{4}\times S^{2}$ with non-vanishing flux of the Maxwell field on the two-dimensional, compactified internal sphere.
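Returning to the entropy bound above, the asymptotically flat limit makes it explicit (our rearrangement of the leading terms of (56)): per unit internal volume, (51) and (52) give $S=4\pi G_{\text{eff}}M^{2}+128\pi^{2}\alpha_{2}$ as $l_{\text{eff}}\rightarrow\infty$, where $r_{+}=2G_{\text{eff}}M$, so the fusion of two black holes yields $\Delta S=8\pi G_{\text{eff}}M_{1}M_{2}-128\pi^{2}\alpha_{2}$, and $\Delta S\geq 0$ requires $M_{1}M_{2}\geq 16\pi\alpha_{2}/G_{\text{eff}}$ whenever $\alpha_{2}>0$.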
Remarkably, when $F(T_{\text{min}})$ is negative, there is no Hawking-Page transition Hawking:1982dh and the large black holes always dominate the ensemble.

Figure 1: Free energy versus temperature for Schwarzschild-AdS in General Relativity in four dimensions (left panel) and for the compactified black hole solution Schwarzschild-AdS${}_{4}\times S^{2}$ (right panel). The compactified solution corresponds to the values $\alpha_{2}=0.1,\ \beta_{1}=0.2,\ \beta_{2}=0.05$ and $\Lambda=-1$.

## VI Further comments

We have been able to perform a dimensional reduction of EGB theory to dimension four, giving rise to General Relativity, for arbitrary values of the couplings. The consistency of the compactification relies on the presence of non-minimally coupled, magnetically charged $p$-forms. As a guiding principle for the introduction of this matter field, we have restricted to the family of couplings introduced in Feng:2015sbw , which mimic the structure of Lovelock theories, leading to second order field equations. The Gauss-Bonnet term, as well as other higher curvature couplings with matter fields, naturally emerges as a perturbative correction of fundamental theories in effective field theory scenarios (see e.g. Burgess:2007pt ). Whether or not the precise couplings considered here emerge from a low energy limit of a fundamental theory, after considering the freedom of field redefinitions, is beyond our present objectives; nevertheless, we have shown that in such a potential scenario the radius of compactification of the two-dimensional compact manifold considered in Section III is indeed proportional to the inverse of the energy scale $M$, and can therefore be consistently considered small. The structure of the theories considered suggests that our results can be extended to Lovelock theories beyond the EGB Lagrangian. One might even be able to consider a given, arbitrary Lovelock theory in dimension $D$ and obtain a consistent arbitrary Lovelock theory in dimension $d=D-p$ by considering magnetically charged, non-minimally coupled $p$-forms in the family (4). As mentioned above, in vacuum, such compactifications usually require introducing relations between the couplings, which are not compatible with the interpretation of the higher curvature terms as perturbative corrections (for solutions of Lovelock theories in $N+1$ dimensions see Kastor:2017knv ). As one increases the dimension of the compact manifold, relaxing its geometry, it is natural to expect the presence of some higher curvature constraints on its curvature. For the case of topological black holes, it was shown in Dotti:2005rc that a new constant characterising the square of the Weyl tensor of the horizon appears in the lapse function, and the effects of such a constant on the thermodynamics have recently been explored in Hull:2021bry . A simple family of manifolds that would allow going beyond constant curvature internal spaces are products of spheres. These manifolds would allow the introduction of further parameters in the compactifications, which may help when contrasting the obtained four-dimensional theory with experimental evidence. Finally, as explained in Section V, the consistent compactification we have found may lead to black holes in AdS${}_{4}\times S^{2}$ which are always thermodynamically favoured with respect to the solitonic thermal background.
In addition, since the Hawking-Page phase transition plays a very important role in holography Witten:1998zw , it would be appealing to explore some holographic properties of the present compactification scenario. ## VII Acknowledgments This work is partially funded by FONDECYT grants 1200022, 1210500 and 1181047. J.O. also thanks the support of Proyecto de Cooperación Internacional 2019/13231-7 FAPESP/ANID. C.H. acknowledges support from the National Agency for Research and Development (ANID) / Scholarship Program / BECA DE DOCTORADO NACIONAL / 2017 - 21171394. S.F. thanks the support of Beca Doctorado USM. The Centro de Estudios Científicos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of ANID. ## References * (1) T. Kaluza, Zum Unitätsproblem der Physik, Sitz. Preuss. Akad. Wiss. Phys. Math. K1 (1921) 966. * (2) O. Klein, Quantentheorie und fünfdimensionale Relativitätstheorie, Zeits. Phys. 37 (1926) 895. * (3) J. M. Overduin and P. S. Wesson, Phys. Rept. 283, 303-380 (1997) [arXiv:gr-qc/9805018 [gr-qc]]. * (4) O. Castillo-Felisola, C. Corral, S. del Pino and F. Ramírez, Phys. Rev. D 94, no.12, 124020 (2016) [arXiv:1609.09045 [gr-qc]]. * (5) M. J. Duff, B. E. W. Nilsson and C. N. Pope, Phys. Rept. 130, 1-142 (1986). * (6) P. G. O. Freund and M. A. Rubin, “Dynamics of Dimensional Reduction,” Phys. Lett. 97B, 233 (1980). * (7) S. Randjbar-Daemi, A. Salam and J. A. Strathdee, “Spontaneous Compactification in Six-Dimensional Einstein-Maxwell Theory,” Nucl. Phys. B 214, 491 (1983). * (8) Y. Bardoux, M. M. Caldarelli and C. Charmousis, JHEP 05, 054 (2012) [arXiv:1202.4458 [hep-th]]. * (9) A. Edery and Y. Nakayama, Mod. Phys. Lett. A 30, no.30, 1550152 (2015) [arXiv:1502.05932 [hep-th]]. * (10) A. Edery and Y. Nakayama, Phys. Rev. D 98, no.6, 064011 (2018) [arXiv:1807.07004 [hep-th]]. * (11) D. Lovelock, J. Math. Phys. 12, 498-501 (1971). * (12) C. Garraffo and G. Giribet, Mod. Phys. Lett. A 23, 1801-1818 (2008) [arXiv:0805.3575 [gr-qc]]; C. Charmousis, Lect. Notes Phys. 769, 299-346 (2009) [arXiv:0805.0568 [gr-qc]]. * (13) B. Zwiebach, Phys. Lett. B 156, 315-317 (1985). * (14) A. Cisterna, S. Fuenzalida and J. Oliva, Phys. Rev. D 101, no.6, 064055 (2020) [arXiv:2001.00788 [hep-th]]. * (15) G. W. Horndeski, “Conservation of Charge and the Einstein-Maxwell Field Equations,” J. Math. Phys. 17, 1980 (1976). * (16) X. H. Feng and H. Lu, “Higher-Derivative Gravity with Non-minimally Coupled Maxwell Field,” Eur. Phys. J. C 76, no. 4, 178 (2016) [arXiv:1512.09153 [hep-th]]. * (17) F. Canfora, A. Giacomini, R. Troncoso and S. Willison, Phys. Rev. D 80, 044029 (2009) [arXiv:0812.4311 [hep-th]]. * (18) H. S. Liu, Z. F. Mai, Y. Z. Li and H. Lü, Sci. China Phys. Mech. Astron. 63, 240411 (2020) [arXiv:1907.10876 [hep-th]]. * (19) A. Cisterna, G. Giribet, J. Oliva and K. Pallikaris, Phys. Rev. D 101, no.12, 124041 (2020) [arXiv:2004.05474 [hep-th]]. * (20) P. A. Cano and Á. Murcia, JHEP 10, 125 (2020) [arXiv:2007.04331 [hep-th]]. * (21) D. Kastor and R. B. Mann, JHEP 04, 048 (2006) [arXiv:hep-th/0603168 [hep-th]]. * (22) G. Giribet, J. Oliva and R. Troncoso, JHEP 05, 007 (2006) [arXiv:hep-th/0603177 [hep-th]]. * (23) A. Cisterna and J. Oliva, Class. Quant. Grav. 35, no.3, 035012 (2018) [arXiv:1708.02916 [hep-th]]. * (24) A. Cisterna, C. Corral and S. del Pino, Eur. Phys. J. C 79, no.5, 400 (2019) [arXiv:1809.02903 [gr-qc]]. * (25) A.
Cisterna, S. Fuenzalida, M. Lagos and J. Oliva, Eur. Phys. J. C 78, no.11, 982 (2018) [arXiv:1810.02798 [hep-th]]. * (26) E. Arratia, C. Corral, J. Figueroa and L. Sanhueza, [arXiv:2010.02460 [hep-th]]. * (27) A. Cisterna, C. Henríquez-Báez and J. Oliva, JHEP 01, 052 (2020) [arXiv:1909.05404 [hep-th]]. * (28) G. W. Horndeski, Int. J. Theor. Phys. 10, 363-384 (1974). * (29) R. Gregory and R. Laflamme, Phys. Rev. Lett. 70, 2837-2840 (1993) [arXiv:hep-th/9301052 [hep-th]]. * (30) L. Lehner and F. Pretorius, Phys. Rev. Lett. 105, 101102 (2010) [arXiv:1006.5960 [hep-th]]. * (31) R. M. Wald, Phys. Rev. D 48, no.8, 3427-3431 (1993) [arXiv:gr-qc/9307038 [gr-qc]]. * (32) R. G. Cai, Phys. Rev. D 65, 084014 (2002) [arXiv:hep-th/0109133 [hep-th]]. * (33) T. Jacobson and R. C. Myers, Phys. Rev. Lett. 70, 3684-3687 (1993) [arXiv:hep-th/9305016 [hep-th]]. * (34) S. W. Hawking and D. N. Page, Commun. Math. Phys. 87, 577 (1983). * (35) C. P. Burgess, Ann. Rev. Nucl. Part. Sci. 57, 329-362 (2007) [arXiv:hep-th/0701053 [hep-th]]. * (36) D. Kastor, S. Ray and J. Traschen, Class. Quant. Grav. 34, no.19, 195005 (2017) [arXiv:1706.06684 [gr-qc]]. * (37) G. Dotti and R. J. Gleiser, Phys. Lett. B 627, 174-179 (2005) [arXiv:hep-th/0508118 [hep-th]]. * (38) B. R. Hull and R. B. Mann, [arXiv:2102.05282 [gr-qc]]. * (39) E. Witten, Adv. Theor. Math. Phys. 2, 505-532 (1998) [arXiv:hep-th/9803131 [hep-th]].
# Bayesian inference of real-time dynamics from lattice QCD Alexander Rothkopf Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway ###### Abstract The computation of dynamical properties of nuclear matter, ranging from parton distribution functions of nucleons and nuclei to transport properties in the quark-gluon plasma, constitutes a central goal of modern theoretical physics. This real-time physics often defies a perturbative treatment and the most successful strategy so far is to deploy lattice QCD simulations. These numerical computations are based on Monte-Carlo sampling and formulated in an artificial Euclidean time. Real-time physics is most conveniently formulated in terms of spectral functions, which are hidden in lattice QCD behind an ill-posed inverse problem. I will discuss the current state-of-the-art methods for the extraction of spectral functions from lattice QCD simulations, based on Bayesian inference, and emphasize the importance of prior domain knowledge, vital to regularizing the otherwise ill-posed extraction task. With Bayesian inference allowing us to make explicit the uncertainty in both observations and in our prior knowledge, a systematic estimation of the total uncertainties in the extracted spectral functions is nowadays possible. Two implementations of the Bayesian Reconstruction (BR) method for spectral function extraction, one for MAP point estimates and one based on an open access Monte-Carlo sampler, are provided. I will briefly touch on the use of machine learning for spectral function reconstruction and discuss some new insight it has brought to the Bayesian community. ###### keywords: Bayesian inference, lattice QCD, spectral functions ## 1 Introduction ### 1.1 The Physics Challenge After a successful decade of studying the static properties of the strong interactions, such as their phase diagram (for reviews see e.g. Guenther:2020jwe ; Fukushima:2010bq ) and equation of state (for recent studies see e.g. Borsanyi:2022soo ; Bazavov:2017dus ; Borsanyi:2022qlh ) through relativistic heavy-ion collisions (for an overview see e.g. Busza:2018rrf ) and more recently through the multi-messenger observations of colliding neutron stars (for a review see e.g. Kojo:2020krb ), high energy nuclear physics sets out to make decisive progress in the understanding of real-time dynamics of quarks and gluons in the coming years. The past heavy-ion collision campaigns at collider facilities such as RHIC at Brookhaven National Laboratory (BNL) and the LHC at the European Center for Nuclear Physics (CERN) provided conclusive evidence for the existence of a distinct high-temperature state of nuclear matter, the quark-gluon plasma (for a review see e.g. Pasechnik:2016wkt ). At the same time, theory, by use of high-performance computing, predicted the thermodynamic properties, such as the equation of state Bazavov:2017dsy ; Borsanyi:2016ksw ; HotQCD:2014kol ; Burger:2014xga ; Borsanyi:2013bia , of hot nuclear matter from first principles. When data and theory were put to the test in the form of phenomenological models based on relativistic hydrodynamics, excellent agreement was observed (for a review see e.g. Jaiswal:2016hex ).
Similarly, past $e^{-}$+$p$ collider experiments at HERA (DESY) revealed (for a review see Klein:2008di ) that the properties of nucleons can only be understood when, in addition to the three valence quarks of the eponymous quark model, also the virtual excitations of quarks and gluons are taken into account. In particular, the emergent phenomenon of asymptotic freedom manifests itself clearly in their data, as the coupling between quarks and gluons becomes weaker with increasing momentum exchange in a collision (for the current state of the art see e.g. dEnterria:2022hzv ). Simulations of the strong interactions are by now able to map this intricate behavior of the strong coupling over a wide range of experimentally relevant scales, again leading to excellent agreement between theory and experiment (for a community overview see chap. 9 of Aoki:2021kgd ). Going beyond the static or thermodynamic properties of nuclear matter proves to be challenging for both theory and experiment. In heavy-ion collisions most observed particles in the final state at best carry a memory of the whole time-evolution of the collision. This requires phenomenology to disentangle the physics of the QGP from other effects, e.g. those arising in the early partonic stages or the hadronic aftermath of the collision. It turns out that in order to construct accurate multi-stage models of the collision dynamics (see e.g. Lin:2004en ; Petersen:2008dd ; Bratkovskaya:2011wp ), a variety of first-principles insight is needed. The dynamics of the bulk of the light quarks and gluons which make up the QGP produced in the collision is conveniently characterized by transport coefficients. Of central interest are the viscosities of deconfined quarks and gluons and their color charge conductivity. The physics of hard probes, such as fast jets (see e.g. Cao:2020wlm ) or slow heavy quark bound states (see e.g. Rothkopf:2019ipj ), which traverse the bulk nearly as test particles, on the other hand requires insight into different types of dynamical quantities. In this context, first-principles knowledge of the complex in-medium potential between a heavy quark and antiquark, the heavy quark diffusion constant, or the so-called jet quenching parameter $\hat{q}$, which summarizes the momentum broadening of a parton jet, is called for. As it turns out, computing any of these quantities represents a major challenge for numerical simulation methods of the strong interactions. Going beyond merely establishing asymptotic freedom and instead revealing the full 6-dimensional phase space (i.e. spatial and momentum distribution) of partons inside nucleons and nuclei is the aim of an ambitious collider project just green-lit in the USA. The upcoming electron-ion collider AbdulKhalek:2022hcn will be able to explore the quark and gluon content of nucleons in kinematic regimes previously inaccessible and opens up the first opportunity to carry out precision tomography of nuclei using well-controlled point-particle projectiles. Simulations have already revealed that the virtual particle content of nucleons is vital for the overall angular momentum budget of the proton (see e.g. Alexandrou:2020sml ; Wang:2021vqy ). A computation of the full generalized transverse momentum distribution Meissner:2009ww however has not been achieved yet. This quantity describes partons in terms of their longitudinal momentum fraction x, the impact parameter of the collision $b_{\rm T}$ and the transverse momentum of the parton $k_{\rm T}$.
Integrating out different parts of the transverse kinematics leads to simpler objects, such as transverse momentum distributions (TMDs, integrated over $b_{\rm T}$) or generalized parton distributions (GPDs, integrated over $k_{\rm T}$). Integrating all transverse dependence leads eventually to the conventional parton distribution functions (PDFs), which depend only on the longitudinal Bjorken x variable. A vigorous research community has made significant conceptual and technical progress over the past years, moving towards the first-principles determination of PDFs and more recently GPDs and TMDs from lattice QCD (for a community overview see Constantinou:2022yye ). Major advances in the past years include the development of the quasi PDF Ji:2013dva and pseudo PDF Radyushkin:2017cyf formalisms, which offer complementary access to PDFs besides their well-known relation to the hadronic tensor Liu:1993cv . With the arrival of the first exascale supercomputer in 2022, major improvements in the precision and accuracy of parton dynamics from lattice QCD are on the horizon. ### 1.2 Lattice QCD In order to support experiment and phenomenology, theory must provide model-independent, i.e. first-principles, insight into the dynamics of quarks and gluons in nuclei and within the QGP. This requires the use of quantum chromodynamics (QCD), the renormalizable quantum field theory underlying the strong interactions. Renormalizability refers to the fact that one only needs to provide a limited number of experimental measurements to calibrate each of its input parameters (strong coupling constant and quark masses) before being able to make predictions at any scale. In order to utilize this vast predictive power of QCD, however, we must be able to evaluate correlation functions of observables from their defining equations in terms of Feynman’s path integral $\displaystyle\langle O(t_{1})\tilde{O}(t_{2})\rangle=\frac{1}{Z}\int{\cal D}[A^{\mu}_{a},\psi^{a}_{f},\bar{\psi}^{a}_{f}]\;O(t_{1})\tilde{O}(t_{2})\;{\rm exp}\big{[}iS_{\rm QCD}[A^{\mu}_{a},\psi^{a}_{f},\bar{\psi}^{a}_{f}]\big{]},$ (1) where $A^{\mu}_{a}$ denotes the gluon fields and $\psi^{a}_{f}$ the color charged quarks of flavor $f$. The path integral weight is given by the exponentiated QCD action denoted by $S_{\rm QCD}$ (for more details see Schwartz:2014sze ) and the normalization $Z$ refers to the path integral evaluated in the absence of observables in the integrand. Computing the dynamical properties of quarks and gluons, both inside nucleons as well as in the experimentally accessible QGP, requires us to evaluate the above path integral in the presence of strong fluctuations, which invalidate commonly used weak-coupling expansions of the path integral weight. Instead a non-perturbative evaluation of observables is called for. While progress has been made in non-perturbative analytic approaches to QCD, such as the functional renormalization group Dupuis:2020fhh ; Blaizot:2021ikl or Dyson-Schwinger equations fischer2006infrared ; Roberts:2012sv , I focus here on the most prominent numerical approach: lattice QCD (for textbooks see e.g. montvay1994quantum ; Gattringer:2010zz ). In lattice QCD four-dimensional spacetime is discretized on a hypercube with $N^{4}$ grid points ${\bm{n}}$, separated by a lattice spacing $a$.
In order to maintain the central defining property of QCD, the invariance of observables under local $SU(3)$ rotations of quark and gluon degrees of freedom, in such a discrete setting, one introduces gauge link variables $U_{\mu}(x)={\rm exp}[-igA_{\mu}^{a}(x+\frac{1}{2}a\hat{\mu})T^{a}]$, which connect the nodes of the grid in direction $\hat{\mu}$. Here $g$ denotes the strong coupling constant and $T^{a}$ refers to the generators of the gauge group $SU(3)$, conventionally written in terms of the Gell-Mann matrices as $T^{a}=\lambda^{a}/2$. From closed products of four or more link variables, as well as the quark fields, discrete but fully gauge invariant actions can be constructed (the simplest one being the Wilson action). This action allows one to formulate a discretized version of Feynman’s path integral. It is the next and final step in the formulation of lattice QCD that is crucial for understanding the challenge we face in extracting dynamical properties from its simulations. The path integral of QCD, while already formulated in a discrete fashion, still contains the canonical complex Feynman weight ${\rm exp}[iS_{\rm QCD}[U,\psi,\bar{\psi}]]$. So far, even though progress is being made, no universal numerical method to evaluate such high-dimensional oscillatory integrals has been developed, a challenge often referred to as the sign problem (see e.g. Gattringer:2016kco ; Berger:2019odf ). Instead one circumvents this difficulty by making use of complex analysis and analytically continues the Minkowski time variable $t$ onto the imaginary axis in the lower half complex plane $\tau=it$. The additional factors of the imaginary unit, which arise from this manipulation, can be conveniently combined to cancel the prefactor of $i$ in the Feynman weight, leading to $\displaystyle\langle O_{{\bm{n}}_{1}}\tilde{O}_{{\bm{n}}_{2}}\rangle=\frac{1}{Z}\int\prod_{n}\prod_{\mu}dU_{\mu,{\bm{n}}}d[\psi_{f,{\bm{n}}},\bar{\psi}_{f,{\bm{n}}}]\;O_{{\bm{n}}_{1}}\tilde{O}_{{\bm{n}}_{2}}\;{\rm exp}\big{[}-S_{\rm E}[U,\psi,\bar{\psi}]\big{]}.$ (2) The action $S_{\rm E}\in\mathbb{R}$ one obtains after analytic continuation is referred to as the Euclidean action. As a curiosity of quantum field theory one should note that, due to a subtle relation between the Boltzmann factor, which describes thermal systems, and time evolution in imaginary time, the extent of the imaginary time axis is directly linked to the inverse temperature $\beta=1/T$ of the system (KMS relation) Bellac:2011kqa . By varying the length of the imaginary time axis it is therefore possible to change between a scenario at $T\approx 0$, relevant for nucleon structure, and $T>0$, relevant for the study of the QGP. Besides allowing us to incorporate the concept of temperature in a straightforward manner, this Euclidean path integral is now amenable to standard methods of stochastic integration, since the Euclidean Feynman weight is real and bounded from below. Using established Markov-Chain Monte Carlo techniques one generates ensembles of gauge field configurations distributed according to $\frac{1}{Z}{\rm exp}\big{[}-S_{\rm E}[U,\psi,\bar{\psi}]\big{]}$.
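As a concrete illustration of the link construction, the following minimal sketch builds a single SU(3) gauge link from random algebra coefficients and checks its defining properties; the coupling and coefficient values are arbitrary illustrative choices, not taken from any simulation.

```python
# Sketch: build a single SU(3) gauge link U = exp(-i g A^a T^a) from random
# algebra coefficients and verify its defining properties. The coupling and
# coefficient values are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import expm

# The eight Gell-Mann matrices lambda^a; the generators are T^a = lambda^a/2.
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]
T = np.array(lam) / 2

g = 1.0                                        # illustrative coupling
A = np.random.default_rng(0).normal(size=8)    # random real A_mu^a at one site
U = expm(-1j * g * np.einsum('a,aij->ij', A, T))

print(np.allclose(U.conj().T @ U, np.eye(3)))  # unitarity: True
print(np.isclose(np.linalg.det(U), 1.0))       # det U = 1 (T^a traceless): True
```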
Evaluating (measuring) correlation functions $D(\tau=\tau_{2}-\tau_{1})=\langle O(\tau_{1})O(\tau_{2})\rangle$ on $N_{\rm conf}$ statistically independent field realizations $U^{(k)}$ and computing the mean systematically estimates the quantum statistical expectation value $\displaystyle D(\tau)=\langle O(\tau_{1})O(\tau_{2})\rangle=\frac{1}{N_{\rm conf}}\sum_{k=1}^{N_{\rm conf}}{O(\tau_{1};U^{(k)})O(\tau_{2};U^{(k)})}+{\cal O}(1/\sqrt{N_{\rm conf}}).$ (3) Here the error decreases with the number of generated configurations, independently of the dimensionality of the underlying integral. To avoid misunderstandings, let me emphasize that results obtained from lattice QCD at finite lattice spacing may not be directly compared to physical measurements. A valid comparison requires that the so-called continuum limit $a\to 0$ is taken, while remaining close to the thermodynamic limit $V\to\infty$. Different lattice discretizations may yield deviating results, as long as this limit has not been adequately performed. For precision lattice QCD computations a community quality control has been established through the FLAG working group Aoki:2021kgd to catalog different simulation results including information on the limits taken. ## 2 The Inverse Problem The technical challenge we face is now laid bare: in order to make progress in the study of the dynamics of the strong interactions we need to evaluate Minkowski time correlation functions in QCD, related to parton distribution functions in nucleons or the dynamical properties of partons in the QGP. The lattice QCD simulations we are able to carry out however are restricted to imaginary time. Reverting to the real-time domain, as it turns out, presents an ill-posed inverse problem. The key to attacking this challenge is provided by the spectral representation of correlation functions Bellac:2011kqa . It tells us that different incarnations of relevant correlation functions (e.g. the retarded or Euclidean correlators) share common information content in the form of a so-called spectral function Ghiglieri:2020dpq . The Källén–Lehmann representation reveals that the retarded correlator of fields in momentum space may be written as $\displaystyle D^{\rm R}(p_{0},{\bf p})=\frac{i}{\pi}\int d\mu\frac{1}{p_{0}-\mu+i\epsilon}\rho(\mu,{\bf p}),$ (4) while the same correlator in Euclidean time is given as $\displaystyle D^{\rm E}(\tau,{\bf p})=\int d\mu\,\frac{e^{-\tau\mu}}{1\mp e^{-\beta\mu}}\,\rho(\mu,{\bf p}),$ (5) where the sign in the denominator differs between bosonic $(-)$ and fermionic $(+)$ correlators. Both the real-time and Euclidean correlator can therefore be expressed through the same spectral function, integrated over different analytically known kernel functions. As we do have access to the Euclidean correlator, extracting the spectral function from it in principle gives us direct access to its Minkowski counterpart. It is important to note that often the phenomenologically relevant physics is encoded directly and intuitively in the structures of the spectral function, making an evaluation of the real-time correlator superfluous. Transport coefficients e.g. can be read off from the low frequency behavior of the zero-momentum spectral function of an appropriate correlation function Meyer:2011gj . For the extraction of parton distribution functions similar challenges ensue. PDFs can be computed from a quantity christened the hadronic tensor $W^{\rm M}(t)$ Liu:1993cv , a four-point correlation function of quark fields in Minkowski time.
The Euclidean hadronic tensor on the lattice is related to its real-time counterpart via a Laplace transform $\displaystyle W^{\rm E}(\tau)=\int\,d\mu\,e^{-\mu\tau}\,W^{\rm M}(\mu)$ (6) that needs to be inverted. Recently the pseudo PDF approach Radyushkin:2017cyf has shown how a numerically less costly three-point correlation function ${\cal M}^{\rm Ioffe}$ can be used to extract similar information on e.g. quark distributions $q(x)$. It too is hidden behind an inverse problem of the form $\displaystyle{\cal M}^{\rm Ioffe}(\nu)=\int\,dx\,{\rm cos}(\nu x)\,q(x),$ (7) where the Ioffe-time matrix elements ${\cal M}^{\rm Ioffe}(\nu)$ are accessible on the lattice. All the above examples of inverse problems share the property that in practice they are ill-posed. Not only is the Euclidean correlator from the lattice $D_{i}$ known only at $N_{\tau}$ discrete points $\tau_{i}$, but in addition, as it arises from a Monte-Carlo simulation, it also carries a finite error $\Delta D/D\neq 0$. Let us write down the discretized spectral representation in terms of a spectral function $\rho_{l}$ discretized at frequencies $\mu_{l}$ along $N_{\mu}$ equidistant frequency bins of width $\Delta\mu_{l}$ and the discretized kernel matrix $K_{il}$ $\displaystyle D^{\rho}_{i}=\frac{1}{2}\Delta\mu_{1}K_{i1}\;\rho_{1}+\sum_{l=2}^{N_{\mu}-1}\Delta\mu_{l}K_{il}\;\rho_{l}+\frac{1}{2}\Delta\mu_{N_{\mu}}K_{iN_{\mu}}\;\rho_{N_{\mu}}.$ (8) The task at hand is to solve the inverse problem of determining the parameters $\rho_{l}$ from the sparse and noisy $D_{i}$’s. The ill-posedness of this inverse problem is manifest in two aspects: State-of-the-art lattice QCD simulations provide only around ${\cal O}(10-100)$ points along imaginary time $\tau$. From these we must reconstruct the function $\rho$, which often contains intricate patterns at different scales. The fact that $N_{\mu}\gg N_{\tau}$ entails that many degenerate sets of $\rho_{l}$ exist, which all reproduce the input data $D_{i}$ within their statistical uncertainty. The inverse problem is thus highly degenerate. In addition many of the kernel functions we have to deal with are of exponential form. This entails a strong loss of information between the spectral function and the Euclidean correlator. In other words, large changes in the spectral function translate into minute changes in the Euclidean correlator. Indeed, each of the tiny eigenvalues of the kernel is associated with a mode along frequencies, which can be added to the spectral function without significantly changing the correlator. Ref. Shi:2022yqw has recently investigated this fact in detail analytically for the bosonic finite temperature kernel relevant in transport coefficient computations. Even the at first sight benign cosine kernel matrix arising in the pseudo PDF approach turns out to feature exponentially diminishing eigenvalues Karpie:2019eiq , as the lattice simulation cannot access the full Brillouin zone in $\nu$. I.e. the matrix $K_{il}$ is in general ill-conditioned, making its inversion unstable even if no noise is present. In the presence of noise the exponentially small eigenvalues lead to a strong enhancement of even minute uncertainties in the correlation functions, rendering the inversion meaningless without further regularization. We will see in the next section how Bayesian inference can be used to give meaning to the inverse problem arising in extracting real-time dynamics from lattice QCD.
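Both the forward problem and its ill-conditioning are easy to exhibit numerically. The following minimal sketch evaluates eq. (5) on a mock bosonic spectral function and then prints the singular values of the discretized kernel of eq. (8); all grids, the peak shape and parameter values are ad-hoc illustrative choices.

```python
# Sketch: forward evaluation of eq. (5) for a mock bosonic spectral function,
# followed by the singular values of the discretized kernel of eq. (8).
import numpy as np

beta = 1.0
tau = np.linspace(0.05, beta - 0.05, 24)     # N_tau = 24 Euclidean times
mu = np.linspace(1e-3, 40.0, 1000)           # N_mu = 1000 >> N_tau
dmu = mu[1] - mu[0]

# Bosonic kernel exp(-tau mu) / (1 - exp(-beta mu)) on the grid.
K = np.exp(-np.outer(tau, mu)) / (1.0 - np.exp(-beta * mu))

# Mock spectral function; a bosonic rho(mu) vanishes linearly at mu = 0.
rho = mu * np.exp(-0.5 * (mu - 5.0)**2)
D = K @ rho * dmu                            # discretized eq. (5)

# Ill-conditioning: the singular values of K decay roughly exponentially,
# so noise on D is amplified enormously upon naive inversion.
s = np.linalg.svd(K, compute_uv=False)
print(s[:24:4] / s[0])
print("condition number:", s[0] / s[-1])
```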
## 3 Bayesian Inference of Spectral Functions Figure 1: Statistical inference attempts to estimate from observed data $D^{(k)}$ the unknown process parameters $\rho_{l}$ and as of yet unobserved data $\tilde{D}$. Bayesian inference exploits the fact that in many instances our model of the unknown process is embedded in a domain from which prior knowledge can be derived. The use of Bayesian inference to extract spectral functions from lattice QCD simulations was pioneered by a team of researchers from Japan in two seminal papers Nakahara:1999vy ; Asakawa:2000tr . Inspired by prior work in condensed matter physics jarrell_bayesian_1996 and image reconstruction skilling1991bayesian , the team successfully transferred the approach to the extraction of QCD real-time information. The work sparked a wealth of subsequent studies, which have applied and further developed Bayesian techniques for the extraction of real-time information from lattice QCD in various contexts: zero temperature hadron spectra and excited states PhysRevD.65.014501 ; SASAKI2005208 ; PhysRevD.65.094512 , parton distribution functions Karpie:2019eiq ; Liang:2019frk , in-medium hadrons Asakawa:2003re ; Datta:2003ww ; Umeda:2002vr ; Jakovac:2006sf ; Aarts:2011sm ; Aarts:2012ka ; Ding:2012sp ; Aarts:2013kaa ; Aarts:2014cda ; Borsanyi:2014vka ; Kim:2014iga ; Ikeda:2016czj ; Kelly:2018hsi ; Kim:2018yhk , sum rules Gubler:2011ua ; Araki:2014qya , transport coefficients Meyer:2007ic ; Meyer:2007dy ; Aarts:2007wj ; Ding:2010ga ; Meyer:2011gj ; Aarts:2014nba ; Amato:2013naa and the complex in-medium heavy quark potential Rothkopf:2011db ; Burnier:2014ssa ; Burnier:2015tda ; Burnier:2016mxc . The following discussion focuses on the Bayesian extraction of spectral functions that does not rely on a fixed parameterization of the functional form of $\rho$. If strong prior information exists, e.g. if vacuum hadronic spectral functions consist of well separated delta peaks, direct Bayesian parameter fitting methods are applicable Lepage:2001ym and may be advantageous. Similarly, some studies of in-medium spectra and transport phenomena deploy explicit parameterizations of the spectral function derived from model input, whose parameters can be fitted in a Bayesian fashion (see Ref. Burnier:2017bod for a recent example). Our goal here is to extract spectral features for systems in which no such a priori parameterization is known. ### 3.1 Bayesian inference Bayesian inference is a sub-field of statistical data analysis (for an excellent introduction see e.g. statrethinkingbook ; BishopPRML ), which focuses on the estimation of unobserved quantities, based on incomplete and uncertain observed data (see fig. 1). The term unobserved is used to refer to the unknown parameters governing the process that generates the observed data, or to as of yet unobserved future data. In the context of the inverse problem in lattice QCD, the Euclidean correlation functions produced by a Monte-Carlo simulation take on the role of the observed data, while the unobserved parameters are the values of the discretized spectral function $\rho_{l}$. Future observations can be understood as further realizations of the Euclidean correlator along the Markov-Chain of the simulation. What makes Bayesian inference particularly well suited to attack the inverse problem is that it offers an explicit and well controlled strategy to incorporate information $(I)$ beyond the measured data $(D)$ into the reconstruction of spectral functions $(\rho)$.
It does so by using a more flexible concept of probability, which does not necessarily rely on the outcome of a large number of repeatable trials but instead assigns a general degree of uncertainty. To be more concrete, Bayesian inference asks us to acknowledge that any model of a physical process is constructed within the context of its specific domain, in our case strong interaction physics. I.e. the structure of the model and its parameters are chosen according to prior information obtained within its domain. Bayesian inference then requires us to explicitly assign degrees of uncertainty to all these choices and propagate this uncertainty into a generalized probability distribution called the posterior $P[\rho|D,I]$. Intuitively it describes how probable it is that a test function $\rho$ is the correct spectral function, given simulated data $D$ and prior QCD knowledge $I$. The starting point of any inference task is the joint probability distribution $P[\rho,D,I]$. As it refers to the parameters $\rho$, data $D$ and prior information $I$ it combines information about the specific process generating the data as well as the domain it is embedded in. After applying the rules of conditional probability one obtains the workhorse of Bayesian inference, the eponymous Bayes theorem $\displaystyle\underbracket{P[\rho|D,I]}_{\rm posterior}=\underbracket{P[D|\rho,I]}_{\rm likelihood}\underbracket{P[\rho|I]}_{\rm prior}/\underbracket{P[D|I]}_{\rm evidence}.$ (9) It tells us how the posterior $P[\rho|D,I]$ can be efficiently computed. The likelihood denotes the probability for the data $D$ to be generated from QCD given a fixed spectral function $\rho$. The prior probability quantifies how compatible $\rho$ is with our domain knowledge. Historically the $\rho$-independent normalization has been called the evidence. Let us construct the different ingredients of Bayes theorem in the following. What is the likelihood in the case of spectral function reconstruction? Since in Monte-Carlo simulations one usually computes sub-averages of correlation functions on each of the $N_{\rm conf}$ generated gauge field configurations, the data is to a good approximation normally distributed. The corresponding likelihood probability $P[D|\rho,I]\propto{\rm exp}[-L]$, written in terms of the likelihood function $L$, is therefore a multidimensional Gaussian $\displaystyle P[D|\rho,I]={\cal N}[D^{\rho},C]\propto{\rm exp}\Big{[}-\sum_{ij}\frac{1}{2}(D_{i}-D^{\rho}_{i})C_{ij}^{-1}(D_{j}-D^{\rho}_{j})\Big{]},$ (10) where $D_{i}$ denotes the mean of the simulated data at the $i$th Euclidean time step and $D^{\rho}_{i}$ the corresponding Euclidean datapoint, arising from inserting the parameters $\rho_{l}$ into the spectral representation eq. 8. $C_{ij}$ refers to the covariance matrix of the mean $\displaystyle C_{ij}=\frac{1}{N_{\rm conf}(N_{\rm conf}-1)}\sum_{k=1}^{N_{\rm conf}}\big{(}D^{(k)}_{i}-D_{i}\big{)}\big{(}D^{(k)}_{j}-D_{j}\big{)},$ (11) where the individual measurements enter as $D^{(k)}$. Note that in order to obtain an accurate estimate of $C_{ij}$, the number of samples $N_{\rm conf}$ must be significantly larger than the number of data along imaginary time. In particular $C_{ij}$ develops exact zero eigenvalues if the number of configurations is less than that of the datapoints. A speedup in the computation of the likelihood can be achieved in practice if, following Ref.
Nakahara:1999vy , one computes the eigenvalues $\sigma_{i}$ and eigenvectors of $C$ and changes both the kernel and the input data into the coordinate system where $S^{t}CS={\rm diag}[\sigma_{i}]$ becomes diagonal. Then the two sums in eq. 10 collapse onto a single one, $L=\sum_{i}\frac{1}{2}(\tilde{D}_{i}-\tilde{D}^{\rho}_{i})^{2}/\sigma_{i}^{2}$ with $\tilde{D}^{\rho}_{i}=S^{t}_{ij}K_{jl}\rho_{l}$ and $\tilde{D}_{i}=S^{t}_{ij}D_{j}$. Since the likelihood is a central ingredient in the posterior, all Bayesian reconstruction methods ensure that the reconstructed spectral function, when inserted into the spectral representation, will reproduce the input data within their uncertainty. I.e. they will always produce a valid statistical hypothesis for the simulation data. This crucial property distinguishes the Bayesian approach from competing non-Bayesian methods, such as the Backus-Gilbert method and the Padé reconstruction (see examples in Cyrol:2018xeq ), in which the reconstructed spectral function does not necessarily reproduce the input data. In case we do not possess any prior information we have $P[\rho|I]=1$ and Bayes theorem only contains the likelihood. Since the functional $L$ is highly degenerate in terms of $\rho_{l}$’s, the question of what is the most probable spectral function, i.e. the maximum likelihood estimate of $\rho$, does not make sense at this point. Only by supplying meaningful prior information can we regularize and thus give meaning to the inverse problem. ### 3.2 Bayesian spectral function reconstruction Different Bayesian strategies to attack the ill-posed spectral function inverse problem differ by the type of domain information they incorporate in the prior probability $P[\rho|I]\propto{\rm exp}[S]$, where $S$ is called the regulator functional. Once the prior probability is constructed, the spectral reconstruction consists of evaluating the posterior probability $P[\rho|D,I]$, which informs us of the distribution of the values of $\rho_{l}$ in each frequency bin $\mu_{l}$. The versatility of the Bayesian approach actually allows us to reinterpret several classic regularization prescriptions in the language of Bayes theorem, providing a unifying language for seemingly different strategies. When surveying approaches to inverse problems in other fields, Tikhonov regularization tikhonov_stability_1943 is by far the most popular regularization prescription. It amounts to choosing an independent Gaussian prior probability for each parameter $\displaystyle P[\rho|I]=\prod_{l=1}^{N_{\mu}}{\cal N}[m_{l},1/\sqrt{\alpha_{l}}]\propto{\rm exp}\Big{[}-\sum_{l=1}^{N_{\mu}}\alpha_{l}\;\frac{1}{2}\,(\rho_{l}-m_{l})^{2}\Big{]}.$ (12) Each normal distribution is characterized by its maximum (mean), denoted here by $m_{l}$, and width (uncertainty) $1/\sqrt{\alpha_{l}}$. In the literature $m_{l}$ is usually referred to as the default model and $\alpha_{l}$ simply as hyperparameter. The significance of the two quantities is that in the absence of simulation data, $m_{l}$ denotes the most probable a priori value of $\rho_{l}$ with intrinsic uncertainty $1/\sqrt{\alpha_{l}}$. Since these parameters, even though they are constrained by QCD, will be known only up to some uncertainty, the Bayesian strategy requires us to assign distributions $P[m]$ and $P[\alpha]$ to these model parameters. This is a first example of a so-called hierarchical model, where each level of the model encodes the uncertainties and correlations among model (hyper-)parameters in the subsequent layer.
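For the Gaussian likelihood of eq. (10) combined with the Tikhonov prior of eq. (12), the MAP estimate even has a closed form, $(K^{t}C^{-1}K+\alpha\mathbb{1})\rho=K^{t}C^{-1}D+\alpha m$. The following is a minimal sketch of this on mock data; the diagonal covariance, the flat default model and the single common $\alpha$ are simplifying assumptions for illustration only.

```python
# Sketch: closed-form Tikhonov MAP estimate for the Gaussian likelihood
# eq. (10) and Gaussian prior eq. (12). Mock data, a diagonal covariance,
# a flat default model and one common alpha are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
tau = np.linspace(0.05, 0.95, 24)                   # beta = 1
mu = np.linspace(1e-3, 20.0, 200)
dmu = mu[1] - mu[0]
K = dmu * np.exp(-np.outer(tau, mu)) / (1.0 - np.exp(-mu))

rho_true = mu * np.exp(-0.5 * (mu - 5.0)**2)        # known mock spectrum
D_true = K @ rho_true
sig = 1e-3 * D_true                                 # 0.1% relative errors
D = D_true + sig * rng.normal(size=tau.size)
Cinv = np.diag(1.0 / sig**2)                        # inverse covariance

alpha = 1e-2                                        # ad-hoc hyperparameter
m = np.full(mu.size, 0.1)                           # flat default model
rho_map = np.linalg.solve(K.T @ Cinv @ K + alpha * np.eye(mu.size),
                          K.T @ Cinv @ D + alpha * m)
```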
It then remains the task of the user to extract from QCD domain knowledge appropriate uncertainty budgets for $m$ and $\alpha$. Another regularization deployed in the field of image reconstruction is the so-called total variation approach 1992PhyD...60..259R . Here the difference between neighboring parameters $\rho_{l}$ and $\rho_{l+1}$, i.e. $\Delta\rho_{l}$, is modelled bardsley2012laplace as a Laplace distribution $\displaystyle P[\Delta\rho|I]=\prod_{l=1}^{N_{\mu}-1}{\rm Laplace}[m_{l},\alpha_{l}]\propto{\rm exp}\Big{[}-\sum_{l=1}^{N_{\mu}-1}\alpha_{l}\;|(\rho_{l+1}-\rho_{l})-m_{l}|\Big{]}.$ (13) Since $\Delta\rho_{l}$ is related to the first derivative of the spectral function, this regulator incorporates knowledge about rapid changes, such as kinks, in spectral features. Choosing $\alpha_{l}$ and $m_{l}$ appropriately one may e.g. prevent the occurrence of kink features in the reconstructed spectral function, if it is known that the underlying true QCD spectral function is smooth. In Ref. Fischer:2017kbq I proposed a regulator related to the derivative of $\rho$, with a different physical meaning $\displaystyle P[\Delta\rho|I]=\prod_{l=1}^{N_{\mu}-1}{\cal N}[m_{l},1/\sqrt{\alpha_{l}}]\propto{\rm exp}\Big{[}-\sum_{l=1}^{N_{\mu}-1}\alpha_{l}\;\frac{1}{2}\Big{(}(\rho_{l+1}-\rho_{l})-m_{l}\Big{)}^{2}\Big{]}.$ (14) Often spectral reconstructions, which are based on a relatively small number of input data, suffer from ringing artifacts, similar to the Gibbs ringing arising in the inverse problem of the Fourier series. These artifacts lead to a reconstructed spectral function with a similar area as the true spectral function but with a much larger arc length, due to the presence of unphysical wiggles. Since such ringing is not present in the true QCD spectral function we may a priori suppress it by penalizing arc length $\ell=\int d\mu\;\sqrt{1+(d\rho/d\mu)^{2}}$. And since the square root is monotonic, we may remove it for our purposes, as well as discard the addition of unity, which is absorbed into the normalization of the corresponding prior distribution. The hyperparameters of such a prior must be chosen appropriately, since the remedy to one artifact, ringing, can lead to the introduction of a different artifact, namely over-damping of reconstructed spectral features. The relevant ranges for $\alpha$ and $m$, as e.g. in Ref. Kim:2018yhk , can be established using mock data tests. If our prior domain knowledge contains information about the smoothness and the absence of ringing, then it is of course possible to combine different regulators by multiplying the prior probabilities. The reconstruction of the first picture of a black hole e.g. combined the Tikhonov and total variation regularizations 2019ApJ...875L...1E . In the presence of multiple regulators, the hyperparameters $\alpha$ and $m$ of each of these distributions need to be assigned an (independent) uncertainty distribution. One may ask whether a proliferation of such parameters spoils the benefit of the Bayesian approach. The answer is that in practice one can estimate the probable ranges of these parameters by use of mock data. One carries out the spectral function reconstruction, i.e. the estimation of the posterior probability $P[\rho|D,I]$, using data which has been constructed from known spectral functions with realistic features and which has been distorted with noise similar to that occurring in Monte-Carlo simulations (see e.g. Kim:2018yhk ).
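As a minimal sketch of such a mock data test, the snippet below scans a common hyperparameter $\alpha$ for the closed-form Tikhonov estimate of the previous snippet and compares against the known mock truth; all grids, the mock spectrum and the noise level are arbitrary choices.

```python
# Sketch of a mock data test: scan the common hyperparameter alpha of the
# Tikhonov estimate and compare to the known mock truth. All values ad hoc.
import numpy as np

rng = np.random.default_rng(2)
tau = np.linspace(0.05, 0.95, 24)
mu = np.linspace(1e-3, 20.0, 200)
dmu = mu[1] - mu[0]
K = dmu * np.exp(-np.outer(tau, mu)) / (1.0 - np.exp(-mu))   # beta = 1
rho_true = mu * np.exp(-0.5 * (mu - 5.0)**2)                 # known mock truth
sig = 1e-3 * (K @ rho_true)
D = K @ rho_true + sig * rng.normal(size=tau.size)           # 0.1% noise
Cinv = np.diag(1.0 / sig**2)
m = np.full(mu.size, 0.1)                                    # flat default model

for alpha in (1e-4, 1e-2, 1.0, 1e2):                         # scan alpha
    rho = np.linalg.solve(K.T @ Cinv @ K + alpha * np.eye(mu.size),
                          K.T @ Cinv @ D + alpha * m)
    dev = np.linalg.norm(rho - rho_true) / np.linalg.norm(rho_true)
    print(f"alpha = {alpha:8.1e}   deviation from mock truth: {dev:.3f}")
```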
One may then observe from such test data sets what the most probable values of the hyperparameters are and in what interval they vary, depending on different spectral features present in the input data. The three priors discussed so far are not commonly used as stand-alone regulators in the reconstruction of QCD spectral functions in practice. The reason is that none of them can exploit a central piece of prior information available in the lattice context, which is the positivity montvay1994quantum of the most relevant hadronic spectral functions (for the reconstruction of non-positive spectral functions using Bayesian approaches see e.g. the Hobson-Lasenby modification of the MEM Hobson:1998bz , the shift strategy of Haas:2013hpa , or my extension of the BR method Rothkopf:2016luz ). I.e. in most of the relevant reconstruction tasks from lattice QCD, the problem can be formulated in terms of a positive definite spectral function, which significantly limits the function space of potential solutions. Methods that are unable to exploit this prior information, such as the Backus-Gilbert method, have therefore been shown to perform poorly relative to the Bayesian approaches when it comes to the reconstruction of well-defined spectral features (see e.g. Liang:2019frk ). In the following let us focus on two prominent Bayesian methods, which have been deployed in the reconstruction of positive spectral functions from lattice QCD: the Maximum Entropy Method (MEM) and the Bayesian Reconstruction (BR) method. The MEM narayan1986maximum ; skilling1991bayesian ; jarrell_bayesian_1996 ; Asakawa:2000tr was originally constructed to attack image reconstruction problems in astronomy. It therefore focuses on two-dimensional input data and deploys the Shannon-Jaynes entropy $S_{\rm SJ}$ as regulator: $\displaystyle P[\rho|I]\propto{\rm exp}\Big{[}-\sum_{l=1}^{N_{\mu}}\alpha_{l}\;\Delta\mu\Big{(}m_{l}-\rho_{l}+\rho_{l}{\rm log}\Big{[}\frac{\rho_{l}}{m_{l}}\Big{]}\Big{)}\Big{]}.$ (15) Its regulator is based on four axioms skilling1991bayesian , which specify the prior information the method exploits. They are: subset independence, which states that prior information on $\rho_{l}$’s at different discrete frequency bins $l$ can be combined in a linear fashion within $S_{\rm SJ}$. The second axiom enforces that $S_{\rm SJ}$ has its maximum at the default model, which establishes the meaning of $m_{l}$ as the a priori most probable value of $\rho_{l}$ in the absence of data. These two axioms are not specific to the MEM and find use in different Bayesian methods. It is the third and fourth axiom that distinguish the MEM from other approaches: coordinate invariance requires that $\rho$ itself should transform as a dimensionless probability distribution, and system independence assumes certain factorizability properties of a two-dimensional spectral function along the two dimensions. From the appearance of the logarithm in $S_{\rm SJ}$ it is clear that the MEM can exploit the positivity of the spectral function. Due to the fact that the logarithm is multiplied by $\rho$, $S_{\rm SJ}$ is actually able to accommodate exact zero values of a spectral function. Since the reconstruction task in lattice QCD is one-dimensional, it is not obvious how to directly translate system independence. An intuitive way of interpreting this axiom, using e.g. the monkeys and kangaroos example of Ref. jarrell_bayesian_1996 , is that the MEM shall not introduce correlations among $\rho_{l}$’s where the data does not require it.
This is a quite restrictive property, as it is exactly prior information that should help us to limit the potential solution space by providing as much information about the structure of $\rho$ as possible. Similarly, the assumption that $\rho$ must transform as a probability distribution, while appropriate for a distribution of dimensionless pixel values in an image, does not necessarily apply to spectral functions. These are in general dimensionful quantities and may even contain UV divergences when evaluated naively. To overcome these conceptual difficulties the BR method was developed in Ref. Burnier:2013nla with the one-dimensional reconstruction problem of lattice QCD real-time dynamics in mind. The BR method features a regulator $S_{\rm BR}$ related to the Gamma distribution $\displaystyle P[\rho|I]=$ $\displaystyle\;\prod_{l=1}^{N_{\mu}}{\rm Gamma}[1+\Delta\mu\alpha_{l},\Delta\mu\alpha_{l}/m_{l}],$ $\displaystyle\propto$ $\displaystyle\;{\rm exp}\Big{[}-\sum_{l=1}^{N_{\mu}}\alpha_{l}\;\Delta\mu\;\Big{(}\frac{\rho_{l}}{m_{l}}-1-{\rm log}\Big{[}\frac{\rho_{l}}{m_{l}}\Big{]}\Big{)}\Big{]},$ (16) which looks similar to the Shannon-Jaynes entropy but differs in crucial ways. Its construction shares the first two axioms of the MEM but replaces the third and fourth axiom with the following: scale invariance enforces that the posterior may not depend on the units of the spectral function, leading to only ratios between $\rho_{l}$ and the default model $m_{l}$, which by definition must share the same units. The use of ratios also requires that neither $\rho$ nor $m$ vanishes. $S_{\rm BR}$ therefore differs from the Shannon-Jaynes regulator, where the integrand of $S_{\rm SJ}$ is dimensionful. The units of $\Delta\mu$ enter as a multiplicative scale and can be absorbed into a redefinition of $\alpha$ (which will be marginalized over as described in section 3.3). Furthermore, one introduces a smoothness axiom, which requires the spectral function to be twice differentiable. While it may appear that the latter axiom is at odds with the potential presence of delta-function-like structures in spectral functions, it ensures that one smoothly approximates such well defined peaks as the input data improves. Let us compare the regulators of the Tikhonov approach, the MEM and the BR method in fig. 2, which plots the negative of the integrand for the choice of $m=1$. The top panel shows a linear plot, the bottom panel a double logarithmic plot. By construction, all feature an extremum at $\rho=m$ and the MEM and BR enforce positive (semi-)definiteness of the spectral function. The functional form of the BR regulator turns out to be the one with the weakest curvature among all three for $\rho>m$, while it still manages to regularize the inverse problem. Note that the weaker the regulator, the more efficiently it allows information in the data to manifest itself (the BR regulator is actually the weakest on the market). At the same time a weaker regulator is less potent in suppressing artifacts, such as ringing, which may affect spectral function reconstructions based on a very small number of datapoints (to avoid this complication, the BR regulator has been successfully combined with the arc-length penalty regulator in Ref. Kim:2018yhk ). Figure 2: Comparison of the regulators of the Tikhonov approach (green), the MEM (red) and the BR method (blue) in linear scale (top) and double logarithmic scale (bottom).
The Shannon-Jaynes regulator accommodates $\rho=0$ but appears flat for spectral functions with values close to zero. The BR prior shows the weakest curvature for $\rho>m$ among all regulators. At this point we are ready to carry out the Bayesian spectral reconstruction. I.e. after choosing, according to one’s domain knowledge, a prior distribution $P[\rho|I(m,\alpha)]$ and assigning appropriate uncertainty intervals to its hyperparameters $P[\alpha]$ and $P[m]$ via mock-data studies, we can proceed to evaluate the posterior distribution $P[\rho|D,I]$. If we can access this high-dimensional object through a Monte-Carlo simulation (see e.g. section 4.3) it provides us not only with the information of what the most probable spectral function is, given our simulation data, but also contains the complete uncertainty budget, including both statistical (data related) and systematic errors (hyperparameter related). The maximum of the posterior defines the most probable value for each $\rho_{l}$ and its spread allows a robust uncertainty quantification beyond a simple Gaussian approximation (i.e. standard deviation), as it may contain tails that lead to a deviation of the mean from the most probable value. ### 3.3 Uncertainty quantification for point estimates While access to the posterior allows for a comprehensive uncertainty analysis, a full evaluation of $P[\rho|D,I]$ historically remained computationally prohibitive. Thus the community focused predominantly on determining a point estimate of the most probable spectral function from the posterior $P[\rho|D,I]$, also called the MAP, the maximum a posteriori estimate (a few works have explored stochastic strategies for the evaluation of the posterior in the context of the SOM mishchenko2000diagrammatic or the stochastic analytic continuation (SAI) method ding2018stochastic ; Shao:2022yez , of which the MEM is a special limit beach2004identifying ): $\displaystyle\left.\frac{\delta}{\delta\rho}P[\rho|D,I]\right|_{\rho=\rho^{\rm MAP}}=0.$ (17) In this case the problem is much easier to handle as a numerical optimization task, but only a fraction of the information contained in the posterior is made accessible. In particular most information related to uncertainty remains unknown and thus needs to be approximated separately. The above optimization problem in general can be very demanding, as the posterior may contain local extrema in addition to the global one that defines $\rho^{\rm MAP}$. At least in the case of the Tikhonov, MEM and BR methods however it is possible to prove that if an extremum of eq. 17 exists it must be unique. The reason is that all three regulators are convex. The proof of this statement does not rely on a specific parameterization of the spectral function and therefore promises that standard (quasi-)Newton methods, such as Levenberg-Marquardt or LBFGS (see e.g. Ref. 10.5555/1403886 ), can be used to locate this unique global extremum in the $N_{\mu}$ dimensional search space. Also from an information point of view it is plausible that at this point a unique solution to the former ill-posed inverse problem can be found. We need to estimate the most probable values of $N_{\mu}$ parameters $\rho_{l}$ and have now provided $N_{\tau}$ simulation data $D_{i}$, as well as $N_{\mu}$ pieces of information in the form of the $m_{l}$’s and $\alpha_{l}$’s each. I.e. the number of knowns $2N_{\mu}+N_{\tau}>N_{\mu}$ is larger than the number of unknowns, making a unique determination possible. The proof presented in Ref.
Asakawa:2000tr formalizes this intuitive statement. In practice it turns out that the finite intercept of the Shannon-Jaynes entropy for $\rho=0$ can lead to slow convergence if spectral functions with wide ranges of values close to zero are reconstructed. In lattice QCD this occurs regularly when e.g. hadronic spectral functions contain sharp and well separated peak structures. $S_{\rm SJ}$ for very small values (see fig. 2) is effectively flat and thus unable to efficiently guide the optimizer toward the unique minimum, so convergence slows down. This is why one finds in the literature that the extremum of eq. 17 in the MEM is accepted for tolerances around $\Delta\approx 10^{-7}$, which is much larger than zero in machine (double) precision. Such a large tolerance does not guarantee bitwise identical results when starting the optimization from different initial conditions. The BR prior on the other hand does not exhibit a finite intercept at $\rho=0$ and therefore avoids this slow-convergence problem. It has been found to be capable of locating the unique extremum $\rho^{\rm MAP}$ in real-world settings down to machine precision, which guarantees that the reconstruction result is independent of the starting point of the optimizer. Bayesian inference forces us to acknowledge two sources of uncertainty: statistical uncertainty in the data and uncertainty associated with the choice and parameters of the prior probability. Before continuing to the technical details of how to estimate uncertainty, let us focus on the role of prior information first. It enters both through the selection of a prior probability and the choice of the distributions $P[m]$ and $P[\alpha]$. It is important to recognize that already from an information theory viewpoint, one needs to supply prior information if the goal is to give meaning to an ill-posed inverse problem: originally we started out to estimate $N_{\mu}\gg N_{\tau}$ parameters $\rho_{l}$ from $N_{\tau}$ noisy input data $D_{i}$. I.e. in order to select among the infinitely many degenerate parameter sets $\rho_{l}$ a single one as the most probable, we need information beyond the likelihood. Conversely, any method that offers a unique answer to the inverse problem utilizes some form of prior information, whether it acknowledges it or not. Bayesian inference, by making the role of prior knowledge explicit in Bayes theorem, allows us to straightforwardly explore the dependence of the result on our choices related to domain information. It is therefore ideally suited to assess the influence of prior knowledge on reconstructed spectral functions. This distinguishes it from other approaches, such as the Backus-Gilbert method, where a similarly clear distinction of likelihood and prior is absent. The Tikhonov method is another example. Since it was originally formulated with a vanishing default model, one can find statements in the literature that it is default-model independent. Reformulated in the Bayesian language, we however understand that its original formulation just referred to one specific choice of model, which made the presence of prior knowledge hard to spot. Figure 3: Sketch of how the confluence of (left) likelihood (red) and (a convex) prior (blue) in the posterior (orange, right) leads to a regularization of the inverse problem. Instead of multiple degenerate minima in the likelihood (gray circles) only a single unique one remains in the posterior.
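As an illustration of such a point estimate, the following sketch minimizes $L-S_{\rm BR}$ from eqs. (10) and (16) with SciPy's L-BFGS-B optimizer. Keeping $\alpha$ fixed is a simplification for illustration (the full BR method marginalizes it, as described below), and all grids, noise levels and the default model are ad-hoc choices.

```python
# Sketch: MAP point estimate with the BR regulator, minimizing L - S_BR from
# eqs. (10) and (16) with L-BFGS-B. A fixed alpha is a simplification (the
# full BR method marginalizes it); grids, noise and m are ad-hoc choices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
tau = np.linspace(0.05, 0.95, 24)                     # beta = 1
mu = np.linspace(1e-2, 20.0, 200)
dmu = mu[1] - mu[0]
K = dmu * np.exp(-np.outer(tau, mu)) / (1.0 - np.exp(-mu))

rho_true = mu * np.exp(-0.5 * (mu - 5.0)**2)          # mock spectrum
sig = 1e-3 * (K @ rho_true)                           # 0.1% relative errors
D = K @ rho_true + sig * rng.normal(size=tau.size)
w = 1.0 / sig**2                                      # diagonal inverse covariance
m = np.full(mu.size, 0.1)                             # flat default model
alpha = 1.0                                           # fixed hyperparameter

def neg_log_post(rho):
    r = K @ rho - D
    L = 0.5 * np.sum(w * r**2)                              # likelihood, eq. (10)
    S = -alpha * dmu * np.sum(rho/m - 1.0 - np.log(rho/m))  # BR regulator, eq. (16)
    return L - S

def grad(rho):
    return K.T @ (w * (K @ rho - D)) + alpha * dmu * (1.0/m - 1.0/rho)

res = minimize(neg_log_post, x0=m.copy(), jac=grad, method='L-BFGS-B',
               bounds=[(1e-10, None)] * mu.size)      # positivity of rho
rho_map = res.x                                       # unique convex minimum
```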
The presence of the prior as regulator also entails that among the structures in a reconstructed spectral function only some are constrained by the simulation data, while others are solely constrained by prior information. It is only in the Bayesian continuum limit, which refers to simultaneously taking the error on the input data to zero while increasing the number of available datapoints toward infinity, that the whole of the spectral function is fixed by input data alone. Our choice of regulator determines how efficiently we converge to this limit and which type of artifacts (e.g. ringing or over-damping) one will encounter on the way. One important element of uncertainty analysis in Bayesian spectral reconstruction therefore amounts to exploring how reconstructed spectra improve as the data improves (in lattice QCD it is often easier to collect more samples than to simulate on grids with more points along Euclidean time; then at least the improvement of the reconstruction with increasing statistics needs to be considered). This is a well-established practice in the community. When reconstructing the spectral function according to a given set of Monte-Carlo estimates $D^{k}_{i}$ of a lattice QCD correlator $D_{i}$, we need to reliably estimate the statistical and systematic uncertainty budget. It is important to recognize that these may be related, e.g. increasing the precision of input data often makes the reconstructed spectrum less susceptible to changes in $m$ or $\alpha$. An often deployed strategy is to nevertheless estimate the effects separately: In order to assess statistical uncertainty we may use established bootstrap methods or the (blocked) Jackknife (for an introduction see Ref. EfroTibs93 ), where the reconstruction is performed repeatedly on subensembles of the input data $D^{k}_{i}$ and the variance among the reconstructed spectra provides a direct estimate of their statistical uncertainty. In the case of point estimates, one usually decides a priori on a regulator and fixes certain values of the default model $m$ and the hyperparameter $\alpha$ before carrying out the reconstruction. The freedom in all these choices enters the systematic uncertainty budget. Often the user has access to a reliable default model $m(\mu)$ only along a limited range of frequencies $\mu$. In lattice QCD such information is often obtained from perturbative computations describing the large frequency and momentum behavior of the spectral function. In the low frequency part of the spectrum, where non-perturbative physics dominates, we often do not possess any relevant information about the functional form of $\rho$. It is then customary to extend the default model into the non-perturbative regime using simple and smooth functional forms that join up in the perturbative regime. In practice the user repeats the reconstruction using different choices for the unknown parts of $m$, e.g. different polynomial dependencies on the frequency, and subsequently uses the variation in the end result as an indicator of the systematic uncertainty. It is important to note that if there exist different regulators that encode compatible and complementary prior information, one should also consider repeating the reconstruction based on different choices of $P[\rho|I]$ itself. Since we have access to the likelihood and prior, we may ask whether a combined estimation of the statistical and systematic uncertainty can be carried out even in the case of a point estimate.
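The following is a minimal sketch of the delete-one jackknife for reconstructed spectra; here `reconstruct` is a hypothetical placeholder for any of the point-estimate procedures above, not a library function, and the blocked variant would simply delete blocks of configurations instead of single ones.

```python
# Sketch: delete-one jackknife for reconstructed spectra. `reconstruct` is a
# hypothetical placeholder for any point-estimate procedure above; D_samples
# holds the N_conf per-configuration correlator measurements (N_conf x N_tau).
import numpy as np

def jackknife_spectra(D_samples, reconstruct):
    N = D_samples.shape[0]          # note: requires N - 1 > N_tau to invert C
    spectra = []
    for k in range(N):
        sub = np.delete(D_samples, k, axis=0)            # drop configuration k
        Dbar = sub.mean(axis=0)
        C = np.cov(sub, rowvar=False) / (N - 1)          # covariance of the mean
        spectra.append(reconstruct(Dbar, np.linalg.inv(C)))
    spectra = np.array(spectra)
    mean = spectra.mean(axis=0)
    err = np.sqrt((N - 1) * spectra.var(axis=0))         # jackknife error
    return mean, err
```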
Since the reconstructed spectrum $\rho^{\rm MAP}$ denotes an extremum of the posterior, i.e. a minimum of the negative log posterior $L-S$, one may try to compute the curvature of $L-S$ around that minimum, which would indicate how steep or shallow that minimum actually is. This is the strategy laid out e.g. in Ref. Asakawa:2000tr . In practice it relies on a saddle point approximation of the posterior and therefore can lead to an underestimation of the uncertainty. Many recent studies thus deploy a combination of the Jackknife and a manual variation of the default model. Since the treatment of hyperparameters differs among the various Bayesian methods, let me discuss it here in more detail. Appropriate ranges for the values of $m$ can often be estimated from mock data studies, and since the functional dependence of the default model is varied as part of the uncertainty estimation discussed above, we focus here on the treatment of $\alpha$. I.e. we will treat the values of $m$ as fixed and consider the effect of $P[\alpha]$. If $\alpha$ is taken to be small, a large uncertainty in the value of $m$ ensues, which leads to a weak regularization and therefore to large uncertainty in the posterior. If $\alpha$ is large it constrains the posterior to be close to the prior and limits the information that data can provide to the posterior. Three popular strategies are found in the literature to treat $\alpha$. Note that in the context of the MEM, a common value is assigned to all hyperparameters $\alpha_{l}$, i.e. the same uncertainty is assigned to the default model parameters $m_{l}$ at all frequencies, an ad-hoc choice. The simplest treatment of $\alpha$, also referred to as the Morozov criterion or historic MEM, is motivated by the goal of avoiding overfitting of the input data. It argues that if we knew the correct spectral function and were to compute the corresponding likelihood function $L$, it would on average evaluate to $\langle L\rangle=\frac{1}{2}N_{\tau}$, i.e. half the number of datapoints. Therefore one should tune the value of $\alpha$ such that the likelihood reproduces this value. The second and third strategies are based directly on Bayes theorem. The Bayesian way of handling uncertainties in model parameters is to make their dependence explicit in the joint probability distribution $P[\rho,D,I(m,\alpha)]$. Now that the distribution depends on more than three elements, application of conditional probabilities leads to $\displaystyle P[\rho,D,\alpha,m]=$ $\displaystyle P[D|\rho,\alpha,m]P[\rho|\alpha,m]P[\alpha,m],$ $\displaystyle=$ $\displaystyle P[\alpha|\rho,D,m]P[\rho|D,m]P[D,m].$ (18) The modern MEM approach solves eq. 18 for $P[\alpha|\rho,D,m]$. It then integrates point estimates $\rho^{\rm MAP}_{\alpha}$, obtained for fixed values of $\alpha$, over that probability distribution. In order to compute $P[\alpha|\rho,D,m]$ two ingredients are necessary: the full posterior $P[\rho|D,\alpha,m]$ and the distribution $P[\alpha]$. The former is in general not analytically known and is therefore in practice approximated by a saddle point approximation. The latter is in the literature either chosen as constant or as $P[\alpha]\propto 1/\alpha$, a choice referred to as the Jeffreys prior. Let me briefly clarify the often opaque notion of the Jeffreys prior doi:10.1098/rspa.1946.0056 . Given a probability distribution $P[x|{\bm{\alpha}},{\bf m}]$ and a choice of parameter, e.g.
Let me briefly clarify the often opaque notion of the Jeffreys prior doi:10.1098/rspa.1946.0056 . Given a probability distribution $P[x|{\bm{\alpha}},{\bf m}]$ and a choice of parameter, e.g. $\bm{\alpha}$, the Jeffreys prior refers to the unique distribution $P_{\rm J}[{\bm{\alpha}}]=\sqrt{{\rm det}[I({\bm{\alpha}})]}$ defined from the Fisher information matrix $I({\bm{\alpha}})$. This definition is considered to be uninformative, as it remains invariant under a change of coordinates of $\bm{\alpha}$. Using the one-dimensional Gaussian distribution as an example, we can obtain an intuitive understanding of its role. Let $P[x|\sigma,m]={\cal N}[x|\sigma,m]$; then

$P_{\rm J}[m]=\sqrt{\int dx\,{\cal N}[x|\sigma,m]\big{(}\frac{d}{dm}{\rm ln}\,{\cal N}[x|\sigma,m]\big{)}^{2}}=\sqrt{\frac{1}{\sigma^{2}}}={\rm const.},$ (19)

$P_{\rm J}[\sigma]=\sqrt{\int dx\,{\cal N}[x|\sigma,m]\big{(}\frac{d}{d\sigma}{\rm ln}\,{\cal N}[x|\sigma,m]\big{)}^{2}}=\sqrt{\frac{2}{\sigma^{2}}}=\sqrt{2}\,\frac{1}{\sigma}.$ (20)

The Jeffreys prior for $m$ is independent of $m$ and thus refers to the unique translation invariant distribution on the real values (the Haar measure for addition). It therefore does not impart any information on the location of the peak of the normal distribution. Similarly $P_{\rm J}[\sigma]$ is a scale invariant distribution on the positive real values (the Haar measure for multiplication). Since the uncertainty parameter $\sigma$ enters as a multiplicative scale in the normal distribution, its Jeffreys prior also does not introduce any additional information. Both priors investigated here are improper distributions, i.e. they are well-defined only in products with proper probability distributions.

The third strategy to treat the parameters $\alpha_{l}$ has been put forward in the context of the BR method. It sets out to overcome the two main limitations of the MEM approach: the need for saddle point approximations in the handling of $\alpha$ and the overly restrictive treatment of assigning a common uncertainty to all $m_{l}$'s. The BR method succeeds in doing so by using Bayes' theorem to marginalize the parameters $\alpha_{l}$ a priori, making the (highly conservative) assumption that no information about $\alpha_{l}$ is known, i.e. $P[\alpha_{l}]=1$. It benefits from the fact that, in contrast to the Shannon-Jaynes prior, the BR prior is analytically tractable and its normalization can be expressed in closed form. We start from eq. 18 and assume that the parameters $\alpha$ and $m$ are independent, so that their distributions factorize. Marginalizing a parameter simply means integrating the posterior over the probability distribution of that parameter. Via application of conditional probabilities it is possible to arrive at the corresponding expression

$\prod_{l}\int d\alpha_{l}\,P[\alpha|\rho,D,m]\,P[\rho|D,m]=\frac{P[D|\rho,I]}{P[D|m]P[m]}\prod_{l}\int d\alpha_{l}\,P[\rho|\alpha,m]\,P[\alpha]\,P[m],$

$P[\rho|D,m]=\frac{P[D|\rho,I]}{P[D|m]}\prod_{l}\int d\alpha_{l}\,P[\rho|\alpha,m]\,P[\alpha],$ (21)

where $P[\rho|D,m]$ does not depend on $\alpha$ anymore and by the definition of probabilities $\int d\alpha\,P[\alpha|\rho,D,m]=1$. The posterior $P[\rho|D,m]$ now includes all effects arising from the uncertainty of $\alpha$ without referring to that variable anymore. Due to the form of the BR prior $P[\rho|\alpha,m]$, the integral over $\alpha_{l}$ is well defined, even though we used the improper distribution $P[\alpha]=1$. One may wonder whether integrating over $\alpha_{l}$ impacts the convexity of the prior. While not proven rigorously, in practice it turns out that the optimization of the marginalized posterior $P[\rho|D,m]$ in the BR method does not suffer from local extrema.
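As an aside, the Fisher information underlying the Jeffreys prior in eqs. (19) and (20) is easily verified numerically. A minimal sketch using finite differences and quadrature, merely reproducing the two analytic results:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

m, sigma, eps = 1.3, 0.7, 1e-6

def fisher_information(wrt):
    def integrand(x):
        if wrt == "m":
            dlogp = (norm.logpdf(x, m + eps, sigma)
                     - norm.logpdf(x, m - eps, sigma)) / (2 * eps)
        else:  # wrt == "sigma"
            dlogp = (norm.logpdf(x, m, sigma + eps)
                     - norm.logpdf(x, m, sigma - eps)) / (2 * eps)
        return norm.pdf(x, m, sigma) * dlogp ** 2
    return quad(integrand, m - 12 * sigma, m + 12 * sigma)[0]

print(fisher_information("m"), 1 / sigma**2)      # both ~ 2.0408
print(fisher_information("sigma"), 2 / sigma**2)  # both ~ 4.0816
```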
Thanks to this property, a user of the BR method only needs to provide a set of values for the default model $m_{l}$ to compute the most probable spectral function

$\left.\frac{\delta}{\delta\rho}P[\rho|D,m]\right|_{\rho=\rho^{\rm MAP}_{\rm BR}}=0.$ (22)

By carrying out several reconstructions, varying the functional form of $m$ within reasonable bounds established by mock-data tests, the residual dependence on the default model can be quantified.

So far we have discussed the inherent uncertainties arising from the use of Bayesian inference and how to assess them. Another source of uncertainty in spectral reconstructions arises from specific implementation choices. Let me give an example based on the Maximum Entropy Method. In order to save computational cost, the MEM has historically been combined with a singular value decomposition to limit the dimensionality of the solution space. The argument by Bryan bryan_maximum_1990 suggests that instead of having to locate the unique extremum of $P[\rho|D,I]$ in the full $N_{\mu}$ dimensional search space of parameters $\rho_{l}$, it is sufficient to use a certain parameterization of $\rho(\omega)$ in terms of $N_{\tau}$ parameters, the number of input data points. The basis functions are obtained from a singular value decomposition (SVD) of the transpose of the kernel matrix $K^{t}$ (a minimal construction is sketched below). Bryan's argument refers only to the functional form of the kernel $K$ and the number of data points $N_{\tau}$ in specifying the parameterization of $\rho(\omega)$. If true in general, this would lead to an enormous reduction in computational complexity. However, I have put forward a counterexample to Bryan's argument (originally in Rothkopf:2011ef ), including numerical evidence, which shows that in general the extremum of the prior is not part of Bryan's reduced search space. One manifestation of the artificial limitation of Bryan's search space is a dependence of the MEM resolution on the position of a spectral feature along the frequency axis. As shown in Fig. 3 of Ref. Rothkopf:2012vv , if one reconstructs a single delta peak located at different positions $\mu_{0}$ with the MEM, one finds that the reconstructed spectral functions show a different width, depending on the value of $\mu_{0}$. This can be understood by inspecting the SVD basis functions, which are highly oscillatory close to $\mu_{\rm min}$, the smallest frequency chosen to discretize the $\mu$ range. At larger values of $\mu$, however, these functions damp towards zero. That is, if the relevant spectral feature is located in the $\mu$ range where the basis functions have structure, it is possible to reconstruct a sharp peak reasonably well, while if it is located at larger $\mu$ the resolution of the MEM decreases rapidly. The true Bayesian $\rho^{\rm MAP}$, i.e. the global extremum of the MEM posterior, does not exhibit such a resolution restriction, as one can see when changing the parameterization of the spectral function to a different basis, e.g. the Fourier basis consisting of cos and sin functions. In addition, Ref. Jakovac:2006sf showed in its Fig. 28 that, using a different parameterization of the spectral function which restricts $\rho$ to a space equivalent to the SVD subspace from a linear algebra point of view, one obtains a different result.
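The construction of Bryan's reduced search space referred to above can be sketched as follows; the grid parameters are illustrative only, and the exponential form of the parameterization is the one commonly used in the MEM literature.

```python
import numpy as np

N_tau, N_mu = 32, 1000
tau = np.arange(1, N_tau + 1)            # Euclidean time steps (a = 1)
mu = np.linspace(0.0, 25.0, N_mu)        # frequency grid
K = np.exp(-np.outer(tau, mu))           # zero temperature kernel

# SVD of the transposed kernel: the columns of U span Bryan's subspace
U, S, Vt = np.linalg.svd(K.T, full_matrices=False)  # U has shape (N_mu, N_tau)

def rho_bryan(b, m):
    """Bryan-type parameterization, restricted to N_tau parameters b_j."""
    return m * np.exp(U @ b)

# Plotting the columns of U shows the oscillatory structure near mu_min
# and the damping towards zero at large mu discussed in the text. The
# global extremum of the full N_mu-dimensional posterior need not lie in
# this restricted space, cf. the counterexample of Rothkopf:2011ef.
```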
Such findings emphasize that the unique global extremum of the posterior is not accessible within these restricted search spaces. Note that one possible explanation for the occurrence of the extremum of $P[\rho|D,I]$ outside of the SVD space lies in the fact that in constrained optimization problems (here the constraint is positivity), the extremum can either be given by the stationarity condition of the optimization functional in the interior of the search space or it can lie on the boundary of the search space restricted by the constraint. That is, in addition to artifacts introduced into the reconstructed spectrum via a particular choice of prior distribution and the handling of its hyperparameters (e.g. ringing or over-damping), one must also be aware of additional artifacts arising from choices in the implementation of each method.

The dependence of Bryan's MEM on the limited search space was among the central reasons for the development of the BR method. The advantageous form of the BR prior, which in practice does not suffer from slow convergence in finding $\rho^{\rm MAP}$, allows one to carry out the needed optimization in the full $N_{\mu}$ dimensional solution space of $P[\rho|D,I]$ with reasonable computational cost. The proof from Ref. Asakawa:2000tr , which also applies to the convex BR prior, guarantees that in the full search space a single unique Bayesian solution can be located if it exists. In section 4 we will take a look at hands-on examples of using the BR method to extract spectral functions and estimate their reliability.

### 3.4 Two lattice QCD uncertainty challenges

Spectral function reconstruction studies from lattice QCD have encountered two major challenges in the past. The first one is related to the number of available input data points, which, compared to simulations in e.g. condensed matter physics, is relatively small, of the order ${\cal O}(10-100)$. Especially when analyzing datasets at the lower end of this range, the sparsity of the $D_{i}$'s along Euclidean time $\tau$ often translates into ringing artefacts. Due to the restricted search space of Bryan's MEM, this phenomenon may be hidden, while the global extremum of the MEM posterior $\rho^{\rm MAP}_{\rm MEM}$, as well as the BR method MAP estimate $\rho^{\rm MAP}_{\rm BR}$, do show ringing. Since ringing leads to spectral functions with an arc length larger than that of the true spectral function, one can treat this artifact by combining either the MEM or the BR prior with the arc-length penalty regulator discussed in section 3.2. The additional hyperparameters associated with this penalty term can be estimated using realistic mock data, as shown e.g. in Ref. Kim:2018yhk . The benefit of this genuinely Bayesian approach is that the mechanism by which ringing is suppressed is made explicit and is not hidden in a particular choice of basis functions.

The second challenge predominantly affects spectral reconstructions at finite temperature, in particular their comparability at different temperatures. In lattice QCD, temperature is encoded in the length of the imaginary time axis. That is, simulations at lower temperature have access to a larger $\tau$ regime than those at higher temperature. Since the available Euclidean time range affects the resolution capabilities of any spectral reconstruction, it is important to calibrate one's results to a common baseline. That is, one needs to establish how the accuracy of the reconstruction method changes as one increases the temperature.
Otherwise changes in the reconstructed spectral functions may be attributed to physics when they actually represent simply a degradation of the method's resolution. The concept of the reconstructed correlator Datta:2003ww is an important tool in this regard. Assume we have a correlator encoding a certain spectral function at temperature $T_{1}$ with $N_{\tau}^{\rm T_{1}}$ points. We can now ask: how would the correlator look if the same spectral function were encoded at a higher temperature $T_{2}$, i.e. within a smaller Euclidean time window of $N_{\tau}^{\rm T_{2}}$ points? Since the underlying kernel relating spectral function and correlator is often temperature dependent, this question is not easily answered by just discarding imaginary time datapoints from the large $\tau$ region of the original correlator. (In cases where the kernel is temperature independent, e.g. for lattice effective field theory correlators, discarding large $\tau$ datapoints is equivalent to computing the reconstructed correlator.) Instead, if one wishes to evaluate the corresponding higher temperature correlator, Ref. Ding:2012sp showed that for the bosonic finite temperature kernel $K^{\rm T>0}(\mu,\tau)={\rm cosh}[\mu(\tau-\beta/2)]/{\rm sinh}[\mu\beta/2]$, relevant for studies of relativistic bosonic spectral functions, one has to form the following quantity

$D_{\rm rec}(\tau,T_{2}|T_{1})=\sum_{\tau^{\prime}/a=\tau/a,\;\Delta\tau^{\prime}/a=N_{\tau}^{\rm T_{2}}}^{N_{\tau}^{\rm T_{1}}-N_{\tau}^{\rm T_{2}}+\tau/a}D_{\rm lattice}(\tau^{\prime}|T_{1}).$ (23)

By carrying out a reconstruction based on the two correlators at different Euclidean extents $D_{\rm lattice}(\tau|T_{1})$ and $D_{\rm lattice}(\tau|T_{2})$, one will in general obtain two different spectral functions, even if the encoded spectrum were the same. Only when one compares the reconstruction based on $D_{\rm rec}(\tau,T_{2}|T_{1})$ with that of $D_{\rm lattice}(\tau|T_{2})$ is it possible to disentangle the genuine effects of a change in temperature from those induced by the reduction in access to Euclidean time. This reconstruction strategy was first deployed for relativistic correlators in Ref. Kelly:2018hsi . A similar analysis in the context of non-relativistic spectral functions in Ref. Kim:2018yhk showed that the temperature effect of a negative mass shift for in-medium hadrons was only observable once the changes in resolution of the reconstruction had been taken into account.
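In code, forming the reconstructed correlator of eq. (23) amounts to a simple resummation. A minimal sketch, assuming commensurate lattices (with $N_{\tau}^{\rm T_{1}}$ an integer multiple of $N_{\tau}^{\rm T_{2}}$) and $\tau/a$ running from $0$ to $N_{\tau}-1$:

```python
import numpy as np

def reconstructed_correlator(D_T1, Ntau2):
    """D_T1: correlator at the lower temperature T1 with Ntau1 points.
    Returns D_rec(tau, T2|T1) with Ntau2 points, cf. eq. (23)."""
    Ntau1 = len(D_T1)
    assert Ntau1 % Ntau2 == 0, "requires commensurate Euclidean extents"
    D_rec = np.empty(Ntau2)
    for t in range(Ntau2):
        # sum over tau'/a = t, t + Ntau2, ..., Ntau1 - Ntau2 + t
        D_rec[t] = D_T1[t::Ntau2].sum()
    return D_rec
```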
## 4 Hands-on spectral reconstruction with the BR method

This publication is accompanied by two open-source codes. The first BRMAP , written in C/C++, implements the BR method (and the MEM) in its traditional form to compute MAP estimates with arbitrary precision arithmetic. The second BRMCStan , written in Python, uses standard double precision arithmetic and utilizes the modern MCStan Monte-Carlo sampler to evaluate the full BR posterior.

### 4.1 BR MAP implementation in C/C++

The BR MAP code deploys arbitrary precision arithmetic, based on the GMP Granlund12 and MPFR 10.1145/1236463.1236468 libraries, which offers numerical stability for systems where exponential kernels are evaluated over large frequency ranges. A run-script called BAYES.scr is provided, in which all parameters of the code can be specified. The kernel for a reconstruction task is known a priori and depends on the system in question. The BR MAP code implements three common types encountered in the context of lattice QCD (see parameter KERNELTYPE). Both the zero temperature kernel $K^{\rm T=0}(\mu,\tau)={\rm exp}[-\mu\tau]$ and the naive finite temperature kernel for bosonic correlators $K^{\rm T>0}(\mu,\tau)={\rm cosh}[\mu(\tau-\beta/2)]/{\rm sinh}[\mu\beta/2]$ are available. Here $\beta$ refers to the extent of the imaginary time axis. The third option is the regularized finite temperature kernel $K^{\rm T>0}_{\rm reg}(\mu,\tau)=\frac{\beta}{2\pi}{\rm atan}[\mu]K^{\rm T>0}(\mu,\tau)$ suggested in Ref. Ding:2012sp (see also Aarts:2007wj ; Ding:2009ie ). It lifts the divergence of the kernel at $\mu=0$, which is related to the antisymmetry of bosonic spectral functions at $T>0$. Note that when redefining the kernel, one also redefines the spectral function to be reconstructed, and thus an appropriately modified default model must be supplied.

Next, the discretization of the frequency interval $\mu$ needs to be decided on (see parameters WMIN and WMAX). When relativistic lattice QCD correlators are investigated, the lattice cutoff $\pm\sqrt{3}\frac{\pi}{a}$ provides a reliable estimate up to where spectral structures will be present. It is often a good crosscheck to use a larger range of frequencies, beyond where the input data can provide constraining information, in order to see that the reconstructed spectral function in that regime is correctly given by the supplied default model. In case lattice effective field theory correlators are investigated, the user has to keep in mind that their spectra may be populated beyond the naive lattice cutoff. In some cases the appropriate range can be estimated from an inspection of semi-analytically tractable free theory spectral functions. A rough guess for the UV cutoff can be obtained by fitting an exponential to the first few correlator points at small imaginary time $\tau$. Depending on the resolution required for the encoded spectral features, the number of frequency bins $N_{\mu}$ can be chosen via NOMEGA. If a very sharp peak feature is present, one can use the parameters HPSTART, HPEND and HPNUM to define a high resolution window along $\mu$, for which HPNUM of the NOMEGA points are used. The number of points along the Euclidean time axis of the lattice simulation is specified by NT and its extent by BETA. Depending on the form of the kernel and the choices for $\beta$ and $\mu_{\rm max}$, the dynamic range of the kernel matrix may be large, and one has to choose an appropriate precision NUMPREC for the arithmetic operations used.

For the analysis of lattice QCD correlators FILEFORMAT $4$ is most useful. Each of the total NUMCONF measurements of a correlator is expected to be placed in an individual file with a common name DATANAME (incl. directory information) and a counter as extension, which counts upward from FOFFSET. Each file is expected to contain two columns in ASCII format, the first denoting the Euclidean time step as an integer and the second the real-valued Euclidean correlator. Via TMIN and TMAX the user can specify the smallest and largest Euclidean times provided in each input data file, while TUSEMIN and TUSEMAX define which of these datapoints are used for the reconstruction. In order to robustly estimate the statistical uncertainty of the input data, the code is able to perform an analysis of the autocorrelation among the different measurements. The value of ACTHRESH is used to decide to which threshold the normalized autocorrelation function montvay1994quantum must have decayed for us to consider subsequent measurements as uncorrelated.
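The autocorrelation analysis controlled by ACTHRESH can be mimicked in a few lines. A minimal sketch (not the code's actual implementation): compute the normalized autocorrelation function of a measurement series in Monte-Carlo time and find the separation after which it has decayed below the threshold.

```python
import numpy as np

def decorrelation_step(series, threshold):
    """series: one correlator entry over Monte-Carlo time.
    Returns the separation after which the normalized autocorrelation
    function has decayed below `threshold` (cf. ACTHRESH)."""
    x = np.asarray(series) - np.mean(series)
    n = len(x)
    c0 = np.dot(x, x) / n
    for dt in range(1, n):
        c = np.dot(x[:-dt], x[dt:]) / (n - dt)
        if c / c0 < threshold:
            return dt   # treat every dt-th measurement as uncorrelated
    return n
```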
To test the quality of the estimated errors one can manually enlarge or shrink the assigned error values using the parameter ERRADAPTION. As discussed in the previous section, a robust estimate of the statistical uncertainty of the spectral reconstruction can be obtained from a Jackknife analysis. The code implements this type of error estimate when the number of Jackknife blocks is set to a value larger than two in JACKNUM. The NUMCONF measurements are divided into consecutive blocks and in each iteration of the Jackknife a single block is removed when computing the mean of the correlator. If JACKNUM is set to zero, a single reconstruction based on the full available statistics is carried out.

Once the data is specified, we have to select the default model. The default model can be chosen to take on a simple functional form by choosing the values $1$ or $2$ for PRIORMODEL. The latter corresponds to a constant given by MFAC. The former leads to $m(\mu)=m_{0}/(\mu-\mu_{\rm min}+1)^{\rm power}$, where the power is set via the parameter PRIORPOWER and $m_{0}$ via MFAC. To supply more elaborate default models the user can set PRIORMODEL to $4$ and provide a file prior.0 in the working directory of the code that contains two columns, the first with the frequencies $\mu$ and the second with the values of $m$ (a minimal sketch for writing such a file is given below). Note that we have already marginalized over the uncertainty of the default model using $P[\alpha]=1$, so that specifying $m$ suffices for the BR method.

In the present implementation of the BR method (ALGORITHM value $1$) the integration over $\alpha$ is implemented in a semi-analytic fashion, which is based on a large $S$ expansion. In practice this simply means that one must avoid starting the minimizer from the default model, for which $S=0$. The original Ref. Burnier:2013nla conservatively stated that, with regard to avoiding overfitting, it is advantageous to instruct the minimizer to keep the values of the likelihood close to the number of provided datapoints. The code maintains this condition within a tolerance that is specified by a combination of the less than ideally named ALPHAMIN and ANUM parameters. The reconstruction will be performed ANUM times, where in each of the iterations counted by ACNT the likelihood is constrained to fulfill $|L-N_{\rm data}|=(1/{\rm ALPHAMIN})\times 10^{\rm ACNT}$. The search for $\rho^{\rm MAP}_{\rm BR}$ is carried out internally using the LBFGS minimization algorithm 10.3115/1118853.1118871 . It terminates when the step size of the minimizer falls below the threshold MINTOL. Note that for high precision arithmetic a correspondingly small threshold should be specified (e.g. MINTOL$=10^{-30}$ for NUMPREC$=128$, or MINTOL$=10^{-60}$ for NUMPREC$=256$). The results of the minimizer are output into the folder RESULTNAME every $2000$ steps in files called BAYES_rhovalues_A(ANUM-ACNT).dat and the final result is found in the file spec_rec.dat. The spectra are also collected in the file PROB_ESTIMATES_FREQ.dat in column $6$, where the frequencies are listed in column $4$. If the Jackknife analysis is selected, this file contains multiple spectra, one for each Jackknife subaverage, counted by the value in column $8$. To speed up convergence in case very high precision data is supplied (i.e. when very sharp valleys exist in the likelihood), it is advantageous to carry out the reconstruction first with artificially enlarged errorbars via ERRADAPTION$>1$. The corresponding result in the file BAYES_rhovalues_A(ANUM).dat, if copied into the working directory of the code under the name start.0, can then be used as the starting point for the next minimization with the actual errorbars by selecting the value $2$ for the parameter RESTARTPREV.
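Returning to the default model: writing a custom prior.0 file for PRIORMODEL $4$ is straightforward. A minimal sketch, here simply reproducing the built-in power-law form as an example (all values are illustrative):

```python
import numpy as np

mu_min, mu_max, n_omega = -5.0, 25.0, 1000  # cf. WMIN, WMAX, NOMEGA
m0, power = 1.0, 2.0                        # cf. MFAC, PRIORPOWER

mu = np.linspace(mu_min, mu_max, n_omega)
m = m0 / (mu - mu_min + 1.0) ** power       # m(mu) = m0/(mu - mu_min + 1)^power

# two columns: frequencies mu and default model values m
np.savetxt("prior.0", np.column_stack([mu, m]))
```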
The code, when compiled with the preprocessor macro VERBOSITY set to value one, will give ample output about each step of the reconstruction. It will output the frequency discretization and the values of the Euclidean times used, as well as show which data from each datafile has been read in. In addition it presents the estimated autocorrelation and the eigenvalues of the covariance matrix, before outputting each step of the minimizer to the terminal. This comprehensive output allows the user to spot potential errors during data read-in and makes it easy to monitor whether the minimizer is proceeding normally. The incorrect estimation of the covariance matrix due to autocorrelations is a common issue, which can prevent the minimizer from reaching the target of minimizing the likelihood down to values close to the number of input data. Enlarging the errorbars until the likelihood reaches small enough values provides a first indication of how badly the covariance matrix is affected by autocorrelations. Another diagnostic step is to consider only the diagonal entries of the covariance matrix, which can be selected by setting the preprocessor macro DIAGCORR to $1$.

### 4.2 MEM MAP implementation in C/C++

The provided C/C++ code also allows the user to perform the MAP estimation based on the MEM prior using arbitrary precision arithmetic. By setting the parameter ALGORITHM to value $2$ one can choose Bryan's implementation, where the spectral function is parameterized via the SVD of the kernel matrix. The standard implementation uses as many SVD basis functions as input datapoints are provided. By varying the SVDEXT parameter the user may choose to increase or reduce the number of SVD basis functions deployed. Alternatively, by using the value $3$ the user can deploy the Fourier basis functions introduced in Ref. Rothkopf:2012vv , and for value $4$ $\rho^{\rm MAP}_{\rm MEM}$ is searched for in the full $N_{\mu}$ dimensional search space. Due to the proof of uniqueness of the extremum, even searching in the full space is supposed to locate a single Bayesian answer $\rho^{\rm MAP}_{\rm MEM}$ to the inverse problem. In the MEM, the common uncertainty parameter $\alpha$ for the default model $m_{l}$ is still part of the posterior and needs to be treated explicitly. To this end the MEM reconstruction is repeated ANUM times, scanning a range of $\alpha$ values between ALPHAMIN and ALPHAMAX. Since the appropriate range of values is not known a priori, the user is recommended to carry out reconstructions with artificially enlarged errorbars via ERRADAPTION, which converge quickly and allow one to scan a large range, usually $\alpha\in[0,100]$. The LBFGS minimizer is used to find the point estimates $\rho^{\rm MAP}_{\alpha}$ for each fixed value of the hyperparameter, and then, following Ref. Asakawa:2000tr , the probability distribution $P[\alpha|D,I]$ is estimated, over which a weighted average is computed. The final result is then output in the file spec_rec.dat in column $4$, with the frequencies located in column $3$. Intermediate steps of the minimizer are output to files MEM_rhovalues_A(ACNT).dat, where ACNT refers to the step along the alpha interval. In the case of a Jackknife analysis all reconstructed spectra can be found in PROB_ESTIMATES_FREQ.dat in column $6$, where the frequencies are listed in column $4$.
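The concluding weighted average over the scanned $\alpha$ values can be sketched as follows, with `rho_alpha` an array of fixed-$\alpha$ point estimates on a uniform grid of $\alpha$ values and `prob_alpha` the corresponding (unnormalized) estimates of $P[\alpha|D,I]$; both names are illustrative.

```python
import numpy as np

def mem_alpha_average(rho_alpha, prob_alpha):
    """rho_alpha: shape (N_alpha, N_mu); prob_alpha: shape (N_alpha,).
    Returns the P[alpha|D,I]-weighted average of the point estimates."""
    weights = prob_alpha / prob_alpha.sum()  # normalize on the alpha grid
    return (weights[:, None] * rho_alpha).sum(axis=0)
```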
Note that due to the functional form of the Shannon-Jaynes prior the convergence for spectral functions with large regions of vanishing values is often slow, which is why in practice the tolerance for convergence is chosen via MINTOL to be around $10^{-7}$. Note also that the estimation of the $\alpha$ probabilities involves the computation of eigenvalues of a product of the kernel with itself. In turn this step may require additional numerical precision via NUMPREC if an exponential kernel is used. If the precision is insufficient, the determination of the eigenvalues might fail and the final integrated spectral function will show NaN values, while the intermediate results in MEM_rhovalues_A(ACNT).dat are well behaved. In that case rerunning the reconstruction with higher precision will remedy the issue.

### 4.3 Full Monte-Carlo based BR method in Python

In many circumstances the MAP point estimate of spectral functions already provides relevant information to answer questions about real-time physics from lattice QCD. However, as discussed in section 3.3, its full uncertainty budget may be challenging to estimate. For this reason I discuss here a modern implementation of the BR method, allowing access to the posterior distribution via Monte-Carlo sampling. The second code provided with this publication is a Python script based on the MCStan Monte-Carlo sampler library carpenter2017stan ; standev2018stancore . It uses the same parameters for the description of frequency and imaginary time as the C/C++ code but works solely with double precision arithmetic. Since different kernels are easily re-implemented, the script contains as a single example the zero temperature kernel $K^{\rm T=0}(\tau,\mu)$.

In order to sample from the posterior, we must define all the ingredients of our Bayesian model in the MCStan language. A simple model consists of three sections: data, parameters and the actual model. In data the different variables and vectors used in the evaluation of the model are specified. It contains e.g. the number of datapoints sNt and the number of frequency bins sNw. The decorrelated kernel is provided in a two-dimensional matrix datatype Kernel, while the decorrelated simulation data come in the form of a vector D. The eigenvalues of the covariance matrix enter via the vector Uncertainty. The values of the default model are stored in the vector DefMod. In the original BR method we would assume full ignorance of the uncertainty parameters $\alpha_{l}$ with $P[\alpha]=1$. Such improper priors may lead to inefficient sampling in MCStan, which is why in this example script a lognormal distribution is used. It draws $\alpha$ values from a range considered relevant in mock data tests. The user can always check self-consistently whether the sampling range of $\alpha$'s was chosen appropriately by interrogating the marginalized posterior for $\alpha$ itself, making sure that its maximum lies well within the sampling range. After selecting how many Markov chains to initialize via NChain and how many steps in Monte-Carlo time to proceed via NSamples, the Monte-Carlo sampler of MCStan is executed using the sample command. MCStan automatically adds additional steps for the thermalization of the Markov chain. Depending on how well localized the histograms for each $\rho_{l}$ are, the number of samples must be adjusted.
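To make the structure concrete, the following is a minimal sketch of such a model, not the distributed script itself: the variable names follow the description above, the BR prior enters as an explicit target increment, a lognormal distribution stands in for the improper $P[\alpha]=1$, and the sampling call assumes the pystan 3 interface.

```python
import stan  # pystan 3

program = """
data {
  int<lower=1> sNt;                  // number of datapoints
  int<lower=1> sNw;                  // number of frequency bins
  matrix[sNt, sNw] Kernel;           // decorrelated kernel
  vector[sNt] D;                     // decorrelated correlator data
  vector<lower=0>[sNt] Uncertainty;  // covariance matrix eigenvalues
  vector<lower=0>[sNw] DefMod;       // default model m_l
}
parameters {
  vector<lower=0>[sNw] rho;
  vector<lower=0>[sNw] alpha;
}
model {
  alpha ~ lognormal(0, 1);           // proper stand-in for P[alpha] = 1
  // BR prior: sum_l alpha_l * (1 - rho_l/m_l + log(rho_l/m_l))
  target += sum(alpha .* (1 - rho ./ DefMod + log(rho ./ DefMod)));
  D ~ normal(Kernel * rho, sqrt(Uncertainty));
}
"""

posterior = stan.build(program, data=data_dict)  # data_dict supplied by the user
fit = posterior.sample(num_chains=4, num_samples=2000)
rho_samples = fit["rho"]  # one histogram per rho_l, as discussed above
```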
Since the BR prior is convex, initializing different chains in different regions of parameter space does not affect the outcome, as long as enough samples are drawn. We may then estimate the spectral function reconstruction from the posterior by inspecting the histograms for each parameter. Since in this case we have access to the full posterior distribution, we can now answer not only what the most probable value for $\rho_{l}$ is but also compute its mean and median, giving us relevant insight into the skewness of the distribution of values.

### 4.4 Mock Data

Both code packages contain two realistic mock-data test sets, which have been used in the past to benchmark the performance of Bayesian methods. They are based on the Euclidean Wilson loop computed in first order hard-thermal-loop perturbation theory, for which the temperature independent kernel $K(\tau,\mu)={\rm exp}[-\tau\mu]$ is appropriate. The correlator included here corresponds to the one computed at $T=631$MeV in Ref. Burnier:2013fca , evaluated at spatial extents $r=0.066$fm and $r=0.264$fm. The continuum correlator is discretized with $32$ steps in Euclidean time. The underlying spectral functions are provided in the folder MockSpectra in separate files for comparison. To stay as close as possible to the scenario of a lattice simulation, a set of 1000 individual datafiles is generated from the ideal correlator data in the folder MockData, in which the imaginary time data are distorted with Gaussian noise. The noise strength is set to give a constant $\Delta D/D=10^{-4}$ relative error on the mean when all samples are combined. The user is advised to skip both the first datapoint $D(0)$ and the last datapoint $D(\tau_{\rm max})$ in the dataset, which are contaminated by unphysical artifacts related to the regularization of the Wilson loop computation. The reader will find that this mock data provides a challenging setting for any reconstruction method, as it requires the reconstruction both of a well defined peak and of a broad background structure. It is therefore well suited to test the resolution capabilities of reconstruction methods, as well as their propensity for ringing and over-damping artifacts. For the C/C++ implementation of the BR MAP estimation a set of example scripts is provided. The user can first execute e.g. BAYESMOCK066_precon.scr to carry out a preconditioning run with enlarged errorbars. In a second step one provides the outcome of the preconditioning run as file start.0 and executes BAYESMOCK066.scr to locate the global extremum of the BR posterior. The outcome of these sample scripts is shown for reference in fig. 4, compared to the semi-analytically computed HTL spectral functions in SpectrumWilsonLoopHTLR066.dat.

Figure 4: BR MAP reconstructions of the HTL Wilson loop spectral function (gray points) evaluated at $T=631$MeV and spatial separation distance $r=0.066$fm (top) and $r=0.264$fm (bottom). The reconstructions based on $N_{\tau}=32$ Euclidean data and a frequency range between $\mu a\in[-5,25]$ with $N_{\omega}=1000$ are shown as colored open symbols. The red data denotes the reconstruction based on the preconditioning ERRADAPTION$=50$, while the final result exploiting the full $\Delta D/D=10^{-4}$ is given in blue.
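As a side note, the Gaussian distortion used to generate the MockData files can be reproduced in a few lines. A minimal sketch, with the per-sample noise width chosen such that the relative error on the combined mean is constant:

```python
import numpy as np

def make_mock_samples(D_ideal, n_samples=1000, rel_err=1e-4, seed=0):
    """Distort an ideal correlator with Gaussian noise such that the
    mean over all samples carries a constant relative error rel_err."""
    rng = np.random.default_rng(seed)
    width = rel_err * np.sqrt(n_samples) * np.abs(D_ideal)  # per sample
    return D_ideal + width * rng.normal(size=(n_samples, len(D_ideal)))
```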
## 5 New insight from machine learning

Over the past years interest in machine learning approaches to spectral function reconstruction has increased markedly (see also Boyda:2022nmh ). Several groups have put forward pioneering studies that explore how established machine learning strategies, such as supervised kernel ridge regression Offler:2021fmg ; Spriggs:2021dsb , artificial neural networks fournier2020artificial ; Kades:2019wtd ; Karpie:2019eiq ; Chen:2021giw ; Wang:2021cqw ; Shi:2022yqw or Gaussian processes Horak:2021syv , can be used to tackle the inverse problem in the context of extracting spectral functions from Euclidean lattice correlators. The machine learning mindset has already led to new developments in the spectral reconstruction community, by providing new impulses to the regularization of the ill-posed problem.

As a first step let us take a look at how machine learning strategies incorporate the necessary prior knowledge to obtain a unique answer to the reconstruction task. While in the Bayesian approach this information enters explicitly through the prior probability and its hyperparameters, it does so in the machine-learning context in three separate ways. First, to train supervised reconstruction algorithms a training dataset needs to be provided, often consisting of pairs of correlators and information on the encoded spectral functions. Usually a limited selection of relevant structures is included in this training data set, which amounts to prior knowledge of the spectrum. Second, both supervised and unsupervised machine learning are built around the concept of a cost or optimization functional, which contains information on the provided data. It most often also features regulator terms, which can be of similar form as those discussed in section 3.2. This in particular means that these regulators define the most probable values for the $\rho_{l}$'s in the absence of data and therefore take on a similar role as a Bayesian default model. The third entry point for prior knowledge lies in the choice of structures used to compose the machine learning model. If, e.g., Gaussian processes are used, the choice of the kernel of the common normal distribution for observed and unobserved data is based on prior knowledge, as is the selection of its hyperparameters. If neural networks are used, the number and structure of the deployed layers and activation functions similarly imprint additional prior information on the reconstructed spectral function, such as e.g. its positivity.

Direct applications of machine learning approaches developed in the context of image reconstruction to positive spectral function reconstruction have shown good performance, on par with Bayesian algorithms such as the BR method or the MEM. Can we understand why machine learning so far has not outpaced Bayesian approaches? One potential answer lies in the information scarcity of the input correlators themselves. If there is no unused information present in the correlator, even sophisticated machine learning cannot go beyond what Bayesian approaches utilize. As shown in recent mock-data tests in the context of finite temperature hadron spectral functions in Ref. Kim:2018yhk , increasing the number of available datapoints in imaginary time (i.e. going closer to the continuum limit) does not necessarily improve the reconstruction outcome significantly, as the relevant information content about thermal physics does not increase. This is easily seen when considering the Matsubara frequency correlator. As one decreases the temporal lattice spacing, the range of accessible high lying Matsubara frequencies increases, but their coarseness, given by the inverse temperature of the system, remains the same.
It turns out that the relevant thermal physics is often hidden in the range between the first and second Matsubara frequency, and the correlator at higher frequencies already coincides within errors with the zero temperature correlator. This information scarcity dilemma asks us to provide our reconstruction algorithms with more QCD specific prior information. So far the Bayesian priors have focused on very generic properties, such as positivity and smoothness. It is here that machine learning has already provided, and can continue to provide, new impulses to the community.

One promising approach is to use neural networks as a parameterization of spectral functions or parton distribution functions (a minimal sketch is given at the end of this section). First introduced in the context of PDFs in Ref. Karpie:2019eiq and recently applied to the study of finite temperature spectra in Ref. Shi:2022yqw , this approach allows one to infuse the reconstruction with additional information about the analytic properties of $\rho$. Traditionally one would choose a specific parameterization a priori, such as rational functions (Padé) or SVD basis functions (Bryan), and vary their parameters. The more versatile NN approach, thanks to the universal approximation theorem, allows us instead to explore different types of basis functions and assign an uncertainty to each choice. The concept of learning can also be brought to the prior probability or regulator itself. Instead of constructing a regulator based on generic axioms, one may consider it as a neural network mapping the parameters $\rho_{l}$ to a single penalty value $P[\rho|I]$. Training an optimal regulator within a Bayesian setting, based e.g. on realistic mock data, promises to capture more QCD specific properties than what is currently encoded in the BR or MEM priors. Exploring this path is work in progress.
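To close this section, here is a minimal sketch of the neural-network parameterization idea, in the spirit of the works cited above but not their implementation: a small positive network stands in for $\rho(\omega)$, is pushed through the known kernel, and is fitted to Euclidean data `D` with errors `sigma`, both of which are assumed to be given as tensors of length `N_tau`.

```python
import torch

N_tau, N_mu = 32, 500
tau = torch.arange(1, N_tau + 1, dtype=torch.float64)
omega = torch.linspace(0.0, 25.0, N_mu, dtype=torch.float64)
K = torch.exp(-tau[:, None] * omega[None, :])     # zero temperature kernel
d_omega = omega[1] - omega[0]

net = torch.nn.Sequential(                        # omega -> rho(omega)
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Softplus(),  # enforces positivity
).double()

def loss(D, sigma):
    rho = net(omega[:, None]).squeeze(-1)
    D_model = (K * rho[None, :]).sum(dim=1) * d_omega  # int dmu K(tau,mu) rho(mu)
    return 0.5 * (((D_model - D) / sigma) ** 2).sum()

# training loop, assuming data D and errors sigma are given
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    l = loss(D, sigma)
    l.backward()
    opt.step()
```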
## 6 Summary and Conclusion

Progress in modern high-energy nuclear physics depends on first-principles knowledge of QCD dynamics, be it in the form of transport properties of quarks and gluons at high temperatures or the phase-space distributions of partons inside nucleons at low temperatures. Lattice QCD offers non-perturbative access to these quantities but, due to its formulation in imaginary time, hides them behind an ill-posed inverse problem. The inverse problem is most succinctly stated in terms of a spectral decomposition, where the Euclidean correlator accessible on the lattice is expressed as an integral over a spectral function multiplied by an analytic kernel. The real-time information of interest can often be read off directly from the structures occurring in the spectral function. The determination of PDFs from the hadronic tensor and via pseudo-PDFs can be formulated in terms of a similar inversion problem.

Bayesian inference provides a versatile tool set for the reconstruction of spectral functions. It gives meaning to the ill-posed inverse problem by incorporating relevant domain knowledge, with an associated uncertainty budget, through the prior probability distribution. Evaluating the posterior distribution, defined through Bayes' theorem, gives access to the most probable values of the spectral function based on simulation data and prior knowledge. In addition it also encodes the full uncertainty budget through its spread. Traditionally, predominantly MAP point estimates were computed, due to the lower computational cost of the corresponding optimization problem compared to full Monte-Carlo sampling of the posterior. In that case information about the uncertainty budget is hidden from the user and must be estimated manually. Several relevant challenges for uncertainty estimation in the lattice QCD context were discussed, including the problem of ringing and those related to comparing reconstructions based on different Euclidean time extents.

A brief user guide described how to run the two open-access codes accompanying this publication. One focuses on the determination of MAP point estimates based on the BR and MEM priors. The other utilizes a modern Monte-Carlo library to sample from the full BR posterior. Last but not least, a brief look was taken at machine learning approaches to spectral function reconstruction. The need for providing prior information was discussed, and a common challenge among all reconstruction approaches, information scarcity in the input data, was pointed out. Two avenues for combining the machine-learning viewpoint with the Bayesian strategy were touched upon. With the concrete conceptual and technical discussions contained in this publication, the reader is equipped with a solid basis to carry out Bayesian spectral reconstructions. The provided open-access source codes offer a quick entry into the research field and can be modified according to different needs with regard to the kernels arising in different lattice QCD studies.

## Acknowledgements

The author gladly acknowledges support by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. Some of the spectral function reconstruction code has been developed in the context of the project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions" funded by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway.

## Competing interests

The author declares that he has no competing interests.

## Author's contributions

* • A. Rothkopf: conception, literature study, code development, writing, editing.

## References

* (1) Guenther, J.N.: Overview of the QCD phase diagram: Recent progress from the lattice. Eur. Phys. J. A 57(4), 136 (2021). doi:10.1140/epja/s10050-021-00354-6. 2010.15503 * (2) Fukushima, K., Hatsuda, T.: The phase diagram of dense QCD. Rept. Prog. Phys. 74, 014001 (2011). doi:10.1088/0034-4885/74/1/014001. 1005.4814 * (3) Borsanyi, S., Fodor, Z., Giordano, M., Guenther, J.N., Katz, S.D., Pasztor, A., Wong, C.H.: Equation of state of a hot-and-dense quark gluon plasma: lattice simulations at real $\mu_{B}$ vs. extrapolations (2022). 2208.05398 * (4) Bazavov, A., et al.: The QCD Equation of State to $\mathcal{O}(\mu_{B}^{6})$ from Lattice QCD. Phys. Rev. D 95(5), 054504 (2017). doi:10.1103/PhysRevD.95.054504. 1701.04325 * (5) Borsanyi, S., Fodor, Z., Guenther, J.N., Kara, R., Parotto, P., Pasztor, A., Ratti, C., Szabo, K.K.: Resummed lattice QCD equation of state at finite baryon density: Strangeness neutrality and beyond. Phys. Rev. D 105(11), 114504 (2022). doi:10.1103/PhysRevD.105.114504. 2202.05574 * (6) Busza, W., Rajagopal, K., van der Schee, W.: Heavy Ion Collisions: The Big Picture, and the Big Questions. Ann. Rev. Nucl. Part. Sci. 68, 339–376 (2018). doi:10.1146/annurev-nucl-101917-020852. 1802.04801 * (7) Kojo, T.: QCD equations of state and speed of sound in neutron stars. AAPPS Bull. 31(1), 11 (2021). doi:10.1007/s43673-021-00011-6. 2011.10940 * (8) Pasechnik, R., Šumbera, M.: Phenomenological Review on Quark–Gluon Plasma: Concepts vs. Observations. Universe 3(1), 7 (2017). doi:10.3390/universe3010007. 1611.01533 * (9) Bazavov, A., Petreczky, P., Weber, J.H.: Equation of State in 2+1 Flavor QCD at High Temperatures. Phys. Rev.
D 97(1), 014510 (2018). doi:10.1103/PhysRevD.97.014510. 1710.05024 * (10) Borsanyi, S., et al.: Calculation of the axion mass based on high-temperature lattice quantum chromodynamics. Nature 539(7627), 69–71 (2016). doi:10.1038/nature20115. 1606.07494 * (11) Bazavov, A., et al.: Equation of state in ( 2+1 )-flavor QCD. Phys. Rev. D 90, 094503 (2014). doi:10.1103/PhysRevD.90.094503. 1407.6387 * (12) Burger, F., Ilgenfritz, E.-M., Lombardo, M.P., Müller-Preussker, M.: Equation of state of quark-gluon matter from lattice QCD with two flavors of twisted mass Wilson fermions. Phys. Rev. D 91(7), 074504 (2015). doi:10.1103/PhysRevD.91.074504. 1412.6748 * (13) Borsanyi, S., Fodor, Z., Hoelbling, C., Katz, S.D., Krieg, S., Szabo, K.K.: Full result for the QCD equation of state with 2+1 flavors. Phys. Lett. B 730, 99–104 (2014). doi:10.1016/j.physletb.2014.01.007. 1309.5258 * (14) Jaiswal, A., Roy, V.: Relativistic hydrodynamics in heavy-ion collisions: general aspects and recent developments. Adv. High Energy Phys. 2016, 9623034 (2016). doi:10.1155/2016/9623034. 1605.08694 * (15) Klein, M., Yoshida, R.: Collider Physics at HERA. Prog. Part. Nucl. Phys. 61, 343–393 (2008). doi:10.1016/j.ppnp.2008.05.002. 0805.3334 * (16) d’Enterria, D., et al.: The strong coupling constant: State of the art and the decade ahead (2022). 2203.08271 * (17) Aoki, Y., et al.: FLAG Review 2021 (2021). 2111.09849 * (18) Lin, Z.-W., Ko, C.M., Li, B.-A., Zhang, B., Pal, S.: A Multi-phase transport model for relativistic heavy ion collisions. Phys. Rev. C 72, 064901 (2005). doi:10.1103/PhysRevC.72.064901. nucl-th/0411110 * (19) Petersen, H., Steinheimer, J., Burau, G., Bleicher, M., Stöcker, H.: A Fully Integrated Transport Approach to Heavy Ion Reactions with an Intermediate Hydrodynamic Stage. Phys. Rev. C 78, 044901 (2008). doi:10.1103/PhysRevC.78.044901. 0806.1695 * (20) Bratkovskaya, E.L., Cassing, W., Konchakovski, V.P., Linnyk, O.: Parton-Hadron-String Dynamics at Relativistic Collider Energies. Nucl. Phys. A 856, 162–182 (2011). doi:10.1016/j.nuclphysa.2011.03.003. 1101.5793 * (21) Cao, S., Wang, X.-N.: Jet quenching and medium response in high-energy heavy-ion collisions: a review. Rept. Prog. Phys. 84(2), 024301 (2021). doi:10.1088/1361-6633/abc22b. 2002.04028 * (22) Rothkopf, A.: Heavy Quarkonium in Extreme Conditions. Phys. Rept. 858, 1–117 (2020). doi:10.1016/j.physrep.2020.02.006. 1912.02253 * (23) Abdul Khalek, R., et al.: Snowmass 2021 White Paper: Electron Ion Collider for High Energy Physics. In: 2022 Snowmass Summer Study (2022). 2203.13199 * (24) Alexandrou, C., Bacchio, S., Constantinou, M., Finkenrath, J., Hadjiyiannakou, K., Jansen, K., Koutsou, G., Panagopoulos, H., Spanoudes, G.: Complete flavor decomposition of the spin and momentum fraction of the proton using lattice QCD simulations at physical pion mass. Phys. Rev. D 101(9), 094513 (2020). doi:10.1103/PhysRevD.101.094513. 2003.08486 * (25) Wang, G., Yang, Y.-B., Liang, J., Draper, T., Liu, K.-F.: Proton momentum and angular momentum decompositions with overlap fermions. Phys. Rev. D 106(1), 014512 (2022). doi:10.1103/PhysRevD.106.014512. 2111.09329 * (26) Meissner, S., Metz, A., Schlegel, M.: Generalized parton correlation functions for a spin-1/2 hadron. JHEP 08, 056 (2009). doi:10.1088/1126-6708/2009/08/056. 0906.5323 * (27) Constantinou, M., et al.: Lattice QCD Calculations of Parton Physics (2022). 2202.07193 * (28) Ji, X.: Parton Physics on a Euclidean Lattice. Phys. Rev. Lett. 110, 262002 (2013). doi:10.1103/PhysRevLett.110.262002. 
1305.1539 * (29) Radyushkin, A.V.: Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions. Phys. Rev. D 96(3), 034025 (2017). doi:10.1103/PhysRevD.96.034025. 1705.01488 * (30) Liu, K.-F., Dong, S.-J.: Origin of difference between anti-d and anti-u partons in the nucleon. Phys. Rev. Lett. 72, 1790–1793 (1994). doi:10.1103/PhysRevLett.72.1790. hep-ph/9306299 * (31) Schwartz, M.D.: Quantum Field Theory and the Standard Model. Cambridge University Press (2014) * (32) Dupuis, N., Canet, L., Eichhorn, A., Metzner, W., Pawlowski, J.M., Tissier, M., Wschebor, N.: The nonperturbative functional renormalization group and its applications. Phys. Rept. 910, 1–114 (2021). doi:10.1016/j.physrep.2021.01.001. 2006.04853 * (33) Blaizot, J.-P., Pawlowski, J.M., Reinosa, U.: Functional renormalization group and 2PI effective action formalism. Annals Phys. 431, 168549 (2021). doi:10.1016/j.aop.2021.168549. 2102.13628 * (34) Fischer, C.S.: Infrared properties of QCD from Dyson–Schwinger equations. Journal of Physics G: Nuclear and Particle Physics 32(8), 253 (2006) * (35) Roberts, C.D.: Strong QCD and Dyson-Schwinger Equations. IRMA Lect. Math. Theor. Phys. 21, 355–458 (2015). 1203.5341 * (36) Montvay, I., Münster, G.: Quantum Fields on a Lattice. Cambridge Monographs on Mathematical Physics. Cambridge University Press (1994). https://books.google.no/books?id=NHZshmEBXhcC * (37) Gattringer, C., Lang, C.B.: Quantum Chromodynamics on the Lattice vol. 788. Springer, Berlin (2010). doi:10.1007/978-3-642-01850-3 * (38) Gattringer, C., Langfeld, K.: Approaches to the sign problem in lattice field theory. Int. J. Mod. Phys. A 31(22), 1643007 (2016). doi:10.1142/S0217751X16430077. 1603.09517 * (39) Berger, C.E., Rammelmüller, L., Loheac, A.C., Ehmann, F., Braun, J., Drut, J.E.: Complex Langevin and other approaches to the sign problem in quantum many-body physics. Phys. Rept. 892, 1–54 (2021). doi:10.1016/j.physrep.2020.09.002. 1907.10183 * (40) Bellac, M.L.: Thermal Field Theory. Cambridge Monographs on Mathematical Physics. Cambridge University Press (2011). doi:10.1017/CBO9780511721700 * (41) Ghiglieri, J., Kurkela, A., Strickland, M., Vuorinen, A.: Perturbative Thermal QCD: Formalism and Applications. Phys. Rept. 880, 1–73 (2020). doi:10.1016/j.physrep.2020.07.004. 2002.10188 * (42) Meyer, H.B.: Transport Properties of the Quark-Gluon Plasma: A Lattice QCD Perspective. Eur. Phys. J. A 47, 86 (2011). doi:10.1140/epja/i2011-11086-3. 1104.3708 * (43) Shi, S., Wang, L., Zhou, K.: Rethinking the ill-posedness of the spectral function reconstruction - why is it fundamentally hard and how Artificial Neural Networks can help (2022). 2201.02564 * (44) Karpie, J., Orginos, K., Rothkopf, A., Zafeiropoulos, S.: Reconstructing parton distribution functions from Ioffe time data: from Bayesian methods to Neural Networks. JHEP 04, 057 (2019). doi:10.1007/JHEP04(2019)057. 1901.05408 * (45) Nakahara, Y., Asakawa, M., Hatsuda, T.: Hadronic spectral functions in lattice QCD. Phys. Rev. D 60, 091503 (1999). doi:10.1103/PhysRevD.60.091503. hep-lat/9905034 * (46) Asakawa, M., Hatsuda, T., Nakahara, Y.: Maximum entropy analysis of the spectral functions in lattice QCD. Prog. Part. Nucl. Phys. 46, 459–508 (2001). doi:10.1016/S0146-6410(01)00150-8. hep-lat/0011040 * (47) Jarrell, M., Gubernatis, J.E.: Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data. Physics Reports 269(3), 133–195 (1996). doi:10.1016/0370-1573(95)00074-7.
* (48) Skilling, J., Gull, S.F.: Bayesian maximum entropy image reconstruction. Lecture Notes-Monograph Series, 341–367 (1991) * (49) Yamazaki, T., Aoki, S., Burkhalter, R., Fukugita, M., Hashimoto, S., Ishizuka, N., Iwasaki, Y., Kanaya, K., Kaneko, T., Kuramashi, Y., Okawa, M., Taniguchi, Y., Ukawa, A., Yoshié, T.: Spectral function and excited states in lattice QCD with the maximum entropy method. Phys. Rev. D 65, 014501 (2001). doi:10.1103/PhysRevD.65.014501 * (50) Sasaki, K., Sasaki, S., Hatsuda, T.: Spectral analysis of excited nucleons in lattice QCD with maximum entropy method. Physics Letters B 623(3), 208–217 (2005). doi:10.1016/j.physletb.2005.07.026 * (51) Fiebig, H.R.: Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method. Phys. Rev. D 65, 094512 (2002). doi:10.1103/PhysRevD.65.094512 * (52) Liang, J., Draper, T., Liu, K.-F., Rothkopf, A., Yang, Y.-B.: Towards the nucleon hadronic tensor from lattice QCD. Phys. Rev. D 101(11), 114503 (2020). doi:10.1103/PhysRevD.101.114503. 1906.05312 * (53) Asakawa, M., Hatsuda, T.: J / psi and eta(c) in the deconfined plasma from lattice QCD. Phys. Rev. Lett. 92, 012001 (2004). doi:10.1103/PhysRevLett.92.012001. hep-lat/0308034 * (54) Datta, S., Karsch, F., Petreczky, P., Wetzorke, I.: Behavior of charmonium systems after deconfinement. Phys. Rev. D 69, 094507 (2004). doi:10.1103/PhysRevD.69.094507. hep-lat/0312037 * (55) Umeda, T., Nomura, K., Matsufuru, H.: Charmonium at finite temperature in quenched lattice QCD. Eur. Phys. J. C 39S1, 9–26 (2005). doi:10.1140/epjcd/s2004-01-002-1. hep-lat/0211003 * (56) Jakovac, A., Petreczky, P., Petrov, K., Velytsky, A.: Quarkonium correlators and spectral functions at zero and finite temperature. Phys. Rev. D 75, 014506 (2007). doi:10.1103/PhysRevD.75.014506. hep-lat/0611017 * (57) Aarts, G., Allton, C., Kim, S., Lombardo, M.P., Oktay, M.B., Ryan, S.M., Sinclair, D.K., Skullerud, J.I.: What happens to the $\Upsilon$ and $\eta_{b}$ in the quark-gluon plasma? Bottomonium spectral functions from lattice QCD. JHEP 11, 103 (2011). doi:10.1007/JHEP11(2011)103. 1109.4496 * (58) Aarts, G., Allton, C., Kim, S., Lombardo, M.P., Oktay, M.B., Ryan, S.M., Sinclair, D.K., Skullerud, J.-I.: S wave bottomonium states moving in a quark-gluon plasma from lattice NRQCD. JHEP 03, 084 (2013). doi:10.1007/JHEP03(2013)084. 1210.2903 * (59) Ding, H.T., Francis, A., Kaczmarek, O., Karsch, F., Satz, H., Soeldner, W.: Charmonium properties in hot quenched lattice QCD. Phys. Rev. D 86, 014509 (2012). doi:10.1103/PhysRevD.86.014509. 1204.4945 * (60) Aarts, G., Allton, C., Kim, S., Lombardo, M.P., Ryan, S.M., Skullerud, J.-I.: Melting of P wave bottomonium states in the quark-gluon plasma from lattice NRQCD. JHEP 12, 064 (2013). doi:10.1007/JHEP12(2013)064. 1310.5467 * (61) Aarts, G., Allton, C., Harris, T., Kim, S., Lombardo, M.P., Ryan, S.M., Skullerud, J.-I.: The bottomonium spectrum at finite temperature from Nf = 2 + 1 lattice QCD. JHEP 07, 097 (2014). doi:10.1007/JHEP07(2014)097. 1402.6210 * (62) Borsanyi, S., et al.: Charmonium spectral functions from 2+1 flavour lattice QCD. JHEP 04, 132 (2014). doi:10.1007/JHEP04(2014)132. 1401.5940 * (63) Kim, S., Petreczky, P., Rothkopf, A.: Lattice NRQCD study of S- and P-wave bottomonium states in a thermal medium with $N_{f}=2+1$ light flavors. Phys. Rev. D 91, 054511 (2015). doi:10.1103/PhysRevD.91.054511.
1409.3630 * (64) Ikeda, A., Asakawa, M., Kitazawa, M.: In-medium dispersion relations of charmonia studied by maximum entropy method. Phys. Rev. D 95(1), 014504 (2017). doi:10.1103/PhysRevD.95.014504. 1610.07787 * (65) Kelly, A., Rothkopf, A., Skullerud, J.-I.: Bayesian study of relativistic open and hidden charm in anisotropic lattice QCD. Phys. Rev. D 97(11), 114509 (2018). doi:10.1103/PhysRevD.97.114509. 1802.00667 * (66) Kim, S., Petreczky, P., Rothkopf, A.: Quarkonium in-medium properties from realistic lattice NRQCD. JHEP 11, 088 (2018). doi:10.1007/JHEP11(2018)088. 1808.08781 * (67) Gubler, P., Morita, K., Oka, M.: Charmonium spectra at finite temperature from QCD sum rules with the maximum entropy method. Phys. Rev. Lett. 107, 092003 (2011). doi:10.1103/PhysRevLett.107.092003. 1104.4436 * (68) Araki, K.-J., Ohtani, K., Gubler, P., Oka, M.: QCD sum rules on the complex Borel plane. PTEP 2014, 073–03 (2014). doi:10.1093/ptep/ptu092. 1403.6299 * (69) Meyer, H.B.: A Calculation of the shear viscosity in SU(3) gluodynamics. Phys. Rev. D 76, 101701 (2007). doi:10.1103/PhysRevD.76.101701. 0704.1801 * (70) Meyer, H.B.: A Calculation of the bulk viscosity in SU(3) gluodynamics. Phys. Rev. Lett. 100, 162001 (2008). doi:10.1103/PhysRevLett.100.162001. 0710.3717 * (71) Aarts, G., Allton, C., Foley, J., Hands, S., Kim, S.: Spectral functions at small energies and the electrical conductivity in hot, quenched lattice QCD. Phys. Rev. Lett. 99, 022002 (2007). doi:10.1103/PhysRevLett.99.022002. hep-lat/0703008 * (72) Ding, H.-T., Francis, A., Kaczmarek, O., Karsch, F., Laermann, E., Soeldner, W.: Thermal dilepton rate and electrical conductivity: An analysis of vector current correlation functions in quenched lattice QCD. Phys. Rev. D 83, 034504 (2011). doi:10.1103/PhysRevD.83.034504. 1012.4963 * (73) Aarts, G., Allton, C., Amato, A., Giudice, P., Hands, S., Skullerud, J.-I.: Electrical conductivity and charge diffusion in thermal QCD from the lattice. JHEP 02, 186 (2015). doi:10.1007/JHEP02(2015)186. 1412.6411 * (74) Amato, A., Aarts, G., Allton, C., Giudice, P., Hands, S., Skullerud, J.-I.: Electrical conductivity of the quark-gluon plasma across the deconfinement transition. Phys. Rev. Lett. 111(17), 172001 (2013). doi:10.1103/PhysRevLett.111.172001. 1307.6763 * (75) Rothkopf, A., Hatsuda, T., Sasaki, S.: Complex Heavy-Quark Potential at Finite Temperature from Lattice QCD. Phys. Rev. Lett. 108, 162001 (2012). doi:10.1103/PhysRevLett.108.162001. 1108.1579 * (76) Burnier, Y., Kaczmarek, O., Rothkopf, A.: Static quark-antiquark potential in the quark-gluon plasma from lattice QCD. Phys. Rev. Lett. 114(8), 082001 (2015). doi:10.1103/PhysRevLett.114.082001. 1410.2546 * (77) Burnier, Y., Kaczmarek, O., Rothkopf, A.: Quarkonium at finite temperature: Towards realistic phenomenology from first principles. JHEP 12, 101 (2015). doi:10.1007/JHEP12(2015)101. 1509.07366 * (78) Burnier, Y., Rothkopf, A.: Complex heavy-quark potential and Debye mass in a gluonic medium from lattice QCD. Phys. Rev. D 95(5), 054511 (2017). doi:10.1103/PhysRevD.95.054511. 1607.04049 * (79) Lepage, G.P., Clark, B., Davies, C.T.H., Hornbostel, K., Mackenzie, P.B., Morningstar, C., Trottier, H.: Constrained curve fitting. Nucl. Phys. B Proc. Suppl. 106, 12–20 (2002). doi:10.1016/S0920-5632(01)01638-3. hep-lat/0110175 * (80) Burnier, Y., Ding, H.-T., Kaczmarek, O., Kruse, A.-L., Laine, M., Ohno, H., Sandmeyer, H.: Thermal quarkonium physics in the pseudoscalar channel. JHEP 11, 206 (2017). doi:10.1007/JHEP11(2017)206. 
1709.07612 * (81) McElreath, R.: Statistical Rethinking: A Bayesian Course with Examples in R and Stan, 2nd edn. CRC Press (2020). http://xcelab.net/rm/statistical-rethinking/ * (82) Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, Berlin, Heidelberg (2006) * (83) Cyrol, A.K., Pawlowski, J.M., Rothkopf, A., Wink, N.: Reconstructing the gluon. SciPost Phys. 5(6), 065 (2018). doi:10.21468/SciPostPhys.5.6.065. 1804.00945 * (84) Tikhonov, A.N.: On the stability of inverse problems. Dokl. Akad. Nauk SSSR 39, 195–198 (1943) * (85) Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D Nonlinear Phenomena 60(1-4), 259–268 (1992). doi:10.1016/0167-2789(92)90242-F * (86) Bardsley, J.M.: Laplace-distributed increments, the Laplace prior, and edge-preserving regularization. Journal of Inverse and Ill-Posed Problems 20(3), 271–285 (2012) * (87) Fischer, C.S., Pawlowski, J.M., Rothkopf, A., Welzbacher, C.A.: Bayesian analysis of quark spectral properties from the Dyson-Schwinger equation. Phys. Rev. D 98(1), 014009 (2018). doi:10.1103/PhysRevD.98.014009. 1705.03207 * (88) Event Horizon Telescope Collaboration et al.: First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. ApJ Letters 875(1), 1 (2019). doi:10.3847/2041-8213/ab0ec7. 1906.11238 * (89) Hobson, M., Lasenby, A.: The entropic prior for distributions with positive and negative values. Mon. Not. Roy. Astron. Soc. 298, 905 (1998). doi:10.1046/j.1365-8711.1998.01707.x. astro-ph/9810240 * (90) Haas, M., Fister, L., Pawlowski, J.M.: Gluon spectral functions and transport coefficients in Yang–Mills theory. Phys. Rev. D 90, 091501 (2014). doi:10.1103/PhysRevD.90.091501. 1308.4960 * (91) Rothkopf, A.: Bayesian inference of nonpositive spectral functions in quantum field theory. Phys. Rev. D 95(5), 056016 (2017). doi:10.1103/PhysRevD.95.056016. 1611.00482 * (92) Narayan, R., Nityananda, R.: Maximum entropy image restoration in astronomy. Annual review of astronomy and astrophysics 24(1), 127–170 (1986) * (93) Burnier, Y., Rothkopf, A.: Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories. Phys. Rev. Lett. 111, 182003 (2013). doi:10.1103/PhysRevLett.111.182003. 1307.6106 * (94) Mishchenko, A., Prokof'ev, N., Sakamoto, A., Svistunov, B.: Diagrammatic quantum Monte Carlo study of the Fröhlich polaron. Physical Review B 62(10), 6317 (2000) * (95) Ding, H.-T., Kaczmarek, O., Mukherjee, S., Ohno, H., Shu, H.-T.: Stochastic reconstructions of spectral functions: Application to lattice QCD. Physical Review D 97(9), 094503 (2018) * (96) Shao, H., Sandvik, A.W.: Progress on stochastic analytic continuation of quantum Monte Carlo data (2022). 2202.09870 * (97) Beach, K.: Identifying the maximum entropy method as a special limit of stochastic analytic continuation. arXiv preprint cond-mat/0403055 (2004) * (98) Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes 3rd Edition: The Art of Scientific Computing, 3rd edn. Cambridge University Press, USA (2007) * (99) Efron, B., Tibshirani, R.J.: An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability, vol. 57. Chapman & Hall/CRC, Boca Raton, Florida, USA (1993) * (100) Jeffreys, H.: An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 186(1007), 453–461 (1946).
doi:10.1098/rspa.1946.0056. https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1946.0056 * (101) Bryan, R.K.: Maximum entropy analysis of oversampled data problems. European Biophysics Journal 18(3), 165–174 (1990). doi:10.1007/BF02427376. Number: 3 * (102) Rothkopf, A.: Improved Maximum Entropy Analysis with an Extended Search Space. J. Comput. Phys. 238, 106–114 (2013). doi:10.1016/j.jcp.2012.12.023. 1110.6285 * (103) Rothkopf, A.: Improved Maximum Entropy Method with an Extended Search Space. PoS LATTICE2012, 100 (2012). doi:10.22323/1.164.0100. 1208.5162 * (104) Rothkopf, A.: BR Method MAP (to be published after acceptance of the manuscript). Zenodo * (105) Rothkopf, A.: BR Method MCStan (to be published after acceptance of the manuscript). Zenodo * (106) Granlund, T., the GMP development team: GNU MP: The GNU Multiple Precision Arithmetic Library, 5.0.5 edn. (2012). http://gmplib.org/ * (107) Fousse, L., Hanrot, G., Lefèvre, V., Pélissier, P., Zimmermann, P.: Mpfr: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 33(2), 13 (2007). doi:10.1145/1236463.1236468 * (108) Ding, H.-T., Kaczmarek, O., Karsch, F., Satz, H., Soldner, W.: Charmonium correlators and spectral functions at finite temperature. PoS LAT2009, 169 (2009). doi:10.22323/1.091.0169. 0910.3098 * (109) Malouf, R.: A comparison of algorithms for maximum entropy parameter estimation. In: Proceedings of the 6th Conference on Natural Language Learning - Volume 20. COLING-02, pp. 1–7. Association for Computational Linguistics, USA (2002). doi:10.3115/1118853.1118871. https://doi.org/10.3115/1118853.1118871 * (110) Carpenter, B., Gelman, A., Hoffman, M.D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., Riddell, A.: Stan: A probabilistic programming language. Journal of statistical software 76(1) (2017) * (111) Stan Development Team: The Stan Core Library. Version 2.18.0 (2018). http://mc-stan.org/8 * (112) Burnier, Y., Rothkopf, A.: A hard thermal loop benchmark for the extraction of the nonperturbative $Q\bar{Q}$ potential. Phys. Rev. D 87, 114019 (2013). doi:10.1103/PhysRevD.87.114019. 1304.4154 * (113) Boyda, D., et al.: Applications of Machine Learning to Lattice Quantum Field Theory. In: 2022 Snowmass Summer Study (2022). 2202.05838 * (114) Offler, S., Aarts, G., Allton, C., Jäger, B., Kim, S., Lombardo, M.-P., Page, B., Ryan, S.M., Skullerud, J.-I., Spriggs, T.: Reconstruction of bottomonium spectral functions in thermal QCD using Kernel Ridge Regression. PoS LATTICE2021, 509 (2022). doi:10.22323/1.396.0509. 2112.02116 * (115) Spriggs, T., et al.: A comparison of spectral reconstruction methods applied to non-zero temperature NRQCD meson correlation functions. EPJ Web Conf. 258, 05011 (2022). doi:10.1051/epjconf/202225805011. 2112.04201 * (116) Fournier, R., Wang, L., Yazyev, O.V., Wu, Q.: Artificial neural network approach to the analytic continuation problem. Physical Review Letters 124(5), 056401 (2020) * (117) Kades, L., Pawlowski, J.M., Rothkopf, A., Scherzer, M., Urban, J.M., Wetzel, S.J., Wink, N., Ziegler, F.P.G.: Spectral Reconstruction with Deep Neural Networks. Phys. Rev. D 102(9), 096001 (2020). doi:10.1103/PhysRevD.102.096001. 1905.04305 * (118) Chen, S.-Y., Ding, H.-T., Liu, F.-Y., Papp, G., Yang, C.-B.: Machine learning spectral functions in lattice QCD (2021). 2110.13521 * (119) Wang, L., Shi, S., Zhou, K.: Automatic differentiation approach for reconstructing spectral functions with neural networks. 
In: 35th Conference on Neural Information Processing Systems (2021). 2112.06206 * (120) Horak, J., Pawlowski, J.M., Rodríguez-Quintero, J., Turnwald, J., Urban, J.M., Wink, N., Zafeiropoulos, S.: Reconstructing QCD spectral functions with Gaussian processes. Phys. Rev. D 105(3), 036014 (2022). doi:10.1103/PhysRevD.105.036014. 2107.13464
# Achieving Reliable Coordination of Residential Plug-in Electric Vehicle Charging: A Pilot Study

Polina Alexeenko, Eilyan Bitar

Polina Alexeenko and Eilyan Bitar are with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA. Emails: {pa357<EMAIL_ADDRESS>

###### Abstract

We report findings from a real-world pilot study exploring a novel pricing and control mechanism to coordinate residential EV charging loads. The proposed pricing mechanism presents EV owners with a “menu of deadlines” that offers lower electricity prices the longer they’re willing to delay their charging completion times. Given customers’ reported charging preferences, a smart charging system dynamically optimizes the power drawn by EVs in real time to minimize their collective strain on the grid while ensuring all EVs are charged by their user-requested deadlines. We find that customers allow their charging to be delayed by over eight hours on average. Using this flexibility, the smart charging system reliably eliminates demand spikes by reshaping EV loads to flatten the aggregate load curve. Importantly, customer participation rates remained stable throughout the study, providing evidence that the proposed mechanism is a viable “non-wires alternative” to meet the growing demand for electricity from EVs.

## 1 Introduction

The US transportation sector is on the brink of a major transformation. The internal combustion engine’s hold on American transportation is beginning to slip, and for the first time a vision of an all-electric vehicle (EV) future is coming into focus. Driven by declining battery costs (Nykvist and Nilsson, 2015; Lutsey and Nicholas, 2019; Bloomberg NEF, 2021), progressive policy (Fung, 2016; Grandoni et al., 2020; International Energy Agency, 2020), consumer demand (International Energy Agency, 2020; Daramy-Williams et al., 2019; Parker et al., 2021), and the world’s largest automakers (Colias, 2021), it’s a vision that may become a reality in a matter of decades. A recent study by the International Energy Agency anticipates that there will be between 140 and 245 million EVs on the road worldwide by 2030 (International Energy Agency, 2020). But even with this great momentum, the transition to an all-electric vehicle future won’t be possible without careful coordination with the power grid. If left unmanaged, the power demanded by many EVs charging at the same time in the evening will amplify existing peak loads (Quiros-Tortos et al., 2018), resulting in significant power losses (Clement-Nyns et al., 2010) and reduced power quality (Gruosso, 2016; Leou et al., 2013), and potentially exceeding the grid’s capacity to reliably meet demand (Muratori, 2018; Jardini et al., 2000; Gong et al., 2011; Hilshey et al., 2012; Powell et al., 2020). To accommodate the increase in load driven by the unmanaged charging of EVs, electric power utilities and grid operators would need to build new generators to produce enough power, and expand transmission and distribution infrastructure to deliver that power to the electric vehicles. In states like Texas and California, where EV adoption is growing rapidly, these enhancements to the grid infrastructure would potentially cost tens of billions of dollars, and could take decades to complete (Davidson et al., 2018). Timing matters.
While most EV owners typically begin charging their cars when they come home in the evening—when demand for electricity is peaking—their charging requirements are usually flexible in the sense that their EVs remain connected to their chargers long after they’ve completed charging. Providing concrete evidence for this claim, The EV Project—a nation-wide study spanning three years and involving more than 8000 EV owners—found that most EVs, when charging overnight, usually finished charging within three hours of plugging in, but remained connected to their chargers for an average of twelve hours (Smart and Salisbury, 2015). In other words, most EV owners don’t need their cars charged immediately, but within a reasonable window of time before they expect to unplug and depart for their next trip. This finding suggests the possibility that some EV owners, being flexible in this way, might be willing to delay the time required to charge their cars given the right incentive.

### 1.1 Limitations of time-of-use pricing

In an effort to unlock this flexibility in EV charging, a number of electric power utilities have begun to offer their EV-owning residential customers time-of-use (TOU) rates, where the price of electricity varies over the course of the day according to a predetermined and fixed schedule, typically being cheapest during off-peak hours. As of September 2019, there were over fifty different TOU rates being offered by utilities to residential EV owners across the US (Myers et al., 2019). While this might seem like a reasonable approach, nondiscriminatory TOU rates are constrained in terms of their ability to attenuate the impact that EVs will have on peak load. (An electricity rate is defined to be _nondiscriminatory_ if all customers within a particular service class and territory are charged identical rates for their electricity consumption; price discrimination is prohibited by the Federal Energy Regulatory Commission to limit the possibility of inequity among customers (Eisen, 2015).) By offering customers lower prices to charge during hours of the day that are ordinarily off-peak, TOU rates can have an unintentional synchronizing effect on EV charging patterns. For example, a number of utility-run trials have observed a “timer peak” phenomenon in which many EV owners program their vehicles to begin charging simultaneously at the start of the off-peak pricing period (Smart and Salisbury, 2015). Related to this effect, several recent studies have shown that TOU rates can sharpen aggregate EV load profiles to such an extent that they accelerate distribution transformer aging more rapidly than unmanaged charging (Wu et al., 2011; Hilshey et al., 2012; Powell et al., 2020). Importantly, these issues can arise even at relatively low levels of aggregate EV penetration, because EV registrations tend to be geographically clustered (Idaho National Laboratory, 2013). TOU rates will ultimately fail to eliminate demand peaks driven by EV charging at scale. A greater degree of coordination is needed. To address these drawbacks of TOU rate designs, a number of alternative dynamic pricing mechanisms have been proposed in the literature, where the retail price of electricity is allowed to vary in proportion to the realized aggregate demand over the course of the day (Ma et al., 2011; Karfopoulos and Hatziargyriou, 2012; Tushar et al., 2012).
While such dynamic pricing schemes are shown to effectively ‘flatten’ the aggregate demand profile in theory, their effectiveness hinges on the assumption that individual EV owners will possess the capacity to solve nontrivial decision-making problems in which they correctly anticipate the impact of their charging control strategies on the determination of prices. Another practical shortcoming of such dynamic pricing mechanisms is the risk they impose upon customers in the form of uncertainty about future prices and the subsequent payments that they must make. Because income levels are known to be negatively correlated with risk aversion (Grable, 2016), price uncertainty is likely to disproportionately impact lower income customers.

### 1.2 Direct control of EV charging

Direct load control is an alternative approach to managed EV charging which addresses many of the shortcomings of dynamic pricing mechanisms while offering additional benefits such as the potential for vehicle-to-grid (V2G) services. Under direct load control, a central coordinating authority (e.g., the utility) provides each participating customer with an incentive in exchange for the ability to directly control their EV charging subject to a set of constraints specified by the customer. As a result, the central coordinating authority can dynamically reshape EV charging profiles to optimize a variety of objectives, including minimizing the cost of energy (e.g., Jin et al., 2013; Jiang and Zhen, 2019), maximizing the integration of intermittent renewables (e.g., Honarmand et al., 2014; Szinai et al., 2020), or minimizing peak aggregate load (e.g., Gan et al., 2012; Zhang et al., 2017). There is a vast theoretical literature on direct control of electric vehicle charging; for example, Yang et al. (2015) and Wang et al. (2016) survey a wide variety of approaches to scheduling EV charging load considering various objectives, computational methods, and models of vehicle and customer behavior. While direct load control mechanisms have been studied extensively using theoretical models, there are relatively few real-world experimental studies to date evaluating their effectiveness. To more effectively manage EV charging loads on their distribution networks, some utilities and charging facility operators have begun to explore the possibility of directly controlling EV chargers in both residential (Quiros-Tortos et al., 2018; Bauman et al., 2016) and workplace charging environments (Lee et al., 2016; Chynoweth et al., 2014; Bohn and Glenn, 2016; Andersen et al., 2019). For example, in the ChargeTO Program (Bauman et al., 2016), the Adaptive Charging Network Project (Lee et al., 2016), and the ChargeForward program (Spencer et al., 2021), participating EV owners use a web or mobile interface to initiate a charging request—specifying when they need their EVs charged by—and a centralized smart-charging system actively manages the power being drawn by their EVs while ensuring that all EVs are fully charged by their user-specified departure times. An important drawback of these frameworks for managed EV charging is that they do not provide their users with explicit incentives to accurately report their expected departure times. As a result, users may underreport their departure times, preferring to have their EVs fully charged well in advance of their anticipated departure times.
Indeed, data from the Adaptive Charging Network Project reveals that users of their platform frequently underreport their departure times by more than several hours on average (Lee et al., 2019). Such a reduction in the charging flexibility provided by users constrains the extent to which their EV charging loads can be reshaped and shifted in time, which ultimately limits the effectiveness of managed EV charging programs and their capacity to minimize peak aggregate loads.

### 1.3 The OptimizEV Project

In this article, we report findings from the OptimizEV Project—a pilot study that tests a novel rate structure and technology for managed EV charging. As one of its primary objectives, OptimizEV seeks to maximize the charging flexibility procured from customers by offering them monetary incentives to delay the time required to charge their EVs. Each time a customer initiates a charging session, they are presented with a “menu of deadlines” that offers lower electricity prices the longer they’re willing to delay the time required to charge their EV. Using their smartphones, customers specify a desired state-of-charge and select a corresponding deadline by which their requested energy must be delivered. Given a collection of active charging requests, a smart-charging system dynamically adjusts the power being drawn by each participating EV in real time to minimize their collective impact on peak system load, while simultaneously ensuring that every customer’s car is charged up to the desired level by the requested deadline. Customers get their energy when they need it, and the smart-charging system optimally coordinates the delivery of that energy to limit demand spikes. In 2019, 34 plug-in EV owners residing in Tompkins County, New York were recruited to participate in the OptimizEV Project on a voluntary first-come, first-served basis (we refer the reader to Appendix A.4 for the demographic information of project participants). A level 2 charging station with cellular communication and control capabilities was installed in the home of each project participant. The project was divided into two phases: an initial two-month _unmanaged charging phase_ (Phase I), from January 1, 2020 to February 29, 2020, during which baseline EV charging data was collected while project participants were exposed to the preexisting flat electricity rate. This was followed by a fifteen-month _managed charging phase_ (Phase II), from March 8, 2020 to May 31, 2021, during which all project participants were required to initiate charging sessions using the OptimizEV platform. Project participants were taught to use the OptimizEV platform during the transition week between Phase I and Phase II. During Phase I of the OptimizEV Project, the unmanaged charging patterns of project participants frequently resulted in a substantial increase in peak aggregate load. The user charging data collected over the course of this project also sheds light on several undesirable effects that TOU pricing may have on EV load patterns. We find that when many of the participating EV owners respond to the TOU price signal by delaying their charging until the start of the off-peak pricing period, the synchronization of EV power demand that results when many EVs start charging simultaneously induces new aggregate demand peaks in the middle of the night that are sometimes sharper and larger in magnitude than the unmanaged aggregate demand peaks that would have otherwise resulted in the absence of TOU rate-based incentives.
This suggests that at increased EV penetration levels, passive load coordination mechanisms like TOU pricing will likely fail to attenuate the peak load associated with unmanaged charging, and may make matters worse. During the managed charging phase of the project, the proposed incentive and control mechanism was highly effective in shifting the majority of EV charging loads off-peak into the night-time valley of the aggregate load curve. In particular, the increase in peak load driven by EV charging was significantly reduced, if not entirely eliminated, on a majority of days during the fifteen-month span of the project. The observed efficacy of the mechanism is due in part to high customer participation rates that remained stable over the course of the pilot—customers frequently engaged in optimized charging sessions, allowing the smart charging system to delay the completion of their charging by nine hours on average. Interestingly, we also find that the majority of sessions in which users decide to opt out of controlled charging are typically characterized by inflexible charging requirements—revealing that users are less likely to opt out if they do in fact have flexibility to offer. The sustained user engagement over the course of the pilot not only confirms the presence of substantial flexibility in real-world residential EV charging loads, but also demonstrates that EV owners are frequently willing to cede control of this flexibility to a smart charging system given a modest monetary incentive. Collectively, these observations provide concrete evidence in support of the proposed incentive and control mechanism as a potentially viable “non-wires alternative” to support the increased demand for electricity driven by the growing adoption of EVs.

## 2 Flexibility-differentiated pricing of EV charging services

The underlying premise of the OptimizEV Project is that different customers have different needs when charging their electric vehicles. Some customers are flexible and may be willing to delay the time required to charge their EVs in exchange for a discounted electricity price, while other less flexible customers will pay the full electricity price (or perhaps a premium on that price) to have their EVs charged as quickly as possible. Consequently, if customers are presented with a menu of charging options that are differentiated according to the length of time required to charge their EVs (with more flexible charging options being offered at lower prices), then different customers will prefer to have their EVs charged with different degrees of flexibility. The flexibility procured from customers can, in turn, be utilized to optimally coordinate the charging patterns of EVs in a manner that minimizes the utility’s supply and delivery costs, while respecting customers’ stated charging preferences. Ultimately, the ability to better align the utility’s cost structure with the diverse preferences of customers, through this special form of product differentiation, has the potential to improve both supplier and customer welfare. (Of course, in order for such programs to be cost effective at scale, the benefit to the utility, in the form of reduced energy costs or avoided infrastructure costs, must exceed the program implementation and operational costs; these include administrative costs, the cost of the rate discounts paid out to participating customers, and the cost of the additional metering, communication, and control equipment required to perform direct EV load control.)
[Figure 1: four panels plotting power (kW) versus time (hours). (a) Charging at maximum power with no delay. (b) Charging at maximum power with partial delay. (c) Charging at minimum constant power. (d) Charging with time-varying power levels.]

Figure 1: Four examples of feasible EV charging profiles associated with (a) an inflexible charging request and (b)-(d) a flexible charging request. (a) EV charging profile induced by an inflexible charging request given by ($R=5$ kW, $E=20$ kWh, $d=4$ hours). Because this charging request results in a slack time of $s=0$ hours, the EV is charged without delay at its maximum rate until its deadline. (b)-(d) Three different EV charging profiles that satisfy a flexible charging request given by ($R=5$ kW, $E=20$ kWh, $d=10$ hours). This charging request has a slack time of $s=6$ hours.

As part of the OptimizEV Project, we price differentiate customers’ charging requests according to a standard measure of scheduling flexibility in queueing systems known as _slack time_. The slack time associated with a charging request is defined as the length of time between the minimum time required to supply the desired state-of-charge (when charging at the maximum rated power) and the maximum time that the customer is willing to wait to receive that energy. More formally, the _slack time_ $s$ (units: hours) of an EV charging request is a function of three user-specified parameters: (1) the amount of _energy_ $E$ (units: kilowatt-hours) that the customer would like to receive; (2) the maximum amount of time that the customer is willing to wait to receive their requested energy, which we refer to as their _deadline_ $d$ (units: hours); and (3) the _maximum rate_ $R$ (units: kilowatts) at which the customer’s EV can be charged. It follows that the slack time associated with a charging request can be expressed as

$s = d - \frac{E}{R}, \qquad (1)$

where $E/R$ equals the minimum amount of time required to supply $E$ kilowatt-hours to an EV at a maximum charging power of $R$ kilowatts. Equation (1) offers an intuitive interpretation of slack time as the extra amount of time available to complete a charging request. It follows that a charging request with zero slack ($s=0$) is completely inflexible, since the corresponding EV must be charged at maximum power from start to finish in order to deliver the customer’s desired energy by their deadline. Figure 1(a) depicts an inflexible charging profile induced by a charging request with zero slack time. By contrast, a charging request with positive slack ($s>0$) can be satisfied using one of a wide variety of charging profiles, some of which are illustrated in Figures 1(b)-1(d). The greater the slack time in a charging request, the greater its flexibility. In exchange for their willingness to delay the time required to charge their EVs, customers receive a discount on the total cost of electricity required to charge their EVs given by

$\text{EV Charging Discount} = p(s) \times E. \qquad (2)$

Here, $s$ denotes the slack time associated with a particular customer’s charging request ($E$, $d$, $R$), and $p(s)$ (units: $/kilowatt-hour) denotes the corresponding discount (per unit of requested energy) given to that customer.
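As a concrete numerical check of Equation (1), consider the two charging requests from Figure 1. The following minimal Python sketch (the function name is ours; the parameter values are taken from the figure caption) computes their slack times.

```python
def slack_time(E, d, R):
    """Slack time (hours) of a charging request per Equation (1):
    the extra time available beyond the E/R hours needed when
    charging continuously at the maximum rate R."""
    return d - E / R

# Inflexible request of Figure 1(a): no scheduling freedom.
print(slack_time(E=20.0, d=4.0, R=5.0))   # 0.0 hours
# Flexible request of Figures 1(b)-(d): six hours of freedom.
print(slack_time(E=20.0, d=10.0, R=5.0))  # 6.0 hours
```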
To incentivize flexibility in the charging requests made by customers, the proposed _slack-differentiated discount mechanism_ is structured so that (i) completely inflexible charging requests with zero slack receive no discount, i.e., $p(0)=0$; and (ii) charging requests with larger slack times are rewarded with larger discounts, i.e., $p(s)\geq p(s^{\prime})$ for all $s\geq s^{\prime}$. We also note that customers are not permitted to make charging requests with negative slack ($s<0$), since such requests are impossible to satisfy by their deadlines.

[Figure 2: plot of the discount $p(s)$ ($/kWh) versus slack time (hours).]

Figure 2: A graphical illustration of the slack-differentiated discount mechanism used in the OptimizEV Project. The discount $p(s)$ (per unit of requested energy) given to customers increases linearly from $p(0)=0$ ¢/kWh to the maximum available discount of $p^{\rm max}=4.3$ ¢/kWh at a maximum slack time of $s^{\rm max}=10$ hours. While customers are permitted to make charging requests with slack times exceeding $s^{\rm max}$, they are not rewarded for providing additional flexibility beyond this ten-hour threshold.

Apart from these requirements, the structure of the proposed discount mechanism is highly versatile and can be adapted to reflect the value of charging flexibility (slack time) as determined by the particular cost structure being optimized by the utility. For example, the discounts can be chosen to reflect expected distribution-level infrastructure costs that can be avoided through optimized charging using the flexibility provided by customers. In the OptimizEV Project, specifically, the slack-differentiated discount mechanism $p(s)$ is structured as a piecewise-linear function of the slack time $s$ (depicted in Figure 2). The maximum discount (per unit of requested energy) available to customers under this mechanism was chosen to equal the delivery rate paid by residential electricity customers in NYSEG’s territory at the time of the pilot. (An electric utility’s delivery rate reflects costs associated with the infrastructure required to transport and distribute energy, e.g., transformers, feeders, and substation hardware.)

## 3 Optimized delivery of flexible EV charging services

Given a collection of active charging requests, an effective smart-charging system must coordinate the charging patterns of the corresponding EVs to minimize their aggregate contribution to peak system load, while ensuring that every EV is charged by its requested deadline. This coordination problem is complicated by the presence of many sources of uncertainty, including uncertainty in the timing and nature of future charging requests, unexpected variations in total system load, fluctuating wholesale electricity prices, unanticipated EV charging characteristics, and failures in the communication network being used for control.

(a) Landing page. (b) Charging preferences page. (c) Charging session status page.

Figure 3: User interface of the OptimizEV system. Subfigure (a) shows the landing page, where customers select whether to engage in a controlled or uncontrolled charging session. Subfigure (b) shows the preferences page, where customers select their controlled charging session preferences. Subfigure (c) shows the status page, where customers can see information about an ongoing charging session (e.g., which preferences were selected, the session’s duration, and how much energy has been delivered).
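Returning to the discount schedule of Figure 2, the following is a minimal sketch of the piecewise-linear mechanism (a hypothetical implementation consistent with the parameters stated in the caption: $p^{\rm max}=4.3$ ¢/kWh reached at $s^{\rm max}=10$ hours; requests with negative slack are assumed to be rejected before pricing).

```python
P_MAX = 0.043  # maximum discount in $/kWh (4.3 cents/kWh)
S_MAX = 10.0   # slack time (hours) at which the maximum discount is reached

def discount_rate(s):
    """Piecewise-linear slack-differentiated discount p(s) in $/kWh:
    zero at s = 0, rising linearly to P_MAX at S_MAX, and flat beyond."""
    if s < 0:
        raise ValueError("charging requests with negative slack are infeasible")
    return P_MAX * min(s, S_MAX) / S_MAX

# Discount of Equation (2) for the flexible request of Figure 1
# (E = 20 kWh, s = 6 hours):
print(discount_rate(6.0) * 20.0)  # $0.516
```

Under this schedule, the flexible request of Figure 1 earns a discount of about 52 cents, while the inflexible request earns nothing.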
In order to effectively adapt to changing system conditions in real time, the smart-charging system developed for the OptimizEV Project utilizes a model predictive control approach (Camacho and Alba, 2013) to continuously re-optimize the active EVs’ charging profiles every minute of the day. At a high level, the OptimizEV smart-charging system functions as follows:

1. _Customer Inputs:_ To initiate a charging session, a customer begins by connecting their EV to their charging station and logging into the OptimizEV user interface (UI) using their smartphone. The UI (depicted in Figure 3) allows customers to make charging requests and monitor the progress of their ongoing charging sessions. When making a charging request, a customer must provide their EV’s present state-of-charge (SOC), in addition to providing their desired state-of-charge and completion deadline. The difference between their desired SOC and present SOC is used to calculate their energy requirement. Customers can also choose to opt out of optimized charging at any time prior to or during an ongoing charging session. To limit their “interaction costs” with the UI, customers can also preset default charging preferences, bypassing the need to manually input these preferences in subsequent charging sessions.

2. _Data Acquisition:_ Every minute, the smart-charging system collects information from all newly and previously connected EVs that are actively being charged. These measurements include connection status, current state-of-charge, energy consumption, and the actual power drawn by active EVs during the previous one-minute time interval.

3. _Computation:_ This information is passed to an optimization algorithm, which determines optimal charging profiles for all active EVs. These optimal charging profiles are designed to collectively minimize peak aggregate load, while respecting the individual constraints associated with each EV, e.g., charging completion deadlines, charging rate constraints, and battery charging dynamics.

4. _Control:_ The optimized charging profiles are then transmitted to each EV’s charging station as a sequence of time-varying power commands that each EV is instructed to track. The EVs adjust their charging rates to track the updated power commands as closely as possible, while respecting the safe operating limits specified by their battery management systems. We use the SAE J1772 standard for level-2 charging stations to dynamically control each vehicle’s charging rate.

5. _Repeat:_ This data acquisition, computation, and control process repeats periodically every minute to enable real-time adaptation to changing system conditions and unexpected EV charging characteristics, while ensuring the complete satisfaction of all customers’ charging requests and constraints.

We refer the reader to Appendix A.2 for a detailed description of the underlying optimization model and the model predictive control algorithm used to repeatedly optimize the EV charging profiles in the manner described above. In Appendix A.3, we discuss a number of practical challenges encountered when deploying the proposed real-time scheduling algorithm in a real-world setting.

(a) Weekday unmanaged charging patterns. (b) Weekend unmanaged charging patterns.

Figure 4: Unmanaged charging patterns during Phase I (January 1, 2020 to February 29, 2020) of the OptimizEV Project for weekdays (left column) and weekends (right column).
The top figures depict kernel density estimates of session start times (solid yellow), session end times (dotted purple), and the times at which charging is finished (dashed green). The bottom figures provide plots of aggregate unmanaged EV charging patterns during Phase I. The baseline (non-EV) load is depicted as a solid black curve. The median EV charging load is shown as a solid red curve, the interdecile range of the EV charging load is shaded in dark red, and the range between the maximum and minimum EV charging load is shaded in light red.

## 4 Unmanaged charging patterns and latent flexibility

During Phase I of the OptimizEV Project (January 1, 2020 to February 29, 2020), all customers participated in unmanaged charging: their EVs were charged at maximum power whenever connected to their in-home charging stations, and charging was not delayed or controlled. During this time, we continuously monitored and recorded the connection status and power drawn by each user’s EV at regular one-minute intervals. In our analysis, we make use of EV connection and disconnection times as surrogates for user arrival and departure times to and from their homes. In Figure 4, we summarize several key attributes of unmanaged charging patterns revealed by these data. Using the data collected during the unmanaged charging phase of the project, we are able to empirically estimate distributions over user plug-in times, unplug times, and charging completion times, which are depicted in the upper plots of Figures 4(a) and 4(b). Unsurprisingly, these distributions reveal that users typically plug in and start charging in the late afternoon and early evening, and typically unplug the following morning, presumably departing for their next trip. The resulting concentration of unmanaged EV charging loads in the evening is shown to manifest in a substantial amplification of peak load. The maximum and median increases in weekday peak loads are observed to be 47 kW (39% of the baseline peak) and 18 kW (15% of the baseline peak), respectively. The impact of unmanaged EV charging patterns on peak load is less pronounced on weekends, as user plug-in times are more dispersed, which results in a more even spreading of EV charging loads across time. (The baseline (non-EV) load profile used in our analysis is based on load data taken from a primarily residential distribution circuit in Tompkins County, NY, which we re-scaled to reflect an aggregate demand profile associated with approximately 60 households; we refer the reader to Appendix A.1 for a more detailed description of how the baseline load profile is constructed.) We also note that, while there is significant heterogeneity in the day-to-day connection patterns across different users, the aggregate behavior of users is more predictable. As can be seen in Figure 5, the total number of EVs connected to the grid as a function of time exhibits a clear diurnal pattern, with the total number of EVs connected overnight averaging between 14 and 15 vehicles on weekdays and between 11 and 13 vehicles on weekends. The unmanaged charging data also reveals that users typically keep their EVs connected to their charging stations far longer than the amount of time required to fully charge their batteries. In particular, we find that EVs remained plugged in for 11 hours on average, while actively drawing power for only 2 hours on average.
This implies that charging requests have an average slack time of 9 hours—revealing the presence of a significant amount of latent flexibility in users’ charging requirements, and the potential to harness this flexibility to minimize the contribution of residential EV charging loads to evening peak demand.

Figure 5: User-level connection patterns during Phase I (January 1, 2020 to February 29, 2020) of the OptimizEV Project. Each row of the heatmap depicts the empirical probability that a particular user’s vehicle is connected during each minute of the week. The plot above the heatmap depicts the average, lower quartile, and upper quartile of the number of EVs connected to the grid throughout the week.

(a) Unmanaged. (b) OptimizEV. (c) TOU pricing.

Figure 6: Aggregate load profiles (a) simulated under unmanaged charging, (b) realized under managed (OptimizEV) charging, and (c) simulated under TOU pricing between March 9, 2020 and March 14, 2020. In each subfigure, the baseline load is depicted by a black curve, unmanaged EV loads are shown in red, managed (OptimizEV) loads are shown in green, and loads under TOU pricing are shown in blue.

(a) Unmanaged. (b) OptimizEV. (c) TOU pricing. (d) Unmanaged. (e) OptimizEV. (f) TOU pricing.

Figure 7: Daily charging patterns under unmanaged charging (left column), OptimizEV (middle column), and TOU pricing (right column). Subfigures (a)-(c) depict aggregate loads realized under each scenario on March 11, 2020. In each subfigure, the baseline load is depicted by a black curve, unmanaged EV loads are depicted in shades of red, optimized loads in shades of green, and loads responding to TOU pricing in shades of blue. Subfigures (d)-(f) depict the empirical range (lightly shaded region), interdecile range (darkly shaded region), and median (solid line) associated with the aggregate EV load data under each scenario during Phase II of the OptimizEV Project.

## 5 Flattening peak demand with optimized charging

During Phase II of the OptimizEV Project (March 8, 2020 to May 31, 2021), we offered all customers flexibility-differentiated pricing to incentivize delayed charging of their EVs. Utilizing the flexibility provided by customers, a smart charging system continuously optimizes customers’ EV charging profiles to flatten the resulting aggregate load profile, while simultaneously ensuring that all customers’ EVs are charged by their requested deadlines. Here, we describe the transformation of aggregate load patterns enabled by this flexibility-differentiated pricing and control mechanism. As a basis for comparison, we simulate unmanaged EV charging during Phase II of the project. We do so by generating an unmanaged charging profile for every charging session initiated during that time frame, where every EV is assumed to draw power at its maximum rate upon connecting to its charging station, until the energy requirement associated with its charging session is fulfilled. Figures 6(a) and 6(b) depict the aggregate load profiles induced by unmanaged charging and OptimizEV, respectively, between March 9 and March 14, 2020. Notice that unmanaged charging results in a substantial amplification of peak load on these days, while OptimizEV effectively redistributes the aggregate EV load to fill the nighttime valley in the baseline load profile. To better illustrate the contribution of individual EVs to the aggregate load profile, we disaggregate the EV load profile across different vehicles under unmanaged and optimized charging in Figures 7(a) and 7(b), respectively, on March 9, 2020.
From Figure 7(b), it can be seen that EVs participating in optimized charging sessions are typically delayed in time and are charged at lower power levels over longer stretches of time, as compared to their unmanaged counterparts depicted in Figure 7(a). It’s also worth noting that on this particular day, OptimizEV does not entirely eliminate the contribution of EVs to peak load, as a small subset of users do opt out of optimized charging, deciding instead to have their EVs charged at maximum power without delay. Interestingly, we find that participation rates in optimized charging sessions are strongly correlated with the underlying flexibility (slack) in users’ charging requirements, where users are more likely to opt out if they have little flexibility to offer. We discuss user participation rates and trends at more length in Section 7. Beyond these specific days, the OptimizEV mechanism is also shown to reliably flatten the aggregate load profile on a majority of days over the fifteen-month course of Phase II. To show this, we plot the empirical range, interdecile range, and median associated with the aggregate EV load data under unmanaged charging and OptimizEV during Phase II in Figures 7(d) and 7(e), respectively. As can be seen from these plots, unmanaged charging results in a median increase in peak demand of 6% (over the baseline peak of 120 kW), while OptimizEV results in a 0% median increase in peak demand. This contrast between unmanaged and optimized charging is even more pronounced when considering days in the bottom 90% of the aggregate load distribution, where unmanaged charging is shown to induce aggregate load profiles that increase peak demand by as much as 18%, while OptimizEV only results in a 6% increase in peak demand on these days. We also find that unmanaged charging results in aggregate loads which exceed the baseline peak for significantly longer periods of time than under OptimizEV. In particular, the load duration curves depicted in Figure 8 show that unmanaged charging induces aggregate loads that exceed the baseline peak by at least 20 kW for a total duration of approximately 40 hours over the course of Phase II. In comparison, under OptimizEV, aggregate loads exceed the baseline peak by at least 20 kW for only 20 minutes over that same fifteen-month time span. These findings confirm that unmanaged charging will likely result in considerable and sustained increases in peak demand, while direct control mechanisms that effectively harness the underlying flexibility in customers’ charging requirements can largely mitigate these impacts.

Figure 8: Load duration curves for three scenarios: unmanaged charging (red), OptimizEV (green), and TOU pricing (blue). Each colored curve depicts the number of minutes during which load was greater than or equal to a particular power level throughout Phase II of the OptimizEV Project under the corresponding scenario.

## 6 Unintended consequences of time-of-use pricing

Many electric power utilities and regulators have advocated for the use of nondiscriminatory time-of-use (TOU) electricity rates to minimize the impact of EV charging on peak demand. Under TOU rates, customers are exposed to a high electricity price during “on-peak” time periods, and a lower electricity price during the complementary “off-peak” time periods. TOU rates are designed with the intent of shifting a fraction of the on-peak electricity demand to off-peak periods to flatten the aggregate demand profile.
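Figure 8 above summarizes each scenario, including TOU pricing as examined in this section, with a load duration curve. As an implementation aside, such a curve amounts to sorting a minute-resolution aggregate load series in descending order, so that the value at index $m$ is the load level exceeded for at least $m$ minutes. A minimal numpy sketch (the function and array names are ours):

```python
import numpy as np

def load_duration_curve(load_kw):
    """Sort a minute-resolution load series (kW) in descending order;
    the value at index m is the load met or exceeded for m+1 minutes."""
    return np.sort(np.asarray(load_kw, dtype=float))[::-1]

def minutes_at_or_above(load_kw, threshold_kw):
    """Number of minutes during which load is >= threshold_kw."""
    return int(np.sum(np.asarray(load_kw, dtype=float) >= threshold_kw))

# e.g., minutes spent at least 20 kW above a 120 kW baseline peak:
# minutes_at_or_above(aggregate_load_kw, 140.0)
```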
While TOU pricing might seem like a reasonable approach, the user charging data collected over the course of this project reveals that TOU rates can have perverse consequences. To understand the potential impact of TOU rates on aggregate demand, we simulate EV charging patterns under TOU rates using actual customer participation and charging data collected during Phase II of the OptimizEV Project. For every managed (opt-in) charging session initiated during that time frame, we generate a corresponding delayed charging profile, where a user’s EV does not begin charging until the start of the ensuing off-peak pricing period at midnight, at which time the EV begins drawing power at its maximum rate until its energy requirement is satisfied. A charging profile is only delayed to the off-peak period if it is feasible to do so given its underlying energy and completion time requirements. Finally, all of the charging profiles associated with unmanaged (opt-out) charging sessions are left unchanged. Figure 7(f) depicts the empirical median, interdecile range, and range of the aggregate load data simulated under TOU pricing during Phase II of the OptimizEV Project. Notice that TOU pricing is successful in keeping the median aggregate load profile beneath the baseline peak of 120 kW, suggesting that TOU rates may be effective in combating the adverse effects of unmanaged charging at modest EV penetration levels. However, the synchronization of EV power demand that results when multiple EVs begin charging simultaneously at the start of the off-peak pricing period (midnight) induces new aggregate demand spikes that are sharper and sometimes larger in magnitude than the unmanaged aggregate demand peaks that would have otherwise resulted in the absence of TOU rate-based incentives. This unintended consequence of TOU pricing is clearly illustrated in Figure 6(c), which depicts a realization of the aggregate load profile simulated under TOU pricing between March 9 and March 14, 2020. Notice that on several days during that week, the peak demand under TOU pricing significantly exceeds the peak demand realized under unmanaged charging on the same days. These observations suggest that at increased EV penetration levels, passive load coordination mechanisms like TOU pricing will likely fail to attenuate the peak demand driven by EV charging, and may make matters worse.

## 7 Why do customers opt out of managed charging?

We find that a large majority of “opt-out” sessions are characterized by inflexible charging requirements. Unsurprisingly, customers were more likely to opt out of managed charging if they needed their EVs charged very quickly, in time for their upcoming trips. On the other hand, customers were more inclined to opt in to managed charging if they did in fact possess a reasonable degree of flexibility (slack) in their charging requirements. To illustrate this behavior, we plot the empirical opt-in rate as a function of realized session slack time in Figure 9. The _realized slack time_ associated with a session is defined as the difference between the actual session duration and the minimum amount of time required to deliver the energy consumed during that session given the user’s maximum charging rate. Among all Phase II charging sessions, the empirical opt-in rate is lowest (at roughly 10%) for sessions with realized slack times that are less than one hour.
This is unsurprising, since customers with limited flexibility in their charging requirements have very little to offer (or gain) by relinquishing control of their EVs for managed charging. Remarkably, however, the empirical opt-in rate increases steadily as realized session slack times increase from zero to seven hours, and remains relatively constant (hovering around 80%) for realized session slack times between seven and eighteen hours—revealing that customers are unlikely to opt out of managed charging if they have several hours or more of slack time in their charging requirements. Surprisingly, the empirical opt-in rate decreases slightly for realized session slack times between 18 and 24 hours. A possible explanation for this behavior is that sessions with very long sojourn times may be associated with lower opt-in rates because they tend to occur on weekends, when customers’ commuting patterns are more variable and departure times may be more difficult to anticipate, resulting in a lower willingness to participate in managed charging sessions.

Figure 9: Empirical opt-in frequency as a function of realized session slack time based on all Phase II sessions with realized slack times that are less than or equal to 24 hours. The opt-in frequency associated with Phase II sessions excluded from this plot (i.e., those having realized slack times in excess of 24 hours) is 71%.

Figure 10: Scatter plot of Phase II session start times versus session end times, where opt-in (opt-out) sessions are depicted as blue circles (red squares). The distributions along the vertical and horizontal axes depict kernel density estimates of the marginal distributions for session start and end times, respectively.

Opt-in and opt-out charging sessions also differ significantly in terms of their timing. In Figure 10, we visualize the relationship between Phase II session start times and session end times in the form of a scatter plot that depicts opt-in (opt-out) sessions as blue circles (red squares). Notice that opt-in sessions typically correspond to overnight charging patterns, starting in the evening and ending the following morning. In comparison, opt-out sessions are more commonly associated with daytime commuting patterns, which result in charging sessions that are much shorter in duration and have start times that are more evenly distributed throughout the day. This contrast between the timing of opt-in and opt-out sessions suggests that customers are more likely to opt in to managed charging when their commuting needs are flexible and follow regular and predictable patterns, e.g., when they arrive home in the evening and expect to depart the following morning. It is also worth noting that, although opt-out sessions do occur regularly, the uniform dispersion of opt-out session start times throughout the daytime softens their expected contribution to peak demand, as they are unlikely to cluster during on-peak hours.

Figure 11: The plot depicts the average number of EVs connected to the grid throughout the week for three different phases of the COVID-19 pandemic in New York State during this pilot study: _pre-lockdown_ from January 1, 2020 to March 14, 2020 (black curve), _lockdown_ from March 15, 2020 to June 7, 2020 (red curve), and _phased re-opening_ from June 8, 2020 to May 31, 2021 (blue curve).

## 8 COVID-19 impacts on user charging behaviors

The first recorded U.S. case of COVID-19 was reported on January 20, 2020.
In the months following, federal, state, and local governments enacted a sequence of interventions designed to mitigate the spread of infections through “shelter-in-place” orders and closure of non-essential businesses. Unsurprisingly, the impacts of these interventions can be clearly seen in the shifting charging patterns throughout the OptimizEV Project. In Figure 11, we plot the average number of EVs connected to the grid throughout the week for three different phases of the COVID-19 pandemic. Prior to the lockdown, customers typically connected their vehicles to charge in the evening and disconnected their vehicles the following morning. During the lockdown, as customers started working from home, their charging patterns—previously dictated primarily by their commute to and from work—became less rigid. The average number of vehicles connected throughout the week flattened considerably as customer session start and end times became less concentrated, their session durations became longer, and their charging became more infrequent. As New York State gradually reopened, user connection patterns partially reverted to pre-lockdown patterns, as charging sessions shortened and concentrated more heavily during overnight hours. The frequency with which customers charge their EVs also appears to have been impacted by the COVID-19 pandemic. Figure 12 shows the evolution of charging session frequency between January 2020 and May 2021. Prior to the lockdown, customers were found to charge their EVs three or more times per week on average. Immediately after the start of the lockdown, the average number of weekly charging sessions per customer drops to approximately one session per week, as customers traveled less frequently and spent more time at home. Between May and November, the average number of weekly charging sessions per customer gradually rises as pandemic-related restrictions are loosened. In December, weekly charging session frequency falls again, likely driven in part by the resurgence in COVID-19 cases during that time frame. Despite these shifts in user connection patterns and charging session frequency, the rate at which customers participated in managed charging sessions remained relatively constant over time, hovering around a 60% opt-in rate throughout Phase II of the pilot study. Interestingly, this suggests that customers’ willingness to participate in managed charging sessions was not impacted by the various lockdown and reopening measures implemented over the course of the COVID-19 pandemic.

Figure 12: Plot of the average number of charging sessions per customer for each week during Phase I (Jan. 1, 2020 to Feb. 29, 2020) and Phase II (Mar. 8, 2020 to May 31, 2021) of the OptimizEV Project (excluding the transition week between Phase I and Phase II). During Phase II, opt-in (opt-out) sessions are depicted in blue (red). The weekly empirical opt-in rate during Phase II is depicted as a green curve. Sessions with zero energy consumption are excluded from these calculations.

## 9 Conclusion

The flexibility-differentiated pricing and control mechanism studied as part of the OptimizEV Project was shown to be highly effective in reshaping residential EV charging loads to minimize their impact on peak electricity demand.
The observed effectiveness of the mechanism is due in large part to high customer participation rates—when customers exhibited flexibility in their charging requirements, they frequently engaged in managed charging sessions, allowing the smart charging system to delay the completion of their charging by nine hours on average. While these initial observations provide concrete evidence in support of the proposed incentive and control mechanism as a potentially viable “non-wires alternative” to support the increased demand for electricity driven by the growing adoption of EVs, larger experimental studies and surveys are needed to better understand customer attitudes towards managed charging and the feasibility of such programs at scale. In particular, because of the differences between early EV adopters and the general population of electricity customers (Powells and Fell, 2019; Fjells et al., 2021), studies involving a more diverse population of customers are necessary. Although personal vehicle usage patterns suggest the presence of significant charging flexibility across the general population (Anwar et al., 2022), it is difficult to ascertain how a typical person’s willingness to engage in managed charging compares to that of the project participants. Moreover, because the typical electricity customer tends to be less technologically savvy and less motivated by sustainability considerations than the early adopters participating in the OptimizEV Project, the monetary incentive required to elicit flexibility from the general population may be larger. Finally, in order for managed EV charging programs like OptimizEV to be cost effective at scale, the benefit to the utility company, in the form of reduced energy procurement or avoided infrastructure costs, must exceed the program implementation and operational costs. These include administrative costs, the total cost of incentives paid out to participating customers, and the cost of the additional metering, communication, and control equipment required to perform direct EV load control. If the estimated benefits are expected to exceed the costs, then it may be prudent for the utility company to pursue programs of this kind to manage the increasing penetration of EV loads on their networks.

## 10 Acknowledgments

There are a number of partners without whom the OptimizEV Project would not have been possible. These partners include New York State Electric and Gas (NYSEG), New York State Energy Research and Development Authority (NYSERDA), New York State Department of Public Service, Cornell Cooperative Extension, Taitem Engineering, and Kitu Systems. This research was supported in part by the Holland Sustainability Project Trust and the National Science Foundation under a Graduate Research Fellowship, Grant No. ECCS-135162, and Grant No. IIP-1632124.

## Appendix A Methods

### A.1 Baseline non-EV load profile

Throughout Phase II of the OptimizEV Project, we utilized a fixed baseline non-EV load profile as an input to our real-time scheduling algorithm. The load profile is based on a 24-hour-long time series of aggregate load measurements taken from a distribution circuit in Tompkins County, New York, which serves 623 residential customers and 69 commercial and industrial customers. In order to obtain a load profile that reflects a small residential neighborhood consisting of 60 households, we re-scaled the given time series so that the resulting peak load equals 120 kW.
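A minimal sketch of this rescaling step (the function and array names are ours; assuming numpy):

```python
import numpy as np

def rescale_to_peak(load_kw, target_peak_kw=120.0):
    """Uniformly rescale a load time series (kW) so its peak equals target_peak_kw."""
    load_kw = np.asarray(load_kw, dtype=float)
    return load_kw * (target_peak_kw / load_kw.max())
```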
### A.2 Real-time scheduling algorithm

In what follows, we provide a detailed mathematical formulation of the real-time scheduling algorithm that was used as part of the OptimizEV Project. We index time periods by $t=1,2,\dots$, and let $\Delta=1/60$ (units: hours) denote the length of each time period. We let $\mathcal{N}(t)\subseteq\{1,\dots,34\}$ denote the set of EVs that are actively engaged in _managed_ (opt-in) charging sessions at time $t$. The charging profiles associated with EVs engaged in managed sessions can be adjusted within the constraints implied by their users’ stated charging requirements. We denote the charging requirements associated with each user $i\in\mathcal{N}(t)$ at time $t$ by

* $R_{i}$: user $i$’s maximum charging rate (units: kilowatts),
* $E_{i}(t)$: user $i$’s residual energy requirement at time $t$ (units: kilowatt-hours),
* $d_{i}(t)$: the number of remaining time periods by which the energy requirement $E_{i}(t)$ must be satisfied.

The SAE J1772 charging protocol also imposes a charging constraint in the form of a minimum non-zero charging rate, which we denote by $R^{\mathrm{min}}$. At each time $t$, the scheduling algorithm generates a charging profile for every actively managed EV spanning the next $T=1440$ time periods (24 hours). We denote the sequence of charging power commands sent to each active EV $i\in\mathcal{N}(t)$ at time $t$ by $r_{i}(1),\,r_{i}(2),\,\dots,\,r_{i}(T)$ (units: kilowatts). The managed charging profiles are chosen to collectively flatten the aggregate load profile by solving the following optimization problem at each time $t$:

minimize $\displaystyle\sum_{k=1}^{T}\Big(\ell(t+k-1)+\sum_{i\in\mathcal{N}(t)}r_{i}(k)\Big)^{2}$ (3a)

subject to $\displaystyle\sum_{k=1}^{d_{i}(t)}r_{i}(k)\,\Delta=E_{i}(t) \quad \forall\,i\in\mathcal{N}(t),$ (3b)

$r_{i}(k)\in\{0\}\cup[R^{\mathrm{min}},\,R_{i}] \quad \forall\,k\in\{1,\dots,d_{i}(t)\}\ \text{and}\ i\in\mathcal{N}(t),$ (3c)

$r_{i}(k)=0 \quad \forall\,k\notin\{1,\dots,d_{i}(t)\}\ \text{and}\ i\in\mathcal{N}(t).$ (3d)

In the above optimization problem, the sequence $\ell(t),\,\dots,\,\ell(t+T-1)$ (units: kilowatts) denotes the baseline non-EV load profile plus the aggregate _unmanaged_ EV load profile stemming from all EVs engaged in unmanaged (opt-out) charging sessions at time $t$. Note that EVs engaged in unmanaged charging sessions are charged at their maximum rates, without delay, until their energy requirements are satisfied. Constraint (3b) ensures that each user’s requested energy is delivered prior to their deadline. Constraint (3c) ensures that the charging power commands computed for each actively managed EV are either zero valued, or between the charging station’s minimum non-zero charging rate and the EV’s maximum charging rate. Constraint (3d) ensures that no actively managed EV is charged after its deadline has passed. The optimization criterion, defined on line (3a), promotes EV charging profiles with a “valley-filling” characteristic. In order to effectively adapt to changing system conditions in real time, the smart-charging system developed for the OptimizEV Project utilizes a model predictive control approach to continuously re-optimize the active EVs’ charging profiles every minute of the day. We provide pseudocode that clearly specifies the recursive nature of this real-time optimization routine in Algorithm 1.
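To make the formulation concrete, the following minimal sketch (assuming the open-source cvxpy and numpy packages; all data values are hypothetical) sets up and solves one instance of the convex relaxation of problem (3) in which the disjunctive constraint (3c) is relaxed to $0\leq r_{i}(k)\leq R_{i}$, as discussed in Appendix A.3. Algorithm 1 below embeds a solve of this kind in the per-minute control loop.

```python
import cvxpy as cp
import numpy as np

T = 1440                          # horizon: 24 hours of one-minute periods
delta = 1.0 / 60.0                # period length (hours)
R = np.array([6.6, 7.2])          # hypothetical max charging rates (kW)
E = np.array([18.0, 25.0])        # hypothetical residual energy needs (kWh)
d = np.array([600, 900])          # hypothetical remaining periods to deadline
ell = 80.0 + 30.0 * np.sin(np.linspace(0, 2 * np.pi, T))  # stand-in load forecast (kW)

r = cp.Variable((len(R), T), nonneg=True)  # charging commands (kW)
constraints = []
for i in range(len(R)):
    constraints += [
        cp.sum(r[i, :d[i]]) * delta == E[i],  # (3b): energy met by deadline
        r[i, :] <= R[i],                      # relaxed (3c): 0 <= r_i(k) <= R_i
        r[i, d[i]:] == 0,                     # (3d): no charging past deadline
    ]
# (3a): penalize the squared aggregate load to promote valley filling
objective = cp.Minimize(cp.sum_squares(ell + cp.sum(r, axis=0)))
cp.Problem(objective, constraints).solve()
print(r.value.max(axis=1))  # peak commanded rate per EV
```

Rounding the relaxed solution so that it respects the minimum non-zero rate $R^{\mathrm{min}}$, as described in Appendix A.3, recovers a feasible schedule for the original problem.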
Algorithm 1: Real-time Scheduling Algorithm

for $t\in\\{1,2,\dots\\}$ do
  Solve optimization problem (3) to compute the optimal charging profile $(r^{*}_{i}(1),\dots,r^{*}_{i}(T))$ for each actively managed EV $i\in\mathcal{N}(t)$;
  for $i\in\mathcal{N}(t)$ do
    Transmit optimal charge rate $r^{*}_{i}(1)$ to EV $i$;
    Measure charge rate $\widehat{r}_{i}(t)$ implemented by EV $i$;
    Set $E_{i}(t+1):=E_{i}(t)-\widehat{r}_{i}(t)\Delta$;
    Set $d_{i}(t+1):=d_{i}(t)-1$;
  end for
end for

We briefly summarize each step of the pseudocode provided in Algorithm 1. Immediately prior to time period $t$, the smart-charging system solves problem (3) to determine optimal charging profiles for all actively managed EVs, which we denote by $r^{*}=\\{r^{*}_{i}(k)\mid i\in\mathcal{N}(t),k\in\\{1,\dots,T\\}\\}$. These optimal charging profiles are then transmitted to each EV’s charging station as a sequence of time-varying power commands. Each EV $i\in\mathcal{N}(t)$ is instructed to execute the first component of the sequence of power commands that it receives, $r^{*}_{i}(1)$. Because there may be deviations between the actual power drawn by an EV and the commanded power, each active EV $i$ transmits a measurement of the actual power drawn during time period $t$ to the smart-charging system, which we denote by $\widehat{r}_{i}(t)$. The smart-charging system uses this information to update each user’s residual energy requirement according to $E_{i}(t+1)=E_{i}(t)-\widehat{r}_{i}(t)\Delta$. This information, together with an updated forecast of the baseline non-EV load and aggregate unmanaged EV load, is used to re-solve problem (3), generating a new sequence of power commands for the subsequent time period $t+1$.

### A.3 Practical algorithmic considerations

In this section, we discuss a number of practical challenges encountered when deploying the proposed real-time scheduling algorithm in a real-world setting. The first challenge pertains to the need to ensure that all necessary computations that are carried out at each time period are completed within the allowable time budget (less than one minute). Problem (3) is a nonconvex optimization problem due to the disjunctive constraint (3c). Specifically, it belongs to the class of mixed integer quadratic programs (MIQP). Although it is possible to solve MIQPs exactly using branch and bound methods, these approaches can be extremely slow when the number of decision variables is large. To circumvent these computational challenges, we employ a convex relaxation of the disjunctive constraint (3c) by allowing the charging rate to take any value between zero and the maximum charging rate. The resulting relaxation is a convex quadratic program that can be solved to optimality within seconds on a standard desktop computer using interior point methods. We convert solutions generated by the relaxed problem into feasible solutions for the original problem by appropriately rounding those solutions to ensure that the original disjunctive constraint (3c) is satisfied. A sketch of this relaxation-and-rounding step is given below.
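The following Python sketch illustrates the relaxed problem using the open-source cvxpy modeling library. It is our illustration only: the parameter values (e.g., the J1772 minimum rate) and the simple rounding rule are assumptions, not the project's production logic.

```python
# Minimal sketch of the convex relaxation of problem (3) using cvxpy.
import cvxpy as cp
import numpy as np

def schedule(ell, E, R, d, R_min=1.4, delta=1.0 / 60.0):
    """ell: forecast baseline + unmanaged EV load over T periods (kW);
    E[i]: residual energy (kWh); R[i]: max rate (kW); d[i]: periods to deadline."""
    T, n = len(ell), len(E)
    r = cp.Variable((n, T), nonneg=True)  # relaxed (3c): 0 <= r_i(k) <= R_i
    cons = []
    for i in range(n):
        cons += [
            cp.sum(r[i, : d[i]]) * delta == E[i],  # energy delivered by deadline (3b)
            r[i, : d[i]] <= R[i],                  # maximum charging rate
            r[i, d[i] :] == 0,                     # no charging past deadline (3d)
        ]
    # Valley-filling objective (3a): flatten the aggregate load profile.
    cp.Problem(cp.Minimize(cp.sum_squares(ell + cp.sum(r, axis=0))), cons).solve()
    rates = np.asarray(r.value)
    # Crude rounding back to {0} U [R_min, R_i]; a real deployment would
    # re-check feasibility after rounding (see the discussion that follows).
    rates[rates < R_min / 2] = 0.0
    rates[(rates >= R_min / 2) & (rates < R_min)] = R_min
    return rates  # the first column, rates[:, 0], is dispatched, as in Algorithm 1
```

After rounding, only the first commanded rate for each EV is dispatched; the problem is then re-solved one minute later with updated measurements, exactly as in the model predictive control loop of Algorithm 1.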
The second challenge is related to the occasional constraint infeasibility that results from nonidealities in EV charging characteristics and uncertainty in user inputs. Specifically, in order to maintain feasibility of the optimization problem (3), it must be possible to satisfy every user’s residual energy requirement prior to their stated deadline subject to their maximum rated charging capacity. In other words, the slack associated with every active charging requirement must always be non-negative. Formally, this corresponds to the requirement that

$\displaystyle d_{i}(t)-\frac{E_{i}(t)}{R_{i}\Delta}\,\geq\,0,$ (4)

for all users $i\in\mathcal{N}(t)$ and time periods $t$. In order to enforce the satisfaction of this requirement, users are not permitted to make infeasible charging requests. However, because the power drawn by an EV may deviate from the power commands that it is instructed to follow, charging requests may become infeasible over time. When a user’s charging request becomes infeasible, it is modified by reducing the user’s residual energy requirement $E_{i}(t)$ to the maximum amount of energy that can be delivered prior to the user’s stated deadline, to ensure the feasibility of the optimization problem (3).

### A.4 Demographics

Prior to the start of the OptimizEV Project, households with EVs registered in Tompkins County were surveyed to assess their demographic characteristics and attitudes toward energy. Survey respondents were mostly middle or upper class, with 65% reporting an annual household income greater than $75,000. Respondents also tended to be highly educated, with 55% having a graduate or professional degree. Unsurprisingly, respondents also demonstrated positive attitudes toward energy innovation and conscientiousness in energy consumption: over half of the respondents reported being receptive to changing their energy consumption habits and adopting novel technologies, and approximately a third of the respondents reported strongly valuing sustainability.

## References

* Andersen et al. (2019) Andersen, P. B., Toghroljerdi, S. H., Sørensen, T. M., Christensen, B. E., Høj, J. and Zecchino, A. (2019), ‘The Parker Project’, Final Report 1.
* Anwar et al. (2022) Anwar, M. B., Muratori, M., Jadun, P., Hale, E., Bush, B., Denholm, P., Ma, O. and Podkaminer, K. (2022), ‘Assessing the value of electric vehicle managed charging: a review of methodologies and results’, Energy & Environmental Science.
* Bauman et al. (2016) Bauman, J., Stevens, M., Hacikyan, S., Tremblay, L., Mallia, E. and Mendes, C. (2016), ‘Residential smart-charging pilot program in Toronto: results of a utility controlled charging pilot’, World Electric Vehicle Journal 8(2), 531–542.
* Bloomberg NEF (2021) Bloomberg NEF (2021), ‘Hitting the EV inflection point: Electric vehicle price parity and phasing out combustion vehicle sales in Europe’.
* Bohn and Glenn (2016) Bohn, T. and Glenn, H. (2016), A real world technology testbed for electric vehicle smart charging systems and PEV-EVSE interoperability evaluation, in ‘2016 IEEE Energy Conversion Congress and Exposition (ECCE)’, IEEE, pp. 1–8.
* Camacho and Alba (2013) Camacho, E. F. and Alba, C. B. (2013), Model predictive control, Springer Science & Business Media.
* Chynoweth et al. (2014) Chynoweth, J., Chung, C.-Y., Qiu, C., Chu, P. and Gadh, R. (2014), Smart electric vehicle charging infrastructure overview, in ‘ISGT 2014’, IEEE, pp. 1–5.
* Clement-Nyns et al. (2010) Clement-Nyns, K., Haesen, E. and Driesen, J. (2010), ‘The impact of charging plug-in hybrid electric vehicles on a residential distribution grid’, IEEE Transactions on Power Systems 25(1), 371–380.
* Colias (2021) Colias, M. (2021), ‘GM aims to go all electric by 2035’, Wall Street Journal.
* Daramy-Williams et al. (2019) Daramy-Williams, E., Anable, J. and Grant-Muller, S.
(2019), ‘A systematic review of the evidence on plug-in electric vehicle user experience’, Transportation Research Part D: Transport and Environment 71, 22–36.
* Davidson et al. (2018) Davidson, F. T., Tuttle, D., Rhodes, J. and Nagasawa, K. (2018), ‘Switching to electric vehicles could save the US billions, but timing is everything’, The Conversation.
* Eisen (2015) Eisen, J. B. (2015), ‘FERC’s expansive authority to transform the electric grid’, UCDL Rev. 49, 1783.
* Fjells et al. (2021) Fjells, I. F., Silvast, A. and Skjølsvold, T. M. (2021), ‘Justice aspects of flexible household electricity consumption in future smart energy systems’, Environmental Innovation and Societal Transitions 38, 98–109.
* Fung (2016) Fung, B. (2016), ‘The states where it pays to buy a new electric car’, The Washington Post.
* Gan et al. (2012) Gan, L., Topcu, U. and Low, S. H. (2012), ‘Optimal decentralized protocol for electric vehicle charging’, IEEE Transactions on Power Systems 28(2), 940–951.
* Gong et al. (2011) Gong, Q., Midlam-Mohler, S., Marano, V. and Rizzoni, G. (2011), ‘Study of PEV charging on residential distribution transformer life’, IEEE Transactions on Smart Grid 3(1), 404–412.
* Grable (2016) Grable, J. E. (2016), Financial risk tolerance, in ‘Handbook of consumer finance research’, Springer, pp. 19–31.
* Grandoni et al. (2020) Grandoni, D., Siddiqui, F. and Dennis, B. (2020), ‘California to phase out sales of new gas-powered cars by 2035’, The Washington Post.
* Gruosso (2016) Gruosso, G. (2016), Analysis of impact of electrical vehicle charging on low voltage power grid, in ‘2016 International Conference on Electrical Systems for Aircraft, Railway, Ship Propulsion and Road Vehicles & International Transportation Electrification Conference (ESARS-ITEC)’, IEEE, pp. 1–6.
* Hilshey et al. (2012) Hilshey, A. D., Hines, P. D., Rezaei, P. and Dowds, J. R. (2012), ‘Estimating the impact of electric vehicle smart charging on distribution transformer aging’, IEEE Transactions on Smart Grid 4(2), 905–913.
* Honarmand et al. (2014) Honarmand, M., Zakariazadeh, A. and Jadid, S. (2014), ‘Integrated scheduling of renewable generation and electric vehicles parking lot in a smart microgrid’, Energy Conversion and Management 86, 745–755.
* Idaho National Laboratory (2013) Idaho National Laboratory (2013), What Clustering Effects have been Seen by the EV Project, Technical report.
* International Energy Agency (2020) International Energy Agency (2020), Global EV Outlook 2020, Technical report.
* Jardini et al. (2000) Jardini, J. A., Schmidt, H. P., Tahan, C. M., De Oliveira, C. C. and Ahn, S. U. (2000), ‘Distribution transformer loss of life evaluation: a novel approach based on daily load profiles’, IEEE Transactions on Power Delivery 15(1), 361–366.
* Jiang and Zhen (2019) Jiang, W. and Zhen, Y. (2019), ‘A real-time EV charging scheduling for parking lots with PV system and energy store system’, IEEE Access 7, 86184–86193.
* Jin et al. (2013) Jin, C., Tang, J. and Ghosh, P. (2013), ‘Optimizing electric vehicle charging with energy storage in the electricity market’, IEEE Transactions on Smart Grid 4(1), 311–320.
* Karfopoulos and Hatziargyriou (2012) Karfopoulos, E. L. and Hatziargyriou, N. D. (2012), ‘A multi-agent system for controlled charging of a large population of electric vehicles’, IEEE Transactions on Power Systems 28(2), 1196–1204.
* Lee et al. (2016) Lee, G., Lee, T., Low, Z., Low, S. H. and Ortega, C.
(2016), Adaptive charging network for electric vehicles, in ‘2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP)’, IEEE, pp. 891–895.
* Lee et al. (2019) Lee, Z. J., Li, T. and Low, S. H. (2019), ACN-data: Analysis and applications of an open EV charging dataset, in ‘Proceedings of the Tenth ACM International Conference on Future Energy Systems’, pp. 139–149.
* Leou et al. (2013) Leou, R.-C., Su, C.-L. and Lu, C.-N. (2013), ‘Stochastic analyses of electric vehicle charging impacts on distribution network’, IEEE Transactions on Power Systems 29(3), 1055–1063.
* Lutsey and Nicholas (2019) Lutsey, N. and Nicholas, M. (2019), ‘Update on electric vehicle costs in the United States through 2030’, The International Council on Clean Transportation 2.
* Ma et al. (2011) Ma, Z., Callaway, D. S. and Hiskens, I. A. (2011), ‘Decentralized charging control of large populations of plug-in electric vehicles’, IEEE Transactions on Control Systems Technology 21(1), 67–78.
* Muratori (2018) Muratori, M. (2018), ‘Impact of uncoordinated plug-in electric vehicle charging on residential power demand’, Nature Energy 3(3), 193–201.
* Myers et al. (2019) Myers, E. H., Hargrave, J., Farinas, R., Hledik, R. and Burke, L. (2019), Residential electric vehicle rates that work, Technical report, Smart Electric Power Alliance.
* Nykvist and Nilsson (2015) Nykvist, B. and Nilsson, M. (2015), ‘Rapidly falling costs of battery packs for electric vehicles’, Nature Climate Change 5(4), 329–332.
* Parker et al. (2021) Parker, N., Breetz, H. L., Salon, D., Conway, M. W., Williams, J. and Patterson, M. (2021), ‘Who saves money buying electric vehicles? heterogeneity in total cost of ownership’, Transportation Research Part D: Transport and Environment 96, 102893.
* Powell et al. (2020) Powell, S., Kara, E. C., Sevlian, R., Cezar, G. V., Kiliccote, S. and Rajagopal, R. (2020), ‘Controlled workplace charging of electric vehicles: The impact of rate schedules on transformer aging’, Applied Energy 276, 115352.
* Powells and Fell (2019) Powells, G. and Fell, M. J. (2019), ‘Flexibility capital and flexibility justice in smart energy systems’, Energy Research & Social Science 54, 56–59.
* Quiros-Tortos et al. (2018) Quiros-Tortos, J., Ochoa, L. and Butler, T. (2018), ‘How electric vehicles and the grid work together: Lessons learned from one of the largest electric vehicle trials in the world’, IEEE Power and Energy Magazine 16(6), 64–76.
* Smart and Salisbury (2015) Smart, J. G. and Salisbury, S. D. (2015), Lessons learned about plug-in electric vehicle charging infrastructure from the EV Project and ChargePoint America, Technical report, Idaho National Lab (INL), Idaho Falls, ID, USA.
* Spencer et al. (2021) Spencer, S. I., Fu, Z., Apostolaki-Iosifidou, E. and Lipman, T. E. (2021), ‘Evaluating smart charging strategies using real-world data from optimized plugin electric vehicles’, Transportation Research Part D: Transport and Environment 100, 103023.
* Szinai et al. (2020) Szinai, J. K., Sheppard, C. J., Abhyankar, N. and Gopal, A. R. (2020), ‘Reduced grid operating costs and renewable energy curtailment with electric vehicle charge management’, Energy Policy 136, 111051.
* Tushar et al. (2012) Tushar, W., Saad, W., Poor, H. V. and Smith, D. B. (2012), ‘Economics of electric vehicle charging: A game theoretic approach’, IEEE Transactions on Smart Grid 3(4), 1767–1778.
* Wang et al. (2016) Wang, Q., Liu, X., Du, J. and Kong, F.
(2016), ‘Smart charging for electric vehicles: A survey from the algorithmic perspective’, IEEE Communications Surveys & Tutorials 18(2), 1500–1517.
* Wu et al. (2011) Wu, D., Aliprantis, D. C. and Ying, L. (2011), ‘Load scheduling and dispatch for aggregators of plug-in electric vehicles’, IEEE Transactions on Smart Grid 3(1), 368–376.
* Yang et al. (2015) Yang, Z., Li, K. and Foley, A. (2015), ‘Computational scheduling methods for integrating plug-in electric vehicles with power systems: A review’, Renewable and Sustainable Energy Reviews 51, 396–416.
* Zhang et al. (2017) Zhang, G., Tan, S. T. and Wang, G. G. (2017), ‘Real-time smart charging of electric vehicles for demand charge reduction at non-residential sites’, IEEE Transactions on Smart Grid 9(5), 4027–4037.
# Uniform algebras and distinguished varieties

Sushil Gorai and Golam Mostafa Mondal Department of Mathematics and Statistics, Indian Institute of Science Education and Research Kolkata, Mohanpur – 741 246<EMAIL_ADDRESS><EMAIL_ADDRESS>Department of Mathematics, Indian Institute of Science Education and Research Pune, Pune – 411 008<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract.

In this article, we point out the connections between the distinguished varieties introduced by Agler and McCarthy and certain uniform algebras on the bidisc studied by Samuelsson and Wold. We also prove analogues of the Samuelsson-Wold result for domains in $\mathbb{C}^{2}$ that are the images of the bidisc under certain proper polynomial maps on $\mathbb{C}^{2}$. We also give a description of the polynomial convex hull of the graph of an anti-holomorphic polynomial over the distinguished boundary of such domains. We mention the case of the symmetrized bidisc as an example.

###### Key words and phrases: Polynomial convexity; Uniform approximation; Wermer maximality theorem; Symmetrized bidisc; Distinguished variety

###### 2020 Mathematics Subject Classification: Primary: 32E30, 32E20; Secondary: 47A25

## 1. Introduction

This article connects the theory of distinguished varieties, a well-explored topic in operator theory, with the notion of uniform algebras generated by holomorphic polynomials and certain pluriharmonic functions. The latter is also a very well-studied object in several complex variables. In particular, we observe that the obstruction to uniformly approximating all continuous functions on the distinguished boundary of certain domains in $\mathbb{C}^{2}$ by elements of the algebra generated by the holomorphic polynomials in $z_{1}$ and $z_{2}$ and some pluriharmonic functions is the presence of a certain distinguished variety in the domain on which the pluriharmonic functions become holomorphic. Before making this precise, let us briefly recall the theory of distinguished varieties and the theory of uniform algebras one by one. In a seminal paper [4], Agler and McCarthy introduced the notion of distinguished variety in the bidisc $\mathbb{D}^{2}$ as follows: A non-empty set $V$ in $\mathbb{C}^{2}$ is said to be a distinguished variety if there exists a polynomial $p$ in $\mathbb{C}[z,w]$ such that $V=\\{(z,w)\in\mathbb{D}^{2}:p(z,w)=0\\}$ and such that

$\displaystyle\overline{V}\cap\partial\mathbb{D}^{2}=\overline{V}\cap\mathbb{T}^{2}.$ (1.1)

Here, $\partial\mathbb{D}^{2}$ represents the boundary of $\mathbb{D}^{2}$, and $\mathbb{T}^{2}$ is the distinguished boundary of $\mathbb{D}^{2}$. A distinguished variety is an algebraic variety that exits the bidisc through the distinguished boundary. The set $\overline{V}$ is the closure of $V$ within $\overline{\mathbb{D}}^{2}$. We will use $\partial V$ to denote the set described by (1.1). From a topological standpoint, $\partial V$ represents the boundary of $V$ within the zero set of $p$ rather than within the entirety of $\mathbb{C}^{2}$. One of the fundamental results in operator theory, known as Andô’s inequality [5], establishes that when $T_{1}$ and $T_{2}$ are commuting operators, each with norm not exceeding 1, the following inequality holds for any two-variable polynomial $p$:

$\displaystyle\|p(T_{1},T_{2})\|\leq\|p\|_{\mathbb{D}^{2}}.$ (1.2)
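Returning to the definition above, a simple example of a distinguished variety (our illustration, for concreteness) is the diagonal of the bidisc: any point of its closure lying in the topological boundary must have both coordinates of modulus one, so that

$\displaystyle V=\\{(z,w)\in\mathbb{D}^{2}:z-w=0\\},\qquad\overline{V}\cap\partial\mathbb{D}^{2}=\\{(z,z):|z|=1\\}=\overline{V}\cap\mathbb{T}^{2}.$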
Agler and McCarthy [4, Theorem 3.1] gave the following improvement of the inequality (1.2): if $T_{1}$ and $T_{2}$ are matrices, then $\|p(T_{1},T_{2})\|\leq\|p\|_{V},$ where $V$ is a distinguished variety that depends on $T_{1}$ and $T_{2}$. Additionally, in [4, Theorem 1.12], the authors have shown that all distinguished varieties in the bidisc can be expressed as $\\{(z,w)\in\mathbb{D}^{2}:\det(\Psi(z)-wI)=0\\},$ where $\Psi$ is an analytic matrix-valued function defined on the disc, and it is unitary on $\partial\mathbb{D}$. A similar description of distinguished varieties in the symmetrized bidisc is given in [24].

Consider a compact subset $K$ of $\mathbb{C}^{n}$. The space of all continuous complex-valued functions on $K$ is denoted as $\mathcal{C}(K)$, equipped with the norm $\|g\|=\sup_{z\in K}|g(z)|$. We denote the closure of the set of polynomials in $\mathcal{C}(K)$ as $\mathcal{P}(K)$. For a collection of functions $g_{1},\ldots,g_{N}\in\mathcal{C}(K)$, we use $[g_{1},\ldots,g_{N};K]$ to represent the uniform algebra generated by $g_{1},\ldots,g_{N}$ on $K$. We define the set $X=\\{(g_{1}(z),\ldots,g_{N}(z)):z\in K\\}$ associated with the uniform algebra $[g_{1},\ldots,g_{N};K]$. Using the Stone-Weierstrass theorem, we assert that $[g_{1},\cdots,g_{N};K]=\mathcal{C}(K)$ if and only if $\mathcal{P}(X)=\mathcal{C}(X)$ and the generators $g_{1},\ldots,g_{N}$ separate points on $K$. If we consider $\mathcal{P}(K)$ and $\mathcal{C}(K)$ as Banach algebras, the equality $\mathcal{P}(K)=\mathcal{C}(K)$ implies the equality of their corresponding maximal ideal spaces. The maximal ideal space of $\mathcal{C}(K)$ corresponds to $K$, and that of $\mathcal{P}(K)$ corresponds to $\widehat{K}$, where $\widehat{K}$ is the polynomial convex hull of $K$ (see [13]), defined as follows:

$\displaystyle\widehat{K}:=\left\\{\alpha\in\mathbb{C}^{n}:|p(\alpha)|\leq\max_{K}|p|\ \ \forall p\in\mathbb{C}[z_{1},z_{2},\cdots,z_{n}]\right\\}.$

We say $K$ is polynomially convex when $\widehat{K}=K$. Thus, polynomial convexity is a necessary condition on any compact $K$ for which $\mathcal{P}(K)=\mathcal{C}(K)$ holds. Recall that an _analytic disc_ in $\mathbb{C}^{n}$ is a holomorphic map $\phi:\mathbb{D}\rightarrow\mathbb{C}^{n}$ which is non-constant and continuous on $\overline{\mathbb{D}}$. Let $K\subset\mathbb{C}^{n}.$ We say an analytic disc $\phi$ is present in $K$ if $\phi(\mathbb{D})\subset K.$ In view of Lavrentiev’s result [22], if $K$ is a compact subset of $\mathbb{C},$ then $\mathscr{P}(K)=\mathcal{C}(K)$ if and only if $K=\widehat{K}$ and there does not exist any analytic disc in $K.$ In higher dimensions, however, this is far from a sufficient condition. This article discusses some results in which the presence of an analytic disc is the only obstruction for a polynomially convex compact $K$ to satisfy $\mathscr{P}(K)=\mathcal{C}(K).$
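A standard example illustrating these notions (recorded here for the reader's convenience): the polynomial convex hull of the torus is the closed bidisc. Applying the maximum modulus principle in each variable separately gives, for every polynomial $p$,

$\displaystyle\max_{\overline{\mathbb{D}}^{2}}|p|=\max_{\mathbb{T}^{2}}|p|,\qquad\text{so that}\qquad\widehat{\mathbb{T}^{2}}=\overline{\mathbb{D}}^{2},$

where the reverse inclusion $\widehat{\mathbb{T}^{2}}\subset\overline{\mathbb{D}}^{2}$ holds since the closed bidisc is convex and hence polynomially convex. In particular $\mathcal{P}(\mathbb{T}^{2})\neq\mathcal{C}(\mathbb{T}^{2})$, and the hull is filled out by the analytic discs $\zeta\mapsto(\zeta,w_{0})$ and $\zeta\mapsto(z_{0},\zeta)$ with $|z_{0}|,|w_{0}|\leq 1$. With these notions in hand, we now turn to the Wermer maximality theorem.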
Let $\mathbb{T}^{1}$ be the unit circle in the complex plane and $\mathcal{C}(\mathbb{T}^{1})$ be the set of all continuous complex-valued functions on $\mathbb{T}^{1}.$ Let $\mathcal{A}$ denote the set of all $f\in\mathcal{C}(\mathbb{T}^{1})$ which are boundary values of functions holomorphic on $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}.$ In [23], the following question was asked: if $g\in\mathcal{C}(\mathbb{T}^{1})\setminus\mathcal{A},$ does the closed algebra generated by $g$ and $\mathcal{A}$ equal $\mathcal{C}(\mathbb{T}^{1})$? In [23], it is shown that if $g$ is real-valued or if $g$ satisfies a Lipschitz condition, then the algebra generated by $g$ and $\mathcal{A}$ equals $\mathcal{C}(\mathbb{T}^{1}).$ Wermer [33] settled this question by proving the following:

###### Result 1.1 (Wermer). Let $\mathcal{B}$ be any closed subalgebra of $\mathcal{C}(\mathbb{T}^{1})$ with $\mathcal{A}\subset\mathcal{B}\subset\mathcal{C}(\mathbb{T}^{1}).$ Then either $\mathcal{A}=\mathcal{B}$ or $\mathcal{B}=\mathcal{C}(\mathbb{T}^{1}).$

A uniform algebra $\mathcal{U}$ defined on a compact subset $K$ is said to be a maximal subalgebra of $\mathcal{C}(K)$ if, for any other subalgebra $\mathcal{B}$ of $\mathcal{C}(K)$ such that $\mathcal{U}\subset\mathcal{B}\subset\mathcal{C}(K)$, it holds that either $\mathcal{U}=\mathcal{B}$ or $\mathcal{B}=\mathcal{C}(K)$. Result 1.1 is known as the Wermer Maximality Theorem. A related result due to Wermer is the following [34]: Let $g\in C^{1}(\overline{\mathbb{D}}).$ Assume that the graph ${\sf Gr}_{\overline{\mathbb{D}}}(g)\subset\mathbb{C}^{2}$ of $g$ is polynomially convex. Let $E:=\\{z\in\overline{\mathbb{D}}:\frac{\partial g}{\partial\bar{z}}(z)=0\\}.$ Then

$\displaystyle[z,g;\overline{\mathbb{D}}]=\\{f\in C(\overline{\mathbb{D}}):f|_{E}\in\mathcal{O}(E)\\}.$

It is natural to ask for versions of these results in higher dimensions. In higher dimensions the question has no answer as clean as the Wermer maximality theorem; the natural objective is to generalize the second result of Wermer, even when considering the algebra generated by the polynomials and a single pluriharmonic function. For a domain $\Omega\subset\mathbb{C}^{n},$ let $PH(\Omega)$ denote the class of all pluriharmonic functions on $\Omega.$ The works of Čirka [32], Izzo [14, 15], Samuelsson and Wold [28], and Izzo, Samuelsson, and Wold [16] focused on the study of uniform algebras generated by holomorphic and pluriharmonic functions in higher dimensions. Samuelsson and Wold [28] proved the following results in the case of the bidisc $\mathbb{D}^{2}.$

###### Result 1.2 (Samuelsson-Wold). Let $h_{j}\in PH(\mathbb{D}^{2})\cap\mathcal{C}^{1}(\overline{\mathbb{D}}^{2})$ for $j=1,\cdots,N.$ Then either there exists a holomorphic disc in $\overline{\mathbb{D}}^{2}$ where all $h_{j}$’s are holomorphic, or $[z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\mathbb{D}}^{2}]=\mathcal{C}(\overline{\mathbb{D}}^{2}).$

The following result can be thought of as an analogue of the Wermer maximality theorem in the case of the bidisc.

###### Result 1.3 (Samuelsson-Wold). Let $f_{j}\in\mathcal{C}(\mathbb{T}^{2})$ for $j=1,\cdots,N$ with $N\geq 1$, and assume that each $f_{j}$ extends to a pluriharmonic function on $\mathbb{D}^{2}$.
Then either $[z_{1},z_{2},f_{1},\cdots,f_{N};\mathbb{T}^{2}]=\mathcal{C}(\mathbb{T}^{2})$, or there exists a non-trivial algebraic variety $Z\subset\mathbb{C}^{2}$ with $\overline{V}\setminus V\subset\mathbb{T}^{2},$ and the pluriharmonic extensions of the $f_{j}$’s are holomorphic on $Z,$ where $V=Z\cap(\overline{\mathbb{D}^{2}}\setminus\mathbb{T}^{2}).$

###### Remark 1.4. In Result 1.3, if there is no analytic disc lying in $\partial\mathbb{D}^{2}$ on which all of the functions $f_{1},\dots,f_{N}$ are holomorphic, and $[z_{1},z_{2},f_{1},\cdots,f_{N};\mathbb{T}^{2}]\neq\mathcal{C}(\mathbb{T}^{2})$, then the algebraic variety that exists is a distinguished variety. As mentioned earlier, by a result of Agler and McCarthy [4], every distinguished variety in the bidisc is of the form $\\{(z,w)\in\mathbb{D}^{2}:\det(\Psi(z)-wI)=0\\}$ for some matrix-valued holomorphic function $\Psi$ on $\mathbb{D}$. Therefore, the variety that exists in Result 1.3 is also of the above determinantal form. We do not know what connection there is between the matrix-valued function $\Psi$ in [4] and the pluriharmonic functions in Result 1.3.

###### Remark 1.5. It might occur that the variety in Result 1.3 appears in the boundary of the bidisc. In this case, the variety is not a distinguished variety, but such a variety can also be explained from the operator-theoretic point of view by a result due to Das and Sarkar [11, Theorem 4.3]. From the proof of Result 1.3 it is clear that such a variety has the form $\\{\lambda\\}\times\mathbb{D}$ or $\mathbb{D}\times\\{\lambda\\}$ for some $\lambda\in\partial\mathbb{D}$, which matches the description in [11, Theorem 4.3].

Let $\phi=(p_{1},p_{2}):\mathbb{C}^{2}\to\mathbb{C}^{2}$ be a proper polynomial map, and consider the domain $\Omega=\phi(\mathbb{D}^{2})$ in $\mathbb{C}^{2}$; we note that the distinguished boundary of $\Omega$ for the algebra $\mathcal{A}(\Omega)$ is $\Gamma_{\Omega}=\phi(\mathbb{T}^{2}).$ We prove the following generalizations of Result 1.2 and Result 1.3 for the above domain.

###### Theorem 1.6. Let $h_{j}\in PH(\Omega)\cap\mathcal{C}^{1}(\overline{\Omega})$ for $j=1,\cdots,N,$ and $\phi^{-1}(\overline{\Omega})\subset\overline{\mathbb{D}}^{2}.$ Then, either there exists a holomorphic disc in $\overline{\Omega}$ where all $h_{j}$’s are holomorphic, or $[z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\Omega}]=\mathcal{C}(\overline{\Omega}).$

###### Theorem 1.7. Let $f_{j}\in\mathcal{C}(\Gamma_{\Omega})$ for $j=1,\cdots,N,$ $N\geq 1,$ and assume that each $f_{j}$ extends to a pluriharmonic function on $\Omega.$ Assume that $\phi^{-1}(\Gamma_{\Omega})\subset\mathbb{T}^{2}$. If $f_{j}$ is not holomorphic on any analytic disc present in the boundary $\partial\Omega$ for at least one $j$, then either

$\displaystyle[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}),$

or there exists a distinguished variety $V$ in $\Omega$ such that the pluriharmonic extensions of the $f_{j}$’s are holomorphic on $V.$

As a corollary, we can extend Result 1.2 and Result 1.3 to the symmetrized bidisc. Recall that the symmetrized bidisc $\mathbb{G}_{2}$ is the image of the bidisc under the symmetrization map $\Pi:(z_{1},z_{2})\mapsto(z_{1}+z_{2},z_{1}z_{2}),$ i.e.,

$\displaystyle\mathbb{G}_{2}=\\{(z_{1}+z_{2},z_{1}z_{2}):|z_{1}|<1,|z_{2}|<1\\}.$

Since $\Pi^{-1}(\Pi(\overline{\mathbb{D}}^{2}))=\Pi^{-1}(\overline{\mathbb{G}}_{2})=\overline{\mathbb{D}}^{2},$ by using Result 2.1, we get that $\overline{\mathbb{G}}_{2}$ is polynomially convex.
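As a concrete illustration of how $\Pi$ transports distinguished varieties (a standard example, often called the royal variety; cf. Lemma 2.6 below): the image of the diagonal of the bidisc under $\Pi$ is

$\displaystyle\Pi\left(\\{(z,z):z\in\mathbb{D}\\}\right)=\\{(2z,z^{2}):z\in\mathbb{D}\\}=\\{(s,p)\in\mathbb{G}_{2}:s^{2}=4p\\},$

which is a distinguished variety in $\mathbb{G}_{2}$: the boundary of its closure consists of the points $\Pi(z,z)$ with $|z|=1$, all of which lie in $\Pi(\mathbb{T}^{2})=\Gamma_{\mathbb{G}_{2}}$.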
If $f:\mathbb{G}_{2}\to\mathbb{C}$ is a holomorphic function on $\mathbb{G}_{2},$ then $f\circ\Pi:\mathbb{D}^{2}\to\mathbb{C}$ is a symmetric holomorphic function on $\mathbb{D}^{2}.$ Therefore, if $\mathcal{A}(\overline{\mathbb{G}}_{2})$ is the algebra of functions that are holomorphic on $\mathbb{G}_{2}$ and continuous on $\overline{\mathbb{G}}_{2},$ then the distinguished boundary $\Gamma_{{\mathbb{G}}_{2}}$ of $\mathbb{G}_{2}$ is the image $\Pi(\mathbb{T}^{2})$ of the torus $\mathbb{T}^{2}$ (the distinguished boundary of $\mathbb{D}^{2}$). Since $\mathbb{G}_{2}$ is neither convex (not even biholomorphic to any convex domain [10]) nor smooth (not even a Lipschitz domain [8]), many results in the theory of several complex variables do not apply to $\mathbb{G}_{2}.$ Several authors have studied this domain over the last three decades, and it has been shown to be a domain with a highly rich complex geometry and function theory: see, among many other articles, [31, 20, 25, 17, 12, 10, 3, 2, 1, 6, 29]. There are significant similarities and contrasts between its geometry and function theory and those of the bidisc. Here we observe that Result 1.2 and Result 1.3 continue to hold if the bidisc is replaced by the symmetrized bidisc. More precisely:

###### Corollary 1.8. Let $h_{j}\in PH(\mathbb{G}_{2})\cap\mathcal{C}^{1}(\overline{\mathbb{G}}_{2})$ for $j=1,\cdots,N.$ Then either there exists a holomorphic disc in $\overline{\mathbb{G}}_{2}$ where all $h_{j}$’s are holomorphic, or $[z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\mathbb{G}}_{2}]=\mathcal{C}(\overline{\mathbb{G}}_{2}).$

###### Corollary 1.9. Let $f_{j}\in\mathcal{C}(\Gamma_{\mathbb{G}_{2}})$ for $j=1,\cdots,N,$ $N\geq 1,$ and assume that each $f_{j}$ extends to a pluriharmonic function on $\mathbb{G}_{2}.$ If $f_{j}$ is not holomorphic on any analytic disc present in the boundary $\partial\mathbb{G}_{2}$ for at least one $j$, then either

$\displaystyle[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_{\mathbb{G}_{2}}),$

or there exists a distinguished variety $V$ in $\mathbb{G}_{2}$ such that the pluriharmonic extensions of the $f_{j}$’s are holomorphic on $V.$

###### Remark 1.10. In view of a result by Pal and Shalit [24], we see that the variety that appears in Corollary 1.9 has the form of a zero set of a certain determinant. However, we do not know whether a similar type of determinantal form can also be given for the distinguished varieties that appear in Theorem 1.7.

## 2. Technical Results

In this section, we provide some known results and some preliminary lemmas that will be utilized to prove our results.

###### Result 2.1 ([30]). If $F:\mathbb{C}^{n}\to\mathbb{C}^{n}$ is a proper holomorphic map, and if $K\subset\mathbb{C}^{n}$ is a compact set, then the set $K$ is polynomially convex if and only if the set $F^{-1}(K)$ is polynomially convex, and $\mathcal{P}(K)=\mathcal{C}(K)$ if and only if $\mathcal{P}(F^{-1}(K))=\mathcal{C}(F^{-1}(K)).$

###### Result 2.2 (Remmert Proper Mapping theorem [26, 27]). Let $M,N$ be complex spaces, and let $f:M\to N$ be a proper holomorphic map. If $Z$ is an analytic subvariety in $M$, then $f(Z)$ is also an analytic subvariety in $N.$ Moreover, if $Z$ is irreducible, then $f(Z)$ is also an irreducible subvariety of $N.$

The following result is from the book [9, Page 29].

###### Result 2.3.
(Chirka) Let $\Omega_{1}\subset\mathbb{C}^{p}$ and $\Omega_{2}\subset\mathbb{C}^{m}$ be open subsets, where $p+m=n$, let $\Omega=\Omega_{1}\times\Omega_{2},$ and let $\textit{proj}_{1}:(z,w)\mapsto z$ denote the projection onto the first factor. Let $V$ be an analytic subset in $\Omega$ such that $\textit{proj}_{1}:V\to\Omega_{1}$ is a proper map. Then $\textit{proj}_{1}(V)$ is an analytic subset in $\Omega_{1}.$ Moreover, if $\Omega=\mathbb{C}^{n},$ $\Omega_{1}=\mathbb{C}^{p},$ and $V$ is an algebraic subset in $\mathbb{C}^{n},$ then $\textit{proj}_{1}(V)$ is also an algebraic subset in $\mathbb{C}^{p}.$

The following lemma is well-known to experts. Since we have not found an explicit mention of this lemma in the literature, we decided to include it here for completeness.

###### Lemma 2.4. Let $\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}$ be a proper polynomial map. Let $Z$ be an algebraic variety in $\mathbb{C}^{n};$ then $\Psi(Z)$ is also an algebraic variety in $\mathbb{C}^{n}.$

###### Proof. Consider the algebraic variety $V=\\{(\Psi(z),z):z\in Z\\}$ in $\mathbb{C}^{n}\times\mathbb{C}^{n}$ and $\Omega_{1}=\Omega_{2}=\mathbb{C}^{n}.$ We now show that $\textit{proj}_{1}:V\to\Omega_{1}$ is a proper map. Let $K\subset\mathbb{C}^{n}$ be a compact subset of $\mathbb{C}^{n}.$ Then

$\displaystyle\textit{proj}_{1}^{-1}(K)\cap V=(K\times\mathbb{C}^{n})\cap V=\\{(\Psi(\eta),\eta):\eta\in Z,\ \Psi(\eta)\in K\\},$

which is compact, since $\Psi$ is a proper map and $Z$ is closed. Therefore, $\textit{proj}_{1}:V\to\Omega_{1}$ is a proper map. Hence, by Result 2.3, we conclude that $\textit{proj}_{1}(V)=\Psi(Z)$ is an algebraic variety. ∎

###### Remark 2.5. The case $\Psi=\Pi$ is available in [24, Lemma 3.1].

Let $\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}$ be a proper holomorphic polynomial map. Let $\Omega:=\Psi(\mathbb{D}^{n})$ be a domain such that $\Psi^{-1}(\Psi(\mathbb{D}^{n}))\subset\mathbb{D}^{n},$ $\Psi^{-1}(\Psi(\partial\mathbb{D}^{n}))\subset\partial\mathbb{D}^{n},$ and $\Psi^{-1}(\Psi(\mathbb{T}^{n}))\subset\mathbb{T}^{n}.$ The following lemma illustrates that every distinguished variety in $\Omega$ can be derived from a distinguished variety in $\mathbb{D}^{n}$.

###### Lemma 2.6. Let $Z\subset\Omega$. Then $Z$ is a distinguished variety in $\Omega$ if and only if there is a distinguished variety $V$ in $\mathbb{D}^{n}$ such that $\Psi(V)=Z.$

###### Proof. Given that $\Psi$ is a proper map, it implies that $\Psi$ is onto, and therefore, $\Psi(\Psi^{-1}(Z))=Z$. Additionally, it can be easily demonstrated that $\Psi^{-1}(Z)$ is an algebraic variety. Let us define $V:=\Psi^{-1}(Z)$. Now, we need to prove the following: $V\cap\partial\mathbb{D}^{n}\subset V\cap\mathbb{T}^{n}$. Consider an element $\alpha\in V\cap\partial\mathbb{D}^{n}.$ This implies that $\alpha\in\Psi^{-1}(Z)\cap\partial\mathbb{D}^{n}.$ Hence, we have $\Psi(\alpha)\in Z\cap\Psi(\partial\mathbb{D}^{n})$. Since $Z$ is a distinguished variety, we can conclude that $\Psi(\alpha)\in Z\cap\Psi(\mathbb{T}^{n})$. Consequently, we can deduce that $\alpha$ lies in $\Psi^{-1}(Z\cap\Psi(\mathbb{T}^{n}))=\Psi^{-1}(Z)\cap\Psi^{-1}(\Psi(\mathbb{T}^{n}))$. By our assumption, together with this, we get that $V\cap\partial\mathbb{D}^{n}\subset V\cap\mathbb{T}^{n}$. Conversely, let us assume that $V$ is a subset of $\mathbb{D}^{n}$ and is a distinguished variety. By using Lemma 2.4, we can conclude that $\Psi(V)$ is an algebraic variety in $\Omega$. Now, we claim that $Z=\Psi(V)$ is a distinguished variety in $\Omega$.
Suppose $\alpha\in Z\cap\Psi(\partial\mathbb{D}^{n})=\Psi(V)\cap\Psi(\partial\mathbb{D}^{n}).$ We need to show that $\alpha$ also lies in $\Psi(\mathbb{T}^{n})$. Since $\alpha\in Z\cap\Psi(\partial\mathbb{D}^{n})$, there exist $\eta_{1}\in V$ and $\eta_{2}\in\partial\mathbb{D}^{n}$ such that $\Psi(\eta_{1})=\Psi(\eta_{2})=\alpha$. Consequently, $\eta_{1}$ belongs to $\Psi^{-1}(\Psi(\partial\mathbb{D}^{n}))$, which is a subset of $\partial\mathbb{D}^{n}$. Thus, we have $\eta_{1}\in V\cap\partial\mathbb{D}^{n}$, and as a result, $\Psi(\eta_{1})\in\Psi(V\cap\partial\mathbb{D}^{n})$. Since $V$ is a distinguished variety, this implies that $\alpha$ lies in $\Psi(V\cap\mathbb{T}^{n})\subset\Psi(\mathbb{T}^{n})$. ∎

###### Remark 2.7. The case $\Omega=\mathbb{G}_{2}$ is available in [24, Lemma 3.1].

###### Lemma 2.8. Let $g:G\subset\mathbb{C}^{N}\to\mathbb{C}^{N}$ be a proper holomorphic mapping and $q:g(G)\to\mathbb{C}$ be a continuous function. If $q\circ g:G\to\mathbb{C}$ is holomorphic, then $q$ is holomorphic.

###### Proof. Let us define $\Omega:=g(G).$ Since $g$ is proper holomorphic, $\Omega$ is open. First, we assume $z\in G$ and $\det d{g}(z)\neq 0,$ where $\det dg(z)$ is the determinant of the complex Jacobian matrix of $g$ at $z.$ Then there exist a neighborhood $V$ of $z$ and a neighborhood $W$ of $g(z)$ such that $g$ admits a holomorphic local inverse $g^{-1}:W\to V$. Therefore, $q\circ g\circ g^{-1}=q$ is holomorphic at $g(z).$ Next, we define $X:=\\{z\in G:\det d{g}(z)=0\\}.$ Hence, $q$ is holomorphic on $\Omega\setminus g(X).$ Clearly, $X$ is an analytic variety with $\dim_{\mathbb{C}}X\leq(N-1).$ Since $g$ is a proper holomorphic mapping, by Result 2.2, $g(X)$ is also an analytic variety in $\Omega.$ Since $q$ is continuous on $\Omega$ and holomorphic on $\Omega\setminus g(X),$ by Riemann’s removable singularity theorem, we can say that $q$ is holomorphic on $\Omega.$ ∎

Let $\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}$ be a proper holomorphic map. Let $\Omega:=\Psi(\mathbb{D}^{n})$ be a domain such that $\Psi^{-1}(\Psi(\mathbb{D}^{n}))\subset\mathbb{D}^{n},$ $\Psi^{-1}(\Psi(\partial\mathbb{D}^{n}))\subset\partial\mathbb{D}^{n},$ and $\Psi^{-1}(\Psi(\mathbb{T}^{n}))\subset\mathbb{T}^{n}.$ We denote the distinguished boundary of $\Omega$ for the algebra $\mathcal{A}(\Omega)$ by $\Gamma_{\Omega}.$ Clearly, $\Gamma_{\Omega}$ is equal to $\Psi(\mathbb{T}^{n}).$ The following theorem might be of independent interest. We will use it in our proofs.

###### Theorem 2.9. Let $N\geq 1$ and $f_{1},\cdots,f_{N}\in\mathcal{C}(\Gamma_{\Omega}).$ Then $[z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ if and only if ${\sf Gr}_{f}(\Gamma_{\Omega})$ is polynomially convex, where $f=(f_{1},\cdots,f_{N}).$

###### Proof. We denote $X:={\sf Gr}_{f}(\Gamma_{\Omega}).$ Note that $[z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ implies $\mathscr{P}(X)=\mathcal{C}(X),$ and hence $\widehat{X}=X.$ Conversely, suppose that $\widehat{X}=X.$ We consider the proper holomorphic map $\Phi:\mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N}\to\mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N}$ defined by

$\displaystyle\Phi(z,w)=(\Psi(z),w).$

Clearly,

$\displaystyle\Phi^{-1}(X)={\sf Gr}_{f\circ\Psi}(\mathbb{T}^{n})=:Y.$

Since $X$ is polynomially convex, $Y$ is also polynomially convex (by Result 2.1).
Let $U$ be a neighborhood of $\mathbb{T}^{n}$ such that $z_{1}\not=0$ on $U.$ Define $g(z_{1},z_{2},\cdots,z_{n})=\frac{1}{z_{1}}.$ Then $g$ is holomorphic on $U,$ and hence also on $U\times\mathbb{C}^{N}.$ Since $Y\subset U\times\mathbb{C}^{N},$ by the Oka-Weil approximation theorem, there exists a sequence of polynomials $P_{j}$ on $\mathbb{C}^{n}_{z}\times\mathbb{C}_{w}^{N}$ such that $P_{j}(z,w)\to g$ uniformly on $Y.$ This implies $P_{j}(z,(f\circ\Psi)(z))\to g=\frac{1}{z_{1}}=\overline{z}_{1}$ uniformly on $\mathbb{T}^{n}.$ Hence $\overline{z}_{1}\in[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}].$ By a similar method, we can show that $\overline{z}_{j}\in[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}]$ for all $j\in\\{1,\cdots,n\\}.$ Hence, $[z_{1},\cdots,z_{n},\overline{z}_{1},\cdots,\overline{z}_{n};\mathbb{T}^{n}]\subset[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}].$ Therefore,

$\displaystyle[z_{1},\cdots,z_{n},\overline{z}_{1},\cdots,\overline{z}_{n};\mathbb{T}^{n}]=\mathcal{C}(\mathbb{T}^{n})=[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}].$ (2.1)

Note that $\mathscr{P}(X)=\mathcal{C}(X)$ if and only if $\mathscr{P}(\Phi^{-1}(X))=\mathcal{C}(\Phi^{-1}(X))$ (see Result 2.1), i.e., if and only if $\mathscr{P}(Y)=\mathcal{C}(Y).$ Therefore, using (2.1), we get that

$\displaystyle[z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).$ ∎

###### Corollary 2.10. Let $N\geq 1,$ and $f_{1},\cdots,f_{N}\in\mathcal{C}(\Gamma_{\mathbb{G}_{n}}).$ Then $[z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\mathbb{G}_{n}}]=\mathcal{C}(\Gamma_{\mathbb{G}_{n}})$ if and only if ${\sf Gr}_{f}(\Gamma_{\mathbb{G}_{n}})$ is polynomially convex, where $f=(f_{1},\cdots,f_{N}).$

In [18, 19], Jimbo explored the structure of polynomial hulls of graphs of antiholomorphic polynomials on the torus. For the sake of completeness, we include Jimbo’s result from [19] here, since we will use it multiple times in this paper.
Let $\mathbb{T}^{2}$ be the torus in $\mathbb{C}^{2}$ and $P$ be an arbitrary polynomial in $\mathbb{C}^{2}.$ In [19], Jimbo gave a description of $\widehat{{\sf Gr}_{\overline{P}}(\mathbb{T}^{2})}.$ Let the polynomial $P(z_{1},z_{2})$ be of degree $m$ in $z_{1}$ and of degree $n$ in $z_{2}.$ We write

$\displaystyle P(z_{1},z_{2})=\sum_{\begin{subarray}{c}0\leq i\leq m\\\ 0\leq j\leq n\end{subarray}}a_{ij}z_{1}^{i}z_{2}^{j}.$

Therefore, on $\mathbb{T}^{2},$ we have

$\displaystyle\overline{P(z_{1},z_{2})}=\frac{1}{z^{m}_{1}z^{n}_{2}}\sum_{\begin{subarray}{c}0\leq i\leq m\\\ 0\leq j\leq n\end{subarray}}\overline{a}_{ij}z_{1}^{m-i}z_{2}^{n-j}=\frac{K(z_{1},z_{2})}{z^{m}_{1}z^{n}_{2}}=:h(z_{1},z_{2}),\text{ where }K(z_{1},z_{2})=\sum_{\begin{subarray}{c}0\leq i\leq m\\\ 0\leq j\leq n\end{subarray}}\overline{a}_{ij}z_{1}^{m-i}z_{2}^{n-j}.$

We define $L:=\\{z_{1}=0,|z_{2}|\leq 1\\}\cup\\{z_{2}=0,|z_{1}|\leq 1\\}$ and

$\displaystyle X=\left\\{(z_{1},z_{2})\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{P(z_{1},z_{2})}=h(z_{1},z_{2})\right\\}.$ (2.2)

We set

$\displaystyle\triangle({z}):=\begin{vmatrix}\frac{\partial P(z)}{\partial{z_{1}}}&\frac{\partial P(z)}{\partial z_{2}}\\ \frac{\partial h(z)}{\partial{z_{1}}}&\frac{\partial h(z)}{\partial{z_{2}}}\end{vmatrix}.$

We can write

$\displaystyle\triangle(z)=\frac{1}{z^{m+1}_{1}z^{n+1}_{2}}\prod^{l}_{j=1}q_{j}(z),$

where each $q_{j}$ is an irreducible polynomial in $\mathbb{C}^{2}.$ We define the corresponding irreducible algebraic variety $Z_{j}:=Z(q_{j})=\\{z\in\mathbb{C}^{2}:q_{j}(z)=0\\}.$ We assume $\triangle({z})\not\equiv 0$ on $X,$ so that each $q_{j}$ is a non-zero holomorphic polynomial in $\mathbb{C}^{2}.$ We denote $Q_{j}=Z_{j}\cap\mathbb{T}^{2}.$

###### Result 2.11 (Jimbo). We let $J=\\{j\in\\{1,\cdots,l\\}:\emptyset\neq Q_{j}\neq\widehat{Q_{j}},\ \widehat{Q_{j}}\setminus L\subset X\\}.$

1. (i) If $J=\emptyset,$ then $\widehat{{\sf Gr}_{\overline{P}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P}}(\mathbb{T}^{2}),$ and $[z_{1},z_{2},\overline{P};\mathbb{T}^{2}]=\mathcal{C}(\mathbb{T}^{2});$
2. (ii) If $J\neq\emptyset,$ then

$\displaystyle\widehat{{\sf Gr}_{\overline{P}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P}}(\mathbb{T}^{2})\cup\bigg{(}\cup_{j\in J}{\sf Gr}_{\overline{P}}(\widehat{Q_{j}})\bigg{)}.$

## 3. Proof of Theorems 1.6 and 1.7

Recall that the map $\phi:\mathbb{C}^{2}\to\mathbb{C}^{2}$ is defined as $\phi(z)=(p_{1}(z),p_{2}(z))$ for holomorphic polynomials $p_{1},p_{2}$ such that $\phi$ is proper. We consider the proper holomorphic map $\widetilde{\Psi}:\mathbb{C}^{2+N}\to\mathbb{C}^{2+N},$ defined as follows:

$\displaystyle\widetilde{\Psi}(z_{1},z_{2},w_{1},\cdots,w_{N})=\left(\phi(z_{1},z_{2}),w_{1},\cdots,w_{N}\right),$ (3.1)

where $(z_{1},z_{2})\in\mathbb{C}^{2}$ and $(w_{1},\cdots,w_{N})\in\mathbb{C}^{N}.$ Recall that $\Omega=\phi(\mathbb{D}^{2})$ and $\Gamma_{\Omega}=\phi(\mathbb{T}^{2}).$

###### Proof of Theorem 1.6.
We claim that $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}),$ where $h=(h_{1},\cdots,h_{N}):\overline{\Omega}\to\mathbb{C}^{N}.$ Let

$\displaystyle(\alpha,\beta)\in\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))$
$\displaystyle\implies\widetilde{\Psi}(\alpha,\beta)\in{\sf Gr}_{h}(\overline{\Omega})$
$\displaystyle\implies(\phi(\alpha),\beta)\in{\sf Gr}_{h}(\overline{\Omega})$
$\displaystyle\implies\beta=h(\phi(\alpha))\text{ and }\phi(\alpha)\in\overline{\Omega}.$

Now

$\displaystyle\phi(\alpha)\in\overline{\Omega}\implies\alpha\in\phi^{-1}(\phi(\alpha))\subset\phi^{-1}(\overline{\Omega})\subset\overline{\mathbb{D}}^{2}.$

Therefore $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))\subset{\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}).$ Conversely, let

$\displaystyle(p,q)\in{\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})$
$\displaystyle\implies q=(h\circ\phi)(p)\text{ and }p\in\overline{\mathbb{D}}^{2}$
$\displaystyle\implies q=h(\phi(p))\text{ and }\phi(p)\in\overline{\Omega}$
$\displaystyle\implies(\phi(p),q)\in{\sf Gr}_{h}(\overline{\Omega})$
$\displaystyle\implies\widetilde{\Psi}(p,q)\in{\sf Gr}_{h}(\overline{\Omega})$
$\displaystyle\implies(p,q)\in\widetilde{\Psi}^{-1}\left({\sf Gr}_{h}(\overline{\Omega})\right).$

Hence ${\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\subset\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega})),$ and therefore $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}).$ Since $\widetilde{\Psi}$ is a proper holomorphic mapping and $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}),$ by Result 2.1, we can say that $\mathscr{P}\left({\sf Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left({\sf Gr}_{h}(\overline{\Omega})\right)$ if and only if $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)=\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).$ We note that $h\circ\phi$ is pluriharmonic on $\mathbb{D}^{2}$ and continuous on $\overline{\mathbb{D}}^{2}.$ There are two cases.

Case I: $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)=\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).$ In this case we have $\mathscr{P}\left({\sf Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left({\sf Gr}_{h}(\overline{\Omega})\right).$

Case II: $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)\neq\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).$ In this case, by Result 1.2, there exists an analytic disc $g:\mathbb{D}\hookrightarrow\overline{\mathbb{D}}^{2}$ such that $(h_{j}\circ\phi)\circ g$ is holomorphic on $\mathbb{D}$ for all $j=1,\cdots,N.$ If we take $\gamma:=\phi\circ g,$ then clearly $\gamma:\mathbb{D}\hookrightarrow\overline{\Omega}$ is an analytic disc in $\overline{\Omega}$ such that $h_{j}$ is holomorphic on $\gamma(\mathbb{D})$ (by Lemma 2.8) for all $j=1,\cdots,N.$ This proves the theorem. ∎

###### Proof of Theorem 1.7. Let $h_{j}$ denote the pluriharmonic extension of $f_{j}$ to $\Omega$ and write $h=(h_{1},\cdots,h_{N}):\overline{\Omega}\to\mathbb{C}^{N}.$ We have that $\widetilde{\Psi}$ is a proper holomorphic mapping and that $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\Gamma_{\Omega}))={\sf Gr}_{h\circ\phi}(\mathbb{T}^{2}).$ Therefore, by Result 2.1, ${\sf Gr}_{h}(\Gamma_{\Omega})$ is polynomially convex if and only if ${\sf Gr}_{h\circ\phi}(\mathbb{T}^{2})$ is polynomially convex.
We note that $h\circ\phi$ is pluriharmonic on $\mathbb{D}^{2}$ and continuous on $\overline{\mathbb{D}}^{2}.$ There are two cases.

Case I: ${\sf Gr}_{h}(\Gamma_{\Omega})$ is polynomially convex. In view of Theorem 2.9, we have $[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).$

Case II: ${\sf Gr}_{h}(\Gamma_{\Omega})$ is not polynomially convex. Consequently, ${\sf Gr}_{h\circ\phi}(\mathbb{T}^{2})$ is not polynomially convex. Therefore, by Result 1.3, there exists a distinguished variety $Z\subset\mathbb{D}^{2}$ on which $(h_{j}\circ\phi)$ is holomorphic for all $j=1,\cdots,N.$ Since $\phi$ is a proper holomorphic mapping, by Lemma 2.4, $\phi(Z)$ is also an algebraic variety. Since $\phi$ is proper holomorphic and $(h_{j}\circ\phi)$ is holomorphic on $Z,$ the function $h_{j}$ is also holomorphic on $\phi(Z)$ (by Lemma 2.8). Since $\phi$ sends distinguished varieties of $\mathbb{D}^{2}$ to distinguished varieties of $\Omega$ (Lemma 2.6), we have $\phi(Z)\cap\partial\Omega\subset\Gamma_{\Omega}.$ ∎

## 4. Description of Polynomial Hull

In this section, we provide a description of the polynomial convex hull of the graph of an anti-holomorphic polynomial over the distinguished boundary of the domain $\Omega,$ where $\Omega$ is the image of the bidisc under a certain proper polynomial map from $\mathbb{C}^{2}$ to $\mathbb{C}^{2}.$ Let $F=(f_{1},f_{2},\cdots,f_{n}):\mathbb{C}^{n}\to\mathbb{C}^{n}$ be a proper map. Let

$\displaystyle J_{F}(z)=\begin{vmatrix}\frac{\partial f_{1}}{\partial{z_{1}}}(z)&\frac{\partial f_{1}}{\partial{z_{2}}}(z)&\cdots&\frac{\partial f_{1}}{\partial{z_{n}}}(z)\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial f_{n}}{\partial{z_{1}}}(z)&\frac{\partial f_{n}}{\partial{z_{2}}}(z)&\cdots&\frac{\partial f_{n}}{\partial{z_{n}}}(z)\end{vmatrix}.$

The critical locus of $F$ is the complex analytic variety $Z(J_{F})=\\{z\in\mathbb{C}^{n}:J_{F}(z)=0\\}\subset\mathbb{C}^{n}.$ The branch locus $B(F)$ of $F$ is the image of the critical locus. Since $F$ is proper,

$\displaystyle F:\mathbb{C}^{n}\setminus F^{-1}(B(F))\to\mathbb{C}^{n}\setminus B(F)$

is a covering map of finite degree $d$; $d$ is said to be the topological degree of $F.$

###### Definition 4.1. Two proper maps $\phi,\tilde{\phi}:\mathbb{C}^{2}\to\mathbb{C}^{2}$ are said to be equivalent if there exist $f,g\in\text{Aut}(\mathbb{C}^{2})$ such that $\phi=f\circ\tilde{\phi}\circ g.$

Consider two holomorphic polynomials, $p_{1}$ and $p_{2}$, defined in $\mathbb{C}^{2}$. Let $\phi(z)=(p_{1}(z),p_{2}(z))$ represent a proper holomorphic mapping from $\mathbb{C}^{2}$ to $\mathbb{C}^{2}$, equivalent to $\tilde{\phi}(z_{1},z_{2})=(z_{1}^{m},z_{2}^{n})$ for some natural numbers $m$ and $n$. There is a characterization due to Lamy [21] (see also Bisi and Polizzi [7]) for $m=1$ and $n=2$ as follows: a proper polynomial map $f:\mathbb{C}^{2}\to\mathbb{C}^{2}$ with topological degree 2 is equivalent to $g(z_{1},z_{2})=(z_{1},z^{2}_{2}).$ Let $P(z_{1},z_{2})$ be any polynomial in $\mathbb{C}^{2}$. We aim to calculate $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}$. It is evident that $\widetilde{\Psi}^{-1}({\sf Gr}_{\overline{P}}(\Gamma_{\Omega}))={\sf Gr}_{\overline{P}\circ\phi}(\mathbb{T}^{2})={\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})$ ($\widetilde{\Psi}$ is given by (3.1)). Consequently, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\right)$. In this scenario, the following result holds.
###### Lemma 4.2. $\widehat{\widetilde{\Psi}(Y)}=\widetilde{\Psi}\left(\widehat{Y}\right),$ where $Y={\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2}).$

###### Proof. Since $\widetilde{\Psi}$ is a proper holomorphic map, by using Result 2.1, we have that $\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right)$ is polynomially convex. Therefore

$\displaystyle\widehat{Y}\subset\widehat{\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right)}\subset\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right).$

This implies $\widetilde{\Psi}(\widehat{Y})\subset\widehat{\widetilde{\Psi}(Y)}.$ Next, we show that $\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right)\subset\widehat{Y}.$ To prove this, let $(\alpha_{1},\alpha_{2},\beta)\in\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right).$ Then there exists $(\xi_{1},\xi_{2},\eta)\in\widehat{Y}$ such that $\widetilde{\Psi}(\alpha_{1},\alpha_{2},\beta)=\widetilde{\Psi}(\xi_{1},\xi_{2},\eta).$ This implies $\phi(\alpha_{1},\alpha_{2})=\phi(\xi_{1},\xi_{2})$ and $\beta=\eta.$ Since $\phi$ is a proper polynomial map that is equivalent to $\tilde{\phi}(z_{1},z_{2})=(z^{m}_{1},z^{n}_{2}),$ there exist $f,g\in\text{Aut}(\mathbb{C}^{2})$ such that $\phi=f\circ\tilde{\phi}\circ g.$ Then

$\displaystyle\phi(\alpha_{1},\alpha_{2})=\phi(\xi_{1},\xi_{2})$
$\displaystyle\implies(f\circ\tilde{\phi}\circ g)(\alpha_{1},\alpha_{2})=(f\circ\tilde{\phi}\circ g)(\xi_{1},\xi_{2})$
$\displaystyle\implies(\tilde{\phi}\circ g)(\alpha_{1},\alpha_{2})=(\tilde{\phi}\circ g)(\xi_{1},\xi_{2})$
$\displaystyle\implies g^{m}_{1}(\alpha_{1},\alpha_{2})=g^{m}_{1}(\xi_{1},\xi_{2})\text{ and }g^{n}_{2}(\alpha_{1},\alpha_{2})=g^{n}_{2}(\xi_{1},\xi_{2}),\text{ where }g=(g_{1},g_{2}),$
$\displaystyle\implies(\alpha_{1},\alpha_{2})=g^{-1}\left(\lambda^{k}_{m}g_{1}(\xi_{1},\xi_{2}),\lambda^{r}_{n}g_{2}(\xi_{1},\xi_{2})\right)=:(a_{k},b_{r}),$

where $\lambda_{l}=\cos\frac{2\pi}{l}+i\sin\frac{2\pi}{l},$ $k\in\\{0,\cdots,m-1\\},$ and $r\in\\{0,\cdots,n-1\\}.$ It remains to show that $(a_{k},b_{r},\eta)\in\widehat{Y}.$ If possible, assume that $(a_{k},b_{r},\eta)\notin\widehat{Y}$ for some $k\in\\{0,\cdots,m-1\\},r\in\\{0,\cdots,n-1\\}.$ Then there exists a polynomial $\chi$ in $\mathbb{C}^{2}_{z}\times\mathbb{C}_{w}$ such that

$\displaystyle|\chi(a_{k},b_{r},\eta)|>\sup_{Y}|\chi(z,w)|.$ (4.1)

Let us define $F(z_{1},z_{2}):=(\lambda^{k}_{m}z_{1},\lambda^{r}_{n}z_{2}),$ and $\tilde{F}(z_{1},z_{2},w):=((g^{-1}\circ F\circ g)(z),w).$ Since $\phi^{-1}(\phi(\mathbb{T}^{2}))\subset\mathbb{T}^{2}$ (hence $(g^{-1}\circ F\circ g)(z)\in\mathbb{T}^{2}$ if $z\in\mathbb{T}^{2}$), using (4.1), we get that

$\displaystyle|(\chi\circ\tilde{F})(\xi,\eta)|>\sup_{Y}|(\chi\circ\tilde{F})(z,w)|.$ (4.2)

Since $\tilde{F}\in\text{Aut}(\mathbb{C}^{3}),$ (4.2) says that $(\xi,\eta)\notin\widehat{Y},$ and this is a contradiction. Hence $(a_{k},b_{r},\eta)\in\widehat{Y}.$ Therefore, $\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right)=\widehat{Y}.$ Since $\widetilde{\Psi}$ is a proper holomorphic map, by using Result 2.1, we can say that $\widetilde{\Psi}(\widehat{Y})$ is polynomially convex. Therefore, $\widehat{\widetilde{\Psi}(Y)}\subset\widetilde{\Psi}(\widehat{Y}).$ This proves the lemma.
∎

By using Lemma 4.2, we can say that

$\displaystyle\widehat{{\sf Gr}_{\overline{P}}({\Gamma_{\Omega}})}=\widetilde{\Psi}\left(\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}\right).$

Therefore, to give a description of $\widehat{{\sf Gr}_{\overline{P}}({\Gamma_{\Omega}})},$ it is enough to compute $\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}.$

### 4.1. Description of Hull on Symmetrized Bidisc

Let $P(z_{1},z_{2})$ be any polynomial in $\mathbb{C}^{2}.$ We now calculate $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{{\mathbb{G}}_{2}})}.$ If we take $p_{1}(z)=z_{1}+z_{2}$ and $p_{2}(z)=z_{1}z_{2},$ then $\phi=\Pi$ and $\widetilde{\Psi}(z,w)=(\Pi(z),w)$ is a proper map from $\mathbb{C}^{3}$ to $\mathbb{C}^{3}.$ It is easy to show that $\Pi$ is a proper polynomial map of topological degree $2,$ and hence is equivalent to $(z_{1},z^{2}_{2}).$ Clearly, $\widetilde{\Psi}^{-1}({\sf Gr}_{\overline{P}}({\Gamma}_{\mathbb{G}_{2}}))={\sf Gr}_{\overline{P}\circ\Pi}(\mathbb{T}^{2})={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2}).$ Therefore, ${\sf Gr}_{\overline{P}}({\Gamma}_{\mathbb{G}_{2}})=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right).$ By Lemma 4.2, we get the following.

###### Lemma 4.3. $\widehat{\widetilde{\Psi}\left(Y\right)}=\widetilde{\Psi}\left(\widehat{Y}\right),$ where $Y={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2}).$

By using Lemma 4.3, we can say that

$\displaystyle\widehat{{\sf Gr}_{\overline{P}}({\Gamma_{\mathbb{G}_{2}}})}=\widetilde{\Psi}\left(\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}\right).$

Therefore, to give a description of $\widehat{{\sf Gr}_{\overline{P}}({\Gamma_{\mathbb{G}_{2}}})},$ it is enough to compute $\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}.$

## 5. Examples

###### Example 5.1. Let $P(z_{1},z_{2})=z_{1}-z_{2}.$ Then $[z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]\neq\mathcal{C}(\Gamma_{\mathbb{G}_{2}}).$

Explanation. In view of Corollary 2.10, to demonstrate that $[z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]\neq\mathcal{C}(\Gamma_{\mathbb{G}_{2}}),$ it suffices to establish that the graph of $\overline{P}$ over $\Gamma_{\mathbb{G}_{2}}$ is not polynomially convex. To achieve this, it is sufficient to show that the graph of $\overline{P\circ\Pi}$ over $\mathbb{T}^{2}$ lacks polynomial convexity.
Following the notation in Result 2.11, we define $h(z)=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{1}{z_{1}z_{2}}.$ Then

$\displaystyle\triangle({z})=\begin{vmatrix}\frac{\partial(P\circ\Pi)}{\partial{z_{1}}}&\frac{\partial(P\circ\Pi)}{\partial z_{2}}\\ \frac{\partial h}{\partial{z_{1}}}&\frac{\partial h}{\partial{z_{2}}}\end{vmatrix}=\begin{vmatrix}1-z_{2}&1-z_{1}\\ \frac{-1}{z^{2}_{1}}+\frac{1}{z^{2}_{1}z_{2}}&\frac{-1}{z^{2}_{2}}+\frac{1}{z^{2}_{2}z_{1}}\end{vmatrix}=\frac{1}{z^{2}_{1}z^{2}_{2}}(z_{1}-z_{2})(z_{1}-1)(z_{2}-1).$

We define $q_{1}:=z_{1}-1,$ $q_{2}:=z_{2}-1,$ $q_{3}:=z_{1}-z_{2},$ and $Z_{j}=\\{z\in\mathbb{C}^{2}:q_{j}(z)=0\\},$ $j=1,2,3.$ Therefore,

$\displaystyle\Sigma=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\\}=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\\}\cap[\cup^{3}_{j=1}Z_{j}],$

and

$\displaystyle X=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{(P\circ\Pi)(z)}=h(z)\right\\}=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{z_{1}+z_{2}-z_{1}z_{2}}=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{1}{z_{1}z_{2}}\right\\}.$

Here $Q_{j}=Z_{j}\cap\mathbb{T}^{2}.$ Clearly,

$\displaystyle\widehat{Q_{1}}=\\{z\in\mathbb{C}^{2}:z_{1}=1,|z_{2}|\leq 1\\}\neq Q_{1};$
$\displaystyle\widehat{Q_{2}}=\\{z\in\mathbb{C}^{2}:z_{2}=1,|z_{1}|\leq 1\\}\neq Q_{2};$
$\displaystyle\widehat{Q_{3}}=\\{z\in\mathbb{C}^{2}:z_{1}=z_{2},|z_{1}|\leq 1\\}\neq Q_{3}.$

It is evident that $\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\subset X$ holds true only for $j=1,2.$ On the other hand, we note that $(\frac{1}{2},\frac{1}{2})\in\widehat{Q_{3}}\setminus(\mathbb{T}^{2}\cup L),$ yet $(\frac{1}{2},\frac{1}{2})\notin X.$ Therefore, by Result 2.11, we deduce that

$\displaystyle\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\cup{\sf Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{1}})\cup{\sf Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{2}}).$

Hence, since $\widehat{Q_{1}}$ and $\widehat{Q_{2}}$ have the same image under $\widetilde{\Psi}$,

$\displaystyle\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right)\cup\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{1}})\right)\cup\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{2}})\right)={\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup\\{(1+z,z,w):w=\overline{P(1+z,z)},z\in\overline{\mathbb{D}}\\}={\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup\\{(1+z,z,1):z\in\overline{\mathbb{D}}\\}.$

###### Example 5.2. Let $P(z_{1},z_{2})=z_{1}-2z_{2}.$ Then $[z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_{\mathbb{G}_{2}}).$

Explanation. In light of Corollary 2.10, in order to establish that $[z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_{\mathbb{G}_{2}}),$ it is sufficient to demonstrate the polynomial convexity of the graph of $\overline{P}$ over $\Gamma_{\mathbb{G}_{2}}$. To accomplish this, it is enough to prove that the graph of $\overline{P\circ\Pi}$ over $\mathbb{T}^{2}$ is polynomially convex.
Following the notation in Result 2.11, we have $h(z)=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{2}{z_{1}z_{2}}.$ $\displaystyle\triangle({z})=$ $\displaystyle\begin{vmatrix}\frac{\partial(P\circ\Pi)}{\partial{z_{1}}}&\frac{\partial(P\circ\Pi)}{\partial z_{2}}\\\\[6.45831pt] \frac{\partial h}{\partial{{z_{1}}}}&\frac{\partial h}{\partial{z_{2}}}\\\\[6.45831pt] \end{vmatrix}=\begin{vmatrix}1-2z_{2}&1-2z_{1}\\\\[6.45831pt] \frac{-1}{z^{2}_{1}}+\frac{2}{z^{2}_{1}z_{2}}&\frac{-1}{z^{2}_{2}}+\frac{2}{z^{2}_{2}z_{1}}\\\\[6.45831pt] \end{vmatrix}$ $\displaystyle=$ $\displaystyle\frac{1}{z^{2}_{1}z^{2}_{2}}(z_{1}+z_{2}-2-2z_{1}z_{2})(z_{2}-z_{1}).$ We define $q_{1}:=z_{1}+z_{2}-2-2z_{1}z_{2},~{}q_{2}:=z_{2}-z_{1},$ and $Z_{j}=\\{z\in\mathbb{C}^{2}:q_{j}(z)=0\\},j=1,2.$ Therefore, $\displaystyle\Sigma$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\\}\cap[\cup^{2}_{j=1}Z_{j}],$ and $\displaystyle X$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{(P\circ\Pi)(z)}=h(z)\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{z_{1}+z_{2}-2z_{1}z_{2}}=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{2}{z_{1}z_{2}}\right\\}.$ Here $Q_{j}=Z_{j}\cap\mathbb{T}^{2}.$ We now claim that $\displaystyle\widehat{Q_{1}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|=1,|z_{2}|=1\\}=Q_{1}.$ Clearly, $\widehat{Q_{1}}\subset\\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}.$ Let $(\alpha,\beta)\in\\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\setminus Q_{1}.$ First, we assume that $|\beta|<1.$ Since $\alpha+\beta-2\alpha\beta-2=0,$ we have $\displaystyle|2-\alpha|=|\beta||1-2\alpha|<|1-2\alpha|.$ (5.1) Let $\alpha=u+iv.$ Then from (5.1), we get that $\displaystyle(2-u)^{2}+v^{2}<(1-2u)^{2}+4v^{2}$ $\displaystyle\implies 4+u^{2}+v^{2}-4u<1+4(u^{2}+v^{2})-4u$ $\displaystyle\implies u^{2}+v^{2}>1,\text{ i.e., }|\alpha|>1.$ Hence, we conclude that $(\alpha,\beta)\notin\widehat{Q_{1}}.$ In the case where $|\alpha|<1,$ we can similarly demonstrate that $|\beta|>1,$ leading to the same conclusion, $(\alpha,\beta)\notin\widehat{Q_{1}}.$ As a result, we establish that $Q_{1}$ is polynomially convex. Furthermore, consider $\widehat{Q_{2}}=\\{z\in\mathbb{C}^{2}:z_{1}=z_{2},|z_{1}|\leq 1\\}\neq Q_{2}.$ Notably, $(\frac{1}{2},\frac{1}{2})\in\widehat{Q_{2}}\setminus(\mathbb{T}^{2}\cup L),$ while $(\frac{1}{2},\frac{1}{2})\notin X.$ Hence, by Result 2.11, we can deduce that: $\displaystyle\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2}).$ This implies: $\displaystyle\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}=\Psi\left({\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right)={\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}}).$ ###### Example 5.3.
Let $p_{1}(z_{1},z_{2})=2z_{1}+z^{2}_{2},~{}p_{2}(z_{1},z_{2})=z_{1}-z^{2}_{2},~{}P(z_{1},z_{2})=z_{1}-z_{2}$ and $\phi(z_{1},z_{2})=(p_{1}(z_{1},z_{2}),p_{2}(z_{1},z_{2})).$ Therefore $\Omega=\phi(\mathbb{D}^{2}).$ Then $[z_{1},z_{2},\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).$ Explanation. According to Theorem 2.9, it follows that $[z_{1},z_{2},\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ if, and only if, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ exhibits polynomial convexity. Furthermore, the polynomial convexity of ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ is equivalent to the polynomial convexity of ${\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})$. Here $\overline{P\circ\phi}=\overline{z_{1}+2z^{2}_{2}}=\frac{1}{z_{1}}+\frac{2}{z^{2}_{2}}=:h(z)$ on $\mathbb{T}^{2}.$ $\displaystyle\triangle({z})$ $\displaystyle=\begin{vmatrix}\frac{\partial(P\circ\phi)}{\partial{z_{1}}}&\frac{\partial(P\circ\phi)}{\partial z_{2}}\\\\[6.45831pt] \frac{\partial h}{\partial{{z_{1}}}}&\frac{\partial h}{\partial{z_{2}}}\\\\[6.45831pt] \end{vmatrix}=\begin{vmatrix}1&4z_{2}\\\\[6.45831pt] \frac{-1}{z^{2}_{1}}&\frac{-4}{z^{3}_{2}}\\\\[6.45831pt] \end{vmatrix}=\frac{4}{z^{2}_{1}z^{3}_{2}}(z_{1}+z^{2}_{2})(z^{2}_{2}-z_{1}).$ We define $q_{1}:=z_{1}+z^{2}_{2},~{}q_{2}:=z^{2}_{2}-z_{1},$ and $Z_{j}=\\{z\in\mathbb{C}^{2}:q_{j}(z)=0\\},j=1,2.$ Therefore, $\displaystyle\Sigma$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\\}\cap[\cup^{2}_{j=1}Z_{j}],$ and $\displaystyle X$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{(P\circ\phi)(z)}=h(z)\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{z_{1}+2z^{2}_{2}}=\frac{1}{z_{1}}+\frac{2}{z^{2}_{2}}\right\\}.$ Here $Q_{j}=Z_{j}\cap\mathbb{T}^{2}.$ Clearly, $\displaystyle\widehat{Q_{1}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{1}+z^{2}_{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{1},~{}\text{ and}$ $\displaystyle\widehat{Q_{2}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z^{2}_{2}-z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{2}.$ It is easy to see that $\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\nsubseteq X$ for $j=1,2.$ Therefore, by Result 2.11, we get that $\displaystyle\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2}).$ Hence $\displaystyle\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}$ $\displaystyle=\Psi\left({\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\right)={\sf Gr}_{\overline{P}}(\Gamma_{\Omega}).$ ###### Example 5.4. Let $p_{1}(z_{1},z_{2})=z_{1}+z_{2},~{}p_{2}(z_{1},z_{2})=z^{2}_{1}+z^{2}_{2},~{}P(z_{1},z_{2})=z^{2}_{1}+z_{2}$ and $\phi(z_{1},z_{2})=(p_{1}(z_{1},z_{2}),p_{2}(z_{1},z_{2})).$ Therefore $\Omega=\phi(\mathbb{D}^{2}).$ Then $[z_{1},z_{2},\overline{P};\Gamma_{\Omega}]\neq\mathcal{C}(\Gamma_{\Omega}).$ Explanation. Based on Theorem 2.9, we can assert that $[z_{1},z_{2},\overline{P};\Gamma_{\Omega}]\neq\mathcal{C}(\Gamma_{\Omega})$ if, and only if, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ lacks polynomial convexity. Furthermore, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ possesses polynomial convexity if, and only if, ${\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})$ is polynomially convex.
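Before proceeding, it is convenient to expand the composition explicitly; this is a one-line computation from the definitions of $p_{1},p_{2}$ and $P$: $\displaystyle(P\circ\phi)(z)=p_{1}(z)^{2}+p_{2}(z)=(z_{1}+z_{2})^{2}+(z^{2}_{1}+z^{2}_{2})=2(z^{2}_{1}+z_{1}z_{2}+z^{2}_{2}).$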
Therefore, it is enough to show that ${\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})$ is not polynomially convex. Here $P\circ\phi=2(z^{2}_{1}+z_{1}z_{2}+z^{2}_{2}).$ Hence, $\displaystyle\overline{P\circ\phi}=\overline{2(z^{2}_{1}+z_{1}z_{2}+z^{2}_{2})}=2\left(\frac{1}{z^{2}_{1}}+\frac{1}{z^{2}_{2}}+\frac{1}{z_{1}z_{2}}\right)=:h(z)\text{ on }\mathbb{T}^{2}.$ $\displaystyle\triangle({z})$ $\displaystyle=\begin{vmatrix}\frac{\partial(P\circ\phi)}{\partial{z_{1}}}&\frac{\partial(P\circ\phi)}{\partial z_{2}}\\\\[6.45831pt] \frac{\partial h}{\partial{{z_{1}}}}&\frac{\partial h}{\partial{z_{2}}}\\\\[6.45831pt] \end{vmatrix}=\begin{vmatrix}2(2z_{1}+z_{2})&2(2z_{2}+z_{1})\\\\[6.45831pt] 2(\frac{-2}{z^{3}_{1}}-\frac{1}{z^{2}_{1}z_{2}})&2(\frac{-2}{z^{3}_{2}}-\frac{1}{z_{1}z^{2}_{2}})\\\\[6.45831pt] \end{vmatrix}$ $\displaystyle=\frac{-16\alpha^{-1}}{z^{3}_{1}z^{3}_{2}}(z_{1}+z_{2})(z_{2}-z_{1})(z_{1}-\alpha z_{2})(z_{2}-\alpha z_{1}),\text{ where }\alpha=e^{\frac{2\pi i}{3}}.$ We define $q_{1}:=z_{1}+z_{2},~{}q_{2}:=z_{2}-z_{1},~{}q_{3}:=z_{1}-\alpha z_{2},~{}q_{4}:=z_{2}-\alpha z_{1},$ and $Z_{j}=\\{z\in\mathbb{C}^{2}:q_{j}(z)=0\\},j=1,2,3,4.$ Therefore, $\displaystyle\Sigma$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\\}\cap[\cup^{4}_{j=1}Z_{j}],$ and $\displaystyle X$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{(P\circ\phi)(z)}=h(z)\right\\}$ $\displaystyle=\left\\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{2(z^{2}_{1}+z_{1}z_{2}+z^{2}_{2})}=2\left(\frac{1}{z^{2}_{1}}+\frac{1}{z^{2}_{2}}+\frac{1}{z_{1}z_{2}}\right)\right\\}.$ Here $Q_{j}=Z_{j}\cap\mathbb{T}^{2}.$ Clearly, $\displaystyle\widehat{Q_{1}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{1}+z_{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{1};$ $\displaystyle\widehat{Q_{2}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{2}-z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{2};$ $\displaystyle\widehat{Q_{3}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{1}-\alpha z_{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{3};$ $\displaystyle\widehat{Q_{4}}$ $\displaystyle=\\{z\in\mathbb{C}^{2}:z_{2}-\alpha z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\\}\neq Q_{4}.$ Again $\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\nsubseteq X$ for $j=1,2,$ and $\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\subset X$ for $j=3,4.$ Therefore, by Result 2.11, we get that $\displaystyle\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}={\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\cup{\sf Gr}_{\overline{P\circ\phi}}(\widehat{Q_{3}})\cup{\sf Gr}_{\overline{P\circ\phi}}(\widehat{Q_{4}}).$ Hence $\displaystyle\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}$ $\displaystyle=\Psi\left(\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}\right)={\sf Gr}_{\overline{P}}(\Gamma_{\Omega})\cup\Psi\left({\sf Gr}_{\overline{P\circ\phi}}(\widehat{Q_{3}})\right)\cup\Psi\left({\sf Gr}_{\overline{P\circ\phi}}(\widehat{Q_{4}})\right).$ Acknowledgements. We would like to express our sincere gratitude to Professor Franc Forstnerič for pointing out Result 2.3 in [32] and showing us the proof of Lemma 2.4. The first named author was partially supported by a MATRICS Research Grant (MTR/2017/000974) of SERB, Dept. of Science and Technology, Govt. of India, for the beginning of this work and is supported by a Core Research Grant (CRG/2022/003560) of SERB, Dept.
of Science and Technology, Govt. of India, for the later part of the work. The second named author’s work received partial support from an INSPIRE Fellowship (IF 160487) provided by the Dept. of Science and Technology, Govt. of India, during the early stage of this work. Presently, this research is supported by a research grant from SERB (Grant No. CRG/2021/005884), Dept. of Science and Technology, Govt. of India. ## References * [1] J. Agler, Z. Lykova, and N. Young. Geodesics, retracts, and the norm-preserving extension property in the symmetrized bidisc. Mem. Amer. Math. Soc., 258(1242):vii+108, 2019. * [2] J. Agler, Z. Lykova, and N. J. Young. A geometric characterization of the symmetrized bidisc. J. Math. Anal. Appl., 473(2):1377–1413, 2019. * [3] J. Agler and N. J. Young. The hyperbolic geometry of the symmetrized bidisc. J. Geom. Anal., 14(3):375–403, 2004. * [4] Jim Agler and John E. McCarthy. Distinguished varieties. Acta Math., 194(2):133–153, 2005. * [5] T. Andô. On a pair of commutative contractions. Acta Sci. Math. (Szeged), 24:88–90, 1963. * [6] T. Bhattacharyya, S. Pal, and S. S. Roy. Dilations of $\Gamma$-contractions by solving operator equations. Adv. Math., 230(2):577–606, 2012. * [7] C. Bisi and F. Polizzi. On proper polynomial maps of $\mathbb{C}^{2}$. J. Geom. Anal., 20(1):72–89, 2010. * [8] D. Chakrabarti and S. Gorai. Function theory and holomorphic maps on symmetric products of planar domains. J. Geom. Anal., 25(4):2196–2225, 2015. * [9] E. M. Chirka. Complex analytic sets, volume 46 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1989. Translated from the Russian by R. A. M. Hoksbergen. * [10] C. Costara. The symmetrized bidisc and Lempert’s theorem. Bull. London Math. Soc., 36(5):656–662, 2004. * [11] B. Krishna Das and Jaydeb Sarkar. Ando dilations, von Neumann inequality, and distinguished varieties. J. Funct. Anal., 272(5):2114–2131, 2017. * [12] A. Edigarian and W. Zwonek. Geometry of the symmetrized polydisc. Arch. Math. (Basel), 84(4):364–374, 2005. * [13] T. W. Gamelin. Uniform algebras. Prentice-Hall, Inc., Englewood Cliffs, N. J., 1969. * [14] A. J. Izzo. Uniform algebras generated by holomorphic and pluriharmonic functions. Trans. Amer. Math. Soc., 339(2):835–847, 1993. * [15] A. J. Izzo. Uniform algebras generated by holomorphic and pluriharmonic functions on strictly pseudoconvex domains. Pacific J. Math., 171(2):429–436, 1995. * [16] A. J. Izzo, H. Samuelsson Kalm, and E. F. Wold. Presence or absence of analytic structure in maximal ideal spaces. Math. Ann., 366(1-2):459–478, 2016. * [17] M. Jarnicki and P. Pflug. On automorphisms of the symmetrized bidisc. Arch. Math. (Basel), 83(3):264–266, 2004. * [18] T. Jimbo. Polynomial hulls of graphs of antiholomorphic functions. Volume 57, pages 157–163, 2003. Japanese Association of Mathematical Sciences 2001 Annual Meeting (Tennoji). * [19] T. Jimbo. Polynomial hulls of graphs on the torus in $C^{2}$. Sci. Math. Jpn., 62(3):335–342, 2005. * [20] L. Kosiński and W. Zwonek. Nevanlinna-Pick problem and uniqueness of left inverses in convex domains, symmetrized bidisc and tetrablock. J. Geom. Anal., 26(3):1863–1890, 2016. * [21] S. Lamy. Sur la structure du groupe d’automorphismes de certaines surfaces affines. Publ. Mat., 49(1):3–20, 2005. * [22] M. A. Lavrentiev. Sur les fonctions d’une variable complexe, représentables par des séries de polynomes. 1936. * [23] Z. L. Leĭbenzon. On the ring of continuous functions on a circle. Uspehi Matem.
Nauk (N.S.), 7(4(50)):163–164, 1952. * [24] S. Pal and O. M. Shalit. Spectral sets and distinguished varieties in the symmetrized bidisc. J. Funct. Anal., 266(9):5779–5800, 2014. * [25] P. Pflug and W. Zwonek. Description of all complex geodesics in the symmetrized bidisc. Bull. London Math. Soc., 37(4):575–584, 2005. * [26] R. Remmert. Projektionen analytischer Mengen. Math. Ann., 130:410–441, 1956. * [27] R. Remmert. Holomorphe und meromorphe Abbildungen komplexer Räume. Math. Ann., 133:328–370, 1957. * [28] H. Samuelsson and E. F. Wold. Uniform algebras and approximation on manifolds. Invent. Math., 188(3):505–523, 2012. * [29] J. Sarkar. Operator theory on symmetrized bidisc. Indiana Univ. Math. J., 64(3):847–873, 2015. * [30] E. L. Stout. Polynomial convexity, volume 261 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 2007. * [31] M. Trybuła. Invariant metrics on the symmetrized bidisc. Complex Var. Elliptic Equ., 60(4):559–565, 2015. * [32] E. M. Čirka. Approximation by holomorphic functions on smooth manifolds in ${\bf C}^{n}$. Mat. Sb. (N.S.), 78 (120):101–123, 1969. * [33] J. Wermer. On algebras of continuous functions. Proc. Amer. Math. Soc., 4:866–869, 1953. * [34] J. Wermer. Polynomially convex disks. Math. Ann., 158:6–10, 1965.
A note on simple zeros related to Dedekind zeta functions. Wei Zhang. Abstract. We give a conditional lower bound on the number of non-trivial simple zeros of the Dedekind zeta function $\zeta_{K}(s)$, where $K$ is a quadratic number field. The conditional result is obtained by assuming a Lindelöf hypothesis on average (in the $L^{6}$ sense) for both $\zeta(s)$ and $L(s,\chi)$, and can be seen as a stronger version of the conditional result of Conrey, Ghosh and Gonek [1]. This improves upon the work of Wu and Zhao [8], who obtained a similar result. Keywords: Simple zeros, zeta function. 2000 Mathematics Subject Classification: 11M06, 11M41. ## 1\. Introduction A folklore conjecture in analytic number theory is that for any number field $K$, almost all zeros of $\zeta_{K}(s)$ in the critical strip are simple. This conjecture is attractive because of how hard it is to prove: to the best of our knowledge of the existing literature, we still do not know whether $\zeta_{K}(s)$ has infinitely many simple zeros if $[K:\mathbb{Q}]\geq 3$, and even for the simplest case of $\zeta(s)$ (the Riemann zeta function) it was not known that there are infinitely many simple zeros in the critical strip until it was observed independently by Heath-Brown [3] and Selberg [6] that Levinson’s method [5] yields a positive proportion of simple zeros on the critical line. In this note, we are interested in the non-real zeros of $\zeta_{K}(s)$ in the critical strip $\mathcal{R}=\\{s=\sigma+it:0<\sigma<1,0<t<T\\}.$ Let $K$ be a quadratic extension of $\mathbb{Q}$ and let $\zeta_{K}(s)$ be the corresponding Dedekind zeta function. Then we have $\zeta_{K}(s)=\sum_{\mathfrak{a}}\frac{1}{(\mathfrak{Na})^{s}}=\sum_{n=1}^{\infty}\frac{a_{K}(n)}{n^{s}},\ \ \Re(s)>1,$ where $\mathfrak{a}$ runs over the non-zero integral ideals of $K,$ $\mathfrak{Na}$ is the norm of $\mathfrak{a},$ and $a_{K}(n)=\\#\\{\mathfrak{a}:\mathfrak{Na}=n\\}$ is the ideal counting function of $K.$ It is well known that $N_{K}(T)=\\#\\{\rho_{K}:\zeta_{K}(\rho_{K})=0,\rho_{K}\in\mathcal{R}\\}\sim\pi^{-1}T\log T.$ Moreover, it is also generally believed that all the non-real zeros of $\zeta_{K}(s)$ in $\mathcal{R}$ are simple zeros. However, before 1986 one could not even confirm whether there are infinitely many non-real simple zeros in $\mathcal{R}$. In 1986, Conrey, Ghosh and Gonek [1] introduced a new method to deal with simple zeros in such problems. For sufficiently large $T,$ Conrey, Ghosh and Gonek [1] proved that $N^{\prime}_{K}(T)\gg T^{\frac{6}{11}},$ where $N^{\prime}_{K}(T)=\\#\\{\rho_{K}:\zeta_{K}(\rho_{K})=0,\zeta^{\prime}_{K}(\rho_{K})\neq 0,\rho_{K}\in\mathcal{R}\\}.$ This implies that there are infinitely many simple zeros of the $\zeta_{K}(s)$ associated to quadratic number fields. The exponent in this result depends upon progress towards the Lindelöf hypothesis. In fact, a better result is also given in [1]. Precisely, for any $\varepsilon>0,$ sufficiently large $T$ and $\theta=\max\left\\{\frac{1}{1+6c},\frac{\sqrt{1+16c+16c^{2}}-1-4c}{4c}\right\\},$ it is proved in [1] that $N^{\prime}_{K}(T)\gg T^{\theta-\varepsilon},$ where $c$ is determined by the subconvexity bound $\zeta(1/2+it)\ll(|t|+1)^{c+\varepsilon}.$ One can see that the Lindelöf hypothesis implies $N^{\prime}_{K}(T)\gg T^{1-\varepsilon},$ which means that one can get almost to a positive proportion. It is worth emphasizing that the zeros that they find are also simple zeros of $\zeta(s),$ where $\zeta(s)$ is the well-known Riemann zeta function.
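As a quick check of the statement that the Lindelöf hypothesis yields an exponent arbitrarily close to $1$ (a two-line expansion, recorded here for the reader), note that as $c\to 0^{+}$ we have $\sqrt{1+16c+16c^{2}}=1+8c+O(c^{2}),$ so that $\frac{\sqrt{1+16c+16c^{2}}-1-4c}{4c}=\frac{4c+O(c^{2})}{4c}\longrightarrow 1,\qquad\frac{1}{1+6c}\longrightarrow 1,$ and hence $\theta$ can be taken arbitrarily close to $1$ under the Lindelöf hypothesis, which allows $c$ to be arbitrarily small.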
In another paper [2], the same authors also proved that the Riemann Hypothesis implies that more than $1/54$ of the non-real zeros of $\zeta_{K}(s)$ are simple. Very recently, in [8], some of the above results were improved by showing (unconditionally) that $N^{\prime}_{K}(T)\gg T^{\frac{6}{7}-\varepsilon}.$ In [8], some conditional results were also given. To state these conditional results precisely, we need to introduce two conjectures. Conjecture 1. Let $N(\sigma,T)$ denote the number of zeros $\beta+i\gamma$ of $\zeta(s)$ with $\beta\geq\sigma$ and $|\gamma|\leq T.$ Then for any $\varepsilon>0$ and $1/2\leq\sigma\leq 1,$ we have $N(\sigma,T)\ll T^{A(\sigma)(1-\sigma)+\varepsilon}.$ Conjecture 2. Let $q\geq 1$ be a fixed integer, and let $\chi$ be a real primitive character to the modulus $q$. For $\bar{s}=1/2+it,$ any $\varepsilon>0,$ any real number $k>4$ and sufficiently large $T,$ we have $\int_{1}^{T}\left|L(\bar{s},\chi)\right|^{k}\ \mathrm{d}t\ll_{q}T^{1+\varepsilon},$ where $L(s,\chi)=\sum_{n=1}^{\infty}\chi(n)n^{-s}$ $(\Re s>1)$ is the Dirichlet $L$-function. In [8], Wu and Zhao also proved the following: * • If Conjecture 1 is true with $A(\sigma)=2$, then $N^{\prime}_{K}(T)\gg T^{1-\varepsilon}.$ * • If Conjecture 2 is true with $k=\bf{8}$, then $N^{\prime}_{K}(T)\gg T^{1-\varepsilon}.$ Surprisingly, we find that $k=\bf{6}$ in Conjecture 2 is already sufficient for this purpose. We also remark that, within this method, Conjecture 2 with a real number $k\in(4,6)$ alone cannot yield the desired conclusion. We now give a concrete reason for this. Assume that Conjecture 2 is true with a certain $k$, and assume that $N(\sigma,T)\ll T^{(2l+2)(1-\sigma)/(l+2-2\sigma)+\varepsilon},$ which comes from the result of [9]. Then, following the argument of this paper, we can obtain $S(T,1)-S(T,a)\ll T^{1-B(\sigma,k,l)(\sigma-1/2)+(1-2/k)\varepsilon},$ where $B(\sigma,k,l)=2l(1-2/k)/(l+2-2\sigma)-1.$ Then we have $B(\sigma,k,l)=\frac{2l(1-2/k)-l-2+2\sigma}{l+2-2\sigma}.$ We need $2l(1-2/k)-l-2+2\sigma>0.$ Hence for $1/2+\varepsilon<\sigma<1,$ we need $\frac{k}{k-4}<l+\varepsilon.$ (Indeed, for $k={\bf 6}$ and $l=3$ the numerator becomes $2\cdot 3\cdot\frac{2}{3}-3-2+2\sigma=2\sigma-1>0$ for $\sigma>1/2.$) When we choose $k={\bf 6}$ in Conjecture 2, Conjecture 2 implies exactly the density result we need, namely the case $l=3,\theta=1,\alpha=1$ in [9] (see also [4]). Moreover, one can also give conditional results for this problem by assuming Conjecture 2 (with $k\in(4,6)$) together with assumed density results that are weaker than the density hypothesis but stronger than $N(\sigma,T)\ll T^{A(\sigma)(1-\sigma)+\varepsilon}$ with $A(\sigma)=8/(5-2\sigma).$ ###### Theorem 1.1. If Conjecture 2 is true with $k=\bf{6}$, then $N^{\prime}_{K}(T)\gg T^{1-\varepsilon}.$ ###### Remark 1. It is worth remarking that if the $L^{6}$-Lindelöf on average could be replaced by an $L^{4}$-Lindelöf on average, then the theorem would become unconditional, since the $L^{4}$-Lindelöf on average for Dirichlet $L$-functions is known by the classical work of Ingham on the fourth moment of $\zeta(s)$ (see Titchmarsh [7] or [4]). ## 2\.
Preliminaries For the proof of Theorem 1.1, the method in this paper relies heavily on the relation $\zeta_{K}(s)=\zeta(s)L(s,\chi),$ where $\chi$ is a real primitive character to the modulus $q=|D|$, with $D$ the discriminant of $K.$ It is well known that $\zeta(1-s)=2(2\pi)^{-s}\Gamma(s)\cos(\pi s/2)\zeta(s),$ and $\displaystyle L(1-s,\chi)$ $\displaystyle=\frac{q^{s}}{\sum_{l=1}^{q}\chi(l)e(l/q)}(2\pi)^{-s}\Gamma(s)(e^{\pi is/2}+\chi(-1)e^{-\pi is/2})L(s,\chi),$ and $\zeta^{\prime}_{K}(s)=\zeta^{\prime}(s)L(s,\chi)+\zeta(s)L^{\prime}(s,\chi).$ Let $\rho=\beta+i\gamma$ denote the non-trivial zeros of $\zeta(s)$. Then it is implied that (for instance, see (8) in [1] and (2.6)-(2.8) in [8]) $\displaystyle|L(1-\rho,\chi)|\ll T^{\beta-1/2}|L(\rho,\chi)|$ (2.1) and $\displaystyle|\zeta^{\prime}(1-\rho)|\ll T^{\beta-1/2}|\zeta^{\prime}(\rho)|.$ (2.2) For $1/2\leq a\leq 1,$ let $S(T,a)=\sum_{\rho\in\mathcal{R},1-a\leq\beta\leq a}\zeta^{\prime}(\rho)L(1-\rho,\chi).$ Moreover, let $N^{\prime}_{K}(T,a)=\\#\\{\rho_{K}:\rho_{K}=\beta_{K}+i\gamma_{K}\in\mathcal{R},\rho_{K}\ \textup{is\ a\ simple\ zero\ of}\ \zeta_{K}\ \textup{and}\ 1/2\leq\beta_{K}\leq a\\}.$ Then we can introduce the following lemmas. ###### Lemma 2.1. For some $1/2\leq a\leq 5/8,$ suppose that $|S(T,a)|\gg T.$ Then $N^{\prime}_{K}(T,a)\gg T^{\frac{2a}{4a-1}-\varepsilon}.$ ###### Proof. See Lemma 8 in [8]. ∎ ###### Lemma 2.2. With the notations above, we have $S(T,1)=L(1,\chi)\frac{T}{4\pi}\log^{2}T+O(T\log T).$ ###### Proof. This is (18) in [1]. ∎ ###### Lemma 2.3. For a fixed integer $k\geq 1,$ let $\sigma_{k}^{*}$ with $1/2\leq\sigma_{k}^{*}<1$ denote the infimum of all numbers $\sigma$ for which $\int_{1}^{T}|\zeta(\sigma+it)|^{2k}dt\ll T^{1+\varepsilon}$ holds for any $\varepsilon>0.$ Then the asymptotic formula $\int_{1}^{T}|\zeta(\sigma+it)|^{2k}dt=T\sum_{n=1}^{\infty}d_{k}(n)^{2}n^{-2\sigma}+R(k,\sigma,T)$ holds for $\sigma_{k}^{*}<\sigma<1$ with $R(k,\sigma,T)\ll T^{(2-\sigma-\sigma_{k}^{*})/(2-2\sigma_{k}^{*})}.$ ###### Proof. See Lemma 8.4 in [4]. ∎ ###### Lemma 2.4. If Conjecture 2 is true with $k={\bf 6},$ then for $1/2\leq\sigma\leq 1,$ we have $\displaystyle N(\sigma,T)\ll T^{\frac{8-8\sigma}{5-2\sigma}+\varepsilon},$ (2.3) $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)|^{6}\ll T^{1+\varepsilon},$ (2.4) and $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|L(\rho,\chi)|^{6}\ll T^{1+\varepsilon}.$ (2.5) ###### Proof. (2.3) can be obtained from Theorem 1 in [9] for the case $l=3,$ $\alpha=1,$ and $\theta=1$ (see also [4]). Now we begin to prove (2.4). By Cauchy’s theorem, $\zeta^{\prime}(\rho)=\frac{1}{\Delta}\int_{\Delta}^{2\Delta}\frac{1}{2\pi i}\int_{|s-\rho|=\delta}\frac{\zeta(s)}{(s-\rho)^{2}}dsd\delta,$ where $\Delta=(\log T)^{-1}.$ Hence if $1/2\leq\beta<1,$ then using Hölder’s inequality we have $|\zeta^{\prime}(\rho)|^{6}\ll T^{\varepsilon}\int_{1/2-2\Delta}^{1+2\Delta}\int_{1}^{T+1}|\zeta(\sigma+it)|^{6}dtd\sigma.$ Since the number of $\rho$ in a square of side length $1$ is $\ll\log T,$ by Lemma 2.3 we have $\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq 1/2\end{subarray}}|\zeta^{\prime}(\rho)|^{6}\ll T^{\varepsilon}\int_{1/2-2\Delta}^{1+2\Delta}\int_{1}^{T+1}|\zeta(\sigma+it)|^{6}dtd\sigma\ll T^{1+\varepsilon}.$ (2.5) can be proved by adapting the argument of [1, 8] and Lemma 2.3, assuming Conjecture 2 with $k=6.$ ∎ ## 3\. Proof of Theorem 1.1 We first prove the following lemma.
###### Lemma 3.1. For any sufficiently small $\varepsilon>0$ and $a\geq 1/2+10\sqrt{\varepsilon},$ if Conjecture 2 is true with $k={\bf 6},$ we have $S(T,a)\asymp T\log^{2}T.$ ###### Proof. Recall that $S(T,1)-S(T,a)\ll\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}|\zeta^{\prime}(\rho)L(1-\rho,\chi)|+\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}|\zeta^{\prime}(1-\rho)L(\rho,\chi)|.$ Then by (2.1) and (2.2), one has $\displaystyle S(T,1)-S(T,a)\ll\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}T^{\beta-\frac{1}{2}}|\zeta^{\prime}(\rho)L(\rho,\chi)|.$ (3.1) By using the Riemann-Stieltjes integral (or partial summation), one can obtain that $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}T^{\beta-\frac{1}{2}}|\zeta^{\prime}(\rho)L(\rho,\chi)|$ $\displaystyle=\int_{a}^{1}T^{\sigma-1/2}dF(\sigma,\mathcal{R}),$ where $F(\sigma,\mathcal{R}):=\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|.$ Then applying integration by parts, we have $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}T^{\beta-\frac{1}{2}}|\zeta^{\prime}(\rho)L(\rho,\chi)|$ $\displaystyle\ll T^{1-\frac{1}{2}}F(1,\mathcal{R})-T^{a-\frac{1}{2}}F(a,\mathcal{R})$ $\displaystyle-\int_{a}^{1}T^{\sigma-1/2}F(\sigma,\mathcal{R})(\log T)d\sigma.$ This gives that $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq a\end{subarray}}T^{\beta-\frac{1}{2}}|\zeta^{\prime}(\rho)L(\rho,\chi)|$ $\displaystyle\ll\max_{a\leq\sigma\leq 1}T^{\sigma-\frac{1}{2}}\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|(\log T)$ $\displaystyle\ll\max_{a\leq\sigma\leq 1}T^{\sigma-\frac{1}{2}+\varepsilon}\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|.$ By Hölder’s inequality, we have $\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|\leq N(\sigma,T)^{\frac{2}{3}}\left(\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)|^{6}\right)^{\frac{1}{6}}\left(\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|L(\rho,\chi)|^{6}\right)^{\frac{1}{6}}.$ Recalling Lemma 2.4 and the sixth moment hypothesis, one can get $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|\ll T^{\frac{8-8\sigma}{5-2\sigma}\times\frac{2}{3}+\frac{1}{3}+\varepsilon}.$ Then $\displaystyle\sum_{\begin{subarray}{c}\rho\in\mathcal{R}\\\ \beta\geq\sigma\end{subarray}}|\zeta^{\prime}(\rho)L(\rho,\chi)|\ll T^{1+\frac{3-6\sigma}{5-2\sigma}\times\frac{2}{3}+\varepsilon}\ll T^{1-\frac{4}{5-2\sigma}\times(\sigma-\frac{1}{2})+\varepsilon}.$ Hence by (3.1), we have $\displaystyle S(T,1)-S(T,a)$ $\displaystyle\ll\max_{a\leq\sigma\leq 1}T^{\sigma-1/2}T^{1-\frac{4}{5-2\sigma}\times(\sigma-\frac{1}{2})+\varepsilon}$ $\displaystyle\ll\max_{a\leq\sigma\leq 1}T^{1-\frac{4}{5-2\sigma}\times(\sigma-\frac{1}{2})+\frac{5-2\sigma}{5-2\sigma}\times(\sigma-\frac{1}{2})+\varepsilon}.$ This gives that $S(T,1)-S(T,a)\ll\max_{a\leq\sigma\leq 1}T^{1-\frac{2}{5-2\sigma}\times(\sigma-\frac{1}{2})^{2}+\varepsilon}.$ For suitable $\varepsilon^{\prime}=10\sqrt{\varepsilon},$ $a\geq 1/2+\varepsilon^{\prime}$ and $a\leq\sigma\leq 1,$ we can get that $\displaystyle
S(T,1)-S(T,a)$ $\displaystyle\ll T^{1-\frac{25\varepsilon}{1-5\sqrt{\varepsilon}}+\varepsilon}$ $\displaystyle\ll T^{1-24\varepsilon}$ $\displaystyle\ll T^{1-\varepsilon}.$ Recall (see Lemma 2.2) that $S(T,1)=L(1,\chi)\frac{T}{4\pi}\log^{2}T+O(T\log T).$ Hence for $a\geq 1/2+\varepsilon^{\prime},$ we can obtain that $S(T,a)\asymp T\log^{2}T.$ This completes the proof of Lemma 3.1. ∎ Now our theorem can be deduced from Lemma 2.1, Lemma 3.1 and the choice of $a=1/2+\varepsilon^{\prime}.$ $\mathbf{Acknowledgements}$ I am deeply grateful to the referee(s) for carefully reading the manuscript and making useful suggestions. ## References * [1] J.B. Conrey, A. Ghosh and S.M. Gonek, Simple zeros of the zeta function of a quadratic number field. I. Invent. Math. 86, 563-576 (1986) * [2] J.B. Conrey, A. Ghosh and S.M. Gonek, Simple zeros of the zeta function of a quadratic number field. II. In Analytic Number Theory and Diophantine Problems (Stillwater, OK, 1984), Progress in Mathematics 70, Birkhäuser (Boston, MA, 1987), 87-114. * [3] D.R. Heath-Brown, Simple zeros of the Riemann zeta-function on the critical line. Bull. Lond. Math. Soc. 11 (1979), 17-18. * [4] A. Ivić, The Riemann Zeta-function. John Wiley $\&$ Sons, New York, 1985. * [5] N. Levinson, More than one third of zeros of Riemann’s zeta-function are on $\sigma=1/2$. Advances in Math., 13 (1974), 383-436. * [6] A. Selberg, On the zeros of Riemann’s zeta-function. Skr. Nor. Vidensk.-Akad. Oslo, I 10 (1942), 1-59. * [7] E.C. Titchmarsh, The Theory of the Riemann Zeta-function. Revised by D. R. Heath-Brown, 2nd edn., Clarendon Press (Oxford, 1986). * [8] X.S. Wu and L.L. Zhao, On simple zeros of the Dedekind zeta function of a quadratic number field. Mathematika 65, 851-861 (2019) * [9] Y.B. Ye and D.Y. Zhang, Zero density for automorphic $L$-functions. J. Number Theory 133, 3877-3901 (2013)
$\mathbb{H}=F^{2\times 2}$ and $w^{2}=1$. (a) If $\dim{A}=1$ then $A=F$. (b) If $\dim{A}=2$ then we have one of the following cases: (i) $A$ is a separable quadratic extension field over $F$. (ii) $A$ is an inseparable quadratic extension field over $F$ (and $\operatorname{char}{F}=2$). (iii) $A\cong F[X]/(X^{2}-X)\cong F\times F$ is a split composition algebra. (iv) $A\cong F[X]/(X^{2})$ is a local algebra. (c) If $\dim{A}=3$ then the radical $R:=A^{\perp}\cap A$ is not trivial, and one of the following occurs. (i) $\dim{R}=1$, and $A$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of the subalgebra $F+Fp_{0}+Fn_{0}$ of upper triangular matrices in $\mathbb{H}$ (see 5.7). (ii) $\dim{R}\geq 2$, and $A$ is in the orbit of the subalgebra $F+(Fp_{0}+Fn_{0})w$ (see 5.9). (d) If $\dim{A}=4$ then one of the following occurs: (i) $A$ is a quaternion algebra (split or not). (ii) $A$ is in the orbit of $B+(Fp_{0}+Fn_{0})w$, where $B\subseteq\mathbb{H}$ is either a two-dimensional composition algebra, or an inseparable quadratic field extension. (iii) The radical $Q:=\\{x\in A^{\perp}\cap A:{\mathrm{N}}(x)=0\\}$ of the restriction ${\mathrm{N}}|_{A}$ of the quadratic form has dimension $3$. Then $Q$ is isomorphic to the Heisenberg algebra, and $A$ is in the orbit of $F+(n_{0}\mathbb{O}\cap\mathbb{O}n_{0})$. (iv) $A=A^{\perp}$ is a totally inseparable field extension of degree $4$ and exponent $1$ (and $\operatorname{char}{F}=2$). Of course, this case can only occur if $\operatorname{char}{F}=2$. Two such subfields are in the same orbit under $\operatorname{Aut}_{F}(\mathbb{O})$ if, and only if, they are isomorphic as $F$-algebras, see [7, 5.7]. In each of these cases, subalgebras are in the same orbit under $\operatorname{Aut}_{F}(\mathbb{O})$ precisely if they are isomorphic as $F$-algebras (and contain $1$, as assumed here). Each subalgebra $A$ with $1\in A$ and $\dim{A}\leq 4$ is associative. ###### Proof. If $\dim{A}=2$, pick $a\in A\smallsetminus F$. Then $1,a$ is a basis for $A$, and the minimal polynomial of $a$ is $m_{a}(X)=X^{2}-{\mathrm{T}}(a)X+{\mathrm{N}}(a)$. The four cases in assertion (b) now belong to the cases where $m_{a}(X)$ is irreducible and separable (viz., $\operatorname{char}{F}\neq 2$ or ${\mathrm{T}}(a)\neq 0$), or $m_{a}(X)$ is irreducible and inseparable (viz., $\operatorname{char}{F}=2$ and ${\mathrm{T}}(a)=0$), or $m_{a}(X)$ is reducible with two different roots in $F$, or $m_{a}(X)$ has a double root in $F$, respectively. Two such algebras $A$ and $A^{\prime}$ are in the same orbit under $\operatorname{Aut}_{F}(\mathbb{O})$ if, and only if, there are elements $a\in A\smallsetminus F$ and $a^{\prime}\in A^{\prime}\smallsetminus F$ such that $a$ and $a^{\prime}$ have the same norm and trace (see 4.4). In that case, the algebras $A$ and $A^{\prime}$ are both isomorphic to $F[X]/(m_{a}(X))$. If $\dim{A}=3$ then $A$ is not a composition algebra. Therefore, the restriction of the polar form is degenerate, and the radical $R:=A^{\perp}\cap A$ is not trivial. From 6.1(a) we know $AR=R=RA$. Assume that $\dim{R}=1$, and choose a vector space complement $B$ for $R$ in $A$ such that $1\in B$. Then $B$ is a subalgebra, and the restriction of the polar form to $B$ is not degenerate. So $B$ is a composition algebra, and $R$ is a $B$-module.
As $\dim{R}=1$, the annihilator $\\{a\in B:aR=\\{0\\}\\}$ is not trivial. Therefore, the algebra $B$ is not a field but a split composition algebra, and $R$ does not contain invertible elements (see also 6.1). Then $B\cong F\times F$, and each non-trivial element of $1^{\perp}\cap B$ is invertible. Choose $a\neq 0$ in the annihilator of $R$ in $B$. Then ${\mathrm{N}}(a)=0\neq{\mathrm{T}}(a)$, and $M:=Fa+R$ is a totally singular subalgebra containing an idempotent. From 5.11 we now infer that $M$ is in the orbit of either $Fp_{0}+Fn_{0}$ or $\kappa(Fp_{0}+Fn_{0})$. In any case, the algebra $A=F+M$ is in the orbit of the algebra $F+Fp_{0}+Fn_{0}=F+\kappa(Fp_{0}+Fn_{0})$ of upper triangular matrices, as claimed. Now assume $\dim{R}\geq 2$ (and still $\dim{A}=3$). Then the radical $Q$ of ${\mathrm{N}}|_{A}$ has dimension $2$ by 6.3, and forms a totally singular subalgebra, see 6.1. From 5.11 we conclude that $Q$ is in the orbit of $(Fp_{0}+Fn_{0})w$. So $A=F+Q$ is in the orbit of $F+(Fp_{0}+Fn_{0})w$, as claimed. Finally, consider the case where $\dim{A}=4$. From 6.1 we know that $A/R$ is a composition algebra if $R\neq A$. So $\dim{R}=4-\dim(A/R)\in\\{0,2,3,4\\}$. If $\dim{R}=0$ then $A$ is a composition algebra, and then a quaternion algebra. If $\dim{R}=2$ then the totally singular subalgebra $R$ is in the orbit of $(Fp_{0}+Fn_{0})w$, see 5.11, and we assume $R=(Fp_{0}+Fn_{0})w$ without loss of generality. We have $A\subseteq\\{p_{0}w,n_{0}w\\}^{\perp}=\mathbb{H}+R$, and $B:=A\cap\mathbb{H}$ has dimension $2$. As $A$ contains invertible elements, we have $1\in B$. So $B$ is a subalgebra. From $B\cap R=\\{0\\}$ we infer that the restriction of the polar form to $B$ is not degenerate. So $B$ is a composition algebra (either a separable quadratic extension field of $F$, or isomorphic to $F\times F$), and $A=B+(Fp_{0}+Fn_{0})w$. Consider a four-dimensional subalgebra $A^{\prime}$ with $R\subseteq A^{\prime}\subseteq\mathbb{H}+R$. Then $A$ and $A^{\prime}$ are isomorphic as $F$-algebras if, and only if, the subalgebras $B$ and $B^{\prime}:=A^{\prime}\cap\mathbb{H}$ are isomorphic. In that case, we find $b\in B\smallsetminus F$ and $b^{\prime}\in B^{\prime}\smallsetminus F$ with the same norm (i.e., the same determinant) and the same trace. So there exists $s\in\mathrm{GL}_{2}{(F)}=\mathbb{H}^{\times}$ with $sbs^{-1}=b^{\prime}$. Pick an upper triangular matrix $t\in\mathbb{H}$ (${}=F^{2\times 2}$) with $\det t=\det s$. The map $\alpha_{s,t}\colon a+xw\mapsto sas^{-1}+(txs^{-1})w$ is an $F$-linear automorphism of $\mathbb{O}$, see 4.2. Now $\alpha_{s,t}(b)=b^{\prime}$, and it is easy to verify $\alpha_{s,t}(R)=R$. So $\alpha_{s,t}(A)=A^{\prime}$. This settles those cases in assertion (d)(ii) where $\dim{R}=2$. The remaining cases that belong to assertion (d)(ii) will be discussed below; those are cases where $R=A$ but $\dim{Q}=2$. Now assume $\dim{R}\geq 3$, and consider the radical $Q:=\\{x\in A^{\perp}\cap A:{\mathrm{N}}(x)=0\\}$ of the restriction ${\mathrm{N}}|_{A}$ of the quadratic form. From 6.1 we know that $Q$ is a subalgebra. As $1\notin Q$, we have $\dim{Q}\leq 3$. If $\dim{Q}=3$ then the totally singular subalgebra $Q$ is in the orbit of the Heisenberg algebra $n_{0}\mathbb{O}\cap\mathbb{O}n_{0}$ (see 5.11), and assertion (d)(iii) follows. If $\dim{Q}<3$ then $Q\neq R$, and $\operatorname{char}{F}=2$.
In particular, the polar form is alternating, and $R=A$ because the co-dimension of $R$ in $A$ is even. From 6.1 we know that $A$ is commutative. If $\dim{Q}=0$, we pick $b\in A\smallsetminus F$, and note that $B:=F+Fb$ is an inseparable quadratic field extension of $F$. For any $c\in A\smallsetminus B$, we then have $B\cap Bc=\\{0\\}$, and $A=B+Bc$ shows that the algebra $A$ is generated by $\\{1,b,c\\}$. So $A$ is associative, commutative, and thus a totally inseparable field extension of $F$ (of degree $4$ and exponent $1$), as claimed in (d)(iv). If $\dim{Q}\neq 0$ then there are $u,v\in A\smallsetminus Q$ such that $F+Fu+Fv$ contains a vector space complement of $Q$ in $A$. Then the subalgebra spanned by $\\{1,u,v\\}$ is associative, and the quotient $A/Q$ is a totally inseparable field extension of $F$. In particular, the degree of that extension is a power of $2$, and we are left with the case that $\dim{Q}=2$. Now 5.11 yields that $Q$ is in the orbit of $(Fp_{0}+Fn_{0})w$, and we may assume $Q=(Fp_{0}+Fn_{0})w$. Then $A\subseteq\mathbb{H}+Q$, and $B:=A\cap\mathbb{H}$ is a two-dimensional subalgebra, again (albeit not a composition algebra, but an inseparable quadratic extension field). If $A^{\prime}$ is another such algebra (contained in $\mathbb{H}+Q$), we note that $A$ is isomorphic to $A^{\prime}$ precisely if $A/Q$ is isomorphic to $A^{\prime}/Q$. So $A\cong A^{\prime}$ means $B\cong B^{\prime}:=A^{\prime}\cap\mathbb{H}$. As above, any algebra isomorphism between $B$ and $B^{\prime}$ extends to an algebra automorphism $a\mapsto sas^{-1}$ of $\mathbb{H}$, any such automorphism extends to an automorphism $\alpha_{s,t}$ of $\mathbb{O}$ such that $\alpha_{s,t}(Q)=Q$, and $\alpha_{s,t}(A)=A^{\prime}$ follows. We have thus also completed the proof of assertion (d)(ii). It remains to verify that each subalgebra $A$ with $1\in A$ and $\dim{A}\leq 4$ is associative. For the cases in (a) and in (b), this is obvious. For the cases in (c), we note that $A$ is of the form $A=F+X$, where $X$ is a subalgebra generated by two elements. So $X$ is associative by Artin’s theorem (see 1.4(e)), and $A$ is associative, as well. In (d), the quaternion algebras and the field extensions are associative. For the algebra $F+(n_{0}\mathbb{O}\cap\mathbb{O}n_{0})$ we use the fact that the subalgebra $n_{0}\mathbb{O}\cap\mathbb{O}n_{0}$ is associative (it is a Heisenberg algebra, generated by $n_{0}w$ and $\overline{p_{0}}w$, see 3.7). In the last remaining case, we have to consider $A=B+X$, where $B=F+Fb\subseteq\mathbb{H}$ is a two-dimensional (associative, and commutative) subalgebra of $\mathbb{H}=F^{2\times 2}$, and $X=(Fp_{0}+Fn_{0})w$.
Since $(\overline{yw})(xw)=0$ holds for all $x,y\in X$, the multiplication in $A$ is given by $(a+xw)(b+yw)=ab+(ya+x\overline{b})w$, where $a,b\in B$ and $x,y\in Fp_{0}+Fn_{0}$. Using the associative and commutative laws in $B$, we find that $A$ is associative. ∎ ###### 6.5 Remark. The examples in 6.4(d)(ii) show that not every associative subalgebra of a split octonion algebra is contained in a quaternion algebra. ## 7\. Associative and commutative subalgebras ###### 7.1 Associativity of subalgebras. Let $\mathbb{O}$ be an octonion algebra. (a) Every subalgebra of dimension less than $4$ is associative. (b) If $\mathbb{O}$ is split then there do exist subalgebras of dimension $4$ that are not associative, namely, the totally singular ones (of the form $n\mathbb{O}$ or $\mathbb{O}n$ with ${n^{2}=0\neq n\in\mathbb{O}}$). Every other subalgebra of dimension $4$ is associative. (c) Every subalgebra of dimension greater than $4$ contains a non-associative subalgebra of dimension $4$. (d) If $\mathbb{O}$ is split then a given subalgebra is not associative if, and only if, it contains a maximal totally singular subspace. (e) If $\mathbb{O}$ is not split then every proper subalgebra is associative. ###### Proof. If a subalgebra is not totally singular then it contains $1$. We have seen in 6.4 that each subalgebra $A$ with $1\in A$ and $\dim{A}\leq 4$ is associative. A totally singular subalgebra is associative if, and only if, it has dimension less than $4$; see 5.11. By 5.11(d), the totally singular subalgebras of dimension $4$ are those of the form $n\mathbb{O}$ and $\mathbb{O}n$ with $n^{2}=0\neq n$. So assertions (a) and (b) are established. Each proper subalgebra of dimension greater than $4$ contains a totally singular subalgebra of dimension $4$ (of the form $n\mathbb{O}$ for some nilpotent element $n$, see 6.2). These subalgebras are not associative. This gives assertions (c) and (d). If $\mathbb{O}$ is not split then every proper subalgebra has dimension at most $4$, and there are no nilpotent elements apart from $0$. So assertions (a) and (b) yield assertion (e). ∎ ###### 7.2 Corollary. A $4$-dimensional subalgebra of $\mathbb{O}$ is associative if, and only if, it contains a neutral element for its multiplication. By 5.5, such a neutral element in a $4$-dimensional subalgebra necessarily equals $1$. ∎ ###### 7.3 Commutativity of subalgebras. Let $\mathbb{O}$ be an octonion algebra, and let $A$ be a subalgebra.
If $\mathbb{O}$ is split, we write $\mathbb{O}=\mathbb{H}+\mathbb{H}w$, with $\mathbb{H}=F^{2\times 2}$ and $w^{2}=1$. The subalgebra $A$ is commutative in the following cases: (a) $\dim{A}=1$. (b) $\dim{A}=2$, and $A=F+Fa\cong F[X]/(X^{2}-{\mathrm{T}}(a)X+{\mathrm{N}}(a))$. (c) $\dim{A}=2$, and $A$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $(Fp_{0}+Fn_{0})w$. (d) $\dim{A}=3$, and $A$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $F+(Fp_{0}+Fn_{0})w$. If $\operatorname{char}{F}\neq 2$ then these are the only commutative subalgebras. If $\operatorname{char}{F}=2$ we have the following additional cases (and no others): (e) $\dim{A}=3$, and $A$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of the Heisenberg algebra ${Fn_{0}+(Fp_{0}+Fn_{0})w}$. (f) $\dim{A}=4$, and $A$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $B+(Fp_{0}+Fn_{0})w$, where $B=F+Fb$ with $b\in(1^{\perp}\cap\mathbb{H})\smallsetminus F$. (g) $\dim{A}=4$, and $A$ is a totally inseparable field extension of exponent $1$. In any case, every commutative subalgebra of $\mathbb{O}$ is associative, as well. ###### Proof. We use our classification of subalgebras, see 5.11, 6.4, and 6.2. We note that $Fp_{0}+Fn_{0}$ is not commutative, in fact $p_{0}n_{0}=n_{0}\neq 0=n_{0}p_{0}$. So we can exclude from the list of subalgebras each algebra that contains a subalgebra isomorphic to $Fp_{0}+Fn_{0}$, or isomorphic to $\kappa(Fp_{0}+Fn_{0})=F\overline{p_{0}}+Fn_{0}$. In particular, we note that $n_{0}\mathbb{O}$ is excluded by the observation $p_{0}=\left(\begin{smallmatrix}1&0\\\ 0&0\end{smallmatrix}\right)=\left(\begin{smallmatrix}0&1\\\ 0&0\end{smallmatrix}\right)\left(\begin{smallmatrix}0&0\\\ 1&0\end{smallmatrix}\right)\in n_{0}\mathbb{H}\subseteq n_{0}\mathbb{O}$. This leaves us with the following list: * • Subalgebras of dimension $1$: these are clearly commutative. * • Subalgebras in the orbit of $Q:=(Fp_{0}+Fn_{0})w$: here the multiplication is trivial, and thus commutative. * • Subalgebras in the orbit of $F+Q$: these algebras are commutative because $Q$ is. * • Heisenberg algebras (in the orbit of $Fn_{0}+Q=(n_{0}w)\mathbb{O}\cap\mathbb{O}(n_{0}w)$, see 2.7(b)); the only nontrivial products of elements in $\\{n_{0},p_{0}w,n_{0}w\\}$ are $n_{0}(p_{0}w)=n_{0}w$ and $(p_{0}w)n_{0}=-n_{0}w$. So the Heisenberg algebras are commutative if, and only if, the field $F$ has characteristic two. * • Subalgebras in the orbit of $F+Fn_{0}+Q$: such an algebra is commutative if, and only if, the Heisenberg algebra $Fn_{0}+Q$ is commutative, and we already know that this happens precisely if $\operatorname{char}{F}=2$. * • Totally inseparable extension fields of degree $4$ (only possible if $\operatorname{char}{F}=2$) are commutative. * • Subalgebras in the orbit of $B+Q$, where $B\subseteq\mathbb{H}$ is either a two-dimensional composition algebra, or an inseparable quadratic field extension.
For $b\in B$ and $x\in Fp_{0}+Fn_{0}$ we compute $b(xw)=(xb)w$ and $(xw)b=(x\overline{b})w$. If $B\subseteq 1^{\perp}$ then $\overline{b}=b$ holds for each $b\in B$, and $B+Q$ is commutative. If $B\not\subseteq 1^{\perp}$ then $B$ is a composition algebra. So either $B\cong F\times F$, or $B$ is a separable quadratic extension field. In both cases, we find $b\in B$ such that $b-\overline{b}$ is invertible, and $b(xw)=(xw)b$ implies $x=0$. So the algebra $B+Q$ is not commutative in these cases. Our discussion yields that the commutative subalgebras of $\mathbb{O}$ are exactly those given in the statement of the theorem. From 7.1 we infer that each one of these commutative subalgebras is in fact associative. ∎ ## 8\. Maximal subalgebras ###### 8.1 The lattice of isomorphism types of subalgebras. Let $p,n,m$ be elements of a (necessarily split) octonion algebra $\mathbb{O}$ with $p^{2}=p\notin\\{0,1\\}$, $n^{2}=0=m^{2}$, and such that $m,n$ are linearly independent. Moreover, assume that $pn=n$ and $np=0=nm$ (then $mn=0$). We abbreviate $S:=F+Fp\cong F\times F$, and $Q:=Fm+Fn$. Note that $S$ is a split two-dimensional composition algebra, while the multiplication on $Q$ is trivial. Note also that $T:={F+Fp+Fn}$ is isomorphic to the algebra of upper triangular matrices in $F^{2\times 2}$, and that $n\mathbb{O}\cap\mathbb{O}n$ is a Heisenberg algebra (see 2.7). Let $X$ be any subalgebra of dimension $d$ in $\mathbb{O}$. Our classification of subalgebras (see 5.11, 6.2, and 6.4) can be rephrased as follows. $d=1$:: Then $X$ is in the orbit of $Y\in\\{F,Fp,Fn\\}$ under $\operatorname{Aut}_{F}(\mathbb{O})$.
$d=2$:: Either $X$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $Y\in\\{S,{F+Fn},{Fn+Fp},{Fn+F\overline{p}},Q\\}\,,$ or $X$ is a separable extension field $E$ over $F$, or $\operatorname{char}{F}=2$ and $X$ is an inseparable extension field $D$ of $F$. Moreover, any subalgebra isomorphic to $E$ or $D$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $E$ or $D$, respectively. $d=3$:: Then $X$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $Y\in\\{T,F+Q,{m\mathbb{O}\cap\mathbb{O}n},{n\mathbb{O}\cap\mathbb{O}n}\\}$. $d=4$:: Then $X$ is in the orbit of $Y$ such that either $Y\in\\{{F+(n\mathbb{O}\cap\mathbb{O}n)},{n\mathbb{O}},{\mathbb{O}n}\\}$, or one of the following holds: * •: $Y\cong F^{2\times 2}$ is a split quaternion algebra, * •: there is a subalgebra $S\cong F\times F$ contained in $Q^{\perp}$ such that $Y={S+Q}$, * •: $Y$ is a quaternion field (denoted by $H$ in Figure 1), * •: there exists a separable extension field $E$ of degree $2$ contained in $Q^{\perp}$ such that $Y=E+Q$, * •: $\operatorname{char}{F}=2$ and there exists an inseparable extension field $D$ of degree $2$ contained in $Q^{\perp}$ such that $Y=D+Q$, * •: $\operatorname{char}{F}=2$ and $Y$ is a totally inseparable extension field (denoted by $K$ in Figure 1) of degree $4$ and exponent $1$. $d=5$:: Then $X$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $n\mathbb{O}+\mathbb{O}n$. $d=6$:: Then $X$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $Q^{\perp}=m\mathbb{O}+n\mathbb{O}=\mathbb{O}m+\mathbb{O}n$. The algebras $F$ and $Fp$ are isomorphic but not in the same orbit under $\operatorname{Aut}_{F}(\mathbb{O})$. In any other case, two subalgebras of $\mathbb{O}$ are in the same orbit under $\operatorname{Aut}_{F}(\mathbb{O})$ if, and only if, the subalgebras are isomorphic (as abstract $F$-algebras). Consider subalgebras $X\neq F$ and $Y^{*}$ with $\dim{Y^{*}}>1$. Since every subalgebra isomorphic to $Y^{*}$ is in the $\operatorname{Aut}_{F}(\mathbb{O})$-orbit of $Y^{*}$, there exists a subalgebra $X^{*}$ isomorphic to $Y^{*}$ with $X\subseteq X^{*}$ if, and only if, the algebra $Y^{*}$ has a subalgebra $Y$ isomorphic to $X$. We can thus describe inclusions between subalgebras by a graph with the $\operatorname{Aut}_{F}(\mathbb{O})$-orbits of subalgebras as vertices, and edges indicating embeddings. In Figure 1 we have further simplified that graph by identifying orbits of subalgebras that contain extension fields; corresponding vertices are indicated by doubled boundaries at their labels (namely, $E$, $H$, $E+Q$, $D$, $K$, $D+Q$). Recall that the existence of such extensions heavily depends on the structure of $F$ (e.g., no such extension exists if $F$ is algebraically closed); the corresponding embeddings (indicated by dotted lines in Figure 1) exist only if the subalgebras in question exist. The inseparable extensions only exist if $\operatorname{char}{F}=2$; this is indicated by dashed boundaries (and dashed edges indicating embeddings).
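For concreteness, we note that the relations required in 8.1 are satisfied, for example, by $p=p_{0}$, $n=n_{0}$ and $m=n_{0}w$; this is only an illustrative choice, checked with the multiplication rules recorded in the proofs of 6.4 and 7.3: $\displaystyle p_{0}^{2}=p_{0},\quad n_{0}^{2}=0,\quad p_{0}n_{0}=n_{0},\quad n_{0}p_{0}=0,\quad n_{0}(n_{0}w)=(n_{0}n_{0})w=0,\quad(n_{0}w)^{2}=0,$ where the last identity holds because $\overline{n_{0}w}=-n_{0}w$ and $(\overline{yw})(xw)=0$ for all $x,y\in Fp_{0}+Fn_{0}$.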
Figure 1. The lattice of orbits of subalgebras, see 8.1. In Figure 1, the maximal element $\mathbb{O}$ is omitted. The maximal proper subalgebras are in the orbits with black labels, i.e., those represented by non-split quaternion subalgebras, totally inseparable extensions of degree $4$ and exponent $1$, and $Q^{\perp}$. Grey labels indicate subalgebras that do not contain $1$ (i.e., totally singular ones), and rectangular labels (as opposed to elliptical ones) indicate associative algebras. Rectangular labels with rounded corners indicate commutative algebras (all of them are associative). Double lining is used for subalgebras whose existence depends on the ground field, and dashed lines indicate subalgebras that only exist if $\operatorname{char}{F}=2$. Finally, the algebras $n\mathbb{O}\cap\mathbb{O}n$ and $F+(n\mathbb{O}\cap\mathbb{O}n)$ are commutative precisely if $\operatorname{char}{F}=2$. Figures 2 and 3 give the simpler graphs for the cases where $\operatorname{char}{F}\neq 2$, and where $\operatorname{char}{F}\neq 2$ and $F$ allows no quadratic field extension. If $F$ is finite then the vertices labeled by “$K$” or by “$H$” have to be omitted from the graph in Figure 1. ###### 8.2 Remarks. Let $\mathbb{O}$ be the split octonion algebra over $F$; we know that $\mathbb{O}$ is unique up to isomorphism. If there does exist a quadratic extension field (separable or not) over $F$ then the split quaternion algebra $F^{2\times 2}$ over $F$ contains subalgebras isomorphic to that extension field. Doubling $F^{2\times 2}$ we obtain a split octonion algebra, and thus see that $\mathbb{O}$ contains subalgebras isomorphic to any quadratic extension field of $F$. If there exists a quaternion field $H$ over $F$ then a suitable double of $H$ is a split octonion algebra, and we obtain that $\mathbb{O}$ contains subalgebras isomorphic to $H$. However, using the Kaplansky radical (cp. [14, Ch. XII, Prop. 6.1, pp.
450 f]) one sees that there do exist fields $F$ with quadratic extensions of $F$ that are not contained in any quaternion field over $F$ — although there do exist quaternion fields over $F$ (see the examples in [14, p. 461] or [2, 2.4]). If $F$ is such a field, then the split octonion algebra over $F$ contains quaternion fields and also quadratic extensions that are not contained in any quaternion field, so the inclusion between $E$ and $H$ in Figure 1 has to be interpreted with due care. Finally, assume that there exists a totally inseparable extension field $K$ of degree $4$ and exponent $1$ over $F$. Then [7, 4.3, 4.6] shows that there is a split octonion algebra containing a subalgebra isomorphic to $K$. $F\vphantom{p}$$Fp$$Fn\vphantom{p}$$Fn+Fp$$Q\vphantom{p}$$Fn+F{\mathchoice{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\displaystyle{p}\hfil$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\textstyle{p}$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\scriptstyle{p}\hfil$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\scriptscriptstyle{p}\hfil$\crcr}}}}$$m\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\vphantom{p}$$\mathbb{O}n\vphantom{p}$$n\mathbb{O}+\mathbb{O}n\vphantom{p}$$\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}Q^{\perp}$$E\vphantom{p}$$\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}H\vphantom{p}$$S\vphantom{p}$$F+Fn\vphantom{p}$$T\vphantom{p}$$F+Q\vphantom{p}$$F^{2\times 2}$$F+(n\mathbb{O}\cap\mathbb{O}n)\vphantom{p}$ $E+Q\vphantom{p}$ $S+Q\vphantom{p}$ Figure 2. The lattice of orbits of subalgebras for the case where $\operatorname{char}{F}\neq 2$, see 8.1. $F\vphantom{p}$$Fp$$Fn\vphantom{p}$$Fn+Fp$$Q\vphantom{p}$$Fn+F{\mathchoice{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\displaystyle{p}\hfil$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\textstyle{p}$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\scriptstyle{p}\hfil$\crcr}}}{\vbox{\halign{#\cr\kern 1.0pt\cr\kern 1.0pt\hrulefill\crcr\kern 1.0pt\nointerlineskip\cr$\hfil\scriptscriptstyle{p}\hfil$\crcr}}}}$$m\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\cap\mathbb{O}n\vphantom{p}$$n\mathbb{O}\vphantom{p}$$\mathbb{O}n\vphantom{p}$$n\mathbb{O}+\mathbb{O}n\vphantom{p}$$\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}Q^{\perp}$$S\vphantom{p}$$F+Fn\vphantom{p}$$T\vphantom{p}$$F+Q\vphantom{p}$$F^{2\times 2}$$F+(n\mathbb{O}\cap\mathbb{O}n)\vphantom{p}$ $S+Q\vphantom{p}$ Figure 3. The lattice of orbits of subalgebras for the case where $F$ is quadratically closed and $\operatorname{char}{F}\neq 2$, see 8.1. ###### 8.3 Open Problems. For each subalgebra $X$ in a given octonion algebra $\mathbb{O}$ over $F$: 1. ((a)) Determine the full group $\operatorname{Aut}(X)$ of all $\mathbb{Z}$-linear automorphisms of $X$, and the group $\operatorname{Aut}_{F}(X)$ of all algebra automorphisms of $X$. 2. 
(b) Determine the stabilizers $\operatorname{Aut}(\mathbb{O})_{X}$ and $\operatorname{Aut}_{F}(\mathbb{O})_{X}$ of $X$ in $\operatorname{Aut}(\mathbb{O})$ and in $\operatorname{Aut}_{F}(\mathbb{O})$, respectively. (c) Determine the subgroups of $\operatorname{Aut}(X)$ induced by $\operatorname{Aut}(\mathbb{O})_{X}$ and by $\operatorname{Aut}_{F}(\mathbb{O})_{X}$, respectively.

## References

* [1] C. Arf, _Untersuchungen über quadratische Formen in Körpern der Charakteristik 2. I_ , J. Reine Angew. Math. 183 (1941), 148–167, 10.1515/crll.1941.183.148. MR 0008069. Zbl 0025.01403. JfM 67.0055.02.
* [2] K. J. Becher and D. B. Leep, _The Kaplansky radical of a quadratic field extension_ , J. Pure Appl. Algebra 218 (2014), no. 9, 1577–1582, 10.1016/j.jpaa.2013.12.009. MR 3188856. Zbl 1346.11031.
* [3] A. Blunck, N. Knarr, B. Stroppel, and M. J. Stroppel, _Transitive groups of similitudes generated by octonions_ , J. Group Theory 21 (2018), no. 6, 1001–1050, 10.1515/jgth-2018-0018. MR 3871471. Zbl 1439.20059.
* [4] P. M. Cohn, _Further algebra and applications_ , Springer-Verlag London Ltd., London, 2003, 10.1007/978-1-4471-0039-3. MR 1953987. Zbl 1006.00001.
* [5] J. A. Dieudonné, _La géométrie des groupes classiques_ , Ergebnisse der Mathematik und ihrer Grenzgebiete (N.F.) 5, Springer-Verlag, Berlin, 1955, 10.1007/978-3-662-59144-4. MR 0072144. Zbl 0221.20056.
* [6] S. M. Gagola, III, _Maximal subalgebras of the octonions_ , J. Pure Appl. Algebra 217 (2013), no. 1, 20–21, 10.1016/j.jpaa.2012.03.015. MR 2965899. Zbl 1323.17004.
* [7] S. Garibaldi and H. P. Petersson, _Wild Pfister forms over Henselian fields, $K$-theory, and conic division algebras_, J. Algebra 327 (2011), 386–465, 10.1016/j.jalgebra.2010.07.039. MR 2746044. Zbl 1222.17009.
* [8] A. N. Grishkov, M. d. L. M. Giuliani, and A. V. Zavarnitsine, _Classification of subalgebras of the Cayley algebra over a finite field_ , J. Algebra Appl. 9 (2010), no. 5, 791–808, 10.1142/S0219498810004233. MR 2726555. Zbl 1209.17026.
* [9] L. C. Grove, _Classical groups and geometric algebra_ , Graduate Studies in Mathematics 39, American Mathematical Society, Providence, RI, 2002, 10.1090/gsm/039. MR 1859189. Zbl 0990.20001.
* [10] N. Jacobson, _Basic algebra. II_ , W. H. Freeman and Company, New York, 2nd ed., 1989. MR 1009787. Zbl 0694.16001.
* [11] N. Knarr and M. J. Stroppel, _Baer involutions and polarities in Moufang planes of characteristic two_ , Adv. Geom. 13 (2013), no. 3, 533–546, 10.1515/advgeom-2012-0016. MR 3100925. Zbl 06202450.
* [12] N. Knarr and M. J. Stroppel, _Polarities and planar collineations of Moufang planes_ , Monatsh. Math. 169 (2013), no. 3-4, 383–395, 10.1007/s00605-012-0409-6. MR 3019290. Zbl 06146027.
* [13] N. Knarr and M. J. Stroppel, _Subforms of norm forms of octonion fields_ , Arch. Math. (Basel) 110 (2018), no. 3, 213–224, 10.1007/s00013-017-1129-x. MR 3761133. Zbl 06844626.
* [14] T. Y. Lam, _Introduction to quadratic forms over fields_ , Graduate Studies in Mathematics 67, American Mathematical Society, Providence, RI, 2005, 10.1090/gsm/067. MR 2104929. Zbl 1068.11023.
* [15] H. Mäurer, _Die Quaternionenschiefkörper_ , Math. Semesterber. 46 (1999), no. 1, 93–96, 10.1007/s005910050055. MR 1681303. Zbl 0937.11055.
* [16] R. Moufang, _Alternativkörper und der Satz vom vollständigen Vierseit $(\mathrm{D}_{9})$_, Abh. Math. Sem. Univ. Hamburg 9 (1933), 207–222, 10.1007/BF02940648. Zbl 0007.07205. JfM 59.0551.03.
* [17] H. P. Petersson, _Maximal subalgebras of octonions_ , J. Pure Appl. Algebra 217 (2013), no. 9, 1700–1701, 10.1016/j.jpaa.2012.12.004. MR 3042630. Zbl 1323.17005.
* [18] G. Pickert, _Projektive Ebenen_ , Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete LXXX, Springer-Verlag, Berlin, 1955. MR 0073211. Zbl 0066.38707.
* [19] M. L. Racine, _On maximal subalgebras_ , J. Algebra 30 (1974), 155–180, 10.1016/0021-8693(74)90198-7. MR 349771. Zbl 0282.17009.
* [20] R. D. Schafer, _An introduction to nonassociative algebras_ , Pure and Applied Mathematics 22, Academic Press, New York, 1966. MR 0210757. Zbl 0145.25601.
* [21] G. J. Schellekens, _On a hexagonic structure. I_ , Nederl. Akad. Wetensch. Proc. Ser. A 65 = Indag. Math. 24 (1962), 201–217, 10.1016/S1385-7258(62)50019-X. MR 0143075. Zbl 0105.13001.
* [22] T. A. Springer and F. D. Veldkamp, _Octonions, Jordan algebras and exceptional groups_ , Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2000, 10.1007/978-3-662-12622-6. MR 1763974. Zbl 1087.17001.
* [23] J. Tits and R. M. Weiss, _Moufang polygons_ , Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2002. MR 1938841. Zbl 1010.20017.
* [24] F. van der Blij and T. A. Springer, _Octaves and triality_ , Nieuw Arch. Wisk. (3) 8 (1960), 158–169. MR 0123622. Zbl 0127.11804.
* [25] E. Witt, _Theorie der quadratischen Formen in beliebigen Körpern_ , J. Reine Angew. Math. 176 (1937), 31–44, 10.1515/crll.1937.176.31, https://gdz.sub.uni-goettingen.de/download/pdf/PPN243919689_0176/LOG_0008.pdf. MR 1581519. Zbl 0015.05701. JfM 62.0106.02.

Norbert Knarr, Markus J. Stroppel, LExMath, Fakultät 8, Universität Stuttgart, 70550 Stuttgart, <EMAIL_ADDRESS>
# Flag transitive geometries with trialities and no dualities coming from Suzuki groups

Dimitri Leemans Dimitri Leemans, Université Libre de Bruxelles, Département de Mathématique, C.P.216 - Algèbre et Combinatoire, Boulevard du Triomphe, 1050 Brussels, Belgium, Orcid number 0000-0002-4439-502X. <EMAIL_ADDRESS>, Klara Stokes Klara Stokes, Department of Mathematics and Mathematical Statistics, Umeå University, 901 87 Umeå, Sweden, Orcid number 0000-0002-5040-2089. <EMAIL_ADDRESS>and Philippe Tranchida Philippe Tranchida, Université Libre de Bruxelles, Département de Mathématique, C.P.216 - Algèbre et Combinatoire, Boulevard du Triomphe, 1050 Brussels, Belgium, Orcid number 0000-0003-0744-4934. <EMAIL_ADDRESS>

###### Abstract. Recently, Leemans and Stokes constructed an infinite family of incidence geometries admitting trialities but no dualities from the groups $\operatorname{PSL}(2,q)$ (where $q=p^{3n}$ with $p$ a prime and $n$ a positive integer). Unfortunately, these geometries are not flag transitive. In this paper, we construct the first infinite family of incidence geometries of rank three that are flag transitive and have trialities but no dualities. These geometries are constructed using chamber systems of Suzuki groups $\operatorname{Sz}(q)$ (where $q=2^{2e+1}$ with $e$ a positive integer such that $2e+1$ is divisible by 3), and the trialities come from field automorphisms. We also construct an infinite family of regular hypermaps with automorphism group $\operatorname{Sz}(q)$ that admit trialities but no dualities. ###### Key words and phrases: Incidence geometry, triality, Suzuki groups ###### 1991 Mathematics Subject Classification: 51A10, 51E24, 20C33 This research was made possible thanks to an Action de Recherche Concertée grant from the Communauté Française Wallonie-Bruxelles. ## 1\. Introduction In [9], Leemans and Stokes showed how to construct coset geometries having trialities by using a group $G$ that has outer automorphisms of order three. They gave examples for the smallest simple Suzuki group $\operatorname{Sz}(8)$, some small $\operatorname{PSL}(2,q)$ groups (with $q\in\\{8,27,64,125,512\\}$), and the Mathieu group $M_{11}$. They used this technique in [10] to construct the first infinite family of coset geometries admitting trialities but no dualities. These geometries are of rank four and are constructed using regular maps of Wilson class III for the groups $\operatorname{PSL}(2,q)$, where $q=p^{3n}$ for a prime $p$ and a positive integer $n>0$. These geometries are not flag transitive: the group $\operatorname{PSL}(2,q)$ has two orbits on their chambers. The geometries for $\operatorname{Sz}(8)$ mentioned in [9] are known to be flag transitive thanks to [8], where Leemans classified all thin flag transitive geometries having a group $\operatorname{Sz}(8)$ as type-preserving automorphism group. Among the 183 thin geometries found for $\operatorname{Sz}(8)$, four of them admit trialities but no dualities. Nevertheless, in [9], Leemans and Stokes point out that they do not know if, in the general case, the regular hypermaps they construct are flag transitive coset geometries. The goal of this article is to prove that at least some of these hypermaps are indeed flag transitive coset geometries, thereby providing the first infinite family of flag transitive geometries having trialities but no dualities.
Using chamber systems, we give a geometric description of rank three flag transitive coset geometries $\Gamma(G,(G_{i})_{i\in\\{0,1,2\\}})$ admitting trialities but no dualities. Here $G=\operatorname{Sz}(q)$ is the Suzuki group over the finite field of order $q$ (where $q=2^{2e+1}$ for some positive integer $e$ such that $2e+1$ is divisible by 3) and the $G_{i}$’s are suitably chosen dihedral subgroups of $G$. These geometries all have the same diagram: a triangle whose three edges each carry the label $5$. Most of the work in this article generalizes to constructions of similar geometries whose diagrams carry, instead of $5$, any integer $m$ dividing one of $q-1$, $q+\sqrt{2q}+1$ or $q-\sqrt{2q}+1$. At the moment, we can only prove that these geometries are flag transitive for $m=5$. For any integer $m$ dividing one of $q-1$, $q+\sqrt{2q}+1$ or $q-\sqrt{2q}+1$ and such that $m\equiv 2\pmod{3}$, we nevertheless show that they are regular hypermaps, see Corollary 3.7. These hypermaps also admit trialities but no dualities, in the sense of operations on hypermaps (see [6]). Hypermaps in the context of Suzuki groups were already investigated in [3], where the authors give a formula to count the number of regular and orientably regular maps and hypermaps with automorphism group $\operatorname{Sz}(q)$. The paper is organised as follows. In Section 2, we recall the basic properties of the Suzuki groups and the notions in incidence geometry needed to understand this paper. We also recall the definition of a hypermap. In Section 3, we construct the geometries mentioned above and show that they are residually connected, flag transitive, and that they admit trialities but no dualities. We conclude the paper with a series of remarks in Section 4. ## 2\. Preliminaries We start with an introduction to the Suzuki groups $\operatorname{Sz}(q)$ and some of their properties. We then define incidence geometries, chamber systems and hypermaps. ### 2.1. Suzuki groups Here we introduce the Suzuki groups, one of the 18 infinite families of finite simple groups. We mainly follow the exposition in [11], where many more details can be found. Let $\operatorname{\mathbb{F}}=\operatorname{\mathbb{F}}(q)$ be the field of order $q=2^{2e+1}$ for some integer $e\geq 1$. Let $\Phi\colon\operatorname{\mathbb{F}}\to\operatorname{\mathbb{F}}$ be the Frobenius automorphism sending $x$ to $x^{2}$ and let $\theta$ be the automorphism $x\mapsto x^{r}$ with $r=2^{e+1}$, so that $\theta^{2}=\Phi$. Let ${\sf P}=\operatorname{\mathbf{P}\mathbf{G}}(3,\operatorname{\mathbb{F}})$ be the $3$-dimensional projective space over $\operatorname{\mathbb{F}}$ and let $(X_{0}:X_{1}:X_{2}:X_{3})$ be homogeneous coordinates for $\sf P$. Let $E$ be the plane of $\sf P$ defined by the equation $X_{0}=0$ and set $U=(0:1:0:0)$. Let $x=X_{2}X_{0}^{-1}\quad\quad\quad\quad y=X_{3}X_{0}^{-1}\quad\quad\quad\quad z=X_{1}X_{0}^{-1}$ be affine coordinates on the affine space ${\sf P}_{E}={\sf P}\setminus E$. Finally, define $\operatorname{\mathcal{O}}$ to be the set of points of $\sf P$ consisting of $U$ together with all the points of ${\sf P}_{E}$ that satisfy the equation $z=xy+x^{\theta+2}+y^{\theta}.$ This set $\operatorname{\mathcal{O}}$ is called a Suzuki-Tits ovoid. The Suzuki group $G:=\operatorname{Sz}(q)$ is then the group of projectivities of $\sf P$ that preserve $\operatorname{\mathcal{O}}$.
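For concreteness, consider the smallest instance $e=1$ (a worked example we add for illustration): then $q=8$, $r=2^{e+1}=4$ and $\theta\colon x\mapsto x^{4}$, and since $x^{8}=x$ for all $x\in\operatorname{\mathbb{F}}(8)$ we indeed get $\theta^{2}(x)=x^{16}=x^{2}=\Phi(x)$. Moreover, every pair $(x,y)\in\operatorname{\mathbb{F}}^{2}$ determines exactly one affine point of $\operatorname{\mathcal{O}}$ (the coordinate $z$ is computed from $x$ and $y$), so $|\operatorname{\mathcal{O}}|=q^{2}+1$, which equals $65$ in this case.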
The definition of ovoid is due to Tits [14]. Let $\operatorname{\mathcal{O}}$ be a set of points of some projective space $\Sigma$. Then $\operatorname{\mathcal{O}}$ is called an ovoid if every line of $\Sigma$ intersects $\operatorname{\mathcal{O}}$ in at most two points, and if, for each point $p\in\operatorname{\mathcal{O}}$, the union of the lines of $\Sigma$ intersecting $\operatorname{\mathcal{O}}$ precisely in $p$ is a hyperplane. It was proven by Tits that the set $\operatorname{\mathcal{O}}$ we defined here is indeed an ovoid under this definition. We will mostly understand the Suzuki group $G$ by its action on $\operatorname{\mathcal{O}}$. We very briefly state some of the properties of this action. The group $G$ acts doubly transitively on $\operatorname{\mathcal{O}}$ and only the identity element fixes $3$ points of $\operatorname{\mathcal{O}}$. Let $S$ be a Sylow $2$-subgroup of $G$. Then $S$ is contained in its normalizer $N_{G}(S)$, a Frobenius group which is the stabilizer $G_{P}$ of some point $P$ of $\operatorname{\mathcal{O}}$ and has cardinality $q^{2}(q-1)$. The center $Z(S)$ of $S$ is $Z(S)=\\{\gamma\in S\mid\gamma^{2}=1\\}$ and is of cardinality $q$. All elements of even order of $G$ are conjugate to some element in $S$, and therefore have a unique fixed point. The stabilizer $G_{P,Q}$ of two points $P$ and $Q$ of $\operatorname{\mathcal{O}}$ is a cyclic group of order $q-1$, which is a Frobenius complement of $G_{P}$. The cardinality of $G$ is $(q^{2}+1)q^{2}(q-1)$, which can be deduced from the double transitivity together with $|G_{P,Q}|=q-1$. The Frobenius automorphism $\Phi$ also acts on $\operatorname{\mathcal{O}}$. Conjugating elements of $G$ by the action of $\Phi$ gives outer automorphisms of $G$. In fact, $\operatorname{Out}(G)$ is cyclic of order $2e+1$ and is generated by the action of $\Phi$. The structure of the maximal subgroups of $G$ is very well understood. For two groups $H$ and $K$, the notation $H:K$ denotes a semi-direct product of $H$ by $K$. Here are the facts we will need about maximal subgroups; the proofs can be found in [12]. ###### Theorem 2.1. Let $\alpha_{q}=2^{2e+1}+2^{e+1}+1$ and $\beta_{q}=2^{2e+1}-2^{e+1}+1$. The maximal subgroups of $\operatorname{Sz}(q)$ are all conjugate to one of the following. (1) A Suzuki subgroup $\operatorname{Sz}(q_{0})$ where $\operatorname{\mathbb{F}}_{q_{0}}\subset\operatorname{\mathbb{F}}_{q}$ is a field extension with $q_{0}=2^{2e_{0}+1}$ for some positive integer $e_{0}$ and $q_{0}$ is maximal for this property, (2) A dihedral group $D_{2(q-1)}$, (3) A group isomorphic to $C_{\alpha_{q}}:C_{4}$, (4) A group isomorphic to $C_{\beta_{q}}:C_{4}$. Moreover, the cyclic subgroups $C_{q-1},C_{\alpha_{q}}$ and $C_{\beta_{q}}$ all intersect trivially. We conclude this section with a technical lemma that will be of crucial importance later. ###### Lemma 2.2. [11, Lemma 24.3] Let $A$ be the set of involutions of $G$ fixing a point $P\in\operatorname{\mathcal{O}}$ and let $B$ be the set of involutions of $G$ fixing a second point $Q\in\operatorname{\mathcal{O}}$, with $Q\neq P$. Then, for any $\omega\in A$ and any distinct $\zeta,\zeta^{\prime}\in B$, the element $\omega\zeta$ is never conjugate to $\omega\zeta^{\prime}$.
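Before moving on, here is a quick sanity check on the parameters $\alpha_{q}$ and $\beta_{q}$ (a verification we add): $\alpha_{q}\beta_{q}=(q+\sqrt{2q}+1)(q-\sqrt{2q}+1)=(q+1)^{2}-2q=q^{2}+1$, so the order of $G$ factors as $|G|=q^{2}(q-1)\alpha_{q}\beta_{q}$. For $q=8$ this gives $\alpha_{8}=13$, $\beta_{8}=5$ and $|\operatorname{Sz}(8)|=65\cdot 64\cdot 7=29120$.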
### 2.2. Incidence and coset geometries At their core, most of the geometric objects of interest to mathematicians are composed of elements together with some relation between them. This very general notion is made precise by the notion of an incidence system, or an incidence geometry. For a more detailed introduction to incidence geometry, we refer to [1]. A triple $\Gamma=(X,*,t)$ is called an incidence system over $I$ if (1) $X$ is a set whose elements are called the elements of $\Gamma$, (2) $*$ is a symmetric and reflexive relation (called the incidence relation) on $X$, and (3) $t$ is a map from $X$ to $I$, called the type map of $\Gamma$, such that distinct elements $x,y\in X$ with $x*y$ satisfy $t(x)\neq t(y)$. Elements of $t^{-1}(i)$ are called the elements of type $i$. The rank of $\Gamma$ is the cardinality of the type set $I$. A flag in an incidence system $\Gamma$ over $I$ is a set of pairwise incident elements. The type of a flag $F$ is $t(F)$, that is, the set of types of the elements of $F$. A chamber is a flag of type $I$. An incidence system $\Gamma$ is an incidence geometry if all its maximal flags are chambers. Let $F$ be a flag of $\Gamma$. An element $x\in X$ is incident to $F$ if $x*y$ for all $y\in F$. The residue of $\Gamma$ with respect to $F$, denoted by $\Gamma_{F}$, is the incidence system formed by all the elements of $\Gamma$ incident to $F$ but not in $F$. The rank of the residue $\Gamma_{F}$ is equal to $\operatorname{rank}(\Gamma)-|F|$. The incidence graph of $\Gamma$ is a graph with vertex set $X$ and where two elements $x$ and $y$ are connected by an edge if and only if $x*y$. Whenever we talk about the distance between two elements $x$ and $y$ of a geometry $\Gamma$, we mean the distance in the incidence graph of $\Gamma$ and simply denote it by $d_{\Gamma}(x,y)$, or even $d(x,y)$ if the context allows. Let $\Gamma=\Gamma(X,*,t)$ be an incidence geometry over the type set $I$. A correlation of $\Gamma$ is a bijection $\phi$ of $X$ respecting the incidence relation $*$ and such that, for every $x,y\in X$, if $t(x)=t(y)$ then $t(\phi(x))=t(\phi(y))$. If, moreover, $\phi$ fixes the types of every element (i.e., $t(\phi(x))=t(x)$ for all $x\in X$), then $\phi$ is said to be an automorphism of $\Gamma$. The _type_ of a correlation $\phi$ is the permutation it induces on the type set $I$. A correlation of type $(i,j)$ is called a duality if it has order $2$ and a correlation of type $(i,j,k)$ is called a triality if it has order $3$. The group of all correlations of $\Gamma$ is denoted by $\operatorname{Cor}(\Gamma)$ and the automorphism group of $\Gamma$ is denoted by $\operatorname{Aut}(\Gamma)$. Remark that $\operatorname{Aut}(\Gamma)$ is a normal subgroup of $\operatorname{Cor}(\Gamma)$ since it is the kernel of the action of $\operatorname{Cor}(\Gamma)$ on $I$. If $\operatorname{Aut}(\Gamma)$ is transitive on the set of chambers of $\Gamma$ then we say that $\Gamma$ is flag transitive. If, moreover, the stabilizer of a chamber in $\operatorname{Aut}(\Gamma)$ is reduced to the identity, we say that $\Gamma$ is simply transitive or regular. Incidence geometries can be obtained from a group $G$ together with a set $(G_{i})_{i\in I}$ of subgroups of $G$ as described in [13]. The _coset geometry_ $\Gamma(G,(G_{i})_{i\in I})$ is the incidence geometry over the type set $I$ where: (1) The elements of type $i\in I$ are right cosets of the form $G_{i}\cdot g$, $g\in G$. (2) The incidence relation is given by non-empty intersection. More precisely, the element $G_{i}\cdot g$ is incident to the element $G_{j}\cdot k$ if and only if $i\neq j$ and $G_{i}\cdot g\cap G_{j}\cdot k\neq\emptyset$.
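As a minimal illustration of this construction (an example we add; the choice of group and subgroups is ours): take $G=\operatorname{Sym}(3)$, $G_{0}=\langle(1\,2)\rangle$ and $G_{1}=\langle(2\,3)\rangle$. Each right coset $G_{0}g=\\{g,(1\,2)g\\}$ meets exactly two of the three right cosets of $G_{1}$, since $G_{1}g=G_{1}(1\,2)g$ would force $(1\,2)\in G_{1}$; symmetrically for the cosets of $G_{1}$. The coset geometry $\Gamma(G,(G_{0},G_{1}))$ is thus the rank two geometry of a triangle: three points, three lines, with every point incident to exactly two lines and vice versa.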
### 2.3. Chamber systems The concept of chamber system was invented by Tits in [16]. A chamber system over $I$ is a pair $\operatorname{\mathcal{C}}=(C,\\{\sim_{i},i\in I\\})$ consisting of a set $C$, whose members are called chambers, and a collection of equivalence relations $\sim_{i}$ on $C$, indexed by $i\in I$. Two chambers $c$ and $d$ are called $i$-adjacent if $c\sim_{i}d$. For $i\in I$, each $\sim_{i}$-equivalence class is called an $i$-panel. The rank of $\operatorname{\mathcal{C}}$ is $|I|$. The chamber system $\operatorname{\mathcal{C}}$ is called thin if every $i$-panel is of size exactly $2$. A weak homomorphism $\varphi\colon(C,\\{\sim_{i},i\in I\\})\to(C^{\prime},\\{\sim_{i^{\prime}},i^{\prime}\in I\\})$ of chamber systems over $I$ is a map $\varphi\colon C\to C^{\prime}$ for which a permutation $\pi$ of $I$ can be found such that, for all $c,d\in C$, the relation $c\sim_{i}d$ implies $\varphi(c)\sim_{\pi(i)}\varphi(d)$. If $\pi=\operatorname{Id}$, the weak homomorphism $\varphi$ is said to be a homomorphism. A bijective homomorphism whose inverse is a homomorphism is called an isomorphism, and an isomorphism from $\operatorname{\mathcal{C}}$ to $\operatorname{\mathcal{C}}$ is called an automorphism of $\operatorname{\mathcal{C}}$. We denote by $\operatorname{Aut}(\operatorname{\mathcal{C}})$ the group of all automorphisms of $\operatorname{\mathcal{C}}$. The graph of $\operatorname{\mathcal{C}}$ is the graph whose vertices are the chambers of $\operatorname{\mathcal{C}}$ and where two chambers $c,d$ are connected by an edge if there exists an $i\in I$ such that $c\sim_{i}d$. The chamber system $\operatorname{\mathcal{C}}$ is connected if the graph of $\operatorname{\mathcal{C}}$ is connected. For $J\subset I$, we denote by $\sim_{J}$ the union of all $\sim_{j}$ with $j\in J$. A connected component of $(\operatorname{\mathcal{C}},\sim_{J})$ is called a $J$-cell of $\operatorname{\mathcal{C}}$. For $i\in I$, the $(I\setminus\\{i\\})$-cells are called $i$-objects of $\operatorname{\mathcal{C}}$. If $\operatorname{\mathcal{C}}$ is a chamber system, the incidence system of $\operatorname{\mathcal{C}}$, denoted by $\Gamma(\operatorname{\mathcal{C}})$, is the incidence system over $I$ determined as follows. Its $i$-elements, for $i\in I$, are the pairs $(x,i)$ with $x$ an $i$-object of $\operatorname{\mathcal{C}}$; two elements $(x,k)$ and $(y,l)$ of $\Gamma(\operatorname{\mathcal{C}})$ are incident if and only if $x\cap y\neq\emptyset$ in $\operatorname{\mathcal{C}}$, i.e., $x$ and $y$ have a chamber in common. A chamber system $\operatorname{\mathcal{C}}$ over $I$ is residually connected if, for every subset $J$ of $I$ and every system of $j$-objects $Z_{j}$, one for each $j\in J$, with the property that any two have a non-empty intersection, it follows that $\bigcap_{j\in J}Z_{j}$ is an $(I\setminus J)$-cell. Let $\psi\colon G\to\operatorname{Aut}(\operatorname{\mathcal{C}})$ be a representation of a group $G$ on a chamber system $\operatorname{\mathcal{C}}$. When $G$ is transitive on the chambers of $\operatorname{\mathcal{C}}$, we say that $G$ is chamber transitive on $\operatorname{\mathcal{C}}$. We also say that $\operatorname{\mathcal{C}}$ is chamber transitive if $\operatorname{Aut}(\operatorname{\mathcal{C}})$ is chamber transitive. Let $G$ be a group, $B$ a subgroup, $(G^{(i)})_{i\in I}$ a system of subgroups of $G$ with $B\subset G^{(i)}$.
The coset chamber system of $G$ on $B$ with respect to $(G^{(i)})_{i\in I}$, denoted by $\operatorname{\mathcal{C}}(G,B,(G^{(i)})_{i\in I})$, has the chamber set consisting of all cosets $gB,g\in G$, and $i$-adjacency determined by $gB\sim_{i}hB$ if and only if $gG^{(i)}=hG^{(i)}$. For $i\in I$, the group $G^{(i)}$ is called the standard parabolic subgroup of type $I\setminus\\{i\\}$. The subgroup $B$ is called the Borel subgroup of $G$. In the cases we will explore in this article, $B$ will always be the identity subgroup. ###### Proposition 2.3. [1, Proposition 3.6.4] If $\psi\colon G\to\operatorname{Aut}(\operatorname{\mathcal{C}})$ is a chamber transitive representation of $G$ on a chamber system $\operatorname{\mathcal{C}}$ over $I$, then, for every chamber $c$ of $\operatorname{\mathcal{C}}$, the canonical representation of $G$ on $\operatorname{\mathcal{C}}(G,B,(G^{(i)})_{i\in I})$ is equivalent to $\psi$, where $G^{(i)}$ is the stabilizer of the $i$-cell containing $c$. This proposition allows us to work with chamber transitive chamber systems in a group theoretical way. The notion of connectedness and residual connectedness can then also be translated into group theoretical language. ###### Lemma 2.4. [1, Lemma 3.6.7] Let $G$ be a group, $B$ a subgroup of $G$, $(G^{(i)})_{i\in I}$ a system of subgroups of $G$ with $B\leq G^{(i)}$. The chamber system $\operatorname{\mathcal{C}}(G,B,(G^{(i)})_{i\in I})$ is connected if and only if $G$ is generated by the subgroups $G^{(i)}$, $i\in I$. For $G$ a group with a system of subgroups $(G^{(i)})_{i\in I}$ and for $J\subset I$, we write $G^{(J)}=\langle G^{(j)}\mid j\in J\rangle$. We use the following theorem. Observe that in [1], the hypothesis that the chamber system is connected is missing. ###### Theorem 2.5. [1, Theorem 3.6.9] Let $I$ be a finite index set. Suppose that $G$ is a group with a system of subgroups $(G^{(i)})_{i\in I}$ and that $B$ is a subgroup of $\bigcap_{i\in I}G^{(i)}$. If the chamber system $\operatorname{\mathcal{C}}(G,B,(G^{(i)})_{i\in I})$ is connected, the following statements are equivalent. (1) The chamber system $\operatorname{\mathcal{C}}(G,B,(G^{(i)})_{i\in I})$ is residually connected. (2) For all $J,K,L\subseteq I$, we have $G^{(L)}\cap G^{(J)}G^{(K)}=G^{(L\cap J)}G^{(L\cap K)}$. (3) For all $J,K,L\subseteq I$, we have $G^{(J)}G^{(L)}\cap G^{(K)}G^{(L)}=G^{(J\cap K)}G^{(L)}$. Finally, if we restrict ourselves to residually connected incidence geometries and residually connected chamber systems, we can go back and forth between the two notions freely. ###### Theorem 2.6. [1, Theorem 3.4.6] Let $\mathcal{G}(I)$ be the collection of all residually connected geometries over $I$ and let $\operatorname{\mathcal{C}}(I)$ be the collection of all residually connected chamber systems over $I$. There is a structure-preserving bijection between the collections $\mathcal{G}(I)$ and $\operatorname{\mathcal{C}}(I)$. Thus, each residually connected geometry over $I$ corresponds to a unique residually connected chamber system over $I$ (up to isomorphism), with the same automorphism group, and vice versa.
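In the special case used throughout this paper, where $B$ is the identity subgroup (a brief unpacking we add for orientation): the chambers of $\operatorname{\mathcal{C}}(G,\\{1\\},(G^{(i)})_{i\in I})$ are simply the elements of $G$, and $g\sim_{i}h$ if and only if $gG^{(i)}=hG^{(i)}$, so the $i$-panels are the cosets $gG^{(i)}$ and have size $|G^{(i)}|$. In particular, if every $G^{(i)}$ has order $2$, the coset chamber system is thin.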
### 2.4. Hypermaps Hypermaps were introduced in [2], see also [7, 5]. A hypermap $\mathcal{M}$ is a transitive permutation representation $\Delta\rightarrow\textup{Sym}(\Sigma)$ of the group $\Delta=\left\langle r_{0},r_{1},r_{2}\mid r_{0}^{2}=r_{1}^{2}=r_{2}^{2}=1\right\rangle\cong C_{2}*C_{2}*C_{2},$ where $\Sigma$ is a set, the elements of which are called flags. One can think of each flag as consisting of a hypervertex, a hyperedge and a hyperface. If the automorphism group $\textup{Aut}(\mathcal{M})$ of the hypermap acts transitively on the flags, the hypermap is called regular. In that case, the hypervertices, the hyperedges and the hyperfaces are the orbits of the dihedral subgroups $\langle r_{1},r_{2}\rangle$, $\langle r_{2},r_{0}\rangle$ and $\langle r_{0},r_{1}\rangle$, respectively, and the hypermap is said to be of type $(p,q,r)$ where $p$ is the order of $r_{1}r_{2}$, $q$ is the order of $r_{2}r_{0}$ and $r$ is the order of $r_{0}r_{1}$. The coset geometry constructed from the automorphism group $\textup{Aut}(\mathcal{M})$ and the three dihedral subgroups is a rank 3 incidence geometry on the type set $\\{\mbox{hypervertex, hyperedge, hyperface}\\}$. A maximal flag (a chamber) of this incidence geometry is a triple consisting of one element of each type such that they are pairwise incident. It is important to note that the notion of a flag is different for hypermaps and incidence geometries; in a hypermap flag the incidence between the objects is a ternary relation, while the incidence relation in the coset geometry is binary. This implies that the coset geometry constructed from a regular hypermap can have more flags than the hypermap. In particular, the transitive action of the automorphism group on the flags of the hypermap does not imply the transitive action of the automorphism group on the chambers of the corresponding coset geometry. See for instance [4, Example 4.4]. ## 3\. The Chamber System approach Let $e>1$ be a positive integer such that $3\mid 2e+1$ and let $q=2^{2e+1}$. Let $G\cong\operatorname{Sz}(q)$ be the Suzuki group acting on the $q^{2}+1$ points of the ovoid $\mathcal{O}$ (see Section 2.1). The aim of this section is to construct a chamber system over $I:=\\{0,1,2\\}$ from $G$. We first fix a few conventions. Regular letters like $q,p$ and $m$ designate integers. Capital letters like $P,Q,R$ designate points of $\operatorname{\mathcal{O}}$. Greek letters like $\rho,\gamma$ and $\tau$ designate group elements, with $\tau$ restricted to outer automorphisms of $G$ of order $3$. Since $3\mid 2e+1$, the underlying field $\operatorname{\mathbb{F}}_{q}$ is an extension of degree 3 of a subfield $\operatorname{\mathbb{F}}_{q_{0}}$ and, therefore, there are outer automorphisms of $G$ of order $3$. We will call any such automorphism a triality of $G$. Let $\tau$ be a triality of $G$ and let $P,Q,R$ be a triple of points of $\mathcal{O}$ permuted cyclically by $\tau$. Since the triality $\tau$ comes from the action of a power of the Frobenius automorphism $\Phi$ on the ovoid $\operatorname{\mathcal{O}}$, we can consider $\tau$ as a permutation of the points of $\operatorname{\mathcal{O}}$. With that in mind, for any point $P\in\operatorname{\mathcal{O}}$, $\tau(P)$ is just the image of $P$ by that permutation, while for an element $\gamma\in G$, we will denote by $\tau(\gamma):=\tau\gamma\tau^{-1}=\gamma^{\tau}$ the element of $G$ that sends any $X\in\operatorname{\mathcal{O}}$ to $\tau\circ\gamma\circ\tau^{-1}(X)$. As such, if $\gamma$ fixes some point $P\in\operatorname{\mathcal{O}}$, then $\tau(\gamma)$ fixes $\tau(P)$.
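Concretely (our reading of the above, added for illustration): since $\operatorname{Out}(G)$ is cyclic of order $2e+1$ generated by the class of $\Phi$, and $3\mid 2e+1$, the trialities are, modulo inner automorphisms, induced by $\Phi^{(2e+1)/3}$ and $\Phi^{2(2e+1)/3}$. For instance, for $q=2^{9}$ one may take $\tau$ to be induced by the field automorphism $x\mapsto x^{8}=\Phi^{3}(x)$, which has order $3$ on $\operatorname{\mathbb{F}}_{2^{9}}$.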
###### Lemma 3.1. Let $P$ be a point of $\operatorname{\mathcal{O}}$ and let $\tau$ be a triality of $G$ such that $\tau(P)\neq P$. Let $A$ be the set of involutions of $G$ with fixed point $P$. Then the set $\\{\rho\cdot\tau(\rho)\mid\rho\in A\\}$ contains only elements of odd order, no two of which are conjugate. ###### Proof. All elements of even order of $G$ have exactly one fixed point. Since $\rho$ and $\tau(\rho)$ are always involutions with different fixed points, their product has either $0$ or $2$ fixed points, and must thus have odd order. It remains to show that, if $\rho\neq\rho^{\prime}\in A$, then $\rho\tau(\rho)$ cannot be conjugate to $\rho^{\prime}\tau(\rho^{\prime})$. Let $H:=G_{P,\tau^{2}(P)}$ be the stabilizer in $G$ of $P$ and $\tau^{2}(P)$. Then $H$ is a cyclic group of order $q-1$. Let $\zeta$ be a generator of $H$. For $\omega:=\zeta^{i}$, $i=1,2,\cdots,q-2$, we have that $(\rho\tau(\rho))^{\omega}=\rho^{\omega}\tau(\rho)^{\omega}=\rho^{\omega}\omega\tau(\rho)\omega^{-1}=\rho^{\omega}\tau(\tau^{-1}(\omega)\rho\tau^{-1}(\omega^{-1}))=\rho^{\omega}\tau(\rho^{\tau^{-1}(\omega)}).$ We claim that $\rho^{\omega}\neq\rho^{\tau^{-1}(\omega)}$. Indeed, suppose that $\rho^{\omega}=\rho^{\tau^{-1}(\omega)}$. Conjugating both sides by $\omega^{-1}$, we obtain that $\rho=\rho^{\tau^{-1}(\omega)\omega^{-1}}$. Set $\beta=\tau^{-1}(\omega)\omega^{-1}=\tau^{2}(\omega)\omega^{-1}$. We have $\rho^{\beta}=\rho$, so $\rho^{\beta}$ fixes $P$; since $\rho^{\beta}$ also fixes $\beta(P)$ and an involution has a unique fixed point, $\beta$ fixes $P$ as well. This means that $\tau^{-1}(\omega)$ must fix $P$ since $\omega^{-1}$ and $\beta$ fix $P$. But $\tau^{-1}(\omega)$ also fixes $\tau^{-1}(P)=\tau^{2}(P)$ and $\tau^{-1}(\tau^{2}(P))=\tau(P)$, so $\tau^{-1}(\omega)$ would have three fixed points, a contradiction. We conclude using Lemma 2.2. Indeed, we showed that $\rho\tau(\rho)$ is conjugate to $\rho^{\omega}\tau(\rho^{\tau^{-1}(\omega)})$, which in turn cannot be conjugate to $\rho^{\omega}\tau(\rho^{\omega})$ by Lemma 2.2, unless $\rho^{\omega}=\rho^{\tau^{-1}(\omega)}$, which we ruled out. Since $H$ acts transitively on $A$, this concludes the lemma. ∎
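The class count quoted at the start of the next proof can be cross-checked against Theorem 2.1 (a verification we add; it uses the standard fact that the normalizers $D_{2(q-1)}$, $C_{\alpha_{q}}:C_{4}$ and $C_{\beta_{q}}:C_{4}$ fuse the non-trivial elements of $C_{q-1}$, $C_{\alpha_{q}}$ and $C_{\beta_{q}}$ into orbits of size $2$, $4$ and $4$, respectively): the number of conjugacy classes of non-trivial elements of odd order is $\frac{q-2}{2}+\frac{\alpha_{q}-1}{4}+\frac{\beta_{q}-1}{4}=\frac{q-2}{2}+\frac{2q}{4}=q-1,$ since $\alpha_{q}+\beta_{q}=2q+2$.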
###### Theorem 3.2 (Existence of base Chamber). Let $m$ be any odd integer that arises as the order of some element of $G$. For any point $P$ of $\operatorname{\mathcal{O}}$ and for any triality $\tau$ of $G$ such that $\tau(P)\neq P$, there exists an involution $\rho_{0}$ fixing $P$ such that $\rho_{0}\cdot\tau(\rho_{0})$ has order $m$. Moreover, if $\langle\rho_{0}\tau(\rho_{0})\rangle\neq\langle\rho_{0}\tau(\rho_{0})\rangle^{\tau}$ then $G$ is generated by $\rho_{0},\tau(\rho_{0})$ and $\tau^{2}(\rho_{0})$. ###### Proof. By [12, §11, Theorem 5], the Suzuki group $G$ has exactly $q-1$ conjugacy classes of elements of odd order. The first part of the statement is thus a direct consequence of Lemma 3.1 as there are exactly $q-1$ involutions fixing a given point $P$. Let $m$ be the order of $\rho_{0}\tau(\rho_{0})$, let $H=\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$, and suppose that $H$ is a proper subgroup of $G$. Then $H$ must be contained in some maximal subgroup of $G$. We use the classification of maximal subgroups provided by Theorem 2.1. Suppose first that $H$ is contained in a maximal subgroup $M$ isomorphic to $D_{2(q-1)}$, $C_{\alpha_{q}}:C_{4}$ or $C_{\beta_{q}}:C_{4}$. By the last sentence of Theorem 2.1, there is a unique cyclic subgroup $K$ of order $m$ in $M$. Hence, the three cyclic subgroups $\langle\rho_{0}\tau(\rho_{0})\rangle,\langle\tau(\rho_{0})\tau^{2}(\rho_{0})\rangle$ and $\langle\tau^{2}(\rho_{0})\rho_{0}\rangle$, which all have order $m$ and lie in $M$, must all be equal to $K$. In particular, $\langle\rho_{0}\tau(\rho_{0})\rangle=\langle\tau(\rho_{0})\tau^{2}(\rho_{0})\rangle=\langle\rho_{0}\tau(\rho_{0})\rangle^{\tau}$, a contradiction. Suppose instead that $M$ is isomorphic to a maximal Suzuki subgroup $\operatorname{Sz}(q_{0})$. If $H=M$ then, since $H$ is stabilized by $\tau$, the subgroup $M$ is also stabilized by $\tau$. This means that $\tau$ restricts to an automorphism of $\operatorname{Sz}(q_{0})$. This implies that $q_{0}=2^{2e_{0}+1}$ with $2e_{0}+1$ divisible by $3$ and therefore, by maximality, $q_{0}=q^{1/3}$. But the only Suzuki subgroup $\operatorname{Sz}(q^{1/3})$ stabilized by $\tau$ is precisely the one fixed element-wise by $\tau$. This is a contradiction since $\tau$ clearly moves $\rho_{0}$. If instead $H$ is again a proper subgroup of $M$, we can repeat the same argument starting with $\operatorname{Sz}(q_{0})$ this time. At some point, we arrive at a contradiction, hence showing that $G=\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$. ∎ Using Theorem 3.2, we can find the base chamber of the chamber system we want to construct. We first investigate some geometric properties of such base chambers and show their algebraic implications. Let $\rho_{0}$ be an involution with fixed point $P$ such that the order of $\rho_{0}\tau(\rho_{0})$ is equal to $m$ for some integer $m$. Let $\rho_{1}:=\tau(\rho_{0})$ and $\rho_{2}:=\tau^{2}(\rho_{0})$. Then $\rho_{1}$ has fixed point $Q=\tau(P)$ and $\rho_{2}$ has fixed point $R=\tau^{2}(P)$. For $i,j,k$ such that $I=\\{i,j,k\\}$, the subgroups $H_{i}=H_{i}(\rho_{0},\tau):=\langle\rho_{j},\rho_{k}\rangle$ are dihedral groups of order $2m$. The existence of such a configuration is guaranteed by Theorem 3.2. Let $\operatorname{\mathcal{O}}_{i}=\operatorname{\mathcal{O}}(H_{i})$ be the set of points of $\operatorname{\mathcal{O}}$ that are fixed by some involution in $H_{i}$. Thus $\operatorname{\mathcal{O}}_{i}$ contains exactly $m$ points for each $i\in I$. ###### Definition 3.3. Let $\operatorname{\mathcal{O}}_{i}$, for $i\in I$, be sets of points of $\operatorname{\mathcal{O}}$. We say that an ordered triple $(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ forms a triangle if $\operatorname{\mathcal{O}}_{i}\cap\operatorname{\mathcal{O}}_{j}\neq\emptyset$ for all $i\neq j$ in $\\{0,1,2\\}$. We say that the triangle $(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ is non-degenerate if the $\operatorname{\mathcal{O}}_{i}$’s are all distinct for $i=0,1,2$. We say that the triangle $(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ is proper if $\operatorname{\mathcal{O}}_{0}\cap\operatorname{\mathcal{O}}_{1}\cap\operatorname{\mathcal{O}}_{2}=\emptyset$. Finally, if $\rho_{0}$ is an involution of $G$ and $\tau$ is a triality as above, we say that $T_{\rho_{0},\tau}:=(\operatorname{\mathcal{O}}_{0}=\operatorname{\mathcal{O}}(H_{0}(\rho_{0},\tau)),\operatorname{\mathcal{O}}_{1}=\operatorname{\mathcal{O}}(H_{1}(\rho_{0},\tau)),\operatorname{\mathcal{O}}_{2}=\operatorname{\mathcal{O}}(H_{2}(\rho_{0},\tau)))$ is the triangle associated to $(\rho_{0},\tau)$. We now show that non-degeneracy of $T_{\rho_{0},\tau}$ implies that $G$ is generated by $\rho_{0},\tau(\rho_{0})$ and $\tau^{2}(\rho_{0})$. ###### Lemma 3.4. If $T_{\rho_{0},\tau}$ is degenerate, then $H_{0}(\rho_{0},\tau)=H_{1}(\rho_{0},\tau)=H_{2}(\rho_{0},\tau)$. ###### Proof. Suppose that $\operatorname{\mathcal{O}}_{0}=\operatorname{\mathcal{O}}_{1}=\operatorname{\mathcal{O}}_{2}$ (since $\tau$ permutes $\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2}$ cyclically, if two of them coincide then all three do) but that the three subgroups $H_{0},H_{1}$ and $H_{2}$ are distinct.
By the second part of Theorem 3.2, we can then deduce that $G=\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$. But all three of $\rho_{0},\tau(\rho_{0})$ and $\tau^{2}(\rho_{0})$ stabilize $\operatorname{\mathcal{O}}_{0}$. Therefore, $\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$ is a subgroup of the stabilizer in $G$ of $\operatorname{\mathcal{O}}_{0}$, which is a proper subgroup since $G$ acts transitively on the whole ovoid $\operatorname{\mathcal{O}}$. This is a contradiction. ∎ ###### Corollary 3.5. If $T_{\rho_{0},\tau}$ is non-degenerate, then $G=\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$. ###### Proof. By Lemma 3.4, we know that the subgroups $H_{0},H_{1},H_{2}$ are distinct. We can then conclude by using the second part of Theorem 3.2. ∎ Figure 1. The base chamber $T_{\rho_{0},\tau}$ of the chamber system. The next lemma gives criteria for $T_{\rho_{0},\tau}$ to be non-degenerate or proper. ###### Lemma 3.6. If $\rho_{0}$ is an involution and $\tau$ is a triality such that $\rho_{0}\tau(\rho_{0})$ is of order $m$ with $m\equiv 2\pmod{3}$, then the triangle $T_{\rho_{0},\tau}=(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ associated to $(\rho_{0},\tau)$ is non-degenerate. Furthermore, if $m=5$, the triangle $T_{\rho_{0},\tau}=(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ is also proper. ###### Proof. Since $\operatorname{\mathcal{O}}_{i}\cap\operatorname{\mathcal{O}}_{j}$ always contains the fixed point of $\rho_{k}$, the triangle associated to any pair $(\rho_{0},\tau)$ is always a triangle in the sense of Definition 3.3. Suppose now that $\operatorname{\mathcal{O}}_{0}=\operatorname{\mathcal{O}}_{1}=\operatorname{\mathcal{O}}_{2}$. Then the triality $\tau$ acts on $\operatorname{\mathcal{O}}_{0}$ by permuting its points. Then, since $|\operatorname{\mathcal{O}}_{0}|=m\equiv 2\pmod{3}$ and the orbits of $\langle\tau\rangle$ on $\operatorname{\mathcal{O}}_{0}$ have size $1$ or $3$, the triality $\tau$ must fix at least $2$ points $X$ and $Y$ of $\operatorname{\mathcal{O}}_{0}$. Since $\tau$ also sends $H_{0}$ to itself by Lemma 3.4, we conclude that $\tau$ must fix $\rho_{X}$ and $\rho_{Y}$, the two involutions of $H_{0}$ having respectively $X$ and $Y$ as fixed points. But $H_{0}$ is generated by $\rho_{X}$ and $\rho_{Y}$, so $\tau$ should act trivially on $H_{0}$, a contradiction. This shows that the triangle is non-degenerate. Suppose now that $m=5$ and that $\operatorname{\mathcal{O}}_{0}\cap\operatorname{\mathcal{O}}_{1}\cap\operatorname{\mathcal{O}}_{2}$ is not empty and contains some point $X\in\operatorname{\mathcal{O}}$. Let $P$ be a point in $\operatorname{\mathcal{O}}_{0}\cap\operatorname{\mathcal{O}}_{1}$ which is not $X$. Let $\rho_{X}^{0}$ be the involution fixing $X$ in $H_{0}$, $\rho_{X}^{1}$ be the involution fixing $X$ in $H_{1}$ and $\rho$ be the involution fixing $P$ in $H_{0}\cap H_{1}$. Then both $\rho\rho_{X}^{0}$ and $\rho\rho_{X}^{1}$ have order $5$. By Lemma 2.2, $\rho\rho_{X}^{0}$ and $\rho\rho_{X}^{1}$ cannot be conjugate. But there is only one conjugacy class of elements of order $5$ in $G$, a contradiction. This shows that the triangle is proper, and concludes the proof. ∎ We are now ready to show that, for a well-chosen value of the order $m$ of $\rho_{0}\tau(\rho_{0})$, we can construct regular hypermaps of type $(m,m,m)$ by letting $G$ act on $T_{\rho_{0},\tau}$. ###### Corollary 3.7. Any triangle $T_{\rho_{0},\tau}$ with $\rho_{0}\tau(\rho_{0})$ of order $m$ and $m\equiv 2\pmod{3}$ defines a hypermap of type $(m,m,m)$ with automorphism group $G$. In particular, for any $q=2^{2e+1}$ with $2e+1$ divisible by 3, there exists a hypermap of type $(\alpha_{q},\alpha_{q},\alpha_{q})$ where $\alpha_{q}=2^{2e+1}+2^{e+1}+1$ or of type $(\beta_{q},\beta_{q},\beta_{q})$ where $\beta_{q}=2^{2e+1}-2^{e+1}+1$, depending on the parity of $e$. ###### Proof. Lemma 3.6, combined with Corollary 3.5, tells us that $G=\langle\rho_{0},\tau(\rho_{0}),\tau^{2}(\rho_{0})\rangle$ whenever the order of $\rho_{0}\tau(\rho_{0})$ is congruent to $2$ modulo $3$. Notice that $2^{2e+1}\equiv-1\pmod{3}$ and $2^{e+1}\equiv(-1)^{e+1}\pmod{3}$, so that $\alpha_{q}\equiv(-1)^{e+1}\pmod{3}$ and $\beta_{q}\equiv(-1)^{e}\pmod{3}$. So for every value of $e$, exactly one of $\alpha_{q}$ and $\beta_{q}$ is congruent to $2\bmod 3$ while the other is congruent to $1\bmod 3$. Choosing the right one then yields a regular hypermap of the wanted type. ∎
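For example (a numerical instance we add for illustration): the smallest case allowed by our standing assumption $e>1$ with $3\mid 2e+1$ is $e=4$, i.e., $q=2^{9}=512$ and $\sqrt{2q}=2^{5}=32$. Then $\alpha_{512}=545\equiv 2\pmod{3}$ and $\beta_{512}=481\equiv 1\pmod{3}$, so Corollary 3.7 yields a regular hypermap of type $(545,545,545)$ with automorphism group $\operatorname{Sz}(512)$. Note also that $q=2^{2e+1}\equiv\pm 2\pmod{5}$ for every $e$, so $5$ always divides $q^{2}+1=\alpha_{q}\beta_{q}$; this guarantees that $G$ contains elements of order $5$, as needed for the choice $m=5$ below.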
Let $\mathcal{M}$ be one of the hypermaps obtained by Corollary 3.7. The triality $\tau$ then sends $\mathcal{M}$ to a map $\tau(\mathcal{M})$ which is isomorphic to $\mathcal{M}$, but where the roles of hypervertices, hyperedges and hyperfaces are permuted cyclically. Such operations of finite order on hypermaps have been studied in [6]. An operation of order three is called a triality and an operation of order two is called a duality. Since $\operatorname{Out}(G)$ has no elements of order $2$, the map $\mathcal{M}$ cannot have dualities. The hypermaps of Corollary 3.7 thus admit trialities but no dualities. We now proceed, after a few results, to define the chamber system $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}$. From here on, we suppose that we have a pair $(\rho_{0},\tau)$ such that the associated triangle $T_{\rho_{0},\tau}$ is proper and non-degenerate. ###### Lemma 3.8. Let $(P,Q,R)$ be a triple of points permuted cyclically by a triality $\tau$. Then $G_{\\{P,Q,R\\}}$ is trivial. ###### Proof. Suppose there is a $\gamma\in G$ preserving the set $\\{P,Q,R\\}$. If $\gamma$ fixes $\\{P,Q,R\\}$ pointwise, it must be the identity as the only element of $G$ that fixes three points of $\operatorname{\mathcal{O}}$ is the identity element. Also, $\gamma$ cannot permute $P,Q$ and $R$ cyclically, since $3$ does not divide the order of $G$. Suppose, without loss of generality, that $\gamma(P)=P$ and that $\gamma$ exchanges $Q$ and $R$. Then $\tau(\gamma)$ fixes $Q$ and exchanges $P$ and $R$. But then one easily checks that $\gamma\cdot\tau(\gamma)$ permutes cyclically $P,Q$ and $R$, a contradiction. ∎ For a triangle $(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$, its image $\gamma(\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2})$ by the action of $\gamma\in G$ is the triangle $(\gamma(\operatorname{\mathcal{O}}_{0}),\gamma(\operatorname{\mathcal{O}}_{1}),\gamma(\operatorname{\mathcal{O}}_{2}))$. ###### Lemma 3.9. The image $\gamma(T_{\rho_{0},\tau})$ by $\gamma\in G$ of the triangle associated to $(\rho_{0},\tau)$ is $T_{\rho_{0}^{\gamma},\tau^{\gamma}}$, the triangle associated to $(\rho_{0}^{\gamma},\tau^{\gamma})$. ###### Proof. In other words, we need to show that $\operatorname{\mathcal{O}}_{i}(\rho_{0}^{\gamma},\tau^{\gamma})=\gamma(\operatorname{\mathcal{O}}_{i}(\rho_{0},\tau))$.
By definition, $\operatorname{\mathcal{O}}_{2}(\rho_{0}^{\gamma},\tau^{\gamma})$ is the set of fixed points of the involutions of $\langle\rho_{0}^{\gamma},\tau^{\gamma}\rho_{0}^{\gamma}(\tau^{\gamma})^{-1}\rangle=\langle\rho_{0},\tau\rho_{0}\tau^{-1}\rangle^{\gamma}$. Since $\operatorname{\mathcal{O}}_{2}(\rho_{0},\tau)$ is the set of fixed points of involutions in $\langle\rho_{0},\tau\rho_{0}\tau^{-1}\rangle$, this concludes the lemma for $\operatorname{\mathcal{O}}_{2}$. The cases of $\operatorname{\mathcal{O}}_{0}$ and $\operatorname{\mathcal{O}}_{1}$ are identical. ∎ ###### Corollary 3.10 (Chamber-transitivity). Let $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}=\\{(\gamma(\operatorname{\mathcal{O}}_{0}),\gamma(\operatorname{\mathcal{O}}_{1}),\gamma(\operatorname{\mathcal{O}}_{2}))\mid\gamma\in G\\}$. Then $|C|=|G|$. Moreover, if $UC:=\\{\\{\gamma(\operatorname{\mathcal{O}}_{0}),\gamma(\operatorname{\mathcal{O}}_{1}),\gamma(\operatorname{\mathcal{O}}_{2})\\}\mid\gamma\in G\\}$ is the set of unordered triples, we also have that $|UC|=|G|$. ###### Proof. We only prove the second statement as it implies the first. Let $c=T_{\rho_{0},\tau}$ and $c^{\prime}\in C$. Then $c^{\prime}=\gamma(c)$ for some $\gamma\in G$. Suppose that $c^{\prime}=c$. Then $\gamma$ must send the triple $\\{\operatorname{\mathcal{O}}_{0},\operatorname{\mathcal{O}}_{1},\operatorname{\mathcal{O}}_{2}\\}$ to itself. This also means that $\gamma$ must conjugate the triple $\\{H_{0},H_{1},H_{2}\\}$ to itself. Since $H_{i}\cap H_{j}=\langle\rho_{k}\rangle$ for $\\{i,j,k\\}=I$, we conclude that $\gamma$ must conjugate the triple $\\{\rho_{0},\rho_{1},\rho_{2}\\}$ to itself. Therefore, if $P,Q,R$ are the fixed points of the $\rho_{i}$, we can conclude that $\\{P,Q,R\\}$ must be fixed setwise by $\gamma$, and thus that $\gamma=1_{G}$, by Lemma 3.8. ∎ Let $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}$ and let $d\in C$. We say that $d$ is a chamber of $C$. Then $d=T_{\rho_{0}^{\gamma},\tau^{\gamma}}$ for some $\gamma\in G$ by Lemma 3.9. Let $H_{i}(d)$ be the unique dihedral subgroup of order $2m$ inside the stabilizer of $\operatorname{\mathcal{O}}_{i}(\rho_{0}^{\gamma},\tau^{\gamma})$. Then $\rho_{i}(d)$ is defined to be the unique involution in $H_{j}(d)\cap H_{k}(d)$, whenever $\\{i,j,k\\}=I$. ###### Definition 3.11 (The chamber system). Let $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}$ as before and define equivalence relations $\sim_{i},i\in I$ on $C$ as follows: for $c_{1},c_{2}\in C$, $c_{1}\sim_{i}c_{2}$ if and only if $c_{1}=\rho_{i}(c_{1})(c_{2})$ or $c_{1}=c_{2}$. Note that if $c_{1}=\rho_{i}(c_{1})(c_{2})$, then $\rho_{i}(c_{1})=\rho_{i}(c_{2})$ so that $\sim_{i}$ is indeed symmetric. ###### Theorem 3.12. $G$ acts chamber transitively by automorphisms on the chamber system $(C,\sim_{i})$, $i\in I$. ###### Proof. Chamber-transitivity is clear by Corollary 3.10. To check that $G$ acts by automorphisms, it suffices to check that, for every $\gamma\in G$, applying $\gamma$ and then moving to the $i$-adjacent chamber gives the same result as moving to the $i$-adjacent chamber and then applying $\gamma$. Let $c\in C$. Then we need to compare $(\gamma\circ\rho_{i}(c))(c)$ to $(\rho_{i}(\gamma(c))\circ\gamma)(c)$. But since $\rho_{i}(\gamma(c))=\rho_{i}(c)^{\gamma}$, the two expressions are indeed equal. ∎ We have thus constructed a chamber system $(C,\sim_{i})$, $i\in I$, on which $G$ acts chamber transitively by automorphisms.
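Observe also (a remark we add) that each $i$-panel contains exactly two chambers: by Definition 3.11, the only chamber $i$-adjacent to $c$ other than $c$ itself is $\rho_{i}(c)(c)$, and $\rho_{i}(c)(c)\neq c$ because, by Corollary 3.10, only the identity of $G$ fixes a chamber. The chamber system $(C,\sim_{i})$, $i\in I$, is therefore thin, and in the notation of Section 2.3 it is the coset chamber system with Borel subgroup $B=\\{1\\}$ and $G^{(i)}=\langle\rho_{i}\rangle$.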
It remains to show that $(C,\sim_{i})$, $i\in I$, is residually connected. For that we first need a technical result. ###### Lemma 3.13. Let $K$ be a cyclic subgroup of $G=\operatorname{Sz}(q)$ of odd prime order $p$. Let $N_{G}(K)$ be the normalizer in $G$ of $K$ and let $H$ be a dihedral subgroup of $G$ of order $2p$. Then $H$ contains $K$ if and only if $H\leq N_{G}(K)$. ###### Proof. Let $H$ be a dihedral subgroup containing $K$. Then $H$ normalizes $K$, as $K$ has index $2$ in $H$. So $H\leq N_{G}(K)$. Suppose that $H\leq N_{G}(K)$ and $K\not\leq H$. Then $N_{G}(K)$ contains two cyclic subgroups of order $p$, namely $K$ and the subgroup of order $p$ of $H$. Also $N_{G}(K)=D_{2(q-1)}$ if $p\mid q-1$ and $N_{G}(K)=C_{x}:C_{4}$ otherwise (where $x=q+\sqrt{2q}+1$ or $q-\sqrt{2q}+1$). In any case, $N_{G}(K)$ has a unique cyclic subgroup of odd order $p$, a contradiction. ∎ We are now ready to show that $(C,\sim_{i})$, $i\in I$, is residually connected, under the assumption that $T_{\rho_{0},\tau}$ is proper and non-degenerate. ###### Theorem 3.14. Let $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}$ for a proper and non-degenerate triangle $T_{\rho_{0},\tau}$. Suppose moreover that $\rho_{0}\tau(\rho_{0})$ is an element of prime order $p$. The chamber system $(C,\sim_{i}),i\in I$ is residually connected. ###### Proof. Since $G$ acts chamber transitively by automorphisms on $(C,\sim_{i})$, by Proposition 2.3, the coset chamber system $\operatorname{\mathcal{C}}(G,\\{1\\},(G^{(i)})_{i\in I})$ is equivalent to $(C,\sim_{i})$, with $G^{(i)}=\langle\rho_{i}\rangle$, where $\rho_{0}$ is the involution used to define the triangle $T_{\rho_{0},\tau}$ and $\rho_{1}=\tau(\rho_{0})$, $\rho_{2}=\tau^{2}(\rho_{0})$. Therefore, we can use Theorem 2.5. We will check that for all $J,K,L\subseteq I$, we have (1) $G^{(L)}\cap G^{(J)}G^{(K)}=G^{(L\cap J)}G^{(L\cap K)}.$ Note that $G^{(L)}\cap G^{(K)}=G^{(L\cap K)}$ for all $L,K\subset I$ and that $G^{(K)}G^{(L)}=G^{(L)}$ whenever $K\subset L$. Most of the cases hold in a quite straightforward manner. For example, if $J=I$, equation (1) becomes $G^{(L)}=G^{(L)}G^{(L\cap K)},$ which holds since $G^{(L\cap K)}\subseteq G^{(L)}$; the cases $K=I$ and $L=I$ are similar. Suppose now that $J=\emptyset$. Then equation (1) becomes $G^{(L)}\cap G^{(K)}=G^{(L\cap K)}.$ The same happens if $K=\emptyset$ or $L=\emptyset$. If $J=\\{0\\},K=\\{1\\}$ and $L=\\{2\\}$ then equation (1) becomes $\langle\rho_{2}\rangle\cap\langle\rho_{0}\rangle\langle\rho_{1}\rangle=\\{\operatorname{Id}\\}$ which holds since $\rho_{2}$ is not contained in $H_{2}$, because the triangle associated to $(\rho_{0},\tau)$ is non-degenerate. The harder case arises when $|J|=|K|=|L|=2$. So suppose that $J=\\{0,1\\},K=\\{1,2\\}$ and $L=\\{0,2\\}$. Equation (1) then becomes (2) $\langle\rho_{0},\rho_{2}\rangle\cap\langle\rho_{0},\rho_{1}\rangle\langle\rho_{1},\rho_{2}\rangle=\langle\rho_{0}\rangle\langle\rho_{2}\rangle=\\{e,\rho_{0},\rho_{2},\rho_{0}\rho_{2}\\}$ which is the same as $H_{1}\cap H_{2}H_{0}=\\{e,\rho_{0},\rho_{2},\rho_{0}\rho_{2}\\}\subset H_{1}.$ Note that $\\{e,\rho_{0},\rho_{2},\rho_{0}\rho_{2}\\}\subseteq H_{1}\cap H_{2}H_{0}$ is evidently true, so it suffices to show the opposite inclusion. To show this, let $\gamma_{2}\in H_{2}$ and $\gamma_{0}\in H_{0}$. Suppose first that $\gamma_{2}$ and $\gamma_{0}$ are both involutions. If they have the same fixed point $x$, then $\gamma_{2}\gamma_{0}$ is also an involution with fixed point $x$ (or the identity, if $\gamma_{2}=\gamma_{0}$, which lies in the right-hand side), and such an involution can then not be an element of $H_{1}$ since the triangle associated to $(\rho_{0},\tau)$ is proper. So suppose that $\gamma_{2}$ and $\gamma_{0}$ have different fixed points.
If $\gamma_{2}=\rho_{0}$ and $\gamma_{0}=\rho_{2}$, their product is $\rho_{0}\rho_{2}$ so we are done. So we can suppose that either $\gamma_{2}\neq\rho_{0}$ or $\gamma_{0}\neq\rho_{2}$. Suppose that $\gamma_{2}\neq\rho_{0}$. The product $\gamma_{2}\gamma_{0}$ cannot be an involution since it has $0$ or $2$ fixed points. Hence, either $\gamma_{2}\gamma_{0}$ is not in $H_{1}$ or it is of order $p$. If it is of order $p$, then the subgroup generated by $\gamma_{2}$ and $\gamma_{0}$ is a dihedral subgroup $H$ of order $2p$ such that $H\cap H_{1}=\langle\rho_{0}\rho_{2}\rangle$. Therefore, by Lemma 3.13, $H$ must be contained in $N=N_{G}(\langle\rho_{0}\rho_{2}\rangle)$. By the same lemma, we know that $H_{2}$ cannot be in $N$. But that means that $\gamma_{2}\notin N$ since $H_{2}=\langle\rho_{0},\gamma_{2}\rangle$ and $\rho_{0}$ clearly is in $N$. This is a contradiction, which shows that $\gamma_{2}\gamma_{0}\notin H_{1}$ as long as $\gamma_{2}\neq\rho_{0}$ or $\gamma_{0}\neq\rho_{2}$. Suppose now that both $\gamma_{2}\in H_{2}$ and $\gamma_{0}\in H_{0}$ have order $p$. Then $\gamma_{2}=(\rho_{0}\rho_{1})^{k}$ for some $k=1,\ldots,p-1$ and $\gamma_{0}=(\rho_{1}\rho_{2})^{l}$ for some $l=1,\ldots,p-1$. Therefore, $\gamma_{2}\gamma_{0}=(\rho_{0}\rho_{1})^{k}(\rho_{1}\rho_{2})^{l}=(\rho_{0}\rho_{1})^{k-1}(\rho_{0}\rho_{2})(\rho_{1}\rho_{2})^{l-1}.$ If $k=l=1$ then $\gamma_{2}\gamma_{0}=\rho_{0}\rho_{2}\in H_{1}$. Else, note that $(\rho_{0}\rho_{1})^{k-1}\rho_{0}=\rho_{0}^{h}$ with $h\in H_{2}$ if $k$ is odd and $(\rho_{0}\rho_{1})^{k-1}\rho_{0}=\rho_{1}^{h}$ with $h$ in $H_{2}$ if $k$ is even. The same thing holds for $\rho_{2}(\rho_{1}\rho_{2})^{l-1}$. This means that if at least one of $k$ and $l$ is not $1$, $\gamma_{2}\gamma_{0}$ becomes equal to the product of two involutions, one in $H_{2}$ and one in $H_{0}$. We have already shown that such a product cannot be in $H_{1}$. We note that, from the above argument, it can be deduced that if $h_{i}$ is an element of order $p$ in $H_{i}$ and $h_{j}$ is an element of order $p$ in $H_{j}$, then if $h_{i}h_{j}$ is in $H_{k}$, it must be of order $p$. Finally, suppose without loss of generality that $\gamma_{2}$ has order $p$ and $\gamma_{0}$ has order $2$. Suppose that $\gamma_{2}\gamma_{0}\in H_{1}$. If $\gamma_{0}=\rho_{2}$ then $\gamma_{0}\in H_{1}$ and $\gamma_{2}=(\gamma_{2}\gamma_{0})\gamma_{0}^{-1}\in H_{1}$ as well, a contradiction with Lemma 3.13. Suppose thus that $\gamma_{0}\neq\rho_{2}$. If the order of $\gamma_{2}\gamma_{0}$ is $2$, then $\langle\gamma_{2},\gamma_{2}\gamma_{0}\rangle$ is a dihedral group of order $2p$ and we can use the same method as the first case. If the order of $\gamma_{2}\gamma_{0}$ is $p$, then $\gamma_{2}$ is of order $p$ in $H_{2}$, $\gamma_{2}\gamma_{0}$ is of order $p$ in $H_{1}$ so, by the comment at the end of the previous case, $\gamma_{0}=\gamma_{2}^{-1}(\gamma_{2}\gamma_{0})$ should be of order $p$, a contradiction. ∎ Finally, let $C=\\{\gamma(T_{\rho_{0},\tau})\mid\gamma\in G\\}$ be a chamber system obtained by Theorem 3.14, and let $\Gamma=\Gamma(G,(G_{i})_{i\in I})$ be the associated coset geometry. We want to show that $\Gamma$ admits trialities but no dualities. The triality $\tau$ naturally induces a triality of $\Gamma$ since it permutes the three generators $\rho_{0},\rho_{1}$ and $\rho_{2}$.
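Explicitly (a sketch we add, with indices read modulo $3$): applying $\tau$ elementwise maps a coset $H_{i}\gamma$ to $\tau(H_{i})\tau(\gamma)=H_{i+1}\tau(\gamma)$, because $\tau$ permutes $\rho_{0},\rho_{1},\rho_{2}$ cyclically and hence permutes $H_{0},H_{1},H_{2}$ cyclically. Since $\tau$ is a bijection of $G$, non-empty intersections of cosets are preserved, so the map $H_{i}\gamma\mapsto H_{i+1}\tau(\gamma)$ is a correlation of $\Gamma$ of order $3$ inducing the cycle $(0\,1\,2)$ on the types, i.e., a triality.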
Let $\Gamma$ be a thin, simply flag transitive incidence geometry, let $G=\operatorname{Aut}(\Gamma)$ and let $c$ be a chamber of $\Gamma$. Let $\\{c_{0},\cdots,c_{n}\\}$ be the set of chambers of $\Gamma$ adjacent to $c$. For each $c_{i}$, there is a unique element $\gamma_{i}\in G$ such that $c_{i}=\gamma_{i}(c)$. Let $S=\\{\gamma_{0},\cdots,\gamma_{n}\\}$. We then have that each $\gamma_{i}$ has order $2$ and that $G=\langle\gamma_{0},\cdots,\gamma_{n}\rangle$. For any correlation $\alpha$ of $\Gamma$ we can define a map $\phi_{\alpha}\colon S\to S$ as follows. For each $i$, there is exactly one chamber $c_{i}$ which is $i$-adjacent to $c$. The image $\alpha(c_{i})$ is a chamber which is $j$-adjacent to $\alpha(c)$ for some $j\in I$. We then define $\phi_{\alpha}(\gamma_{i}):=\gamma_{j}$ and write $j=\alpha(i)$. In other words, $\phi_{\alpha}(\gamma_{i})=\gamma_{\alpha(i)}$ where we let $\alpha$ naturally act on the type set $I$. ###### Proposition 3.15. Let $\Gamma$ be a thin, simply flag transitive incidence geometry and let $\alpha$ be a correlation of $\Gamma$. Then the map $\phi_{\alpha}\colon S\to S$ naturally extends to an automorphism $\Phi_{\alpha}$ of $G$. ###### Proof. To show that $\Phi_{\alpha}$ is a homomorphism, it suffices to show that any word $w$ in the alphabet $S$ representing the identity in $G$ is sent by $\alpha$ to a word $\alpha(w)$ which also represents the identity in $G$. Let $w$ be such a word. Then it uniquely defines a loop $l$ with base point $c$ in the chamber graph $CG(\Gamma)$. This loop can be recorded by a sequence $i_{1},i_{2},\cdots,i_{n}$, $i_{j}\in I$, of adjacencies. The image $\alpha(l)$ of this loop is once again a loop, now with base point $\alpha(c)$, which is recorded by the sequence $\alpha(i_{1}),\alpha(i_{2}),\cdots,\alpha(i_{n})$. This loop $\alpha(l)$ also uniquely defines a word which by definition is $\alpha(w)$. Thus $\alpha(w)$ represents the identity and $\Phi_{\alpha}$ is a homomorphism. The homomorphism $\Phi_{\alpha}$ is surjective since any element $\gamma\in G$ can be represented by a path $p$ in $CG(\Gamma)$. Any path $p^{\prime}$ such that $\alpha(p^{\prime})=p$ represents an element $\gamma^{\prime}\in G$ such that $\Phi_{\alpha}(\gamma^{\prime})=\gamma$. The homomorphism $\Phi_{\alpha}$ is injective since $\alpha$ is injective on the vertices of $CG(\Gamma)$. Therefore, if $p$ is a path in $CG(\Gamma)$ which is not a loop, the image $\alpha(p)$ cannot be a loop, so that any word not representing the identity in $G$ cannot be sent to the identity by $\Phi_{\alpha}$. ∎ ###### Theorem 3.16. Let $G=\operatorname{Sz}(q)$ with $q=2^{2e+1}$ where $2e+1$ is a multiple of $3$. There exists a flag transitive, residually connected, thin incidence geometry $\Gamma$ over $I=\\{0,1,2\\}$ whose diagram is a triangle with all three edges labelled $5$, such that $\operatorname{Aut}(\Gamma)\cong\operatorname{Sz}(q)$ has index $3$ in $\operatorname{Cor}(\Gamma)\cong\operatorname{Sz}(q):C_{3}$. Moreover, $\Gamma$ admits trialities but no dualities. ###### Proof. By Lemma 3.6, we know that the triangle $T_{\rho_{0},\tau}$ associated to any pair $(\rho_{0},\tau)$ with $\rho_{0}\tau(\rho_{0})$ of order $5$ is always proper and non-degenerate. Theorems 3.12 and 3.14 then show that we can construct a chamber transitive and residually connected chamber system $C$ with automorphism group $G$. Theorem 2.6 implies the existence of a coset geometry $\Gamma$ with automorphism group $G$ and parabolic subgroups $H_{i}=\langle\rho_{j},\rho_{k}\rangle,$ for $\\{i,j,k\\}=\\{0,1,2\\}$.
The triality $\tau$ then acts on $\Gamma$ as a correlation of order three. Proposition 3.15 guarantees that $\Gamma$ cannot have dualities. Indeed, if a duality $\alpha$ were to exist, the corresponding map $\Phi_{\alpha}$, fixing $\rho_{i}$ and exchanging $\rho_{j}$ and $\rho_{k}$ for some $\\{i,j,k\\}=\\{0,1,2\\}$, would be an automorphism of $G$. This automorphism cannot be inner, by Lemma 3.8. It also cannot be outer since $G$ has no outer automorphism of order $2$. ∎ ## 4\. Concluding remarks We conclude this article with a few remarks. In this article, we decided to first construct a chamber system with the desired properties and then to use Theorem 2.6 to obtain the desired incidence geometries. There are two main reasons for that decision. Indeed, the chamber system approach not only simplifies some technical difficulties, as explained in more detail below, but it also allows the construction to remain more concrete and geometric. Suppose we were to try to construct directly and geometrically the incidence geometries of Theorem 3.16. For that, we need to find a subset $S$ of points of $\operatorname{\mathcal{O}}$ such that the stabilizer of $S$ under the action of $\operatorname{Sz}(q)$ is a dihedral group of order $10$. This means we cannot simply set $S=\operatorname{\mathcal{O}}_{5}$, a sub-ovoid of $5$ points, as we did in the chamber system approach. Indeed, the stabilizer in $\operatorname{Sz}(q)$ of $S$ would then be a subgroup isomorphic to $\operatorname{Sz}(2)$. The group $\operatorname{Sz}(2)$ nonetheless contains a dihedral group of order $10$ as an index $2$ subgroup, so it is most likely possible to fix this problem, but it requires additional effort. On the other hand, if we are willing to lose the geometric interpretation, the incidence geometries of Theorem 3.16 can be constructed directly as coset incidence geometries in the following way. Let $T_{\rho_{0},\tau}$ be a proper and non-degenerate triangle and let $H_{0},H_{1}$ and $H_{2}$ be the $3$ dihedral subgroups associated to $T_{\rho_{0},\tau}$. We can then construct the coset geometry $\Gamma(G,(H_{0},H_{1},H_{2}))$. Equation (2) is satisfied, meaning that this coset geometry is flag transitive. Indeed, by a result of Tits [15, Section 1.4], this equation implies that any triple of cosets that are pairwise incident have a common element, and therefore that $G$ acts transitively on the chambers of $\Gamma$. Residual connectedness is straightforward. ## References * [1] F. Buekenhout and A. M. Cohen. Diagram geometry: related to classical groups and buildings, volume 57. Springer Science & Business Media, 2013. * [2] R. Cori. Un code pour les graphes planaires et ses applications. Astérisque, No. 27. Société Mathématique de France, Paris, 1975. With an English abstract. * [3] M. Downs and G. A. Jones. Möbius inversion in Suzuki groups and enumeration of regular objects. In Symmetries in Graphs, Maps, and Polytopes: 5th SIGMAP Workshop, West Malvern, UK, July 2014, pages 97–127. Springer, 2016. * [4] M. E. Fernandes, D. Leemans, and A. I. Weiss. Highly symmetric hypertopes. Aequationes Math., 90(5):1045–1067, 2016. * [5] G. Jones and D. Singerman. Maps, hypermaps and triangle groups. In The Grothendieck theory of dessins d’enfants (Luminy, 1993), volume 200 of London Math. Soc. Lecture Note Ser., pages 115–145. Cambridge Univ. Press, Cambridge, 1994. * [6] G. A. Jones and D. Pinto. Hypermap operations of finite order. Discrete Math., 310(12):1820–1827, 2010. * [7] G. A. Jones and J. Wolfart. Dessins d’enfants on Riemann surfaces. Springer Monographs in Mathematics.
Springer, Cham, 2016. * [8] D. Leemans. Thin geometries for the Suzuki simple group ${\rm Sz}(8)$. Bull. Belg. Math. Soc. Simon Stevin, 5(2-3):373–387, 1998. * [9] D. Leemans and K. Stokes. Coset geometries with trialities and their reduced incidence graphs. Acta Math. Univ. Comenian. (N.S.), 88(3):911–916, 2019. * [10] D. Leemans and K. Stokes. Incidence geometries with trialities coming from maps with Wilson trialities. Innov. Incidence Geom., 20(2–3):325–340, 2023. * [11] H. Lüneburg. Translation planes. Springer-Verlag, 1980. * [12] M. Suzuki. On a class of doubly transitive groups. II. Ann. of Math. (2), 79:514–589, 1964. * [13] J. Tits. Sur les analogues algébriques des groupes semi-simples complexes. In Colloque d’algèbre supérieure, tenu à Bruxelles du 19 au 22 décembre 1956, Centre Belge de Recherches Mathématiques, pages 261–289. Établissements Ceuterick, Louvain, 1957. * [14] J. Tits. Ovoïdes à translations. Rend. Mat. Appl., 5(21):37–59, 1962. * [15] J. Tits. Buildings of spherical type and finite BN-pairs. Lecture Notes in Mathematics, Vol. 386. Springer-Verlag, Berlin-New York, 1974. * [16] J. Tits. A local approach to buildings. In The geometric vein, pages 519–547. Springer, New York-Berlin, 1981.
# Decomposition numbers for abelian defect RoCK blocks of double covers of symmetric groups

Matthew Fayers, Queen Mary University of London, Mile End Road, London E1 4NS, U.K. <EMAIL_ADDRESS>; Alexander Kleshchev, Department of Mathematics, University of Oregon, Eugene, OR 97403, USA <EMAIL_ADDRESS>; and Lucia Morotti, Leibniz Universität Hannover, Institut für Algebra, Zahlentheorie und Diskrete Mathematik, 30167 Hannover, Germany <EMAIL_ADDRESS>

###### Abstract. We calculate the (super)decomposition matrix for a RoCK block of a double cover of the symmetric group with abelian defect, verifying a conjecture of the first author. To do this, we exploit a theorem of the second author and Livesey that a RoCK block $\mathcal{B}^{\rho,d}$ is Morita superequivalent to a wreath superproduct of a certain quiver (super)algebra with the symmetric group $\mathfrak{S}_{d}$. We develop the representation theory of this wreath superproduct to compute its Cartan invariants. We then directly construct projective characters for $\mathcal{B}^{\rho,d}$ to calculate its decomposition matrix up to a triangular adjustment, and show that this adjustment is trivial by comparing Cartan invariants.

###### 2020 Mathematics Subject Classification: 20C30, 20C20, 05E10

The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme ‘Groups, representations and applications: new perspectives’ where work on this paper was undertaken. The first author was supported by EPSRC grant no. EP/W005751/1. The second author was supported by the NSF grant DMS-2101791, Charles Simonyi Endowment at the Institute for Advanced Study, and the Simons Foundation. While finishing writing the paper the third author was working at the Mathematisches Institut of the Heinrich-Heine-Universität Düsseldorf.

###### Contents
1. Introduction
2. Combinatorial preliminaries
  2.1. Compositions and partitions
  2.2. Littlewood–Richardson coefficients, Specht modules and permutation modules
  2.3. Kostka polynomials
3. Rouquier bar-cores
  3.1. Definition and first properties
  3.2. Rouquier bar-cores and dominance order
  3.3. Rouquier bar-cores and containment of partitions
4. Superalgebras, supermodules and wreath superproducts
  4.1. Superspaces
  4.2. Superalgebras
  4.3. Supermodules
  4.4. Representations of wreath superproducts ${\sf W}_{d}$
  4.5. The super-Cartan matrix for ${\sf W}_{d}$
5. Representations of double covers of symmetric groups
  5.1. The double cover of the symmetric group
  5.2. Branching rules and weights
  5.3. Virtual projective characters
  5.4. Projective characters from the $q$-deformed Fock space
  5.5. RoCK blocks for double covers and the Kleshchev–Livesey Morita equivalence
  5.6. The regularization theorem
6. Projective characters
  6.1. Projective characters $\hat{{\varphi}}^{{\mu}}$ in RoCK blocks
  6.2. Gelfand–Graev induction
  6.3. Projective characters obtained by induction
  6.4. The bijection $\lambda\mapsto\lambda_{\circ}$
  6.5. Adjustment matrix
7. Cartan matrices and proof of the main theorem
  7.1. The super-Cartan matrix and the adjustment matrix
  7.2. Entries in the unadjusted Cartan matrix
  7.3. Proof of the main theorem

## 1\. Introduction In the modular representation theory of the symmetric groups and their double covers, the central outstanding question is the _decomposition number problem_: determining the composition factors of the $p$-modular reductions of ordinary irreducible representations.
Even for the symmetric groups a solution to this problem seems far out of reach, but there is a remarkable family of blocks for which the problem has been solved. These are called _RoCK blocks_. They are defined in a combinatorial way using the abacus, and were identified by Rouquier [R] as being of particular importance. RoCK blocks have been pivotal in the proofs of several results, most importantly in the proof of Broué’s abelian defect group conjecture for symmetric groups [CR]. This hinges on the proof by Chuang and Kessar [CK] that a RoCK block of defect $d<p$ is Morita equivalent to the principal block of the wreath product $\mathfrak{S}_{p}\wr\mathfrak{S}_{d}$. A consequence of this is the formula due to Chuang–Tan [CT2] for the decomposition numbers for RoCK blocks. The same formula appears in a computation of certain canonical basis coefficients, due independently to Leclerc–Miyachi and Chuang–Tan [CT1, LM]. In recent years, the representation theory of double covers of symmetric groups (or equivalently, the study of projective representations of symmetric groups) has been studied extensively. Let $p=2\ell+1$ be an odd prime (see [Fa2] for corresponding results in characteristic $2$), and ${\mathbb{F}}$ an algebraically closed field of characteristic $p$. Let $\hat{\mathfrak{S}}_{n}$ denote one of the proper double covers of the symmetric group $\mathfrak{S}_{n}$, for $n\geqslant 4$, and let $z\in\hat{\mathfrak{S}}_{n}$ denote the central element of order $2$. An irreducible ${\mathbb{F}}\hat{\mathfrak{S}}_{n}$-module $M$ is a _spin module_ if $z$ acts as $-1$ on $M$, and a block of ${\mathbb{F}}\hat{\mathfrak{S}}_{n}$ is a spin block if it contains spin modules. In fact, for studying spin modules it is more natural to consider ${\mathbb{F}}\hat{\mathfrak{S}}_{n}$ as a superalgebra (i.e. a $\mathbb{Z}/2\mathbb{Z}$-graded algebra), and study spin supermodules and spin superblocks. The modular spin representation theory of $\hat{\mathfrak{S}}_{n}$ has been developed by Brundan and the second author in [BK1, BK2] (using two different approaches which were later unified by the second author and Shchigolev [KS]). The combinatorial part of this theory revolves around the combinatorics of $p$-strict partitions. The definition of spin RoCK blocks for $\hat{\mathfrak{S}}_{n}$ was given by the second author and Livesey [KL], who proved an analogue of Chuang and Kessar’s Morita equivalence result, and used this to show that Broué’s conjecture holds for spin RoCK blocks. Our purpose in this paper is to give a formula for the (super)decomposition numbers for spin RoCK blocks of abelian defect; in particular, we prove a formula conjectured by the first author in [Fa3] based on calculations of canonical basis coefficients. To state our main theorem we briefly introduce some notation. For a strict partition $\lambda$, we let $\mathrm{S}(\lambda)$ denote a $p$-modular reduction of the irreducible spin supermodule for $\mathbb{C}\hat{\mathfrak{S}}_{n}$ labelled by $\lambda$, and for a restricted $p$-strict partition $\mu$, we let $\mathrm{D}(\mu)$ denote the irreducible spin supermodule for $\mathbb{F}\hat{\mathfrak{S}}_{n}$ labelled by $\mu$; see Section 5 for details on these. If $\lambda$ is any partition, we write $h(\lambda)$ for the number of positive parts of $\lambda$, and $a(\lambda)=0$ or $1$ as $\lambda$ has an even or odd number of positive even parts. 
Finally, $\operatorname{c}(\alpha;\sigma,\tau)$ denotes the Littlewood–Richardson coefficient corresponding to partitions $\alpha,\sigma,\tau$, and $K^{-1}_{\tau\sigma}(t)$ the inverse Kostka polynomial corresponding to $\sigma,\tau$; see §2.2 and §2.3 for details on these. Rouquier $p$-bar-cores are discussed in Section 3 – these correspond to spin RoCK blocks of double covers of symmetric groups. Now our main theorem can be stated as follows. ###### Main Theorem. Suppose $p=2\ell+1$ is an odd prime and $1\leqslant d<p$, and that $\rho$ is a $d$-Rouquier $p$-bar-core. Suppose $\lambda$ is a strict partition and $\mu$ a restricted $p$-strict partition, both with $p$-bar-core $\rho$ and $p$-bar-weight $d$. Let $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ and $(\mu^{(0)},\dots,\mu^{(\ell)})$ be the $p$-bar-quotients of $\lambda,\mu$. Then the decomposition number $[\mathrm{S}(\lambda):\mathrm{D}(\mu)]$ equals $2^{\lfloor\frac{1}{2}(h(\lambda^{(0)})+a(\lambda))\rfloor}\sum K^{-1}_{\lambda^{(0)}\sigma^{(0)}}(-1)\prod_{i=1}^{\ell}\operatorname{c}(\lambda^{(i)};\sigma^{(i)},\tau^{(i)})\operatorname{c}(\mu^{(i-1)};\sigma^{(i-1)},{\tau^{(i)}}^{\prime})$ where the sum is over all partitions $\sigma^{(0)},\dots,\sigma^{(\ell-1)},\tau^{(1)},\dots,\tau^{(\ell)}$, and we read $\sigma^{(\ell)}$ as $\varnothing$. We note that the assumption $d\geqslant 1$ made in the theorem is harmless – it simply means that we are dealing with blocks of non-trivial defect; on the other hand, the assumption $d<p$ is equivalent to the assumption that the blocks we are dealing with have abelian defect groups. The proof of our main theorem involves two parts. First, we use the Morita equivalence result of Kleshchev–Livesey which shows that a RoCK block $\mathcal{B}^{\rho,d}$ with $p$-bar-weight $d<p$ is Morita superequivalent to a wreath superproduct ${\sf W}_{d}={\sf A}_{\ell}\wr\mathfrak{S}_{d}$, where ${\sf A}_{\ell}$ is an explicitly defined quiver superalgebra. In Section 4 we develop superalgebra analogues of results of Chuang and Tan describing the representation theory of wreath products. In particular, by explicitly constructing indecomposable projective supermodules we are able to determine the (super)Cartan matrix of ${\sf W}_{d}$ when $d<p$, and hence of $\mathcal{B}^{\rho,d}$ (but without any information on the labels of rows and columns). For the second part of the proof (in Section 6) we explicitly consider projective characters for $\mathcal{B}^{\rho,d}$. The results of Leclerc–Thibon [LT] comparing decomposition numbers with canonical basis coefficients, together with the first author’s formula for canonical basis coefficients corresponding to spin RoCK blocks, show that our main theorem is true ‘up to column operations’, i.e. that the decomposition matrix of $\mathcal{B}^{\rho,d}$ is obtained from the matrix claimed in our main theorem by post-multiplying by a square matrix $A$. By explicitly constructing projective characters by induction and comparing with known general results on decomposition numbers, we are able to show that $A$ is triangular with non-negative integer entries. By then calculating the Cartan matrix entries predicted by our main theorem and showing that they agree with those of ${\sf W}_{d}$ when $d<p$, we can deduce that $A$ is the identity matrix, which gives us our main theorem. ## 2\. Combinatorial preliminaries We denote $\mathbb{N}:=\mathbb{Z}_{\geqslant 1}$ and $\mathbb{N}_{0}:=\mathbb{Z}_{\geqslant 0}$.
Throughout the paper, we work over an algebraically closed field ${\mathbb{F}}$ of characteristic $p>2$. We write * $\diamond$ $\ell:=(p-1)/2$, * $\diamond$ $I:=\\{0,1,\dots,\ell\\}$, * $\diamond$ $J:=\\{0,1,\dots,\ell-1\\}$. For $n\in\mathbb{N}_{0}$, we write $I^{n}$ for the set of words $i_{1}\dots i_{n}$ with $i_{1},\dots,i_{n}\in I$. ### 2.1. Compositions and partitions A _composition_ is an infinite sequence $\lambda=(\lambda_{1},\lambda_{2},\dots)$ of non-negative integers which are eventually zero. Any composition $\lambda$ has finite sum $|\lambda|$, and we say that $\lambda$ is a composition of $|\lambda|$. We write $\mathscr{C}$ for the set of all compositions, and for each $d\in\mathbb{N}_{0}$ we write $\mathscr{C}(d)$ for the set of all compositions of $d$. When writing compositions, we may collect consecutive equal parts together with a superscript, and omit an infinite tail of $0$s. We write $\varnothing$ for the composition $(0,0,\dots)$. A _partition_ is a composition whose parts are weakly decreasing. We write ${\mathscr{P}}$ for the set of all partitions, and ${\mathscr{P}}(d)$ for the set of partitions of $d$. A partition is _strict_ if it has no repeated positive parts. We write ${\mathscr{P}_{0}}(d)$ for the set of all strict partitions of $d$. Say that a strict partition $\lambda$ is _even_ if $\lambda$ has an even number of positive even parts, and _odd_ otherwise. Now write $a(\lambda):=\left\\{\begin{array}[]{ll}0&\hbox{if $\lambda$ is even,}\\\ 1&\hbox{if $\lambda$ is odd.}\end{array}\right.$ (2.1) For a set $S$, let ${\mathscr{P}}^{S}(d)$ denote the set of all _$S$ -multipartitions_ of $d$. So the elements of ${\mathscr{P}}^{S}(d)$ are tuples $\text{\boldmath$\lambda$}=(\lambda^{(s)})_{s\in S}$ of partitions satisfying $\sum_{s\in S}|\lambda^{(s)}|=d$. In the special case $S=I$, we write the elements of ${\mathscr{P}}^{I}(d)$ as tuples $\text{\boldmath$\lambda$}=(\lambda^{(0)},\dots,\lambda^{(\ell)})$, and similarly for ${\mathscr{P}}^{J}(d)$. We refer to $\lambda^{(i)}$ as the $i$th component of $\lambda$. We identify ${\mathscr{P}}^{J}(d)$ with the subset of ${\mathscr{P}}^{I}(d)$ consisting of those $\text{\boldmath$\lambda$}\in{\mathscr{P}}^{I}(d)$ with $\lambda^{(\ell)}=\varnothing$. We use the following binary operations on partitions: if $\lambda,\mu\in{\mathscr{P}}$, then we write $\lambda+\mu$ for the partition $(\lambda_{1}+\mu_{1},\lambda_{2}+\mu_{2},\dots)$, and $\lambda\sqcup\mu$ for the partition obtained by combining the parts of $\lambda$ and $\mu$ and putting them in weakly decreasing order. For example, $(3,1)+(4,1^{2})=(7,2,1)$, while $(3,1)\sqcup(4,1^{2})=(4,3,1^{3})$. The _Young diagram_ of a partition $\lambda$ is the set $\left\\{\left.(r,c)\in\mathbb{N}^{2}\ \right|\ \smash{c\leqslant\lambda_{r}}\right\\},$ whose elements are called the _nodes_ of $\lambda$. We draw the Young diagram as an array of boxes using the English convention, in which $r$ increases down the page and $c$ increases from left to right. We often identify partitions with their Young diagrams; for example, we may write $\lambda\subseteq\mu$ to mean that $\lambda_{r}\leqslant\mu_{r}$ for all $r$. If $\lambda$ is a partition, the _conjugate_ partition $\lambda^{\prime}$ is obtained by reflecting the Young diagram of $\lambda$ on the main diagonal. The _dominance order_ is a partial order $\trianglerighteqslant$ defined on $\mathscr{P}$. 
We set $\lambda\trianglerighteqslant\mu$ (and say that $\lambda$ _dominates_ $\mu$) if $|\lambda|=|\mu|$ and $\lambda_{1}+\dots+\lambda_{r}\geqslant\mu_{1}+\dots+\mu_{r}$ for all $r\geqslant 1$. This can be interpreted in terms of Young diagrams in the following way: $\lambda\trianglerighteqslant\mu$ if and only if the Young diagram of $\mu$ can be obtained from the Young diagram of $\lambda$ by moving some nodes further to the left, see [JK, 1.4.10]. By [JK, 1.4.11], the dominance order is reversed by conjugation: $\lambda\trianglerighteqslant\mu$ if and only if $\mu^{\prime}\trianglerighteqslant\lambda^{\prime}$. Now we introduce the prime $p$ into the combinatorics. Say that a partition is _$p$ -strict_ if its repeated parts are all divisible by $p$. A $p$-strict partition $\lambda$ is _restricted_ if for all $r$ either $\lambda_{r}-\lambda_{r+1}<p$ or $\lambda_{r}-\lambda_{r+1}=p$ and $p\nmid\lambda_{r}$. We write ${\mathscr{P}}_{p}(n)$ for the set of $p$-strict partitions of $n$, and ${\mathscr{RP}_{p}}(n)$ for the set of restricted $p$-strict partitions of $n$. We also introduce some new terminology: say that a $p$-strict partition $\lambda$ is a _$p^{\prime}$_ -partition (or simply that $\lambda$ is $p^{\prime}$) if it has no positive parts divisible by $p$. Suppose $\lambda$ is a $p$-strict partition. _Removing a $p$-bar_ from $\lambda$ means either: * $\diamond$ replacing a part $\lambda_{r}\geqslant p$ with $\lambda_{r}-p$, and rearranging the parts into decreasing order, or * $\diamond$ deleting two parts summing to $p$. In the first case we assume that either $p\mid\lambda_{r}$ or $\lambda_{r}-p$ is not a part of $\lambda$, so that the resulting partition is $p$-strict. The _$p$ -bar-core_ of $\lambda$ is the partition obtained by repeatedly removing $p$-bars until it is not possible to remove any more – this is well defined thanks to [MY1, Theorem 1]. The $p$-bar-weight of $\lambda$ is the number of $p$-bars removed to reach its $p$-bar-core. If $\rho$ is a $p$-bar-core and $d\geqslant 1$, we write: * $\diamond$ $\mathscr{P}_{p}^{\rho,d}$ for the set of $p$-strict partitions with $p$-bar- core $\rho$ and $p$-bar-weight $d$; * $\diamond$ $\mathscr{RP}_{p}^{\rho,d}$ for the set of restricted partitions in $\mathscr{P}_{p}^{\rho,d}$; * $\diamond$ $\mathscr{P}_{0}^{\rho,d}$ for the set of strict partitions in $\mathscr{P}_{p}^{\rho,d}$; * $\diamond$ $\mathscr{P}_{p^{\prime}}^{\rho,d}$ for the set of $p^{\prime}$-partitions in $\mathscr{P}_{p}^{\rho,d}$. Note that $\mathscr{P}_{p^{\prime}}^{\rho,d}\subseteq\mathscr{P}_{0}^{\rho,d}$. Now we look at individual nodes. The _residue_ of a node in column $c$ is the smaller of the residues of $c-1$ and $-c$ modulo $p$. So the residues of nodes follow the repeating pattern $0,1,\dots,\ell-1,\ell,\ell-1,\dots,1,0,0,1,\dots,\ell-1,\ell,\ell-1,\dots,1,0,\dots$ from left to right in every row of a Young diagram. Note that the residue of a node is always interpreted as an element of $I$. For $i\in I$, an _$i$ -node_ means a node of residue $i$. ### 2.2. Littlewood–Richardson coefficients, Specht modules and permutation modules For partitions $\lambda,\mu^{1},\dots,\mu^{r}$ we denote by $\operatorname{c}(\lambda;\mu^{1},\dots,\mu^{r})$ the corresponding Littlewood–Richardson coefficient, which is zero unless $|\lambda|=|\mu^{1}|+\dots+|\mu^{r}|$. In fact, $\operatorname{c}(\lambda;\mu^{1},\dots,\mu^{r})$ does not depend on the order of the partitions $\mu^{1},\dots,\mu^{r}$ and depends only on the multiset $\\{\mu^{1},\dots,\mu^{r}\\}$. 
So we will also use the notation $\operatorname{c}(\lambda;M)$ for any multiset $M$ of partitions. If $M=\\{\mu^{1},\dots,\mu^{r}\\}$ and $N=\\{\nu^{1},\dots,\nu^{s}\\}$ are two multisets of partitions, we can also consider $\operatorname{c}(\lambda;M\sqcup N):=\operatorname{c}(\lambda;\mu^{1},\dots,\mu^{r},\nu^{1},\dots,\nu^{s}).$ Below we will use various standard results on Littlewood–Richardson coefficients which can be found for example in [M2, I.9] or [Fu, Section 5]. We will often use calculations involving representations of the symmetric group in characteristic zero. For any group $G$, let $\mathbf{1}_{G}$ denote the trivial $G$-module. For the group algebra ${\mathbb{C}}\mathfrak{S}_{n}$, the irreducible modules are the _Specht modules_ $\mathcal{S}^{\lambda}$, for $\lambda\in\mathscr{P}(n)$. In particular, $\mathcal{S}^{(n)}$ is the trivial $\mathfrak{S}_{n}$-module, and $\mathcal{S}^{(1^{n})}$ is the sign module, which we also denote $\mathtt{sgn}$. It is well-known that $\mathcal{S}^{\lambda}\otimes\mathtt{sgn}\cong\mathcal{S}^{\lambda^{\prime}}$ (2.2) for all $\lambda$, see [J2, 6.7]. Given a ${\mathbb{C}}\mathfrak{S}_{n}$-module $M$ and any partition $\lambda$, we write $[M:\mathcal{S}^{\lambda}]$ for the multiplicity of $\mathcal{S}^{\lambda}$ as a composition factor of $M$ if $|\lambda|=n$, and $0$ otherwise. We often induce and restrict modules between $\mathfrak{S}_{n}$ and its Young subgroups. If $\alpha=(\alpha_{1},\dots,\alpha_{r})\in\mathscr{C}(n)$, then the Young subgroup $\mathfrak{S}_{\alpha}$ is the naturally embedded subgroup $\mathfrak{S}_{\alpha_{1}}\times\dots\times\mathfrak{S}_{\alpha_{r}}$ of $\mathfrak{S}_{n}$. Now given modules $M_{1},\dots,M_{r}$ for $\mathfrak{S}_{\alpha_{1}},\dots,\mathfrak{S}_{\alpha_{r}}$ respectively, we obtain a module $M_{1}\boxtimes\dots\boxtimes M_{r}$ for $\mathfrak{S}_{\alpha}$ and the induced module $M_{1}\mathbin{\circ}\cdots\mathbin{\circ}M_{r}:={\mathrm{Ind}}^{\mathfrak{S}_{n}}_{\mathfrak{S}_{\alpha}}M_{1}\boxtimes\cdots\boxtimes M_{r}.$ For example, if $\lambda\in\mathscr{C}(n)$, then $\mathcal{S}^{(\lambda_{1})}\mathbin{\circ}\mathcal{S}^{(\lambda_{2})}\mathbin{\circ}\cdots$ is the permutation module $\mathcal{M}^{\lambda}$ defined in [J2, 4.1], nowadays called the _Young permutation module_. In general, given partitions $\alpha^{1},\dots,\alpha^{r}$ and $\lambda$, the multiplicity $[\mathcal{S}^{\alpha^{1}}\mathbin{\circ}\dots\mathbin{\circ}\mathcal{S}^{\alpha^{r}}:\mathcal{S}^{\lambda}]$ is the Littlewood–Richardson coefficient $\operatorname{c}(\lambda;\alpha^{1},\dots,\alpha^{r})$. By Frobenius reciprocity, this can also be written as $[{\mathrm{Res}}_{\mathfrak{S}_{(|\alpha^{1}|,\dots,|\alpha^{r}|)}}\mathcal{S}^{\lambda}:\mathcal{S}^{\alpha^{1}}\boxtimes\dots\boxtimes\mathcal{S}^{\alpha^{r}}]$. Later we will need the following results. ###### Lemma 2.1. Suppose $\alpha\in\mathscr{P}$ and $\beta,\gamma\in\mathscr{C}$. Then $\operatorname{c}(\alpha;(1^{\beta_{1}}),(1^{\beta_{2}}),\dots,(\gamma_{1}),(\gamma_{2}),\dots)=\left[(\mathcal{M}^{\beta}\otimes\mathtt{sgn})\mathbin{\circ}\mathcal{M}^{\gamma}:\mathcal{S}^{\alpha}\right].$ ###### Proof.
The left-hand side equals $\displaystyle\left[(\mathcal{S}^{(1^{\beta_{1}})}\mathbin{\circ}\mathcal{S}^{(1^{\beta_{2}})}\mathbin{\circ}\dotsb)\mathbin{\circ}(\mathcal{S}^{(\gamma_{1})}\mathbin{\circ}\mathcal{S}^{(\gamma_{2})}\mathbin{\circ}\dotsb):\mathcal{S}^{\alpha}\right]$ $\displaystyle=$ $\displaystyle\left[\big{(}(\mathcal{S}^{(\beta_{1})}\otimes\mathtt{sgn})\mathbin{\circ}(\mathcal{S}^{(\beta_{2})}\otimes\mathtt{sgn})\mathbin{\circ}\dotsb\big{)}\mathbin{\circ}(\mathcal{S}^{(\gamma_{1})}\mathbin{\circ}\mathcal{S}^{(\gamma_{2})}\mathbin{\circ}\dotsb):\mathcal{S}^{\alpha}\right]$ $\displaystyle=$ $\displaystyle\left[\big{(}(\mathcal{S}^{(\beta_{1})}\mathbin{\circ}\mathcal{S}^{(\beta_{2})}\mathbin{\circ}\dotsb)\otimes\mathtt{sgn}\big{)}\mathbin{\circ}(\mathcal{S}^{(\gamma_{1})}\mathbin{\circ}\mathcal{S}^{(\gamma_{2})}\mathbin{\circ}\dotsb):\mathcal{S}^{\alpha}\right]$ $\displaystyle=$ $\displaystyle\left[(\mathcal{M}^{\beta}\otimes\mathtt{sgn})\mathbin{\circ}\mathcal{M}^{\gamma}:\mathcal{S}^{\alpha}\right].\qed$ ###### Lemma 2.2. Suppose $\lambda\in\mathscr{C}$ and $\tau,\sigma\in\mathscr{P}$. Then $\sum_{\mu\in\mathscr{P}}[\mathcal{M}^{\lambda}:\mathcal{S}^{\mu}][\mathcal{S}^{\tau}\mathbin{\circ}\mathcal{S}^{\sigma}:\mathcal{S}^{\mu}]=\sum_{\begin{subarray}{c}\beta,\gamma\in\mathscr{C}\\\ \beta+\gamma=\lambda\end{subarray}}[\mathcal{M}^{\beta}:\mathcal{S}^{\tau}][\mathcal{M}^{\gamma}:\mathcal{S}^{\sigma}].$ ###### Proof. Let $n=|\lambda|$. We may assume that $|\tau|+|\sigma|=n$ as well (since otherwise both sides are obviously zero) and we may restrict the range of summation on the left-hand side to $\mu\in\mathscr{P}(n)$. The definition of $\mathcal{M}^{\lambda}$ gives $[\mathcal{M}^{\lambda}:\mathcal{S}^{\mu}]=[\mathcal{S}^{(\lambda_{1})}\mathbin{\circ}\mathcal{S}^{(\lambda_{2})}\mathbin{\circ}\dotsb:\mathcal{S}^{\mu}]$. On the other hand, if we define $K$ to be the Young subgroup $\mathfrak{S}_{(|\tau|,|\sigma|)}$, then Frobenius reciprocity gives $[\mathcal{S}^{\tau}\mathbin{\circ}\mathcal{S}^{\sigma}:\mathcal{S}^{\mu}]=[{\mathrm{Res}}_{K}\mathcal{S}^{\mu}:\mathcal{S}^{\tau}\boxtimes\mathcal{S}^{\sigma}]$. Since the irreducible ${\mathbb{C}}\mathfrak{S}_{n}$-modules are precisely the modules $\mathcal{S}^{\mu}$ for $\mu\in\mathscr{P}(n)$, the left-hand side gives the multiplicity $[{\mathrm{Res}}_{K}{\mathrm{Ind}}^{\mathfrak{S}_{n}}\mathbf{1}_{\mathfrak{S}_{\lambda}}:\mathcal{S}^{\tau}\boxtimes\mathcal{S}^{\sigma}].$ By Mackey’s Theorem, this is the same as $\sum_{H}[{\mathrm{Ind}}^{K}\mathbf{1}_{H}:\mathcal{S}^{\tau}\boxtimes\mathcal{S}^{\sigma}],$ summing over $K$-conjugacy class representatives of subgroups $H\leqslant K$ of the form $(\mathfrak{S}_{\lambda})^{x}\cap K$ for $x\in\mathfrak{S}_{n}$. We can take these representatives to be the groups $\mathfrak{S}_{\beta}\times\mathfrak{S}_{\gamma}$ as $\beta,\gamma$ range over compositions satisfying $|\beta|=|\tau|$, $|\gamma|=|\sigma|$ and $\beta_{r}+\gamma_{r}=\lambda_{r}$ for each $r$. Now the definition of the modules $\mathcal{M}^{\beta}$ and $\mathcal{M}^{\gamma}$ gives the result. ∎ We have the following ‘Mackey formula’ for Littlewood–Richardson coefficients. ###### Lemma 2.3. Suppose $\alpha,\beta,\gamma,\delta\in\mathscr{P}$. 
Then $\sum_{\lambda\in\mathscr{P}}\operatorname{c}(\lambda;\alpha,\beta)\operatorname{c}(\lambda;\gamma,\delta)=\sum_{{\varphi},\chi,\psi,\omega\in\mathscr{P}}\operatorname{c}(\alpha;{\varphi},\chi)\operatorname{c}(\beta;\psi,\omega)\operatorname{c}(\gamma;{\varphi},\psi)\operatorname{c}(\delta;\chi,\omega).$ ###### Proof. The special case where $\alpha=(r)$ is proved by Chuang and Tan [CT1, Lemma 2.2(3)], but their proof works in the general case. ∎ ### 2.3. Kostka polynomials Given $\lambda,\sigma\in\mathscr{P}$, we write $K^{-1}_{\lambda\sigma}(t)$ for the _inverse Kostka polynomial_ indexed by $\lambda,\sigma$; this polynomial arises in the theory of symmetric functions: it is the coefficient of the Schur function $s_{\sigma}$ when the Hall–Littlewood symmetric function $P_{\lambda}$ is expressed in terms of Schur functions. We refer to [M2, III.6] for more information on Kostka polynomials, but we note in particular that $K^{-1}_{\lambda\sigma}(t)$ is zero unless $\lambda\trianglerighteqslant\sigma$ and that $K^{-1}_{\lambda\lambda}(t)=1$; see [Fa3, Lemma 3.4]. Of special importance for us will be the evaluation of $K^{-1}_{\lambda\sigma}(t)$ at $t=-1$. So $K^{-1}_{\lambda\sigma}(-1)$ is the coefficient of $s_{\sigma}$ in the Schur P-function $P_{\lambda}$. We note two lemmas that we will need later. ###### Lemma 2.4. Suppose $\sigma\in\mathscr{P}(n)$. Then $K^{-1}_{\lambda\sigma}(-1)\in\mathbb{N}_{0}$ for all $\lambda\in\mathscr{P}_{0}(n)$, and there is at least one $\lambda\in\mathscr{P}_{0}(n)$ for which $K^{-1}_{\lambda\sigma}(-1)>0$. ###### Proof. Stembridge [S, Theorem 9.3(b)] shows that $K^{-1}_{\lambda\sigma}(-1)$ equals the number of tableaux of a certain type, which means in particular that $K^{-1}_{\lambda\sigma}(-1)\in\mathbb{N}_{0}$. Stembridge’s formula shows in particular that $K^{-1}_{\lambda\sigma}(-1)>0$ when $\lambda$ is the strict partition whose parts are the diagonal hook lengths of $\sigma$. ∎ ###### Lemma 2.5. Suppose $\xi,\pi\in\mathscr{P}$. Then $\sum_{\lambda\in\mathscr{P}_{0}}2^{h(\lambda)}K^{-1}_{\lambda\xi}(-1)K^{-1}_{\lambda\pi}(-1)=\sum_{\beta,\gamma\in\mathscr{P}}\operatorname{c}(\xi;\beta,\gamma^{\prime})\operatorname{c}(\pi;\beta,\gamma).$ ###### Proof. We consider symmetric functions in an infinite set of variables $X$. Let $s_{\pi}$ denote the Schur function indexed by $\pi\in\mathscr{P}$. Since the Schur functions are linearly independent, it suffices to show the following equality of symmetric functions, for each $\xi$: $\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}_{0}\\\ \pi\in\mathscr{P}\end{subarray}}2^{h(\lambda)}K^{-1}_{\lambda\xi}(-1)K^{-1}_{\lambda\pi}(-1)s_{\pi}=\sum_{\beta,\gamma,\pi\in\mathscr{P}}\operatorname{c}(\xi;\beta,\gamma^{\prime})\operatorname{c}(\pi;\beta,\gamma)s_{\pi}.$ Working with an indeterminate $t$, consider the symmetric function $\sum_{\lambda,\pi\in\mathscr{P}}b_{\lambda}(t)K^{-1}_{\lambda\xi}(t)K^{-1}_{\lambda\pi}(t)s_{\pi},$ where $b_{\lambda}(t)$ is the polynomial defined in [M2, (2.12) on p.210]. According to the transition matrix in [M2, p.241], this coincides with the ‘dual Schur function’ $S_{\xi}(t)$. Now specialize $t$ to $-1$. 
It is immediate from the definition of $b_{\lambda}(t)$ that $b_{\lambda}(-1)=\begin{cases}2^{h(\lambda)}&\text{if }\lambda\in\mathscr{P}_{0}\\\ 0&\text{otherwise},\end{cases}$ so we find that $S_{\xi}(-1)=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}_{0}\\\ \pi\in\mathscr{P}\end{subarray}}2^{h(\lambda)}K^{-1}_{\lambda\xi}(-1)K^{-1}_{\lambda\pi}(-1)s_{\pi}.$ Let us write $S_{\xi}(-1)$ as $\bar{S}_{\xi}$. According to [M2, III.8, Example 7(a)], $\bar{S}_{\xi}$ equals the function $s_{\xi}(X/{-}X)$ defined in [M2, I.5, Example 23]. From equation (1) in [loc. cit.] we obtain $s_{\xi}(X/{-}X)=\sum_{\beta\in\mathscr{P}}s_{\beta}s_{\xi^{\prime}/\beta^{\prime}},$ where the skew Schur function $s_{\xi^{\prime}/\beta^{\prime}}$ equals $\sum_{\gamma\in\mathscr{P}}\operatorname{c}(\xi^{\prime};\beta^{\prime},\gamma)s_{\gamma}$. In addition $s_{\beta}s_{\gamma}=\sum_{\pi\in\mathscr{P}}\operatorname{c}(\pi;\beta,\gamma)s_{\pi}$ (indeed, this is the most usual definition of the Littlewood–Richardson coefficients), so that $\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}_{0}\\\ \pi\in\mathscr{P}\end{subarray}}2^{h(\lambda)}K^{-1}_{\lambda\xi}(-1)K^{-1}_{\lambda\pi}(-1)s_{\pi}=\sum_{\beta,\gamma,\pi\in\mathscr{P}}\operatorname{c}(\xi^{\prime};\beta^{\prime},\gamma)\operatorname{c}(\pi;\beta,\gamma)s_{\pi}.$ Now the standard result that $\operatorname{c}(\xi^{\prime};\beta^{\prime},\gamma)=\operatorname{c}(\xi;\beta,\gamma^{\prime})$ gives the required equality. ∎ ## 3\. Rouquier bar-cores ### 3.1. Definition and first properties For any $p$-strict partition $\rho$, define $r_{i}(\rho):=\left|\left\\{\left.r\in\mathbb{N}\ \right|\ \smash{\rho_{r}\equiv i\ (\operatorname{mod}\,p)}\right\\}\right|$ for $i\in\\{1,\dots,p-1\\}$. If $\rho$ is a $p$-bar-core, then $\rho$ is determined by the integers $r_{i}(\rho)$. Following [KL], given $d\geqslant 1$, we say that a $p$-bar-core $\rho$ is _$d$ -Rouquier_ if * $\diamond$ $r_{1}(\rho)\geqslant d$, and * $\diamond$ $r_{i}(\rho)\geqslant r_{i-1}(\rho)+d-1$ for $2\leqslant i\leqslant\ell$. (This automatically implies that $r_{i}(\rho)=0$ for $i>\ell$, since a $p$-bar-core cannot have two parts whose sum is divisible by $p$.) Assume that $\rho$ is a $d$-Rouquier $p$-bar-core, and $\lambda\in\mathscr{P}_{p}^{\rho,d}$. We want to define the _$p$ -bar-quotient_ of $\lambda$. First note that $r_{i}(\lambda)=r_{i}(\rho)$ for each $1\leqslant i\leqslant\ell$, since $r_{i}(\rho)\geqslant d$, cf. [KL, Lemma 4.1.1(i)]. Now define $\lambda^{(0)}$ to be the partition obtained by taking all the parts of $\lambda$ divisible by $p$ and dividing them by $p$. For $1\leqslant i\leqslant\ell$, let $r:=r_{i}(\lambda)$, let $\lambda_{k_{1}}>\dots>\lambda_{k_{r}}$ be the parts of $\lambda$ congruent to $i$ modulo $p$, and define the partition $\lambda^{(i)}:=\left(\frac{\lambda_{k_{1}}-(r-1)p-i}{p},\frac{\lambda_{k_{2}}-(r-2)p-i}{p},\dots,\frac{\lambda_{k_{r}}-i}{p}\right).$ The $p$-bar-quotient of $\lambda$ is the multipartition $(\lambda^{(0)},\dots,\lambda^{(\ell)})\in{\mathscr{P}}^{I}(d)$. ###### Example. Suppose $p=5$ and $\rho=(32,27,22,17,16,12,11,7,6,2,1)$. Then $\rho$ is $4$-Rouquier, with $(r_{1}(\rho),r_{2}(\rho),r_{3}(\rho),r_{4}(\rho))=(4,7,0,0)$. The partition $\lambda=(37,32,22,17,16,12,11,10,7,6,2,1)$ lies in $\mathscr{P}_{0}^{\rho,4}$, and has $5$-bar-quotient $(\lambda^{(0)},\lambda^{(1)},\lambda^{(2)})=((2),\varnothing,(1^{2}))$. ###### Lemma 3.1.
Suppose $\rho$ is a $d$-Rouquier $p$-bar-core, and $\lambda\in\mathscr{P}_{p}^{\rho,d}$, with $p$-bar-quotient $(\lambda^{(0)},\dots,\lambda^{(\ell)})$. Then: 1. (i) $\lambda$ is strict if and only if $\lambda^{(0)}$ is strict; 2. (ii) $\lambda$ is $p^{\prime}$ if and only if $\lambda^{(0)}=\varnothing$; 3. (iii) $\lambda$ is restricted if and only if $\lambda^{(\ell)}=\varnothing$. ###### Proof. The first two statements follow directly from the definition, so we need only prove the third. Note that by the given properties of the integers $r_{i}(\rho)$, the $d$ largest parts of $\rho$ are all congruent to $\ell$ modulo $p$, and $\rho_{k}<\rho_{1}-(d-1)p$ for any $k$ with $\rho_{k}\not\equiv\ell\ (\operatorname{mod}\,p)$. We obtain $\lambda$ from $\rho$ by adding $d$ $p$-bars. So any part $\lambda_{k}$ for which $\lambda_{k}\not\equiv\ell\ (\operatorname{mod}\,p)$ satisfies $\lambda_{k}<\rho_{1}+p$. If $\lambda^{(\ell)}=\varnothing$ then $\lambda_{1}<\rho_{1}+p$, while $\lambda$ contains all the integers $\ell,\ell+p,\dots,\rho_{1}$, so $\lambda$ is restricted. If instead $\lambda^{(\ell)}\neq\varnothing$, choose $a$ such that $\lambda^{(\ell)}_{a}>\lambda^{(\ell)}_{a+1}$. Then $|\lambda^{(\ell)}|\geqslant a$, so that $|\lambda^{(i)}|\leqslant d-a$ for any $i\neq\ell$. This means that any part $\lambda_{k}\not\equiv\ell\ (\operatorname{mod}\,p)$ satisfies $\lambda_{k}<\rho_{1}-(a-1)p=\rho_{a}$. So $\lambda$ contains the part $\lambda_{a}=\rho_{a}+\lambda^{(\ell)}_{a}p$, but does not contain any parts $\lambda_{k}$ with $\rho_{a}+(\lambda^{(\ell)}_{a}-1)p\leqslant\lambda_{k}<\rho_{a}+\lambda^{(\ell)}_{a}p$, so is not restricted. ∎ Clearly $\lambda\in\mathscr{P}_{p}^{\rho,d}$ is determined by $\rho$ and the $p$-bar-quotient $(\lambda^{(0)},\dots,\lambda^{(\ell)})$; conversely, given a multipartition $(\lambda^{(0)},\dots,\lambda^{(\ell)})\in{\mathscr{P}}^{I}(d)$, there is a partition $\lambda\in\mathscr{P}_{p}^{\rho,d}$ with $p$-bar-quotient $(\lambda^{(0)},\dots,\lambda^{(\ell)})$. In view of this and Lemma 3.1, we see that $|\mathscr{P}_{p}^{\rho,d}|=|{\mathscr{P}}^{I}(d)|\qquad\text{and}\qquad|\mathscr{P}_{p^{\prime}}^{\rho,d}|=|\mathscr{RP}_{p}^{\rho,d}|=|{\mathscr{P}}^{J}(d)|.$ (3.1) ### 3.2. Rouquier bar-cores and dominance order For our calculations in RoCK blocks, it will be helpful to introduce a partial order on $\mathscr{P}^{I}(d)$: given two multipartitions $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ and $(\mu^{(0)},\dots,\mu^{(\ell)})$ in $\mathscr{P}^{I}(d)$, we write $(\lambda^{(0)},\dots,\lambda^{(\ell)})\succcurlyeq(\mu^{(0)},\dots,\mu^{(\ell)})$ if $|\lambda^{(0)}|+\dots+|\lambda^{(k-1)}|+{\lambda^{(k)}}^{\prime}_{1}+\dots+{\lambda^{(k)}}^{\prime}_{c}\geqslant|\mu^{(0)}|+\dots+|\mu^{(k-1)}|+{\mu^{(k)}}^{\prime}_{1}+\dots+{\mu^{(k)}}^{\prime}_{c}$ for all $0\leqslant k\leqslant\ell$ and $c\geqslant 1$. This order can be visualized by drawing the Young diagrams of $\lambda^{(0)},\dots,\lambda^{(\ell)}$ in a row from left to right; then $(\lambda^{(0)},\dots,\lambda^{(\ell)})\succcurlyeq(\mu^{(0)},\dots,\mu^{(\ell)})$ if and only if $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ can be obtained from $(\mu^{(0)},\dots,\mu^{(\ell)})$ by moving nodes further to the left. ###### Lemma 3.2. Let $\rho$ be a $d$-Rouquier $p$-bar-core. Suppose that the partitions $\lambda$ and $\mu$ in $\mathscr{P}_{p}^{\rho,d}$ have $p$-bar-quotients $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ and $(\mu^{(0)},\dots,\mu^{(\ell)})$, respectively.
Then $(\lambda^{(0)},\dots,\lambda^{(\ell)})\succcurlyeq(\mu^{(0)},\dots,\mu^{(\ell)})$ if and only if $\lambda\trianglelefteqslant\mu$. ###### Proof. For $i=1,\dots,\ell$ let $r_{i}$ be the largest part of $\rho$ congruent to $i$ modulo $p$, and set $r_{0}:=0$ (recall that a $p$-bar-core has no parts divisible by $p$). We also denote $\text{\boldmath$\lambda$}:=(\lambda^{(0)},\dots,\lambda^{(\ell)})$, $\text{\boldmath$\mu$}:=(\mu^{(0)},\dots,\mu^{(\ell)})$. We construct $\lambda$ from $\rho$ by successively adding $p$-bars. Correspondingly, the $p$-bar-quotient of $\lambda$ is obtained from $(\varnothing,\dots,\varnothing)$ by adding nodes; adding the node $(r,c)$ to $\lambda^{(i)}$ corresponds to adding nodes to $\lambda$ in columns $\begin{cases}r_{i}+(c-r)p+1,r_{i}+(c-r)p+2,\dots,r_{i}+(c-r+1)p&\text{if $i>0$}\\\ (c-1)p+1,(c-1)p+2,\ldots,cp&\text{if $i=0$.}\end{cases}$ (3.2) We now prove the ‘only-if’ part of the lemma. It is easy to see that if $\text{\boldmath$\lambda$}\succcurlyeq\text{\boldmath$\mu$}$ then we can reach $\text{\boldmath$\lambda$}$ from $\text{\boldmath$\mu$}$ by a sequence of moves in which a single node is moved further to the left; so it suffices to consider a single such move, and show that this move corresponds to moving nodes to the left in $\mu$. So suppose $\lambda$ is obtained from $\mu$ by replacing the node $(s,c)$ in the $j$th component with the node $(r,b)$ in the $i$th component, where $i\leqslant j$. If $0<i=j$, then $b-r<c-s$, so by (3.2) $\lambda$ is obtained from $\mu$ by moving $p$ nodes further to the left. If $0=i=j$, then a similar argument applies using the inequality $b<c$. If $0<i<j$, then $b-r\leqslant c-s+d-1$, because $\mu^{(i)}_{1}+{\mu^{(j)}}^{\prime}_{1}\leqslant d$. Now (3.2) and the fact that $r_{i}<r_{j}+(d-1)p$ mean that $\lambda$ is obtained from $\mu$ by moving $p$ nodes further to the left. If $0=i<j$, then we use a similar argument via the inequality $b\leqslant c-s+d$. In any case, we obtain $\lambda\vartriangleleft\mu$, as required. We now prove the ‘if’ part of the lemma. Assume $\text{\boldmath$\lambda$}\not\succcurlyeq\text{\boldmath$\mu$}$; then we must show that $\lambda\ntrianglelefteqslant\mu$. Case 1: there is $k\in I$ such that $|\lambda^{(0)}|+\dots+|\lambda^{(k)}|<|\mu^{(0)}|+\dots+|\mu^{(k)}|$. Note that in this case $k<\ell$. Let $a=|\lambda^{(0)}|+\dots+|\lambda^{(k)}|$ and $b=|\mu^{(0)}|+\dots+|\mu^{(k)}|$. Now let $\nu,\xi\in\mathscr{P}_{p}^{\rho,d}$ be given by $\nu^{(i)}=\begin{cases}(1^{a})&\text{if $i=0$}\\\ (1^{d-a})&\text{if $i=k+1$}\\\ \varnothing&\text{otherwise},\end{cases}\qquad\xi^{(i)}=\begin{cases}(b)&\text{if $i=k$}\\\ (d-b)&\text{if $i=\ell$}\\\ \varnothing&\text{otherwise}.\end{cases}$ Then $(\nu^{(0)},\dots,\nu^{(\ell)})\succcurlyeq\text{\boldmath$\lambda$}$ and $\text{\boldmath$\mu$}\succcurlyeq(\xi^{(0)},\dots,\xi^{(\ell)})$. So (from the ‘only-if’ part of the lemma) in order to show that $\lambda\ntrianglelefteqslant\mu$ it suffices to show that $\nu\ntrianglelefteqslant\xi$. To do this, we let $r$ be such that $\rho_{r}=r_{k+1}-(d-a-1)p$, and compare $\nu_{1}+\dots+\nu_{r}$ with $\xi_{1}+\dots+\xi_{r}$. We obtain $\displaystyle\nu_{1}+\dots+\nu_{r}$ $\displaystyle=\rho_{1}+\dots+\rho_{r}+(d-a)p,$ $\displaystyle\xi_{1}+\dots+\xi_{r}$ $\displaystyle=\rho_{1}+\dots+\rho_{r}+(d-b)p+\max\\{r_{k}-r_{k+1}+(d+b-a-1)p,0\\},$ and now the assumptions $r_{k+1}>r_{k}+(d-1)p$ and $a<b$ give $\nu_{1}+\dots+\nu_{r}>\xi_{1}+\dots+\xi_{r}$, so that $\nu\ntrianglelefteqslant\xi$. Case 2: $|\lambda^{(0)}|+\dots+|\lambda^{(k)}|\geqslant|\mu^{(0)}|+\dots+|\mu^{(k)}|$ for every $k\in I$.
The assumption that $\text{\boldmath$\lambda$}\not\succcurlyeq\text{\boldmath$\mu$}$ means that we can find $k\in I$ and $c\geqslant 1$ for which $\sum_{i=0}^{k-1}|\lambda^{(i)}|\,+\,{\lambda^{(k)}}^{\prime}_{1}+\dots+{\lambda^{(k)}}^{\prime}_{c}<\sum_{i=0}^{k-1}|\mu^{(i)}|\,+\,{\mu^{(k)}}^{\prime}_{1}+\dots+{\mu^{(k)}}^{\prime}_{c}.$ (3.3) First we assume that $k>0$. Let $r={\lambda^{(k)}}^{\prime}_{c}$, and $s=r_{k}+(c-r+1)p$; then we claim that $\lambda^{\prime}_{1}+\dots+\lambda^{\prime}_{s}<\mu^{\prime}_{1}+\dots+\mu^{\prime}_{s}$, so that $\lambda\ntrianglelefteqslant\mu$. We calculate $\lambda^{\prime}_{1}+\dots+\lambda^{\prime}_{s}-(\rho^{\prime}_{1}+\dots+\rho^{\prime}_{s})$ using (3.2). For $0\leqslant i<k$ each node of $\lambda^{(i)}$ contributes $p$ to this sum. In addition, each node $(t,b)$ of $\lambda^{(k)}$ for which $b-t\leqslant c-r$ contributes $p$ to the sum. (The nodes of $\lambda^{(i)}$ for $i>k$ do not contribute, because of the inequality $r_{k+1}-r_{k}>(d-1)p$.) Writing $T_{r,c}=\sum_{x=1}^{r-1}\min\\{x,c\\}$, we obtain $\displaystyle\sum_{i=1}^{s}\lambda^{\prime}_{i}-\sum_{i=1}^{s}\rho^{\prime}_{i}$ $\displaystyle=\left(\sum_{i=0}^{k-1}|\lambda^{(i)}|+\sum_{t\geqslant\max\\{1,r-c\\}}\min\\{\lambda^{(k)}_{t},t+c-r\\}\right)p$ $\displaystyle=\left(\sum_{i=0}^{k-1}|\lambda^{(i)}|+\sum_{d=1}^{c}{\lambda^{(k)}}^{\prime}_{d}-T_{r,c}\right)p,$ with the second equality coming from the fact that ${\lambda^{(k)}}^{\prime}_{c}=r$. We calculate $\mu^{\prime}_{1}+\dots+\mu^{\prime}_{s}-(\rho^{\prime}_{1}+\dots+\rho^{\prime}_{s})$ in the same way. The assumption that $|\mu^{(0)}|+\dots+|\mu^{(k-1)}|\leqslant|\lambda^{(0)}|+\dots+|\lambda^{(k-1)}|$ means that each node of $\mu^{(i)}$ for $i<k$ contributes $p$ to this sum, while the nodes of $\mu^{(i)}$ for $i>k$ do not contribute. So, as with $\lambda$, we obtain $\sum_{i=1}^{s}\mu^{\prime}_{i}-\sum_{i=1}^{s}\rho^{\prime}_{i}=\left(\sum_{i=0}^{k-1}|\mu^{(i)}|+\sum_{t\geqslant\max\\{1,r-c\\}}\min\\{\mu^{(k)}_{t},t+c-r\\}\right)p.$ It follows that $\sum_{t\geqslant\max\\{1,r-c\\}}\min\\{\mu^{(k)}_{t},t+c-r\\}\geqslant\sum_{d=1}^{c}{\mu^{(k)}}^{\prime}_{d}-T_{r,c}$ and then $\sum_{i=1}^{s}\mu^{\prime}_{i}-\sum_{i=1}^{s}\rho^{\prime}_{i}\geqslant\left(\sum_{i=0}^{k-1}|\mu^{(i)}|+\sum_{d=1}^{c}{\mu^{(k)}}^{\prime}_{d}-T_{r,c}\right)p.$ We obtain $\lambda^{\prime}_{1}+\dots+\lambda^{\prime}_{s}<\mu^{\prime}_{1}+\dots+\mu^{\prime}_{s}$, as required. Now assume instead that $k=0$. Then we claim that $\lambda^{\prime}_{1}+\dots+\lambda_{cp}^{\prime}<\mu^{\prime}_{1}+\dots+\mu_{cp}^{\prime}$. As for the case above, we calculate $\lambda^{\prime}_{1}+\dots+\lambda^{\prime}_{cp}-(\rho^{\prime}_{1}+\dots+\rho^{\prime}_{cp})$ using (3.2). Each node $(t,b)$ of $\lambda^{(0)}$ with $b\leqslant c$ contributes $p$ to this sum, and the nodes of $\lambda^{(i)}$ for $i\geqslant 1$ do not contribute, because of the inequality $r_{1}>dp$ and the fact that $|\lambda^{(0)}|\geqslant c$. So we obtain $\sum_{i=1}^{cp}\lambda^{\prime}_{i}-\sum_{i=1}^{cp}\rho^{\prime}_{i}=p\sum_{i=1}^{c}{\lambda^{(0)}}^{\prime}_{i}.$ The same formula with $\mu$ in place of $\lambda$ gives the result. ∎ ### 3.3. Rouquier bar-cores and containment of partitions We will need the following generalization of [KL, Lemma 4.1.2]. ###### Proposition 3.3. Suppose $\rho$ is a $d$-Rouquier $p$-bar-core.
Suppose $\lambda\in\mathscr{P}_{p}^{\rho,a}$ and $\alpha\in\mathscr{P}_{p}^{\rho,b}$, where $a,b\leqslant d$, and let $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ and $(\alpha^{(0)},\dots,\alpha^{(\ell)})$ be the $p$-bar-quotients of $\lambda$ and $\alpha$. Then the following are equivalent: 1. (i) $\lambda\subseteq\alpha$; 2. (ii) $\lambda^{(j)}\subseteq\alpha^{(j)}$ for all $j\in I$; 3. (iii) $\alpha$ can be obtained from $\lambda$ by successively adding $p$-bars. ###### Proof. It is trivial that (iii)$\Rightarrow$(i). It is also very easy to see that (ii)$\Rightarrow$(iii): adding a node to a component of the $p$-bar-quotient corresponds to increasing one of the parts of the partition by $p$, which is a way of adding a $p$-bar. So it remains to show that (i)$\Rightarrow$(ii). (We remark that the case where $b-a=1$ is proved in [KL, Lemma 4.1.2].) We use induction on $a$. The case $a=0$ is trivial, so we assume $a>0$, and that the result is true with $a$ replaced by any smaller value. Assume $\lambda\subseteq\alpha$. Suppose $\mu\in\mathscr{P}_{p}^{\rho,a-1}$ and that the $p$-bar-quotient $(\mu^{(0)},\dots,\mu^{(\ell)})$ of $\mu$ is obtained from the $p$-bar-quotient of $\lambda$ by removing a single node. Then $\mu\subset\lambda$ (from the fact that (ii)$\Rightarrow$(iii)$\Rightarrow$(i)), so $\mu\subset\alpha$, and the inductive hypothesis gives $\mu^{(j)}\subseteq\alpha^{(j)}$ for all $j$. So the only node of $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ which can fail to be a node of $(\alpha^{(0)},\dots,\alpha^{(\ell)})$ is the node removed to obtain $\mu$. In particular, if there are at least two such partitions $\mu$ (that is, if $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ has at least two removable nodes), then $\lambda^{(j)}\subseteq\alpha^{(j)}$ for all $j$ as required. So we can assume that $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ has only one removable node. This means there is $k\in I$ such that $\lambda^{(k)}$ is a rectangular partition $(x^{y})$ with $x,y\geqslant 1$, while $\lambda^{(j)}=\varnothing$ for $j\neq k$. From the argument in the previous paragraph, we can assume that $\alpha^{(k)}$ contains the partition $(x^{y-1},x-1)$. If we suppose for a contradiction that $\lambda^{(k)}\nsubseteq\alpha^{(k)}$, then $\alpha^{(k)}$ has fewer than $\min\\{x,y\\}$ nodes $(r,c)$ for which $c-r=x-y$. For each $j\in\\{1,\dots,\ell\\}$ we define $r_{j}$ to be the largest part of $\rho$ congruent to $j$ modulo $p$, and we set $r_{0}:=0$ (so that the formula below agrees with (3.2)). As observed in Lemma 3.2, adding the node $(r,c)$ to the $j$th component $\lambda^{(j)}$ of the $p$-bar-quotient corresponds to adding nodes to $\lambda$ in columns $r_{j}+(c-\hat{r})p+1,\ r_{j}+(c-\hat{r})p+2,\ \dots,\ r_{j}+(c-\hat{r}+1)p,$ where we write $\hat{r}=1$ if $j=0$, and $\hat{r}=r$ otherwise. Assume first that $k\geqslant 1$.
Then the assumption $\lambda\subseteq\alpha$ and the paragraph above give $\alpha^{\prime}_{r_{k}+(x-y)p+1}-\rho^{\prime}_{r_{k}+(x-y)p+1}\geqslant\lambda^{\prime}_{r_{k}+(x-y)p+1}-\rho^{\prime}_{r_{k}+(x-y)p+1}=\min\\{x,y\\}.$ Since $\alpha^{(k)}$ has fewer than $\min\\{x,y\\}$ nodes $(r,c)$ for which $c-r=x-y$, there must be some $j\neq k$ such that $\alpha^{(j)}$ has a node $(r,c)$ for which $r_{k}+(x-y)p+1=\begin{cases}r_{j}+(c-\hat{r})p+k-j+1&\text{if }j<k,\\\ r_{j}+(c-r)p+p+k-j+1&\text{if }j>k.\end{cases}$ In fact this is impossible for $j>k$, since it gives $(r-c-1+x-y)p+j-k=r_{j}-r_{k}\geqslant(d-1)p+j-k,$ and therefore $r-c+x-y\geqslant d.$ But $|\alpha^{(j)}|\geqslant r-c+1$ and $|\alpha^{(k)}|\geqslant x+y-2$, and we obtain $|\alpha^{(k)}|+|\alpha^{(j)}|\geqslant d+1$, which contradicts the assumption that $\alpha\in\mathscr{P}_{p}^{\rho,b}$ with $b\leqslant d$. So instead $j<k$. Now we obtain $(c-\hat{r}+y-x)p+k-j=r_{k}-r_{j}\geqslant(d-1)p+k-j,$ so that $c-\hat{r}+y-x\geqslant d-1.$ But $|\alpha^{(j)}|\geqslant c-\hat{r}+1$ and $|\alpha^{(k)}|\geqslant x+y-2$ and $|\alpha^{(j)}|+|\alpha^{(k)}|\leqslant d$, so we have equality everywhere, and in particular $|\alpha^{(j)}|+|\alpha^{(k)}|=d$. Now we perform a similar calculation using the fact that $\alpha^{\prime}_{r_{k}+(x-y+1)p}-\rho^{\prime}_{r_{k}+(x-y+1)p}\geqslant\min\\{x,y\\}$. Now there is $j^{\prime}\neq k$ such that (writing $\check{r}=1$ if $j^{\prime}=0$ and $\check{r}=r$ otherwise) $r_{k}+(x-y+1)p=\begin{cases}r_{j^{\prime}}+(c-\check{r})p+k-j^{\prime}&\text{if }j^{\prime}<k,\\\ r_{j^{\prime}}+(c-r)p+p+k-j^{\prime}&\text{if }j^{\prime}>k.\end{cases}$ Now the case $j^{\prime}<k$ leads to an impossibility (in a similar way to the case $j>k$ above), so $j^{\prime}$ must be greater than $k$. But now we have indices $j<k<j^{\prime}$ with $|\alpha^{(j)}|+|\alpha^{(k)}|+|\alpha^{(j^{\prime})}|\geqslant d+1$, which again contradicts the assumption that $\alpha\in\mathscr{P}_{p}^{\rho,b}$ with $b\leqslant d$. The result follows in the case $k\geqslant 1$. The case $k=0$ is similar but simpler. In this case $\alpha^{\prime}_{(x-1)p+1}-\rho^{\prime}_{(x-1)p+1}\geqslant\lambda^{\prime}_{(x-1)p+1}-\rho^{\prime}_{(x-1)p+1}=y,$ but $\alpha^{(0)}$ has fewer than $y$ nodes in column $x$, so there is $j>0$ such that $\alpha^{(j)}$ has a node $(r,c)$ with $(x-1)p+1=r_{j}+(c-r)p+p+1-j$ and therefore $(r-c+x-2)p+j=r_{j}\geqslant(d-1)p+j$ so that $r-c+x-2\geqslant d-1.$ But now the fact that $|\alpha^{(0)}|\geqslant x-1$ and $|\alpha^{(j)}|\geqslant r-c+1$ gives a contradiction. So the result follows in the case $k=0$ as well. ∎ ## 4\. Superalgebras, supermodules and wreath superproducts The representation theory of double covers of symmetric groups is best approached via superalgebras. In this section we recall the general theory and then study representations of some special wreath superproducts ${\sf A}_{\ell}\wr\mathfrak{S}_{d}$ which play a crucial role for RoCK (super)blocks of double covers of symmetric groups, cf. Theorem 5.4. Our aim is to compute the Cartan invariants for ${\sf A}_{\ell}\wr\mathfrak{S}_{d}$ in the case where $d<p$ in terms of Littlewood–Richardson coefficients, cf. Corollary 4.11. ### 4.1. Superspaces We write $\mathbb{Z}/2\mathbb{Z}=\\{{\bar{0}},{\bar{1}}\\}$. If $V$ is a vector space over ${\mathbb{F}}$, a _$\mathbb{Z}/2\mathbb{Z}$ -grading_ on $V$ is a direct sum decomposition $V=V_{\bar{0}}\oplus V_{\bar{1}}$. A vector _superspace_ is a vector space with a chosen $\mathbb{Z}/2\mathbb{Z}$-grading.
For ${\varepsilon}\in\mathbb{Z}/2\mathbb{Z}$, if $v\in V_{{\varepsilon}}$ we write $|v|={\varepsilon}$ and say that $v$ is _homogeneous_ of parity ${\varepsilon}$. If $V$ and $W$ are superspaces and ${\varepsilon}\in\mathbb{Z}/2\mathbb{Z}$ then a linear map $f:V\to W$ is called a _homogeneous superspace homomorphism of parity ${\varepsilon}$_ if $f(V_{\delta})\subseteq W_{\delta+{\varepsilon}}$ for all $\delta\in\mathbb{Z}/2\mathbb{Z}$. A _superspace homomorphism $f:V\to W$_ means a map $f=f_{\bar{0}}+f_{\bar{1}}$, where for ${\varepsilon}={\bar{0}},{\bar{1}}$ the map $f_{\varepsilon}:V\to W$ is a homogeneous superspace homomorphism of parity ${\varepsilon}$. We will use the term ‘even homomorphism’ to mean ‘homogeneous homomorphism of parity ${\bar{0}}$’, and similarly for odd homomorphisms. We write $\Pi$ for the parity change functor, see e.g. [Kl, §12.1]. Thus, for a superspace $V$, the superspace $\Pi V$ equals $V$ as a vector space, but with parities swapped. We define an odd isomorphism of superspaces $\displaystyle\sigma_{V}:V\longrightarrow\Pi V,\ v\longmapsto(-1)^{|v|}v.$ If $V_{1},\dots,V_{d}$ are superspaces then $V_{1}\otimes\dots\otimes V_{d}$ is a superspace with $|v_{1}\otimes\dots\otimes v_{d}|=|v_{1}|+\dots+|v_{d}|$. (Here and below in similar situations we assume that the elements $v_{k}$ are homogeneous and extend by linearity where necessary.) If $f_{i}:V_{i}\to W_{i}$ is a superspace homomorphism for $i=1,\dots,d$, then $f_{1}\otimes\dots\otimes f_{d}:V_{1}\otimes\dots\otimes V_{d}\to W_{1}\otimes\dots\otimes W_{d}$ is a superspace homomorphism defined by $(f_{1}\otimes\dots\otimes f_{d})(v_{1}\otimes\dots\otimes v_{d})=(-1)^{\sum_{1\leqslant r<s\leqslant d}|f_{s}||v_{r}|}f_{1}(v_{1})\otimes\dots\otimes f_{d}(v_{d}).$ Let $V=V_{\bar{0}}\oplus V_{\bar{1}}$ be a superspace, and $d\in\mathbb{N}$. The symmetric group $\mathfrak{S}_{d}$ acts on $V^{\otimes d}$ via ${}^{w}(v_{1}\otimes\dots\otimes v_{d}):=(-1)^{[w;v_{1},\dots,v_{d}]}v_{w^{-1}(1)}\otimes\dots\otimes v_{w^{-1}(d)},$ where for $w\in\mathfrak{S}_{d}$ and $v_{1},\dots,v_{d}\in V$, we have $[w;v_{1},\dots,v_{d}]:=\sum_{\begin{subarray}{c}1\leqslant a<c\leqslant d\\\ w(a)>w(c)\end{subarray}}|v_{a}||v_{c}|.$ It is now easy to check that $\sigma_{V}^{\otimes d}({}^{w}(v_{1}\otimes\dots\otimes v_{d}))=\mathtt{sgn}(w)\Big{(}{}^{w}\big{(}\sigma_{V}^{\otimes d}(v_{1}\otimes\dots\otimes v_{d})\big{)}\Big{)}.$ (4.1) ### 4.2. Superalgebras An ${\mathbb{F}}$-_superalgebra_ is an ${\mathbb{F}}$-algebra $A$ with a chosen $\mathbb{Z}/2\mathbb{Z}$-grading $A=A_{\bar{0}}\oplus A_{\bar{1}}$ such that $ab\in A_{|a|+|b|}$ (whenever $a,b\in A$ are both homogeneous). If $A$ and $B$ are ${\mathbb{F}}$-superalgebras, a _superalgebra homomorphism_ $f:A\to B$ is an even unital algebra homomorphism. If $A_{1},\dots,A_{d}$ are superalgebras then the superspace $A_{1}\otimes\dots\otimes A_{d}$ is a superalgebra with multiplication $(a_{1}\otimes\dots\otimes a_{d})(b_{1}\otimes\dots\otimes b_{d})=(-1)^{\sum_{1\leqslant r<s\leqslant d}|a_{s}||b_{r}|}a_{1}b_{1}\otimes\dots\otimes a_{d}b_{d}.$ ###### Example 4.1. We consider the quiver with vertex set $J$, with a pair of opposite arrows ${\textsf{a}}^{k,k+1},{\textsf{a}}^{k+1,k}$ between the vertices $k$ and $k+1$ for $0\leqslant k\leqslant\ell-2$, and with a loop ${\mathsf{u}}$ at the vertex $0$, and define the Brauer tree algebra ${\sf A}_{\ell}$ to be the path algebra of this quiver generated by length $0$ paths $\\{{\textsf{e}}^{j}\mid j\in J\\}$, and length $1$ paths ${\mathsf{u}}$ and $\\{{\textsf{a}}^{k,k+1},{\textsf{a}}^{k+1,k}\mid 0\leqslant k\leqslant\ell-2\\}$, modulo the following relations: 1. (i) all paths of length three or greater are zero; 2.
(ii) all paths of length two that are not cycles are zero; 3. (iii) for each vertex $i\in\\{1,\dots,\ell-2\\}$, the two length-two cycles based at $i$ are equal; 4. (iv) ${\mathsf{u}}^{2}={\textsf{a}}^{0,1}{\textsf{a}}^{1,0}$ if $\ell\geqslant 2$. For example, if $\ell=1$ then the algebra ${\sf A}_{\ell}$ is the truncated polynomial algebra ${\mathbb{F}}[{\mathsf{u}}]/({\mathsf{u}}^{3})$. The algebra ${\sf A}_{\ell}$ is considered as a superalgebra by declaring that ${\mathsf{u}}$ is odd and all other generators are even. ###### Example 4.2. For $d\in\mathbb{N}$, we consider the wreath superproduct ${\sf W}_{d}:={\sf A}_{\ell}\wr\mathfrak{S}_{d}.$ As a vector superspace this is just ${\sf A}_{\ell}^{\otimes d}\otimes{\mathbb{F}}\mathfrak{S}_{d}$, with ${\mathbb{F}}\mathfrak{S}_{d}$ concentrated in parity ${\bar{0}}$. The multiplication is determined by the following requirements: 1. (1) $\text{\boldmath$z$}\mapsto\text{\boldmath$z$}\otimes 1$ defines a superalgebra embedding ${\sf A}_{\ell}^{\otimes d}\to{\sf A}_{\ell}^{\otimes d}\otimes{\mathbb{F}}\mathfrak{S}_{d}$; we identify ${\sf A}_{\ell}^{\otimes d}$ with a subsuperalgebra of ${\sf W}_{d}$ via this embedding. 2. (2) $x\mapsto 1\otimes x$ defines a superalgebra embedding ${\mathbb{F}}\mathfrak{S}_{d}\to{\sf A}_{\ell}^{\otimes d}\otimes{\mathbb{F}}\mathfrak{S}_{d}$; we identify ${\mathbb{F}}\mathfrak{S}_{d}$ with a subsuperalgebra of ${\sf W}_{d}$ via this embedding. 3. (3) $w({\textsf{z}}_{1}\otimes\dots\otimes{\textsf{z}}_{d})={}^{w}({\textsf{z}}_{1}\otimes\dots\otimes{\textsf{z}}_{d})w$ for all $w\in\mathfrak{S}_{d}$ and all ${\textsf{z}}_{1},\dots,{\textsf{z}}_{d}\in{\sf A}_{\ell}$. ### 4.3. Supermodules Let $A$ be a superalgebra. An $A$-_supermodule_ means an $A$-module $V$ with a chosen $\mathbb{Z}/2\mathbb{Z}$-grading $V=V_{\bar{0}}\oplus V_{\bar{1}}$ such that $av\in V_{|a|+|v|}$ for all (homogeneous) $a\in A$ and $v\in V$. If $V$ and $W$ are $A$-supermodules then a homomorphism $f:V\to W$ of superspaces is a homomorphism of $A$-supermodules if $f(av)=(-1)^{|f||a|}af(v)$ for $a\in A$ and $v\in V$. For an $A$-supermodule $V$, the superspace $\Pi V$ is considered as an $A$-supermodule via the new action $a\cdot v=(-1)^{|a|}av$ for $a\in A$ and $v\in\Pi V=V$. The map $\sigma_{V}:V\to\Pi V$ is then an odd isomorphism of supermodules; in particular, $\sigma_{V}(av)=(-1)^{|a|}a\cdot\sigma_{V}(v)$ for $a\in A$ and $v\in V$. We write ‘$\simeq$’ for an even isomorphism of $A$-supermodules, and ‘$\cong$’ for an arbitrary isomorphism of $A$-supermodules, cf. [Kl, Chapter 12]. A _subsupermodule_ of an $A$-supermodule $V$ is an $A$-submodule $W\subseteq V$ such that $W=(W\cap V_{\bar{0}})\oplus(W\cap V_{\bar{1}})$. An $A$-supermodule is _irreducible_ if it has exactly two subsupermodules. Irreducible supermodules come in two different types: an irreducible supermodule is of _type $\mathtt{M}$_ if it is irreducible as a module, and of _type $\mathtt{Q}$_ otherwise (in which case as a module it is the direct sum of two non-isomorphic irreducible modules, see for example [Kl, Section 12.2]). Every irreducible module arises in one of these ways from an irreducible supermodule (see for example [Kl, Corollary 12.2.10]), so understanding the irreducible supermodules (together with their types) is essentially equivalent to understanding irreducible modules. If $L$ is a finite-dimensional irreducible $A$-supermodule, then $L$ is of type $\mathtt{M}$ if and only if $L\not\simeq\Pi L$, see [Kl, Lemma 12.2.8].
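As a minimal illustration of this dichotomy (a standard example, not part of the main development): let ${\mathcal{C}}_{1}={\mathbb{F}}\oplus{\mathbb{F}}c$ be the rank-one Clifford superalgebra, with $c$ odd and $c^{2}=1$; this superalgebra reappears in the proof of Theorem 5.4 below. The supermodule $U={\mathbb{F}}u_{\bar{0}}\oplus{\mathbb{F}}u_{\bar{1}}$ with $|u_{\varepsilon}|={\varepsilon}$, $cu_{\bar{0}}=u_{\bar{1}}$ and $cu_{\bar{1}}=u_{\bar{0}}$ is irreducible as a supermodule, but as a module it decomposes as ${\mathbb{F}}(u_{\bar{0}}+u_{\bar{1}})\oplus{\mathbb{F}}(u_{\bar{0}}-u_{\bar{1}})$, with $c$ acting as $1$ and $-1$ on the two non-isomorphic summands; so $U$ is of type $\mathtt{Q}$. Accordingly $U\simeq\Pi U$, an even isomorphism $U\to\Pi U$ being given by $u_{\bar{0}}\mapsto u_{\bar{1}}$, $u_{\bar{1}}\mapsto-u_{\bar{0}}$. By contrast, any irreducible supermodule concentrated in a single parity (such as the supermodules $L_{j}$ for ${\sf A}_{\ell}$ introduced below) is of type $\mathtt{M}$.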
If $A_{1},\dots,A_{d}$ are superalgebras and $V_{1},\dots,V_{d}$ are supermodules over $A_{1},\dots,A_{d}$ respectively, we have a supermodule $V_{1}\boxtimes\dots\boxtimes V_{d}$ over $A_{1}\otimes\dots\otimes A_{d}$, which is $V_{1}\otimes\dots\otimes V_{d}$ as a superspace with the action defined by $(a_{1}\otimes\dots\otimes a_{d})(v_{1}\otimes\dots\otimes v_{d})=(-1)^{\sum_{1\leqslant r<s\leqslant d}|a_{s}||v_{r}|}a_{1}v_{1}\otimes\dots\otimes a_{d}v_{d}.$ If $f_{i}:V_{i}\to W_{i}$ is an $A_{i}$-supermodule homomorphism for $i=1,\dots,d$, then $f_{1}\otimes\dots\otimes f_{d}:V_{1}\boxtimes\dots\boxtimes V_{d}\to W_{1}\boxtimes\dots\boxtimes W_{d}$ is a homomorphism of supermodules over $A_{1}\otimes\dots\otimes A_{d}$. In particular, $\sigma_{V_{1}}\otimes\dots\otimes\sigma_{V_{d}}:V_{1}\boxtimes\dots\boxtimes V_{d}\to(\Pi V_{1})\boxtimes\dots\boxtimes(\Pi V_{d})$ (4.2) is an isomorphism of $(A_{1}\otimes\dots\otimes A_{d})$-supermodules of parity $d\pmod{2}$. If $V$ is a finite-dimensional $A$-supermodule, a composition series of $V$ is a sequence of subsupermodules $0=V_{0}\subset V_{1}\subset\dots\subset V_{n}=V$ such that $V_{k}/V_{k-1}$ is an irreducible supermodule for all $k=1,\dots,n$. If $L_{1},\dots,L_{n}$ are irreducible $A$-supermodules (not necessarily distinct) such that $V_{k}/V_{k-1}\simeq L_{k}$ for $k=1,\dots,n$, we say that $V$ has composition factors $L_{1},\dots,L_{n}$. These are well defined up to even isomorphisms and permutation. So if $L$ is an irreducible $A$-supermodule we have a well-defined composition multiplicity $[V:L]:=|\\{k\mid L_{k}\cong L\\}|.$ (If $L$ is of type $\mathtt{M}$ so that $L\not\simeq\Pi L$, we could consider the more delicate graded composition multiplicity $[V:L]_{\pi}=m+n\pi$ where $m=|\\{k\mid L_{k}\simeq L\\}|$ and $n=|\\{k\mid L_{k}\simeq\Pi L\\}|$, so that $[V:L]=m+n$, but this will not be needed.) If $A$ is a finite-dimensional superalgebra and $L$ is an irreducible $A$-supermodule, we denote the projective cover of $L$ by $P_{L}$. This is a direct summand of the regular supermodule with head $L$, see [Kl, Proposition 12.2.12]. The composition factors of the principal indecomposable supermodules $P_{L}$ will be of central importance in this paper. In particular, the _super-Cartan invariants of $A$_ are defined as the multiplicities $c_{L,L^{\prime}}:=[P_{L}:L^{\prime}]$ for all irreducible $A$-supermodules $L,L^{\prime}$. The super-Cartan matrix of $A$ is then the matrix $(c_{L,L^{\prime}})$ of all super-Cartan invariants of $A$. For the superalgebra ${\sf A}_{\ell}$ of Example 4.1, up to even isomorphisms and parity shifts $\Pi$, a complete set of irreducible ${\sf A}_{\ell}$-supermodules is $\\{L_{j}\mid j\in J\\}$ (4.3) where $L_{j}$ is spanned by an even vector $v_{j}$ such that ${\textsf{e}}_{j}v_{j}=v_{j}$ and all other standard generators of ${\sf A}_{\ell}$ act on $v_{j}$ as zero. Now, note that for each $j$, the supermodule $L_{j}$ is of type $\mathtt{M}$, and $P_{j}:={\sf A}_{\ell}{\textsf{e}}_{j}$ is a projective cover of $L_{j}$.
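For instance (a quick sanity check in the case $\ell=1$ of Example 4.1): the regular supermodule $V={\mathbb{F}}[{\mathsf{u}}]/({\mathsf{u}}^{3})$ over ${\sf A}_{1}$ has composition series $V\supset({\mathsf{u}})\supset({\mathsf{u}}^{2})\supset 0$ with factors $L_{0},\Pi L_{0},L_{0}$ from top to bottom (the middle factor is odd because ${\mathsf{u}}$ is). Since $L_{0}\cong\Pi L_{0}$ via $\sigma_{L_{0}}$, we get $[V:L_{0}]=3$, while the graded multiplicity is $[V:L_{0}]_{\pi}=2+\pi$.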
We can easily write down a basis for each $P_{j}$: $\displaystyle P_{0}$ $\displaystyle=\langle{\textsf{e}}_{0},{\mathsf{u}}{\textsf{e}}_{0},{\textsf{a}}^{1,0}{\textsf{e}}_{0},{\mathsf{u}}^{2}{\textsf{e}}_{0}\rangle\qquad$ $\displaystyle\text{(omitting ${\textsf{a}}^{1,0}{\textsf{e}}_{0}$ if $\ell=1$)},$ $\displaystyle P_{j}$ $\displaystyle=\langle{\textsf{e}}_{j},{\textsf{a}}^{j-1,j}{\textsf{e}}_{j},{\textsf{a}}^{j+1,j}{\textsf{e}}_{j},{\textsf{a}}^{j,j+1}{\textsf{a}}^{j+1,j}{\textsf{e}}_{j}\rangle\qquad$ $\displaystyle\text{for }1\leqslant j\leqslant\ell-2,$ $\displaystyle P_{\ell-1}$ $\displaystyle=\langle{\textsf{e}}_{\ell-1},{\textsf{a}}^{\ell-2,\ell-1}{\textsf{e}}_{\ell-1},{\textsf{a}}^{\ell-1,\ell-2}{\textsf{a}}^{\ell-2,\ell-1}{\textsf{e}}_{\ell-1}\rangle\qquad$ $\displaystyle\text{if $\ell\geqslant 2$}.$ From this, we can immediately read off the composition factors of each $P_{j}$: ###### Lemma 4.3. $P_{0}$ has composition factors $L_{0},\Pi L_{0},L_{1},L_{0}$ (omitting $L_{1}$ if $\ell=1$), $P_{i}$ has composition factors $L_{i},L_{i-1},L_{i+1},L_{i}$ for $1\leqslant i\leqslant\ell-2$, and $P_{\ell-1}$ has composition factors $L_{\ell-1},L_{\ell-2},L_{\ell-1}$ if $\ell\geqslant 2$. ### 4.4. Representations of wreath superproducts ${\sf W}_{d}$ We suppose from now until the end of Section 4 that $d<p$ or $p=0$. Our aim is to develop the representation theory of the wreath superproduct algebra ${\sf W}_{d}$ from Example 4.2, and ultimately to compute the super-Cartan matrix for ${\sf W}_{d}$. We take inspiration from the paper [CT2] by Chuang and Tan; many of our results are straightforward adaptations of their results to supermodules. Given $\text{\boldmath$j$}=j_{1}\dots j_{d}\in J^{d}$, we define the idempotent ${\textsf{e}}_{\text{\boldmath$j$}}:={\textsf{e}}_{j_{1}}\otimes\dots\otimes{\textsf{e}}_{j_{d}}\in{\sf A}_{\ell}^{\otimes d}\subseteq{\sf W}_{d}.$ Then we have the orthogonal idempotent decomposition in ${\sf W}_{d}$: $1=\sum_{\text{\boldmath$j$}\in J^{d}}e_{\text{\boldmath$j$}}.$ (4.4) For a composition $\delta=(\delta_{1},\dots,\delta_{k})$ of $d$, we have a Young subgroup $\mathfrak{S}_{\delta}=\mathfrak{S}_{\delta_{1}}\times\dots\times\mathfrak{S}_{\delta_{k}}\leqslant\mathfrak{S}_{d}$ and the corresponding parabolic subalgebra ${\sf W}_{\delta}:={\sf A}_{\ell}^{\otimes d}\otimes{\mathbb{F}}\mathfrak{S}_{\delta}\subseteq{\sf W}_{d}.$ Note that ${\sf W}_{\delta}\cong{\sf W}_{\delta_{1}}\otimes\dots\otimes{\sf W}_{\delta_{k}}$ (tensor product of superalgebras). If $V_{1},\dots,V_{k}$ are supermodules for ${\sf W}_{\delta_{1}},\dots,{\sf W}_{\delta_{k}}$ respectively, then we have the supermodule $V_{1}\boxtimes\dots\boxtimes V_{k}$ over ${\sf W}_{\delta_{1}}\otimes\dots\otimes{\sf W}_{\delta_{k}}={\sf W}_{\delta}$, so we can form the ${\sf W}_{d}$-supermodule $V_{1}\circ\dots\circ V_{k}:={\mathrm{Ind}}^{{\sf W}_{d}}_{{\sf W}_{\delta}}(V_{1}\boxtimes\dots\boxtimes V_{k}).$ Note that the operation ‘$\circ$’ is commutative in the sense that $V\circ V^{\prime}\simeq V^{\prime}\circ V$. Recall that if $\lambda\in{\mathscr{P}}(d)$, we write $\mathcal{S}^{\lambda}$ for the corresponding Specht module for ${\mathbb{F}}\mathfrak{S}_{d}$. Our assumptions on $p$ mean that $\mathcal{S}^{\lambda}$ is irreducible, and we can fix a primitive idempotent $f^{\lambda}\in{\mathbb{F}}\mathfrak{S}_{d}$ such that ${\mathbb{F}}\mathfrak{S}_{d}f^{\lambda}\cong\mathcal{S}^{\lambda}$. 
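For instance (a minimal illustration, in notation not fixed elsewhere in the paper): if $d=2$, then ${\mathbb{F}}\mathfrak{S}_{2}$ is semisimple because $p\neq 2$, and we may take $f^{(2)}=\tfrac{1}{2}(1+s)$ and $f^{(1^{2})}=\tfrac{1}{2}(1-s)$, where $s\in\mathfrak{S}_{2}$ is the transposition; then ${\mathbb{F}}\mathfrak{S}_{2}f^{(2)}\cong\mathcal{S}^{(2)}$ is the trivial module and ${\mathbb{F}}\mathfrak{S}_{2}f^{(1^{2})}\cong\mathcal{S}^{(1^{2})}$ is the sign module.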
Now given $\text{\boldmath$\lambda$}\in{\mathscr{P}}^{J}(d)$, define $\delta:=(\delta_{0},\dots,\delta_{\ell-1})=(|\lambda^{(0)}|,\dots,|\lambda^{(\ell-1)}|)$. Then we have a primitive idempotent $f^{\text{\boldmath$\lambda$}}:=f^{\lambda^{(0)}}\otimes\dots\otimes f^{\lambda^{(\ell-1)}}\in{\mathbb{F}}\mathfrak{S}_{\delta_{0}}\otimes\dots\otimes{\mathbb{F}}\mathfrak{S}_{\delta_{\ell-1}}={\mathbb{F}}\mathfrak{S}_{\delta},$ from which we define an idempotent $e(\text{\boldmath$\lambda$}):=e_{0}^{\otimes\delta_{0}}\otimes\dots\otimes e_{\ell-1}^{\otimes\delta_{\ell-1}}\otimes f^{\text{\boldmath$\lambda$}}\in{\sf W}_{d}.$ (4.5) Let $V$ be a finite-dimensional ${\sf A}_{\ell}$-supermodule and $\lambda\in{\mathscr{P}}(d)$. Denote $V(\lambda):=V^{\otimes d}\otimes\mathcal{S}^{\lambda}$, considered as a ${\sf W}_{d}$-supermodule via $\displaystyle\text{\boldmath$z$}(v_{1}\otimes\dots\otimes v_{d}\otimes y)$ $\displaystyle=\text{\boldmath$z$}(v_{1}\otimes\dots\otimes v_{d})\otimes y,$ $\displaystyle w(v_{1}\otimes\dots\otimes v_{d}\otimes y)$ $\displaystyle={}^{w}(v_{1}\otimes\dots\otimes v_{d})\otimes wy$ for all $\text{\boldmath$z$}\in{\sf A}_{\ell}^{\otimes d}$, $w\in\mathfrak{S}_{d}$, $v_{1},\dots,v_{d}\in V$, $y\in\mathcal{S}^{\lambda}$. Important special cases of this construction are $V=L_{j}$ and $V=P_{j}$, the irreducible ${\sf A}_{\ell}$-supermodule and its projective cover constructed in Section 4.3; these yield the ${\sf W}_{d}$-supermodules $L_{j}(\lambda)$ and $P_{j}(\lambda)$. For a general $V$ we have the following two results. ###### Lemma 4.4. Let $V$ be a finite-dimensional ${\sf A}_{\ell}$-supermodule and $\lambda\in{\mathscr{P}}(d)$. Then $(\Pi V)(\lambda)\cong V(\lambda^{\prime})$. ###### Proof. By (2.2), we have $\mathcal{S}^{\lambda^{\prime}}\cong\mathcal{S}^{\lambda}\otimes\mathtt{sgn}$, so we can identify $\mathcal{S}^{\lambda^{\prime}}$ with $\mathcal{S}^{\lambda}$ as vector spaces but with the new action $w\cdot y=\mathtt{sgn}(w)wy$. Now, we consider the linear isomorphism ${\varphi}:=\sigma_{V}^{\otimes d}\otimes\mathop{\mathrm{id}}\nolimits:V(\lambda^{\prime})=V^{\otimes d}\otimes\mathcal{S}^{\lambda^{\prime}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}(\Pi V)^{\otimes d}\otimes\mathcal{S}^{\lambda}=(\Pi V)(\lambda).$ As pointed out in (4.2), ${\varphi}$ is an isomorphism of ${\sf A}_{\ell}^{\otimes d}$-supermodules. On the other hand, for $v_{1},\dots,v_{d}\in V$, $y\in\mathcal{S}^{\lambda^{\prime}}$ and $w\in\mathfrak{S}_{d}$, we have $\displaystyle{\varphi}\big{(}w(v_{1}\otimes\dots\otimes v_{d}\otimes y)\big{)}$ $\displaystyle={\varphi}\big{(}{}^{w}(v_{1}\otimes\dots\otimes v_{d})\otimes(\mathtt{sgn}(w)wy)\big{)}$ $\displaystyle=\mathtt{sgn}(w)\sigma_{V}^{\otimes d}\big{(}{}^{w}(v_{1}\otimes\dots\otimes v_{d})\big{)}\otimes wy$ $\displaystyle={}^{w}\big{(}\sigma_{V}^{\otimes d}(v_{1}\otimes\dots\otimes v_{d})\big{)}\otimes wy$ $\displaystyle=w{\varphi}(v_{1}\otimes\dots\otimes v_{d}\otimes y)$ where we use (4.1) for the penultimate equality. So ${\varphi}$ is also an isomorphism of ${\mathbb{F}}\mathfrak{S}_{d}$-modules. It follows that ${\varphi}$ is an isomorphism of ${\sf W}_{d}$-supermodules. ∎ ###### Lemma 4.5. Let $V$ be a finite-dimensional ${\sf A}_{\ell}$-supermodule, and $\delta=(\delta_{1},\dots,\delta_{k})$ be a composition of $d$. 1.
(i) For $\lambda\in{\mathscr{P}}(d)$, we have ${\mathrm{Res}}^{{\sf W}_{d}}_{{\sf W}_{\delta}}V(\lambda)\cong\bigoplus_{\mu^{1}\in{\mathscr{P}}(\delta_{1}),\dots,\mu^{k}\in{\mathscr{P}}(\delta_{k})}\big{(}V(\mu^{1})\boxtimes\dots\boxtimes V(\mu^{k})\big{)}^{\oplus\operatorname{c}(\lambda;\mu^{1},\dots,\mu^{k})}.$ 2. (ii) For $\mu^{1}\in{\mathscr{P}}(\delta_{1}),\dots,\mu^{k}\in{\mathscr{P}}(\delta_{k})$, we have $V(\mu^{1})\circ\dots\circ V(\mu^{k})\cong\bigoplus_{\lambda\in{\mathscr{P}}(d)}V(\lambda)^{\oplus\operatorname{c}(\lambda;\mu^{1},\dots,\mu^{k})}.$ ###### Proof. The proof is identical to that of [CT2, Lemma 3.3] paying attention to the superalgebra signs. ∎ Given $\text{\boldmath$\lambda$}=(\lambda^{(0)},\dots,\lambda^{(\ell-1)})\in{\mathscr{P}}^{J}(d)$, we now define the ${\sf W}_{d}$-supermodules $\displaystyle L(\text{\boldmath$\lambda$})$ $\displaystyle:=L_{0}(\lambda^{(0)})\circ\dots\circ L_{\ell-1}(\lambda^{(\ell-1)}),$ $\displaystyle P(\text{\boldmath$\lambda$})$ $\displaystyle:=P_{0}(\lambda^{(0)})\circ\dots\circ P_{\ell-1}(\lambda^{(\ell-1)}).$ ###### Proposition 4.6. The set $\\{L(\text{\boldmath$\lambda$})\mid\text{\boldmath$\lambda$}\in{\mathscr{P}}^{J}(d)\\}$ is a complete irredundant set of irreducible ${\sf W}_{d}$-supermodules up to even isomorphism and parity shift. Moreover, $P(\text{\boldmath$\lambda$})$ is a projective cover of $L(\text{\boldmath$\lambda$})$ for each $\text{\boldmath$\lambda$}\in{\mathscr{P}}^{J}(d)$. ###### Proof. The first statement is easy to see and is well-known, see e.g. [M1, Theorem A.5]. For the second statement, note using Frobenius reciprocity that $P(\text{\boldmath$\lambda$})\simeq{\sf W}_{d}e(\text{\boldmath$\lambda$})$ for the idempotent $e(\text{\boldmath$\lambda$})\in{\sf W}_{d}$ defined in (4.5). We now also deduce that $\dim\operatorname{Hom}_{{\sf W}_{d}}(P(\text{\boldmath$\lambda$}),L(\text{\boldmath$\mu$}))=\delta_{\text{\boldmath$\lambda$},\text{\boldmath$\mu$}}$, completing the proof. ∎ Now we determine the composition factors of the modules $L(\text{\boldmath$\lambda$}^{1})\circ\cdots\circ L(\text{\boldmath$\lambda$}^{k})$. ###### Lemma 4.7. Let $\text{\boldmath$\mu$}\in{\mathscr{P}}^{J}(d)$ and let $(\delta_{1},\dots,\delta_{k})$ be a composition of $d$. For $r=1,\dots,k$, suppose $\text{\boldmath$\lambda$}^{r}=(\lambda^{(r,0)},\dots,\lambda^{(r,\ell-1)})\in{\mathscr{P}}^{J}(\delta_{r})$. Then $[L(\text{\boldmath$\lambda$}^{1})\circ\cdots\circ L(\text{\boldmath$\lambda$}^{k}):L(\text{\boldmath$\mu$})]=\prod_{j\in J}\operatorname{c}(\mu^{(j)};\lambda^{(1,j)},\dots,\lambda^{(k,j)}).$ ###### Proof. This follows from Lemma 4.5(ii) using commutativity of ‘$\circ$’. ∎ ### 4.5. The super-Cartan matrix for ${\sf W}_{d}$ In this subsection we continue to assume that $d<p$ or $p=0$. Having explicitly constructed the irreducible and projective indecomposable supermodules for ${\sf W}_{d}$, we now proceed to compute its super-Cartan invariants. ###### Lemma 4.8. Let $V,W$ be finite-dimensional ${\sf A}_{\ell}$-supermodules and $U$ be a subsupermodule of $V$ such that $V/U\simeq W$. Then for $\lambda\in{\mathscr{P}}(d)$, the ${\sf W}_{d}$-supermodule $V(\lambda)$ has a filtration whose subfactors are the supermodules $U(\mu)\circ W(\nu)$ with $|\mu|+|\nu|=d$, each appearing exactly $\operatorname{c}(\lambda;\mu,\nu)$ times. ###### Proof.
For $0\leqslant c\leqslant d$, we denote by $V_{c}$ the subsupermodule of $V(\lambda)=V^{\otimes d}\otimes\mathcal{S}^{\lambda}$ spanned by the vectors of the form $v_{1}\otimes\dots\otimes v_{d}\otimes x$ such that at least $c$ of the vectors $v_{1},\dots,v_{d}\in V$ belong to $U$ and $x\in\mathcal{S}^{\lambda}$. This gives a filtration $V(\lambda)=V_{0}\supseteq V_{1}\supseteq\dots\supseteq V_{d}\supseteq V_{d+1}=0$ with $\frac{V_{c}}{V_{c+1}}\cong\bigoplus_{\begin{subarray}{c}\mu\in{\mathscr{P}}(c)\\\ \nu\in{\mathscr{P}}(d-c)\end{subarray}}(U(\mu)\circ W(\nu))^{\oplus\operatorname{c}(\lambda;\mu,\nu)},$ cf. the proof of [CT2, Lemma 4.2]. ∎ ###### Lemma 4.9. Let $\lambda\in{\mathscr{P}}(d)$, and $V$ be an ${\sf A}_{\ell}$-supermodule with composition series $V=V_{0}\supset V_{1}\supset\dots\supset V_{m+1}=0.$ Set $K:=\\{0,\dots,m\\}$. For $\text{\boldmath$\nu$}=(\nu^{(0)},\dots,\nu^{(m)})\in{\mathscr{P}}^{K}(d)$ and $j\in J$, define multisets $\displaystyle M(j,\text{\boldmath$\nu$})$ $\displaystyle:=\\{\nu^{(k)}\mid k\in K,\ V_{k}/V_{k+1}\simeq L_{j}\\}$ $\displaystyle M^{\prime}(j,\text{\boldmath$\nu$})$ $\displaystyle:=\\{(\nu^{(k)})^{\prime}\mid k\in K,\ V_{k}/V_{k+1}\simeq\Pi L_{j}\\}.$ Then for any $\text{\boldmath$\mu$}=(\mu^{(0)},\dots,\mu^{(\ell-1)})\in{\mathscr{P}}^{J}(d)$, we have $[V(\lambda):L(\text{\boldmath$\mu$})]=\sum_{\text{\boldmath$\nu$}\in{\mathscr{P}}^{K}(d)}\operatorname{c}(\lambda;\nu^{(0)},\dots,\nu^{(m)})\,\prod_{j\in J}\operatorname{c}(\mu^{(j)};M(j,\text{\boldmath$\nu$})\sqcup M^{\prime}(j,\text{\boldmath$\nu$})).$ ###### Proof. This follows by induction from Lemma 4.8, using Lemmas 4.5 and 4.4. ∎ The following result is a ‘superversion’ of [CT2, Proposition 4.4]. ###### Proposition 4.10. Let $V_{0},\dots,V_{\ell-1}$ be finite-dimensional ${\sf A}_{\ell}$-supermodules and $\text{\boldmath$\lambda$}=(\lambda^{(0)},\dots,\lambda^{(\ell-1)})\in{\mathscr{P}}^{J}(d)$. Set $V(\text{\boldmath$\lambda$}):=V_{0}(\lambda^{(0)})\circ\dots\circ V_{\ell-1}(\lambda^{(\ell-1)}).$ Let $V_{j}=V_{j,0}\supset V_{j,1}\supset\dots\supset V_{j,m_{j}+1}=0$ be a composition series of $V_{j}$ for each $j\in J$. Set $K:=\\{(j,s)\in J\times\mathbb{N}_{0}\mid s\leqslant m_{j}\\}.$ For $i\in J$ and $\text{\boldmath$\nu$}\in{\mathscr{P}}^{K}(d)$, let $\displaystyle M(i,\text{\boldmath$\nu$})$ $\displaystyle:=\\{\nu^{(j,s)}\mid(j,s)\in K\ \text{and}\ V_{j,s}/V_{j,s+1}\simeq L_{i}\\}$ $\displaystyle M^{\prime}(i,\text{\boldmath$\nu$})$ $\displaystyle:=\\{(\nu^{(j,s)})^{\prime}\mid(j,s)\in K\ \text{and}\ V_{j,s}/V_{j,s+1}\simeq\Pi L_{i}\\}.$ Then for any $\text{\boldmath$\mu$}=(\mu^{(0)},\dots,\mu^{(\ell-1)})\in{\mathscr{P}}^{J}(d)$, we have $[V(\text{\boldmath$\lambda$}):L(\text{\boldmath$\mu$})]=\sum_{\text{\boldmath$\nu$}\in{\mathscr{P}}^{K}(d)}\prod_{j\in J}\operatorname{c}(\lambda^{(j)};\nu^{(j,0)},\dots,\nu^{(j,m_{j})})\,\operatorname{c}(\mu^{(j)};M(j,\text{\boldmath$\nu$})\sqcup M^{\prime}(j,\text{\boldmath$\nu$})).$ ###### Proof. For $j\in J$, we set $\displaystyle M(i,\text{\boldmath$\nu$},j)$ $\displaystyle:=\\{\nu^{(j,s)}\mid 0\leqslant s\leqslant m_{j}\ \text{and}\ V_{j,s}/V_{j,s+1}\simeq L_{i}\\}$ $\displaystyle M^{\prime}(i,\text{\boldmath$\nu$},j)$ $\displaystyle:=\\{(\nu^{(j,s)})^{\prime}\mid 0\leqslant s\leqslant m_{j}\ \text{and}\ V_{j,s}/V_{j,s+1}\simeq\Pi L_{i}\\},$ so that $M(i,\text{\boldmath$\nu$})=\bigsqcup_{j\in J}M(i,\text{\boldmath$\nu$},j)$ and $M^{\prime}(i,\text{\boldmath$\nu$})=\bigsqcup_{j\in J}M^{\prime}(i,\text{\boldmath$\nu$},j)$. 
By Lemma 4.9, for each $j\in J$, setting $\delta_{j}:=|\lambda^{(j)}|$, we have in the Grothendieck group $[V_{j}(\lambda^{(j)})]=\sum_{\text{\boldmath$\mu$}^{j}\in{\mathscr{P}}^{J}(\delta_{j})}q_{\text{\boldmath$\mu$}^{j}}[L(\text{\boldmath$\mu$}^{j})],$ where $q_{\text{\boldmath$\mu$}^{j}}=\sum_{\nu^{(j,0)},\dots,\nu^{(j,m_{j})}}\operatorname{c}(\lambda^{(j)};\nu^{(j,0)},\dots,\nu^{(j,m_{j})})\,\prod_{i\in J}\operatorname{c}(\mu^{(j,i)};M(i,\text{\boldmath$\nu$},j)\sqcup M^{\prime}(i,\text{\boldmath$\nu$},j)).$ Now, $\displaystyle[V(\text{\boldmath$\lambda$})]$ $\displaystyle=[V_{0}(\lambda^{(0)})\circ\dots\circ V_{\ell-1}(\lambda^{(\ell-1)})]$ $\displaystyle=\sum_{\text{\boldmath$\mu$}^{0}\in{\mathscr{P}}^{J}(\delta_{0}),\dots,\text{\boldmath$\mu$}^{\ell-1}\in{\mathscr{P}}^{J}(\delta_{\ell-1})}q_{\text{\boldmath$\mu$}^{0}}\dots q_{\text{\boldmath$\mu$}^{\ell-1}}[L(\text{\boldmath$\mu$}^{0})\circ\cdots\circ L(\text{\boldmath$\mu$}^{\ell-1})].$ It remains to apply Lemma 4.7 and use the following identity involving Littlewood–Richardson coefficients: $\operatorname{c}(\mu^{(i)};M(i,\text{\boldmath$\nu$})\sqcup M^{\prime}(i,\text{\boldmath$\nu$}))=\operatorname{c}(\mu^{(i)};\mu^{(0,i)},\dots,\mu^{(\ell-1,i)})\prod_{j\in J}\operatorname{c}(\mu^{(j,i)};M(i,\text{\boldmath$\nu$},j)\sqcup M^{\prime}(i,\text{\boldmath$\nu$},j)),$ which in turn follows from the description of the Littlewood–Richardson coefficient in terms of induction for symmetric groups using the transitivity of induction. ∎ ###### Corollary 4.11. Let $\text{\boldmath$\lambda$},\text{\boldmath$\mu$}\in{\mathscr{P}}^{J}(d)$. Then $[P(\text{\boldmath$\lambda$}):L(\text{\boldmath$\mu$})]=\sum\prod_{j\in J}\operatorname{c}(\mu^{(j)};\alpha^{(j)},\beta^{(j+1)},\gamma^{(j-1)},\delta^{(j)})\,\operatorname{c}(\lambda^{(j)};\alpha^{(j)},\beta^{(j)},\gamma^{(j)},\delta^{(j)}),$ where the summation is over all partitions $\alpha^{(i)},\beta^{(i)},\gamma^{(i)},\delta^{(i)}$ with $i\in J$, reading $\gamma^{(-1)}=(\beta^{(0)})^{\prime}$ and $\beta^{(\ell)}=\gamma^{(\ell-1)}=\varnothing$. (If $\ell=1$ this formula is interpreted as $c_{\lambda,\mu}=\sum_{\alpha,\beta,\delta}\operatorname{c}(\mu;\alpha,\beta^{\prime},\delta)\operatorname{c}(\lambda;\alpha,\beta,\delta)$.) ###### Proof. Apply Proposition 4.10 to the case $V(\text{\boldmath$\lambda$})=P(\text{\boldmath$\lambda$})$, using Lemma 4.3. ∎ ## 5\. Representations of double covers of symmetric groups ### 5.1. The double cover of the symmetric group Let $\hat{\mathfrak{S}}_{n}$ denote a proper double cover of the symmetric group $\mathfrak{S}_{n}$. Then $\hat{\mathfrak{S}}_{n}$ contains a central element $z$ of order $2$, with $\hat{\mathfrak{S}}_{n}/\langle z\rangle\cong\mathfrak{S}_{n}$. The central involution $z$ yields a central idempotent $e_{z}=\frac{1}{2}(1-z)$, and a direct sum decomposition ${\mathbb{F}}\hat{\mathfrak{S}}_{n}=e_{z}{\mathbb{F}}\hat{\mathfrak{S}}_{n}\oplus(1-e_{z}){\mathbb{F}}\hat{\mathfrak{S}}_{n}.$ The algebra $(1-e_{z}){\mathbb{F}}\hat{\mathfrak{S}}_{n}$ is isomorphic to ${\mathbb{F}}\mathfrak{S}_{n}$, so we concentrate here on representations of $e_{z}{\mathbb{F}}\hat{\mathfrak{S}}_{n}$, often called the _spin representations_ of $\mathfrak{S}_{n}$. We identify $e_{z}{\mathbb{F}}\hat{\mathfrak{S}}_{n}$ with the twisted group algebra $\mathcal{T}_{n}$, see [Kl, Section 13.1], where a superalgebra structure is defined on $\mathcal{T}_{n}$ by letting $e_{z}\sigma$ be even or odd depending on whether the image of $\sigma$ in $\mathfrak{S}_{n}$ is even or odd.
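As a quick sanity check on these definitions: since $z^{2}=1$ and $z$ is central, $e_{z}^{2}=\tfrac{1}{4}(1-2z+z^{2})=\tfrac{1}{2}(1-z)=e_{z}$, so $e_{z}$ is indeed a central idempotent (note that this uses $p\neq 2$), and the two summands above are two-sided ideals. On $e_{z}{\mathbb{F}}\hat{\mathfrak{S}}_{n}$ the element $z$ acts as $-1$, which is why this summand carries the genuinely ‘spin’ representations, while on $(1-e_{z}){\mathbb{F}}\hat{\mathfrak{S}}_{n}$ it acts as $1$, recovering ${\mathbb{F}}\mathfrak{S}_{n}$.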
The classification of irreducible spin supermodules in characteristic $0$ goes back to Schur (though Schur worked with modules rather than supermodules, and only constructed characters; the corresponding modules were constructed much later, by Nazarov [N]). For each strict partition $\lambda$ of $n$ there is an irreducible spin supermodule $\mathrm{S}_{\mathbb{C}}(\lambda)$ for ${\mathbb{C}}\hat{\mathfrak{S}}_{n}$, and $\left\\{\left.\mathrm{S}_{\mathbb{C}}(\lambda)\ \right|\ \smash{\lambda\in{\mathscr{P}_{0}}(n)}\right\\}$ is a complete irredundant set of irreducible spin supermodules. Moreover, recalling (2.1), the supermodule $\mathrm{S}_{\mathbb{C}}(\lambda)$ is of type $\mathtt{M}$ if $a(\lambda)=0$, and of type $\mathtt{Q}$ if $a(\lambda)=1$. The classification of irreducible supermodules in characteristic $p$ is due to Brundan and the second author [BK2]. (Another classification is obtained in [BK1], and [KS, Theorem B] shows that the two classifications agree.) For each restricted $p$-strict partition $\mu$ of $n$, there is an irreducible $\mathcal{T}_{n}$-supermodule $\mathrm{D}(\mu)$, and $\left\\{\left.\mathrm{D}(\mu)\ \right|\ \smash{\mu\in{\mathscr{RP}_{p}}(n)}\right\\}$ is a complete irredundant set of irreducible $\mathcal{T}_{n}$-supermodules. Moreover, $\mathrm{D}(\mu)$ is of type $\mathtt{M}$ if $\mu$ has an even number of nodes of non-zero residue, and of type $\mathtt{Q}$ otherwise. Since we shall be interested exclusively in representations in characteristic $p$, we use the notation $\mathrm{S}(\lambda)$ for a $p$-modular reduction of $\mathrm{S}_{\mathbb{C}}(\lambda)$, viewed as a $\mathcal{T}_{n}$-supermodule. Note that $\mathrm{S}(\lambda)$ is not well-defined as a supermodule, but its composition factors are. The (super) _decomposition number problem_ then asks for the composition multiplicities $[\mathrm{S}(\lambda):\mathrm{D}(\mu)]$ for $\lambda\in{\mathscr{P}_{0}}(n)$ and $\mu\in{\mathscr{RP}_{p}}(n)$. The block classification for spin modules is due to Humphreys [H]. Here we prefer to deal with spin _superblocks_, i.e. indecomposable direct summands of $\mathcal{T}_{n}$ as a superalgebra; in fact blocks and superblocks coincide except in the trivial case of simple blocks, so we ignore this distinction, and say ‘block’ to mean ‘superblock’, see [KL, §5.2b] for more details on this. With this convention, each $\mathrm{S}(\lambda)$ belongs to a single block, and the $\mathcal{T}_{n}$-supermodules $\mathrm{S}(\lambda)$ and $\mathrm{D}(\mu)$ lie in the same block if and only if $\lambda$ and $\mu$ have the same $p$-bar-core. This automatically means that they have the same $p$-bar-weight, so blocks are labelled by pairs $(\rho,d)$, where $\rho$ is a $p$-bar-core and $d\in\mathbb{N}_{0}$ with $|\rho|+pd=n$. We write $\mathcal{B}^{\rho,d}$ for the block corresponding to the pair $(\rho,d)$. An alternative statement of the block classification can be given using residues: in view of [MY1, Theorem 5], two $p$-strict partitions of $n$ have the same $p$-bar-core if and only if they have the same number of $i$-nodes for each $i\in I$. So we may alternatively label a block of $\mathcal{T}_{n}$ with a multiset consisting of $n$ elements of $I$, corresponding to the residues of the nodes of any partition labelling an irreducible module in the block. We write $\mathcal{B}_{S}$ for the block labelled by the multiset $S$.
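To illustrate the residue labelling (recall that the residue of a node depends only on its column, the column residues following the repeating pattern $0,1,\dots,\ell-1,\ell,\ell-1,\dots,1,0$; this is the convention implicit in the $j$-hook configurations of Section 6.2): if $p=5$, so $\ell=2$, then the strict partition $\lambda=(3,1)$ has nodes of residues $0,1,2$ in its first row and a node of residue $0$ in its second row, so the block containing $\mathrm{S}(\lambda)$ is $\mathcal{B}_{S}$ with $S=\\{0,0,1,2\\}$.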
An important consequence of this is that all the irreducible supermodules in a block have the same type; so we say that a block has type $\mathtt{M}$ or $\mathtt{Q}$ accordingly. We also have a double cover $\hat{\mathfrak{A}}_{n}\subseteq\hat{\mathfrak{S}}_{n}$ of the alternating group whose twisted group algebra $e_{z}{\mathbb{F}}\hat{\mathfrak{A}}_{n}$ can be identified with the even component $(\mathcal{T}_{n})_{\bar{0}}$. Moreover, by [Ke, Proposition 3.16], the even component $\mathcal{B}^{\rho,d}_{\bar{0}}$ of $\mathcal{B}^{\rho,d}$ is a single block of ${\mathbb{F}}\hat{\mathfrak{A}}_{n}$, unless $d=0$ and $\mathcal{B}^{\rho,d}$ is of type $\mathtt{M}$. We refer the reader to [KL, §5.2b] for more on this. ### 5.2. Branching rules and weights The block classification using multisets of residues allows us to define restriction and induction functors $E_{i}$ and $F_{i}$. Suppose $M$ is a $\mathcal{T}_{n}$-supermodule lying in the block $\mathcal{B}_{S}$. Given $i\in I$, we define a $\mathcal{T}_{n+1}$-module $F_{i}M$ by inducing $M$ to $\mathcal{T}_{n+1}$ and then taking the block component lying in the block $\mathcal{B}_{S\sqcup\\{i\\}}$ (if there is such a block; otherwise we set $F_{i}M:=0$). The restriction functor $E_{i}$ is defined in a similar way by restricting to $\mathcal{T}_{n-1}$ and removing a copy of $i$ from $S$. The functors $E_{i},F_{i}$ (which are called $\operatorname{res}_{i}$ and $\operatorname{ind}_{i}$ in [Kl, (22.17),(22.18)]) are defined for all $n$, so we can consider powers $E_{i}^{r},F_{i}^{r}$ for $r\geqslant 0$. Given $\lambda\in{\mathscr{P}_{0}}(n)$, let $M(\lambda,i)$ be the set of strict partitions of $n+1$ which can be obtained by adding an $i$-node to $\lambda$. Then, in view of [MY2, Theorem 3], in the Grothendieck group of $\mathcal{T}_{n+1}$ we have $[F_{i}\mathrm{S}(\lambda)]=\sum_{\mu\in M(\lambda,i)}a_{\lambda\mu}[\mathrm{S}(\mu)],$ (5.1) where $a_{\lambda\mu}$ equals $2$ if $\lambda$ is odd and $\mu$ is even, and $1$ otherwise. Frobenius reciprocity yields a corresponding result for $E_{i}\mathrm{S}(\lambda)$. (This description of $[E_{i}\mathrm{S}(\lambda)]$ and $[F_{i}\mathrm{S}(\lambda)]$ can also be deduced by considering the $p>n$ case of [Kl, Theorems 22.3.4, 22.3.5].) We can now apply the operators $E_{i}$ and $F_{i}$ to characters of supermodules (either ordinary characters or $p$-modular Brauer characters) as well as to supermodules. For example, if $\chi^{\lambda}$ denotes the character of an irreducible supermodule $\mathrm{S}_{\mathbb{C}}(\lambda)$, we define $F_{i}\chi^{\lambda}=\sum_{\mu\in M(\lambda,i)}a_{\lambda\mu}\chi^{\mu}$. We define $E_{i}\chi^{\lambda}$ similarly. The modular branching rules of Brundan–Kleshchev and Kleshchev–Shchigolev give information on the modules $E_{i}\mathrm{D}(\mu)$. We just need one result, and to state this we need some more combinatorics. Suppose $\mu$ is a $p$-strict partition and $i\in I$. Let $\mu^{-}$ denote the smallest $p$-strict partition such that $\mu^{-}\subseteq\mu$ and $\mu\setminus\mu^{-}$ consists of $i$-nodes. These nodes are called the _removable_ $i$-nodes of $\mu$. Similarly, let $\mu^{+}$ denote the largest $p$-strict partition such that $\mu^{+}\supseteq\mu$ and $\mu^{+}\setminus\mu$ consists of $i$-nodes. These nodes are called the _addable_ $i$-nodes of $\mu$. The _$i$-signature_ of $\mu$ is the sequence of signs obtained by listing the addable and removable $i$-nodes of $\mu$ from left to right, writing a $+$ for each addable $i$-node and a $-$ for each removable $i$-node.
The _reduced $i$-signature_ is the subsequence obtained by successively deleting adjacent pairs $+-$. The removable nodes corresponding to the $-$ signs in the reduced $i$-signature are called the _normal_ $i$-nodes of $\mu$. The result we will need below is the following (see [KS, Theorem A(ii)]). ###### Lemma 5.1. Suppose $\mu\in{\mathscr{RP}_{p}}(n)$ and $\nu\in{\mathscr{RP}_{p}}(n-1)$, and that $\nu$ is obtained from $\mu$ by removing a normal $i$-node. Then $\mathrm{D}(\nu)$ is a composition factor of $E_{i}\mathrm{D}(\mu)$. Now given a $\mathcal{T}_{n}$-supermodule $M$ and a word $\text{\boldmath$i$}=i_{1}\dots i_{n}\in I^{n}$, we say that $\text{\boldmath$i$}$ is a _weight_ of $M$ if $E_{i_{1}}\dots E_{i_{n}}M\neq 0$. The fact that the functors $E_{i}$ are exact, together with the results above, yields the following. ###### Proposition 5.2. Suppose $i\in I$ and $i_{1}\dots i_{n-1}\in I^{n-1}$. 1. (i) Suppose $\lambda\in{\mathscr{P}_{0}}(n)$ and $\mu\in{\mathscr{P}_{0}}(n-1)$ is obtained from $\lambda$ by removing an $i$-node. If $i_{1}\dots i_{n-1}$ is a weight of $\mathrm{S}(\mu)$, then $i_{1}\dots i_{n-1}i$ is a weight of $\mathrm{S}(\lambda)$. 2. (ii) Suppose $\mu\in{\mathscr{RP}_{p}}(n)$ and $\nu\in{\mathscr{RP}_{p}}(n-1)$ is obtained from $\mu$ by removing a normal $i$-node. If $i_{1}\dots i_{n-1}$ is a weight of $\mathrm{D}(\nu)$, then $i_{1}\dots i_{n-1}i$ is a weight of $\mathrm{D}(\mu)$. For (much) more information on branching rules for $\mathcal{T}_{n}$, see [Kl, Part II] and [KS]. ### 5.3. Virtual projective characters Given $\lambda\in\mathscr{P}_{0}^{\rho,d}$, we write $\chi^{\lambda}$ for the character of the irreducible supermodule $\mathrm{S}_{\mathbb{C}}(\lambda)$, and we denote by $\operatorname{Ch}^{\rho,d}$ the ${\mathbb{Q}}$-span of the set $\\{\chi^{\lambda}\mid\lambda\in\mathscr{P}_{0}^{\rho,d}\\}$ of class functions on $\hat{\mathfrak{S}}_{|\rho|+dp}$. For each $\mu\in\mathscr{RP}_{p}^{\rho,d}$ we have an indecomposable projective supermodule $P(\mu)$ with simple head $\mathrm{D}(\mu)$. Lifting the idempotents as in the classical theory, we deduce that $P(\mu)$ lifts to characteristic zero, yielding the character ${\varphi}^{{\mu}}\in\operatorname{Ch}^{\rho,d}$. We denote by $\operatorname{PCh}^{\rho,d}$ the ${\mathbb{Q}}$-span of the set $\\{{\varphi}^{{\mu}}\mid\mu\in\mathscr{RP}_{p}^{\rho,d}\\}$ and refer to the elements of $\operatorname{PCh}^{\rho,d}$ as _virtual projective characters_. Note that $\\{\chi^{\lambda}\mid\lambda\in\mathscr{P}_{0}^{\rho,d}\\}$ is a basis for $\operatorname{Ch}^{\rho,d}$ since each $\chi^{\lambda}$ is either an irreducible character or a sum of two irreducible characters $\chi^{\lambda,+}+\chi^{\lambda,-}$, and all the irreducible characters $\chi^{\lambda,\pm},\chi^{\mu}$ are distinct (cf. [Kl, Corollary 12.2.10]). Moreover, $\\{{\varphi}^{{\mu}}\mid\mu\in\mathscr{RP}_{p}^{\rho,d}\\}$ is a basis for $\operatorname{PCh}^{\rho,d}$. This is proved as for the $\chi^{\lambda}$. First, note that each ${\varphi}^{{\mu}}$ is either an indecomposable projective character or a sum of two indecomposable projective characters ${\varphi}^{{\mu,+}}+{\varphi}^{{\mu,-}}$, and all the indecomposable projective characters ${\varphi}^{{\mu,\pm}},{\varphi}^{{\nu}}$ arising in this way are distinct in view of [Kl, Proposition 12.2.12 and Lemma 12.2.16]. Then use linear independence of the indecomposable projective characters [CR, Theorem 18.26(iii)].
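For a classical illustration of how both cases arise (going back to Schur, and not needed in the sequel): take $\lambda=(n)$, the label of the basic spin supermodule. If $n$ is odd then $\mathrm{S}_{\mathbb{C}}((n))$ is of type $\mathtt{M}$ and $\chi^{(n)}$ is a single irreducible character, while if $n$ is even then $\mathrm{S}_{\mathbb{C}}((n))$ is of type $\mathtt{Q}$ and $\chi^{(n)}=\chi^{(n),+}+\chi^{(n),-}$ is a sum of two associate irreducible characters.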
Given ${\varphi}=\sum_{\lambda\in\mathscr{P}_{0}^{\rho,d}}a_{\lambda}\chi^{\lambda}\in\operatorname{Ch}^{\rho,d}$, we write the coefficient $a_{\lambda}$ as $[{\varphi}:\chi^{\lambda}]$. We say that $\chi^{\lambda}$ _occurs_ in ${\varphi}$ if $[{\varphi}:\chi^{\lambda}]$ is non-zero. Below we will use a superversion of Brauer reciprocity to compute decomposition numbers for $\mathcal{B}^{\rho,d}$ in terms of the multiplicities $[{\varphi}^{{\mu}}:\chi^{\lambda}]$: $[\mathrm{S}(\lambda):\mathrm{D}(\mu)]=\begin{cases}2[{\varphi}^{{\mu}}:\chi^{\lambda}]&\text{if $\mathrm{S}_{\mathbb{C}}(\lambda)$ is of type $\mathtt{Q}$ and $\mathrm{D}(\mu)$ is of type $\mathtt{M}$},\\\ \tfrac{1}{2}[{\varphi}^{{\mu}}:\chi^{\lambda}]&\text{if $\mathrm{S}_{\mathbb{C}}(\lambda)$ is of type $\mathtt{M}$ and $\mathrm{D}(\mu)$ is of type $\mathtt{Q}$},\\\ [{\varphi}^{{\mu}}:\chi^{\lambda}]&\text{otherwise}.\end{cases}$ (5.2) This follows from the classical Brauer reciprocity taking into account that when $\mathrm{S}_{\mathbb{C}}(\lambda)$ is of type $\mathtt{Q}$ we have $\chi^{\lambda}=\chi^{\lambda,+}+\chi^{\lambda,-}$, and when $\mathrm{D}(\mu)$ is of type $\mathtt{Q}$ we have ${\varphi}^{{\mu}}={\varphi}^{{\mu,+}}+{\varphi}^{{\mu,-}}$, and moreover, $\mathrm{D}(\mu)\cong\mathrm{D}(\mu,+)\oplus\mathrm{D}(\mu,-)$ for non-isomorphic irreducible modules $\mathrm{D}(\mu,+)$ and $\mathrm{D}(\mu,-)$ obtained from each other by tensoring with sign. ### 5.4. Projective characters from the $q$-deformed Fock space Leclerc and Thibon [LT] show how one can use canonical basis vectors to obtain another basis for the space $\operatorname{PCh}^{\rho,d}$; we briefly outline the background. Let $q$ be an indeterminate. The _level-$1$ Fock space_ of type $A^{(2)}_{2\ell}$ is a $\mathbb{Q}(q)$-vector space $\mathscr{F}$ with a _standard basis_ $\left\\{\left.|\lambda\rangle\ \right|\ \smash{\lambda\text{ a $p$-strict partition}}\right\\}.$ This space is naturally a module for the quantum group $U_{q}(A^{(2)}_{2\ell})$. We note that the conventions for residues (and for simple roots in type $A^{(2)}_{2\ell}$) used here are as in [KL], [Fa3] and differ from those in [LT]. The submodule of $\mathscr{F}$ generated by the vector $|\varnothing\rangle$ possesses a _canonical basis_ $\left\\{\left.G(\mu)\ \right|\ \smash{\mu\text{ a restricted $p$-strict partition}}\right\\}.$ Expanding the canonical basis vectors in terms of the standard basis, one obtains the _$q$-decomposition numbers_ $d_{\lambda\mu}(q)$, indexed by pairs of $p$-strict partitions $\lambda,\mu$ with $\mu$ restricted: $G(\mu)=\sum_{\lambda\text{ $p$-strict}}d_{\lambda\mu}(q)|\lambda\rangle.$ In fact [LT, Theorem 4.1] implies that $d_{\lambda\mu}(q)$ is zero unless $\lambda$ and $\mu$ have the same $p$-bar-core and the same size, so for $\mu\in\mathscr{RP}_{p}^{\rho,d}$ we actually have $G(\mu)=\sum_{\lambda\in\mathscr{P}_{p}^{\rho,d}}d_{\lambda\mu}(q)|\lambda\rangle.$ (5.3) By [LT, Theorem 4.1(i)] each $d_{\lambda\mu}(q)$ is a polynomial in $q$ with integer coefficients. So given a strict partition $\lambda$ and a restricted $p$-strict partition $\mu$, recalling (2.1), we can define the integers $D_{\lambda\mu}=2^{\lfloor\frac{1}{2}(h_{p}(\lambda)+1-a(\lambda))\rfloor}d_{\lambda\mu}(1),$ where $h_{p}(\lambda)$ denotes the number of positive parts of $\lambda$ that are divisible by $p$. Then the discussion in [LT, Section 6] shows the following. ###### Proposition 5.3. Suppose $\mu$ is a restricted $p$-strict partition of $n$.
Then the character $\hat{{\varphi}}^{{\mu}}:=\sum_{\lambda\text{ strict}}D_{\lambda\mu}\chi^{\lambda}$ (5.4) is a virtual projective character of $\hat{\mathfrak{S}}_{n}$. Moreover, $\\{\hat{{\varphi}}^{{\mu}}\mid\mu\in\mathscr{RP}_{p}^{\rho,d}\\}$ is a basis for $\operatorname{PCh}^{\rho,d}$. In fact, the character $\hat{{\varphi}}^{{\mu}}$ coincides with ${\varphi}^{{\mu}}$ quite often, and our main aim in this paper is to show that $\hat{{\varphi}}^{{\mu}}={\varphi}^{{\mu}}$ when $\mu\in\mathscr{RP}_{p}^{\rho,d}$ and $\mathcal{B}^{\rho,d}$ is an abelian defect RoCK block. ### 5.5. RoCK blocks for double covers and the Kleshchev–Livesey Morita equivalence Now, following [KL], we can define RoCK blocks: given a $p$-bar-core $\rho$ and $d\geqslant 0$, we say that $\mathcal{B}^{\rho,d}$ is a _RoCK block_ if $\rho$ is $d$-Rouquier. The term ‘RoCK’ is borrowed from the corresponding theory for (non-spin) representations of symmetric groups, and stands for ‘Rouquier or Chuang–Kessar’. The definition of spin RoCK blocks is a natural analogue of the non-spin situation, and we expect that RoCK blocks will play a similarly important role. This has already begun with the use of RoCK blocks in proving Broué’s conjecture for double covers [KL, BK4, ELV]. Our purpose in this paper is to emulate the work of Chuang and Tan in the non-spin case and find the decomposition numbers for RoCK blocks. Recall the material of Section 4, in particular, the wreath superproduct ${\sf W}_{d}={\sf A}_{\ell}\wr\mathfrak{S}_{d}$. One of the main results of [KL] is a Morita superequivalence relating a RoCK (super)block $\mathcal{B}^{\rho,d}$ with $d<p$ and ${\sf W}_{d}$. This easily implies the following theorem: ###### Theorem 5.4. Suppose $1\leqslant d<p$, and $\rho$ is a $d$-Rouquier $p$-bar-core. Then we have a Morita equivalence ${\sf W}_{d}\sim_{{{\mathrm{Mor}}}}\begin{cases}\mathcal{B}^{\rho,d}&\text{if $\mathcal{B}^{\rho,d}$ is of type $\mathtt{M}$,}\\\ \mathcal{B}^{\rho,d}_{\bar{0}}&\text{if $\mathcal{B}^{\rho,d}$ is of type $\mathtt{Q}$.}\end{cases}$ ###### Proof. By [KL, Proposition 5.4.10(i)], we have a Morita superequivalence $\mathcal{B}^{\rho,d}\sim_{{{\mathrm{sMor}}}}\begin{cases}{\sf W}_{d}&\text{if $\mathcal{B}^{\rho,d}$ is of type $\mathtt{M}$,}\\\ {\sf W}_{d}\otimes{\mathcal{C}}_{1}&\text{if $\mathcal{B}^{\rho,d}$ is of type $\mathtt{Q}$.}\end{cases}$ where ${\mathcal{C}}_{1}$ is the Clifford superalgebra of rank $1$. If $\mathcal{B}^{\rho,d}$ is of type $\mathtt{M}$ the result follows immediately since Morita superequivalence implies Morita equivalence, see [KL, §2.2c]. If $\mathcal{B}^{\rho,d}$ is of type $\mathtt{Q}$, then we obtain $\mathcal{B}^{\rho,d}\otimes{\mathcal{C}}_{1}\sim_{{{\mathrm{sMor}}}}{\sf W}_{d}\otimes{\mathcal{C}}_{1}\otimes{\mathcal{C}}_{1}\simeq{\sf W}_{d}\otimes{\mathcal{C}}_{2}$, and we apply [KL, Lemmas 2.2.19 and 2.2.20]. ∎ ### 5.6. The regularization theorem One of the early general results concerning decomposition numbers for symmetric groups is James’s regularization theorem [J1]. Later we will need the analogue for spin modules, which was proved by Brundan and the second author [BK3, Theorem 1.2]. They define (in a combinatorial way) a function $\lambda\mapsto\lambda^{\operatorname{reg}}$ from ${\mathscr{P}_{0}}(n)$ to ${\mathscr{RP}_{p}}(n)$ and prove the following statement. ###### Theorem 5.5. Suppose $\lambda$ is a strict partition. 
Then $\mathrm{D}(\lambda^{\operatorname{reg}})$ occurs as a composition factor of $\mathrm{S}(\lambda)$, and $\mathrm{D}(\nu)$ is a composition factor of $\mathrm{S}(\lambda)$ only if $\lambda^{\operatorname{reg}}\trianglerighteqslant\nu$. We will not need the exact definition of regularization, since we use an alternative description of regularization in RoCK blocks, as follows. ###### Lemma 5.6. Suppose $\rho$ is a $d$-Rouquier $p$-bar-core, and $\lambda\in\mathscr{P}_{0}^{\rho,d}$ with $p$-bar-quotient $(\lambda^{(0)},\dots,\lambda^{(\ell)})$. Then $\lambda^{\operatorname{reg}}$ is the partition in $\mathscr{RP}_{p}^{\rho,d}$ with $p$-bar-quotient $(\lambda^{(0)},\dots,\lambda^{(\ell-2)},\lambda^{(\ell-1)}+{\lambda^{(\ell)}}^{\prime},\varnothing).$ Lemma 5.6 is not very hard to prove directly from the combinatorial definition of $\lambda^{\operatorname{reg}}$, but we will give a proof using canonical basis coefficients in Section 6.1. ## 6\. Projective characters Having summarized all the background we need, we now work towards our main result. Throughout this section we fix an integer $d\geqslant 1$ and a $d$-Rouquier $p$-bar-core $\rho$. Our aim is to work with projective characters in $\mathcal{B}^{\rho,d}$; our main result in this section is to find the decomposition matrix for $\mathcal{B}^{\rho,d}$ up to multiplying by a non-negative unitriangular matrix. Note that the results of this section do not require $d<p$. ### 6.1. Projective characters $\hat{{\varphi}}^{{\mu}}$ in RoCK blocks Recall the virtual projective characters $\hat{{\varphi}}^{{\mu}}$ defined in (5.4). One of the main results of the first author’s paper [Fa3] is an explicit determination of the canonical basis vectors $G(\mu)$ for partitions in RoCK blocks. As a result of this, we can give the characters $\hat{{\varphi}}^{{\mu}}$ in $\mathcal{B}^{\rho,d}$ explicitly. First, we give the formula for the canonical basis coefficients in a weight space corresponding to a RoCK block. Recall the notation of Section 2.3. ###### Theorem 6.1 ([Fa3, Theorem 8.2]). Suppose $\rho$ is a $d$-Rouquier $p$-bar-core, $\lambda\in\mathscr{P}_{p}^{\rho,d}$ and $\mu\in\mathscr{RP}_{p}^{\rho,d}$. Then $d_{\lambda\mu}(q)=\sum K^{-1}_{\lambda^{(0)}\sigma^{(0)}}(-q^{2})\prod_{i=1}^{\ell}\operatorname{c}(\lambda^{(i)};\sigma^{(i)},\tau^{(i)})\operatorname{c}(\mu^{(i-1)};\sigma^{(i-1)},{\tau^{(i)}}^{\prime})q^{2\sum_{i\in I}i(|\lambda^{(i)}|-|\mu^{(i)}|)},$ where the sum is over all partitions $\sigma^{(0)},\dots,\sigma^{(\ell-1)},\tau^{(1)},\dots,\tau^{(\ell)}$, and we read $\sigma^{(\ell)}$ as $\varnothing$. As a consequence we can write down the characters $\hat{{\varphi}}^{{\mu}}$ in RoCK blocks; this follows from Theorem 6.1, (5.3) and the definition (5.4). ###### Corollary 6.2. Suppose $\rho$ is a $d$-Rouquier $p$-bar-core and $\mu\in\mathscr{RP}_{p}^{\rho,d}$. Then $\hat{{\varphi}}^{{\mu}}=\sum_{\lambda\in\mathscr{P}_{0}^{\rho,d}}2^{\lfloor\frac{1}{2}(h(\lambda^{(0)})+1-a(\lambda))\rfloor}\sum K^{-1}_{\lambda^{(0)}\sigma^{(0)}}(-1)\prod_{i=1}^{\ell}\operatorname{c}(\lambda^{(i)};\sigma^{(i)},\tau^{(i)})\operatorname{c}(\mu^{(i-1)};\sigma^{(i-1)},{\tau^{(i)}}^{\prime})\chi^{\lambda},$ where the second sum is over all partitions $\sigma^{(0)},\dots,\sigma^{(\ell-1)},\tau^{(1)},\dots,\tau^{(\ell)}$, and we read $\sigma^{(\ell)}$ as $\varnothing$. ###### Corollary 6.3. Suppose $\rho$ is a $d$-Rouquier $p$-bar-core and $\mu\in\mathscr{RP}_{p}^{\rho,d}$.
Then $\hat{{\varphi}}^{{\mu}}$ is a non-negative integral linear combination of irreducible characters $\chi^{\lambda}$ with $\lambda\in\mathscr{P}_{0}^{\rho,d}$. ###### Proof. Follows from Corollary 6.2 and Lemma 2.4. ∎ We now use Theorem 6.1 to give the deferred proof of Lemma 5.6. This relies on the following regularization theorem for canonical basis coefficients. ###### Theorem 6.4 ([Fa1, Theorem 3.2]). If $\lambda\in{\mathscr{P}}_{p}(n)$ and $\mu\in{\mathscr{RP}_{p}}(n)$ then $d_{\lambda\lambda^{\operatorname{reg}}}(q)\neq 0$, and $d_{\lambda\mu}(q)=0$ unless $\lambda^{\operatorname{reg}}\trianglerighteqslant\mu$. ###### Proof of Lemma 5.6. The Lemma asserts that $\lambda^{\operatorname{reg}}$ is the partition $\nu\in\mathscr{P}_{p}^{\rho,d}$ defined by $\nu^{(i)}=\begin{cases}\lambda^{(i)}&\text{if }0\leqslant i\leqslant\ell-2,\\\ \lambda^{(\ell-1)}+{\lambda^{(\ell)}}^{\prime}&\text{if }i=\ell-1,\\\ \varnothing&\text{if }i=\ell.\end{cases}$ Theorem 6.4 shows that $\lambda^{\operatorname{reg}}$ is the most dominant restricted $p$-strict partition $\mu$ for which $d_{\lambda\mu}(q)\neq 0$. So to show that $\lambda^{\operatorname{reg}}=\nu$ we must show that $d_{\lambda\nu}(q)\neq 0$, and that if $\mu\trianglerighteqslant\nu$ with $d_{\lambda\mu}(q)\neq 0$ then $\mu=\nu$. Showing that $d_{\lambda\nu}(q)\neq 0$ is straightforward: in order to obtain a non-zero summand in the formula in Theorem 6.1, we must take $\sigma^{(i)}=\lambda^{(i)}$ for $0\leqslant i\leqslant\ell-1$, $\tau^{(i)}=\varnothing$ for $1\leqslant i\leqslant\ell-1$, and $\tau^{(\ell)}=\lambda^{(\ell)}$, giving $d_{\lambda\nu}(q)=q^{2|\lambda^{(\ell)}|}$. Now take a $p$-strict partition $\mu$ such that $\mu\trianglerighteqslant\nu$ and $d_{\lambda\mu}(q)\neq 0$. From (5.3), $\mu$ must lie in $\mathscr{RP}_{p}^{\rho,d}$. Choose partitions $\sigma^{(i)},\tau^{(i)}$ for which the summand in Theorem 6.1 is non-zero. We assume for the rest of the proof that $p\geqslant 5$; a minor modification is needed when $p=3$, which we leave to the reader. In view of Lemma 3.2, the assumption that $\mu\trianglerighteqslant\nu$ means that $|\mu^{(0)}|+\dots+|\mu^{(r)}|\leqslant|\lambda^{(0)}|+\dots+|\lambda^{(r)}|$ for $0\leqslant r\leqslant\ell-2$. On the other hand, the non-vanishing of the polynomial $K^{-1}_{\lambda^{(0)}\sigma^{(0)}}(-q^{2})$ and of the Littlewood–Richardson coefficients $\operatorname{c}(\lambda^{(i)};\sigma^{(i)},\tau^{(i)})$ and $\operatorname{c}(\mu^{(i-1)};\sigma^{(i-1)},{\tau^{(i)}}^{\prime})$ implies that $|\mu^{(0)}|+\dots+|\mu^{(r)}|=|\lambda^{(0)}|+\dots+|\lambda^{(r)}|+|\tau^{(r+1)}|$ for $0\leqslant r\leqslant\ell-1$. So $|\tau^{(1)}|=\dots=|\tau^{(\ell-1)}|=0$ and $|\tau^{(\ell)}|=|\lambda^{(\ell)}|$. Again by the non-vanishing of the Littlewood–Richardson coefficients it then follows that $\tau^{(1)}=\dots=\tau^{(\ell-1)}=\varnothing$, while $\tau^{(\ell)}=\lambda^{(\ell)}$. This in turn gives $\sigma^{(i)}=\mu^{(i)}$ for $0\leqslant i\leqslant\ell-2$, and $\sigma^{(i)}=\lambda^{(i)}$ for $1\leqslant i\leqslant\ell-1$, so that * $\diamond$ $K^{-1}_{\lambda^{(0)}\mu^{(0)}}(t)\neq 0$, * $\diamond$ $\mu^{(i)}=\lambda^{(i)}$ for $1\leqslant i\leqslant\ell-2$, * $\diamond$ $\operatorname{c}(\mu^{(\ell-1)};\lambda^{(\ell-1)},{\lambda^{(\ell)}}^{\prime})\neq 0$. In particular, $|\mu^{(i)}|=|\nu^{(i)}|$ for all $i$, so (again using Lemma 3.2) the assumption $\mu\trianglerighteqslant\nu$ amounts to the statement that $\mu^{(i)}\trianglerighteqslant\nu^{(i)}$ for all $i$.
But now the only way that $K^{-1}_{\lambda^{(0)}\mu^{(0)}}(t)=K^{-1}_{\nu^{(0)}\mu^{(0)}}(t)$ can be non-zero is if $\mu^{(0)}=\nu^{(0)}=\lambda^{(0)}$. A standard result about Littlewood–Richardson coefficients is that the most dominant partition $\xi$ for which $\operatorname{c}(\xi;\lambda^{(\ell-1)},{\lambda^{(\ell)}}^{\prime})\neq 0$ is $\lambda^{(\ell-1)}+{\lambda^{(\ell)}}^{\prime}$, so we also obtain $\mu^{(\ell-1)}=\nu^{(\ell-1)}$, and therefore $\mu=\nu$. ∎ ### 6.2. Gelfand–Graev induction Our aim is to explore the relationship between the characters ${\varphi}^{{\mu}}$ and $\hat{{\varphi}}^{{\mu}}$ by considering a third set of projective characters obtained by inducing the projective character $\chi^{\rho}$ along special words which we call thick Gelfand–Graev words. Recall the induction operators $F_{i}$ from Section 5.2. Given $i\in J$ and $k\geqslant 1$, we define the corresponding _thick Gelfand–Graev word_ (cf. [KL, (4.2.1)]) $\displaystyle\text{\boldmath$g$}^{i,k}$ $\displaystyle:=\ell^{k}(\ell-1)^{2k}\dots(i+1)^{2k}i^{k}\dots 1^{k}0^{2k}1^{k}\dots i^{k}$ (6.1) and the corresponding induction operator $\displaystyle F(i,k)$ $\displaystyle:=F_{i}^{k}\dots F_{1}^{k}F_{0}^{2k}F_{1}^{k}\dots F_{i}^{k}F_{i+1}^{2k}\dots F_{\ell-1}^{2k}F_{\ell}^{k}.$ (6.2) We want to know what these operators do to characters in a RoCK block. ###### Remark 6.5. We could define divided power induction operators $F_{i}^{(r)}:=\frac{F_{i}^{r}}{r!}$ and use them in place of the usual powers in the definition of $F(i,k)$. This would produce slightly simpler formulas in Propositions 6.6 and 6.7 below but would not make things any easier, since a priori, $F_{i}^{(r)}$ is defined on the Grothendieck groups with scalars extended from $\mathbb{Z}$ to ${\mathbb{Q}}$ (although one can check, using [Kl, Lemma 22.3.15] for the case of large $p$, that in fact $F_{i}^{(r)}$ is always defined on the Grothendieck groups without extending scalars; we will not pursue this). ###### Proposition 6.6. Take $i\in J$, $\lambda\in\mathscr{P}_{0}^{\rho,c}$ and $\alpha\in\mathscr{P}_{0}^{\rho,c+k}$, where $k\geqslant 1$ and $c+k\leqslant d$. Then $\chi^{\alpha}$ occurs in $F(i,k)\chi^{\lambda}$ if and only if the $p$-bar-quotient $(\alpha^{(0)},\dots,\alpha^{(\ell)})$ is obtained from $(\lambda^{(0)},\dots,\lambda^{(\ell)})$ by adding $k$ nodes in components $i$ and $i+1$, with no two nodes added in the same column of component $i$ or in the same row of component $i+1$. If $\alpha$ satisfies this condition, define $f(\lambda,\alpha)=\bigl{|}\left\\{\left.m\geqslant 1\ \right|\ \smash{\alpha^{(0)}\setminus\lambda^{(0)}\text{ contains a node in column $m$ but not in column $m+1$}}\right\\}\bigr{|}.$ Then $[F(i,k)\chi^{\lambda}:\chi^{\alpha}]=2^{f(\lambda,\alpha)+\frac{1}{2}(k(p-2)+h(\lambda^{(0)})-h(\alpha^{(0)})+a(\lambda)-a(\alpha))}(2k)!^{\ell-i}k!^{2i+1}.$ ###### Proof. First we assume $i>0$. For $j\in I$, we define a _$j$-hook_ to be a set of nodes of the form $\\{(r+\ell-j,c+j+1),(r+\ell-j-1,c+j+2),\dots,(r,c+\ell+1),(r,c+\ell+2),\dots,(r,c+j+p)\\}$ for $r\geqslant 1$ and $c\geqslant 0$ with $p\mid c$. In other words, a $j$-hook is a set of $p$ nodes with residues in the configuration below.
[Figure: the residues in a $j$-hook. The top row reads $\ell,\ell{-}1,\dots,1,0,0,1,\dots,j{-}2,j{-}1$ from left to right, and the diagonal below its leftmost node reads $\ell{-}1,\dots,j{+}1,j$ going down and to the left.] In [KL, Section 4.1a], Kleshchev and Livesey observe that if $\lambda\in\mathscr{P}_{0}^{\rho,c}$ with $c<d$, then adding a node to the $j$th component of the $p$-bar-quotient of $\lambda$ corresponds to adding a $j$-hook to $\lambda$. By Proposition 3.3, if $\lambda\in\mathscr{P}_{0}^{\rho,c}$ and $\alpha\in\mathscr{P}_{0}^{\rho,c+k}$ with $\alpha\supseteq\lambda$, then $\alpha$ can be obtained from $\lambda$ by adding some $p$-bars. Thus $\alpha^{(j)}\supseteq\lambda^{(j)}$ for all $j\in I$. In particular, if $\chi^{\alpha}$ occurs in $F(i,k)\chi^{\lambda}$, then $\alpha$ is obtained from $\lambda$ by adding $j$-hooks (for various values of $j$). But by the branching rule $\alpha$ is also obtained from $\lambda$ by adding nodes one at a time, with a specific sequence of residues determined by the definition of $F(i,k)$. In particular, the last $k$ nodes added must all have residue $i$, so there must be a strict partition $\beta$ with $\lambda\subset\beta\subset\alpha$ such that $\alpha\setminus\beta$ comprises $k$ nodes of residue $i$. In any of the individual $j$-hooks comprising $\alpha\setminus\lambda$, the last node added must either be the leftmost node of residue $j$, or the rightmost node of residue $j-1$. So the last node added can have residue $i$ only if $j=i$ or $i+1$. Moreover, the assumption that $i>0$ means that the last two nodes added in a given $j$-hook cannot both have residue $i$. So the only way the last $k$ nodes added in reaching $\alpha$ from $\lambda$ can all have residue $i$ is if all the added hooks are $i$-hooks or $(i+1)$-hooks, and each of these hooks contains exactly one node of $\alpha\setminus\beta$. In particular, the $p$-bar-quotient of $\alpha$ is obtained from the $p$-bar-quotient of $\lambda$ by adding nodes in components $i$ and $i+1$. If two nodes are added to the same column of $\lambda^{(i)}$, the corresponding $i$-hooks are diagonally adjacent, as in the following diagram. [Figure: two diagonally adjacent $i$-hooks; in each, the top row has residues $\ell,\dots,0,0,\dots,i{-}1$ and the diagonal ends in an $i$-node, with the bottom $i$-node of the right-hand hook adjacent to the rightmost $(i{-}1)$-node of the left-hand hook.] But now the $i$-hook on the right cannot contain a node of $\alpha\setminus\beta$, because the $i$-node at the left of this hook must be added before the $(i-1)$-node at the right of the hook on the left. This is a contradiction. Similarly, if two nodes are added to the same row of $\lambda^{(i+1)}$, then the corresponding hooks are horizontally adjacent, and we reach a contradiction in the same way. [Figure: two horizontally adjacent $(i{+}1)$-hooks; in each, the top row has residues $\ell,\dots,0,0,\dots,i$ and the diagonal ends in an $(i{+}1)$-node.] This is enough to prove the ‘only if’ part of the Proposition. For the ‘if’ part, suppose the $p$-bar-quotient of $\alpha$ is obtained from the $p$-bar-quotient of $\lambda$ by adding nodes in different columns of $\lambda^{(i)}$ and in different rows of $\lambda^{(i+1)}$.
To show that $\chi^{\alpha}$ occurs in $F(i,k)\chi^{\lambda}$, we show that we can get from $\lambda$ to $\alpha$ by adding nodes one at a time with the appropriate sequence of residues. We begin by adding all the $\ell$-nodes in $\alpha\setminus\lambda$ (in an arbitrary order), then all the $(\ell-1)$-nodes, and so on, down to the $(i+1)$-nodes. Then we add an $i$-node in each hook, then an $(i-1)$-node in each hook, and so on, working along the arm of each hook, until we add a node of residue $1$ to each hook. Then we add all nodes of residue $0$ in $\alpha\setminus\lambda$, and then all remaining nodes of residues $1,\dots,i$ in turn. The assumptions on $\alpha$ mean that we obtain a strict partition at each stage, so $\chi^{\alpha}$ does occur in $F(i,k)\chi^{\lambda}$. The construction in the preceding paragraph enables us to compute the coefficient of $\chi^{\alpha}$ in $F(i,k)\chi^{\lambda}$. To do this, we need to count possible orders in which the nodes of $\alpha\setminus\lambda$ can be added to $\lambda$ with the required sequence of residues, so that the partition obtained at each stage is strict. For each term $F_{j}^{ak}$ appearing in $F(i,k)$, we need to add $ak$ nodes of residue $j$, and it is clear that the choice made in the previous paragraph is the only possibility: in order to be able to add the nodes of residue $0$ in a given hook when applying $F_{0}^{2k}$, we must already have added the nodes of residues $i,i-1,\dots,1$ to the left of the nodes of residue $0$ in that hook. So our only choice is in which order to add the $j$-nodes for each factor $F_{j}^{ak}$. In each case we have a free choice, except for the factor $F_{0}^{2k}$: here in each hook the leftmost $0$-node must be added before the rightmost one. So the number of choices of order is $k!\times\prod_{j=i+1}^{\ell-1}(2k)!\times\prod_{j=1}^{i}k!^{2}\times\frac{(2k)!}{2^{k}}=\frac{k!^{2i+1}(2k)!^{\ell-i}}{2^{k}}.$ It remains to consider the coefficients $a_{\lambda\mu}$ appearing in the branching rule. Because $i>0$, the assumptions on $\alpha$ give $\alpha^{(0)}=\lambda^{(0)}$, which in turn implies that $h(\lambda)=h(\alpha)$; therefore, as we go from $\lambda$ to $\alpha$ by adding nodes, the partitions obtained alternate between even and odd. So the number of times we pass from an odd partition to an even partition is $\frac{1}{2}(kp+a(\lambda)-a(\alpha))$. This yields $[F(i,k)\chi^{\lambda}:\chi^{\alpha}]=2^{\frac{1}{2}(k(p-2)+a(\lambda)-a(\alpha))}(2k)!^{\ell-i}k!^{2i+1},$ which agrees with the Proposition because $\lambda^{(0)}=\alpha^{(0)}$. Now we consider the case where $i=0$. In order for $\chi^{\alpha}$ to appear in $F(0,k)\chi^{\lambda}$, it must be the case that $\alpha$ is obtained from $\lambda$ by adding $j$-hooks, and now there must exist a strict partition $\beta$ with $\lambda\subset\beta\subset\alpha$ such that $\alpha\setminus\beta$ comprises $2k$ nodes of residue $0$. Arguing as in the previous case, this implies that $\alpha^{(j)}=\lambda^{(j)}$ for $j\geqslant 2$, while $\alpha^{(1)}$ is obtained from $\lambda^{(1)}$ by adding nodes in distinct rows, and $\alpha^{(0)}\supseteq\lambda^{(0)}$. Now if two nodes are added in the same column of $\lambda^{(0)}$, then the corresponding $0$-hooks are vertically stacked, as in the following diagram.
[Diagram: two vertically stacked $0$-hooks, labelled with node residues $\ell,\dots,1,0$.]

But now the upper $0$-bar cannot contain any nodes of $\alpha\setminus\beta$, giving a contradiction. So again we find that the nodes added to $\lambda^{(0)}$ to obtain $\alpha^{(0)}$ must be added in distinct columns.

Now suppose $\alpha$ satisfies the conditions, and consider how we can obtain $\alpha$ from $\lambda$ by applying $F(0,k):=F_{0}^{2k}F_{1}^{2k}\dots F_{\ell-1}^{2k}F_{\ell}^{k}$. For each of the residues $j=\ell,\ell-1,\dots,1$, we can add the $j$-nodes of $\alpha\setminus\lambda$. In each added $1$-hook, the two $0$-nodes must be added in order from left to right, but otherwise there are no restrictions on the $1$-hooks. The $0$-nodes occurring in the added $0$-hooks can be added in any order, except that when two added $0$-hooks correspond to nodes in consecutive columns of $\alpha^{(0)}$, then the rightmost $0$-node of the left hook is adjacent to the leftmost $0$-node of the right hook (as in one of the following diagrams) so that these two nodes must be added in a specific order.

[Diagrams: the two configurations of adjacent $0$-hooks corresponding to nodes in consecutive columns of $\alpha^{(0)}$.]

As a result, we obtain a coefficient $k!(2k)!^{\ell}/2^{k-f(\lambda,\alpha)}$. But we also need to take into account the coefficients coming from the branching rule: the partitions obtained as we add nodes alternate between even and odd, except when we add a node in column $1$. So we obtain a further factor $2^{\frac{1}{2}(kp+a(\lambda)-a(\alpha)+h(\lambda)-h(\alpha))}$. Putting these coefficients together, we obtain $[F(0,k)\chi^{\lambda}:\chi^{\alpha}]=2^{f(\lambda,\alpha)+\frac{1}{2}(k(p-2)+h(\lambda)-h(\alpha)+a(\lambda)-a(\alpha))}(2k)!^{\ell}k!,$ in agreement with the Proposition. ∎

### 6.3. Projective characters obtained by induction

Our aim is to explore the relationship between the characters ${\varphi}^{{\mu}}$ and $\hat{{\varphi}}^{{\mu}}$, which we do by considering a third set of projective characters. Recall from Section 2.1 the set $\mathscr{P}_{p^{\prime}}^{\rho,d}\subseteq\mathscr{P}_{0}^{\rho,d}$ of the $p^{\prime}$-partitions in $\mathscr{P}_{p}^{\rho,d}$. By Lemma 3.1(ii), a partition $\lambda\in\mathscr{P}_{p}^{\rho,d}$ is $p^{\prime}$ if and only if $\lambda^{(0)}=\varnothing$. Recall (6.2). Given $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$, we will define a projective character $\tilde{\varphi}^{{\lambda}}$ by inducing the projective character $\chi^{\rho}$: $\tilde{\varphi}^{{\lambda}}=\prod_{i=1}^{\ell}\prod_{r=1}^{\lambda^{(i)}_{1}}F(i-1,{\lambda^{(i)}}^{\prime}_{r})\,\chi^{\rho}\in\operatorname{PCh}^{\rho,d},$ (6.3) where the factors $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$ can be taken in any order. (It is not obvious at this stage that $\tilde{\varphi}^{{\lambda}}$ is independent of the order of the factors, but we will see in Corollary 6.9(ii) that this is the case. For now, we define $\tilde{\varphi}^{{\lambda}}$ by fixing an arbitrary order for each $\lambda$.)
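To unpack the definition (6.3), here is a small illustrative example (ours, not from the original text): take $p=5$, so that $\ell=2$, and suppose $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,4}$ has $p$-bar-quotient $(\varnothing,(2),(1,1))$. Then ${\lambda^{(1)}}^{\prime}=(1,1)$ and ${\lambda^{(2)}}^{\prime}=(2)$, so (6.3) reads $\tilde{\varphi}^{{\lambda}}=F(0,1)\,F(0,1)\,F(1,2)\,\chi^{\rho},$ with the three factors taken in some fixed (but, by Corollary 6.9(ii), immaterial) order.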
For any strict partition $\pi$ and any composition $\gamma$ let $\bar{c}(\pi;\gamma)$ be the number of ways $\pi$ can be obtained from $\varnothing$ by adding at the $i$th step $\gamma_{i}$ nodes, all in different columns, such that at each step a strict partition is obtained. Now given $\alpha\in\mathscr{P}_{0}^{\rho,d}$ and $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$, define $\displaystyle\widetilde{D}_{\lambda}$ $\displaystyle=2^{\frac{1}{2}(d(p-2)+a(\lambda)-a(\rho))}\prod_{i=1}^{\ell}\prod_{r\geqslant 1}(2{\lambda^{(i)}}^{\prime}_{r})!^{\ell-i+1}{\lambda^{(i)}}^{\prime}_{r}!^{2i-1},$ $\displaystyle\widetilde{D}_{\lambda\alpha}$ $\displaystyle=\widetilde{D}_{\lambda}\sum_{\begin{subarray}{c}\beta^{(1)},\dots,\beta^{(\ell)}\in\mathscr{C}\\\ \gamma^{(1)},\dots,\gamma^{(\ell)}\in\mathscr{C}\\\ \beta^{(i)}+\gamma^{(i)}={\lambda^{(i)}}^{\prime}\end{subarray}}\bar{c}(\alpha^{(0)};\gamma^{(1)})\prod_{i=1}^{\ell}\left[(\mathcal{M}^{\beta^{(i)}}\otimes\mathtt{sgn})\mathbin{\circ}\mathcal{M}^{\gamma^{(i+1)}}:\mathcal{S}^{\alpha^{(i)}}\right],$ where we read $\gamma^{(\ell+1)}$ as $\varnothing$. Then we can deduce the following result from Proposition 6.6.

###### Proposition 6.7.

Suppose $\alpha\in\mathscr{P}_{0}^{\rho,d}$ and $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$. Then $\chi^{\alpha}$ occurs in $\tilde{\varphi}^{{\lambda}}$ if and only if $\widetilde{D}_{\lambda\alpha}\neq 0$. Furthermore, if $\alpha$ is a $p^{\prime}$-partition, then $[\tilde{\varphi}^{{\lambda}}:\chi^{\alpha}]=\widetilde{D}_{\lambda\alpha}$.

###### Proof.

We construct $\tilde{\varphi}^{{\lambda}}$ by starting from $\chi^{\rho}$ and applying each of the operators $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$, for $1\leqslant i\leqslant\ell$ and $1\leqslant r\leqslant\lambda^{(i)}_{1}$. We start from the $p$-bar-quotient of $\rho$, i.e. $(\varnothing,\dots,\varnothing)$, and when we apply $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$, we add ${\lambda^{(i)}}^{\prime}_{r}$ nodes in components $i-1$ and $i$ in accordance with Proposition 6.6, and we consider the possible choices of how to add these nodes. Let $\beta^{(i)}_{r}$ be the number of nodes we add in component $i$, and $\gamma^{(i)}_{r}$ the number of nodes we add in component $i-1$. This defines compositions $\beta^{(i)},\gamma^{(i)}$ for $1\leqslant i\leqslant\ell$ with $\beta^{(i)}+\gamma^{(i)}={\lambda^{(i)}}^{\prime}$, and we need to consider all possible such choices of $\beta^{(i)},\gamma^{(i)}$.

Take a particular choice of $\beta^{(i)},\gamma^{(i)}$, and consider the coefficient of $\chi^{\alpha}$ obtained. Recall from Proposition 6.6 that when we apply $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$, the nodes added in component $i-1$ must be in distinct columns, and the nodes added in component $i$ must be in distinct rows. So (by the Pieri rule) the number of ways of obtaining the $p$-bar-quotient $(\alpha^{(0)},\alpha^{(1)},\dots,\alpha^{(\ell)})$ is $\displaystyle\bar{c}(\alpha^{(0)};\gamma^{(1)})\prod_{i=1}^{\ell}\operatorname{c}(\alpha^{(i)};(\gamma^{(i+1)}_{1}),(\gamma^{(i+1)}_{2}),\dots,(1^{\beta^{(i)}_{1}}),(1^{\beta^{(i)}_{2}}),\dots)$ $\displaystyle=\bar{c}(\alpha^{(0)};\gamma^{(1)})\prod_{i=1}^{\ell}\left[(\mathcal{M}^{\beta^{(i)}}\otimes\mathtt{sgn})\mathbin{\circ}\mathcal{M}^{\gamma^{(i+1)}}:\mathcal{S}^{\alpha^{(i)}}\right]$ by Lemma 2.1; here we read $\gamma^{(\ell+1)}=\varnothing$.
We sum over all possible choices of $\beta^{(i)},\gamma^{(i)}$ to get $\widetilde{D}_{\lambda\alpha}/\widetilde{D}_{\lambda}$; so the coefficient of $\chi^{\alpha}$ is non-zero if and only if $\widetilde{D}_{\lambda\alpha}\neq 0$. In the case where $\alpha$ is a $p^{\prime}$-partition, the product of the coefficients arising from Proposition 6.6 is $\widetilde{D}_{\lambda}$, so the coefficient of $\chi^{\alpha}$ in $\tilde{\varphi}^{{\lambda}}$ is $\widetilde{D}_{\lambda\alpha}$. ∎

Our next task is to show that the characters $\tilde{\varphi}^{{\lambda}}$ are linearly independent. First we use Proposition 6.7 to give more information about the structure of the characters $\tilde{\varphi}^{{\lambda}}$. Recall the partial order $\succcurlyeq$ on multipartitions from Section 3.

###### Proposition 6.8.

Suppose $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$. Then the character $\chi^{\lambda}$ occurs in $\tilde{\varphi}^{{\lambda}}$, while any character $\chi^{\alpha}$ occurring in $\tilde{\varphi}^{{\lambda}}$ satisfies $(\lambda^{(0)},\dots,\lambda^{(\ell)})\preccurlyeq(\alpha^{(0)},\dots,\alpha^{(\ell)})\preccurlyeq({\lambda^{(1)}}^{\prime},\dots,{\lambda^{(\ell)}}^{\prime},\varnothing).$

###### Proof.

Certainly $\chi^{\lambda}$ occurs in $\tilde{\varphi}^{{\lambda}}$: in the sum in Proposition 6.7 we can take $\beta^{(i)}={\lambda^{(i)}}^{\prime}$ and $\gamma^{(i)}=\varnothing$ for all $i$; the corresponding summand is then $\displaystyle\prod_{i=1}^{\ell}\left[\mathcal{M}^{{\lambda^{(i)}}^{\prime}}\otimes\mathtt{sgn}:\mathcal{S}^{\lambda^{(i)}}\right]$ $\displaystyle=\prod_{i=1}^{\ell}\left[\mathcal{M}^{{\lambda^{(i)}}^{\prime}}\otimes\mathtt{sgn}:\mathcal{S}^{{\lambda^{(i)}}^{\prime}}\otimes\mathtt{sgn}\right]$ $\displaystyle=\prod_{i=1}^{\ell}\left[\mathcal{M}^{{\lambda^{(i)}}^{\prime}}:\mathcal{S}^{{\lambda^{(i)}}^{\prime}}\right]$ which is well known to be non-zero (indeed, $\mathcal{S}^{{\lambda^{(i)}}^{\prime}}$ is _defined_ to be a submodule of $\mathcal{M}^{{\lambda^{(i)}}^{\prime}}$).

Now suppose $\chi^{\alpha}$ occurs in $\tilde{\varphi}^{{\lambda}}$, and choose $\beta^{(1)},\dots,\beta^{(\ell)},\gamma^{(1)},\dots,\gamma^{(\ell)}$ such that the corresponding summand in $\widetilde{D}_{\lambda\alpha}$ is non-zero. Then in particular $|\alpha^{(i)}|=|\beta^{(i)}|+|\gamma^{(i+1)}|$ for $0\leqslant i\leqslant\ell$ (where we read $\beta^{(0)}=\gamma^{(\ell+1)}=\varnothing$). To show that $(\alpha^{(0)},\dots,\alpha^{(\ell)})\succcurlyeq(\lambda^{(0)},\dots,\lambda^{(\ell)})$, take $0\leqslant k\leqslant\ell$ and $c\geqslant 1$. Then $\displaystyle\left(\sum_{i=0}^{k-1}|\alpha^{(i)}|+\sum_{i=1}^{c}{\alpha^{(k)}}^{\prime}_{i}\right)-\left(\sum_{i=0}^{k-1}|\lambda^{(i)}|+\sum_{i=1}^{c}{\lambda^{(k)}}^{\prime}_{i}\right)$ $\displaystyle=|\gamma^{(k)}|+\sum_{i=1}^{c}{\alpha^{(k)}}^{\prime}_{i}-\sum_{i=1}^{c}{\lambda^{(k)}}^{\prime}_{i}$ $\displaystyle\geqslant|\gamma^{(k)}|+\sum_{i=1}^{c}(\beta^{(k)}\sqcup{\gamma^{(k+1)}}^{\prime})_{i}-\sum_{i=1}^{c}\beta^{(k)}_{i}-\sum_{i=1}^{c}\gamma^{(k)}_{i}$ $\displaystyle\geqslant\sum_{i=1}^{c}(\beta^{(k)}\sqcup{\gamma^{(k+1)}}^{\prime})_{i}-\sum_{i=1}^{c}\beta^{(k)}_{i}$ $\displaystyle\geqslant 0,$ as required. To show that $(\alpha^{(0)},\dots,\alpha^{(\ell)})\preccurlyeq({\lambda^{(1)}}^{\prime},\dots,{\lambda^{(\ell)}}^{\prime},\varnothing)$, take $0\leqslant k\leqslant\ell$ and $c\geqslant 1$.
Then $\displaystyle\left(\sum_{i=0}^{k-1}|\lambda^{(i+1)}|+\sum_{i=1}^{c}\lambda^{(k+1)}_{i}\right)-\left(\sum_{i=0}^{k-1}|\alpha^{(i)}|+\sum_{i=1}^{c}{\alpha^{(k)}}^{\prime}_{i}\right)$ $\displaystyle=|\beta^{(k)}|+\sum_{i=1}^{c}\lambda^{(k+1)}_{i}-\sum_{i=1}^{c}{\alpha^{(k)}}^{\prime}_{i}$ $\displaystyle\geqslant|\beta^{(k)}|+\sum_{i=1}^{c}({\beta^{(k+1)}}^{\prime}\sqcup{\gamma^{(k+1)}}^{\prime})_{i}-\sum_{i=1}^{c}(\beta^{(k)}+{\gamma^{(k+1)}}^{\prime})_{i}$ $\displaystyle\geqslant\sum_{i=1}^{c}({\beta^{(k+1)}}^{\prime}\sqcup{\gamma^{(k+1)}}^{\prime})_{i}-\sum_{i=1}^{c}{\gamma^{(k+1)}}^{\prime}_{i}$ $\displaystyle\geqslant 0,$ as required. ∎

As a consequence, we can show that the characters $\tilde{\varphi}^{{\lambda}}$ span the space of virtual projective characters, and derive some information about the form of the indecomposable projective characters.

###### Corollary 6.9.

(i) The set $\left\\{\left.\tilde{\varphi}^{{\lambda}}\ \right|\ \smash{\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}}\right\\}$ is a basis for the space of virtual projective characters in $\mathcal{B}^{\rho,d}$.

(ii) For each $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$, the character $\tilde{\varphi}^{{\lambda}}$ is independent of the order of the factors $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$.

(iii) There is a bijection $\lambda\mapsto\lambda_{\circ}$ from $\mathscr{P}_{p^{\prime}}^{\rho,d}$ to $\mathscr{RP}_{p}^{\rho,d}$ such that $\chi^{\lambda}$ occurs in ${\varphi}^{{\lambda_{\circ}}}$, and any character $\chi^{\alpha}$ occurring in ${\varphi}^{{\lambda_{\circ}}}$ satisfies $\alpha\trianglelefteqslant\lambda$.

###### Proof.

(i) Since $|\mathscr{P}_{p^{\prime}}^{\rho,d}|=|\mathscr{RP}_{p}^{\rho,d}|$ by (3.1), it suffices to show that the $\tilde{\varphi}^{{\lambda}}$ are linearly independent. But this follows from Proposition 6.8, which shows that the matrix giving the multiplicities $[\tilde{\varphi}^{{\lambda}}:\chi^{\alpha}]$ for $\alpha,\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$ is triangular with non-zero diagonal.

(ii) Let $\tilde{\varphi}^{{\lambda}}$ be defined using a particular choice of order of the factors $F(i-1,{\lambda^{(i)}}^{\prime}_{r})$, and let $\tilde{\varphi}^{{\lambda^{\ast}}}$ be defined in the same way but using a different order. By Proposition 6.7, $\tilde{\varphi}^{{\lambda}}-\tilde{\varphi}^{{\lambda^{\ast}}}$ is a linear combination of the characters $\chi^{\alpha}$ with $\alpha$ _not_ being $p^{\prime}$. By (i) we can write $\tilde{\varphi}^{{\lambda}}-\tilde{\varphi}^{{\lambda^{\ast}}}$ as a linear combination of the characters $\tilde{\varphi}^{{\xi}}$ with $\xi\in\mathscr{P}_{p^{\prime}}^{\rho,d}$. If this linear combination is non-zero, then take $\xi$ maximal in the dominance order such that $\tilde{\varphi}^{{\xi}}$ appears with non-zero coefficient. Then by Proposition 6.8 the character $\chi^{\xi}$ occurs in $\tilde{\varphi}^{{\lambda}}-\tilde{\varphi}^{{\lambda^{\ast}}}$, a contradiction.

(iii) Since $\tilde{\varphi}^{{\lambda}}$ is a character (not just a virtual character), it can be written as a linear combination, with non-negative coefficients, of the indecomposable projective characters. Since $\chi^{\lambda}$ occurs in $\tilde{\varphi}^{{\lambda}}$, it must occur in some indecomposable constituent ${\varphi}^{{\lambda_{\circ}}}$ of $\tilde{\varphi}^{{\lambda}}$. Then if $\chi^{\alpha}$ occurs in ${\varphi}^{{\lambda_{\circ}}}$ it must occur in $\tilde{\varphi}^{{\lambda}}$, giving $\alpha\trianglelefteqslant\lambda$.
This defines a map $\mathscr{P}_{p^{\prime}}^{\rho,d}\to\mathscr{RP}_{p}^{\rho,d},\ \lambda\mapsto\lambda_{\circ}$ with the required properties. This map is obviously injective, and hence bijective since $|\mathscr{P}_{p^{\prime}}^{\rho,d}|=|\mathscr{RP}_{p}^{\rho,d}|$ by (3.1). ∎

### 6.4. The bijection $\lambda\mapsto\lambda_{\circ}$

In Corollary 6.9(iii), we have defined the bijection $\mathscr{P}_{p^{\prime}}^{\rho,d}\to\mathscr{RP}_{p}^{\rho,d},\qquad\lambda\mapsto\lambda_{\circ}$ such that $\chi^{\lambda}$ occurs in ${\varphi}^{{\lambda_{\circ}}}$, and any character $\chi^{\alpha}$ occurring in ${\varphi}^{{\lambda_{\circ}}}$ satisfies $\alpha\trianglelefteqslant\lambda$. The goal of this subsection is to prove Proposition 6.12, which describes the bijection explicitly. To prove this proposition, we consider weights of modules, as outlined in Section 5.2. We fix a weight $\text{\boldmath$i$}^{\rho}$ of $\mathrm{D}(\rho)$. Recalling (6.1), for any $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$ and $j\in J$, define the word $\text{\boldmath$g$}^{j,\lambda}$ to be the concatenation $\text{\boldmath$g$}^{j,\lambda}:=\text{\boldmath$g$}^{j,{\lambda^{(j+1)}}^{\prime}_{1}}\,\text{\boldmath$g$}^{j,{\lambda^{(j+1)}}^{\prime}_{2}}\,\text{\boldmath$g$}^{j,{\lambda^{(j+1)}}^{\prime}_{3}}\,\dots.$ Now define $\text{\boldmath$g$}^{\lambda}$ to be the concatenation $\text{\boldmath$g$}^{\lambda}:=\text{\boldmath$i$}^{\rho}\,\text{\boldmath$g$}^{\ell-1,\lambda}\,\text{\boldmath$g$}^{\ell-2,\lambda}\,\dots\,\text{\boldmath$g$}^{0,\lambda}.$

###### Lemma 6.10.

Let $\mu\in\mathscr{P}_{0}^{\rho,d}$. Then $\text{\boldmath$g$}^{\lambda}$ is a weight of $\mathrm{S}(\mu)$ if and only if $\chi^{\mu}$ occurs in $\tilde{\varphi}^{{\lambda}}$.

###### Proof.

For a word $\text{\boldmath$i$}=i_{1}\dots i_{n}\in I^{n}$, we denote $E_{\text{\boldmath$i$}}:=E_{i_{1}}\dots E_{i_{n}}$ and $F_{\text{\boldmath$i$}}:=F_{i_{n}}\dots F_{i_{1}}$. Then by definition, $\text{\boldmath$g$}^{\lambda}$ is a weight of $\mathrm{S}(\mu)$ if and only if $E_{\text{\boldmath$g$}^{\lambda}}\mathrm{S}(\mu)\neq 0$ if and only if $E_{\text{\boldmath$g$}^{\lambda}}\chi^{\mu}\neq 0$. But $E_{\text{\boldmath$g$}^{\lambda}}=E_{\text{\boldmath$i$}^{\rho}}E_{\text{\boldmath$g$}^{\ell-1,\lambda}}\dots E_{\text{\boldmath$g$}^{0,\lambda}}$, so $E_{\text{\boldmath$g$}^{\lambda}}\chi^{\mu}\neq 0$ if and only if $E_{\text{\boldmath$g$}^{\ell-1,\lambda}}\dots E_{\text{\boldmath$g$}^{0,\lambda}}\chi^{\mu}=c\chi^{\rho}$ for some non-zero scalar $c$. By Frobenius reciprocity, this is equivalent to the fact that $\chi^{\mu}$ occurs in $F_{\text{\boldmath$g$}^{0,\lambda}}\dots F_{\text{\boldmath$g$}^{\ell-1,\lambda}}\chi^{\rho}$. Recalling the definition (6.3) of $\tilde{\varphi}^{{\lambda}}$ and taking into account Corollary 6.9(ii), we deduce that $F_{\text{\boldmath$g$}^{0,\lambda}}\dots F_{\text{\boldmath$g$}^{\ell-1,\lambda}}\chi^{\rho}=\tilde{\varphi}^{{\lambda}}$, completing the proof of the lemma. ∎

Given $\mu\in\mathscr{RP}_{p}^{\rho,d}$, define $\tilde{\mu}\in\mathscr{RP}_{p}^{\rho,d-|\mu^{(0)}|}$ to be the partition with $p$-bar-core $\rho$ and $p$-bar-quotient $(\varnothing,\mu^{(1)},\dots,\mu^{(\ell-1)},\varnothing);$ in other words, $\tilde{\mu}$ is obtained by deleting from $\mu$ all the parts divisible by $p$, cf. Lemma 3.1(iii).
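For illustration (our example, assuming the convention of Lemma 3.1 that $\mu^{(0)}$ records the parts of $\mu$ divisible by $p$, each divided by $p$): take $p=5$ and $\mu=(10,7,5,3,1)$. Then $\mu^{(0)}=(2,1)$, and deleting the parts $10$ and $5$ gives $\tilde{\mu}=(7,3,1),$ which lies in $\mathscr{RP}_{p}^{\rho,d-3}$ for the appropriate $\rho$ and $d$, since $|\mu^{(0)}|=3$.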
Given $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$, define $\hat{\lambda}\in\mathscr{P}_{p^{\prime}}^{\rho,d-|\lambda^{(1)}|}$ to be the partition with $p$-bar-core $\rho$ and $p$-bar-quotient $(\varnothing,\varnothing,\lambda^{(2)},\dots,\lambda^{(\ell)}).$ ###### Lemma 6.11. Suppose $\mu\in\mathscr{RP}_{p}^{\rho,d}$ and $\lambda\in\mathscr{P}_{p^{\prime}}^{\rho,d}$. If $\mu^{(0)}={\lambda^{(1)}}^{\prime}$ and $\text{\boldmath$g$}^{\hat{\lambda}}$ is a weight of $\mathrm{D}(\tilde{\mu})$, then $\text{\boldmath$g$}^{\lambda}$ is a weight of $\mathrm{D}(\mu)$. ###### Proof. By Proposition 5.2 it suffices to show that we can get from $\mu$ to $\tilde{\mu}$ by successively removing normal nodes, with the residues of these nodes giving the word $\text{\boldmath$g$}^{0,\mu^{(0)}_{1}}\text{\boldmath$g$}^{0,\mu^{(0)}_{2}}\dots$. We use induction on $|\mu^{(0)}|$, with the case $\mu^{(0)}=\varnothing$ being vacuous. For the inductive step, suppose $\mu^{(0)}\neq\varnothing$. Let $\mu^{-}$ be the partition obtained from $\mu$ by deleting the last positive part divisible by $p$; call this last part $k=\mu^{(0)}_{h(\mu^{(0)})}$. Similarly, define $\lambda^{-}$ by deleting the last non-zero column from $\lambda^{(1)}$. Then $\tilde{\mu}=\widetilde{\mu^{-}}$ and $\hat{\lambda}=\widehat{\lambda^{-}}$, so by induction if $\text{\boldmath$g$}^{\hat{\lambda}}$ is a weight of $\mathrm{D}(\tilde{\mu})$ then $\text{\boldmath$g$}^{\lambda^{-}}$ is a weight of $\mathrm{D}(\mu^{-})$. So we just need to show that we can get from $\mu$ to $\mu^{-}$ by removing $2k$ normal $0$-nodes, then $2k$ normal $1$-nodes, …, $2k$ normal
# A Flow approach to the prescribed Gaussian curvature problem in $\mathbb{H}^{n+1}$

Haizhong Li, Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.R. China <EMAIL_ADDRESS>

Ruijia Zhang, Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.R. China <EMAIL_ADDRESS>

###### Abstract.

In this paper, we study the following prescribed Gaussian curvature problem $K=\frac{\tilde{f}(\theta)}{\phi(\rho)^{\alpha-2}\sqrt{\phi(\rho)^{2}+|\overline{\nabla}\rho|^{2}}},$ a generalization of the Alexandrov problem ($\alpha=n+1$) in hyperbolic space, where $\tilde{f}$ is a smooth positive function on $\mathbb{S}^{n}$, $\rho$ is the radial function of the hypersurface, $\phi(\rho)=\sinh\rho$ and $K$ is the Gauss curvature. By a flow approach, we obtain the existence and uniqueness of solutions to the above equation when $\alpha\geq n+1$. Our argument provides a parabolic proof in the smooth category for the Alexandrov problem in $\mathbb{H}^{n+1}$. We also consider the case $2<\alpha\leq n+1$ under the evenness assumption on $\tilde{f}$ and prove the existence of solutions to the above equation.

###### Key words and phrases: Curvature flow, Monotonicity, Asymptotic behaviour, Hyperbolic space

###### 2010 Mathematics Subject Classification: 35K55, 53E10

## 1\. Introduction

The Alexandrov problem proposed by A. D. Alexandrov [A43] is of great significance to the study of convex bodies. It asks for the existence of closed convex hypersurfaces with prescribed volume element of the Gaussian image in the Euclidean space. Let $\hat{M}^{n}$ be the boundary of a convex domain containing a neighborhood of the origin in $\mathbb{R}^{n+1}$, which can be written as a radial graph over $\mathbb{S}^{n}$, i.e., $\hat{M}^{n}=\\{R(\theta)=r(\theta)\theta|\theta\in\mathbb{S}^{n}\\}$ with induced metric $\hat{g}_{ij}$. Denote by $\hat{\nu}(Y):\hat{M}^{n}\rightarrow\mathbb{S}^{n}$ the generalized Gauss map. For smooth $\hat{M}^{n}$, $\hat{\nu}(Y)$ is the unit outward normal vector at $Y\in\hat{M}^{n}$. The Alexandrov problem is to reconstruct $\hat{M}^{n}$ from the given integral Gaussian curvature $\displaystyle\mu(\omega)=|\hat{\nu}(R(\omega))|$ for a nonnegative completely additive function $\mu$ on the set of Borel subsets $\omega$ of $\mathbb{S}^{n}$. Furthermore, if $\hat{M}^{n}$ is at least $C^{2}$, then $\displaystyle|\hat{\nu}(R(\omega))|=\int_{R(\omega)}\hat{K}\mathrm{d}v_{\hat{M}^{n}}=\int_{\omega}\hat{K}\sqrt{\det(\hat{g}_{ij})}\mathrm{d}\theta_{\mathbb{S}^{n}}$ where $\hat{K}$ is the Gauss curvature of $\hat{M}^{n}$ and $\mathrm{d}\theta_{\mathbb{S}^{n}}$ is the standard measure on $\mathbb{S}^{n}$. If $\mu$ is given by integrating a function, we write it as $\displaystyle\mu(\omega)=\int_{\omega}\tilde{f}\mathrm{d}\theta_{\mathbb{S}^{n}}.$ Then the Alexandrov problem can be reduced to the following fully nonlinear partial differential equation $\displaystyle\hat{K}=\frac{\tilde{f}}{r^{n-1}\sqrt{r^{2}+|\bar{\nabla}r|^{2}}}.$ The existence of regular solutions to this equation was established by Pogorelov [P73] for surfaces and by Oliker [O83] for the higher dimensional cases. For more related interesting studies of the Alexandrov problem in $\mathbb{R}^{n+1}$, one can refer to [GL97, T90]. Naturally, similar prescribed Gaussian curvature problems for hypersurfaces $M^{n}$ in $\mathbb{H}^{n+1}$ were studied in [O83, O89], where the given function defined on $M^{n}$ requires additional conditions to ensure the $C^{0}$ estimate. Recently, Yang [Y20] studied the Alexandrov problem in hyperbolic space.
He considered hypersurfaces $M^{n}$ in $\mathbb{H}^{n+1}$ whose Gauss curvature measures are prescribed via a radial map. Let $M^{n}$ be the boundary of a convex body in $\mathbb{H}^{n+1}$ enclosing the origin. We can parametrize it as a graph of the radial function $\rho(\theta)$, so that $M^{n}=\\{R(\theta)=(\rho(\theta),\theta):\rho:\mathbb{S}^{n}\rightarrow\mathbb{R}^{+},\theta\in\mathbb{S}^{n}\\}$. If $M^{n}$ is at least $C^{2}$, then, as in the Euclidean case, we can define the prescribed Gaussian curvature measure problem by $\displaystyle\int_{R(\omega)}K\mathrm{d}v_{M^{n}}=\int_{\omega}\tilde{f}\mathrm{d}\theta_{\mathbb{S}^{n}}$ where $\tilde{f}$ is a given positive function on $\mathbb{S}^{n}$. By a change of coordinates, we write both sides of the above identity over any Borel set $\omega$ of $\mathbb{S}^{n}$ as $\displaystyle\int_{\omega}K\sqrt{\det(g_{ij})}\mathrm{d}\theta_{\mathbb{S}^{n}}=\int_{\omega}\tilde{f}\mathrm{d}\theta_{\mathbb{S}^{n}}.$ Then this curvature measure problem is reduced to the following fully nonlinear PDE $\displaystyle K=\frac{\tilde{f}(\theta)}{\phi(\rho)^{n-1}\sqrt{\phi(\rho)^{2}+|\overline{\nabla}\rho|^{2}}}\quad{\rm on}\ \mathbb{S}^{n}.$ (1.1) Yang proved the existence and uniqueness of solutions to (1.1) under the condition $\inf\tilde{f}>1$. In [Y20] he noted that the existence of solutions to (1.1) when $\tilde{f}$ is endowed with other geometric conditions is still an open question. Note that the condition $\inf\tilde{f}>1$ in [Y20] is only used in the $C^{0}$ estimate. In the study of prescribed curvature measure problems, $C^{0}$ estimates are difficult in most cases, but they are also the most geometric. In Theorem 1.2 and Corollary 1.2, we weaken the condition on $\tilde{f}$ and prove the existence of solutions to (1.1) under the evenness assumption by deriving a delicate $C^{0}$ estimate.

Besides, in this paper we consider the more general prescribed Gaussian curvature problem, which corresponds to the fully nonlinear PDE $\displaystyle K=\frac{\tilde{f}(\theta)}{\phi(\rho)^{\alpha-2}\sqrt{\phi(\rho)^{2}+|\overline{\nabla}\rho|^{2}}}\quad{\rm on}\ \mathbb{S}^{n}.$ (1.2) Motivated by the flow studied by Li-Sheng-Wang [LSW20a] in the Euclidean space, we provide a curvature flow approach to (1.2) in hyperbolic space. Write $f(\theta)=\tilde{f}(\theta)^{-1}$. Let $M_{0}$ be a smooth closed uniformly convex hypersurface in $\mathbb{H}^{n+1}$ enclosing the origin. In this paper, we study the following flow $\left\\{\begin{aligned} \frac{\partial}{\partial t}X(x,t)=&-\phi(\rho)^{\alpha}f(\theta)K(x,t)\nu(x,t)+V(x,t),\\\ X(\cdot,0)=&X_{0}(\cdot),\end{aligned}\right.$ (1.3) where $\alpha\geq n+1$ is a constant. Here we regard $\mathbb{H}^{n+1}$ as a warped product space. Any point $X\in\mathbb{H}^{n+1}$ can be parametrized by $X=(\rho,\theta)\in\mathbb{R}^{+}\times\mathbb{S}^{n}$. Then $f$ is a smooth positive function defined on $\mathbb{S}^{n}$, $\phi(\rho)=\sinh\rho$, $K$ is the Gauss curvature of the flow hypersurface $M_{t}$, $\nu$ is the unit outward normal at $X(x,t)$ and $V=\sinh\rho\ \partial_{\rho}$ is a conformal Killing vector field on $\mathbb{H}^{n+1}$. Equivalently, up to a tangential diffeomorphism, the flow (1.3) can be written as follows: $\displaystyle\partial_{t}X=\left(-\phi(\rho)^{\alpha}f(\theta)K(x,t)+u(x,t)\right)\nu(x,t),$ (1.4) where $u=\langle V,\nu\rangle$ is the support function of $M_{t}$.
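As a quick sanity check (our computation, using only the formulas above): a hypersurface is stationary under (1.4) precisely when $\phi(\rho)^{\alpha}f(\theta)K=u$, i.e. $\phi^{\alpha}K=\tilde{f}u$, since $f=\tilde{f}^{-1}$. Substituting the expression for the support function of a radial graph, $u=\phi^{2}/\sqrt{\phi^{2}+|\overline{\nabla}\rho|^{2}}$ (see (2.13) below), this becomes $K=\frac{\tilde{f}(\theta)}{\phi(\rho)^{\alpha-2}\sqrt{\phi(\rho)^{2}+|\overline{\nabla}\rho|^{2}}},$ which recovers (1.2).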
Note that the following elliptic equation $\displaystyle\phi(\rho)^{\alpha}K=\tilde{f}(\theta)u\quad{\rm on}\ \mathbb{S}^{n}$ (1.5) (which is exactly (1.2)) remains invariant under (1.3). In this paper, we prove the following results.

###### Theorem 1.1.

Let $M_{0}$ be a smooth, closed, uniformly convex hypersurface in hyperbolic space $\mathbb{H}^{n+1}$ enclosing the origin. Suppose that $f$ is a smooth positive function on $\mathbb{S}^{n}$. If

* (i) $\alpha>n+1$ or

* (ii) $\alpha=n+1$ and $f<1$,

then the flow (1.3) has a unique smooth uniformly convex solution $M_{t}$ for all time $t>0$. When $t\rightarrow\infty$, $M_{t}$ converges smoothly to the unique smooth solution of (1.2).

###### Corollary 1.1.

Suppose that $\tilde{f}$ is a smooth positive function on $\mathbb{S}^{n}$. Then there is a unique smooth uniformly convex hypersurface $M^{n}$ in $\mathbb{H}^{n+1}$ satisfying (1.2) under one of the following two assumptions:

* (i) $\alpha>n+1$ or,

* (ii) $\alpha=n+1$ and $\inf\tilde{f}>1$.

###### Remark 1.1.

Case $(ii)$ in Corollary 1.1 was proved by Fengrui Yang in [Y20, Theorem 6].

###### Theorem 1.2.

Let $M_{0}$ be a smooth, closed, uniformly convex and origin-symmetric hypersurface in hyperbolic space $\mathbb{H}^{n+1}$. Assume $\alpha=n+1$, and $f$ is a smooth positive even function on $\mathbb{S}^{n}$ satisfying $\int_{\mathbb{S}^{n}}f(\theta)^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}>|\mathbb{S}^{n}|$. Then the flow (1.3) has a smooth uniformly convex solution for all time $t>0$, and converges smoothly to the unique smooth even solution of (1.1).

###### Corollary 1.2.

Suppose $\tilde{f}$ is a smooth positive even function on $\mathbb{S}^{n}$ satisfying $\int_{\mathbb{S}^{n}}\tilde{f}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}>|\mathbb{S}^{n}|$. Then there is a unique smooth uniformly convex and origin-symmetric hypersurface $M^{n}$ in $\mathbb{H}^{n+1}$ satisfying (1.1).

###### Remark 1.2.

In Corollary 1.2, we weaken the condition on $\tilde{f}$ in Case $(ii)$ of Corollary 1.1, from $\inf\tilde{f}>1$ to $\int_{\mathbb{S}^{n}}\tilde{f}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}>|\mathbb{S}^{n}|$, and obtain the existence of solutions to the even Alexandrov problem in hyperbolic space.

When $2<\alpha\leq n+1$, we consider the following flow $\left\\{\begin{aligned} \frac{\partial}{\partial t}X(x,t)=&-\phi(\rho)^{\alpha}f(\theta)K(x,t)\nu(x,t)+\eta(t)V(x,t),\\\ X(\cdot,0)=&X_{0}(\cdot),\end{aligned}\right.$ (1.6) where $\eta(t)=\frac{\displaystyle\int_{\mathbb{S}^{n}}\frac{K}{u}\phi^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}}{\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}}$ and obtain the existence of solutions to the following equation $\displaystyle\phi(\rho)^{\alpha}K=c\tilde{f}(\theta)u\quad{\rm on}\ \mathbb{S}^{n}.$ (1.7)

###### Theorem 1.3.

Let $M_{0}$ be as in Theorem 1.2. Assume $2<\alpha\leq n+1$, and $f$ is a smooth positive even function on $\mathbb{S}^{n}$. Then the flow (1.6) has a smooth uniformly convex solution for all time $t>0$. When $t\rightarrow\infty$, $M_{t}$ converges smoothly, along a subsequence, to a smooth solution of (1.7) for some positive constant $c$.

###### Corollary 1.3.

Suppose that $\tilde{f}$ is a smooth positive even function on $\mathbb{S}^{n}$. If $2<\alpha<n+1$, then there is a smooth uniformly convex and origin-symmetric hypersurface $M^{n}$ in $\mathbb{H}^{n+1}$ satisfying (1.7) for some positive constant $c$.

Curvature flows in hyperbolic space have been studied extensively in recent years.
In these studies, constrained flows were introduced to prove geometric inequalities, see, e.g., [ACW18, Hu-Li-Wei2020, LWX14, SX19, WX14]. Convergence results for inverse curvature flows were obtained in [G11, LZ19, S15, S'15, WWZ20]. Furthermore, volume preserving curvature flows in hyperbolic space have been studied, see for example [AW18, M12]. All these flows in hyperbolic space share the same limiting shape: they all become round. When $f\equiv 1$, case $(i)$ in Theorem 1.1 was proved by Fang Hong in [F21], who showed that the flow (1.3) converges smoothly to a geodesic sphere. We introduce the function $f$ into the flows (1.3) and (1.6) for the first time and derive the convergence of solutions to (1.2) and (1.7), which builds a bridge between curvature flows in hyperbolic space and solutions to the corresponding elliptic equations. By using the Klein model (see also [AW18, CH21, W19]), we project the hyperbolic flow (1.3) to the Euclidean space and obtain the projection flow (5.14). We discover the function (5.16), which is monotone along (5.14), and derive the asymptotic convergence result for (1.3). For $\alpha\leq n+1$, we design the flow (1.6) and deduce the convergence result by deriving a delicate $C^{0}$ estimate.

This paper is organized as follows. In Section 2, we collect some properties of star-shaped hypersurfaces in hyperbolic space, derive some evolution equations of various geometric quantities along (1.3), (1.6) and show that the flows can be reduced to a scalar parabolic PDE for the radial function. In Section 3, we prove $C^{0}$, $C^{1}$ estimates when $\alpha\geq n+1$ and show that the hypersurface remains star-shaped along the flow (1.3). In Section 4, we obtain the uniform bound of the Gauss curvature $K$ which implies the short time existence of the flow (1.3). By using a new auxiliary function, we obtain the uniform bound of the principal curvatures of $M_{t}$ and establish the a priori estimates for the long time existence of (1.3). In Section 5, we study the asymptotic behaviour of the flow (1.3) by projecting $M_{t}$ to the Euclidean space, prove the uniqueness of the solution to (1.2) and complete the proof of Theorem 1.1. In Section 6, we complete the proof of Theorem 1.2 by deriving a delicate $C^{0}$ estimate. In Section 7, we study the normalized flow (1.6) under the evenness assumption when $2<\alpha\leq n+1$ and complete the proof of Theorem 1.3.

###### Acknowledgments.

The authors were partially supported by NSFC grant No.11831005 and NSFC grant No.12126405. The authors would like to thank Professor Xianfeng Wang and Professor Yong Wei for helpful discussions.

## 2\. Preliminaries

In this paper, we fix a point $o\in\mathbb{H}^{n+1}$, consider the polar geodesic coordinates centered at $o$, and regard $\mathbb{H}^{n+1}$ as a warped product space $[0,+\infty)\times\mathbb{S}^{n}$ equipped with the Riemannian metric $\displaystyle g_{\mathbb{H}^{n+1}}=d\rho^{2}+\phi(\rho)^{2}g_{\mathbb{S}^{n}}$ where $\phi(\rho)=\sinh\rho$ and $g_{\mathbb{S}^{n}}$ is the standard metric on the unit sphere $\mathbb{S}^{n}$. Denote $\displaystyle\Phi(\rho)=\int_{0}^{\rho}\sinh s\ \mathrm{d}s=\phi^{\prime}(\rho)-1.$ (2.1) The conformal Killing vector field can be written as $V=D\Phi=\phi\,\partial_{\rho}$, and $D^{2}\Phi=\phi^{\prime}g_{\mathbb{H}^{n+1}}$. In particular, $DV=\phi^{\prime}g_{\mathbb{H}^{n+1}}$.

### 2.1. Hypersurfaces in hyperbolic space

Let $M^{n}$ be a closed hypersurface in $\mathbb{H}^{n+1}$ and $\\{x^{1},\cdots,x^{n}\\}$ be a local coordinate system of $M^{n}$.
We regard $\nu$ as the unit outward normal vector field of $M^{n}$. We denote the induced metric of $M^{n}$ by $g_{ij}=g(X_{i},X_{j})$ and the second fundamental form by $h_{ij}=h(X_{i},X_{j})$, where the second fundamental form is defined by $h(X,Y)=\langle\nabla_{X}\nu,Y\rangle$ for any two tangent vector fields $X,Y\in TM^{n}$. The Weingarten matrix is $\mathcal{W}=\\{h_{i}{}^{j}\\}=\\{h_{ik}g^{kj}\\}$, where $\\{g^{ij}\\}$ is the inverse matrix of $\\{g_{ij}\\}$. The principal curvatures $\kappa=(\kappa_{1},\cdots,\kappa_{n})$ of $M^{n}$ are the eigenvalues of $\mathcal{W}$.

Let $f(\kappa)$ be a symmetric function of the principal curvatures $\kappa=(\kappa_{1},\kappa_{2},\cdots,\kappa_{n})$. There exists a function $\mathcal{F}(\mathcal{W})$ defined on the Weingarten matrix, such that $\mathcal{F}(\mathcal{W})=f(\kappa)$. Since $h_{i}{}^{j}=\sum_{k}h_{ik}g^{kj}$, $\mathcal{F}$ can be viewed as a function $\hat{\mathcal{F}}(h_{ij},g_{ij})$ defined on the second fundamental form $\\{h_{ij}\\}$ and the metric $\\{g_{ij}\\}$. In what follows, we denote $\displaystyle\dot{\mathcal{F}}^{pq}(\mathcal{W}):=$ $\displaystyle\frac{\partial\hat{\mathcal{F}}}{\partial h_{pq}}(h_{ij},g_{ij}),\quad\ddot{\mathcal{F}}^{pq,rs}(\mathcal{W}):=\frac{\partial^{2}\hat{\mathcal{F}}}{\partial h_{pq}\partial h_{rs}}(h_{ij},g_{ij}).$

Here we collect some formulas for hypersurfaces in hyperbolic space (see [GL15, HL21]).

###### Lemma 2.1.

Let $(M^{n},g)$ be a smooth hypersurface in $\mathbb{H}^{n+1}$. Then we have $\displaystyle\nabla_{i}\Phi=\langle V,X_{i}\rangle,\quad\nabla_{j}\nabla_{i}\Phi=\phi^{\prime}g_{ij}-uh_{ij}.$ (2.2) The support function $u=\langle V,\nu\rangle$ satisfies $\displaystyle\nabla_{i}u=\langle V,X_{k}\rangle h_{i}{}^{k},\quad\nabla_{j}\nabla_{i}u=\langle V,\nabla h_{ij}\rangle+\phi^{\prime}h_{ij}-uh_{i}{}^{k}h_{kj}$ (2.3) where $\nabla$ is the Levi-Civita connection on $M^{n}$ with respect to the induced metric and $\\{X_{1},\cdots,X_{n}\\}$ is a basis of the tangent space of $M^{n}$.

Then we have the first and second derivatives of the distance function $\rho$.

###### Corollary 2.1.

$\displaystyle\nabla_{i}\rho=\frac{\langle V,X_{i}\rangle}{\phi},\quad\nabla_{j}\nabla_{i}\rho=\frac{\phi^{\prime}}{\phi}(g_{ij}-\nabla_{j}\rho\nabla_{i}\rho)-\frac{uh_{ij}}{\phi}.$ (2.4)

###### Proof.

Observe that $\displaystyle\nabla_{i}\Phi=\phi\nabla_{i}\rho,\quad\nabla_{j}\nabla_{i}\Phi=\phi\nabla_{j}\nabla_{i}\rho+\phi^{\prime}\nabla_{j}\rho\nabla_{i}\rho.$ Combining this with (2.2), we get (2.4) by a direct calculation. ∎

### 2.2. Evolution equations

For convenience, we consider the following flow $\frac{\partial}{\partial t}X(x,t)=-\Theta\nu(x,t)+\tilde{\eta}(t)V$ (2.5) where $\Theta=\phi(\rho)^{\alpha}f(\theta)K$ and the global term $\tilde{\eta}(t)$ is a function of time $t$. For $\tilde{\eta}(t)\equiv 1$, (2.5) is the flow (1.3); for $\tilde{\eta}(t)=\eta(t)$, (2.5) is the flow (1.6).

###### Lemma 2.2.

Along the flow (2.5), we have the following evolution equations (also see [WWZ20, F21]).
The induced metric evolves by $\displaystyle\frac{\partial}{\partial t}g_{ij}=$ $\displaystyle-2\Theta h_{ij}+2\phi^{\prime}\tilde{\eta}(t)g_{ij}.$ (2.6) The support function evolves by $\displaystyle\frac{\partial}{\partial t}u=$ $\displaystyle-\phi^{\prime}\Theta+\phi^{\prime}\tilde{\eta}(t)u+\langle V,\nabla\Theta\rangle.$ (2.7) The second fundamental form evolves by $\displaystyle\frac{\partial}{\partial t}h_{i}{}^{j}=\nabla_{i}\nabla^{j}\Theta+\Theta h_{i}{}^{k}h_{k}{}^{j}-\tilde{\eta}(t)\phi^{\prime}h_{i}{}^{j}+(\tilde{\eta}(t)u-\Theta)\delta_{i}{}^{j}$ (2.8) where $\nabla$ is the Levi-Civita connection of the induced metric on $M_{t}$.

###### Proof.

By a direct calculation, we have $\displaystyle\frac{\partial}{\partial t}g_{ij}=$ $\displaystyle{\partial_{t}}\langle\partial_{i}X,\partial_{j}X\rangle$ $\displaystyle=$ $\displaystyle\langle D_{i}\left(-\Theta\nu+\tilde{\eta}(t)V\right),\partial_{j}X\rangle+\langle\partial_{i}X,D_{j}\left(-\Theta\nu+\tilde{\eta}(t)V\right)\rangle$ $\displaystyle=$ $\displaystyle-\Theta\left(\langle D_{i}\nu,\partial_{j}X\rangle+\langle\partial_{i}X,D_{j}\nu\rangle\right)+2\tilde{\eta}(t)\phi^{\prime}g_{ij}$ $\displaystyle=$ $\displaystyle-2\Theta h_{ij}+2\tilde{\eta}(t)\phi^{\prime}g_{ij}.$ Since $\partial_{t}\nu$ is tangential, $\displaystyle\frac{\partial}{\partial t}\nu=$ $\displaystyle\langle\partial_{t}\nu,\partial_{j}X\rangle g^{jl}\partial_{l}X$ (2.9) $\displaystyle=$ $\displaystyle-\langle\nu,\partial_{j}\left(-\Theta\nu+\tilde{\eta}(t)V\right)\rangle g^{jl}\partial_{l}X$ $\displaystyle=$ $\displaystyle\partial_{j}\Theta g^{jl}\partial_{l}X=\nabla\Theta.$ Using (2.9), we obtain the evolution of the support function $u$ as follows: $\displaystyle\frac{\partial}{\partial t}u=$ $\displaystyle\partial_{t}\langle V,\nu\rangle=\langle-\phi^{\prime}\Theta\nu+\phi^{\prime}\tilde{\eta}(t)V,\nu\rangle+\langle V,\nabla\Theta\rangle$ $\displaystyle=$ $\displaystyle-\phi^{\prime}\Theta+\phi^{\prime}\tilde{\eta}(t)u+\langle V,\nabla\Theta\rangle.$ Now we calculate the evolution of $h_{ij}$: $\displaystyle\frac{\partial}{\partial t}h_{ij}=$ $\displaystyle-\partial_{t}\langle D_{\partial_{i}X}\partial_{j}X,\nu\rangle$ $\displaystyle=$ $\displaystyle-\langle D_{\partial_{i}X}D_{\partial_{j}X}(-\Theta\nu+\tilde{\eta}(t)V),\nu\rangle-R^{\mathbb{H}^{n+1}}(\partial_{i}X,\partial_{t}X,\partial_{j}X,\nu)-\langle D_{\partial_{i}X}\partial_{j}X,\nabla\Theta\rangle$ $\displaystyle=$ $\displaystyle\partial_{i}\partial_{j}\Theta-\Theta(h^{2})_{ij}+\tilde{\eta}(t)\phi^{\prime}h_{ij}+(\tilde{\eta}(t)u-\Theta)g_{ij}-\langle\nabla_{\partial_{i}X}\partial_{j}X,\nabla\Theta\rangle$ $\displaystyle=$ $\displaystyle\nabla_{i}\nabla_{j}\Theta-\Theta h_{i}{}^{k}h_{kj}+\tilde{\eta}(t)\phi^{\prime}h_{ij}+(\tilde{\eta}(t)u-\Theta)g_{ij}.$ From (2.6), we have $\displaystyle\frac{\partial}{\partial t}g^{ij}=-g^{il}\left(\partial_{t}g_{lm}\right)g^{mj}=2\Theta h^{ij}-2\tilde{\eta}(t)\phi^{\prime}g^{ij}.$ Thus $\displaystyle\frac{\partial}{\partial t}h_{i}{}^{j}=$ $\displaystyle\partial_{t}h_{il}g^{lj}+h_{il}\partial_{t}g^{lj}$ (2.10) $\displaystyle=$ $\displaystyle\nabla_{i}\nabla^{j}\Theta+\Theta h_{i}{}^{k}h_{k}{}^{j}-\tilde{\eta}(t)\phi^{\prime}h_{i}{}^{j}+(\tilde{\eta}(t)u-\Theta)\delta_{i}{}^{j}.$ ∎
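As a quick consistency check of (2.6) (our verification, not part of the original argument), consider a geodesic sphere $\rho(\cdot,t)\equiv r(t)$: then $g_{ij}=\phi(r)^{2}(g_{\mathbb{S}^{n}})_{ij}$ and $h_{ij}=\frac{\phi^{\prime}}{\phi}g_{ij}$ (cf. (2.14) and (2.15) below), and the flow (2.5) reduces to the ODE $r^{\prime}(t)=-\Theta+\tilde{\eta}(t)\phi$, so that $\frac{\partial}{\partial t}g_{ij}=2\phi\phi^{\prime}r^{\prime}\,(g_{\mathbb{S}^{n}})_{ij}=2\frac{\phi^{\prime}}{\phi}\left(-\Theta+\tilde{\eta}(t)\phi\right)g_{ij}=-2\Theta h_{ij}+2\phi^{\prime}\tilde{\eta}(t)g_{ij},$ in agreement with (2.6).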
### 2.3. Parametrization by radial graph

For a closed star-shaped hypersurface $M^{n}\subset\mathbb{H}^{n+1}$, we can parametrize it as a graph of the radial function $\rho(\theta):\mathbb{S}^{n}\to\mathbb{R}$, i.e., $\displaystyle M^{n}=\\{(\rho(\theta),\theta):\rho:\mathbb{S}^{n}\rightarrow\mathbb{R}^{+},\quad\theta\in\mathbb{S}^{n}\\}$ where $\theta=(\theta^{1},\cdots,\theta^{n})$ is a local normal coordinate system of $\mathbb{S}^{n}$ and $\rho$ is a smooth function on $\mathbb{S}^{n}$. Let $f_{i}=\overline{\nabla}_{i}f$, $f_{ij}=\overline{\nabla}^{2}_{ij}f$, where $\overline{\nabla}$ is the Levi-Civita connection on $\mathbb{S}^{n}$ with respect to the standard metric $g_{\mathbb{S}^{n}}$. The tangent space of $M^{n}$ is spanned by (also see [CLW18]) $\displaystyle X_{i}=\rho_{i}\partial_{\rho}+\partial_{\theta_{i}}$ (2.11) and the unit outward normal vector is $\displaystyle\nu=\frac{\partial_{\rho}-\frac{\rho^{i}\partial_{\theta_{i}}}{\phi^{2}}}{w},$ where we set $\displaystyle w=\sqrt{1+\frac{|\overline{\nabla}\rho|^{2}}{\phi^{2}}}.$ (2.12) Then the support function and the induced metric can be expressed as $\displaystyle u=\frac{\phi^{2}}{\sqrt{\phi^{2}+|\overline{\nabla}\rho|^{2}}},$ (2.13) $\displaystyle g_{ij}=\phi^{2}\delta_{ij}+\rho_{i}\rho_{j},\quad g^{ij}=\frac{1}{\phi^{2}}(\delta^{ij}-\frac{\rho_{i}\rho_{j}}{\phi^{2}+|\overline{\nabla}\rho|^{2}}).$ (2.14) The second fundamental form is given by $\displaystyle h_{ij}=\frac{-\phi\rho_{ij}+2\phi^{\prime}\rho_{i}\rho_{j}+\phi^{2}\phi^{\prime}\delta_{ij}}{\sqrt{\phi^{2}+|\overline{\nabla}\rho|^{2}}}$ (2.15) and we have the Weingarten matrix $\displaystyle h_{i}{}^{j}=\frac{1}{\phi^{2}\sqrt{\phi^{2}+|\overline{\nabla}\rho|^{2}}}\left(\delta^{jk}-\frac{\rho_{j}\rho_{k}}{\phi^{2}+|\overline{\nabla}\rho|^{2}}\right)\left(-\phi\rho_{ki}+2\phi^{\prime}\rho_{k}\rho_{i}+\phi^{2}\phi^{\prime}\delta_{ki}\right).$ (2.16) Similar to [LSW20a, p. 901], the flow (2.5) can be written as a scalar parabolic PDE for the radial function $\left\\{\begin{aligned} \frac{\partial}{\partial t}\rho(\theta,t)=&-\phi(\rho)^{\alpha}f(\theta)wK+\tilde{\eta}(t)\phi(\rho(\theta,t)),\quad\text{for }(\theta,t)\in\mathbb{S}^{n}\times[0,+\infty),\\\ \rho(\cdot,0)=&\rho_{0}(\cdot),\end{aligned}\right.$ (2.17) where $w$ is the function defined in (2.12).

## 3\. $C^{0}$ and $C^{1}$ estimates

In this section, we establish the $C^{0}$ and $C^{1}$ estimates of the flow (1.3) for the proof of Theorem 1.1. In particular, we show that the flow hypersurface $M_{t}$ remains star-shaped along (1.3).

### 3.1. $C^{0}$ estimate

In this subsection, we will show that the radial function $\rho$ of (1.3) is uniformly bounded under the assumptions of Theorem 1.1.

###### Lemma 3.1.

Let $\rho(\cdot,t)$ be a smooth, positive, uniformly convex solution to (2.17) on $\mathbb{S}^{n}\times[0,T)$ provided $\tilde{\eta}(t)\equiv 1$. If $\alpha>n+1$, or $\alpha=n+1$ with $f<1$, then there is a positive constant $C$ depending only on $n$, $\max f$, $\inf f$ and the initial hypersurface $M_{0}$, such that $\displaystyle\frac{1}{C}\leq\rho(\cdot,t)\leq C,\quad\forall t\in[0,T).$

###### Proof.

Fix time $t$ and suppose that $\rho$ attains its maximum at a point $(p_{0},t)$. At $(p_{0},t)$, we have $|\overline{\nabla}\rho|=0$ and $\rho_{ij}\leq 0$.
From (2.16), $\displaystyle h_{i}{}^{j}\geq\frac{\phi^{\prime}}{\phi}\delta_{i}{}^{j}.$ (3.1) Inserting (3.1) into (2.17), we obtain $\displaystyle\partial_{t}\rho\leq-\phi^{\alpha-n}\phi^{\prime n}f+\phi=\phi(-\phi^{\alpha-n-1}\phi^{\prime n}f+1)\leq\phi(-\phi^{\alpha-1}f+1),$ (3.2) where we use the fact $\frac{\phi^{\prime}}{\phi}\geq 1$. Since $\alpha\geq n+1$, if $\phi\leq(\min f)^{-\frac{1}{\alpha-1}}$ for all $t\geq 0$, then we obtain the uniform upper bound of $\rho$. If there is some $t_{0}$ such that $\phi>(\min f)^{-\frac{1}{\alpha-1}}$, then we obtain $\rho\leq\max_{\mathbb{S}^{n}}\rho(\cdot,0)$ by (3.2). Thus $\rho$ has a uniform upper bound depending on the positive lower bound of $f$ on $\mathbb{S}^{n}$ and on the initial hypersurface.

Suppose $\rho$ attains its spatial minimum at a point $(q_{0},t)$. Similarly, at $(q_{0},t)$, we have $\displaystyle h_{i}{}^{j}\leq\frac{\phi^{\prime}}{\phi}\delta_{i}{}^{j}$ (3.3) and $\displaystyle\partial_{t}\rho\geq-\phi^{\alpha-n}\phi^{\prime n}f+\phi=\phi(-\phi^{\alpha-n-1}\phi^{\prime n}f+1).$ (3.4) When $\alpha>n+1$, $\phi^{\alpha-n-1}\phi^{\prime n}=(\sinh\rho)^{\alpha-n-1}(\cosh\rho)^{n}\rightarrow 0$ as $\rho\rightarrow 0$. When $\alpha=n+1$, $\phi^{\alpha-n-1}\phi^{\prime n}=(\cosh\rho)^{n}\rightarrow 1$ as $\rho\rightarrow 0$. Hence if $\phi^{\alpha-n-1}\phi^{\prime n}\geq\frac{1}{\max f}$ for all $t\geq 0$, then we obtain the uniform lower bound of $\rho$ provided $\alpha>n+1$ or $\alpha=n+1$ with the assumption $f<1$. If there is some $t_{0}$ such that $\phi^{\alpha-n-1}\phi^{\prime n}<\frac{1}{\max f}$, we obtain $\rho\geq\min_{\mathbb{S}^{n}}\rho(\cdot,0)$ by (3.4). Thus $\rho$ has a uniform lower bound depending on $\max f$ and the initial hypersurface. ∎

### 3.2. $C^{1}$ estimate

In this subsection, we derive a uniform upper bound for the gradient of $\rho$ by using the approach in [G06, WWZ20].

###### Lemma 3.2.

Let $\rho(\cdot,t)$ be a smooth, positive, uniformly convex solution to (2.17) on $\mathbb{S}^{n}\times[0,T)$. Based on the results of Lemma 3.1, we have $|\overline{\nabla}\rho|\leq C,$ where $C$ depends only on the uniform upper and lower bounds of $\rho$.

###### Proof.

Fix some time $t$ and consider the auxiliary function $Q=\log w+\beta\rho$, where we take $\beta=-2\tanh(\rho_{\min})$, using the bounds from Lemma 3.1. At a maximum point of $Q$, we have $\overline{\nabla}Q=0$, so $\displaystyle\frac{\gamma_{li}\gamma_{l}}{w^{2}}+\beta\rho_{i}=0,$ (3.5) where we set $\gamma(\rho)=\log(1-\frac{2}{e^{\rho}+1})$, so that $\frac{d\gamma}{d\rho}=\frac{1}{\phi}$.
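Indeed, one can check this formula for $\gamma$ directly (a one-line computation, ours): $\gamma(\rho)=\log\frac{e^{\rho}-1}{e^{\rho}+1}=\log\tanh\frac{\rho}{2}$, so $\frac{d\gamma}{d\rho}=\frac{1}{2}\cdot\frac{\operatorname{sech}^{2}(\rho/2)}{\tanh(\rho/2)}=\frac{1}{2\sinh(\rho/2)\cosh(\rho/2)}=\frac{1}{\sinh\rho}=\frac{1}{\phi}.$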
Then (2.12), (2.14), (2.15), (2.16) become $\displaystyle g_{ij}=\phi^{2}(\delta_{ij}+\gamma_{i}\gamma_{j}),\quad g^{ij}=\frac{1}{\phi^{2}}(\delta^{ij}-\frac{\gamma_{i}\gamma_{j}}{w^{2}}),$ (3.6) $\displaystyle w=\sqrt{1+|\overline{\nabla}\gamma|^{2}},\quad h_{ij}=\frac{\phi^{\prime}}{\phi w}g_{ij}-\frac{\phi}{w}\gamma_{ij}.$ (3.7) The Weingarten matrix turns into $\displaystyle h_{i}{}^{j}=h_{il}g^{lj}=\frac{\phi^{\prime}}{\phi w}\delta_{i}{}^{j}-\frac{\phi}{w}\gamma_{il}g^{lj}.$ (3.8) Inserting (3.8) into (3.5) and multiplying both sides of (3.5) by $w^{2}\rho^{i}$, we obtain $\displaystyle\frac{w}{\phi}(\frac{\phi^{\prime}}{\phi w}\delta_{i}{}^{j}-h_{i}{}^{j})g_{lj}\gamma_{l}\rho^{i}+\beta w^{2}|\overline{\nabla}\rho|^{2}=0.$ (3.9) By a direct calculation, from (3.6) and (3.7), we have $\displaystyle g_{lj}\gamma_{l}=\phi\rho_{j}+|\overline{\nabla}\rho|^{2}\frac{\rho_{j}}{\phi}=\phi w^{2}\rho_{j}.$ (3.10) This together with (3.9) implies that $\displaystyle(\beta+\frac{\phi^{\prime}}{\phi})|\overline{\nabla}\rho|^{2}=wh_{i}{}^{j}\rho_{j}\rho^{i}.$ (3.11) Since $\\{h_{i}{}^{j}\\}$ is positive-definite and $\beta+\frac{\phi^{\prime}}{\phi}<0$, we obtain $\overline{\nabla}\rho=0$ at the maximum point of $Q$. Thus $Q_{\max}\leq\beta\rho_{\min}<0$, and hence $\displaystyle|\overline{\nabla}\rho|\leq\sinh(\rho_{\max})\sqrt{e^{4\rho_{\max}\tanh(\rho_{\min})}-1}.$ ∎

###### Remark 3.1.

When the hypersurface is uniformly convex in hyperbolic space, the gradient estimate of $\rho$ follows from the uniform upper and lower bounds of the radial function $\rho$.

###### Corollary 3.1.

Based on the results of Lemma 3.1, along the flow (1.3) the hypersurface $M_{t}$ remains star-shaped, and the support function $u$ satisfies $\displaystyle\frac{1}{C}\leq u\leq C,\quad\forall t\in[0,T),$ for some constant $C>0$, where $C$ depends only on the $C^{0}$ estimate.

###### Proof.

Recall (2.13), $\displaystyle u=\frac{\phi}{\sqrt{1+\frac{|\overline{\nabla}\rho|^{2}}{\phi^{2}}}}.$ The upper and lower bounds of $u$ follow from Lemmas 3.1 and 3.2. Besides, we have $\displaystyle\langle\partial_{\rho},\nu\rangle=\frac{u}{\phi}=\frac{1}{w}=\frac{1}{\sqrt{1+\frac{|\overline{\nabla}\rho|^{2}}{\phi^{2}}}}\geq\frac{1}{C^{\prime}}$ for some $C^{\prime}>0$ depending on $\max|\overline{\nabla}\rho|$ and $\min\rho$. Thus the hypersurface remains star-shaped along the flow (2.5). ∎

###### Remark 3.2.

When the hypersurface is uniformly convex in hyperbolic space, the upper and lower bounds of the support function $u$ follow from the uniform bound of the radial function $\rho$.

## 4\. $C^{2}$ estimates

In this section, let us assume that we have already obtained the uniform upper and lower bounds of the radial function $\rho$ and the global term $\tilde{\eta}(t)$ along the flow (2.5). From Lemma 3.2 and Corollary 3.1, the uniform bound of $\rho$ implies the upper and lower bounds of $w$ and the support function $u$. We shall establish the $C^{2}$ estimate under this assumption.

### 4.1. The bounds of $K$

In this subsection, we show that $K$ is bounded from above and below along (2.5).

###### Lemma 4.1.

Along (2.5), there is a constant $c>0$ depending on $M_{0}$, $\alpha$, $n$ and the uniform bounds of $\rho$, $\tilde{\eta}(t)$ and $f$, such that $K\geq c.$

###### Proof.
First, we calculate the evolution equation of $\Theta=\phi^{\alpha}fK$ $\displaystyle\frac{\partial}{\partial t}\Theta=$ $\displaystyle\phi(\rho)^{\alpha}f\partial_{t}K+Kf\partial_{t}\left(\phi(\rho)^{\alpha}\right)$ (4.1) $\displaystyle=$ $\displaystyle\alpha\phi^{\alpha-1}\phi^{\prime}\partial_{t}\rho fK+\phi^{\alpha}f\frac{\partial K}{\partial h_{i}{}^{j}}\partial_{t}h_{i}{}^{j}.$ Differentiating $\langle V,V\rangle=\phi^{2}$, we obtain $\displaystyle\phi\phi^{\prime}\partial_{t}\rho=\langle\partial_{t}V,V\rangle=\phi^{\prime}\langle\partial_{t}X,V\rangle.$ (4.2) By (2.5), we have $\displaystyle\partial_{t}\rho=\frac{\langle\partial_{t}X,V\rangle}{\phi}=\frac{-\Theta u+\tilde{\eta}(t)\phi^{2}}{\phi}=-\frac{\Theta}{w}+\tilde{\eta}(t)\phi.$ (4.3) Inserting (2.8) and (4.3) into (4.1), we get $\displaystyle\frac{\partial}{\partial t}\Theta$ (4.4) $\displaystyle=$ $\displaystyle-\alpha\phi^{\alpha-1}\phi^{\prime}\frac{\Theta}{w}fK+\alpha\tilde{\eta}(t)\phi^{\prime}\Theta+\phi^{\alpha}f\frac{\partial K}{\partial h_{i}{}^{j}}\left(\nabla_{i}\nabla^{j}\Theta+\Theta h_{i}{}^{k}h_{k}{}^{j}-\phi^{\prime}\tilde{\eta}(t)h_{i}{}^{j}+\left(\tilde{\eta}(t)u-\Theta\right)\delta_{i}{}^{j}\right)$ $\displaystyle=$ $\displaystyle\phi^{\alpha}f\dot{K}^{ij}\Theta_{ij}-\frac{\alpha\Theta^{2}\phi^{\prime}}{w\phi}+(\alpha-n)\tilde{\eta}(t)\phi^{\prime}\Theta+(\tilde{\eta}(t)u-\Theta)\phi^{\alpha}f\sigma_{n-1}+\Theta^{2}H,$ where we use $\frac{\partial K}{\partial h_{i}{}^{j}}\delta_{i}{}^{j}=\sigma_{n-1}$ and $\frac{\partial K}{\partial h_{i}{}^{j}}h_{i}{}^{k}h_{k}{}^{j}=KH$ in the last equality. At the spatial minimum point of $\Theta$ on $M_{t}$, we have $\phi^{\alpha}f\dot{K}^{ij}\Theta_{ij}\geq 0$. We can assume $\Theta<\inf\limits_{[0,T)}\tilde{\eta}(t)\min\frac{u}{2}$ from Corollary 3.1 without loss of generality. Hence, $\displaystyle\frac{d}{dt}\Theta_{\min}(t)\geq$ $\displaystyle-\frac{\alpha\Theta_{\min}^{2}\phi^{\prime}}{w\phi}+(\alpha-n)\tilde{\eta}(t)\phi^{\prime}\Theta_{\min}+\inf_{[0,T)}\tilde{\eta}(t)\frac{u_{\min}}{2}\phi^{\alpha}f\Theta_{\min}^{\frac{n-1}{n}}$ $\displaystyle\geq$ $\displaystyle- c_{1}\Theta_{\min}^{2}(t)-c_{2}\Theta_{\min}+c_{3}\Theta_{\min}^{\frac{n-1}{n}}$ for some constant $c_{1},c_{2},c_{3}>0$, where all of them depend on $\alpha$, $n$ and the uniform upper and lower bounds of $\rho$, $\tilde{\eta}(t)$ and $f$. Hence there is a positive constant $c_{4}$ depending only on $c_{1},c_{2},c_{3}$, such that if $\Theta_{\min}(t)\in(0,c_{4})$, we have $\frac{d}{dt}\Theta_{\min}(t)>0$. Therefore $\Theta_{\min}(t)\geq\min\\{\Theta_{\min}(0),c_{4},\inf\tilde{\eta}(t)\min\frac{u}{2}\\}$. Since $\rho$ and $f$ are bounded from above, $K=\phi^{-\alpha}f^{-1}\Theta$ is bounded from below by some positive constant $c$. ∎ ###### Lemma 4.2. Along (2.5), there is a constant $C>0$ depending only on $M_{0}$, $\alpha$, $n$ and the uniform bounds of $\rho$, $\tilde{\eta}(t)$, $\min f$ and $\|f\|_{C^{1}}$, such that $K\leq C.$ ###### Proof. Let $Q=\log\Theta-\log(u-a)$, where $a=\frac{1}{2}\inf_{M\times[0,T)}u$. 
Recall (2.7), $\displaystyle\frac{\partial}{\partial t}u=$ $\displaystyle-\phi^{\prime}\Theta+\phi^{\prime}\tilde{\eta}(t)u+\langle V,\nabla\Theta\rangle$ (4.5) $\displaystyle=$ $\displaystyle-\phi^{\prime}(\Theta-\tilde{\eta}(t)u)+\alpha\phi^{\prime}\frac{\Theta}{\phi}\langle V,\nabla\rho\rangle+\frac{\langle V,\nabla f\rangle\Theta}{f}+\frac{\langle\nabla K,V\rangle\Theta}{K}$ $\displaystyle=$ $\displaystyle-\phi^{\prime}(\Theta-\tilde{\eta}(t)u)+\alpha\phi^{\prime}|\nabla\rho|^{2}\Theta+\langle V,\nabla\log f\rangle\Theta+\langle\nabla\log K,V\rangle\Theta.$ Combining this with (2.3), we obtain the evolution of $u$ $\displaystyle\partial_{t}u=\phi^{\alpha}f\dot{K}^{ij}u_{ij}-\phi^{\prime}\left((n+1)\Theta-\tilde{\eta}(t)u\right)+\alpha\phi^{\prime}|\nabla\rho|^{2}\Theta+\frac{\langle V,\nabla f\rangle}{f}\Theta+u\phi^{\alpha}f\dot{K}^{ij}h_{i}{}^{k}h_{kj},$ (4.6) where we use the Codazzi equation in the equality. $|\nabla f(X)|$ can be estimated as $\displaystyle|\nabla f(X)|\leq C(\min\rho,\|\rho\|_{C^{1}(\mathbb{S}^{n})},\|f\|_{C^{1}(\mathbb{S}^{n})}).$ (4.7) At the spatial maximum point $(p_{0},t)$ of $Q$ on $M_{t}$, we have $\displaystyle\frac{\nabla\Theta}{\Theta}=\frac{\nabla u}{u-a}.$ (4.8) Inserting (4.6) and (4.8) into (4.4), we obtain $\displaystyle\partial_{t}Q=$ $\displaystyle\frac{\partial_{t}\Theta}{\Theta}-\frac{\partial_{t}u}{u-a}$ $\displaystyle=$ $\displaystyle\frac{\dot{K}^{ij}\Theta_{ij}}{K}-\frac{\alpha\Theta\phi^{\prime}}{w\phi}+(\alpha-n)\tilde{\eta}(t)\phi^{\prime}+(\tilde{\eta}(t)u-\Theta)\frac{\sigma_{n-1}}{K}+\Theta H$ $\displaystyle-\frac{\phi^{\alpha}f\dot{K}^{ij}u_{ij}}{u-a}+\phi^{\prime}\frac{(n+1)\Theta-\tilde{\eta}(t)u}{u-a}-\alpha\phi^{\prime}|\nabla\rho|^{2}\frac{\Theta}{u-a}-\frac{\langle V,\nabla f\rangle}{f}\frac{\Theta}{u-a}-\frac{u}{u-a}\Theta H$ $\displaystyle\leq$ $\displaystyle\phi^{\alpha}f\dot{K}^{ij}Q_{ij}+(\alpha-n)\tilde{\eta}(t)\phi^{\prime}+\phi^{\prime}\frac{(n+1)\Theta}{u-a}+\frac{\phi\Theta|\nabla f|}{f(u-a)}-n(C_{n}^{k})^{-\frac{1}{n}}\phi^{\alpha}fK^{\frac{1+n}{n}}\frac{a}{u-a},$ where we assume $\Theta>\max\tilde{\eta}\max_{M\times[0,T)}u$ from Corollary 3.1 without loss of generality. Here we also use the Newton-Maclaurin inequality $H\geq nK^{\frac{1}{n}}$ and the Cauchy-Schwarz inequality $\langle V,\nabla f\rangle\leq\phi|\nabla f|$ to obtain the last inequality. Then we get $\displaystyle\partial_{t}Q\leq$ $\displaystyle C_{1}+C_{2}\Theta-C_{3}\Theta^{\frac{n+1}{n}}$ (4.9) for some $C_{1},C_{2},C_{3}>0$ depending on $\alpha$, $n$, $\min f$ and $\|f\|_{C^{1}}$, the uniform bounds of $\rho$ and $\tilde{\eta}(t)$. Besides, there is a constant $C_{4}$ depending on the uniform upper and lower bounds of $u$, such that $\displaystyle\frac{1}{C_{4}}e^{Q}\leq\Theta=(u-a)e^{Q}\leq C_{4}e^{Q}.$ (4.10) We have $\displaystyle\partial_{t}Q\leq$ $\displaystyle C_{1}+C_{2}C_{4}e^{Q}-C_{3}(C_{4})^{-\frac{n+1}{n}}e^{\frac{n+1}{n}Q}.$ Therefore, $Q\leq\max\\{C_{5},Q_{\max}(0),\max\tilde{\eta}\max\limits_{M\times[0,T)}u\\}$ where $C_{5}$ is a positive constant depending on $C_{1}$, $C_{2}$, $C_{3}$ and $C_{4}$. Hence $K=\phi^{-\alpha}f^{-1}(u-a)e^{Q}$ is bounded from above by some positive constant $C$ depending only on $M_{0}$, $\alpha$, $n$, $\min f$, $\|f\|_{C^{1}}$ and the uniform bounds of $\rho$ and $\tilde{\eta}(t)$. ∎

In the proof of Lemma 4.1 and Lemma 4.2 we also obtain the uniform bounds of $\Theta$.

###### Corollary 4.1.
Along (2.5), $\Theta=\phi^{\alpha}fK$ is uniformly bounded, i.e., $\displaystyle\frac{1}{C}<\Theta(x,t)<C$ for some constant $C>0$ that depends only on $M_{0}$, $\alpha$, $n$, $\min f$, $\|f\|_{C^{1}}$ and the uniform bounds of $\rho$ and $\tilde{\eta}(t)$.

We established the $C^{0}$, $C^{1}$ estimates in Section 3 along (1.3) for the proof of Theorem 1.1. Due to Lemma 4.1 and Lemma 4.2, we have the following corollary.

###### Corollary 4.2.

Let $M_{0}$ be as in Theorem 1.1. When $\alpha>n+1$ or $\alpha=n+1$ with $\max f<1$, there is a positive constant $C$ depending only on $M_{0}$, $\alpha$, $n$, $\min f$ and $\|f\|_{C^{1}}$ along the flow (1.3), such that $\displaystyle\frac{1}{C}\leq K\leq C.$ (4.11)

### 4.2. The bound of principal curvatures

In this subsection, we shall show that the principal curvatures of $M_{t}$ are uniformly bounded along the flow (2.5) under the assumptions stated at the beginning of this section.

###### Lemma 4.3.

Along (2.5), there is a positive constant $C$ depending on $M_{0}$, $\alpha$, $n$, $\min f$, $\|f\|_{C^{2}(\mathbb{S}^{n})}$ and the uniform bounds of $\rho$, $\tilde{\eta}$, such that the principal curvatures satisfy $\displaystyle\frac{1}{C}\leq\kappa_{i}(\cdot,t)\leq C,\quad\forall\ t\ \in[0,T)\ {\rm and}\ i=1,2,\cdots,n.$

###### Proof.

Denote by $\lambda(x,t)$ the maximal principal radius at $X(x,t)$. Let $A$ be a positive constant to be determined later. Set $Q^{\prime}=\log\lambda(x,t)+A\rho$. Fix an arbitrary time $T_{0}\in(0,T)$. Assume that $Q^{\prime}$ attains its maximum over $t\in[0,T_{0})$ at $(x_{0},t_{0})$. We then introduce a normal coordinate system $\\{\partial_{i}\\}$ around $(x_{0},t_{0})$, such that $\nabla_{\partial_{i}X}\partial_{j}X(x_{0},t_{0})=0$ for all $i,j=1,2\cdots,n$ and $h_{ij}(x_{0},t_{0})=\kappa_{i}(x_{0},t_{0})\delta_{ij}$. Further, we can choose $\partial_{1}|_{(x_{0},t_{0})}$ as the eigenvector with respect to $\lambda(x_{0},t_{0})$, i.e., $\lambda(x_{0},t_{0})=\tilde{h}^{11}(x_{0},t_{0})$. Here $\\{\tilde{h}^{ij}\\}$ denotes the inverse matrix of $\\{h_{ij}\\}$. Clearly $\displaystyle\lambda(x,t)=\max\\{\tilde{h}^{ij}(x,t)\xi_{i}\xi_{j}\,|\,g^{ij}(x,t)\xi_{i}\xi_{j}=1\\}.$ (4.12) For continuity reasons, using this coordinate system we consider the auxiliary function $Q=\log\upsilon+A\rho$, where $\upsilon=\frac{\tilde{h}^{11}}{g^{11}}$. Note that $\upsilon(x,t)\leq\lambda(x,t)$ from (4.12) and $\upsilon(x_{0},t_{0})=\lambda(x_{0},t_{0})$. Thus, $Q(x,t)\leq Q(x_{0},t_{0})$ for all $t\in[0,T_{0})$.
Now we can calculate the derivatives of $\upsilon$ at $(x_{0},t_{0})$ as follows: $\partial_{t}\upsilon=-(\tilde{h}^{11})^{2}\partial_{t}h_{11}+\tilde{h}^{11}\partial_{t}g_{11}=-\left(\tilde{h}^{11}\right)^{2}\partial_{t}h_{1}{}^{1},$ (4.13) $\displaystyle\nabla_{i}\upsilon=$ $\displaystyle\frac{\partial\tilde{h}_{1}{}^{1}}{\partial h_{p}{}^{q}}\nabla_{i}h_{p}{}^{q}=-\tilde{h}_{1}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{i}h_{p}{}^{q},\quad\nabla_{i}\upsilon(x_{0},t_{0})=-(\tilde{h}^{11})^{2}\nabla_{i}h_{11}.$ (4.14) Then $\displaystyle\nabla_{j}\nabla_{i}\upsilon=$ $\displaystyle\nabla_{j}(-\tilde{h}_{1}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{i}h_{p}{}^{q})=-\nabla_{j}\tilde{h}_{1}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{i}h_{p}{}^{q}-\tilde{h}_{1}{}^{p}\nabla_{j}\tilde{h}_{q}{}^{1}\nabla_{i}h_{p}{}^{q}-\tilde{h}_{1}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{j}\nabla_{i}h_{p}{}^{q}$ $\displaystyle=$ $\displaystyle\tilde{h}_{1}{}^{r}\tilde{h}_{l}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{j}h_{r}{}^{l}\nabla_{i}h_{p}{}^{q}+\tilde{h}_{1}{}^{p}\tilde{h}_{r}{}^{1}\tilde{h}_{q}{}^{s}\nabla_{j}h_{s}{}^{r}\nabla_{i}h_{p}{}^{q}-\tilde{h}_{1}{}^{p}\tilde{h}_{q}{}^{1}\nabla_{j}\nabla_{i}h_{p}{}^{q}$ and $\displaystyle\nabla_{j}\nabla_{i}\upsilon(x_{0},t_{0})=$ $\displaystyle-(\tilde{h}^{11})^{2}\nabla_{j}\nabla_{i}h_{11}+2(\tilde{h}^{11})^{2}\tilde{h}^{pp}\nabla_{i}h_{1p}\nabla_{j}h_{1p}.$ (4.15) Here we denote by $\\{\tilde{h}_{i}{}^{j}\\}$ the inverse matrix of $\\{h_{i}{}^{j}\\}$, i.e., $\\{\tilde{h}_{i}{}^{j}\\}=\\{\tilde{h}^{ik}g_{kj}\\}$. Then we calculate the first term in (2.8): $\displaystyle\nabla_{j}\nabla_{i}\Theta=$ $\displaystyle\phi^{\alpha}f\nabla_{j}\nabla_{i}K+K\nabla_{j}\nabla_{i}(\phi^{\alpha}f)+\nabla_{i}(\phi^{\alpha}f)\nabla_{j}K+\nabla_{j}(\phi^{\alpha}f)\nabla_{i}K$ (4.16) $\displaystyle=$ $\displaystyle\phi^{\alpha}f\dot{K}^{pq}h_{pqij}+\phi^{\alpha}f\ddot{K}^{pq,rs}h_{pqi}h_{rsj}+fK\nabla_{j}\nabla_{i}(\phi^{\alpha})+K\phi^{\alpha}\nabla_{j}\nabla_{i}f+K\nabla_{i}\phi^{\alpha}\nabla_{j}f$ $\displaystyle+K\nabla_{j}\phi^{\alpha}\nabla_{i}f+f\nabla_{i}(\phi^{\alpha})\nabla_{j}K+f\nabla_{j}(\phi^{\alpha})\nabla_{i}K+\phi^{\alpha}\nabla_{i}f\nabla_{j}K+\phi^{\alpha}\nabla_{j}f\nabla_{i}K.$ Due to the Codazzi equation and the Ricci identity, we have $\displaystyle\dot{K}^{pq}h_{pqij}=$ $\displaystyle\dot{K}^{pq}h_{piqj}=\dot{K}^{pq}\left(h_{pijq}+h_{lp}R_{liqj}+h_{li}R_{lpqj}\right)$ (4.17) $\displaystyle=$ $\displaystyle\dot{K}^{pq}(h_{ijpq}+h_{lp}h_{lq}h_{ij}-h_{lp}h_{lj}h_{iq}+h_{li}h_{lq}h_{pj}-h_{li}h_{lj}h_{pq}$ $\displaystyle- h_{pq}\delta_{ij}+h_{jp}\delta_{iq}-h_{qi}\delta_{pj}+h_{ij}\delta_{pq})$ $\displaystyle=$ $\displaystyle\dot{K}^{pq}h_{ijpq}+KHh_{ij}-nKh_{il}h_{lj}-nK\delta_{ij}+\dot{K}^{pp}h_{ij}.$ A direct computation gives $\displaystyle\ddot{K}^{pq,rs}h_{pqi}h_{rsj}=$ $\displaystyle\frac{\partial(K\tilde{h}^{pq})}{\partial h_{rs}}h_{pqi}h_{rsj}=K\tilde{h}^{pq}\tilde{h}^{rs}h_{pqi}h_{rsj}-K\tilde{h}^{pr}\tilde{h}^{qs}h_{pqi}h_{rsj}.$ (4.18) Hence by (4.16), at $(x_{0},t_{0})$, $\displaystyle\nabla_{j}\nabla_{i}\Theta$ (4.19) $\displaystyle=$ $\displaystyle\phi^{\alpha}f\dot{K}^{pq}h_{ijpq}+\phi^{\alpha}fKHh_{ij}-n\phi^{\alpha}fKh_{il}h_{lj}-n\phi^{\alpha}fK\delta_{ij}+\phi^{\alpha}f\dot{K}^{pp}h_{ij}+\phi^{\alpha}f\frac{\nabla_{i}K\nabla_{j}K}{K}$ $\displaystyle-\phi^{\alpha}fK\tilde{h}^{pp}\tilde{h}^{qq}h_{pqi}h_{pqj}+\alpha(\alpha-1)\phi^{\alpha-2}\phi^{\prime 2}fK\nabla_{i}\rho\nabla_{j}\rho+\alpha\phi^{\alpha-1}\phi^{\prime}fK\nabla_{j}\nabla_{i}\rho$
$\displaystyle+\phi^{\alpha}K\nabla_{j}\nabla_{i}f+\alpha\phi^{\alpha-1}\phi^{\prime}K\nabla_{i}\rho\nabla_{j}f+\alpha\phi^{\alpha-1}\phi^{\prime}K\nabla_{j}\rho\nabla_{i}f$ $\displaystyle+\alpha\phi^{\alpha-1}\phi^{\prime}f\nabla_{i}\rho\nabla_{j}K+\alpha\phi^{\alpha-1}\phi^{\prime}f\nabla_{j}\rho\nabla_{i}K+\phi^{\alpha}\nabla_{i}f\nabla_{j}K+\phi^{\alpha}\nabla_{j}f\nabla_{i}K.$ Direct computation gives $\displaystyle\nabla_{j}\nabla_{i}Q=$ $\displaystyle\frac{\nabla_{j}\nabla_{i}\upsilon}{\upsilon}-\frac{\nabla_{i}\upsilon\nabla_{j}\upsilon}{\upsilon^{2}}+A\nabla_{j}\nabla_{i}\rho$ (4.20) $\displaystyle=$ $\displaystyle-\tilde{h}^{11}\nabla_{j}\nabla_{i}h_{11}+2\tilde{h}^{11}\tilde{h}^{pp}\nabla_{i}h_{1p}\nabla_{j}h_{1p}-(\tilde{h}^{11})^{2}\nabla_{i}h_{11}\nabla_{j}h_{11}+A\nabla_{j}\nabla_{i}\rho.$ Recall (2.8) and (4.3). By (4.13) and (4.19), we obtain $\displaystyle\partial_{t}Q=$ $\displaystyle\frac{\partial_{t}\tilde{h}^{11}}{\tilde{h}^{11}}+A\partial_{t}\rho$ (4.21) $\displaystyle=$ $\displaystyle-\tilde{h}^{11}\left(\nabla_{1}\nabla^{1}\Theta+\Theta h_{11}^{2}-\phi^{\prime}\tilde{\eta}(t)h_{11}+(\tilde{\eta}(t)u-\Theta)\right)-A\frac{\Theta}{w}+A\tilde{\eta}(t)\phi$ $\displaystyle=$ $\displaystyle-\tilde{h}^{11}[\phi^{\alpha}f\dot{K}^{pq}h_{11pq}+\phi^{\alpha}fKHh_{11}-n\phi^{\alpha}fK(h_{11})^{2}-n\phi^{\alpha}fK+\phi^{\alpha}f\dot{K}^{pp}h_{11}$ $\displaystyle+\phi^{\alpha}f\frac{\nabla_{1}K\nabla_{1}K}{K}-\phi^{\alpha}fK\tilde{h}^{pp}\tilde{h}^{qq}(h_{pq1})^{2}+\alpha(\alpha-1)\phi^{\alpha-2}\phi^{\prime 2}(\nabla_{1}\rho)^{2}fK+\alpha\phi^{\alpha-1}\phi^{\prime}fK\nabla_{1}\nabla_{1}\rho$ $\displaystyle+\phi^{\alpha}K\nabla_{1}\nabla_{1}f+2\alpha\phi^{\alpha-1}\phi^{\prime}K\nabla_{1}\rho\nabla_{1}f+2\alpha\phi^{\alpha-1}\phi^{\prime}f\nabla_{1}\rho\nabla_{1}K+2\phi^{\alpha}\nabla_{1}f\nabla_{1}K]$ $\displaystyle-\Theta h_{11}+\phi^{\prime}\tilde{\eta}(t)-(\tilde{\eta}(t)u-\Theta)\tilde{h}^{11}-A\frac{\Theta}{w}+A\tilde{\eta}(t)\phi.$ Substituting (4.20) into (4.21), we get $\displaystyle\partial_{t}Q=$ $\displaystyle\phi^{\alpha}f\dot{K}^{ij}\nabla_{j}\nabla_{i}Q-2\phi^{\alpha}fK\tilde{h}^{11}\tilde{h}^{ij}\tilde{h}^{pp}\nabla_{i}h_{1p}\nabla_{j}h_{1p}+\phi^{\alpha}fK\tilde{h}^{ij}(\tilde{h}^{11})^{2}\nabla_{i}h_{11}\nabla_{j}h_{11}$ (4.22) $\displaystyle-A\phi^{\alpha}f\dot{K}^{ij}\nabla_{j}\nabla_{i}\rho-\phi^{\alpha}fKH+n\phi^{\alpha}fKh_{11}+n\phi^{\alpha}fK\tilde{h}^{11}-\phi^{\alpha}fK\tilde{h}^{pp}-\phi^{\alpha}f\frac{(\nabla_{1}K)^{2}}{K}\tilde{h}^{11}$ $\displaystyle+\phi^{\alpha}fK\tilde{h}^{11}\tilde{h}^{pp}\tilde{h}^{qq}(h_{pq1})^{2}-\alpha(\alpha-1)\phi^{\alpha-2}\phi^{\prime 2}(\nabla_{1}\rho)^{2}fK\tilde{h}^{11}-\alpha\phi^{\alpha-1}\phi^{\prime}fK\tilde{h}^{11}\nabla_{1}\nabla_{1}\rho$ $\displaystyle-\phi^{\alpha}K\tilde{h}^{11}\nabla_{1}\nabla_{1}f-2\alpha\phi^{\alpha-1}\phi^{\prime}K\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}f-2\alpha\phi^{\alpha-1}\phi^{\prime}f\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}K$ $\displaystyle-2\phi^{\alpha}\tilde{h}^{11}\nabla_{1}f\nabla_{1}K-\Theta h_{11}+\phi^{\prime}\tilde{\eta}(t)-(\tilde{\eta}(t)u-\Theta)\tilde{h}^{11}-A\frac{\Theta}{w}+A\tilde{\eta}(t)\phi.$ Dividing (4.22) by $\Theta$ on both sides, we obtain at $(x_{0},t_{0})$ $\displaystyle\frac{\partial_{t}Q}{\Theta}\leq$ $\displaystyle\tilde{h}^{ij}\nabla_{j}\nabla_{i}Q-A\tilde{h}^{ij}\nabla_{j}\nabla_{i}\rho+nh_{11}+n\tilde{h}^{11}-(\nabla_{1}\log K)^{2}\tilde{h}^{11}-\alpha(\alpha-1)(\frac{\phi^{\prime}}{\phi})^{2}(\nabla_{1}\rho)^{2}\tilde{h}^{11}$ (4.23) 
$\displaystyle-\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\nabla_{1}\nabla_{1}\rho-\frac{\nabla_{1}\nabla_{1}f}{f}\tilde{h}^{11}-2\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}\log f-2\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}\log K$ $\displaystyle-2\tilde{h}^{11}\nabla_{1}\log f\nabla_{1}\log K+\frac{\phi^{\prime}}{\Theta}\tilde{\eta}(t)+\tilde{h}^{11}+A\frac{\tilde{\eta}(t)\phi}{\Theta},$ where we discard some nonpositive terms. Recalling (2.4), we have at $(x_{0},t_{0})$ $\displaystyle|\nabla\rho|^{2}=\frac{\sum_{i=1}^{n}\langle X_{i},V\rangle^{2}}{\phi^{2}}=\frac{|V|^{2}}{\phi^{2}}-\frac{\langle\nu,V\rangle^{2}}{\phi^{2}}\leq 1-(\frac{u}{\phi})^{2}<1-c_{1},$ (4.24) where $c_{1}>0$ depends on the lower bound of $u$ and the upper bound of $\rho$. Now substituting (2.4) into (4.23) and using the Cauchy-Schwarz inequality, we obtain at $(x_{0},t_{0})$ $\displaystyle\frac{\partial_{t}Q}{\Theta}\leq$ $\displaystyle\tilde{h}^{ij}\nabla_{j}\nabla_{i}Q-A\tilde{h}^{ii}\left(\frac{\phi^{\prime}}{\phi}\left(1-(\nabla_{i}\rho)^{2}\right)-\frac{uh_{ii}}{\phi}\right)+nh_{11}+(n+1)\tilde{h}^{11}$ (4.25) $\displaystyle-\alpha(\alpha-1)(\frac{\phi^{\prime}}{\phi})^{2}(\nabla_{1}\rho)^{2}\tilde{h}^{11}-\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\left(\frac{\phi^{\prime}}{\phi}\left(1-(\nabla_{1}\rho)^{2}\right)-\frac{uh_{11}}{\phi}\right)-\frac{\nabla_{1}\nabla_{1}f}{f}\tilde{h}^{11}$ $\displaystyle-2\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}\log f+2\alpha^{2}(\frac{\phi^{\prime}}{\phi})^{2}(\nabla_{1}\rho)^{2}\tilde{h}^{11}+2(\nabla_{1}\log f)^{2}\tilde{h}^{11}+\frac{\phi^{\prime}}{\Theta}\tilde{\eta}(t)+A\frac{\tilde{\eta}(t)\phi}{\Theta}.$ Using the maximum principle, by (4.24) at $(x_{0},t_{0})$, we have $\displaystyle\frac{\partial_{t}Q}{\Theta}\leq$ $\displaystyle- Ac_{1}\tilde{h}^{11}\frac{\phi^{\prime}}{\phi}+A(\frac{nu}{\phi}+\frac{\tilde{\eta}(t)\phi}{\Theta})+nh_{11}+(n+1)\tilde{h}^{11}-\alpha(\alpha-1)(\frac{\phi^{\prime}}{\phi})^{2}(\nabla_{1}\rho)^{2}\tilde{h}^{11}$ (4.26) $\displaystyle-\alpha(\frac{\phi^{\prime}}{\phi})^{2}(1-(\nabla_{1}\rho)^{2})\tilde{h}^{11}+\alpha\frac{u\phi^{\prime}}{\phi^{2}}-\frac{\nabla_{1}\nabla_{1}f}{f}\tilde{h}^{11}-2\alpha\frac{\phi^{\prime}}{\phi}\tilde{h}^{11}\nabla_{1}\rho\nabla_{1}\log f$ $\displaystyle+2\alpha^{2}(\frac{\phi^{\prime}}{\phi})^{2}(\nabla_{1}\rho)^{2}\tilde{h}^{11}+2(\nabla_{1}\log f)^{2}\tilde{h}^{11}+\frac{\phi^{\prime}}{\Theta}\tilde{\eta}(t).$ Write $X\in\mathbb{H}^{n+1}$ as $X=(\rho,\theta)\in\mathbb{R}^{+}\times\mathbb{S}^{n}$. Then we extend $f$ to $\left(\mathbb{H}^{n+1},g_{\mathbb{H}^{n+1}}\right)$ as $\displaystyle f(X)=f(\theta).$ (4.27) By the Reilly formula, we have $\displaystyle D_{j}D_{i}f=\nabla_{j}\nabla_{i}f+h_{ij}D_{\nu}f,$ (4.28) where $D$ is the standard Levi-Civita connection with respect to $g_{\mathbb{H}^{n+1}}$.
Since $f(X)=f(\theta)$ depends only on $\theta$, on the region $\min\rho\leq\rho\leq\max\rho$ both $|Df|$ and $|D^{2}f|$ are controlled by $\|f\|_{C^{2}(\mathbb{S}^{n})}$ and the bounds of $\rho$. Hence by (4.28), at $(x_{0},t_{0})$, $\nabla_{1}\nabla_{1}f$ can be further estimated as $\displaystyle|\nabla_{1}\nabla_{1}f(x_{0},t_{0})|\leq C\left(\max\rho,\min\rho,\|f\|_{C^{2}(\mathbb{S}^{n})}\right)(1+h_{11}(x_{0},t_{0})).$ (4.29) By (4.7) and (4.29), we obtain at $(x_{0},t_{0})$ $\displaystyle 0\leq\frac{\partial_{t}Q}{\Theta}\leq-\left(\frac{Ac_{1}}{2}-c_{2}\right)\tilde{h}^{11}-A\left(\frac{c_{1}\tilde{h}^{11}}{2}-c_{3}\right)+c_{4}+c_{5}\frac{1}{\tilde{h}^{11}}$ (4.30) for some positive constants $c_{1}$, $c_{2}$, $c_{3}$, $c_{4}$, $c_{5}$ depending on the uniform upper and lower bounds of $\rho$, $\tilde{\eta}$ and $u$ from Corollary 3.1, of $K$ from Lemmas 4.1-4.2, and on $\alpha$, $n$, $\inf f$, $\|f\|_{C^{2}(\mathbb{S}^{n})}$. We deduce from (4.30) that if we choose $A=\frac{2c_{2}}{c_{1}}$, then $\lambda(x_{0},t_{0})$ cannot be too large; that is, $\lambda(x_{0},t_{0})$ has a uniform upper bound $C(c_{i},i=1,\cdots,5)$ independent of the time $T_{0}$. Hence we have a uniform upper bound on the principal radii of $M_{t}$, which means that the principal curvatures are bounded from below by some positive constant $c_{6}$. Meanwhile, by Lemma 4.2, one gets $\displaystyle C\geq K\geq c_{6}^{n-1}\kappa_{\max}$ for the constant $C$ from Lemma 4.2. Hence the principal curvatures are bounded from above. This completes the proof of Lemma 4.3. ∎ Now we have obtained the a priori estimates of the flow (1.3). By Lemma 3.1, Lemma 3.2 and Lemma 4.1, these flows have short-time existence. Using the $C^{2}$ estimate given in Lemma 4.3, due to [And04, Theorem 6], we obtain the $C^{2,\lambda}$ estimate of the scalar equation (2.17). Then the standard regularity theory implies the estimates for higher order derivatives. Hence, we obtain the long-time existence and regularity for these flows. ###### Theorem 4.1. Let $M_{0}$ be as in Theorem 1.1. Assume that $f$ is a smooth positive function on $\mathbb{S}^{n}$. If $\alpha>n+1$, or $\alpha=n+1$ and $f$ satisfies $f<1$, then the smooth uniformly convex solution to the flow (1.3) exists for all time $t\in[0,+\infty)$ and there is a constant $C_{m,\lambda}>0$ depending on $M_{0}$, $\alpha$, $n$, $m$, $\lambda$, and $f$ such that $\displaystyle\|\rho\|_{C^{m,\lambda}_{(\theta,t)}(\mathbb{S}^{n}\times[0,+\infty))}\leq C_{m,\lambda}.$ ###### Remark 4.1. If the Gauss curvature $K$ in the flow (1.3) is replaced by the $k$-th mean curvature $\sigma_{k}$ of the hypersurface, a similar argument yields the same long-time existence result as in Theorem 4.1. By Theorem 4.1, there is a subsequence of times $\\{t_{i}\\}$ such that $\\{M_{t_{i}}\\}$ converges to a smooth, positive, uniformly convex hypersurface $M_{\infty}$ in the $C^{\infty}$ topology. Next we shall see that $M_{\infty}$ is a solution to (1.2). ## 5\. Proof of Theorem 1.1 In this section, we consider the hyperbolic space as the hyperboloid in Lorentz space $\mathbb{R}^{n+1,1}$, where $\displaystyle\mathbb{H}^{n+1}=\\{(x_{1},\cdots,x_{n+1},x_{n+2})\in\mathbb{R}^{n+1,1}|\sum_{i=1}^{n+1}x_{i}^{2}-x_{n+2}^{2}=-1,\ x_{n+2}>0\\}.$ Let $p=(0,\cdots,0,1)$ and $L_{p}$ be the tangent plane of $\mathbb{H}^{n+1}$ at $p$.
Denote the projection from $\mathbb{H}^{n+1}$ to $L_{p}$ as $\displaystyle\pi_{p}:\mathbb{H}^{n+1}$ $\displaystyle\rightarrow L_{p}$ $\displaystyle z=(x_{1},\cdots,x_{n+1},x_{n+2})$ $\displaystyle\mapsto\frac{z}{-\langle z,p\rangle}=(\frac{x_{1}}{x_{n+2}},\cdots,\frac{x_{n+1}}{x_{n+2}},1).$ Since $\sum_{i=1}^{n+1}\left(\frac{x_{i}}{x_{n+2}}\right)^{2}<1$ on $\pi_{p}(\mathbb{H}^{n+1})$, $\pi_{p}(\mathbb{H}^{n+1})$ is contained in the unit ball $B^{n+1}_{p}(1)$ centered at $p$. Recall that $\rho(q)$ is the geodesic distance between $q$ and $p$ in $\mathbb{H}^{n+1}$. If we regard $r$ as the Euclidean distance from $\pi_{p}(q)$ to $p$ in $B^{n+1}_{p}(1)$, we have the following relation: $\displaystyle r(\theta)=\tanh\rho(\theta).$ (5.1) Direct computation gives $\displaystyle\frac{1}{(\cosh\rho)^{2}}=1-r^{2},$ (5.2) $\displaystyle\sinh\rho=\frac{r}{\sqrt{1-r^{2}}},$ (5.3) $\displaystyle|\overline{\nabla}\rho|^{2}=\frac{|\overline{\nabla}r|^{2}}{(1-r^{2})^{2}},$ (5.4) $\displaystyle w=\sqrt{1+\frac{|\overline{\nabla}\rho|^{2}}{\phi^{2}}}=\frac{r}{\hat{u}}\sqrt{\frac{1-\hat{u}^{2}}{1-r^{2}}},$ (5.5) $\displaystyle u=\frac{\hat{u}}{\sqrt{1-\hat{u}^{2}}}.$ (5.6) Here we can see $u>0$ as long as $1>\hat{u}>0$, which means that $\pi_{p}(M_{t})$ remains star-shaped as long as $M_{t}$ does. From (5.1), we have the scalar equation of the radial graph of $\pi_{p}(M_{t})$ under the flow (2.5) ($\tilde{\eta}\equiv 1$): $\displaystyle\partial_{t}r(\theta,t)=\partial_{\rho}(\tanh\rho)\partial_{t}\rho=\frac{-w\phi(\rho)^{\alpha}f(\theta)K}{(\cosh\rho)^{2}}+\frac{\phi}{(\cosh\rho)^{2}},$ (5.7) where we denote by $(r(\theta),\theta)$ the radial graph of $\pi_{p}(M_{t})$. Similar to (2.13)-(2.15), we have on $\hat{M}_{t}=\pi_{p}(M_{t})$: $\displaystyle\hat{\nu}=\frac{r}{\sqrt{r^{2}+|\overline{\nabla}r|^{2}}}(\partial_{r}-\frac{\overline{\nabla}r}{r^{2}}),$ (5.8) $\displaystyle\hat{u}=\frac{r^{2}}{\sqrt{r^{2}+|\overline{\nabla}r|^{2}}},$ (5.9) $\displaystyle\hat{g}_{ij}=r_{i}r_{j}+r^{2}\delta_{ij}$ (5.10) and $\displaystyle\hat{h}_{ij}=\frac{-rr_{ij}+2r_{i}r_{j}+r^{2}\delta_{ij}}{\sqrt{r^{2}+|\overline{\nabla}r|^{2}}}.$ (5.11) Therefore, $\displaystyle\hat{K}=\frac{\det{\hat{h}_{ij}}}{\det{\hat{g}_{ij}}}=\frac{\det(-rr_{ij}+2r_{i}r_{j}+r^{2}\delta_{ij})}{(r^{2}+|\overline{\nabla}r|^{2})^{\frac{n+2}{2}}r^{2n-2}}.$ (5.12) Recall (2.14) and (2.15). Substituting (5.1) in (5.12), we have (see [CH21]) $\displaystyle K=\hat{K}\left(\frac{(r^{2}+|\overline{\nabla}r|^{2})}{r^{2}+\frac{|\overline{\nabla}r|^{2}}{1-r^{2}}}\right)^{\frac{n+2}{2}}=\hat{K}\left(\frac{1-r^{2}}{1-\hat{u}^{2}}\right)^{\frac{n+2}{2}}.$ (5.13) Now we calculate the scalar parabolic PDE of the support function $\hat{u}$ of $\pi_{p}(M_{t})$. Substituting (5.2), (5.3), (5.5) and (5.13) into (5.7), we have $\displaystyle\partial_{t}\hat{u}(x,t)=\frac{\hat{u}}{r}\partial_{t}r(\theta,t)$ $\displaystyle=(-\phi(\rho)^{\alpha}f(\theta)Kw+\phi)(1-r^{2})\frac{\hat{u}}{r}$ (5.14) $\displaystyle=-r^{\alpha}f(\theta)\hat{K}(1-r^{2})^{\frac{n+3-\alpha}{2}}({1-\hat{u}^{2}})^{-\frac{n+1}{2}}+\hat{u}\sqrt{1-r^{2}}.$ Set $\psi(r)=r^{\alpha}(1-r^{2})^{\frac{n+2-\alpha}{2}}$ and $\varphi(\hat{u})=\hat{u}^{-1}({1-\hat{u}^{2}})^{-\frac{n+1}{2}}$.
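As a quick numerical sanity check of the elementary identities (5.1)-(5.6) (an illustration added here, not part of the original text):

```python
import numpy as np

rho = np.linspace(0.1, 5.0, 50)
r = np.tanh(rho)                                   # (5.1)

# (5.2): 1 / cosh^2(rho) = 1 - r^2
assert np.allclose(1.0 / np.cosh(rho) ** 2, 1.0 - r ** 2)

# (5.3): sinh(rho) = r / sqrt(1 - r^2)
assert np.allclose(np.sinh(rho), r / np.sqrt(1.0 - r ** 2))

# (5.5) on a geodesic sphere (u_hat = r, gradient terms vanish): w = 1.
u_hat = r
w = (r / u_hat) * np.sqrt((1.0 - u_hat ** 2) / (1.0 - r ** 2))
assert np.allclose(w, 1.0)

print("projection identities verified")
```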
With this notation, (5.14) becomes $\displaystyle\partial_{t}\hat{u}=-\psi(r)\sqrt{1-r^{2}}f(\theta)\varphi(\hat{u})\hat{u}\hat{K}+\hat{u}\sqrt{1-r^{2}}.$ (5.15) Let $\Psi=\displaystyle\int_{a}^{r}\psi^{-1}(s)s^{n}\mathrm{d}s$, $\Omega=\displaystyle\int_{a}^{\hat{u}}\varphi(s)\mathrm{d}s$ and $Q(t)=\displaystyle\int_{\mathbb{S}^{n}}\Psi f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}-\displaystyle\int_{\mathbb{S}^{n}}\Omega\mathrm{d}\sigma_{\mathbb{S}^{n}},$ (5.16) where $0<a<1$ is some constant to be chosen for convenience; we take $a=\frac{1}{2}\min\limits_{\hat{M}_{0}}r=\frac{1}{2}\min\limits_{\hat{M}_{0}}\hat{u}$. We have the following properties of $Q$ along (5.14) (cf. [LL20, LSW20a]). ###### Lemma 5.1. Along (5.14), $Q(t)$ is non-decreasing and the equality holds if and only if $\pi_{p}(M_{t})$ satisfies the following equation: $r^{\alpha}(1-r^{2})^{\frac{n+2-\alpha}{2}}f\hat{K}=\hat{u}(1-\hat{u}^{2})^{\frac{n+1}{2}}.$ (5.17) ###### Proof. $\displaystyle Q^{\prime}(t)=$ $\displaystyle\partial_{t}\int_{\mathbb{S}^{n}}\Psi f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}-\partial_{t}\int_{\mathbb{S}^{n}}\Omega\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (5.18) $\displaystyle=$ $\displaystyle\int_{\mathbb{S}^{n}}\psi^{-1}(r)r^{n}\partial_{t}rf^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}-\int_{\mathbb{S}^{n}}\varphi(\hat{u})\partial_{t}\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}.$ Since $\frac{\partial_{t}r(\theta,t)}{r}=\frac{\partial_{t}\hat{u}(x,t)}{\hat{u}}$ and $r^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}=\frac{\hat{u}}{\hat{K}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$, (5.18) becomes $\displaystyle Q^{\prime}(t)=$ $\displaystyle\int_{\mathbb{S}^{n}}\left(\psi^{-1}(r)f^{-1}(\theta)\hat{K}^{-1}-\varphi(\hat{u})\right)\partial_{t}\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (5.19) $\displaystyle=$ $\displaystyle\int_{\mathbb{S}^{n}}\left(\psi^{-1}(r)f^{-1}(\theta)\hat{K}^{-1}-\varphi(\hat{u})\right)^{2}\psi(r)\sqrt{1-r^{2}}f(\theta)\hat{u}\hat{K}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle 0.$ Here the equality holds if and only if $\psi(r)\varphi(\hat{u})f(\theta)\hat{K}\equiv 1$. ∎ By Lemma 3.1, we obtain the uniform upper and lower bounds of $\rho$ in the hyperbolic space. Thus from (5.1), there exist $0<c_{1}<c_{2}<1$ independent of time, such that $c_{1}<r<c_{2}$ and $c_{1}<\hat{u}<c_{2}$. From (5.16), $Q(t)$ is uniformly bounded independently of time, i.e., there exists a constant $C>0$, such that $|Q(t)|\leq C$. Since $\displaystyle Q(t)-Q(0)=\int_{0}^{t}\int_{\mathbb{S}^{n}}\left(\psi^{-1}(r)f^{-1}(\theta)\hat{K}^{-1}-\varphi(\hat{u})\right)^{2}\psi(r)\sqrt{1-r^{2}}f(\theta)\hat{u}\hat{K}\mathrm{d}\sigma_{\mathbb{S}^{n}}\mathrm{d}t,$ (5.20) we obtain $\displaystyle\displaystyle\int_{0}^{\infty}\displaystyle\int_{\mathbb{S}^{n}}\left(\psi^{-1}(r)f^{-1}(\theta)\hat{K}^{-1}-\varphi(\hat{u})\right)^{2}\psi(r)\sqrt{1-r^{2}}f(\theta)\hat{u}\hat{K}\mathrm{d}\sigma_{\mathbb{S}^{n}}\mathrm{d}t<\infty.$ (5.21) By Theorem 4.1, there exists a subsequence $\\{t_{i}\\}$, such that $M_{t_{i}}$ converge smoothly to $M_{\infty}$ and $\displaystyle\displaystyle\int_{\mathbb{S}^{n}}\left(\psi^{-1}(r)f^{-1}(\theta)\hat{K}^{-1}-\varphi(\hat{u})\right)^{2}\mathrm{d}\sigma_{\mathbb{S}^{n}}\rightarrow 0$ as $t_{i}\rightarrow\infty$, since $\hat{K}$ is uniformly bounded from below by (5.13) and Lemma 4.1. Thus $\hat{M}_{\infty}$ satisfies $\displaystyle\psi(r)f(\theta)\hat{K}=\varphi(\hat{u})^{-1}.$ (5.22) Combining with (5.1), (5.2), (5.3) etc., we obtain that $M_{\infty}$ satisfies (1.2). Now we show the uniqueness of the solution of (1.2) when $\alpha\geq n+1$.
Suppose there exist two solutions $\rho_{1}$ and $\rho_{2}$. Let $\gamma(\rho)=\log(1-\frac{2}{e^{\rho}+1})$ and $G=\gamma_{1}-\gamma_{2}$; note that $\gamma^{\prime}(\rho)=\frac{1}{\sinh\rho}$, so $\overline{\nabla}\gamma=\frac{\overline{\nabla}\rho}{\phi}$. Assume $G$ attains its maximum at $\theta_{0}$ with $G(\theta_{0})>0$, which implies $\rho_{1}(\theta_{0})>\rho_{2}(\theta_{0})$. At $\theta_{0}$, we have $\displaystyle\overline{\nabla}\gamma_{1}=\overline{\nabla}\gamma_{2}$ (5.23) and $\overline{\nabla}^{2}_{ij}G\leq 0$, i.e., $\displaystyle\overline{\nabla}^{2}_{ij}\gamma_{1}\leq\overline{\nabla}^{2}_{ij}\gamma_{2}.$ (5.24) Substituting $\gamma$ into (2.14) and (2.16), we have $\displaystyle g_{ij}=\phi^{2}\left(\delta_{ij}+\gamma_{i}\gamma_{j}\right),\quad g^{kj}=\frac{1}{\phi^{2}}\left(\delta^{kj}-\frac{\gamma_{k}\gamma_{j}}{1+|\overline{\nabla}\gamma|^{2}}\right)$ (5.25) and $\displaystyle h_{ik}=\frac{\phi}{\sqrt{1+|\overline{\nabla}\gamma|^{2}}}\left(-\gamma_{ik}+\phi^{\prime}\gamma_{i}\gamma_{k}+\phi^{\prime}\delta_{ik}\right).$ (5.26) Plugging (5.25) and (5.26) in (1.2), we have $\displaystyle\sigma_{n}(h_{i}{}^{j})=\frac{f}{\phi(\rho)^{\alpha-1}\sqrt{1+|\overline{\nabla}\gamma|^{2}}},$ (5.27) where $\displaystyle h_{i}{}^{j}=\frac{1}{\phi\sqrt{1+|\overline{\nabla}\gamma|^{2}}}\left(-\gamma_{ik}+\phi^{\prime}\gamma_{i}\gamma_{k}+\phi^{\prime}\delta_{ik}\right)\left(\delta^{kj}-\frac{\gamma_{k}\gamma_{j}}{1+|\overline{\nabla}\gamma|^{2}}\right).$ (5.28) Then $\displaystyle\sigma_{n}\left(\frac{1}{\sqrt{1+|\overline{\nabla}\gamma|^{2}}}\left(-\gamma_{ik}+\phi^{\prime}\gamma_{i}\gamma_{k}+\phi^{\prime}\delta_{ik}\right)\left(\delta^{kj}-\frac{\gamma_{k}\gamma_{j}}{1+|\overline{\nabla}\gamma|^{2}}\right)\right)=\frac{f}{\phi(\rho)^{\alpha-n-1}\sqrt{1+|\overline{\nabla}\gamma|^{2}}}.$ (5.29) Using (5.23), we have at $\theta_{0}$ $\displaystyle\phi(\rho_{1})^{\alpha-n-1}\sigma_{n}\left(-(\gamma_{1}){}_{ik}+\phi^{\prime}(\rho_{1})(\gamma_{1}){}_{i}(\gamma_{1}){}_{k}+\phi^{\prime}(\rho_{1})\delta_{ik}\right)$ $\displaystyle=$ $\displaystyle\phi(\rho_{2})^{\alpha-n-1}\sigma_{n}\left(-(\gamma_{2}){}_{ik}+\phi^{\prime}(\rho_{2})(\gamma_{2}){}_{i}(\gamma_{2}){}_{k}+\phi^{\prime}(\rho_{2})\delta_{ik}\right).$ (5.30) Since $\rho_{1}(\theta_{0})>\rho_{2}(\theta_{0})$ and both solutions are uniformly convex, by (5.23) and (5.24) $\displaystyle-(\gamma_{1}){}_{ik}+\phi^{\prime}(\rho_{1})(\gamma_{1}){}_{i}(\gamma_{1}){}_{k}+\phi^{\prime}(\rho_{1})\delta_{ik}>-(\gamma_{2}){}_{ik}+\phi^{\prime}(\rho_{2})(\gamma_{2}){}_{i}(\gamma_{2}){}_{k}+\phi^{\prime}(\rho_{2})\delta_{ik}>0.$ (5.31) When $\alpha\geq n+1$, we have $\displaystyle\phi(\rho_{1})^{\alpha-n-1}\sigma_{n}\left(-(\gamma_{1}){}_{ik}+\phi^{\prime}(\rho_{1})(\gamma_{1}){}_{i}(\gamma_{1}){}_{k}+\phi^{\prime}(\rho_{1})\delta_{ik}\right)$ $\displaystyle>$ $\displaystyle\phi(\rho_{2})^{\alpha-n-1}\sigma_{n}\left(-(\gamma_{2}){}_{ik}+\phi^{\prime}(\rho_{2})(\gamma_{2}){}_{i}(\gamma_{2}){}_{k}+\phi^{\prime}(\rho_{2})\delta_{ik}\right),$ which contradicts (5.30). So $G\leq 0$; exchanging the roles of $\rho_{1}$ and $\rho_{2}$ gives $G\geq 0$, which implies $\gamma_{1}\equiv\gamma_{2}$ and $\rho_{1}\equiv\rho_{2}$. Now we complete the proof of Theorem 1.1. ## 6\. Proof of Theorem 1.2 In this section, we consider the flow (1.3) for $\alpha=n+1$ and the corresponding even Alexandrov problem in $\mathbb{H}^{n+1}$. Throughout this section, if not specified, we regard $c_{i}$, $C_{i}$ for $i\in\mathbb{N}$ as positive constants.
By (2.17), when $\alpha=n+1$ the scalar equation of the radial function $\rho$ becomes $\left\\{\begin{aligned} \frac{\partial}{\partial t}\rho(\theta,t)=&-\left(\phi(\rho)\right)^{n+1}f(\theta)wK+\phi(\theta,t),\quad\text{for }(\theta,t)\in\mathbb{S}^{n}\times[0,+\infty),\\\ \rho(\cdot,0)=&\rho_{0}(\cdot).\end{aligned}\right.$ (6.1) By the projection $\pi_{p}$ in Section 5 and the relation between $M_{t}$ and $\pi_{p}(M_{t})$, (5.1)-(5.6) and (5.13), we obtain the scalar parabolic PDE of the radial function $r$ of $\pi_{p}(M_{t})$ along the flow (1.3): $\displaystyle\partial_{t}r=-r^{n+2}(1-r^{2})\hat{u}^{-1}(1-\hat{u}^{2})^{-\frac{n+1}{2}}f\hat{K}+r\sqrt{1-r^{2}}.$ (6.2) Then we have the following results. ###### Lemma 6.1. Let $\rho$ be a smooth, positive, uniformly convex and origin-symmetric solution to (6.1) on $\mathbb{S}^{n}\times[0,T)$. If $f$ is a smooth positive even function on $\mathbb{S}^{n}$ satisfying $\int_{\mathbb{S}^{n}}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}>|S^{n}|$, then there exists a positive constant $C$ depending on $n$, $\max f$, $\min f$ and the initial hypersurface, such that $\displaystyle\frac{1}{C}\leq\rho\leq C,\quad\forall t\in[0,T).$ (6.3) ###### Proof. The upper bound of $\rho$ is obtained directly from the proof of Lemma 3.1, i.e., there exists a positive constant $c_{1}$, such that $\rho\leq c_{1}$. Combining with (5.3), we obtain that the radial function $r$ of $\pi_{p}(M_{t})$ stays away from $1$, i.e., there exists a positive constant $c_{2}$, such that $\hat{u}\leq r\leq c_{2}<1$. Under the flow (6.2), we have the following function, which is monotone non-decreasing by Lemma 5.1: $\displaystyle Q(t)=\displaystyle\int_{\mathbb{S}^{n}}\int_{a}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}-\displaystyle\int_{\mathbb{S}^{n}}\int_{a}^{\hat{u}}s^{-1}(1-s^{2})^{-\frac{n+1}{2}}\mathrm{d}s\mathrm{d}\sigma_{\mathbb{S}^{n}},$ (6.4) where we can choose $a=c_{2}$ without loss of generality. Then there exists a positive constant $C_{1}$ depending on the initial hypersurface of (1.3), such that $\displaystyle-C_{1}\leq Q(t)\leq$ $\displaystyle- f_{\max}^{-1}\int_{\mathbb{S}^{n}}\int_{r}^{c_{2}}s^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}+(1-c_{2}^{2})^{-\frac{n+1}{2}}\int_{\mathbb{S}^{n}}\int_{\hat{u}}^{c_{2}}s^{-1}\mathrm{d}s\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (6.5) $\displaystyle=$ $\displaystyle-f_{\max}^{-1}\int_{\mathbb{S}^{n}}(\log c_{2}-\log r)\mathrm{d}\theta_{\mathbb{S}^{n}}+(1-c_{2}^{2})^{-\frac{n+1}{2}}\int_{\mathbb{S}^{n}}(\log c_{2}-\log\hat{u})\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\leq$ $\displaystyle C_{2}+f_{\max}^{-1}\int_{\mathbb{S}^{n}}\log r\ \mathrm{d}\theta_{\mathbb{S}^{n}}-(1-c_{2}^{2})^{-\frac{n+1}{2}}\int_{\mathbb{S}^{n}}\log\hat{u}\ \mathrm{d}\sigma_{\mathbb{S}^{n}}.$ At any fixed time $t$, assume $r_{\min}(t)$ is attained at $\theta_{0}=(1,\vec{0})$, where $\vec{0}$ denotes the $n$-dimensional zero vector. We parametrize any point $\theta\in\mathbb{S}^{n}$ as $\displaystyle\theta=(\cos\theta_{1},\sin\theta_{1}\vec{x}),$ (6.6) where $0\leq\theta_{1}\leq\pi$, and $\vec{x}=(x_{2},\cdots,x_{n+1})\in\mathbb{S}^{n-1}$ is an $n$-dimensional unit vector. We have $\langle\theta,\theta_{0}\rangle=\cos\theta_{1}$. Then $r(\theta,t)\leq\frac{r_{\min}(t)}{|\cos\theta_{1}|}$ because the flow hypersurface $M_{t}$ is strictly convex and origin-symmetric. Assume $\hat{u}_{\max}(t)$ is attained at $v_{0}\in\mathbb{S}^{n}$.
We have $\hat{u}(v,t)\geq\hat{u}_{\max}(t)|\langle v,v_{0}\rangle|$ (see [Sch13, p. 44]). Take the direction of $v_{0}$ as the $x$-axis and regard $v_{1}$ as the angle with the $x$-axis. We parametrize any point $v$ in $\mathbb{S}^{n}$ as $\displaystyle v=\left(\cos v_{1},\sin v_{1}\vec{y}\right),$ (6.7) where $0\leq v_{1}\leq\pi$, and $\vec{y}=(y_{2},\cdots,y_{n+1})\in\mathbb{S}^{n-1}$ is an $n$-dimensional unit vector. We have $\langle v,v_{0}\rangle=\cos v_{1}$. By a direct calculation, the area measure $\mathrm{d}\sigma_{\mathbb{S}^{n}}$ becomes $\displaystyle\mathrm{d}\sigma_{\mathbb{S}^{n}}=(\sin v_{1})^{n-1}\mathrm{d}v_{1}\mathrm{d}\sigma_{\mathbb{S}^{n-1}}.$ (6.8) Similarly, $\displaystyle\mathrm{d}\theta_{\mathbb{S}^{n}}=(\sin\theta_{1})^{n-1}\mathrm{d}\theta_{1}\mathrm{d}\theta_{\mathbb{S}^{n-1}}.$ (6.9) Then $\displaystyle\int\limits_{\mathbb{S}^{n}}\log\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}\geq$ $\displaystyle\int\limits_{\mathbb{S}^{n}}\log(\hat{u}_{\max}|\cos v_{1}|)\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (6.10) $\displaystyle=$ $\displaystyle 2\int_{0}^{\frac{\pi}{2}}\int_{\mathbb{S}^{n-1}}\left(\log\hat{u}_{\max}+\log\cos v_{1}\right)(\sin v_{1})^{n-1}\mathrm{d}v_{1}\mathrm{d}\sigma_{\mathbb{S}^{n-1}}$ $\displaystyle\geq$ $\displaystyle|S^{n}|\log\hat{u}_{\max}+2|S^{n-1}|\int_{0}^{\frac{\pi}{2}}\log\cos v_{1}\mathrm{d}v_{1}\geq|S^{n}|\log\hat{u}_{\max}-C_{3}.$ The second term in the last inequality is convergent, since $\log(\cos v_{1})\geq\log\left(\frac{\pi}{4}-\frac{v_{1}}{2}\right)$ for $v_{1}\in[0,\frac{\pi}{2}]$. Besides, we have $\displaystyle\int\limits_{\mathbb{S}^{n}}\log r\mathrm{d}\theta_{\mathbb{S}^{n}}\leq$ $\displaystyle\int\limits_{\mathbb{S}^{n}}\log\frac{r_{\min}}{|\cos\theta_{1}|}\mathrm{d}\theta_{\mathbb{S}^{n}}$ (6.11) $\displaystyle=$ $\displaystyle 2\int_{0}^{\frac{\pi}{2}}\int_{\mathbb{S}^{n-1}}\left(\log r_{\min}-\log\cos\theta_{1}\right)(\sin\theta_{1})^{n-1}\mathrm{d}\theta_{1}\mathrm{d}\theta_{\mathbb{S}^{n-1}}$ $\displaystyle\leq$ $\displaystyle|S^{n}|\log r_{\min}-2|S^{n-1}|\int_{0}^{\frac{\pi}{2}}\log\cos\theta_{1}\mathrm{d}\theta_{1}\leq|S^{n}|\log r_{\min}+C_{3},$ where the second term in the last inequality is convergent for the same reason as in (6.10). Plugging (6.10) and (6.11) into (6.5), we obtain $\displaystyle-C_{1}\leq C_{4}+f_{\max}^{-1}|S^{n}|\log r_{\min}-(1-c_{2}^{2})^{-\frac{n+1}{2}}|S^{n}|\log\hat{u}_{\max}.$ (6.12) Hence, there exist positive constants $c_{3}$ and $c_{4}$, such that $\displaystyle r_{\min}\geq c_{3}r_{\max}^{c_{4}},$ (6.13) where $c_{4}=f_{\max}(1-c_{2}^{2})^{-\frac{n+1}{2}}$. Dividing both sides of (6.2) by $r\sqrt{1-r^{2}}f$, we have $\displaystyle\frac{\partial_{t}r}{r\sqrt{1-r^{2}}f}=-r^{n+1}\sqrt{1-r^{2}}\hat{u}^{-1}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\hat{K}+f^{-1}(\theta).$ (6.14) Integrating (6.14) over $\mathbb{S}^{n}$, we get $\displaystyle\int_{\mathbb{S}^{n}}\frac{\partial_{t}r}{r\sqrt{1-r^{2}}f}\mathrm{d}\theta_{\mathbb{S}^{n}}=$ $\displaystyle-\int_{\mathbb{S}^{n}}r^{n+1}\sqrt{1-r^{2}}\hat{u}^{-1}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\hat{K}\mathrm{d}\theta_{\mathbb{S}^{n}}+\int_{\mathbb{S}^{n}}f^{-1}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}$ (6.15) $\displaystyle=$ $\displaystyle-\int_{\mathbb{S}^{n}}\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}+\int_{\mathbb{S}^{n}}f^{-1}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}},$ where we use $r^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}=\frac{\hat{u}}{\hat{K}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ in the last equality.
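The two convergence claims above rest on the integrability of $\log\cos$ near $\frac{\pi}{2}$; in fact $\int_{0}^{\pi/2}\log\cos v\,\mathrm{d}v=-\frac{\pi}{2}\log 2$. A quick numerical check (an illustration added here, not part of the original proof):

```python
import numpy as np
from scipy.integrate import quad

# The improper integral converges despite the logarithmic
# singularity at v = pi/2; its value is -(pi/2) * log(2).
val, _ = quad(lambda v: np.log(np.cos(v)), 0.0, np.pi / 2)
print(val, -np.pi / 2 * np.log(2.0))   # both approx -1.0888

# Comparison used in the text: cos(v) >= pi/4 - v/2 on [0, pi/2].
v = np.linspace(0.0, np.pi / 2, 10001)
assert np.all(np.cos(v) >= np.pi / 4 - v / 2 - 1e-12)
```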
Note that the left-hand side of (6.15) is $\displaystyle\int_{\mathbb{S}^{n}}\frac{\partial_{t}r}{r\sqrt{1-r^{2}}f}\mathrm{d}\theta_{\mathbb{S}^{n}}=\partial_{t}\displaystyle\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}.$ (6.16) Besides, we have $\displaystyle\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ (6.17) $\displaystyle\leq$ $\displaystyle- f_{\max}^{-1}\int_{\mathbb{S}^{n}}\int_{r}^{c_{2}}s^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle=$ $\displaystyle-|S^{n}|f_{\max}^{-1}\log c_{2}+f_{\max}^{-1}\int_{\mathbb{S}^{n}}\log r\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle\leq$ $\displaystyle C_{5}+|S^{n}|f_{\max}^{-1}\log r_{\max}$ and $\displaystyle\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ (6.18) $\displaystyle\geq$ $\displaystyle-\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}f_{\min}^{-1}\int_{\mathbb{S}^{n}}\int_{r}^{c_{2}}s^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle=$ $\displaystyle-\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}|S^{n}|f_{\min}^{-1}\log c_{2}+\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}f_{\min}^{-1}\int_{\mathbb{S}^{n}}\log r\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}|S^{n}|f_{\min}^{-1}\log r_{\min}.$ Since $r(\theta(v),t)=\sqrt{\hat{u}(v)^{2}+|\nabla\hat{u}(v)|^{2}}\geq\hat{u}(v,t)$ on $\mathbb{S}^{n}$, we have $\displaystyle\int_{\mathbb{S}^{n}}\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}\leq\int_{\mathbb{S}^{n}}(1-\hat{u}^{2})^{-\frac{n}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}\leq(1-\hat{u}_{\max}^{2})^{-\frac{n}{2}}|S^{n}|.$ (6.19) When $\hat{u}_{\max}\rightarrow 0$, $(1-\hat{u}_{\max}^{2})^{-\frac{n}{2}}|S^{n}|\rightarrow|S^{n}|$. By (6.19) and the condition on $f$, there exists a positive constant $c_{5}$, such that when $\hat{u}_{\max}\leq c_{5}$, $\int_{\mathbb{S}^{n}}\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}\leq\frac{1}{2}(1+\int_{\mathbb{S}^{n}}f^{-1}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}})<\int_{\mathbb{S}^{n}}f^{-1}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}$. This together with (6.15) and (6.16) implies $\displaystyle\partial_{t}\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}=-\int_{\mathbb{S}^{n}}\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}+\int_{\mathbb{S}^{n}}f^{-1}(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}>0.$ (6.20) When $\hat{u}_{\max}=c_{5}$, inserting (6.13) into (6.18), we have $\displaystyle\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}\geq$ $\displaystyle\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}|S^{n}|f_{\min}^{-1}\log c_{3}+c_{4}\left(1-c_{2}^{2}\right)^{-\frac{1}{2}}|S^{n}|f_{\min}^{-1}\log r_{\max}$ (6.21) $\displaystyle\geq$ $\displaystyle- C_{6}+(1-c_{2}^{2})^{-\frac{n+2}{2}}|S^{n}|f_{\max}f_{\min}^{-1}\log c_{5},$ where we use $\hat{u}_{\max}=r_{\max}$ on $\hat{M}_{t}$.
If the maximal radial function of the initial hypersurface satisfies $\hat{u}_{\max}(0)>c_{5}$, then once $\hat{u}_{\max}(t)\leq c_{5}$, by (6.17), (6.20) and (6.21) we obtain $\displaystyle\log r_{\max}\geq- C_{7}+(1-c_{2}^{2})^{-\frac{n+1}{2}}f_{\max}^{2}f_{\min}^{-1}\log c_{5},$ (6.22) which implies that there exists a positive constant $c_{6}=e^{-C_{7}}c_{5}^{(1-c_{2}^{2})^{-\frac{n+1}{2}}f_{\max}^{2}f_{\min}^{-1}}<c_{5}$, such that $r_{\max}\geq c_{6}$ until $\hat{u}_{\max}>c_{5}$ again. If the initial hypersurface of (6.2) satisfies $\hat{u}_{\max}(0)\leq c_{5}$, then from (6.20), we note that $\int_{\mathbb{S}^{n}}\int_{a}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ is monotonically increasing until $\hat{u}_{\max}(t)>c_{5}$. This together with (6.17) implies that $\displaystyle|S^{n}|f_{\max}^{-1}\log r_{\max}+C_{5}\geq\int_{\mathbb{S}^{n}}\int_{c_{2}}^{r}s^{-1}(1-s^{2})^{-\frac{1}{2}}f^{-1}\mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}\Big|_{t=0}.$ (6.23) Hence there exists a positive constant $c_{7}$, such that $r_{\max}\geq c_{7}$. Clearly, we can deduce from (6.17) that $c_{7}\leq\hat{u}_{\max}(0)\leq c_{5}$. Since $\hat{u}_{\max}=r_{\max}$ on $\hat{M}_{t}$, we obtain $r_{\max}(t)\geq\min\\{c_{6},c_{7}\\}$. All these constants only depend on $n$, $f$ and the initial hypersurface $M_{0}$. By (6.13), there exists a positive constant $c_{8}$, such that $r_{\min}\geq c_{8}$. Thus the radial function $r$ of $\hat{M}_{t}$ satisfies $0<c_{8}\leq r\leq c_{2}<1$. By (5.3), we obtain the uniform bounds of $\rho$ and complete the proof. ∎ Proof of Theorem 1.2. Similar to the proof of Theorem 1.1, by Lemma 6.1, Lemma 3.2 and Lemma 4.3, we establish the a priori estimates and obtain the long-time existence of (1.3) for the case $\alpha=n+1$. Using the same argument as in Section 5, we show that the flow (1.3) converges smoothly to the unique smooth even solution of (1.2). Hence we complete the proof of Theorem 1.2.∎ ###### Remark 6.1. Note that the conditions on $\alpha$ and $f$ in Theorem 1.1 and Theorem 1.2 are necessary for the convergence and asymptotic results of (1.3). When $\alpha<n+1$, or when $\alpha=n+1$ and $f\geq 1$, consider a geodesic sphere with radial function $\rho\equiv c$. Note that $\sinh\rho\rightarrow 0$ and $\cosh\rho\rightarrow 1$ as $\rho\rightarrow 0$. When $\alpha<n+1$, $\displaystyle(\sinh\rho)^{\alpha-n-1}(\cosh\rho)^{n}\rightarrow\infty.$ When $\alpha=n+1$, $\displaystyle(\sinh\rho)^{\alpha-n-1}(\cosh\rho)^{n}\rightarrow 1^{+}.$ Hence there exists a constant $\rho_{0}>0$, such that for any $\rho\in(0,\rho_{0})$, $-(\sinh\rho)^{\alpha-n-1}(\cosh\rho)^{n}f+1<0$ when $\alpha<n+1$, or when $\alpha=n+1$ and $f\geq 1$. Assume the initial hypersurface of the flow (1.3) is a geodesic sphere with radial function $\rho\in(0,\rho_{0})$. Then combining (2.17) ($\tilde{\eta}(t)\equiv 1$) and (3.1), we obtain $\displaystyle\frac{\partial}{\partial t}\rho(\theta,t)=$ $\displaystyle-\phi(\rho)^{\alpha}f(\theta)wK+\phi(\theta,t)$ $\displaystyle=$ $\displaystyle\phi(\rho)\left(-\phi(\rho)^{\alpha-n-1}\phi^{\prime}(\rho)^{n}f+1\right)$ $\displaystyle<$ $\displaystyle 0.$ So the initial geodesic sphere keeps shrinking along (1.3). However, for the cases $\alpha>n+1$, the same initial geodesic sphere of (1.3) will expand until it converges smoothly to the solution of (1.2). For the case $\alpha=n+1$, the convergence results of the flow (1.3) additionally require the assumption on $f$.
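A numerical illustration of the sign analysis in Remark 6.1 (added here as a sketch; for simplicity $f$ is taken to be a positive constant):

```python
import numpy as np

def radial_speed(rho, alpha, n, f):
    # Radial speed of a geodesic sphere along (1.3):
    # rho' = phi * ( -(sinh rho)^(alpha-n-1) * (cosh rho)^n * f + 1 ),
    # with phi = sinh(rho).
    g = np.sinh(rho) ** (alpha - n - 1) * np.cosh(rho) ** n
    return np.sinh(rho) * (-g * f + 1.0)

n, rho = 2, 0.05
print(radial_speed(rho, alpha=2.0, n=n, f=1.0))      # alpha < n+1: negative, sphere shrinks
print(radial_speed(rho, alpha=n + 1.0, n=n, f=1.0))  # alpha = n+1, f >= 1: negative
print(radial_speed(rho, alpha=n + 3.0, n=n, f=1.0))  # alpha > n+1: positive, sphere expands
```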
## 7\. Proof of Theorem 1.3 In this part, we consider the flow (1.6) for the cases $2<\alpha\leq n+1$. First, we parametrize $M_{t}$ as a graph of the radial function $\rho(\theta,t):\mathbb{S}^{n}\times[0,T)\to\mathbb{R}$. By (2.17), the scalar parabolic PDE of the radial function becomes $\frac{\partial}{\partial t}\rho(\theta,t)=-\phi(\rho)^{\alpha}f(\theta)K(x,t)w+\frac{\displaystyle\int_{\mathbb{S}^{n}}\frac{K}{u}\phi^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}}{\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}}\phi.$ (7.1) Throughout this section, if not specified, we will regard $C_{i}$ and $C_{i}^{\prime}$ for $i\in\mathbb{N}$ as positive constants. Observe that along (1.6), the hypersurfaces have the following property. ###### Lemma 7.1. Denote $\Omega(\rho)=\displaystyle\int_{b}^{\rho(\theta,t)}(\sinh s)^{n-\alpha}\mathrm{d}s$. Then along the flow (1.6), $\displaystyle\displaystyle\int_{\mathbb{S}^{n}}\frac{\Omega(\rho)}{f}\mathrm{d}\theta_{\mathbb{S}^{n}}=const,\quad\text{for }t\geq 0.$ (7.2) Here $\rho(\cdot,t)$ is the radial function of $M_{t}$. ###### Proof. By (7.1), $\displaystyle\partial_{t}\int_{\mathbb{S}^{n}}\frac{\Omega(\rho)}{f}\mathrm{d}\theta_{\mathbb{S}^{n}}=$ $\displaystyle\int_{\mathbb{S}^{n}}\phi(\rho)^{n-\alpha}f^{-1}\partial_{t}\rho\mathrm{d}\theta_{\mathbb{S}^{n}}=0,$ (7.3) where we use $w=\frac{\phi}{u}$ in the last equality. ∎ ###### Remark 7.1. Without loss of generality, we can choose $b=\frac{1}{2}\min\limits_{M_{0}}\rho$. From Lemma 7.1, we have the following proposition. ###### Proposition 7.1. Along (7.1), the maximal radial function $\rho_{\max}(t)$ of $M_{t}$ has a uniform positive lower bound. ###### Proof. Observe that when $\alpha<n+1$, since $\cosh s>0$ is monotonically increasing, we have $\displaystyle\phi^{n+1-\alpha}(\rho_{\max})|\mathbb{S}^{n}|-\phi^{n+1-\alpha}(b)|\mathbb{S}^{n}|$ (7.4) $\displaystyle\geq$ $\displaystyle\int_{\mathbb{S}^{n}}\left(\phi^{n+1-\alpha}(\rho)-\phi^{n+1-\alpha}(b)\right)\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle=$ $\displaystyle(n+1-\alpha)\int_{\mathbb{S}^{n}}\int_{b}^{\rho}(\sinh s)^{n-\alpha}\cosh s\ \mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle(n+1-\alpha)\int_{\mathbb{S}^{n}}\int_{b}^{\rho}(\sinh s)^{n-\alpha}\cosh b\ \mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}\geq c>0.$ Here we use Lemma 7.1 in the last inequality. The positive constant $c$ depends on the initial hypersurface, $\min f$ and $b$. Similarly, when $\alpha=n+1$, there exists a positive constant $c^{\prime}$, such that $\displaystyle\log\phi(\rho_{\max})|\mathbb{S}^{n}|-\log\phi(b)|\mathbb{S}^{n}|\geq$ $\displaystyle\int_{\mathbb{S}^{n}}\left(\log\phi(\rho)-\log\phi(b)\right)\mathrm{d}\theta_{\mathbb{S}^{n}}$ (7.5) $\displaystyle=$ $\displaystyle\int_{\mathbb{S}^{n}}\int_{b}^{\rho}(\sinh s)^{-1}\cosh s\ \mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle\int_{\mathbb{S}^{n}}\int_{b}^{\rho}(\sinh s)^{-1}\cosh b\ \mathrm{d}s\mathrm{d}\theta_{\mathbb{S}^{n}}\geq c^{\prime}>0.$ If $\rho_{\max}\rightarrow 0$, we have $\phi(\rho_{\max})^{n+1-\alpha}\rightarrow 0$ and $\log\phi(\rho_{\max})\rightarrow-\infty$, which contradicts (7.4) and (7.5). Hence, we obtain the uniform lower bound of $\phi_{\max}$, hence of $\rho_{\max}$, and complete the proof.
∎ Using the projection $\pi_{p}$ in Section 5 and the relation between $M_{t}$ and $\pi_{p}(M_{t})$, (5.1)-(5.6) and (5.13), we obtain the scalar equation of the support function $\hat{u}$ of $\pi_{p}(M_{t})$: $\displaystyle\partial_{t}\hat{u}=-r^{\alpha}(1-r^{2})^{\frac{n+3-\alpha}{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}f\hat{K}+\eta(t)\hat{u}\sqrt{1-r^{2}},$ (7.6) where $\eta(t)=\frac{\displaystyle\int\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}}{\displaystyle\int\frac{r^{n+1-\alpha}}{(1-r^{2})^{\frac{n+1-\alpha}{2}}}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}}$. Under the flow (7.6), we have the following monotone function $\displaystyle\mathcal{J}(\hat{u})=\int_{\mathbb{S}^{n}}\Psi(\hat{u})\mathrm{d}\sigma_{\mathbb{S}^{n}},$ (7.7) where $\Psi(\hat{u})=\int_{a}^{\hat{u}(x,t)}\frac{1}{s}(1-s^{2})^{-\frac{n+1}{2}}\mathrm{d}s$. Without loss of generality, we can choose $a=\frac{1}{2}\min\limits_{\hat{M}_{0}}\hat{u}$. ###### Lemma 7.2. Along (7.6), $\mathcal{J}(\hat{u})$ is non-increasing and the equality holds if and only if $\pi_{p}(M_{t})$ satisfies the following equation: $\displaystyle r^{\alpha}(1-r^{2})^{\frac{n+3-\alpha}{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}f\hat{K}=c\hat{u}\sqrt{1-r^{2}}$ (7.8) for some positive constant $c$. ###### Proof. By (7.6), $\displaystyle\partial_{t}\mathcal{J}(\hat{u})=$ $\displaystyle\int_{\mathbb{S}^{n}}\frac{1}{\hat{u}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\partial_{t}\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (7.9) $\displaystyle=$ $\displaystyle\frac{\left(\displaystyle\int\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}\right)^{2}}{\displaystyle\int\frac{r^{-\alpha}\hat{u}}{(1-r^{2})^{\frac{n+1-\alpha}{2}}\hat{K}}f^{-1}\mathrm{d}\sigma_{\mathbb{S}^{n}}}-\int r^{\alpha}(1-r^{2})^{\frac{n+3-\alpha}{2}}(1-\hat{u}^{2})^{-(n+1)}f\frac{\hat{K}}{\hat{u}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\leq$ $\displaystyle 0.$ Here we use the Hölder inequality in the last step, and the equality holds if and only if (7.8) holds. ∎ Next, we will show that the radial function $\rho(\cdot,t)$ is uniformly bounded along the flow (1.6) under the following assumptions. ###### Proposition 7.2. Let $\rho$ be a smooth, positive, uniformly convex and origin-symmetric solution to (7.1) on $\mathbb{S}^{n}\times[0,T)$. If $2<\alpha\leq n+1$ and $f$ is a positive even function on $\mathbb{S}^{n}$, then there exists a positive constant $C$ depending on $\alpha$, $n$, $f$ and the initial hypersurface, such that $\displaystyle\frac{1}{C}\leq\rho\leq C,\quad\forall t\in[0,T).$ (7.10) ###### Proof. First, we prove that $\rho$ has a uniform upper bound. Motivated by the proof of Lemma 3.1, we will show that $\frac{\displaystyle\int_{\mathbb{S}^{n}}{\frac{K}{u}\phi^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}}}{\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}}$ is bounded from above by a multiple of $\phi_{\max}$.
By Lemma 7.2, there exists a positive constant $C_{1}>0$, such that $\displaystyle C_{1}\geq$ $\displaystyle\mathcal{J}(\hat{u})$ (7.11) $\displaystyle=$ $\displaystyle\int_{\mathbb{S}^{n}}\int_{a}^{\hat{u}}\frac{1}{s}(1-s^{2})^{-\frac{n+1}{2}}\mathrm{d}s\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle 2^{\frac{n+1}{2}}\int\limits_{\mathbb{S}^{n}\bigcap\\{x|\hat{u}>a\\}}\int_{a}^{\hat{u}}(1-s)^{-\frac{n+1}{2}}\mathrm{d}s\mathrm{d}\sigma_{\mathbb{S}^{n}}-(1-a^{2})^{-\frac{n+1}{2}}\int\limits_{\mathbb{S}^{n}\bigcap\\{x|\hat{u}\leq a\\}}\int_{\hat{u}}^{a}\frac{1}{s}\mathrm{d}s\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle C_{2}\int\limits_{\mathbb{S}^{n}\bigcap\\{x|\hat{u}>a\\}}\left((1-\hat{u})^{-\frac{n-1}{2}}-(1-a)^{-\frac{n-1}{2}}\right)\mathrm{d}\sigma_{\mathbb{S}^{n}}-C_{3}\int\limits_{\mathbb{S}^{n}\bigcap\\{x|\hat{u}\leq a\\}}\left(\log a-\log\hat{u}\right)\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle- C_{4}+C_{2}\int_{\mathbb{S}^{n}}(1-\hat{u})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}-C_{2}\int\limits_{\mathbb{S}^{n}\bigcap\\{x|\hat{u}\leq a\\}}(1-\hat{u})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}+C_{3}\int\limits_{\mathbb{S}^{n}}\log\hat{u}\ \mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\geq$ $\displaystyle- C_{4}+C_{2}\int_{\mathbb{S}^{n}}(1-\hat{u})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}+C_{3}\int\limits_{\mathbb{S}^{n}}\log\hat{u}\ \mathrm{d}\sigma_{\mathbb{S}^{n}}.$ Similar to the proof of (6.5), we have when $n=1$ $\displaystyle- C_{5}^{\prime}\int_{\mathbb{S}^{1}}\log(1-\hat{u})\mathrm{d}\sigma_{\mathbb{S}^{1}}+C_{6}^{\prime}\int_{\mathbb{S}^{1}}\log\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{1}}\leq C_{7}^{\prime},$ (7.12) and when $n\geq 2$ $\displaystyle C_{5}\int\limits_{\mathbb{S}^{n}}(1-\hat{u})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}+C_{6}\int\limits_{\mathbb{S}^{n}}\log\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}\leq C_{7}.$ (7.13) At any fixed time $t$, assume $\hat{u}_{\max}(t)$ is attained at $v_{0}\in\mathbb{S}^{n}$. Because the flow hypersurfaces are uniformly convex and origin-symmetric, we have $\hat{u}(v)\geq\hat{u}_{\max}|\langle v,v_{0}\rangle|$. Then $\displaystyle\int_{\mathbb{S}^{n}}\log\hat{u}\mathrm{d}\sigma_{\mathbb{S}^{n}}\geq\int_{\mathbb{S}^{n}}(\log\hat{u}_{\max}+\log|\langle v,v_{0}\rangle|)\mathrm{d}\sigma_{\mathbb{S}^{n}}.$ (7.14) The second integral in (7.14) is convergent, for the same reason as (6.10). Thus by (7.12) and (7.13), there exist positive constants $C_{8}$ and $C_{8}^{\prime}$ such that, when $n=1$, $\displaystyle\int_{\mathbb{S}^{1}}\log(1-\hat{u}^{2})\mathrm{d}\sigma_{\mathbb{S}^{1}}\geq\int_{\mathbb{S}^{1}}\log(1-\hat{u})\mathrm{d}\sigma_{\mathbb{S}^{1}}\geq- C_{8}^{\prime},$ (7.15) and when $n\geq 2$, $\displaystyle C_{8}\geq\int_{\mathbb{S}^{n}}(1-\hat{u})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}\geq\int_{\mathbb{S}^{n}}(1-\hat{u}^{2})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}.$ (7.16) Moreover, (7.4) and (7.5) imply that there exists a constant $c_{1}>0$, such that $\rho_{\max}\geq c_{1}$, i.e., by (5.3) there exists $c_{2}>0$, such that $r_{\max}\geq c_{2}$. Recall the relation between $M_{t}$ and $\pi_{p}(M_{t})$, (5.1)-(5.6) and (5.13). Note that $r(\theta(v),t)=\sqrt{\hat{u}^{2}+|\nabla\hat{u}|^{2}}\geq\hat{u}(v,t)$ and $\max\limits_{\hat{M}_{t}}r=\max\limits_{\hat{M}_{t}}\hat{u}$.
At a fixed time $t$, we have $\displaystyle\int_{\mathbb{S}^{n}}\frac{K}{u}\phi^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}=$ $\displaystyle\int_{\mathbb{S}^{n}}\sqrt{1-r^{2}}(1-\hat{u}^{2})^{-\frac{n+1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (7.17) $\displaystyle\leq$ $\displaystyle\frac{1}{\sqrt{1-\hat{u}_{\max}^{2}}}\int_{\mathbb{S}^{n}}(1-\hat{u}^{2})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ $\displaystyle\leq$ $\displaystyle\frac{C_{8}}{c_{2}}\frac{r_{\max}}{\sqrt{1-r_{\max}^{2}}}=C_{9}\phi_{\max},$ where we use (7.16) in the second inequality and (5.3) in the last equality. Moreover, the uniform lower bound of $\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}$ is obtained directly by (7.4). At the maximal point of $\rho$, inserting (7.17) into (7.1), similar to the proof of Lemma 3.1, we have $\displaystyle\partial_{t}\rho_{\max}\leq$ $\displaystyle-\phi_{\max}^{\alpha-n}\phi_{\max}^{\prime n}f+\frac{\displaystyle\int_{\mathbb{S}^{n}}\frac{K}{u}\phi^{n+1}\mathrm{d}\theta_{\mathbb{S}^{n}}}{\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}f^{-1}\mathrm{d}\theta_{\mathbb{S}^{n}}}\phi_{\max}$ (7.18) $\displaystyle\leq$ $\displaystyle\phi_{\max}^{2}(-\phi_{\max}^{\alpha-2}f+C_{9}).$ Hence when $\alpha>2$, $\phi=\sinh\rho$ has a uniform upper bound. Thus by (5.3) there exists a constant $c_{3}>0$, such that $r\leq c_{3}<1$. In order to obtain the lower bound of $\rho$, we parametrize any point $\theta$ in $\mathbb{S}^{n}$ as in (6.6). Assume $r_{\min}$ is attained at $\theta_{0}=(1,\vec{0})$. We have $\langle\theta,\theta_{0}\rangle=\cos\theta_{1}$. Since $r\leq c_{3}<1$, there exists a positive constant $c_{4}$, such that $\phi=\frac{r}{\sqrt{1-r^{2}}}\leq c_{4}r$. Set $\delta=\sqrt{r_{\min}}$. For $2<\alpha<n+1$, we have $\displaystyle\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}(\rho)\mathrm{d}\theta_{\mathbb{S}^{n}}\leq$ $\displaystyle c_{4}^{n+1-\alpha}\int\limits_{S_{1}=\mathbb{S}^{n}\bigcap\\{r\leq\delta\\}}r^{n+1-\alpha}\mathrm{d}\theta_{\mathbb{S}^{n}}+c_{4}^{n+1-\alpha}\int\limits_{S_{2}=\mathbb{S}^{n}\bigcap\\{r>\delta\\}}r^{n+1-\alpha}\mathrm{d}\theta_{\mathbb{S}^{n}}$ (7.19) $\displaystyle\leq$ $\displaystyle C_{8}\delta^{n+1-\alpha}|S^{n}|+C_{8}c_{3}^{n+1-\alpha}|S_{2}|.$ We also have $r(\theta)\leq\frac{r_{\min}(\theta_{0})}{|\cos\theta_{1}|}$ because the flow hypersurfaces $M_{t}$ are strictly convex and origin-symmetric. Then $S_{2}\subset\\{\theta\ |\ |\cos\theta_{1}|<\sqrt{r_{\min}}\\}$. Suppose $r_{\min}\rightarrow 0$. Clearly, $\delta=\sqrt{r_{\min}}\rightarrow 0$ and $|S_{2}|\rightarrow 0$ at the same time. By (7.19), we have $\int_{\mathbb{S}^{n}}\phi^{n+1-\alpha}(\rho)\mathrm{d}\theta_{\mathbb{S}^{n}}\rightarrow 0$, which contradicts (7.4). For $\alpha=n+1$, we have $\displaystyle\int_{\mathbb{S}^{n}}\log\phi(\rho)\mathrm{d}\theta_{\mathbb{S}^{n}}\leq$ $\displaystyle\int_{\mathbb{S}^{n}}\log r(\theta)\mathrm{d}\theta_{\mathbb{S}^{n}}+|S^{n}|\log c_{4}$ (7.20) $\displaystyle\leq$ $\displaystyle|S^{n}|\log r_{\min}(\theta_{0})-\int_{\mathbb{S}^{n}}\log|\cos\theta_{1}|\mathrm{d}\theta_{\mathbb{S}^{n}}+|S^{n}|\log c_{4}.$ For the same reason as (6.11), the second term in (7.20) is convergent. So $\int_{\mathbb{S}^{n}}\log\phi(\rho)\mathrm{d}\theta_{\mathbb{S}^{n}}\rightarrow-\infty$ as $r_{\min}\rightarrow 0$, which contradicts (7.5). Hence we obtain the uniform lower bounds of $r$ as well as $\rho$ by (5.3) and complete the proof. ∎ ###### Remark 7.2.
As in the Euclidean setting, one might hope that (7.11) implies a uniform upper bound of the support function $\hat{u}$ strictly away from $1$. Unfortunately, we exhibit an example with unbounded $u$, i.e., with $\hat{u}$ arbitrarily close to $1$, that still satisfies (7.11). Suppose $\hat{E}(e_{1},e_{2},\cdots,e_{2})$ is a rotationally symmetric ellipsoid in $\mathbb{R}^{n+1}$ centered at the origin, given by $\frac{x_{1}^{2}}{e_{1}^{2}}+\frac{x_{2}^{2}}{e_{2}^{2}}+\cdots+\frac{x_{n+1}^{2}}{e_{2}^{2}}=1$. Here we assume $0<e_{2}<e_{1}<1$. Then $\hat{E}$ can be parametrized by $x=(e_{1}\cos\theta,e_{2}\sin\theta\vec{x})$, where $\vec{x}\in\mathbb{S}^{n-1}$ is an $n$-dimensional unit vector. Correspondingly, the unit outward normal vector is $\displaystyle\frac{(e_{2}\cos\theta,e_{1}\sin\theta\vec{x})}{\sqrt{e_{2}^{2}\cos^{2}\theta+e_{1}^{2}\sin^{2}\theta}}.$ (7.21) The support function becomes $\displaystyle\hat{u}(\theta)=\frac{e_{1}e_{2}}{\sqrt{e_{2}^{2}\cos^{2}\theta+e_{1}^{2}\sin^{2}\theta}}.$ (7.22) If we parametrize $\mathbb{S}^{n}$ as in (6.7), by comparing with (7.21) we have $\displaystyle\tan v_{1}=\frac{e_{1}}{e_{2}}\tan\theta.$ (7.23) By (7.22) and (7.23), the support function can be parametrized by $v_{1}$ as $\displaystyle\hat{u}(v_{1})=$ $\displaystyle\frac{e_{1}e_{2}\sqrt{\tan^{2}\theta+1}}{\sqrt{e_{2}^{2}+e_{1}^{2}\tan^{2}\theta}}=\frac{e_{1}\sqrt{\frac{e_{2}^{2}}{e_{1}^{2}}\tan^{2}v_{1}+1}}{\sqrt{1+\tan^{2}v_{1}}}$ (7.24) $\displaystyle=$ $\displaystyle\sqrt{e_{2}^{2}\tan^{2}v_{1}+e_{1}^{2}}\cos v_{1}=\sqrt{e_{1}^{2}-(e_{1}^{2}-e_{2}^{2})\sin^{2}v_{1}}.$ Substituting (6.7) and (7.24) into (7.15) and (7.16) respectively: when $n=1$, we obtain $\displaystyle\int_{\mathbb{S}^{1}}\log(1-\hat{u}^{2})\mathrm{d}\sigma_{\mathbb{S}^{1}}=$ $\displaystyle\int_{0}^{2\pi}\log(1-e_{1}^{2}+(e_{1}^{2}-e_{2}^{2})\sin^{2}v_{1})\mathrm{d}v_{1}$ (7.25) $\displaystyle>$ $\displaystyle 2\int_{0}^{\pi}\left(\log(e_{1}^{2}-e_{2}^{2})+2\log\sin v_{1}\right)\mathrm{d}v_{1}\geq-C(e_{1},e_{2});$ when $n\geq 2$, $\displaystyle\int_{\mathbb{S}^{n}}(1-\hat{u}^{2})^{-\frac{n-1}{2}}\mathrm{d}\sigma_{\mathbb{S}^{n}}$ (7.26) $\displaystyle=$ $\displaystyle\int_{0}^{\pi}\int_{\mathbb{S}^{n-1}}(1-e_{1}^{2}+(e_{1}^{2}-e_{2}^{2})\sin^{2}v_{1})^{-\frac{n-1}{2}}(\sin v_{1})^{n-1}\mathrm{d}v_{1}\mathrm{d}\sigma_{\mathbb{S}^{n-1}}$ $\displaystyle<$ $\displaystyle\int_{0}^{\pi}\int_{\mathbb{S}^{n-1}}\left((e_{1}^{2}-e_{2}^{2})\sin^{2}v_{1}\right)^{-\frac{n-1}{2}}(\sin v_{1})^{n-1}\mathrm{d}v_{1}\mathrm{d}\sigma_{\mathbb{S}^{n-1}}$ $\displaystyle\leq$ $\displaystyle C(e_{1},e_{2},n).$ Now we construct a sequence $\hat{E}(e_{1}^{(k)},e_{2}^{(k)})$ satisfying $e_{1}^{(k)}\rightarrow 1$ and $e_{2}^{(k)}\leq c<1$. Note that by (5.6), $\max u_{k}\rightarrow\infty$ as $\max\hat{u}_{k}=e_{1}^{(k)}\rightarrow 1$, which implies that the preimages of $\hat{E}(e_{1}^{(k)},e_{2}^{(k)})$ in hyperbolic space tend to infinity. However, by inserting (7.25) and (7.26) into (7.12) and (7.13), we see that $\mathcal{J}(\hat{u})_{\hat{E}(e_{1}^{(k)},e_{2}^{(k)})}$ can still be bounded from above by some positive constant $C$ independent of how close $e_{1}^{(k)}$ is to $1$.
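As a quick numerical sanity check of the support-function formula (7.24) in the planar case $n=1$ (an illustration added here, not part of the original remark):

```python
import numpy as np

e1, e2 = 0.9, 0.4
t = np.linspace(0.0, 2.0 * np.pi, 200001)
boundary = np.stack([e1 * np.cos(t), e2 * np.sin(t)])   # ellipse x^2/e1^2 + y^2/e2^2 = 1

for v1 in np.linspace(0.0, np.pi, 7):
    direction = np.array([np.cos(v1), np.sin(v1)])
    support = np.max(direction @ boundary)               # brute-force support function
    formula = np.sqrt(e1**2 - (e1**2 - e2**2) * np.sin(v1)**2)  # (7.24)
    assert abs(support - formula) < 1e-6
print("support function formula (7.24) verified")
```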
Proof of Theorem 1.3. Combining Proposition 7.2 with Lemma 3.2, we establish the $C^{0}$ and $C^{1}$ estimates of the flow (1.6) under the evenness assumption. Employing (5.3) and (5.6), we obtain the uniform upper and lower bounds of $\eta(t)$ in (7.6). Then the $C^{2}$ estimate follows from Lemma 4.3. Thus we obtain the long-time existence and regularity for the flow (1.6). Furthermore, Lemma 7.2 and an argument similar to that in Section 5 imply the asymptotic behaviour of the flow (1.6), and we complete the proof of Theorem 1.3. ∎
# Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios Ziqiang Li¹\*, Hong Sun¹\*, Pengfei Xia¹, Heng Li², Beihao Xia², Yi Wu¹, Bin Li¹† ¹Big Data and Decision Lab, University of Science and Technology of China. ²Huazhong University of Science and Technology <EMAIL_ADDRESS> <EMAIL_ADDRESS> <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Recent deep neural networks (DNNs) have come to rely on vast amounts of training data, providing an opportunity for malicious attackers to exploit and contaminate the data to carry out backdoor attacks. However, existing backdoor attack methods make unrealistic assumptions, assuming that all training data comes from a single source and that attackers have full access to the training data. In this paper, we introduce a more realistic attack scenario where victims collect data from multiple sources, and attackers cannot access the complete training data. We refer to this scenario as data-constrained backdoor attacks. In such cases, previous attack methods suffer from severe efficiency degradation due to the entanglement between benign and poisoning features during the backdoor injection process. To tackle this problem, we introduce three CLIP-based technologies from two distinct streams: Clean Feature Suppression and Poisoning Feature Augmentation. The results demonstrate remarkable improvements, with some settings achieving over 100% improvement compared to existing attacks in data-constrained scenarios. Code is available at Data-constrained backdoor attacks. \* The first two authors contributed equally to this paper. † Corresponding author: Bin Li. ## 1 Introduction Deep neural networks (DNNs) are powerful ML algorithms inspired by the human brain, used in various applications such as image recognition He et al. (2016), natural language processing Liu et al. (2023), image generation Li et al. (2023b; 2022c; 2022b); Wu et al. (2023), and trajectory prediction Wong et al. (2022); Xia et al. (2022a). The effectiveness of DNNs depends on the quantity and quality of their training data. For example, Stable Diffusion (983M parameters) Rombach et al. (2022) excels in image generation due to pre-training on 5B image-text pairs. As the demand for data continues to rise, many users and businesses resort to third-party sources or online collections as a convenient means of acquiring the necessary data. However, recent studies Pan et al. (2022); Li et al. (2021b); Yang et al. (2021) have demonstrated that such practices can be maliciously exploited by attackers to contaminate the training data, significantly impairing the functionality and reliability of trained models. The growing adoption of neural networks across different domains has made them an attractive target for malicious attacks. One particular attack technique gaining attention is the backdoor attack Goldblum et al. (2022); Nguyen & Tran (2020); Xia et al. (2022c). In backdoor attacks, a neural network is deliberately injected with a hidden trigger by introducing a small number of poisoning samples into the benign training set during training. Once the model is deployed, the attacker can activate the backdoor by providing specific inputs containing the hidden trigger, causing the model to produce incorrect results. Backdoor attacks continue to present a significant and pervasive threat across multiple sectors, including image classification Gu et al.
(2019), natural language processing Pan et al. (2022); Zeng et al. (2023), and malware detection Li et al. (2021a). In this paper, we focus on the widely studied field of image classification. Figure 1: One-to-one (O2O) and many-to-one (M2O) data collection modes: (a) O2O mode; (b) M2O mode. M2O mode is more in line with practical scenarios where data collectors collect data from multiple sources. In this mode, the attacker cannot have all the data available to the victims. It is worth noting that previous backdoor attacks rely on a potentially overly broad assumption. They assume that all training data comes from a single source, and the collected source has been poisoned by the attacker (as shown in the O2O data collection mode in Fig. 1). This assumption grants attackers full access to the entire training dataset, making it easy to poison. However, it does not accurately represent real-world attack scenarios. Consider a scenario where victims have a private dataset with only limited samples. To compensate, they may augment it by collecting additional data from various online sources (referred to as the public dataset) and combine it with their private data for training, as depicted in the M2O data collection mode in Fig. 1. In this case, some of the sources may be secretly poisoned by attackers. Attackers cannot access the private dataset and can only manipulate a portion of the public dataset for poisoning. Consequently, a discrepancy arises between the distribution of the poisoned data and the training data, deviating from the previous poisoning attack pipeline. In this paper, we address a more realistic backdoor attack scenario called data-constrained backdoor attacks, where the attackers do not have access to the entire training set. To be more precise, we classify data-constrained backdoor attacks into three types based on different types of data sources: number-constrained backdoor attacks, class-constrained backdoor attacks, and domain-constrained backdoor attacks. Upon investigation, we have discovered that existing attack methods exhibit significant performance degradation when dealing with these data-constrained backdoor attacks. We propose that the entanglement between benign and poisoning features is a crucial factor contributing to this phenomenon. Entanglement refers to neural networks utilizing both benign and poisoning features to make decisions for poisoning samples. However, this approach is not efficient for backdoor attacks. Ideally, an efficient backdoor attack should rely solely on the poisoning feature generated by the trigger to make decisions, irrespective of how the benign feature is expressed. To enhance the efficiency of poisoning attacks in data-constrained backdoor scenarios, we introduce two streams: Clean Feature Suppression and Poisoning Feature Augmentation, which reduce the influence of clean features and amplify the expression of poisoning features, respectively. To achieve these goals, we propose three techniques utilizing the CLIP model Radford et al. (2021). Our main contributions are summarized as follows. i) We present a novel and contemporary backdoor attack scenario called data-constrained backdoor attacks, which assumes that attackers lack access to the entire training data, making it a versatile and practical attack with broad applicability. ii) Through a systematic analysis of previous attack methods, we identify the entanglement between poisoning and benign features as the primary contributing factor to their performance degradation. 
iii) To address this issue, we introduce the pre-trained CLIP model into the field of backdoor attacks for the first time. We propose three innovative technologies: CLIP-CFE, CLIP-UAP, and CLIP-CFA. Extensive evaluations conducted on 3 datasets, 3 target models, and over 15 different settings demonstrate the significant superiority of our proposed CLIP-UAP and CLIP-CFA over existing backdoor attacks. Furthermore, CLIP-CFE complements existing attack methods and can be seamlessly integrated with them, resulting in further efficiency improvements. ## 2 Background and Related Work Here, we provide a concise overview of the typical pipeline for backdoor attacks on neural networks. Sec. A.1 provides a comprehensive exploration of related work on backdoor attacks and CLIP. ### 2.1 Backdoor Attacks on Neural Networks Backdoor attacks aim to introduce hidden triggers into DNNs, allowing the attacked models to behave correctly on clean samples while exhibiting malicious behavior when triggered by specific inputs. These attacks can occur at various stages of Artificial Intelligence (AI) system development Gao et al. (2020). The attack surface of backdoor attacks has been systematically categorized into six groups: code-based Bagdasaryan & Shmatikov (2021), outsourcing, pretrained model-based Wang et al. (2020); Ge et al. (2021), poisoning-based Liao et al. (2018), collaborative learning-based Nguyen et al. (2020), and post-deployment attacks Rakin et al. (2020). Among these categories, poisoning-based backdoor attacks, which involve introducing a backdoor trigger during the training process by mixing in a few poisoning samples, are the most straightforward and commonly used method. This study focuses on addressing concerns related to poisoning-based backdoor attacks. ### 2.2 General Pipeline of Backdoor Attacks Consider a learning model $f(\cdot;\Theta):X\rightarrow Y$, where $\Theta$ represents the model’s parameters and $X(Y)$ denotes the input (output) space, with given dataset $\mathcal{D}\subset X\times Y$. Backdoor attacks typically involve three essential steps: poisoning set generation, backdoor injection, and backdoor activation. Poisoning set generation. In this step, attackers employ a pre-defined poison generator $\mathcal{T}(x,t)$ to introduce a trigger $t$ into a clean sample $x$. Specifically, they select a subset $\mathcal{P^{\prime}}=\\{(x_{i},y_{i})|i=1,\cdots,P\\}$ from the clean training set $\mathcal{D}=\\{(x_{i},y_{i})|i=1,\cdots,N\\}$ ($\mathcal{P^{\prime}}\subset\mathcal{D}$, and $P\ll N$), resulting in the corresponding poisoning set $\mathcal{P}=\\{(x^{\prime}_{i},k)|x^{\prime}_{i}=\mathcal{T}(x_{i},t),(x_{i},y_{i})\in\mathcal{P^{{}^{\prime}}},i=1,\cdots,P\\}$. Here, $y_{i}$ and $k$ represent the true label and the attack-target label of the clean sample $x_{i}$ and the poisoning sample $x^{\prime}_{i}$, respectively. Backdoor injection. In this step, the attackers mix the poisoning set $\mathcal{P}$ into the clean training set $\mathcal{D}$ and release the new dataset. The victims download the poisoning dataset and use it to train their own DNN models Gu et al. (2019): $\begin{aligned} \underset{\Theta}{\min}\quad\frac{1}{N}\sum_{(x,y)\in\mathcal{D}}L\left(f(x;\Theta),y\right)+\frac{1}{P}\sum_{\left(x^{\prime},k\right)\in\mathcal{P}}L\left(f(x^{\prime};\Theta),k\right)\end{aligned}\text{,}$ (1) where $L$ is the classification loss, such as the commonly used cross-entropy loss. In this case, backdoor injection into DNNs has been completed silently. Backdoor activation. 
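To make the pipeline concrete, below is a minimal PyTorch sketch of poisoning set generation and the joint objective in Eq. (1), assuming a BadNets-style patch trigger (see Sec. A.2.1); the helper names (`apply_trigger`, `build_poisoning_set`) are illustrative and not part of any released codebase.

```python
import random
import torch

def apply_trigger(x, trigger, mask):
    # Poison generator T(x, t): stamp a patch trigger onto a clean image.
    # x and trigger are (C, H, W) tensors; mask is 1 where the patch sits.
    return x * (1 - mask) + trigger * mask

def build_poisoning_set(dataset, trigger, mask, target_label, num_poison):
    # Select P << N clean samples and relabel them to the attack target k.
    subset = random.sample(list(dataset), num_poison)
    return [(apply_trigger(x, trigger, mask), target_label) for x, _ in subset]

def backdoor_injection_loss(model, clean_batch, poison_batch, criterion):
    # Eq. (1): classification loss averaged over clean and poisoning samples,
    # e.g. criterion = torch.nn.CrossEntropyLoss().
    xc, yc = clean_batch
    xp, yp = poison_batch
    return criterion(model(xc), yc) + criterion(model(xp), yp)
```

The victims then minimize this loss over the mixed dataset, silently completing backdoor injection.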
In this step, the victims deploy their compromised DNN models on model-sharing platforms and model-selling platforms. The compromised model behaves normally when presented with benign inputs, but attackers can manipulate its predictions to align with their malicious objectives by providing specific samples containing pre-defined triggers. ## 3 Data-constrained Backdoor Attacks We first present the considered pipeline of data-constrained backdoor attacks, and then illustrate the performance degradation of previous methods under the proposed pipeline. Finally, we attribute the degradation to the entanglement between the benign and poisoning features during backdoor injection. ### 3.1 Preliminaries Previous methods Chen et al. (2017); Liu et al. (2017); Li et al. (2022a); Nguyen & Tran (2020) have commonly followed the attack pipeline outlined in Sec. 2.2. However, this widely adopted pipeline relies on an overly loose assumption that all training data is collected from a single source and that the attacker has access to the entire training data, which is often not the case in real-world attack scenarios. In this paper, we focus on a more realistic scenario: Data-constrained Backdoor Attacks. Pipeline of data-constrained backdoor attacks. The proposed pipeline also consists of three steps: poisoning set generation, backdoor injection, and backdoor activation. The backdoor injection and activation steps remain unchanged from the previous attack pipeline. However, in the poisoning set generation step, data-constrained attacks only assume access to a clean training set $\mathcal{D^{\prime}}=\\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\\}$, which follows a different data distribution from $\mathcal{D}$. To address this, the attacker randomly selects a subset $\mathcal{P^{\prime}}=\\{(x_{i},y_{i})|i=1,\cdots,P\\}$ from the accessible dataset $\mathcal{D^{\prime}}$, and creates the corresponding poisoning set $\mathcal{P}=\\{(x^{\prime}_{i},k)|x^{\prime}_{i}=\mathcal{T}(x_{i},t),(x_{i},y_{i})\in\mathcal{P^{{}^{\prime}}},i=1,\cdots,P\\}$. Additionally, based on the different constraints imposed by the accessible training set $\mathcal{D^{\prime}}$, data-constrained backdoor attacks are further categorized into three types: Number-constrained Backdoor Attacks, Class-constrained Backdoor Attacks, and Domain-constrained Backdoor Attacks. The detailed pipelines can be found in Sec. A.2.3, Sec. A.2.4, and Sec. A.2.5, respectively. Empirical results. Note that the experimental settings for this experiment can be found in Sec. A.2.2. Fig. 2 (a), (b), and (c) illustrate the attack success rate on number-constrained, class-constrained, and domain-constrained backdoor attacks, respectively. The results demonstrate that the ASR experiences a significant decrease as the number of poisoning samples ($P$), the number of classes ($C^{\prime}$), or the domain rate (the proportion of the poisoning set sampled from $\mathcal{D}\setminus\mathcal{D^{\prime}}$ rather than $\mathcal{D^{\prime}}$) decreases, particularly for Blended backdoor attacks. It is worth noting that Universal Adversarial Perturbations (UAP) achieves relatively favorable results even with a low poisoning rate. This can be attributed to the utilization of a proxy model that is pre-trained on the entire training set ($\mathcal{D}$). However, UAP is not accessible in our settings, and we present the results for UAP to effectively demonstrate the performance degradation even when a pre-trained proxy model is available. 
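For intuition, the three accessibility constraints can be sketched as simple dataset filters; the helper names below are ours, and the `domain_rate` convention (the fraction of poisoning candidates drawn from $\mathcal{D}\setminus\mathcal{D^{\prime}}$) follows the definition above.

```python
import random

def number_constrained(D, n_accessible):
    # D' follows the same distribution as D, but only N' < N samples
    # are manipulable by the attacker.
    return random.sample(D, n_accessible)

def class_constrained(D, allowed_classes):
    # D' contains only a subset Y' of the categories present in D.
    return [(x, y) for (x, y) in D if y in allowed_classes]

def domain_constrained_candidates(D_prime, D_rest, num_poison, domain_rate):
    # A `domain_rate` fraction of poisoning candidates comes from the
    # victims' domain D \ D'; the remainder comes from the attacker's D'.
    # domain_rate = 0 is the extreme out-of-domain case studied later.
    n_in = int(domain_rate * num_poison)
    return random.sample(D_rest, n_in) + random.sample(D_prime, num_poison - n_in)
```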
Figure 2: Attack success rate (ASR) under the different data-constrained backdoor attacks: (a) number-constrained; (b) class-constrained; (c) domain-constrained. The experiment is repeated 5 times, and the solid lines represent the mean results. (a): The abscissa is the number ($P$) of samples in the poisoning set $\mathcal{P^{\prime}}$. (b): The experiments are conducted with triggers BadNets, Blended, and UAP, with a poisoning rate of $2\%$ ($P=1000$) for each. The x-axis represents the number of classes ($|Y^{\prime}|$) in the poisoning set $\mathcal{P^{\prime}}$. Specifically, '1 (1)' and '1 (0)' denote dirty-label single-class ($Y^{\prime}=\\{c\\},c\neq k$) and clean-label single-class ($Y^{\prime}=\\{k\\}$), respectively. (c): The poisoning rates of experiments with triggers BadNets, Blended, and UAP are $2\%$ ($P=1000$), $2\%$ ($P=1000$), and $1\%$ ($P=500$), respectively. The abscissa is the domain rate, which represents the proportion of the poisoning set sampled from $\mathcal{D}\setminus\mathcal{D^{\prime}}$ rather than $\mathcal{D^{\prime}}$. ### 3.2 Entanglement Between Benign and Poisoning Features In Sec. A.3, we present three observations pertaining to data-constrained backdoor attacks, serving as compelling evidence for the presence of a significant interdependence between benign and poisoning features during backdoor injection. This intricate entanglement is identified as the primary factor responsible for the inadequacies exhibited by current attack methodologies within data-constrained scenarios. Our study is a pioneering exploration of feature entanglement in the context of backdoor attacks, yielding fresh and valuable insights into the realm of backdoor learning. Ideally, one would anticipate backdoor models to rely exclusively on poisoning features when confronted with poisoning samples, as this would be the most efficient strategy for executing successful backdoor attacks. However, neural networks tend to be greedy and utilize all features for decision-making Li et al. (2023c), leading to activation of both poisoning and benign features during backdoor injection. This results in reduced poisoning efficiency when there is a difference in benign features between the backdoor injection and activation phases, as evidenced in data-constrained backdoor attacks. ## 4 CLIP-guided Backdoor Attacks Method We present our approach, which consists of two components: Clean Feature Suppression and Poisoning Feature Augmentation. These components are independent of each other and can be seamlessly combined. The threat model considered in our study is introduced in Sec. A.4. ### 4.1 Clean Feature Suppression As described in Sec. 3, the effectiveness of data-constrained backdoor attacks is hindered by the entanglement of benign and poisoning features during the backdoor injection phase. To address this challenge, we propose a solution called "clean feature suppression" in this section. The primary objective of this approach is to minimize the impact of benign features on the decision-making process, thus amplifying the significance of poisoning features. #### 4.1.1 CLIP-based Clean Feature Erasing To achieve clean feature suppression, we can employ a feature extractor pre-trained on the entire training set (as described in the Clean Feature Erasing Noise paragraph below). However, since our data-constrained backdoor attacks lack access to the complete training set, an alternative solution is required. Recent studies have shown that pre-trained CLIP Radford et al. 
(2021) generates consistent and robust semantic representations across a wide range of (image, text) pairs, enabling impressive zero-shot classification performance comparable to supervised learning accuracy on challenging datasets like ImageNet (as shown in Sec. A.5). Hence, we can utilize the pre-trained general CLIP model in place of a feature extractor trained on the entire training set, allowing us to achieve clean feature suppression (as described in the CLIP for Clean Feature Erasing paragraph below). Clean Feature Erasing Noise. The technique of clean feature suppression aims to eliminate the clean information present in images by introducing optimized noise, denoted as $\delta$, which modifies the input image to resemble the unbiased class. In accordance with the data-constrained backdoor attack pipeline outlined in Sec. 3, we assume that the chosen clean training dataset for generating the poisoning set consists of $P$ clean examples, denoted as $\mathcal{P^{\prime}}\subset X\times Y$ (where $\mathcal{P^{\prime}}=\\{(x_{i},y_{i})|i=1,\cdots,P\\}$). Here, $x_{i}\in X$ represents the inputs, $y_{i}\in Y=\\{1,2,\cdots,C\\}$ represents the labels, and $C$ denotes the total number of classes. We refer to the modified version as $\mathcal{P}_{e}=\\{(x_{e,i},y_{i})|i=1,\cdots,P\\}$, where $x_{e,i}=x_{i}+\delta_{i}$ represents the erased version of the training example $x_{i}\in\mathcal{P^{\prime}}$. The term $\delta_{i}\in\Delta$ denotes the "invisible" noise applied to achieve the erasing effect. The noise $\delta_{i}$ is subject to the constraint $||\delta_{i}||_{p}\leq\epsilon$, where $||\cdot||_{p}$ represents the $L_{p}$ norm, and $\epsilon$ is set to a small value to ensure the stealthiness of the backdoor attacks. Our objective in erasing the clean features is to ensure that the pre-trained feature extractor does not extract any meaningful information from the given images $x$. This is achieved by introducing customized and imperceptible noise, denoted as $\delta_{i}$. To be more specific, for a clean example $x_{i}$, we propose to generate the noise $\delta_{i}$ that erases the features by solving the following optimization problem: $\delta_{i}=\mathop{\arg\min}_{\delta_{i}}L(f^{\prime}(x_{i}+\delta_{i}),y_{m})\quad\text{s.t.}\quad||\delta_{i}||_{p}\leq\epsilon,$ (2) where $L$ represents the mean squared error (MSE) loss, defined as $L(a,b)=||a-b||^{2}$. The function $f^{\prime}(\cdot)$ corresponds to the pre-trained feature extractor employed for noise generation. Additionally, $y_{m}$ denotes the unbiased label for the classification task, which is defined as $y_{m}=[\frac{1}{C},\frac{1}{C},\cdots,\frac{1}{C}]$, where $C$ signifies the total number of classes. While this vanilla method proves effective in erasing clean features, it requires a proxy feature extractor that has been pre-trained on the entire training set. This approach is not suitable for our data-constrained backdoor attacks. CLIP for Clean Feature Erasing (CLIP-CFE). Taking inspiration from CLIP's approach to zero-shot classification (Sec. A.5), we leverage a general CLIP model to optimize the feature erasing noise. This allows us to relax the need for a proxy feature extractor pre-trained on the entire training set. We consider $C$ prompts, "a photo of a $c_{i}$," corresponding to different classes $c_{i}$ in the dataset, where $i=1,\cdots,C$. 
The CLIP-based feature erasing noise, denoted as $\delta_{i}$, is generated for the input $x_{i}$ by solving the following optimization problem: $\delta_{i}=\mathop{\arg\min}_{\delta_{i}}L(f_{CLIP}(x_{i}+\delta_{i},\mathbb{P}),y_{m})\quad\text{s.t.}\quad||\delta_{i}||_{p}\leq\epsilon,$ (3) where $L$ represents the mean squared error (MSE) loss, $y_{m}$ denotes the unbiased label for the classification task defined as $y_{m}=[\frac{1}{C},\frac{1}{C},\cdots,\frac{1}{C}]$, $\mathbb{P}$ represents the set of prompts corresponding to different classes in the dataset, and $f_{CLIP}$ denotes the CLIP-based model used to obtain the label of the input image. Specifically, $\mathbb{P}=\\{p_{1},p_{2},\cdots,p_{C}\\}=\\{\text{"a photo of a $c_{i}$"}|i=1,2,\cdots,C\\},$ (4) $\displaystyle f_{CLIP}(x_{i}+\delta_{i},\mathbb{P})=\bigg{[}\frac{\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{1})\rangle}{\sum_{i=1}^{C}\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{i})\rangle},\cdots,\frac{\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{C})\rangle}{\sum_{i=1}^{C}\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{i})\rangle}\bigg{]}.$ (5) To solve the constrained minimization problem illustrated in Eq. 3, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) Madry et al. (2017). The PGD method enables us to find a solution by iteratively updating the noise as follows: $\delta^{t+1}_{i}=\prod_{\epsilon}\big{(}\delta^{t}_{i}-\alpha\cdot\text{sign}(\nabla_{\delta}L(f_{CLIP}(x_{i}+\delta^{t}_{i},\mathbb{P}),y_{m}))\big{)},$ (6) where $t$ represents the current perturbation step, with a total of $T=50$ steps. $\nabla_{\delta}L(f_{CLIP}(x_{i}+\delta^{t}_{i},\mathbb{P}),y_{m})$ denotes the gradient of the loss with respect to the input. The projection function $\prod$ is applied to restrict the noise $\delta$ within the $\epsilon$-ball (with $\epsilon=8/255$ in our paper) around the original example $x$, ensuring it does not exceed this boundary. The step size $\alpha$ determines the magnitude of the noise update at each iteration. The resulting erasing examples are then obtained as follows: $\mathcal{P}_{e}=\\{(x_{e,i},y_{i})|i=1,\cdots,P\\},\quad\text{where}\quad x_{e,i}=x_{i}+\delta^{T}_{i}.$ (7) ### 4.2 Poisoning Feature Augmentation In addition to eradicating clean features in images to tackle the entanglement between benign and poisoning features, enhancing the expression of poisoning features is another effective approach. In this section, we present two parallel triggers aimed at augmenting the poisoning features: CLIP-based Contrastive Feature Augmentation (Sec. 4.2.1) and CLIP-based Universal Adversarial Perturbations (Sec. A.6). Figure 3: Attack success rate (ASR) of (a): the number-constrained backdoor attacks, (b): the clean-label single-class attack (the accessible category $Y^{\prime}$ is set to $\\{0\\}$), (c): the dirty-label single-class attack (the accessible category $Y^{\prime}$ is set to $\\{1\\}$), and (d): the domain-constrained backdoor attacks (domain rate is set to 0) on the CIFAR-100 dataset. The red points represent w/o CLIP-based Clean Feature Erasing (CLIP-CFE), while the green points represent w/ CLIP-CFE. All experiments are repeated 5 times, and the results are computed as the mean of five different runs. 
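The optimization in Eqs. 3-6 can be sketched in a few lines of PyTorch. This is a minimal illustration assuming OpenAI's `clip` package; it uses $T=50$ steps and $\epsilon=8/255$ as stated above, mirrors the ratio normalization of Eq. 5 rather than CLIP's usual softmax, and the step size $\alpha=2/255$ is an assumed value since it is left unspecified here.

```python
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights in fp32 for gradient-based optimization

def clip_cfe_noise(x, class_names, eps=8 / 255, alpha=2 / 255, steps=50):
    # Optimize "invisible" noise delta so that CLIP's zero-shot prediction
    # for x + delta matches the unbiased label y_m = [1/C, ..., 1/C] (Eq. 3).
    # x is a CLIP-preprocessed image batch of shape (1, 3, 224, 224).
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        text_feat = F.normalize(model.encode_text(prompts), dim=-1)  # (C, d)
    y_m = torch.full((len(class_names),), 1.0 / len(class_names), device=device)

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        img_feat = F.normalize(model.encode_image(x + delta), dim=-1)  # (1, d)
        sims = img_feat @ text_feat.t()                # cosine similarities
        probs = sims / sims.sum(dim=-1, keepdim=True)  # normalization of Eq. 5
        loss = F.mse_loss(probs.squeeze(0), y_m)       # MSE against y_m
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()         # PGD step (Eq. 6)
            delta.clamp_(-eps, eps)                    # project onto the eps-ball
        delta.grad.zero_()
    return delta.detach()                              # x_e = x + delta (Eq. 7)
```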
#### 4.2.1 CLIP-based Contrastive Feature Augmentation While the CLIP-UAP method has shown impressive results in terms of poisoning efficiency, it requires customization for different attack-target labels. In this section, we propose a more versatile trigger design that is independent of the attack-target label, enhancing the poisoning feature. Drawing inspiration from the entanglement between benign and poisoning features discussed in Sec. 3.2, we utilize contrastive optimization to augment the poisoning feature. Our expectation is that the poisoning feature extracted from the designed trigger will be more expressive compared to the clean feature extracted from the clean samples. Specifically, given a trigger $\delta_{\text{con}}$ to be optimized, two random views (query: $x+\delta_{\text{con}}$ and key: $x_{1}+\delta_{\text{con}}$) are created from different clean samples ($x$ and $x_{1}$). A positive pair is defined as such a query-key pair between different poisoning samples. Negative pairs are defined as pairs between a poisoning example and its corresponding clean example, i.e., between $x+\delta_{\text{con}}$ and $x$. All views are passed through the pre-trained image encoder $\hat{\mathcal{E}}_{i}(\cdot)$ of CLIP to acquire the representation $v$: $v_{q}=\hat{\mathcal{E}}_{i}(x+\delta_{\text{con}}),\quad v_{+}=\hat{\mathcal{E}}_{i}(x_{1}+\delta_{\text{con}}),\quad v_{-}=\hat{\mathcal{E}}_{i}(x).$ (8) CLIP-based Contrastive Feature Augmentation (CLIP-CFA) focuses on optimizing the general trigger by maximizing the similarity between positive pairs while ensuring dissimilarity between negative pairs. To achieve this, we design a loss function as follows: $L_{\text{con}}(x,x_{1},\delta_{\text{con}})=-\frac{\langle v_{q},v_{+}\rangle}{\langle v_{q},v_{-}\rangle},$ (9) where $\langle\cdot,\cdot\rangle$ represents the cosine similarity between two vectors, and $\delta_{\text{con}}$ is optimized with: $\delta_{\text{con}}=\mathop{\arg\min}_{||\delta_{\text{con}}||_{p}\leq\epsilon}\sum_{\left(x,y\right)\in\mathcal{D}^{\prime}}L_{\text{con}}(x,x_{1},\delta_{\text{con}}).$ (10) Similar to Eq. 14, we also adopt the first-order optimization method PGD Madry et al. (2017) to solve the constrained minimization problem as follows: $\delta^{t+1}_{\text{con}}=\prod_{\epsilon}\big{(}\delta^{t}_{\text{con}}-\alpha\cdot\text{sign}(\nabla_{\delta_{\text{con}}}L_{\text{con}}(x,x_{1},\delta_{\text{con}}))\big{)}.$ (11) The optimization is therefore accumulated over all samples in the accessible clean training set $\mathcal{D^{\prime}}$. Finally, the CLIP-CFA trigger for set $\mathcal{D^{\prime}}$ is $\delta_{\text{con}}=\delta^{T}_{\text{con}}$, and the poison generator is formulated as $\mathcal{T}(x,\delta_{\text{con}})=x+\delta_{\text{con}}$. ### 4.3 Attack Summary We present two independent trigger design methods: CLIP-UAP (Sec. A.6) and CLIP-CFA (Sec. 4.2.1). These triggers are aimed at enhancing the expression of poisoning features and can replace previous trigger design approaches, leading to improved performance in data-constrained backdoor attacks. Additionally, in Sec. 4.1, we introduce the CLIP-CFE method. This approach minimizes the influence of clean features during the poisoning process and can be integrated into any of the aforementioned trigger design methods. By combining trigger design and clean feature erasing, our final approach achieves state-of-the-art performance in all types of data-constrained backdoor attacks. 
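A compact sketch of the CLIP-CFA optimization (Eqs. 8-11) follows, reusing `model`, `device`, and the imports from the CLIP-CFE sketch above; the assumption that `pair_loader` yields pairs of distinct clean samples $(x, x_{1})$ from $\mathcal{D^{\prime}}$ is ours, as is the step size.

```python
def clip_cfa_trigger(pair_loader, shape=(1, 3, 224, 224),
                     eps=8 / 255, alpha=2 / 255, steps=50):
    # Optimize a label-agnostic trigger delta_con: poisoned views of
    # different images should agree (positive pair), while a poisoned image
    # should disagree with its own clean version (negative pair).
    delta = torch.zeros(shape, device=device, requires_grad=True)
    for _ in range(steps):
        total = torch.zeros((), device=device)
        for x, x1 in pair_loader:  # distinct clean samples from D'
            v_q = F.normalize(model.encode_image(x + delta), dim=-1)
            v_pos = F.normalize(model.encode_image(x1 + delta), dim=-1)
            v_neg = F.normalize(model.encode_image(x), dim=-1)
            # Eq. 9, accumulated over the accessible set as in Eq. 10.
            total = total - ((v_q * v_pos).sum(-1) / (v_q * v_neg).sum(-1)).sum()
        total.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # PGD step (Eq. 11)
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()  # poison generator: T(x) = x + delta_con
```

Because the loss in Eq. 9 is a ratio of cosine similarities, minimizing it jointly pulls positive pairs together and pushes negative pairs apart.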
## 5 Experiments We provide an overview of the experimental settings, covering datasets, model architectures, evaluation metrics, baselines, and implementations (Appendix A.7). Subsequently, we perform comprehensive experiments to assess the effectiveness of our proposed methods by answering the following research questions: RQ1: Are the proposed technologies effective on the three backdoor attacks? (Sec. 5.1). RQ2: Are the proposed technologies harmless to Benign Accuracy? (Sec. 5.2). RQ3: Are the proposed technologies stealthy to victims? (Sec. A.8). RQ4: Are the proposed technologies effective for different poisoning settings? (Sec. 5.3). In this section, we present the results specifically for the CIFAR-100 dataset. Experimental outcomes for CIFAR-10 and ImageNet-50 are provided in Appendix A.9 and Appendix A.10, respectively. Additionally, for more experiments and further discussions, please refer to Appendix A.11, A.12, A.13, A.14, and A.15. ### 5.1 RQ1: Are the proposed technologies effective on the three backdoor attacks? To assess the effectiveness of our proposed technologies, we conduct attacks on various target models and datasets, evaluating the Attack Success Rate (ASR) for each target model. In order to establish a basis for comparison, we introduce two baseline attack methods: BadNets Gu et al. (2019) and Blended Chen et al. (2017), as discussed in Sec. A.2.1. Fig. 3 illustrates the performance of the following types of backdoor attacks on the CIFAR-100 dataset: (a) number-constrained, (b) clean-label single-class (class-constrained), (c) dirty-label single-class (class-constrained), and (d) out-of-the-domain (domain-constrained)111Both the clean-label single-class and dirty-label single-class backdoor attacks represent extreme scenarios of the class-constrained backdoor attack. In the clean-label single-class attack, the targeted class category is set to $Y^{\prime}=\\{k\\}$, while in the dirty-label single-class attack, it is set to $Y^{\prime}=\\{c\\}$ where $c\neq k$. Similarly, the out-of-the-domain backdoor attack is an extreme scenario of the domain-constrained backdoor attack, with a domain rate of 0. For further details, please refer to Appendix A.7.. CLIP-based poisoning feature augmentation is more effective than previous attack methods. Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques (BadNets Gu et al. (2019) and Blended Chen et al. (2017)222While there are several more effective techniques for poisoning attacks, they typically necessitate access to the entire training data, rendering them unsuitable for our data-limited backdoor attacks.) in terms of consistency across different attacks and target models. Specifically, we achieved an ASR of 0.878, 0.825, 0.984, and 0.988 for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively, in the number-constrained backdoor attack on the VGG-16 model. These results provide evidence that our proposed poisoning feature augmentation generates more effective triggers compared to other methods. CLIP-based Clean Feature Suppression is useful for different attack methods. Our proposed method, CLIP-CFE, has shown significant improvements in effectiveness compared to the baseline method without CLIP-CFE, enhancing poisoning efficiency in a wide range of cases. For instance, in the clean-label single-class backdoor attack on the VGG-16 model, we observed remarkable improvements of $187\%$, $150\%$, $110\%$, and $229\%$ for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively. 
However, it is worth noting that in the results of the domain-constrained backdoor attacks on MobileNet-V2 (as depicted in the right part of Fig. 3 (d)), CLIP-CFA and CLIP-UAP with CLIP-CFE only slightly outperform the corresponding methods without it. More discussion. While our technologies have shown significant improvements in poisoning efficiency compared to baselines, there are still important discussions that need to be addressed. We aim to provide answers to the following questions in a systematic manner in Appendix A.15: i) Why do we observe performance degradation in the clean-label single-class attack? ii) Why are domain-constrained backdoor attacks generally easier compared to class-constrained backdoor attacks?

Table 1: The Benign Accuracy (BA) on the CIFAR-100 dataset. All results are computed as the mean of 5 different runs. For each attack setting, the three values give results on VGG-16 (V-16), ResNet-18 (R-18), and MobileNet-V2 (M-2).

| Trigger | Clean Feature Suppression | Number Constrained | Class Constrained ($Y^{\prime}=\\{0\\}$) | Class Constrained ($Y^{\prime}=\\{1\\}$) | Domain Constrained | Average |
|---|---|---|---|---|---|---|
| BadNets | w/o CLIP-CFE | 0.698 / 0.728 / 0.722 | 0.698 / 0.730 / 0.728 | 0.700 / 0.728 / 0.729 | 0.699 / 0.727 / 0.728 | 0.718 |
| BadNets | w/ CLIP-CFE | 0.700 / 0.730 / 0.728 | 0.701 / 0.731 / 0.723 | 0.698 / 0.730 / 0.726 | 0.701 / 0.730 / 0.724 | 0.719 |
| Blended | w/o CLIP-CFE | 0.700 / 0.727 / 0.722 | 0.700 / 0.726 / 0.725 | 0.701 / 0.729 / 0.723 | 0.698 / 0.729 / 0.725 | 0.717 |
| Blended | w/ CLIP-CFE | 0.700 / 0.730 / 0.727 | 0.701 / 0.729 / 0.727 | 0.699 / 0.730 / 0.724 | 0.700 / 0.731 / 0.727 | 0.719 |
| CLIP-UAP | w/o CLIP-CFE | 0.702 / 0.730 / 0.727 | 0.702 / 0.729 / 0.727 | 0.701 / 0.730 / 0.725 | 0.702 / 0.731 / 0.729 | 0.720 |
| CLIP-UAP | w/ CLIP-CFE | 0.700 / 0.731 / 0.725 | 0.702 / 0.732 / 0.726 | 0.699 / 0.732 / 0.724 | 0.700 / 0.730 / 0.725 | 0.719 |
| CLIP-CFA | w/o CLIP-CFE | 0.703 / 0.731 / 0.727 | 0.701 / 0.730 / 0.725 | 0.701 / 0.730 / 0.727 | 0.700 / 0.731 / 0.727 | 0.719 |
| CLIP-CFA | w/ CLIP-CFE | 0.702 / 0.729 / 0.729 | 0.701 / 0.730 / 0.727 | 0.702 / 0.731 / 0.725 | 0.702 / 0.730 / 0.727 | 0.720 |

### 5.2 RQ2: Are the proposed technologies harmless to Benign Accuracy? As shown in Table 1, our CLIP-UAP and CLIP-CFA exhibit similar or even better average Benign Accuracy (BA) compared to the baseline methods, BadNets Gu et al. (2019) and Blended Chen et al. (2017). Additionally, it is worth noting that our proposed method, CLIP-CFE, does not negatively impact BA. This finding confirms that our technologies are harmless to benign accuracy compared to baseline methods, even under various settings and different backdoor attacks. ### 5.3 RQ4: Are the proposed technologies effective for different poisoning settings? Experiments on different poisoning rates for number-constrained backdoor attacks. We conduct ablation studies to assess the effectiveness of our proposed methods in reducing the number of poisoning samples (poisoning rates) for number-constrained backdoor attacks. The results depicted in Fig. 4 (a) demonstrate the following: i) The attack success rate increases with higher poisoning rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets Gu et al. (2019) and Blended Chen et al. (2017). iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different poisoning classes for class-constrained backdoor attacks. 
We conduct ablation studies to assess the effectiveness of our proposed methods in increasing the number of poisoning classes for class-constrained backdoor attacks. The results presented in Fig. 4 (b) demonstrate the following: i) The attack success rate increases with more poisoning classes for different attacks. ii) The attack success rate of the clean-label single-class attack is lower than that of the dirty-label single-class attack. iii) Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques, BadNets Gu et al. (2019) and Blended Chen et al. (2017). iv) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different domain rates for domain-constrained backdoor attacks. We conduct ablation studies to assess the effectiveness of our methods in increasing the domain rate for domain-constrained backdoor attacks. The results depicted in Fig. 4 (c) demonstrate the following: i) The ASR increases with higher domain rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets Gu et al. (2019) and Blended Chen et al. (2017). iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different large pre-trained models. We utilize the pre-trained CLIP model as the basis for our technologies. It is worth noting that the community has proposed various CLIP variants. Therefore, an important practical consideration is whether our proposed technologies remain robust when applied to different pre-trained CLIP models. To investigate this, we conduct ablation studies on different CLIP models for number-constrained backdoor attacks, as depicted in Fig. 4 (d). The results demonstrate that our proposed technologies exhibit robustness across different CLIP models, with ViT-B/32 emerging as a competitive choice for all methods. Figure 4: The ablation studies on the CIFAR-100 dataset. All results were computed as the mean of five different runs. (a): The ASR with different poisoning rates. (b): The ASR with different numbers of accessible classes of poisoning samples, where 1 (0) and 1 (1) in the abscissa represent the clean-label and dirty-label single-class attacks, respectively. (c): The ASR with different domain rates. (d): The ASR across different pre-trained CLIP models for number-constrained backdoor attacks. ## 6 Conclusion In this paper, we address the challenges of data-constrained backdoor attacks, which occur in more realistic scenarios where victims collect data from multiple sources and attackers cannot access the full training data. To overcome the performance degradation observed in previous methods under data-constrained backdoor attacks, we propose three technologies from two streams that leverage the pre-trained CLIP model to enhance the efficiency of poisoning. Our goal is to inspire the research community to explore these realistic backdoor attack scenarios and raise awareness about the threats posed by such attacks. In Sec. A.16, we discuss the limitations of our approach and outline potential future directions for backdoor learning research. ## Acknowledgments This work was funded by the National Natural Science Foundation of China (U19B2044), sponsored by the Zhejiang Lab Open Research Project (NO. K2022QA0AB04), and supported by the Fundamental Research Funds for the Central Universities. ## References * Arto et al. 
(2021) Arto, Dev Vidhani, Goutham, Mayank Bhaskar, Ritobrata Ghosh, and Sujit Pal. Fine tuning clip with remote sensing (satellite) images and captions, 2021. https://huggingface.co/blog/fine-tune-clip-rsicd. * Bagdasaryan & Shmatikov (2021) Eugene Bagdasaryan and Vitaly Shmatikov. Blind backdoors in deep learning models. In _30th USENIX Security Symposium (USENIX Security 21)_ , pp. 1505–1521, 2021. * Barni et al. (2019) Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in cnns by training set corruption without label poisoning. In _2019 IEEE International Conference on Image Processing (ICIP)_ , pp. 101–105. IEEE, 2019. * Chen et al. (2017) Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. _arXiv preprint arXiv:1712.05526_ , 2017. * Cheng et al. (2021a) Ruizhe Cheng, Bichen Wu, Peizhao Zhang, Peter Vajda, and Joseph E Gonzalez. Data-efficient language-supervised zero-shot learning with self-distillation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 3119–3124, 2021a. * Cheng et al. (2021b) Siyuan Cheng, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. Deep feature space trojan attack of neural networks by controlled detoxification. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pp. 1148–1156, 2021b. * Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pp. 248–255. IEEE, 2009. * Doan et al. (2021) Khoa Doan, Yingjie Lao, and Ping Li. Backdoor attack with imperceptible input and latent modification. _Advances in Neural Information Processing Systems_ , 34:18944–18957, 2021. * Dräger et al. (2023) Nikolaus Dräger, Yonghao Xu, and Pedram Ghamisi. Backdoor attacks for remote sensing data with wavelet transform. _IEEE Transactions on Geoscience and Remote Sensing_ , 2023. * Farooq & Hafeez (2020) Muhammad Farooq and Abdul Hafeez. Covid-resnet: A deep learning framework for screening of covid19 from radiographs. _arXiv preprint arXiv:2003.14395_ , 2020. * Feng et al. (2022) Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, and Dacheng Tao. Fiba: Frequency-injection based backdoor attack in medical image analysis. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 20876–20885, 2022. * Gao et al. (2020) Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, and Hyoungshick Kim. Backdoor attacks and countermeasures on deep learning: A comprehensive review. _arXiv preprint arXiv:2007.10760_ , 2020. * Gao et al. (2023) Yinghua Gao, Yiming Li, Linghui Zhu, Dongxian Wu, Yong Jiang, and Shu-Tao Xia. Not all samples are born equal: Towards effective clean-label backdoor attacks. _Pattern Recognition_ , 139:109512, 2023. * Ge et al. (2021) Yunjie Ge, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. Anti-distillation backdoor attacks: Backdoors can really survive in knowledge distillation. In _Proceedings of the 29th ACM International Conference on Multimedia_ , pp. 826–834, 2021. * Goldblum et al. (2022) Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. 
_IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2022. * Gu et al. (2019) Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. _IEEE Access_ , 7:47230–47244, 2019. * Guo et al. (2023) Wei Guo, Benedetta Tondi, and Mauro Barni. A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection. _IEEE Transactions on Dependable and Secure Computing_ , 2023. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016. * Huynh-Thu & Ghanbari (2008) Quan Huynh-Thu and Mohammed Ghanbari. Scope of validity of psnr in image/video quality assessment. _Electronics letters_ , 44(13):800–801, 2008. * Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. * Li et al. (2021a) Chaoran Li, Xiao Chen, Derui Wang, Sheng Wen, Muhammad Ejaz Ahmed, Seyit Camtepe, and Yang Xiang. Backdoor attack on machine learning based android malware detectors. _IEEE Transactions on Dependable and Secure Computing_ , 19(5):3357–3370, 2021a. * Li et al. (2020) Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, and Xinpeng Zhang. Invisible backdoor attacks on deep neural networks via steganography and regularization. _IEEE Transactions on Dependable and Secure Computing_ , 18(5):2088–2105, 2020. * Li et al. (2021b) Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, and Jialiang Lu. Hidden backdoors in human-centric language models. In _Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security_ , pp. 3123–3140, 2021b. * Li et al. (2022a) Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey. _IEEE Transactions on Neural Networks and Learning Systems_ , 2022a. * Li et al. (2021c) Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, and Siwei Lyu. Invisible backdoor attack with sample-specific triggers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 16463–16472, 2021c. * Li et al. (2022b) Ziqiang Li, Chaoyue Wang, Heliang Zheng, Jing Zhang, and Bin Li. Fakeclr: Exploring contrastive learning for solving latent discontinuity in data-efficient gans. In _European Conference on Computer Vision_ , pp. 598–615. Springer, 2022b. * Li et al. (2022c) Ziqiang Li, Pengfei Xia, Rentuo Tao, Hongjing Niu, and Bin Li. A new perspective on stabilizing gans training: Direct adversarial training. _IEEE Transactions on Emerging Topics in Computational Intelligence_ , 7(1):178–189, 2022c. * Li et al. (2023a) Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, and Bin Li. A proxy-free strategy for practically improving the poisoning efficiency in backdoor attacks. _arXiv preprint arXiv:2306.08313_ , 2023a. * Li et al. (2023b) Ziqiang Li, Muhammad Usman, Rentuo Tao, Pengfei Xia, Chaoyue Wang, Huanhuan Chen, and Bin Li. A systematic survey of regularization and normalization in gans. _ACM Comput. Surv._ , 55(11), 2023b. ISSN 0360-0300. doi: 10.1145/3569928. * Li et al. (2023c) Ziqiang Li, Pengfei Xia, Xue Rui, and Bin Li. Exploring the effect of high-frequency components in gans training. _ACM Transactions on Multimedia Computing, Communications and Applications_ , 19(5):1–22, 2023c. * Li et al. 
(2023d) Ziqiang Li, Pengfei Xia, Hong Sun, Yueqi Zeng, Wei Zhang, and Bin Li. Explore the effect of data selection on poison efficiency in backdoor attacks. _arXiv preprint arXiv:2310.09744_ , 2023d. * Liao et al. (2018) Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, and David Miller. Backdoor embedding in convolutional neural network models via invisible perturbation. _arXiv preprint arXiv:1808.10307_ , 2018. * Liu et al. (2018) Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In _International symposium on research in attacks, intrusions, and defenses_ , pp. 273–294. Springer, 2018. * Liu et al. (2023) Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _ACM Computing Surveys_ , 55(9):1–35, 2023. * Liu et al. (2017) Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. 2017. * Liu et al. (2020) Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu. Reflection backdoor: A natural backdoor attack on deep neural networks. In _European Conference on Computer Vision_ , pp. 182–199. Springer, 2020. * Madhawa & Carlomagno (2022) Kaushalya Madhawa and Raul Carlomagno. Medclip: A pre-trained clip model for medical image search, 2022. https://github.com/Kaushalya/medclip. * Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ , 2017. * Min et al. (2023) Rui Min, Zeyu Qin, Li Shen, and Minhao Cheng. Towards stable backdoor purification through feature shift tuning. _arXiv preprint arXiv:2310.01875_ , 2023. * Moosavi-Dezfooli et al. (2017) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 1765–1773, 2017. * Nguyen & Tran (2021) Anh Nguyen and Anh Tran. Wanet–imperceptible warping-based backdoor attack. _arXiv preprint arXiv:2102.10369_ , 2021. * Nguyen et al. (2020) Thien Duc Nguyen, Phillip Rieger, Markus Miettinen, and Ahmad-Reza Sadeghi. Poisoning attacks on federated learning-based iot intrusion detection system. In _Proc. Workshop Decentralized IoT Syst. Secur. (DISS)_ , pp. 1–7, 2020. * Nguyen & Tran (2020) Tuan Anh Nguyen and Anh Tran. Input-aware dynamic backdoor attack. _Advances in Neural Information Processing Systems_ , 33:3454–3464, 2020. * Niu et al. (2022) Hongjing Niu, Hanting Li, Feng Zhao, and Bin Li. Domain-unified prompt representations for source-free domain generalization. _arXiv preprint arXiv:2209.14926_ , 2022. * Pan et al. (2022) Xudong Pan, Mi Zhang, Beina Sheng, Jiaming Zhu, and Min Yang. Hidden trigger backdoor attack on NLP models via linguistic style manipulation. In _31st USENIX Security Symposium (USENIX Security 22)_ , pp. 3611–3628, 2022. * Patashnik et al. (2021) Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 2085–2094, 2021. * Radford et al. 
(2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pp. 8748–8763. PMLR, 2021. * Rakin et al. (2020) Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. Tbt: Targeted neural network attack with bit trojan. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 13198–13207, 2020. * Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 10684–10695, 2022. * Saha et al. (2020) Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. Hidden trigger backdoor attacks. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, pp. 11957–11965, 2020. * Sandler et al. (2018) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 4510–4520, 2018. * Selvaraju et al. (2017) Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In _Proceedings of the IEEE international conference on computer vision_ , pp. 618–626, 2017. * Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_ , 2014. * Souri et al. (2021) Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, and Tom Goldstein. Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch. _arXiv preprint arXiv:2106.08970_ , 2021. * Souri et al. (2022) Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, and Tom Goldstein. Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch. _Advances in Neural Information Processing Systems_ , 35:19165–19178, 2022. * Turner et al. (2019) Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Label-consistent backdoor attacks. _arXiv preprint arXiv:1912.02771_ , 2019. * Wang et al. (2019a) Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In _2019 IEEE Symposium on Security and Privacy (SP)_ , pp. 707–723. IEEE, 2019a. * Wang et al. (2019b) Cheng Wang, Delei Chen, Lin Hao, Xuebo Liu, Yu Zeng, Jianwei Chen, and Guokai Zhang. Pulmonary image classification based on inception-v3 transfer learning model. _IEEE Access_ , 7:146533–146541, 2019b. * Wang et al. (2020) Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, and Tianle Chen. Backdoor attacks against transfer learning with pre-trained deep learning models. _IEEE Transactions on Services Computing_ , 2020. * Wang et al. (2004) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ , 13(4):600–612, 2004. * Wen et al. (2020) Long Wen, Xinyu Li, and Liang Gao. 
A transfer convolutional neural network for fault diagnosis based on resnet-50. _Neural Computing and Applications_ , 32:6111–6124, 2020. * Wong et al. (2022) Conghao Wong, Beihao Xia, Ziming Hong, Qinmu Peng, Wei Yuan, Qiong Cao, Yibo Yang, and Xinge You. View vertically: A hierarchical network for trajectory prediction via fourier spectrums. In _European Conference on Computer Vision_ , pp. 682–700. Springer, 2022. * Wu et al. (2022) Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, and Prateek Mittal. Just rotate it: Deploying backdoor attacks via rotation transformation. _arXiv preprint arXiv:2207.10825_ , 2022. * Wu et al. (2023) Yi Wu, Ziqiang Li, Chaoyue Wang, Heliang Zheng, Shanshan Zhao, Bin Li, and Dacheng Tao. Domain re-modulation for few-shot generative domain adaptation. _arXiv preprint arXiv:2302.02550_ , 2023. * Xia et al. (2022a) Beihao Xia, Conghao Wong, Qinmu Peng, Wei Yuan, and Xinge You. Cscnet: Contextual semantic consistency network for trajectory prediction in crowded spaces. _Pattern Recognition_ , 126:108552, 2022a. * Xia et al. (2022b) Pengfei Xia, Ziqiang Li, Wei Zhang, and Bin Li. Data-efficient backdoor attacks. _arXiv preprint arXiv:2204.12281_ , 2022b. * Xia et al. (2022c) Pengfei Xia, Hongjing Niu, Ziqiang Li, and Bin Li. Enhancing backdoor attacks with multi-level mmd regularization. _IEEE Transactions on Dependable and Secure Computing_ , 20(2):1675–1686, 2022c. * Xia et al. (2023) Pengfei Xia, Yueqi Zeng, Ziqiang Li, Wei Zhang, and Bin Li. Efficient trojan injection: 90% attack success rate using 0.04% poisoned samples, 2023. URL https://openreview.net/forum?id=ogsUO9JHZu0. * Xia et al. (2017) Xiaoling Xia, Cui Xu, and Bing Nan. Inception-v3 for flower classification. In _2017 2nd international conference on image, vision and computing (ICIVC)_ , pp. 783–787. IEEE, 2017. * Yang et al. (2021) Limin Yang, Wenbo Guo, Qingying Hao, Arridhana Ciptadi, Ali Ahmadzadeh, Xinyu Xing, and Gang Wang. Cade: Detecting and explaining concept drift samples for security applications. In _USENIX security symposium_ , pp. 2327–2344, 2021. * Yang & Newsam (2010) Yi Yang and Shawn Newsam. Bag-of-visual-words and spatial extensions for land-use classification. In _Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems_ , pp. 270–279, 2010. * Zeng et al. (2021) Yi Zeng, Won Park, Z Morley Mao, and Ruoxi Jia. Rethinking the backdoor attacks’ triggers: A frequency perspective. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 16473–16481, 2021. * Zeng et al. (2022) Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, and Ruoxi Jia. Narcissus: A practical clean-label backdoor attack with limited information. _arXiv preprint arXiv:2204.05255_ , 2022. * Zeng et al. (2023) Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, and Bin Li. Efficient trigger word insertion. _arXiv preprint arXiv:2311.13957_ , 2023. * Zhao et al. (2020) Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, and Yu-Gang Jiang. Clean-label backdoor attacks on video recognition models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 14443–14452, 2020. * Zhong et al. (2020) Haoti Zhong, Cong Liao, Anna Cinzia Squicciarini, Sencun Zhu, and David Miller. Backdoor embedding in convolutional neural network models via invisible perturbation. In _Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy_ , pp. 97–108, 2020. 
## Appendix A Appendix ### A.1 Detailed Background and Related Work #### A.1.1 Poisoning Efficiency in Backdoor Attacks Existing studies aiming at improving the poisoning efficiency of backdoor attacks can be categorized into two main areas. Designing Efficient Triggers. The design of efficient triggers that are easier for DNNs to learn has garnered significant interest. Researchers have recently drawn inspiration from Universal Adversarial Perturbations (UAPs) Moosavi-Dezfooli et al. (2017) and optimized UAPs on pre-trained clean models to create effective triggers, which have been widely utilized in various studies Zhong et al. (2020); Li et al. (2020); Doan et al. (2021). However, this approach requires a pre-trained clean model on the training set, which is not practical for data-constrained backdoor attacks. Selecting Efficient Poisoning Samples. Efficient sample selection for poisoning attacks is a critical yet under-explored aspect that is distinct from trigger design. Xia et al. (2022b) were among the first to investigate the contribution of different data to backdoor injection. Their research revealed that not all poisoning samples contribute equally, and appropriate sample selection can greatly enhance the efficiency of data in backdoor attacks. Additionally, various studies Li et al. (2023a; d); Gao et al. (2023); Guo et al. (2023) follow this setting, and the sample efficiency has been further improved. #### A.1.2 Poisoning Stealthiness in Backdoor Attacks Existing studies focused on increasing the stealthiness of backdoor attacks can be categorized into two main areas. Designing Invisible Triggers. The concept of invisible triggers aims to ensure that poisoning images are visually indistinguishable from clean samples, thus evading detection in both pixel and feature spaces. This perspective is the most straightforward approach to bypass defenses. Chen et al. (2017) first propose a blended strategy to evade human detection by blending clean samples with the trigger to create poisoning samples. Subsequent studies Zhong et al. (2020); Li et al. (2020); Doan et al. (2021) focus on constraining the norm of the trigger through optimization methods. Moreover, some studies have explored the use of natural patterns such as warping Nguyen & Tran (2021), rotation Wu et al. (2022), style transfer Cheng et al. (2021b), frequency Feng et al. (2022); Zeng et al. (2021), and reflection Liu et al. (2020) to create triggers that are more imperceptible to human inspection. In contrast to previous works that employ universal triggers, Li et al. (2021c) employ GAN models to generate sample-specific triggers, which are similar to adversarial examples and extremely imperceptible to humans. Clean-label Attacks. Clean-label attacks refer to backdoor attacks where the target labels of the poisoning samples align with their perceived labels. Turner et al. (2019) were the first to explore clean-label attacks by employing GAN-based and adversarial-based perturbations. Compared to standard backdoor attacks, clean-label attacks are typically less effective due to the model’s tendency to associate natural features, rather than backdoor triggers, with the target class. Recent studies have focused on aligning features Saha et al. (2020) or gradients Souri et al. (2021) between perturbed inputs from the target class and trigger-inserted inputs from the non-target class through pretraining on the entire training set. Additionally, one study Zeng et al. 
(2022) has proposed optimizing the backdoor trigger using only the knowledge about the target-class training data. In this approach, the trigger is optimized to point towards the interior of the target class, resulting in improved effectiveness. #### A.1.3 Contrastive Language-Image Pre-Training (CLIP) Model Our method introduces the Contrastive Language-Image Pre-Training (CLIP) Radford et al. (2021) model into backdoor injection, so we briefly review it here. CLIP is a revolutionary deep learning model developed by OpenAI, designed to connect text and images by bringing them closer in a shared latent space in a contrastive learning manner. The CLIP model is pre-trained on 400 million image-text pairs harvested from the Web and contains two encoders: the CLIP text encoder $\hat{\mathcal{E}}_{t}(\cdot)$ and the CLIP image encoder $\hat{\mathcal{E}}_{i}(\cdot)$. These encoders project text and images into the common CLIP embedding space. Since natural language is able to express a much wider set of visual concepts, CLIP has the ability to generalize across a wide range of tasks and domains, such as text-driven image manipulation Patashnik et al. (2021), zero-shot classification Cheng et al. (2021a), and domain generalization Niu et al. (2022). To the best of our knowledge, our paper is the first study to explore the usage of the CLIP model in the security community. ### A.2 Detailed Pipeline of Data-constrained Backdoor Attacks #### A.2.1 Examples of Backdoor Attacks in Our Study Here, we present three popular backdoor attack methods that serve as the baselines for our preliminary experiments, providing insight into the motivation discussed in Sec. 3. All attacks follow the pipeline described in Sec. 2.2. BadNets. BadNets Gu et al. (2019) is the pioneering backdoor attack in deep learning and is often used as a benchmark for subsequent research. It utilizes a $2\times 2$ attacker-specified pixel patch as the universal trigger pattern attached to benign samples. Blended. Chen et al. (2017) first discuss the requirement for invisibility in backdoor attacks. They propose that the poisoning image should be visually indistinguishable from its benign counterpart to evade human inspection. To meet this requirement, they introduce a blending strategy where poisoning images are created by blending the backdoor trigger with benign images. Formally, the poison generator can be formulated as $\mathcal{T}(x,t)=\lambda\cdot t+(1-\lambda)\cdot x$, where $\lambda$ represents the blend ratio (we set $\lambda=0.15$ for all experiments in this paper), and $t$ is an attacker-specified benign image serving as the universal trigger pattern. Universal Adversarial Perturbations (UAP). Inspired by Universal Adversarial Perturbations (UAPs) in adversarial examples, some studies Zhong et al. (2020); Li et al. (2020); Doan et al. (2021) propose optimizing a UAP on a pre-trained clean model as the natural trigger, formulated as $\mathcal{T}(x,t)=x+t$, where $t$ is a pre-defined UAP serving as the universal trigger pattern. It is worth noting that UAP-based backdoor attacks require a clean model pre-trained on the entire training set, which is not suitable for the discussed settings. However, to better explain our motivation that previous technologies exhibit significant performance degradation in data-constrained backdoor attacks, we assume the availability of a clean model pre-trained on the original training dataset in this section. It is important to acknowledge that this assumption does not hold in an actual attack scenario. 
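For reference, the three baseline poison generators $\mathcal{T}$ described above can each be written in a line or two; this sketch assumes image tensors with values in $[0,1]$, and the BadNets variant mirrors the `apply_trigger` helper sketched in Sec. 2.2.

```python
def badnets(x, patch, mask):
    # BadNets: stamp a small attacker-specified pixel patch onto x.
    return x * (1 - mask) + patch * mask

def blended(x, t, lam=0.15):
    # Blended: T(x, t) = lam * t + (1 - lam) * x, with blend ratio lam = 0.15.
    return lam * t + (1 - lam) * x

def uap(x, t):
    # UAP: T(x, t) = x + t, where t is a universal adversarial perturbation
    # optimized on a clean proxy model (unavailable in the data-constrained
    # setting; kept here only as a reference baseline).
    return (x + t).clamp(0, 1)
```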
#### A.2.2 Experimental Settings

To evaluate the performance of the three backdoor attack methods (BadNets, Blended, and UAP) under data-constrained scenarios, we conduct experiments on the CIFAR-10 dataset. Specifically, we consider three types of data constraints: number, class, and domain. The settings for the poisoning attacks follow those described in Sec. A.2.1. In all attacks, we set the attack-target label $k$ to category 0. For our experiments, we select the VGG-16 model as the victim model and employ SGD as the optimizer with a weight decay of 5e-4 and a momentum of 0.9. The batch size is set to 256, and the initial learning rate is set to 0.01. The learning rate is multiplied by 0.1 at the 35th and 55th epochs, and training is conducted for a total of 70 epochs.

#### A.2.3 Number-constrained Backdoor Attacks

Definition. Let $\mathcal{D^{\prime}}$ denote the data manipulable by the malicious source, and $\mathcal{D}$ represent all the data available to the data collector. In the number-constrained scenario, as illustrated in Fig. 5 (a), the data collector gathers data from multiple sources, including both malicious and benign sources, to form $\mathcal{D}$. The data provided by each data source is independently and identically distributed. In other words, $\mathcal{D}$ and $\mathcal{D^{\prime}}$ belong to the same distribution, but in terms of quantity, $N^{\prime}<N$. The setting of number-constrained backdoor attacks is similar to that of data-efficient backdoor attacks discussed in previous studies Xia et al. (2022b); Zhong et al. (2020). Both aim to improve the Attack Success Rate (ASR) under a low poisoning rate. However, previous studies assumed that the attacker has access to the entire training set $\mathcal{D}$, which enables efficient trigger design and sample selection. For example, some studies Zhong et al. (2020) draw inspiration from Universal Adversarial Perturbations (UAPs) in adversarial examples and propose to optimize a UAP on a clean model pre-trained on the training set as the natural trigger. Xia et al. (2022b) enhance the poisoning efficiency of backdoor attacks by selecting poisoning data from the entire training set. Although these methods have achieved remarkable results, they cannot be directly applied to number-constrained backdoor attacks due to the lack of access to the entire training set.

Experimental results. In this section, we investigate the performance degradation of previous methods in number-constrained backdoor attacks. As shown in Fig. 2 (a), the attack success rate decreases significantly as the number ($P$) of poisoning samples decreases, particularly for Blended backdoor attacks. It is worth noting that the UAP attack achieves relatively favorable results even at a low poisoning rate. This can be attributed to its use of a proxy model pre-trained on the entire training set ($\mathcal{D}$). However, such a proxy model is not accessible in our settings; we present the UAP results to demonstrate that performance degrades even when a pre-trained proxy model is available.

#### A.2.4 Class-constrained Backdoor Attacks

Definition. In the class-constrained scenario, let $\mathcal{D^{\prime}}$ represent the data manipulable by the malicious source, and $\mathcal{D}$ denote all the data available to the data collector. As depicted in part (b) of Fig. 5,
the data collector gathers data from multiple sources, including both malicious and benign sources, to form $\mathcal{D}$. Each data source provides data belonging to different categories, resulting in $\mathcal{D^{\prime}}$ containing only a subset of the categories present in $\mathcal{D}$. Therefore, $\mathcal{D}$ and $\mathcal{D^{\prime}}$ follow distinct distributions. More specifically, the accessible clean training set $\mathcal{D^{\prime}}\subset X\times Y^{\prime}$ ($\mathcal{D^{\prime}}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}$) is a subset of the entire training set $\mathcal{D}\subset X\times Y$ ($\mathcal{D}=\{(x_{i},y_{i})|i=1,\cdots,N\}$), where $Y^{\prime}\subset Y=\{1,2,\cdots,C\}$. Class-constrained backdoor attacks can be seen as a generalization of clean-label backdoor attacks Turner et al. (2019); Saha et al. (2020); Souri et al. (2021). In clean-label backdoor attacks, the accessible clean training set $\mathcal{D^{\prime}}$ is defined as $\mathcal{D^{\prime}}\subset X\times Y^{\prime}$, where $Y^{\prime}=\{k\}$ and $k$ represents the attack-target label.

Experimental results. In this section, we explore the performance degradation of previous methods in class-constrained backdoor attacks. As illustrated in Fig. 2 (b), the attack success rate decreases as the number of classes ($C^{\prime}$) in the poisoning set decreases, similar to the experimental results for the number-constrained backdoor attacks.

Figure 5: Three data-constrained attack scenarios: (a) number-constrained, where the data provided by each data source is independently and identically distributed; (b) class-constrained, where each data source provides data belonging to different categories; and (c) domain-constrained, where each data source provides data from different domains.

#### A.2.5 Domain-constrained Backdoor Attacks

Definition. In the domain-constrained scenario, as depicted in part (c) of Fig. 5, the data collector gathers data from multiple sources (both malicious and benign) to form $\mathcal{D}$. Each data source provides data from a different domain, resulting in $\mathcal{D^{\prime}}$ containing only a subset of the domains present in $\mathcal{D}$. Consequently, $\mathcal{D}$ and $\mathcal{D^{\prime}}$ belong to different distributions. We examine an extreme scenario in domain-constrained backdoor attacks, where the test dataset follows the same distribution as the benign source ($\mathcal{D}\setminus\mathcal{D^{\prime}}$) and is outside the domain of the malicious source $\mathcal{D^{\prime}}\subset X\times Y^{\prime}$ ($\mathcal{D^{\prime}}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}$). Evidently, images belonging to the same class can stem from diverse distributions, which we term domain distinctions. To illustrate, consider the "Car" category found in both the ImageNet and CIFAR-10 datasets. Despite sharing this category, the two datasets exhibit disparate distributions, placing them in distinct domains. In our context of "domain-constrained" backdoor attacks, a domain thus refers to one of the differing distributions within the same image class. This definition corresponds to a tangible attack scenario. For instance, consider a victim endeavoring to train a comprehensive classifier capable of generalizing across varied data distributions.
However, the attacker lacks comprehensive knowledge of the domains from which the victim sources data; thus, they can only contaminate images from one or several of the domains present in the training set.

Experimental results. To simulate the domain-constrained scenario, we conduct experiments with the following settings: we designate the CIFAR-10 dataset as the benign source and the ImageNet dataset as the malicious source, and evaluate the attack performance on the CIFAR-10 dataset. Fig. 2 (c) illustrates the results, showing a decrease in the attack success rate as the domain rate (the proportion of the poisoning set sampled from $\mathcal{D}\setminus\mathcal{D^{\prime}}$ rather than $\mathcal{D^{\prime}}$) decreases. This observation aligns with the experimental findings for the number-constrained and class-constrained backdoor attacks.

Figure 6: Analysis of the entanglement between benign and poisoning features in the number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks. (a) ASR of the three backdoor attacks ($p=1000$ for all experiments). (b) Visualizations of the backdoor injection and activation phases for the three attacks.

Figure 7: Grad-CAM of the backdoored model on the CIFAR-100 dataset.

### A.3 Analyzing the Entanglement Between Benign and Poisoning Features

In this section, we provide three observations on data-constrained backdoor attacks to demonstrate that entanglement between benign and poisoning features does exist in the backdoor injection process, and that it is the main reason why current attack methods fail in data-constrained scenarios.

Observation 1: BadNets outperforms Blended notably in data-constrained attack scenarios. In a practical experimental setting, BadNets and Blended exhibit comparable performance under unrestricted attack conditions (the leftmost point on the horizontal axis of Fig. 2). Conversely, in data-constrained attack scenarios, BadNets outperforms Blended notably. This disparity requires elucidation. BadNets employs a $2\times 2$ attacker-specified pixel patch as a universal trigger pattern attached to benign samples, whereas Blended employs an attacker-specified benign image for the same purpose. Comparatively, Blended's trigger exhibits greater feature similarity to benign images, engendering a more pronounced entanglement between poisoning and benign features. Accordingly, the performance of Blended in the dirty-label single-class scenario significantly lags behind other cases, lending credence to our hypothesis that entanglement underpins the degradation of data-constrained backdoor attacks.

Observation 2: Attack efficiency of number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks decreases in turn under the same poison rate. We further investigated our hypothesis and present our findings on entanglement in Fig. 6. As shown in Fig. 6 (a), the attack efficiency of number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks decreases in turn under the same poison rate. To understand the reason behind this phenomenon, we provide visualizations of the backdoor injection and activation phases for these three attacks in Fig. 6 (b). For the number-constrained backdoor attack, the distribution of poisoning samples (consisting of both benign and poisoning features) in the backdoor injection phase is the same as that in the backdoor activation phase.
In other words, both benign and poisoning features are activated simultaneously during both phases. However, for the dirty-label single-class backdoor attack, the distribution of poisoning samples (consisting of single-class benign and poisoning features) in the backdoor injection phase is different from that in the backdoor activation phase. During the injection phase, both benign and poisoning features are activated, but during the activation phase, only the poisoning feature is activated. This is why previous attack methods exhibit performance degradation in dirty-label single-class backdoor attacks. The clean-label single-class backdoor attack is similar to the dirty-label single-class backdoor attack in terms of the distribution of poisoning samples. However, during backdoor injection, there is competing activation between benign and poisoning features. (In the clean-label single-class backdoor attack, the benign feature of the accessible class (the same as the attack-target class) in both the poisoning and clean sets is labeled with the same label (e.g., "Fish" in Fig. 6), and the clean set contains more samples of the attack-target class. As a result, the presence of the benign feature in the poisoning set hampers the activation of the poisoning features. In contrast, in the dirty-label single-class backdoor attack, the benign feature of the accessible class is labeled differently in the poisoning and clean sets (e.g., the benign feature in the clean set is labeled "Frog", while the benign+poisoning feature in the poisoning set is labeled "Fish"). Consequently, the benign feature in the poisoning set does not impact the activation of the poisoning features.) As a result of this competition, the poisoning efficiency of clean-label single-class backdoor attacks is lower than that of dirty-label single-class backdoor attacks.

Observation 3: Substantial dissimilarities in activation between poisoned samples that share the same trigger. We adopted Grad-CAM Selvaraju et al. (2017) to further corroborate the correlation between the benign and backdoor features. Specifically, we applied this technique to the BadNets-based poisoned model on the CIFAR-100 dataset, as depicted in Figure 7. The results of these Grad-CAM visualizations underscore the substantial dissimilarities in activation between poisoned samples that share the same trigger. This visual evidence demonstrates the intricate entanglement between the benign and backdoor features during backdoor attacks.

### A.4 Threat Model

Attack scenario. The proliferation of large-scale artificial intelligence models, such as ChatGPT and Stable Diffusion, necessitates the collection of massive amounts of data from the web. However, the security and trustworthiness of this data cannot always be guaranteed. This data collection pipeline inadvertently introduces vulnerabilities that can be exploited by data-based backdoor attacks. Attackers can strategically inject poisoning data into the training dataset and publish it on the internet, potentially compromising the integrity and performance of these models. Unlike previous attack scenarios where all training data is sourced from a single provider, we consider a more realistic scenario in which victims collect data from multiple sources. In this scenario, attackers only have access to a portion of the training dataset.
This situation mirrors the real-world training process of models that utilize diverse public data. By acknowledging the challenges posed by multi-source data collection and limited attacker access, our study provides valuable insights into the security implications of such scenarios.

Attack goal. The objective of our paper is aligned with popular backdoor attacks, as seen in previous studies Gu et al. (2019); Li et al. (2021a). The attackers aim to activate a hidden trigger within the model by providing specific inputs, leading the model to produce incorrect results. Our attack strategy emphasizes three key properties: (i) Minimal side effects: the backdoor attack should not adversely impact the accuracy of the model on benign inputs. (ii) Effective backdoor: the attack should have a high success rate across various datasets and models, ensuring its efficiency. (iii) Stealthy attack: the backdoor attack should be inconspicuous and difficult to detect. Our research aims to develop backdoor attacks that strike a balance between effectiveness and preserving the integrity of the model's performance on legitimate inputs.

Attackers' prior knowledge. In order to simulate a realistic scenario, we assume that the attackers have no access to the models or training details. They possess only general knowledge about the class labels involved in the task. This assumption reflects a more challenging and practical setting, where attackers have limited information about the target system.

Attackers' capabilities. Building upon previous studies Gu et al. (2019), we assume that the attackers possess the capability to control the training data. However, we further impose a stricter assumption in this work: the attackers have control over only a portion of the training data. Consequently, we divide the attack scenario into three distinct tasks, each representing different capabilities of the attacker: (i) number-constrained backdoor attacks, where the attacker has access to only a subset of the training data; (ii) class-constrained backdoor attacks, where the attacker has access to only a subset of the classes in the training data; and (iii) domain-constrained backdoor attacks, where the attacker has access to only a subset of the domains within the training data. By considering these various constraints, we provide a comprehensive analysis of backdoor attacks in different data-constrained scenarios.

### A.5 CLIP for Zero-shot Classification

The pre-trained CLIP model Radford et al. (2021) possesses the ability to express a broad range of visual concepts and has been utilized as a general feature extractor in various tasks, including text-driven image manipulation Patashnik et al. (2021), zero-shot classification Cheng et al. (2021a), and domain generalization Niu et al. (2022). In this section, we introduce the pipeline of CLIP for zero-shot classification, which serves as inspiration for incorporating it into our clean feature erasing approach. CLIP achieves zero-shot classification by aligning text and image features. First, CLIP employs its text encoder, denoted as $\hat{\mathcal{E}}_{t}(\cdot)$, to embed the input prompts ("a photo of a $c_{i}$") into text features $T_{i}\in\mathbb{R}^{d}$, where $i\in\{1,2,\cdots,C\}$ indexes the classes. Subsequently, the image feature $I_{j}\in\mathbb{R}^{d}$ of image $x_{j}$ is embedded using the image encoder, denoted as $\hat{\mathcal{E}}_{i}(\cdot)$.
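In code, this pipeline is only a few lines. The sketch below uses OpenAI's open-source `clip` package and a hypothetical image file `example.png`; it encodes the prompts and the image, then applies the cosine-similarity prediction rule formalized in Eq. 12 just below.

```python
import torch
import clip  # OpenAI's open-source CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["airplane", "automobile", "bird"]  # illustrative classes c_i
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("example.png")).unsqueeze(0).to(device)

with torch.no_grad():
    T = model.encode_text(prompts)   # text features T_i
    I = model.encode_image(image)    # image feature I_j
    # Cosine similarity reduces to a dot product after L2 normalization.
    T = T / T.norm(dim=-1, keepdim=True)
    I = I / I.norm(dim=-1, keepdim=True)
    sims = (I @ T.T).squeeze(0)      # <I_j, T_i> for each class i

pred = sims.argmax().item()          # y_j = argmax_i <I_j, T_i>
print(class_names[pred])
```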
During the inference phase, the classification prediction $y_{j}$ is computed using the cosine similarity between $T_{i}$ and $I_{j}$. This can be expressed as:

$y_{j}=\mathop{\arg\max}_{i}(\langle I_{j},T_{i}\rangle),\quad i\in\{1,2,\cdots,C\},$ (12)

where $C$ represents the number of classes, and $\langle\cdot,\cdot\rangle$ represents the cosine similarity between two vectors.

### A.6 CLIP-based Universal Adversarial Perturbations

In this section, we employ the widely used pre-trained CLIP model Radford et al. (2021) to generate universal adversarial perturbations as the backdoor trigger. Xia et al. (2023) argue that deep models inherently possess flaws, and that it is easier to exploit and enhance an existing flaw to serve as a backdoor than to implant a new one from scratch (as BadNets Gu et al. (2019) and Blended Chen et al. (2017) do). Universal Adversarial Perturbations (UAPs) Zhong et al. (2020); Xia et al. (2023) utilize these inherent flaws as triggers, providing a straightforward method for augmenting the poisoning feature. However, this approach typically requires a feature extractor pre-trained on the entire training set, which is not practical in data-constrained backdoor attacks. To address this limitation, we propose a CLIP-based Universal Adversarial Perturbation (CLIP-UAP) method. Specifically, given an accessible clean training set $\mathcal{D^{\prime}}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}$ and an attack-target label $k$, the trigger is defined as:

$\delta_{\text{uap}}=\mathop{\arg\min}_{||\delta_{\text{uap}}||_{p}\leq\epsilon}\sum_{\left(x,y\right)\in\mathcal{D}^{\prime}}L(f_{CLIP}(x+\delta_{\text{uap}},\mathbb{P}),k),$ (13)

where $\mathbb{P}$ and $f_{CLIP}$ are defined as shown in Eq. 4 and Eq. 5, respectively. Similar to Eq. 6, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) Madry et al. (2017) to solve this constrained minimization problem. The optimization process can be expressed as follows:

$\delta^{t+1}_{\text{uap}}=\prod_{\epsilon}\big(\delta^{t}_{\text{uap}}-\alpha\cdot\text{sign}(\nabla_{\delta_{\text{uap}}}L(f_{CLIP}(x+\delta^{t}_{\text{uap}},\mathbb{P}),k))\big),$ (14)

where $t$, $\nabla_{\delta_{\text{uap}}}L(f_{CLIP}(x+\delta^{t}_{\text{uap}},\mathbb{P}),k)$, and $\prod$ hold the same meaning as in Eq. 6. Unlike the sample-wise clean feature erasing noise, the CLIP-UAP serves as a universal trigger for the entire training set. Therefore, it follows the optimization formulation in Eq. 14 to generate $\delta^{t+1}_{\text{uap}}$ at each step $t$, and the optimization is performed over all samples in the accessible clean training set $\mathcal{D^{\prime}}$. Consequently, the CLIP-UAP for the set $\mathcal{D^{\prime}}$ can be represented as $\delta_{\text{uap}}=\delta^{T}_{\text{uap}}$, and the poison generator is formulated as $\mathcal{T}(x,\delta_{\text{uap}})=x+\delta_{\text{uap}}$.

### A.7 Experimental Setup

#### A.7.1 Datasets

We use the following three popular image classification datasets:

CIFAR-10 Krizhevsky et al. (2009). CIFAR-10 is a tiny object classification dataset containing 50,000 training images and 10,000 testing images. Each image has a size of $32\times 32\times 3$ and belongs to one of 10 classes.

CIFAR-100 Krizhevsky et al. (2009). Similar to CIFAR-10, CIFAR-100 is also a tiny object classification dataset containing 50,000 training images and 10,000 testing images.
Each image has a size of $32\times 32\times 3$ and belongs to one of 100 classes.

ImageNet-50 Deng et al. (2009). ImageNet is the most popular object classification dataset, containing 1.3M training images and 50K testing images. Each image has a size of $224\times 224\times 3$ and belongs to one of 1000 classes. For simplicity, we randomly sampled 50 categories to compose a tiny dataset: ImageNet-50. Our ImageNet-50 dataset contains 60K training images and 2.5K testing images.

Figure 8: Visualizations of the poisoning samples with different triggers.

#### A.7.2 Model Architecture

We verify performance on three popular model architectures for image classification: VGG-16 Simonyan & Zisserman (2014), ResNet-18 He et al. (2016), and MobileNet-V2 Sandler et al. (2018). All of them are widely used in various areas of artificial intelligence, such as flower classification Xia et al. (2017), pulmonary image classification Wang et al. (2019b), fault diagnosis Wen et al. (2020), and Covid-19 screening Farooq & Hafeez (2020).

#### A.7.3 Baseline and Comparison

Our method contains two components: clean feature suppression and poisoning feature augmentation. Poisoning feature augmentation can be accomplished by designing efficient, data-independent triggers, while clean feature suppression is orthogonal to prior trigger design and can be integrated with any backdoor trigger. Therefore, we compare our two designed triggers, CLIP-based universal adversarial perturbations (CLIP-UAP) and CLIP-based contrastive feature augmentation (CLIP-CFA), with two popular triggers: BadNets Gu et al. (2019) and Blended Chen et al. (2017). All of them are independent of the training data and can therefore be easily implemented in the introduced data-constrained backdoor attacks. Although there also exist other advanced clean-label backdoor attacks Liu et al. (2020); Barni et al. (2019); Saha et al. (2020); Souri et al. (2022) and state-of-the-art backdoor attacks Zhong et al. (2020), a substantial proportion of them Liu et al. (2020); Saha et al. (2020); Souri et al. (2022); Zhong et al. (2020) operate within a threat model that necessitates a proxy model pre-trained on the entire training set. This precondition is challenging to fulfill in the context of a many-to-one (M2O) data collection attack scenario. To verify the validity of clean feature suppression, we integrate the proposed CLIP-based clean feature erasing (CLIP-CFE) with the currently designed triggers: our two designed triggers and the two baseline triggers.

Table 2: The Peak Signal-to-noise Ratio (PSNR) and Structural Similarity Index (SSIM) on the ImageNet-50 dataset. All results are computed on 500 examples.
| Metric | Trigger | Clean Feature Suppression | Number Constrained | Clean-label single-class | Dirty-label single-class | Domain Constrained |
|---|---|---|---|---|---|---|
| PSNR ($\uparrow$) | BadNets | w/o CLIP-CFE | 32.18 | 31.26 | 30.91 | 32.22 |
| | | w/ CLIP-CFE | 32.19 | 31.41 | 31.39 | 32.17 |
| | Blended | w/o CLIP-CFE | 21.68 | 21.37 | 20.90 | 21.60 |
| | | w/ CLIP-CFE | 21.67 | 21.31 | 20.99 | 21.58 |
| | CLIP-UAP | w/o CLIP-CFE | 33.11 | 33.66 | 32.84 | 32.64 |
| | | w/ CLIP-CFE | 33.10 | 33.64 | 32.81 | 32.59 |
| | CLIP-CFA | w/o CLIP-CFE | 32.74 | 32.54 | 32.70 | 32.46 |
| | | w/ CLIP-CFE | 32.72 | 32.52 | 32.67 | 32.41 |
| SSIM ($\uparrow$) | BadNets | w/o CLIP-CFE | 0.995 | 0.995 | 0.995 | 0.995 |
| | | w/ CLIP-CFE | 0.995 | 0.995 | 0.995 | 0.995 |
| | Blended | w/o CLIP-CFE | 0.794 | 0.820 | 0.730 | 0.787 |
| | | w/ CLIP-CFE | 0.827 | 0.834 | 0.772 | 0.819 |
| | CLIP-UAP | w/o CLIP-CFE | 0.719 | 0.822 | 0.641 | 0.702 |
| | | w/ CLIP-CFE | 0.843 | 0.885 | 0.793 | 0.822 |
| | CLIP-CFA | w/o CLIP-CFE | 0.707 | 0.795 | 0.637 | 0.692 |
| | | w/ CLIP-CFE | 0.830 | 0.857 | 0.788 | 0.810 |

#### A.7.4 Implementations

To demonstrate the effectiveness of our proposed method, we conduct experiments on three datasets (CIFAR-10, CIFAR-100, and ImageNet-50). For the CIFAR-10 and CIFAR-100 datasets, we choose VGG-16, ResNet-18, and MobileNet-V2 as the victim models. All models use the SGD optimizer with a momentum of 0.9, a weight decay of 5e-4, and a learning rate of 0.01 (0.1 for MobileNet-V2), which is multiplied by 0.1 at epochs 35 and 55. For the ImageNet-50 dataset, we use VGG-16 and MobileNet-V2 as the victim models, with the SGD optimizer, a momentum of 0.9, a weight decay of 5e-4, and a learning rate of 0.05 (0.01 for VGG-16), which is multiplied by 0.1 at epochs 35 and 55. Training runs for 70 epochs in total. In the number-constrained scenario, we conduct experiments with poisoning rates of 0.01 (P=500), 0.015 (P=750), and 0.007 (P=453) for the CIFAR-10, CIFAR-100, and ImageNet-50 datasets, respectively. In the class-constrained scenario, we use poisoning rates of 0.02 (P=1000), 0.01 (P=500), and 0.02 (P=1296) for the three datasets. We choose two extreme scenarios in the class-constrained backdoor attacks, denoted the clean-label single-class backdoor attack and the dirty-label single-class backdoor attack. Specifically, the accessible class category is set to $Y^{\prime}=\{k\}$ for the clean-label single-class backdoor attack and $Y^{\prime}=\{c\},c\neq k$ for the dirty-label single-class backdoor attack. In the domain-constrained scenario, we use a poisoning rate of 0.02 (P=1000, 1000, and 1296, respectively) for the three datasets. The out-of-domain samples in all experiments are selected from ImageNet-1K categories that are not in ImageNet-50. The attack-target class $k$ is set to category 0 for all experiments on the above three data-constrained backdoor attacks.

#### A.7.5 Evaluation Metrics

We evaluate the performance of our method in terms of harmlessness: Benign Accuracy (BA); effectiveness: Attack Success Rate (ASR); and stealthiness: Peak Signal-to-noise Ratio (PSNR) Huynh-Thu & Ghanbari (2008) and Structural Similarity Index (SSIM) Wang et al. (2004).

Benign Accuracy (BA). BA is the clean accuracy on the testing set $\mathcal{D}_{t}=\{(x_{i},y_{i})|i=1,\cdots,M\}$ and is used to evaluate the harmlessness of the backdoor. When the BA of the infected model is similar to the accuracy of the clean model, we consider the attack technique harmless.

Attack Success Rate (ASR).
ASR is used to evaluate the effectiveness of the backdoor attack; it is the fraction of testing images stamped with the specific trigger that are predicted as the target class. Specifically, for the $M^{\prime}$ images in the testing set that do not belong to the attack-target class ($k$), the ASR is formulated as:

$\text{ASR}=\frac{\sum_{i=1}^{M^{\prime}}\mathbb{I}(f(\mathcal{T}(x_{i},t);\Theta)=k)}{M^{\prime}},\quad(x_{i},y_{i})\in\mathcal{D}^{\prime}_{t},$ (15)

where $\mathcal{D}^{\prime}_{t}$ is the subset of the testing set $\mathcal{D}_{t}$ ($\mathcal{D}^{\prime}_{t}\subset\mathcal{D}_{t}$) containing the images whose label is not the attack-target class $k$.

Peak Signal-to-noise Ratio (PSNR) Huynh-Thu & Ghanbari (2008). PSNR measures the similarity between clean images and the corresponding poisoning images. Given an image $x_{i}\in\mathcal{D}_{t}$ and the corresponding poisoning image $x^{\prime}_{i}=\mathcal{T}(x_{i},t)$, the PSNR is formulated as:

$\operatorname{PSNR}=\frac{1}{M}\sum_{i=1}^{M}\operatorname{PSNR}_{i}(x_{i},x^{\prime}_{i}),$ (16)

$\text{where}\quad\operatorname{PSNR}_{i}(x_{i},x^{\prime}_{i})=10\log_{10}\left(255^{2}/\operatorname{MSE}(x_{i},x^{\prime}_{i})\right),\quad\operatorname{MSE}(f,g)=\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(f_{ij}-g_{ij}\right)^{2},$

and $H$ and $W$ are the height and width of the image, respectively. A larger PSNR means greater similarity between clean images and the corresponding poisoning images, and therefore greater stealthiness of the backdoor attack.

Structural Similarity Index (SSIM) Wang et al. (2004). Similar to PSNR, SSIM is another metric representing the stealthiness of backdoor attacks, formulated as:

$\operatorname{SSIM}=\frac{1}{M}\sum_{i=1}^{M}\operatorname{SSIM}_{i}(x_{i},x^{\prime}_{i}),$ (17)

$\text{where}\quad\operatorname{SSIM}_{i}(x_{i},x^{\prime}_{i})=l(x_{i},x^{\prime}_{i})\cdot c(x_{i},x^{\prime}_{i})\cdot s(x_{i},x^{\prime}_{i}),\quad\left\{\begin{array}{l}l(f,g)=\frac{2\mu_{f}\mu_{g}+C_{1}}{\mu_{f}^{2}+\mu_{g}^{2}+C_{1}}\\ c(f,g)=\frac{2\sigma_{f}\sigma_{g}+C_{2}}{\sigma_{f}^{2}+\sigma_{g}^{2}+C_{2}}\\ s(f,g)=\frac{\sigma_{fg}+C_{3}}{\sigma_{f}\sigma_{g}+C_{3}}\end{array}\right.,$

where $\mu$ and $\sigma$ are the mean and standard deviation of the image, respectively. Similarly, a larger SSIM means greater similarity between clean images and the corresponding poisoning images, and therefore greater stealthiness of the backdoor attack. (A minimal code sketch of the ASR and PSNR computations is given below, after the Figure 19 caption.)

Figure 9: Defense results of pruning Liu et al. (2018) on the number-constrained backdoor attack with the CIFAR-100 dataset and VGG-16 model, where the number of clean samples owned by the defender is 50 and the poisoning rate is $2\%$.

Figure 10: Defense results of Neural Cleanse Wang et al. (2019a) on the number-constrained backdoor attack with the CIFAR-100 dataset and ResNet-18 model, where the number of clean samples owned by the defender is 1500 and the poisoning rate is $2\%$.

Figure 11: Defense results of FST Min et al. (2023) on the number-constrained backdoor attack with the CIFAR-100 dataset and ResNet-18 model, where the number of clean samples owned by the defender ranges from 1000 to 2500 and the poisoning rate is $2\%$.

Figure 12: Attack success rate (ASR) of the number-constrained backdoor attacks on the CIFAR-10 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE).
The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 13: The attack success rate (ASR) of the class-constrained backdoor attacks (the accessible category $Y^{\prime}$ is set to {0}) on the CIFAR-10 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 14: The attack success rate (ASR) of the class-constrained backdoor attacks (the accessible category $Y^{\prime}$ is set to {1}) on the CIFAR-10 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 15: The attack success rate (ASR) of the domain-constrained backdoor attacks (domain rate set to 0) on the CIFAR-10 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 16: The attack success rate on the CIFAR-10 dataset at different poisoning rates. All results are computed as the mean of five different runs.

Figure 17: The attack success rate on the CIFAR-10 dataset for different accessible classes of poisoning samples. All results are computed as the mean of five different runs.

Figure 18: The attack success rate on the CIFAR-10 dataset at different proportions of in-domain samples. All results are computed as the mean of five different runs.

Table 3: The Benign Accuracy (BA) on the CIFAR-10 dataset. All results are computed as the mean of 5 different runs. Columns under each scenario are VGG-16 (V-16) / ResNet-18 (R-18) / MobileNet-V2 (M-2).

| Trigger | Clean Feature Suppression | Number Constrained | Class Constrained ($Y^{\prime}=\{0\}$) | Class Constrained ($Y^{\prime}=\{1\}$) | Domain Constrained |
|---|---|---|---|---|---|
| BadNets | w/o CLIP-CFE | 0.920 / 0.926 / 0.928 | 0.921 / 0.928 / 0.928 | 0.923 / 0.928 / 0.928 | 0.921 / 0.928 / 0.929 |
| | w/ CLIP-CFE | 0.920 / 0.927 / 0.928 | 0.920 / 0.927 / 0.928 | 0.920 / 0.926 / 0.927 | 0.920 / 0.927 / 0.928 |
| Blended | w/o CLIP-CFE | 0.921 / 0.927 / 0.930 | 0.921 / 0.928 / 0.929 | 0.919 / 0.928 / 0.928 | 0.920 / 0.926 / 0.929 |
| | w/ CLIP-CFE | 0.921 / 0.926 / 0.929 | 0.921 / 0.927 / 0.929 | 0.920 / 0.927 / 0.929 | 0.921 / 0.927 / 0.929 |
| CLIP-UAP | w/o CLIP-CFE | 0.922 / 0.928 / 0.927 | 0.921 / 0.928 / 0.928 | 0.920 / 0.928 / 0.928 | 0.920 / 0.930 / 0.928 |
| | w/ CLIP-CFE | 0.922 / 0.928 / 0.929 | 0.922 / 0.928 / 0.929 | 0.920 / 0.927 / 0.929 | 0.922 / 0.928 / 0.928 |
| CLIP-CFA | w/o CLIP-CFE | 0.922 / 0.928 / 0.928 | 0.921 / 0.927 / 0.927 | 0.920 / 0.928 / 0.928 | 0.921 / 0.929 / 0.928 |
| | w/ CLIP-CFE | 0.922 / 0.929 / 0.927 | 0.920 / 0.927 / 0.928 | 0.921 / 0.929 / 0.929 | 0.921 / 0.929 / 0.928 |

Figure 19: Attack success rate (ASR) of the number-constrained backdoor attacks on the ImageNet-50 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.
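As referenced in Sec. A.7.5, here is a minimal NumPy sketch of the ASR and PSNR computations from Eqs. 15 and 16. The function names and the 8-bit pixel-range assumption are ours.

```python
import numpy as np

def attack_success_rate(preds: np.ndarray, labels: np.ndarray, k: int) -> float:
    """ASR (Eq. 15): fraction of trigger-stamped test images, excluding those
    whose true class is already the target k, that the model predicts as k."""
    mask = labels != k               # keep D'_t: images not of the target class
    return float(np.mean(preds[mask] == k))

def psnr(x: np.ndarray, x_poison: np.ndarray) -> float:
    """PSNR (Eq. 16) between a clean image and its poisoned counterpart,
    assuming 8-bit pixel values in [0, 255]."""
    mse = np.mean((x.astype(np.float64) - x_poison.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```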
Figure 20: The attack success rate (ASR) of the class-constrained backdoor attacks (the accessible category $Y^{\prime}$ is set to {0}) on the ImageNet-50 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 21: The attack success rate (ASR) of the class-constrained backdoor attacks (the accessible category $Y^{\prime}$ is set to {1}) on the ImageNet-50 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

Figure 22: The attack success rate (ASR) of the domain-constrained backdoor attacks (domain rate set to 0) on the ImageNet-50 dataset. The red points represent results w/o CLIP-based Clean Feature Erasing (CFE), while the green points represent results w/ CLIP-based Clean Feature Erasing (CFE). The experiment is repeated 5 times, and the results are computed as the mean of the five runs.

### A.8 RQ3: Are the Proposed Technologies Stealthy for Victims?

Qualitative and quantitative results. Fig. 8 showcases examples of poisoning images generated by different attacks on the ImageNet-50 dataset (as noted in Appendix A.7, CIFAR-10 and CIFAR-100 have low resolution, which makes visualizations unclear, so we show results on ImageNet-50 in this section). While our CLIP-UAP and CLIP-CFA may not achieve the highest stealthiness in terms of SSIM (as indicated in Table 2), the poisoning images generated by our methods appear more natural to human inspection than those of the baseline attacks. Additionally, incorporating CLIP-CFE has minimal impact on both PSNR and the natural appearance of the images, while achieving higher stealthiness in terms of SSIM.

Stealthiness against defense methods. Attack stealthiness also needs to be evaluated against algorithmic defenses. (i) As depicted in Figure 9, our preliminary evaluation on the VGG-16 model and CIFAR-100 dataset demonstrates that our proposed method withstands the pruning-based defense method of Liu et al. (2018) more effectively than other attack methods. (ii) As depicted in Figure 10, our evaluation on the ResNet-18 model and CIFAR-100 dataset demonstrates that our proposed method withstands another defense method, Neural Cleanse Wang et al. (2019a), more effectively than other attack methods. (iii) Similarly, as depicted in Figure 11, our evaluation on the ResNet-18 model and CIFAR-100 dataset demonstrates that our proposed method withstands the fine-tuning-based defense method FST Min et al. (2023) more effectively than other attack methods.

### A.9 Experiments on the CIFAR-10 Dataset

#### A.9.1 RQ1: Are the Proposed Technologies Effective on Different Backdoor Attacks?

In this section, we utilize our proposed technologies to attack different target models on the CIFAR-10 dataset. Our objective is to verify the effectiveness of the attack and calculate the ASR for each target model. The baseline attack methods, BadNets Gu et al. (2019) and Blended Chen et al. (2017), were introduced in Sec. A.2.1.
The attack performance of the number-constrained, clean-label single-class (class-constrained), dirty-label single-class (class-constrained), and out-of-domain (domain-constrained) backdoor attacks on the CIFAR-10 dataset is shown in Figs. 16, 13, 14, and 18, respectively.

CLIP-based poisoning feature augmentation is more effective than previous attack methods. Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets Gu et al. (2019) and Blended Chen et al. (2017) baselines in terms of ASR under most attacks and datasets. This confirms that the proposed poisoning feature augmentation generates more efficient triggers than other methods.

CLIP-based clean feature suppression is useful for different attack methods. Our proposed CLIP-CFE method improves poisoning effectiveness in most cases compared to the baselines without CLIP-based Clean Feature Erasing. Only in a few cases do BadNets and CLIP-UAP slightly outperform the corresponding methods with CFE.

#### A.9.2 RQ2: Are the Three Proposed Technologies Harmless to Benign Accuracy?

Table 3 illustrates that our proposed CLIP-UAP and CLIP-CFA methods have similar or even better average Benign Accuracy (BA) compared to the baseline methods BadNets Gu et al. (2019) and Blended Chen et al. (2017). Additionally, our proposed CLIP-CFE method has no negative effect on BA, confirming that our technologies are harmless to benign accuracy under various settings and different backdoor attacks.

#### A.9.3 RQ3: Are the Proposed Technologies Effective for Different Poisoning Settings?

Ablation of different poison rates on the number-constrained backdoor attacks. We conducted ablation studies to verify the effectiveness of the proposed methods as the number of poisoning samples (the poisoning rate) is reduced in number-constrained backdoor attacks. The results in Fig. 16 illustrate that: i) the attack success rate increases with the poisoning rate for different attacks; ii) our proposed CLIP-UAP and CLIP-CFA methods outperform BadNets Gu et al. (2019) and Blended Chen et al. (2017); and iii) the proposed CLIP-CFE further improves poisoning effectiveness for the different triggers.

Ablation of different poison classes on the class-constrained backdoor attacks. In this section, we conducted ablation studies to verify the effectiveness of the proposed methods as the number of poisoning classes increases in class-constrained backdoor attacks. The results in Fig. 17 illustrate that: i) the attack success rate increases with the number of poisoning classes for different attacks; ii) the attack success rate of the clean-label single-class attack is lower than that of the dirty-label single-class attack; iii) our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets Gu et al. (2019) and Blended Chen et al. (2017) methods; and iv) the proposed CLIP-CFE method further improves poisoning effectiveness with different triggers.

Ablation of different domain rates on the domain-constrained backdoor attacks. In this section, we conducted ablation studies to verify the effectiveness of the proposed methods as the domain rate increases in domain-constrained backdoor attacks. The results in Fig. 18 illustrate that: i) the attack success rate increases with the domain rate for different attacks; ii) our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets Gu et al. (2019) and Blended Chen et al.
(2017) methods; and iii) the proposed CLIP-CFE method further improves poisoning effectiveness with different triggers.

### A.10 Experiments on the ImageNet-50 Dataset

#### A.10.1 RQ1: Are the Proposed Technologies Effective on Different Backdoor Attacks?

In this section, we utilize our proposed technologies to attack different target models on the ImageNet-50 dataset and calculate the ASR for each target model to verify attack effectiveness. The baseline attack methods, BadNets Gu et al. (2019) and Blended Chen et al. (2017), were introduced in Sec. A.2.1. Figs. 19, 20, 21, and 22 show the attack performance of the number-constrained, clean-label single-class (class-constrained), dirty-label single-class (class-constrained), and out-of-domain (domain-constrained) backdoor attacks on the ImageNet-50 dataset, respectively.

CLIP-based poisoning feature augmentation is more effective than previous attack methods. Our proposed CLIP-UAP and CLIP-CFA methods consistently outperform the BadNets Gu et al. (2019) and Blended Chen et al. (2017) baselines under different attacks and datasets. This confirms that the proposed poisoning feature augmentation generates more efficient triggers than other methods.

CLIP-based clean feature suppression is useful for different attack methods. Our proposed CLIP-CFE method improves poisoning effectiveness in most cases compared to the baselines without CLIP-based Clean Feature Erasing. Only in the MobileNet-V2 results of the number-constrained backdoor attacks (right part of Fig. 19) does CLIP-UAP slightly outperform the corresponding method with CFE.

#### A.10.2 RQ2: Are the Three Proposed Technologies Harmless to Benign Accuracy?

Table 4: The Benign Accuracy (BA) on the ImageNet-50 dataset. All results are computed as the mean of 5 different runs. Columns under each scenario are VGG-16 (V-16) / MobileNet-V2 (M-2).

| Trigger | Clean Feature Suppression | Number Constrained | Class Constrained ($Y^{\prime}=\{0\}$) | Class Constrained ($Y^{\prime}=\{1\}$) | Domain Constrained |
|---|---|---|---|---|---|
| BadNets | w/o CLIP-CFE | 0.784 / 0.731 | 0.788 / 0.731 | 0.788 / 0.733 | 0.783 / 0.735 |
| | w/ CLIP-CFE | 0.787 / 0.733 | 0.788 / 0.735 | 0.790 / 0.734 | 0.788 / 0.735 |
| Blended | w/o CLIP-CFE | 0.789 / 0.733 | 0.788 / 0.736 | 0.787 / 0.727 | 0.788 / 0.731 |
| | w/ CLIP-CFE | 0.790 / 0.731 | 0.787 / 0.734 | 0.788 / 0.731 | 0.786 / 0.728 |
| CLIP-UAP | w/o CLIP-CFE | 0.788 / 0.729 | 0.788 / 0.726 | 0.786 / 0.728 | 0.789 / 0.733 |
| | w/ CLIP-CFE | 0.785 / 0.732 | 0.786 / 0.729 | 0.786 / 0.734 | 0.791 / 0.730 |
| CLIP-CFA | w/o CLIP-CFE | 0.784 / 0.730 | 0.789 / 0.732 | 0.785 / 0.729 | 0.787 / 0.735 |
| | w/ CLIP-CFE | 0.787 / 0.732 | 0.786 / 0.734 | 0.789 / 0.735 | 0.784 / 0.732 |

Table 4 illustrates that our proposed CLIP-UAP and CLIP-CFA methods have similar or even better average Benign Accuracy (BA) compared to the baseline methods BadNets Gu et al. (2019) and Blended Chen et al. (2017). Additionally, our proposed CLIP-CFE method has no negative effect on BA, confirming that our technologies are harmless to benign accuracy under various settings and different backdoor attacks.

### A.11 Experiments on More Complex Constraints in Data-constrained Backdoor Attacks

Previous sections have primarily focused on specific sub-variants of the number, class, and domain constraints, which might not comprehensively represent all real-world limitations. This section delves into more intricate constraints.
Specifically, we investigate two additional configurations. Config A: poisoning rate = 0.01 (P=500), poisoning classes = 1, and domain rate = 0.5. Config B: poisoning rate = 0.01 (P=500), poisoning classes = 3, and domain rate = 0.25. The experiments are performed using the VGG-16 model and the CIFAR-100 dataset. As depicted in Fig. 23, our methods and conclusions remain equally viable in scenarios involving more complex data constraints.

Figure 23: The Attack Success Rate (ASR) under more complex data-constrained attacks. (a) Config A: poisoning rate = 0.01 (P=500), poisoning classes = 1, and domain rate = 0.5. (b) Config B: poisoning rate = 0.01 (P=500), poisoning classes = 3, and domain rate = 0.25.

### A.12 Experiments on Domains Different From CLIP's Training Domain

In this section, we verify the performance of the proposed methods on a domain that differs drastically from CLIP's training set. Specifically, we select the commonly used UCM dataset Yang & Newsam (2010) from the field of remote sensing.

UCM dataset. The UCM dataset contains 100 images in each of 21 categories, for a total of 2100 images. Each image has a size of $256\times 256$ and a spatial resolution of 0.3 m/pixel. The images are captured in the optical spectrum and represented in the RGB domain. The data are extracted from aerial ortho imagery from the U.S. Geological Survey (USGS) National Map. The categories in this dataset are: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium-density residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis courts.

Experiments. Similar to Dräger et al. (2023), we randomly select 1050 samples to form the training set, while the remaining samples make up the test set. The attack-target class $k$ is set to "agricultural". The poisoning rate is set to 0.05 in all experiments. Table 5 reports the Attack Success Rate of number-constrained and class-constrained backdoor attacks on the UCM dataset. The results indicate a performance decline on the UCM dataset. However, as in Sec. A.15.1, by loosening the constraint ($\epsilon$) on the optimized noise, we observed an improved attack success rate for our methods compared to the baseline methods. Additionally, replacing CLIP with the Satellite Arto et al. (2021) model, a large model fine-tuned on remote sensing images, further increased the attack success rate. These outcomes demonstrate the adaptability of our methods to various domains by replacing CLIP with domain-specific pre-trained models.

Table 5: The Attack Success Rate (ASR) on the VGG-16 model and the UCM Yang & Newsam (2010) dataset. All results are computed as the mean of 5 different runs. In the class-constrained scenario, the number of poisoning classes is set to 1, and we choose the dirty-label setting.
| Trigger | Clean Feature Suppression | Number: CLIP ($\epsilon$=8/255) | Number: CLIP ($\epsilon$=16/255) | Number: Satellite ($\epsilon$=8/255) | Class: CLIP ($\epsilon$=8/255) | Class: CLIP ($\epsilon$=16/255) | Class: Satellite ($\epsilon$=8/255) |
|---|---|---|---|---|---|---|---|
| BadNets | w/o CLIP-CFE | 0.916 | 0.916 | 0.916 | 0.692 | 0.692 | 0.692 |
| | w/ CLIP-CFE | 0.921 | 0.944 | 0.952 | 0.661 | 0.791 | 0.837 |
| Blended | w/o CLIP-CFE | 0.798 | 0.798 | 0.798 | 0.329 | 0.329 | 0.329 |
| | w/ CLIP-CFE | 0.805 | 0.882 | 0.904 | 0.338 | 0.474 | 0.526 |
| CLIP-UAP | w/o CLIP-CFE | 0.860 | 0.895 | 0.917 | 0.722 | 0.783 | 0.823 |
| | w/ CLIP-CFE | 0.962 | 0.981 | 0.990 | 0.719 | 0.824 | 0.865 |
| CLIP-CFA | w/o CLIP-CFE | 0.898 | 0.924 | 0.937 | 0.585 | 0.713 | 0.735 |
| | w/ CLIP-CFE | 0.961 | 0.980 | 0.989 | 0.717 | 0.805 | 0.833 |

### A.13 Experiments on Fine-grained Datasets

In this section, we assess the effectiveness of our proposed methodologies on fine-grained datasets, focusing on the widely recognized Oxford-Flowers dataset from the domain of fine-grained image classification.

Oxford-Flowers dataset. The Oxford-Flowers dataset comprises 102 categories of common UK flower images, totaling 8189 images, with each category containing 40 to 258 images. Our experimental setup employs 6140 images for training, while the remaining samples constitute the test set.

Experiments. For our experiments, we set the attack-target class ($k$) to 0 and maintain a consistent poisoning rate of 0.05 (P=307) across all trials. As shown in Figure 24 (a), we report the Attack Success Rate of number-constrained backdoor attacks on the Oxford-Flowers dataset. Our findings demonstrate the superior efficacy of CLIP-based poisoning feature augmentation compared to prior attack methodologies. Additionally, CLIP-based clean feature suppression emerges as a valuable strategy across diverse attack methods. Moreover, as depicted in Figure 24 (b), our results indicate that our technologies maintain benign accuracy on par with baseline methods, even under varying settings and diverse backdoor attacks. This underlines the robustness and non-disruptive nature of our methodologies.

Figure 24: The Attack Success Rate (ASR) and Benign Accuracy (BA) on the Oxford-Flowers dataset. Poisoning rate = 0.05 (P=307). All results were computed as the mean of five different runs.

### A.14 Experiments on the ViT Architecture

For our experiments, we set the attack-target class ($k$) to 0 and maintain a consistent poisoning rate of 0.01 (P=500) across all trials. As shown in Figure 25 (a), we report the Attack Success Rate of number-constrained backdoor attacks on the ViT-Small architecture and the CIFAR-10 dataset. Our findings demonstrate the superior efficacy of CLIP-based poisoning feature augmentation compared to prior attack methodologies. Additionally, CLIP-based clean feature suppression emerges as a valuable strategy across diverse attack methods. Moreover, as depicted in Figure 25 (b), our results indicate that our technologies maintain benign accuracy on par with baseline methods, even under varying settings and diverse backdoor attacks. This underlines the robustness and non-disruptive nature of our methodologies.

Figure 25: The Attack Success Rate (ASR) and Benign Accuracy (BA) on the ViT-Small architecture and CIFAR-10 dataset. Poisoning rate = 0.01 (P=500). All results were computed as the mean of five different runs.
### A.15 Discussion

#### A.15.1 Performance Degradation in the Clean-label Single-class Backdoor Attack

As depicted in Fig. 3 (b), Fig. 13, and Fig. 20, both the baseline and our attack methods exhibit a poor Attack Success Rate (ASR) in the clean-label single-class backdoor attack. In this section, we aim to enhance the attack strength of our methods and devise a more efficient attack strategy for this setting. In our optimization equations, namely Eq. 3, Eq. 13, and Eq. 10, we impose constraints on the optimized noise, denoted as $\delta_{i}$, $\delta_{\text{uap}}$, and $\delta_{\text{con}}$, respectively. These constraints are specified as $||\delta_{i}||_{p}\leq\epsilon$, $||\delta_{\text{uap}}||_{p}\leq\epsilon$, and $||\delta_{\text{con}}||_{p}\leq\epsilon$, where $||\cdot||_{p}$ denotes the $L_{p}$ norm, and we set $\epsilon$ to $8/255$ to ensure the stealthiness of the backdoor attacks, as in our previous experiments. To bolster the attack strength and thereby increase the ASR in the clean-label single-class backdoor attack, we investigate the impact of relaxing the constraint on $\delta_{i}$. As demonstrated in Fig. 26 and Fig. 27, significant (more than $500\%$) improvements are observed in the ASR of the clean-label single-class backdoor attack when we set the constraint on $\delta_{i}$ to $||\delta_{i}||_{p}\leq 16/255$. This finding validates the efficacy of our method in the clean-label single-class backdoor attack, albeit at the expense of some stealthiness. This sacrifice, which is common in previous backdoor attack methods Zeng et al. (2022), is a low-cost trade-off.

Figure 26: The attack success rate on the CIFAR-10 dataset at different $\epsilon$. All results are computed as the mean of five different runs.

Figure 27: The attack success rate on the CIFAR-100 dataset at different $\epsilon$. All results are computed as the mean of five different runs.

#### A.15.2 Domain-constrained Backdoor Attacks Are Easier Than Class-constrained Backdoor Attacks

Fig. 28 visualizes the Attack Success Rate (ASR) achieved by different attack methods on the CIFAR-10 dataset in domain-constrained (domain rate set to 0) and dirty-label single-class backdoor attacks. While domain-constrained backdoor attacks impose stricter restrictions (the assumption that attackers have no access to any data in the training set), the ASR in domain-constrained backdoor attacks consistently surpasses that of dirty-label single-class backdoor attacks. This observation leads us to propose that the diversity of samples in the poisoning set is another crucial factor affecting attack efficiency. Consequently, we recommend that attackers fully consider the diversity of poisoning samples during the poisoning set generation phase.

#### A.15.3 Time Consumption of Different Attack Methods

We would like to highlight that for BadNets and Blended, adding a pre-defined trigger to benign images to build poisoned images requires only a single addition, which incurs negligible time consumption. It is important to note, however, that these straightforward implementations fall short in terms of both poisoning efficiency and stealthiness. Recent advancements in backdoor attack techniques have sought to enhance efficiency and covert effectiveness by introducing optimization-driven processes to define triggers.
While these refinements do entail some additional time overhead, they significantly elevate the attack's efficacy. We evaluated the time overhead associated with the various attack methods on an A100 GPU. As outlined in Table 6, the time consumption of optimization-based methods grows linearly with the poison sample count. Despite this, the overall time overhead remains modest, with the entire process completed within a few minutes.

#### A.15.4 Explaining the Difference Between Data-constrained Backdoor Attacks and Backdoor Attacks in Federated Learning

The landscape of backdoor attacks in federated learning shares certain similarities with our proposed data-constrained backdoor attacks. Notably, both scenarios assume the utilization of training data originating from diverse sources. However, the threat model for backdoor attacks in federated learning adopts a distinct perspective. In that model, the attacker exercises complete control over one or several participants: (1) the attacker possesses authority over the local training data of any compromised participant; (2) it wields control over the local training process and associated hyperparameters, such as the number of training epochs and the learning rate; (3) the attacker can manipulate the model's weights prior to submitting them for aggregation; and (4) it retains the capability to dynamically adjust its local training strategy from one round to the next. In contrast, our data-constrained backdoor attacks center exclusively on the attacker's capability to manipulate local training data. This introduces a heightened level of challenge for potential attackers. Furthermore, to the best of our knowledge, the existing backdoor attack strategies within the realm of federated learning primarily revolve around the number-constrained scenario; the class-constrained and domain-constrained scenarios have yet to be comprehensively explored by the community.

Figure 28: The attack success rate on the CIFAR-10 dataset for domain-constrained (domain rate set to 0) and dirty-label single-class backdoor attacks. The poisoning rate is set to 0.02, and all results are computed as the mean of five different runs.

Table 6: The time overhead (min) of different attacks on an A100 GPU. All results are computed as the mean of 5 different runs.

| Number of poisoned samples | BadNets | BadNets + CLIP-CFE | UAP | UAP + CLIP-CFE | CLIP-UAP | CLIP-UAP + CLIP-CFE | CLIP-CFA | CLIP-CFA + CLIP-CFE |
|---|---|---|---|---|---|---|---|---|
| 500 | 0 | 1 | 0.7 | 1.6 | 1 | 2.1 | 0.9 | 1.9 |
| 1000 | 0 | 1.9 | 1.5 | 3.3 | 2 | 4 | 1.9 | 3.9 |
| 2000 | 0 | 3.7 | 2.9 | 6.5 | 4 | 7.9 | 3.9 | 7.8 |

### A.16 Limitations and Future Works

In this section, we discuss the limitations of our approach and outline potential future directions for backdoor learning research.

Performance degradation in clean-label backdoor attacks. Clean-label backdoor attacks present a significant challenge Zhao et al. (2020). As shown in Fig. 3, previous methods exhibit a poor ASR, and our technologies show limited improvement in clean-label backdoor attacks when the poisoning rate is low. In future research, we will investigate the underlying reasons for this situation and explore more efficient attack methods specifically designed for clean-label backdoor attacks.

Application limitations.
Our technologies depend on the CLIP model, which is pre-trained on natural images; this may limit their applicability to certain domains, such as medical images or remote sensing. In such cases, a possible solution is to replace CLIP with a domain-specific pre-trained model, such as MedCLIP Madhawa & Carlomagno (2022) for medical images or Satellite Arto et al. (2021) for remote sensing, to adapt our methods to the target domain.

Transfer to Other Domains. The attack scenario we have defined is not limited to a specific domain and can be applied to other important applications, including backdoor attacks for malware detection, deepfake detection, and federated learning. In our future work, we plan to explore the design of realistic attack scenarios and efficient backdoor attacks specifically tailored to these applications.
# Evidential strength of categorical expert witness statements

R.J.F. Ypma

In many jurisdictions, forensic evidence is presented in the form of categorical statements by forensic experts. In recent years, the scientific validity of forensic science practices has been questioned. In response, several large-scale performance studies have been performed, reporting various error rates for different forensic fields. This important work has elucidated the uncertainty associated with categorical statements. There is growing scientific consensus that the likelihood ratio (LR) framework is the logically correct form of presentation for forensic evidence evaluation. Yet, the very relevant results from the large-scale performance studies have not been cast in this framework. Here, I show how to straightforwardly calculate an LR for any given categorical statement using data from the performance studies. This number quantifies how much more we should believe the hypothesis of same source versus different source when provided with a particular expert witness statement. LRs are reported for categorical statements resulting from the analysis of latent fingerprints, bloodstain patterns, handwriting, footwear and firearms. The highest LR found for statements of identification was 376 (fingerprints); the lowest found for statements of exclusion was 1/28 (handwriting). The LRs found may be more insightful for those used to this framework than the various error rates reported previously. An additional advantage of using the LR in this way is its relative simplicity; no decisions are necessary on what error rate to focus on or how to handle inconclusive statements. The values found are closer to 1 than many would have expected. One possible explanation for this mismatch is that we undervalue numerical LRs. Finally, a note of caution: the LR values reported here come from a simple calculation that does not do justice to the nuances of the large-scale studies and their differences from casework, and should be treated as ballpark figures rather than definitive statements on the evidential value of whole forensic scientific fields.

## 1 Introduction

Forensic evidence plays a large role in criminal law. In many jurisdictions, expert witnesses give categorical statements, i.e. claiming identification of a person or object from forensic traces. In recent decades forensic sciences have been criticized for ignoring, or being unable to quantify, the uncertainty associated with such statements [1, 2]. In response, several large-scale studies have been published that investigate this uncertainty for a range of forensic fields [3, 4, 5, 6, 7]. In these studies, uncertainty is quantified as error rates. This is, however, not a trivial computation, as several relevant error rates exist (e.g. false negative rate, false positive rate, false discovery rate) and there is no single correct way of handling statements of the form ‘unsuitable for analysis’ or ‘inconclusive’ [8]. The likelihood ratio (LR) framework provides an alternative form of reporting evidence, and is widely recommended as logically correct [9, 10, 11, 8]. The likelihood ratio is defined as the likelihood of the evidence $E$ under one hypothesis ($H_{1}$) divided by the likelihood under a competing hypothesis ($H_{2}$): $LR=P(E|H_{1})/P(E|H_{2})$ Adoption of numerical LRs is largest in the field of forensic DNA analysis, which boasts well-developed statistical models.
However, an increasing number of forensic fields are following suit in developing numerical LR systems [12]. The framework is equally applicable to subjective judgements, e.g. taking the form ‘the evidence provides strong support for the first proposition relative to the alternative’. No trier-of-fact calculates a probability of guilt. Rather, there is a psychological process that incorporates the evidence and results in a conviction of proven or not-proven. It is unclear whether this process takes into account uncertainty, expressed by error rates or likelihood ratios, in the way that the forensic scientist intended. Most likely, it does not [13, 14, 15, 16, 17]. It seems reasonable that training in and exposure to particular forms of expressing evidence will improve interpretation [18]. Thus, for those used to the likelihood ratio framework, it may be of use to quantify the uncertainty measured in the performance studies in the form of an LR. Below, I illustrate how likelihood ratios for expert witness statements can be straightforwardly computed from performance studies. In fact, if only the two statements of ‘same source’ or ‘different source’ are used, the LR is simply (1 - false negative rate) / false positive rate. The interpretation of these calculated LRs is the support for same source relative to different source that is given by a particular statement, from an expert and sample pair comparable to those in the performance study. None of these numbers should be treated as a thorough assessment of the evidential strength of a statement in the context of a specific case.

## 2 Methods

We assume we have access to many evaluations made by a set of experts for which ground truth is known. Chumbley et al. [4] provide such data for firearm examiners. Possible conclusions for the examiner to reach were individualisation (‘ID’), three levels of inconclusive (‘Inconcl. A-C’), ‘Elimination’ and ‘Other’. For example, for the statement of individualisation the LR would be: $LR=\frac{P(\textrm{expert concludes bullet fired from firearm}|\textrm{bullet was fired from firearm})}{P(\textrm{expert concludes bullet fired from firearm}|\textrm{bullet was not fired from firearm})}$ Table 1 shows how often conclusions were reached for matching and non-matching pairs. The LR can be calculated in a straightforward manner. The probability in the numerator is equal to the number of times that an expert said ‘ID’ when $H_{1}$ was true, i.e. the bullet was fired from the firearm, divided by the total number of times $H_{1}$ was true. The first is 1076, the second is 1076+127+125+36+41+24 = 1429. This gives us a probability of

$\begin{split}&P(\textrm{expert concludes bullet fired from firearm}|\textrm{bullet was fired from firearm})\\&=1076/1429\\&\approx 0.75\end{split}$

Likewise for the denominator, we get

$\begin{split}&P(\textrm{expert concludes bullet fired from firearm}|\textrm{bullet was not fired from firearm})\\&=20/2891\\&\approx 0.007\end{split}$

The ratio of these probabilities is the likelihood ratio: $LR\approx 0.75/0.007\approx 109$. This calculation is exactly the same for all studies we looked at. Sometimes an additional tallying step is needed as the data are not given in aggregated form. Code is available from GitHub (github.com/NetherlandsForensicInstitute/evidential_value_of_expert_witness_statements).
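To make the bookkeeping concrete, the following minimal Python sketch reproduces the calculation above from the counts in Table 1. The counts and the resulting values (e.g. LR of roughly 109 for ‘ID’) come from the text; the function and variable names are illustrative and are not the released repository code.

```python
# Minimal sketch: likelihood ratios per statement category from a table of
# expert conclusions with known ground truth. Counts are those of Table 1
# (bullet evaluations, [4]); names are illustrative, not the released code.

CATEGORIES = ["ID", "Inconcl.-A", "Inconcl.-B", "Inconcl.-C", "Elimination", "Other"]
matching = [1076, 127, 125, 36, 41, 24]      # ground truth: same source (H1)
non_matching = [20, 268, 848, 745, 961, 49]  # ground truth: different source (H2)

def likelihood_ratios(same_counts, diff_counts):
    """LR per statement: P(statement | H1) / P(statement | H2)."""
    n_same, n_diff = sum(same_counts), sum(diff_counts)
    return [(s / n_same) / (d / n_diff) for s, d in zip(same_counts, diff_counts)]

for category, lr in zip(CATEGORIES, likelihood_ratios(matching, non_matching)):
    # 'ID' gives (1076/1429) / (20/2891), i.e. approximately 109, as in the text
    print(f"{category:12s} LR = {lr:7.2f}")
```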
| | ID | Inconcl.-A | Inconcl.-B | Inconcl.-C | Elimination | Other
---|---|---|---|---|---|---
Matching | 1,076 | 127 | 125 | 36 | 41 | 24
Non-matching | 20 | 268 | 848 | 745 | 961 | 49

Table 1: Statistics on bullet evaluations by expert witnesses, from [4].

## 3 Results

Table 2 gives the LRs for all categories considered for bullet evaluations. LRs for all statement categories considered in the other performance studies (footwear, cartridge, handwriting, fingerprints, bloodstain patterns) are given in the tables in the appendix. Table 3 summarises these by giving, for every study, the LR calculated for the identification and exclusion statements.

| | ID | Inconcl.-A | Inconcl.-B | Inconcl.-C | Elimination | Other
---|---|---|---|---|---|---
LR | 109 | 1 | 1 / 3 | 1 / 10 | 1 / 12 | 1

Table 2: Likelihood ratios for bullet evaluation conclusions by expert witnesses, calculated from data in [4].

| | LR (identification) | LR (exclusion)
---|---|---
pattern type from bloodstain pattern [3] | 6 | 1/4
writer from handwriting sample [7] | 17 | 1/28
firearm from cartridge [4] | 81 | 1/28
firearm from bullet [4] | 109 | 1/12
footwear from print [6] | 113 | 1/7
person from latent fingerprints [5] | 376 | 1/11

Table 3: Likelihood ratios for expert witness statements from various fields, for statements of identification and exclusion.

## 4 Discussion

This study presents likelihood ratios for expert witness statements in five forensic fields, derived from large performance studies. The LRs are an alternative to error rates for quantifying the uncertainty associated with these statements. This alternative may be useful to those who are more used to the LR framework than to error rates, and may help bridge the gap between the two frameworks. Incidentally, the LR framework simplifies the uncertainty analysis, as we do not have to consider various different error rates and ways of handling ‘inconclusive’ statements. The likelihood ratios reported should be treated as ballpark figures, giving some insight into what LRs to associate with particular statements, such as identification. We should be very hesitant in interpreting the LRs reported here as the evidential strength of an expert witness statement in a given case. The performance studies measure the performance of a set of experts, which may not be representative of the case, on a specific set of samples, which may not be representative of the case either. For example, several studies explicitly select different-source samples that are very challenging to evaluate [7, 5, 4]. As an illustration, considering only the top 1% hardest different-source comparisons would result in an LR 100 times lower (assuming no errors would be made on the remaining 99%). Furthermore, it is common practice in casework to have forensic witness evaluations reviewed by an independent expert; this extra quality check is absent in the performance studies. Lastly, we stress that evidential strength is only one aspect of the value of forensic evidence; moderately strong evidence that a suspect held the murder weapon may be much more relevant than very strong evidence that he was near the crime scene. The actual values of the LRs found are interesting. As expected, statements of increasing certainty result in higher LRs, and ‘inconclusive’ statements generally result in LRs close to 1. More surprising is that even the highest LRs found for inclusion were below 400, and the lowest found for exclusion were above 1/50. This range is small.
For example, both experts and the public expect a statement of fingerprint identification to be stronger than an LR of 100,000 [19]. The caveats mentioned above only partly explain the difference of several orders of magnitude between expected and calculated LRs. It seems we either overvalue categorical statements, undervalue numerical LRs, or both. This undervaluation is something we anecdotally encounter in practice, and it is perhaps partly caused by the forensic science community itself. After all, an LR of 1000 is strong enough to convert a weak prior probability of 0.10 to a posterior of 0.99. Indeed, an LR of 1000 is referred to as ‘decisive’ in general scientific nomenclature [20, 21]. Yet in forensic science we call it ‘moderately strong’ [9, 10], and in DNA analysis this number may even be considered too low to report [22]. Clearly, more research is needed into how ‘mid-range’ LRs are actually interpreted, and how we can improve this.

## Acknowledgements

My sincere thanks to Marjan Sjerps, Wauter Bosma, David vd Vloed and Charles Berger for critical reading of this manuscript.

## References

* [1] President’s Council of Advisors on Science and Technology. Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods. Technical report, 2016.
* [2] National Research Council, Division on Engineering and Physical Sciences, Committee on Applied and Theoretical Statistics, Policy and Global Affairs, Committee on Science, Technology, and Law, and Committee on Identifying the Needs of the Forensic Sciences Community. Strengthening Forensic Science in the United States: A Path Forward. National Academies Press, August 2009.
* [3] R Austin Hicklin, Kevin R Winer, Paul E Kish, Connie L Parks, William Chapman, Kensley Dunagan, Nicole Richetelli, Eric G Epstein, Madeline A Ausdemore, and Thomas A Busey. Accuracy and reproducibility of conclusions by forensic bloodstain pattern analysts. Forensic Sci. Int., 325:110856, August 2021.
* [4] L Scott Chumbley, Max D Morris, Stanley J Bajic, Daniel Zamzow, Erich Smith, Keith Monson, and Gene Peters. Accuracy, repeatability, and reproducibility of firearm comparisons part 1: Accuracy. July 2021.
* [5] Bradford T Ulery, R Austin Hicklin, Joann Buscaglia, and Maria Antonia Roberts. Accuracy and reliability of forensic latent fingerprint decisions. Proc. Natl. Acad. Sci. U. S. A., 108(19):7733–7738, May 2011.
* [6] R Austin Hicklin, Brian C McVicker, Connie Parks, Jan LeMay, Nicole Richetelli, Michael Smith, JoAnn Buscaglia, Rebecca Schwartz Perlman, Eugene M Peters, and Brian A Eckenrode. Accuracy, reproducibility, and repeatability of forensic footwear examiner decisions. Forensic Sci. Int., 339:111418, 2022.
* [7] R Austin Hicklin, Linda Eisenhart, Nicole Richetelli, Meredith D Miller, Peter Belcastro, Ted M Burkes, Connie L Parks, Michael A Smith, Joann Buscaglia, Eugene M Peters, Rebecca Schwartz Perlman, Jocelyn V Abonamah, and Brian A Eckenrode. Accuracy and reliability of forensic handwriting comparisons. Proc. Natl. Acad. Sci. U. S. A., 119(32):e2119944119, August 2022.
* [8] Jonathan J Koehler, Jennifer L Mnookin, and Michael J Saks. The scientific reinvention of forensic science. Proc. Natl. Acad. Sci. U. S. A., 120(41):e2301840120, October 2023.
* [9] Association of Forensic Science Providers. Standards for the formulation of evaluative forensic science expert opinion. Sci. Justice, 49(3):161–164, September 2009.
* [10] ENFSI (European Network of Forensic Science Institutes).
ENFSI guideline for evaluative reporting in forensic science, 2015.
* [11] American Statistical Association. American Statistical Association position on statistical statements for forensic evidence, January 2019.
* [12] R van Lierop, D Ramos, M Sjerps, and R J F Ypma. An overview of log likelihood ratio cost in forensic sciences – where is it used and what values can we expect? Submitted.
* [13] Raymond Marquis, Alex Biedermann, Liv Cadola, Christophe Champod, Line Gueissaz, Geneviève Massonnet, Williams David Mazzella, Franco Taroni, and Tacha Hicks. Discussion on how to implement a verbal scale in a forensic laboratory: Benefits, pitfalls and suggestions to avoid misunderstandings. Sci. Justice, 56(5):364–370, September 2016.
* [14] William C Thompson and Eryn J Newman. Lay understanding of forensic statistics: Evaluation of random match probabilities, likelihood ratios, and verbal equivalents. Law Hum. Behav., 39(4):332–349, August 2015.
* [15] Marjan Sjerps and Dirk B Biesheuvel. The interpretation of conventional and ‘Bayesian’ verbal scales for expressing expert opinion: a small experiment among jurists. Int. J. Speech Lang. Law, 6(2):214–227, August 1999.
* [16] Heidi Eldridge. Juror comprehension of forensic expert testimony: A literature review and gap analysis. Forensic Sci Int Synerg, 1:24–34, March 2019.
* [17] Jan de Keijser and Henk Elffers. Understanding of forensic expert reports by judges, defense lawyers and forensic professionals. Psychol. Crime Law, 18(2):191–207, February 2012.
* [18] Eric Rassin, Nurul Arbiyah, Irena Boskovic, Henry Otgaar, and Harald Merckelbach. Likelihood ratios in psychological expert opinion, and their reception by professional judges. The International Journal of Evidence & Proof, 26(4):325–341, October 2022.
* [19] Thomas Busey and Morgan Klutzke. Calibrating the perceived strength of evidence of forensic testimony statements. Sci. Justice, 63(1):38–53, January 2023.
* [20] H Jeffreys. Theory of Probability (3rd Ed.). Oxford University Press, 1961.
* [21] Andrew F Jarosz and Jennifer Wiley. What are the odds? A practical guide to computing and reporting Bayes factors. The Journal of Problem Solving, 7(1):2, 2014.
* [22] Meinhard Hahn, Katja Anslinger, Martin Eckert, Rolf Fimmers, Stefanie Grethe, Carsten Hohoff, Sebastian Kranz, Christoph Leuker, Claus Oppelt, Sven Razbin, Thomas Rothämel, Harald Schneider, Michael Templin, Marielle Vennemann, Andrea Wächter, Volker Weirich, Peter Zimmermann, and Peter M Schneider. Gemeinsame Empfehlungen der Projektgruppe ‘Biostatistische DNA-Berechnungen’ und der Spurenkommission zur biostatistischen Bewertung forensischer DNA-analytischer Befunde mit vollkontinuierlichen Modellen (VKM). Rechtsmedizin, 33(1):3–12, February 2023.
certainty of bloodstain pattern type classification | LR
---|---
identification (‘definitive’) | 6
possible (‘included’) | 1
excluded | 1 / 4

Table 4: Likelihood ratios for bloodstain pattern evaluation conclusions by expert witnesses. [3]

statement | LR
---|---
The questioned sample was written by the known writer | 17
The questioned sample was probably written by the known writer | 7
No conclusion | 1 / 2
The questioned sample was probably not written by the known writer | 1 / 22
The questioned sample was not written by the known writer | 1 / 28

Table 5: Likelihood ratios for handwriting evaluation conclusions by expert witnesses. [7]

statement | LR
---|---
identification | 113
high association | 13
association | 1
limited association | 1
no association | 1 / 6
exclusion | 1 / 7
inconclusive | 1 / 2
not suitable | 1 / 2

Table 6: Likelihood ratios for footwear print evaluation conclusions by expert witnesses. [6]

statement | LR
---|---
identification | 81
inconclusive A | 2
inconclusive B | 1 / 2
inconclusive C | 1 / 14
elimination | 1 / 28
other | 1

Table 7: Likelihood ratios for cartridge case evaluation conclusions by expert witnesses. [4]

statement | LR
---|---
individualisation | 376
inconclusive | 2
exclusion | 1 / 11

Table 8: Likelihood ratios for latent fingerprint evaluation conclusions by expert witnesses. [5]
# A criterion to identify the equilibration time in lipid bilayer simulations

Rodolfo D. Porasso1 and J.J. López Cascales2 (E-mail<EMAIL_ADDRESS>

(3 September 2012; 24 October 2012)

1 Instituto de Matemática Aplicada San Luis (IMASL) - Departamento de Física, Universidad Nacional de San Luis/CONICET, D5700HHW, San Luis, Argentina.
2 Universidad Politécnica de Cartagena, Grupo de Bioinformática y Macromoléculas (BioMac), Aulario II, Campus de Alfonso XIII, 30203 Cartagena, Murcia, Spain.

With the aim of establishing a criterion for identifying when a lipid bilayer has reached steady state in a molecular dynamics simulation, lipid bilayers of different composition were simulated in their liquid crystalline phase, in aqueous solution and in the presence of CaCl2 as electrolyte at different concentration levels. We used two different lipid bilayer systems: one composed of 288 DPPC (DiPalmitoylPhosphatidylCholine) molecules and another composed of 288 DPPS (DiPalmitoylPhosphatidylSerine) molecules. For both types of lipid bilayer, we studied the temporal evolution of several lipid properties, such as the surface area per lipid, the deuterium order parameter, the lipid hydration and the lipid-calcium coordination. From this analysis, it became evident that each property requires a different time to reach equilibrium. The following order was found, from fastest to slowest: coordination of ions $\approx$ deuterium order parameter $>$ area per lipid $\approx$ hydration. Consequently, when the hydration of the lipids or the mean area per lipid is stable, we can ensure that the lipid membrane has reached the steady state.

## 1 Introduction

Over the last few decades, different computational techniques have emerged in different fields of science, some of them being extensively implemented and used by a great number of scientists around the globe. Among them, Molecular Dynamics (MD) simulation is a very popular computational technique, widely used to obtain insight with atomic detail into steady-state and dynamic properties in the fields of biology, physics and chemistry. In this regard, a critical aspect that must be identified in every MD simulation is the equilibration time required to achieve a steady state. This point is crucial in order to avoid simulation artifacts that could lead to wrong conclusions. Currently, with the increase in computing power accessible to different research groups, much longer simulation trajectories are being carried out to obtain reliable information about the systems, with the purpose of approaching the time scale of the experimental phenomena. However, even when this fact is objectively desirable without further objections, nowadays much longer equilibration times are arbitrarily being required by certain reviewers during the revision process. From our viewpoint, this should be thoroughly revised for the following two main reasons: first, because it becomes a limiting factor in the use of this technique by research groups which cannot access very expensive computing centers (assuming that the authors provide enough evidence of the equilibration of the system); and second, to avoid wasting expensive computing time in the study of properties which do not require such long equilibration times, once the steady state of the system has been properly identified.
Phospholipid bilayers are of high biological relevance, due to the fact that they play a crucial role in the control of the diffusion of small molecules, cell recognition, and signal transduction, among others. In our case, we have chosen the PhosphatidylCholine (PC) bilayer because it has been very well studied both by MD simulations [1, 2, 3, 4, 5, 6, 7] and experimentally [8, 9, 10, 11, 12, 13, 14]. Furthermore, the effects of different types of electrolytes on a PC bilayer have also been studied, experimentally [15, 16, 17, 18, 19, 20, 21, 22, 23, 24] and by simulation [25, 26, 27, 28, 29, 30, 31, 32, 33]. As mentioned above, Molecular Dynamics (MD) simulations have emerged during the last decades as a powerful tool to obtain insight with atomic detail into the structure and dynamics of lipid bilayers [34, 35, 36]. Several MD simulations of membranes under the influence of different salt concentrations have been carried out. One of the main obstacles in these studies has been the time scale associated with the binding of ions to the lipid bilayer. In the literature, a vast dispersion of equilibration times associated with the binding of ions to the membrane has been reported, with values ranging from 5 to 100 ns suggested for monovalent and divalent cations [25, 27, 28, 29, 32, 37, 38]. In this regard, we carried out four independent simulations of a lipid bilayer formed by 288 DPPC in aqueous solution, for different concentrations of CaCl2, to provide an overview of their equilibration times. Among other properties, the surface area per lipid, the deuterium order parameters, the lipid hydration and the lipid-calcium coordination were studied. Finally, in order to generalize our results, a bilayer formed by 288 DPPS in its liquid crystalline phase, in the presence of CaCl2 at 0.25 N, was simulated as well.

## 2 Methodology

Different Molecular Dynamics (MD) simulations of a lipid bilayer formed by 288 DPPC were carried out in aqueous solution for different concentrations of CaCl2, from 0 up to 0.50 N. Furthermore, with the aim of generalizing our results, a bilayer of 288 DPPS in the presence of CaCl2 at 0.25 N was simulated as well. Note that the concentration of CaCl2 in terms of normality is defined as: $\textrm{normality}=\frac{n_{\textrm{equivalent grams}}}{l_{\textrm{solution}}}$ (1) where $n_{\textrm{equivalent grams}}=\frac{\textrm{gr(solute)}}{\textrm{equivalent weight}}$ and $\textrm{equivalent weight}=\frac{\textrm{Molecular weight}}{n}$, $n$ being the charge of the ions in solution. Table 1 summarizes the number of molecules that constitute each system, obtained by applying Eq. (1).

Type of Lipid | [CaCl2] N | Ca2+ | Cl- | Water
---|---|---|---|---
DPPC | 0 | 0 | 0 | 10068
DPPC | 0.06 | 5 | 10 | 10053
DPPC | 0.13 | 12 | 24 | 10032
DPPC | 0.25 | 23 | 46 | 9999
DPPC | 0.50 | 46 | 92 | 9930
DPPS | 0.25 | 204 | 120 | 26932

Table 1: The simulated bilayer systems. Note that the salt concentration is given in normal units. The numerals describe the number of molecules contained in the simulation box.
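As a quick consistency check on Table 1, the short Python sketch below applies Eq. (1) to the ion and water counts, approximating the solution volume by that of the water molecules alone at a density of 1 g/mL; this approximation, the function, and the variable names are ours and purely illustrative.

```python
# Sketch: CaCl2 normality from the ion and water counts of Table 1 (Eq. (1)),
# approximating the solution volume by that of the water alone at 1 g/mL.
M_WATER = 18.0  # g/mol, molar mass of water

def normality(n_ca, n_water):
    """Equivalents per liter: each Ca2+ (charge n = 2) gives 2 equivalents."""
    liters = n_water * M_WATER / 1000.0  # liters of water per mole of molecules
    return 2.0 * n_ca / liters           # equivalent-moles per liter

# (Ca2+, water) pairs from Table 1 for the DPPC systems
for n_ca, n_water in [(5, 10053), (12, 10032), (23, 9999), (46, 9930)]:
    print(f"{n_ca:3d} Ca2+ with {n_water} waters -> {normality(n_ca, n_water):.3f} N")
# prints ~0.055, 0.133, 0.256 and 0.515 N, i.e. the nominal 0.06-0.50 N values
```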
To build up the original system, a single DPPC lipid molecule, or DPPS lipid (Fig. 1), was placed with its molecular axis perpendicular to the membrane surface ($xy$ plane). Next, each DPPC, or DPPS, was randomly rotated and copied 144 times on each leaflet of the bilayer. Finally, the gaps existing in the computational box (above and below the phospholipid bilayer) were filled using an equilibrated box containing 216 water molecules of the extended simple point charge (SPC/E) [39] water model.

Figure 1: Structure and atom numbering for the DPPC and DPPS used in this work.

Thus, the starting point of the first system of Table 1 was formed by 288 DPPC in the absence of CaCl2. Once this first system was generated, the whole system was subjected to a steepest descent minimization to remove any excess strain associated with overlaps between neighboring atoms. The DPPC systems in the presence of CaCl2 were then generated as follows: to obtain [CaCl${}_{2}]$ = 0.06 N, 15 water molecules were randomly substituted by 5 Ca2+ and 10 Cl-. An analogous procedure was applied to the rest of the systems, where 36, 69 and 138 water molecules were substituted by 12, 23 and 46 Ca2+ and 24, 46 and 92 Cl-, to obtain [CaCl${}_{2}]$ concentrations of 0.13 N, 0.25 N and 0.50 N, respectively. Finally, the DPPS bilayer was generated following the same procedure described above for DPPC, starting from a single DPPS molecule; once the lipid bilayer in the presence of water passed the minimization process, 324 water molecules were substituted by 204 Ca2+ and 120 Cl- (note that 144 of the 204 calcium ions were added to balance the negative charge associated with the DPPS). The GROMACS 3.3.3 package [40, 41] was used for the simulations, and the properties shown in this work were obtained using our own code. The force field proposed by Egberts et al. [2] was used for the lipids, and a time step of 2 fs was used as the integration time in all the simulations. A cut-off of 1.0 nm was used for calculating the Lennard-Jones interactions. The electrostatic interaction was evaluated using the particle mesh Ewald method [42, 43]. The real space interaction was evaluated using a 0.9 nm cut-off, and the reciprocal space interaction using a 0.12 nm grid with a fourth-order spline interpolation. A semi-isotropic pressure coupling was used, with a reference pressure of 1 atm, which allowed each axis of the simulation box to fluctuate independently. For the DPPC bilayer, each component of the system (i.e., lipids, ions and water) was coupled to an external temperature bath at 330 K, which is well above the transition temperature of 314 K [44, 45]. For the DPPS bilayer, each component of the system was coupled to an external temperature bath at 350 K, which is above the transition temperature [46, 47]. All the MD simulations were carried out using periodic boundary conditions. The total trajectory length of each simulated system was 80 ns of MD simulation, with the coordinates of the system recorded every 5 ps for their subsequent analysis. Finally, in order to study the effect of the temperature, only the case corresponding to 0.25 N CaCl2 was investigated at two additional temperatures, 340 K and 350 K.

## 3 Results and discussion

### 3.1 Effect of the CaCl2 concentration

#### 3.1.1 Surface area per lipid

The surface area per lipid $\langle A\rangle$ is a property of lipid bilayers which has been accurately measured in experiments [48].
The mean area per lipid can be determined from the MD simulation as:

$\langle A\rangle=\frac{x\cdot y}{N}$ (2)

where $x$ and $y$ represent the box sizes along the $x$ and $y$ directions (the plane of the membrane surface) over the simulation, and $N$ is the number of lipids contained in one leaflet, in our case $N=144$.

Figure 2: Running area per lipid at T = 330 K in the presence of [CaCl2] at (A) 0.06 N, (B) 0.13 N, (C) 0.25 N, (D) 0.50 N and (E) 0.25 N (in this case, T = 350 K). Solid lines represent the mean area obtained from the last 70 ns of the simulated trajectories (see text for further explanation). The type of lipid is indicated in the legends.

Focusing on the time evolution of the area per lipid, Figure 2 depicts the running surface area per lipid for the different concentrations of CaCl2 and types of lipid. In general, for the five bilayers formed by DPPC or DPPS, the area per lipid achieved a steady state after 10 ns of simulation, this equilibration time being almost independent of the concentration of CaCl2 and of the type of lipid composing the membrane. In the absence of salt, an average area per lipid of $\langle A\rangle=0.663\pm 0.008$ nm2 was calculated from the last 70 ns of the simulated trajectory, discarding the first 10 ns corresponding to the equilibration time. This value agrees with experimental data, where values in a range from 0.55 to 0.72 nm2 have been measured [10, 11, 48, 49, 50, 51]. Table 2 shows the mean surface area per lipid (again, after discarding the equilibration time of 10 ns) with the corresponding error bars. The simulation results show a shrinking of the surface area per lipid with increasing ionic strength of the solution. This shrinking is expected and is attributed to the complexation of lipid molecules by calcium, as pointed out in previous studies [28, 29, 52].

#### 3.1.2 Deuterium order parameter

The deuterium order parameter, $S_{CD}$, is measured in 2H-NMR experiments. This parameter provides relevant information about the disorder of the hydrocarbon region in the interior of lipid bilayers, by measuring the orientation of the carbon-hydrogen bonds of the methylene groups with respect to the axis perpendicular to the lipid bilayer. Because the hydrogens of the lipid methylene groups (CH2) were not taken into account explicitly in our simulations, the C-H direction on the $(i+1)$-th methylene group was approximated by the unit vector normal to the vector joining the $i$-th and $(i+2)$-th CH2 groups and contained in the plane formed by the methylene groups $i$, $i+1$ and $i+2$. Thus, the deuterium order parameter $-S_{CD}$ of each CH2 group can be estimated from the Molecular Dynamics simulations as follows:

$-S_{CD}=\frac{1}{2}\langle{3\cos^{2}(\theta)-1}\rangle$ (3)

where $\theta$ is the angle formed between the unit vector defined above and the $z$ axis. The brackets $\langle\dots\rangle$ denote an average over all the lipids and over time. Hence, note that $-S_{CD}$ can adopt any value between -0.5 (corresponding to an orientation parallel to the lipid/water interface) and 1 (oriented along the axis normal to the lipid bilayer).
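The geometric construction described above is straightforward to implement. The following NumPy sketch computes $-S_{CD}$ for one methylene group from the positions of three consecutive carbons along a trajectory; the function and array names are illustrative assumptions, not part of the authors' analysis code.

```python
import numpy as np

def s_cd(r_prev, r_mid, r_next):
    """-S_CD of the middle CH2 group of three consecutive chain carbons.

    Following the construction in the text, the C-H direction is taken as the
    unit vector normal to (r_next - r_prev) and contained in the plane of the
    three carbons; theta is its angle with the z axis, averaged as in Eq. (3).
    Inputs are (n_frames, 3) arrays of carbon positions (illustrative names).
    """
    v = r_next - r_prev                           # vector from carbon i to i+2
    plane_normal = np.cross(r_prev - r_mid, r_next - r_mid)
    u = np.cross(plane_normal, v)                 # in-plane, perpendicular to v
    u /= np.linalg.norm(u, axis=-1, keepdims=True)
    cos_theta = u[..., 2]                         # z component of a unit vector
    return 0.5 * np.mean(3.0 * cos_theta**2 - 1.0)

# toy usage with random coordinates standing in for a 100-frame trajectory
rng = np.random.default_rng(0)
r = rng.normal(size=(3, 100, 3))
print(s_cd(r[0], r[1], r[2]))
```

Note that the sign ambiguity of the constructed unit vector is irrelevant here, since only $\cos^{2}(\theta)$ enters Eq. (3).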
Type of Lipid | [CaCl2] N | $\langle A\rangle$ (nm2) | Hydration Number
---|---|---|---
DPPC | 0 | 0.663 $\pm$ 0.008 | 1.758 $\pm$ 0.009
DPPC | 0.06 | 0.658 $\pm$ 0.008 | 1.740 $\pm$ 0.009
DPPC | 0.13 | 0.651 $\pm$ 0.007 | 1.719 $\pm$ 0.010
DPPC | 0.25 | 0.641 $\pm$ 0.009 | 1.680 $\pm$ 0.015
DPPC | 0.50 | 0.628 $\pm$ 0.010 | 1.610 $\pm$ 0.015
DPPS | 0.25 | 0.522 $\pm$ 0.007 | 2.552 $\pm$ 0.010

Table 2: Area per lipid and lipid hydration number as a function of salt concentration (see text for further explanation). Note that the salt concentration is given in normal units. Error bars were calculated for each system separately from subtrajectories of 10 ns length. Simulation temperature = 330 K.

Figure 3 shows the running $-S_{CD}$ for different carbons of the DPPC and DPPS tails and salt concentrations. Only the carbons corresponding to the initial (hydrocarbons 2 and 6), middle (hydrocarbon 10) and final (hydrocarbons 13 and 15) methylene groups of the lipid tails are depicted in this figure. Each point of the figure represents the average value of $-S_{CD}$ over 5 ns of subtrajectory length, and the lines represent the mean values calculated from the last 70 ns of the simulated trajectories. From this figure, it is observed that in all cases the required equilibration time is less than 10 ns of simulation time, independently of the salt concentration and the type of lipid. Finally, note that Figure 3 exhibits an increase in the deuterium order parameters with the salt concentration, consistent with the shrinking of the area per lipid described above.

Figure 3: Running deuterium order parameter, $-S_{CD}$, in the presence of [CaCl2] at (A) 0.06 N, (B) 0.13 N, (C) 0.25 N, (D) 0.50 N and (E) 0.25 N. DPPC simulations were performed at 330 K and the DPPS simulation temperature was 350 K. Solid lines represent the mean values of $-S_{CD}$ obtained from the last 70 ns of the simulated trajectories. The type of lipid is indicated in the legends. Symbols: $\circ$ hydrocarbon 2; $\diamond$ hydrocarbon 6; $\triangleleft$ hydrocarbon 10; $+$ hydrocarbon 13 and $\times$ hydrocarbon 15. Note that the error bars have the same size as the symbols.

#### 3.1.3 Lipid hydration

To analyze the lipid hydration, the radial distribution function $g(r)$ of water around one of the oxygens of the phosphate group (atom number 10 in Fig. 1 for DPPC and DPPS) was calculated. The radial distribution function $g\left(r\right)$ is defined as follows:

$g(r)=\frac{N(r)}{4{\pi}r^{2}{\rho}\delta r}$ (4)

where $N\left(r\right)$ is the number of atoms in a spherical shell at distance $r$ and of thickness $\delta r$ from a reference atom, and $\rho$ is the number density, taken as the ratio of the number of atoms to the volume of the total computing box. From numerical integration of the first peak of the radial distribution function, the hydration numbers can be estimated for different atoms of DPPC or DPPS. Figure 4 depicts the hydration number of the phosphate oxygen (atom 10 in Fig. 1 for DPPC and DPPS) in the presence of CaCl2, where each point represents the average over a 5 ns subtrajectory. These results show that this property reached a steady state in cases (A), (B) and (E) after 10 ns of simulation. However, for cases (C) and (D), 5 ns of extra simulation trajectory were required to reach a steady state. Table 2 shows the hydration numbers for the last 70 ns of the trajectory length, corresponding to the four concentrations of CaCl2 and both types of lipids, DPPC and DPPS. In this regard, Fig. 4 makes evident the significant lipid dehydration with increasing ionic strength of the solution, in good accordance with previous results [52].
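As a sketch of the analysis behind Eq. (4), the snippet below integrates the first peak of a tabulated $g(r)$ up to its first minimum to obtain a hydration (coordination) number, $N_{\rm hyd}=4\pi\rho\int_{0}^{r_{\rm cut}}g(r)\,r^{2}\,dr$. The histogramming of $N(r)$ itself is omitted, and the synthetic $g(r)$ and all names are illustrative.

```python
import numpy as np

def hydration_number(r, g, rho, r_cut):
    """Integrate the first g(r) peak: 4*pi*rho * int_0^r_cut g(r) r^2 dr.

    r, g  : 1D arrays tabulating the radial distribution function of Eq. (4)
    rho   : number density of the coordinating atoms (e.g. water oxygens, nm^-3)
    r_cut : position of the first minimum of g(r), bounding the first peak
    """
    mask = r <= r_cut
    return 4.0 * np.pi * rho * np.trapz(g[mask] * r[mask] ** 2, r[mask])

# toy usage: a made-up first peak near 0.27 nm with its first minimum ~0.35 nm
r = np.linspace(0.01, 0.6, 300)
g = 1.0 + 3.0 * np.exp(-(((r - 0.27) / 0.03) ** 2))
print(hydration_number(r, g, rho=33.0, r_cut=0.35))  # rho ~ bulk water oxygens
```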
Figure 4: Hydration number of the phosphate oxygen (atom 10 in Fig. 1) along the simulated trajectories in the presence of [CaCl2] at (A) 0.06 N, (B) 0.13 N, (C) 0.25 N, (D) 0.50 N (for DPPC, T = 330 K) and (E) 0.25 N (in this case, T = 350 K). Solid lines represent the mean value of the hydration number calculated from the last 70 ns of the simulated trajectories. The type of lipid is indicated in the legends. Note that the error bars have the same size as the symbols.

#### 3.1.4 Phospholipid-calcium coordination

Different authors have reported widely varying time scales for the coordination of lipids by divalent cations. On the one hand, some authors [25] have reported that this is a very slow process, requiring about 85 ns of simulation time; on the other hand, other authors [26] have suggested that this process is much more rapid, taking less than 1 ns. In this sense, the DPPC-Ca2+ coordination was studied by monitoring the oxygen-calcium coordination of the carbonyl oxygens (atoms 16 and 35 in Fig. 1) and phosphate oxygens (atoms 9 and 10 of DPPC in Fig. 1) as a function of time. The left column of Fig. 5 represents the oxygen coordination number, while the right one depicts the percentage of calcium ions involved in the coordination process with respect to the total number of calcium ions present in the aqueous solution. Figure 5 shows that the DPPC coordination by calcium is a quick process, taking less than 5 ns of simulation time to achieve a steady state. The kinetics of this process appears to be related to the calcium/lipid ratio. After the first 5 ns of simulation time, the Ca-lipid coordination presents some fluctuation along the rest of the simulated trajectory. In particular, in Fig. 5 (A) and (B) (the cases of lower concentration), the percentage of coordination fluctuates between 60% and 100%. We consider that this broad fluctuation is related to the limited sample size of our simulations, which introduces some noise into our results.

Figure 5: The left column represents the number of Ca2+ coordinated to lipids in the presence of [CaCl2] at (A) 0.06 N, (B) 0.13 N, (C) 0.25 N and (D) 0.50 N, for T = 330 K. The right column shows the quantity of calcium ions coordinated to lipids, expressed as a percentage, along the simulated trajectory.

### 3.2 Effect of temperature

This section focuses on the role played by temperature in the equilibration process. In this regard, only the system corresponding to a concentration of 0.25 N CaCl2 was studied, for a range of temperatures from 330 K to 350 K (all of them above the transition temperature of 314 K [44, 45] for DPPC). Figure 6 shows the running area along the trajectory. In this case, the systems achieved a steady state after a trajectory length of roughly 10 ns; Table 3 shows the mean area per lipid calculated from the last 70 ns of simulation time. Figure 7 shows the deuterium order parameter of the methylene groups along the lipid tails, calculated from Eq. (3). On the one hand, Figure 7 clearly shows that, for the three temperatures, the systems reached the steady state within the first 10 ns of simulation. On the other hand, it shows an increase in the disorder of the lipid tails with temperature, which is closely related to the increase of the area per lipid, as pointed out above.
Figure 8 depicts the hydration numbers of DPPC for the three temperatures studied, where the equilibrated state was achieved after 10 ns of simulation time. Table 3 provides the hydration numbers calculated at equilibrium, showing that the lipid hydration remained invariant as the temperature rose. Concerning the lipid-calcium coordination, the left column of Fig. 9 represents the lipid-calcium coordination number, and the right column represents the calcium ions participating in the coordination, expressed as a percentage of the total number of calcium ions in solution. From the simulations, it becomes evident that the calcium ions required less than 5 ns to achieve an equilibrated state at the three temperatures studied. In summary, for all the properties studied in this section, a slight decrease in the equilibration time with increasing temperature was observed.

T (K) | $\langle A\rangle$ (nm2) | Hydration Number
---|---|---
330 | 0.642 $\pm$ 0.009 | 1.680 $\pm$ 0.010
340 | 0.650 $\pm$ 0.007 | 1.683 $\pm$ 0.020
350 | 0.666 $\pm$ 0.008 | 1.689 $\pm$ 0.015

Table 3: Area per lipid and lipid hydration number as a function of temperature. Error bars were calculated from subtrajectories of 10 ns length. DPPC bilayer in the presence of [CaCl2] = 0.25 N.

Figure 6: Running area per lipid for [CaCl2] = 0.25 N at different temperatures: (A) T = 330 K, (B) T = 340 K and (C) T = 350 K. Solid lines represent the mean values obtained from the last 70 ns of simulation.

Figure 7: Deuterium order parameter, $-S_{CD}$, along the simulated trajectory for a concentration of [CaCl2] = 0.25 N, at the following temperatures: (A) T = 330 K, (B) T = 340 K and (C) T = 350 K. Solid lines represent the average order parameter for the last 70 ns of simulation. Note that the error bars have the same size as the symbols.

Figure 8: Hydration number of the phosphate oxygen (atom 10 in Fig. 1) along the simulated trajectories for [CaCl2] = 0.25 N at different temperatures: (A) T = 330 K, (B) T = 340 K and (C) T = 350 K. Solid lines represent the average hydration number for the last 70 ns of simulation. Note that the error bars have the same size as the symbols.

Figure 9: The left column represents the number of calcium ions involved in the lipid coordination along time for a concentration of [CaCl2] = 0.25 N at different temperatures: (A) T = 330 K, (B) T = 340 K and (C) T = 350 K. The right column shows the same information expressed as a percentage of the total number of calcium ions in solution.

## 4 Conclusions

The present work deals with the simulation time required for a lipid bilayer system in the presence of CaCl2 to achieve a steady state. In this regard, we studied two different systems, a DPPC bilayer and a DPPS bilayer, both in the presence of CaCl2 at different concentration levels. The salt-free case was also studied as a control. The analysis of the various lipid properties studied here indicates that some properties reach the steady state more quickly than others. In this sense, we found that the area per lipid and the hydration number equilibrate more slowly than the deuterium order parameter and the coordination of cations. Consequently, to ensure that a system composed of a lipid bilayer has reached a steady state, the criterion we propose is to show that the area per lipid or the hydration number has reached equilibrium. From our results, two important aspects should be remarked:

1. The equilibration time is strongly dependent on the starting conformation of the system.
Wrong starting conformations will require much longer equilibration times, even an order of magnitude longer than those required from a more refined starting conformation.

2. Temperature is a critical parameter for reducing the equilibration time in our simulations, due to the fact that higher temperatures speed up the kinetic processes, i.e., the sampling of the configurational space of the system.

###### Acknowledgements.

The authors wish to thank the assistance of the Computing Center of the Universidad Politécnica de Cartagena (SAIT), Spain. RDP is a member of the ‘Carrera del Investigador’, CONICET, Argentina.

## References

* [1] O Berger, O Edholm, F Jahnig, Molecular dynamics simulations of a fluid bilayer of dipalmitoylphosphatidylcholine at full hydration, constant pressure, and constant temperature, Biophys. J. 72, 2002 (1997).
* [2] E Egberts, S J Marrink, H J C Berendsen, Molecular dynamics simulation of a phospholipid membrane, Eur. Biophys. J. 22, 423 (1994).
* [3] U Essmann, L Perera, M L Berkowitz, The origin of the hydration interaction of lipid bilayers from MD simulation of dipalmitoylphosphatidylcholine membranes in gel and crystalline phases, Langmuir 11, 4519 (1995).
* [4] S E Feller, Y Zhang, R W Pastor, B R Brooks, Constant pressure molecular dynamics simulations: The Langevin piston method, J. Chem. Phys. 103, 4613 (1995).
* [5] W Shinoda, T Fukada, S Okazaki, I Okada, Molecular dynamics simulation of the dipalmitoylphosphatidylcholine (DPPC) lipid bilayer in the fluid phase using the Nosé-Parrinello-Rahman NPT ensemble, Chem. Phys. Lett. 232, 308 (1995).
* [6] D P Tieleman, H J C Berendsen, Molecular dynamics simulations of a fully hydrated dipalmitoylphosphatidylcholine bilayer with different macroscopic boundary conditions and parameters, J. Chem. Phys. 105, 4871 (1996).
* [7] K Tu, D J Tobias, M L Klein, Constant pressure and temperature molecular dynamics simulation of a fully hydrated liquid crystal phase dipalmitoylphosphatidylcholine bilayer, Biophys. J. 69, 2558 (1995).
* [8] M F Brown, Theory of spin-lattice relaxation in lipid bilayers and biological membranes. Dipolar relaxation, J. Chem. Phys. 80, 2808 (1984).
* [9] M F Brown, Theory of spin-lattice relaxation in lipid bilayers and biological membranes. ${}^{2}H$ and ${}^{14}N$ quadrupolar relaxation, J. Phys. Chem. 77, 1576 (1982).
* [10] J F Nagle, R Zhang, S Tristram-Nagle, W S Sun, H I Petrache, R M Suter, X-ray structure determination of fully hydrated L${}_{\alpha}$ phase dipalmitoylphosphatidylcholine bilayers, Biophys. J. 70, 1419 (1996).
* [11] R P Rand, V A Parsegian, Hydration forces between phospholipid bilayers, Biochim. Biophys. Acta 988, 351 (1989).
* [12] J Seelig, Deuterium magnetic resonance: Theory and application to lipid membranes, Q. Rev. Biophys. 10, 353 (1977).
* [13] J Seelig, A Seelig, Lipid conformation in model membranes and biological systems, Q. Rev. Biophys. 13, 19 (1980).
* [14] W J Sun, R M Suter, M A Knewtson, C R Worthington, S Tristram-Nagle, R Zhang, J F Nagle, Order and disorder in fully hydrated unoriented bilayers of gel phase dipalmitoylphosphatidylcholine, Phys. Rev. E. 49, 4665 (1994).
* [15] H Akutsu, J Seelig, Interaction of metal ions with phosphatidylcholine bilayer membranes, Biochemistry 20, 7366 (1981).
* [16] M G Ganesan, D L Schwinke, N Weiner, Effect of Ca2+ on thermotropic properties of saturated phosphatidylcholine liposomes, Biochim. Biophys. Acta 686, 245 (1982).
* [17] L Herbette, C A Napolitano, R V McDaniel, Direct determination of the calcium profile structure for dipalmitoyllecithin multilayers using neutron diffraction, Biophys. J. 46, 677 (1984).
* [18] D Huster, K Arnold, K Gawrisch, Strength of Ca2+ binding to retinal lipid membrane: Consequences for lipid organization, Biophys. J. 78, 3011 (2000).
* [19] Y Inoko, T Yamaguchi, K Furuya, T Mitsui, Effects of cations on dipalmitoyl phosphatidylcholine/cholesterol/water systems, Biochim. Biophys. Acta 413, 24 (1975).
* [20] R Lehrmann, J Seelig, Adsorption of Ca2+ and La3+ to bilayer membranes: Measurement of the adsorption enthalpy and binding constant with titration calorimetry, Biochim. Biophys. Acta 1189, 89 (1994).
* [21] L J Lis, W T Lis, V A Parsegian, R P Rand, Adsorption of divalent cations to a variety of phosphatidylcholine bilayers, Biochemistry 20, 1771 (1981).
* [22] L J Lis, V A Parsegian, R P Rand, Binding of divalent cations to dipalmitoylphosphatidylcholine bilayers and its effect on bilayer interaction, Biochemistry 20, 1761 (1981).
* [23] T Shibata, Pulse NMR study of the interaction of calcium ion with dipalmitoylphosphatidylcholine lamellae, Chem. Phys. Lipids. 53, 47 (1990).
* [24] S A Tatulian, V I Gordeliy, A E Sokolova, A G Syrykh, A neutron diffraction study of the influence of ions on phospholipid membrane interactions, Biochim. Biophys. Acta 1070, 143 (1991).
* [25] R A Böckmann, H Grubmüller, Multistep binding of divalent cations to phospholipid bilayers: A molecular dynamics study, Angewandte Chemie 43, 1021 (2004).
* [26] J Faraudo, A Travesset, Phosphatidic acid domains in membranes: Effect of divalent counterions, Biophys. J. 92, 2806 (2007).
* [27] A A Gurtovenko, Asymmetry of lipid bilayers induced by monovalent salt: Atomistic molecular-dynamics study, J. Chem. Phys. 122, 244902 (2005).
* [28] P Mukhopadhyay, L Monticelli, D P Tieleman, Molecular dynamics simulation of a palmitoyl-oleoyl phosphatidylserine bilayer with Na+ counterions and NaCl, Biophys. J. 86, 1601 (2004).
* [29] S A Pandit, D Bostick, M L Berkowitz, Molecular dynamics simulation of a dipalmitoylphosphatidylcholine bilayer with NaCl, Biophys. J. 84, 3743 (2003).
* [30] U R Pedersen, C Leidy, P Westh, G H Peters, The effect of calcium on the properties of charged phospholipid bilayers, Biochim. Biophys. Acta 1758, 573 (2006).
* [31] J N Sachs, H Nanda, H I Petrache, T B Woolf, Changes in phosphatidylcholine headgroup tilt and water order induced by monovalent salts: Molecular dynamics simulations, Biophys. J. 86, 3772 (2004).
* [32] K Shinoda, W Shinoda, M Mikami, Molecular dynamics simulation of an archaeal lipid bilayer with sodium chloride, Phys. Chem. Chem. Phys. 9, 643 (2007).
* [33] N L Yamada, H Seto, T Takeda, M Nagao, Y Kawabata, K Inoue, SAXS, SANS and NSE studies on “unbound state” in DPPC/water/CaCl2 system, J. Phys. Soc. Jpn. 74, 2853 (2005).
* [34] D Frenkel, B Smit, Understanding molecular simulations, Academic Press, New York (2002).
* [35] J J López Cascales, J García de la Torre, S J Marrink, H J C Berendsen, Molecular dynamics simulation of a charged biological membrane, J. Chem. Phys. 104, 2713 (1996).
* [36] W F van Gunsteren, H J C Berendsen, Computer simulations of molecular dynamics: Methodology, applications and perspectives in chemistry, Angew. Chem. Int. Ed. Engl. 29, 992 (1990).
* [37] R A Böckmann, A Hac, T Heimburg, H Grubmüller, Effect of sodium chloride on a lipid bilayer, Biophys. J. 85, 1647 (2003).
* [38] A A Gurtovenko, I Vattulainen, Effect of NaCl and KCl on phosphatidylcholine and phosphatidylethanolamine lipid membranes: Insight from atomic-scale simulations for understanding salt-induced effects in the plasma membrane, J. Phys. Chem. B. 112, 1953 (2008).
* [39] H J C Berendsen, J R Grigera, T P Straatsma, The missing term in effective pair potentials, J. Phys. Chem. 91, 6269 (1987).
* [40] H J C Berendsen, D van der Spoel, R van Drunen, A message-passing parallel molecular dynamics implementation, Comp. Phys. Comm. 91, 43 (1995).
* [41] E Lindahl, B Hess, D van der Spoel, GROMACS 3.0: A package for molecular simulation and trajectory analysis, J. Mol. Mod. 7, 306 (2001).
* [42] T Darden, D York, L Pedersen, Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems, J. Chem. Phys. 98, 10089 (1993).
* [43] U Essmann, L Perera, M L Berkowitz, T Darden, H Lee, L G Pedersen, A smooth particle mesh Ewald method, J. Chem. Phys. 103, 8577 (1995).
* [44] L R De Young, K A Dill, Solute partitioning into lipid bilayer-membranes, Biochemistry 27, 5281 (1988).
* [45] A Seelig, J Seelig, The dynamic structure of fatty acyl chains in a phospholipid bilayer measured by deuterium magnetic resonance, Biochemistry 13, 4839 (1974).
* [46] G Cevc, A Watts, D Marsh, Titration of the phase transition of phosphatidylserine bilayer membranes. Effect of pH, surface electrostatics, ion binding and head-group hydration, Biochemistry 20, 4955 (1981).
* [47] H Hauser, F Paltauf, G G Shipley, Structure and thermotropic behavior of phosphatidylserine bilayer membranes, Biochemistry 21, 1061 (1982).
* [48] J F Nagle, S Tristram-Nagle, Structure of lipid bilayers, Biochim. Biophys. Acta 1469, 159 (2000).
* [49] B A Lewis, D M Engelman, Lipid bilayer thickness varies linearly with acyl chain length in fluid phosphatidylcholine vesicles, J. Mol. Biol. 166, 211 (1983).
* [50] R J Pace, S I Chan, Molecular motions in lipid bilayers. I. Statistical mechanical model of acyl chain motion, J. Chem. Phys. 76, 4217 (1982).
* [51] R L Thurmond, S W Dodd, M F Brown, Molecular areas of phospholipids as determined by 2H NMR spectroscopy, Biophys. J. 59, 108 (1991).
* [52] R D Porasso, J J López Cascales, Study of the effect of Na+ and Ca2+ ion concentration on the structure of an asymmetric DPPC/DPPC + DPPS lipid bilayer by molecular dynamics simulation, Colloids Surf. B: Biointerfaces 73, 42 (2009).
June 29, 2022

# Nematic Tomonaga-Luttinger Liquid Phase in an $S\!=\!1/2$ Ferromagnetic-Antiferromagnetic Bond-Alternating Chain

Takashi Tonegawa1,2,3, Kiyomi Okamoto2, Kiyohide Nomura4, and Tôru Sakai2,5

1 Professor Emeritus, Kobe University, Kobe 657-8501, Japan
2 Graduate School of Science, University of Hyogo, Hyogo 678-1297, Japan
3 Department of Physics, Osaka Metropolitan University, Sakai 599-8531, Japan
4 Department of Physics, Kyushu University, Fukuoka 812-8581, Japan
5 National Institute for Quantum Science and Technology (QST), SPring-8, Hyogo 679-5148, Japan

<EMAIL_ADDRESS>

###### Abstract

We numerically investigate the ground-state phase diagram of the $S\!=\!1/2$ ferromagnetic-antiferromagnetic bond-alternating chain in which the ferromagnetic interactions are stronger than the antiferromagnetic ones, and the anisotropies of the former and latter interactions are of the Ising type and the $XY$ type, respectively. We use various numerical methods, such as the level spectroscopy and phenomenological renormalization-group analyses of the numerical data obtained by the exact diagonalization method, among others. The resultant phase diagrams contain the ferromagnetic, $XY$1, singlet-dimer, and up-up-down-down phases as well as the nematic Tomonaga-Luttinger liquid (nTLL) phase, which appears in a wide region of the interaction-parameter space. Perturbation calculations from the limit of strong ferromagnetic interactions reproduce fairly well the numerical results for the phase boundary lines associated with the nTLL phase in the phase diagrams.

$S\!=\!1/2$ ferromagnetic-antiferromagnetic bond-alternating chain, ground-state phase diagram, nematic Tomonaga-Luttinger liquid phase, numerical calculation, perturbation theory

## 1 Introduction

Over the past several decades, a great deal of numerical, theoretical, and experimental work has been devoted to clarifying quantum phase transitions in one-dimensional $S\!=\!1/2$ systems. As is well known, due to strong quantum fluctuations, a variety of exotic quantum phases appear in the ground-state phase diagrams of these systems. A typical example that has recently attracted much attention is the nematic Tomonaga-Luttinger liquid (nTLL) phase [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], which is characterized not only by the formation of two-magnon bound pairs but also by a dominant nematic four-spin correlation function. Recently, we[12] numerically explored the ground-state phase diagram of an $S\!=\!1/2$ anisotropic two-leg ladder with different leg interactions in the absence of an external magnetic field. This system is frustrated when the signs of the two kinds of leg interactions differ from each other. We found that, when the ferromagnetic rung interactions with Ising-type anisotropy are much stronger than the antiferromagnetic leg interactions with $XY$-type anisotropy, the nTLL phase appears in the unfrustrated region as well as in the frustrated region.
For this phase in the former region, the asymptotic form of the nematic correlation function shows power-law decay with uniform character; on the other hand, for this phase in the latter region, it shows power-law decay with staggered character. Thus, the two nTLL phases are distinct phases. As far as we know, this is the first report of the realization of the nTLL phase in an $S\!=\!1/2$ unfrustrated one-dimensional spin system under no external magnetic field. According to this result, it is reasonably expected that the nTLL state appears as the zero-field ground state in general $S\!=\!1/2$ unfrustrated one-dimensional systems in which pairs of $S\!=\!1/2$ spins coupled strongly by Ising-type ferromagnetic interactions are connected by weak antiferromagnetic interactions. As an example of such systems, in this paper we investigate an $S\!=\!1/2$ ferromagnetic-antiferromagnetic bond-alternating chain. We express the Hamiltonian describing this system as

${\cal H}=-J_{\rm F}\sum_{j=1}^{N/2}\bigl\{\Gamma_{\rm F}\bigl(S_{2j-1}^{x}S_{2j}^{x}+S_{2j-1}^{y}S_{2j}^{y}\bigr)+S_{2j-1}^{z}S_{2j}^{z}\bigr\}+J_{\rm AF}\sum_{j=1}^{N/2}\bigl\{S_{2j}^{x}S_{2j+1}^{x}+S_{2j}^{y}S_{2j+1}^{y}+\Delta_{\rm AF}\,S_{2j}^{z}S_{2j+1}^{z}\bigr\}\,,$ (1)

where $S_{j}^{\mu}$ ($\mu\!=\!x,y,z$) is the $\mu$-component of the $S\!=\!1/2$ spin operator at the $j$th site; $J_{\rm F}$ and $J_{\rm AF}$ denote, respectively, the magnitudes of the ferromagnetic and antiferromagnetic interactions; $\Gamma_{\rm F}$ and $\Delta_{\rm AF}$ are, respectively, the parameters representing the $XXZ$-type anisotropies of the former and latter interactions; $N$ is the total number of spins in the system, which is assumed to be a multiple of four. We assume that $J_{\rm F}\!>\!J_{\rm AF}\!>\!0.0$, $1.0\geq\Gamma_{\rm F}\geq 0.0$, and $1.0\geq|\Delta_{\rm AF}|$; that is, the ferromagnetic interactions are stronger than the antiferromagnetic ones, and the anisotropies of the former and latter interactions are of the Ising type and the $XY$ type, respectively.

## 2 Ground-State Phase Diagram

First of all, we show in Fig. 1 our numerical results for the ground-state phase diagrams obtained in the following cases: (a) the phase diagram on the $\Delta_{\rm AF}$ versus $\Gamma_{\rm F}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $J_{\rm AF}\!=\!0.1$, (b) that on the $\Delta_{\rm AF}$ versus $J_{\rm AF}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $\Gamma_{\rm F}\!=\!0.8$, and (c) that on the $\Gamma_{\rm F}$ versus $J_{\rm AF}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $\Delta_{\rm AF}\!=\!-0.12$. These diagrams contain the ferromagnetic (F), $XY$1, singlet-dimer (SD), and up-up-down-down (UUDD) phases in addition to the nTLL phase, which appears in wide regions. The physical pictures of the SD and UUDD states are sketched in Fig. 2. The SD state is a unique, symmetry-protected topological gapped state, and the UUDD state is a doubly degenerate gapped state.

Figure 1: Ground-state phase diagrams (a) on the $\Delta_{\rm AF}$ versus $\Gamma_{\rm F}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $J_{\rm AF}\!=\!0.1$, (b) on the $\Delta_{\rm AF}$ versus $J_{\rm AF}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $\Gamma_{\rm F}\!=\!0.8$, and (c) on the $\Gamma_{\rm F}$ versus $J_{\rm AF}$ plane in the case of $J_{\rm F}\!=\!1.0$ and $\Delta_{\rm AF}\!=\!-0.12$, determined numerically in the present work.
In (a’), (b’), and (c’), parts of (a), (b), and (c) are enlarged, respectively. The results of the perturbation calculations are shown in (a”), (b”), and (c”); by these calculations the $XY$1-SD, SD-UUDD, and F-$XY1$ phase transition lines cannot be obtained. Figure 2: Physical pictures of the SD (left) and UUDD (right) states. Open circles denote $S\\!=\\!1/2$ spins, and solid and dotted lines denote ferromagnetic and antiferromagnetic bonds, respectively. Ellipses represent singlet pairs of two $S\\!=\\!1/2$ spins, while arrows denote fixed projection values of the $S\\!=\\!1/2$ spins. As can be seen from these figures, the SD phase is a unique gapped phase, while the UUDD phase is a doubly degenerate gapped phase. Let us now discuss how to determine numerically the phase boundary lines in the phase diagram shown in Fig. 1. We denote, respectively, by $E_{0}^{\rm P}(N,M)$ and $E_{1}^{\rm P}(N,M)$, the lowest and second-lowest energy eigenvalues of the Hamiltonian ${\cal H}$ under periodic boundary conditions within the subspace of fixed $N$ and $M$, where $M(=\\!0,\pm 1,\cdots,\pm N/2)$ is the total magnetization. Furthermore, we denote by $E_{0}^{\rm T}(N,M,P)$ the lowest eigenvalue of ${\cal H}$ under twisted boundary conditions within the subspace of fixed $N$, $M$, and $P$, where $P(=\\!\pm 1)$ is the eigenvalue of the space inversion operator with respect to the twisted bond. We have numerically calculated these energies for finite-size systems with up to $N\\!=\\!28$ spins by means of the exact-diagonalization method. The ground-state energy of the finite-$N$ system is given by $E_{0}^{\rm P}(N,N/2)$ in the F phase and by $E_{0}^{\rm P}(N,0)$ in the other phases. In the following way, we have estimated the finite-size critical values of the interaction parameters for each phase transition. Then, the phase boundary line for the transition has been obtained by connecting the results for the $N\\!\to\\!\infty$ extrapolation of the finite-size critical values. First, the phase transition between the $XY$1 and SD phases is the Berezinskii-Kosterlitz-Thouless (BKT) transition[13, 14]. It is known that for this transition, the level spectroscopy method developed by Nomura and Kitazawa[16] is very powerful for calculating the phase transition line. This method implies that the finite-size critical values are estimated from $\Delta E_{0}^{\rm P}(N,2)=\Delta E_{0}^{\rm T}(N,0,-1),$ (2) under the condition that $\Delta E_{0}^{\rm T}(N,0,+1)$ is larger than these excitations, where $\Delta E_{0}^{\rm P}(N,M)=E_{0}^{\rm P}(N,M)-E_{0}^{\rm P}(N,0)$ and $\Delta E_{0}^{\rm T}(N,M,P)=E_{0}^{\rm T}(N,M,P)-E_{0}^{\rm P}(N,0)$. This equation is also applicable to estimate the nTLL-UUDD phase transition line (see Appendix). Secondly, the phase transition between the SD and UUDD phases is the 2D Ising-type transition. Therefore, as is well known, the phase transition line is determined by the phenomenological renormalization-group (PRG) method[17]. Then, to estimate the finite-size critical values, we solve the PRG equation, $N\,\Delta E_{1}^{\rm P}(N,0)=(N+4)\,\Delta E_{1}^{\rm P}(N+4,0)\,,$ (3) where $\Delta E_{1}^{\rm P}(N,M)\equiv E_{1}^{\rm P}(N,M)-E_{0}^{\rm P}(N,M)$. Thirdly, as for the phase transition between the nTLL and $XY$1 phases, the nTLL state accompanies two-magnon bound-states, while the $XY$1 state does not. 
Therefore, in the ground-state magnetization curve for the finite-size system, the magnetization increases from $M\\!=\\!0$ to $M\\!=\\!2$ in the former state and from $M\\!=\\!0$ to $M\\!=\\!1$ in the latter state. Thus, the finite-size critical values are estimated from $\Delta E_{0}^{\rm P}(N,2)=2\,\Delta E_{0}^{\rm P}(N,1)\,.$ (4) We note that the binding energy of two magnons[12] is defined by $E_{\rm bind}(N)\\!\equiv\\!\Delta E_{0}^{\rm P}(N,2)\\!-\\!2\,\Delta E_{0}^{\rm P}(N,1)$. Accordingly, Eq.(4) is also the condition of $E_{\rm bind}(N)\\!=\\!0$. Lastly, it is apparent that the finite-size critical values for the phase transitions between the F phase and one of the $XY$1 and nTLL phases are estimated from $E_{0}^{\rm P}(N,N/2)\\!=\\!E_{0}^{\rm P}(N,0)\,.$ (5) ## 3 Perturbation Theory In the nTLL phase, the important states of two $S\\!=\\!1/2$ spins connected by the Ising-like ferromagnetic coupling $J_{\rm F}$ are $|\uparrow\uparrow\rangle$ and $|\downarrow\downarrow\rangle$, and the remaining two states $|(1/\sqrt{2})(\uparrow\downarrow\pm\downarrow\uparrow)\rangle$ have higher energies. To describe the nTLL state, we introduce a pseudo-spin operator $T$ with $T\\!=\\!1/2$, where $|T^{z}=1/2\rangle=|\uparrow\uparrow\rangle$ and $|T^{z}=-1/2\rangle=|\downarrow\downarrow\rangle$. We perturbationally derive the effective Hamiltonian $\cal H_{\rm eff}$ described by $T$. We take the first term of the right-hand side of Eq.(1) as the unperturbed Hamiltonian, and the second term as the perturbation. Up to the second order perturbation calculation, we obtain the following $XXZ$ model, apart from the energy shift, ${\cal H}_{\rm eff}=\sum_{j=1}^{N/2}\left\\{J_{\rm eff}^{\perp}(T_{j}^{x}T_{j+1}^{x}+T_{j}^{y}T_{j+1}^{y})+J_{\rm eff}^{z}T_{j}^{z}T_{j+1}^{z}\right\\}$ (6) with $\displaystyle J_{\rm eff}^{\perp}=c_{1}-2c_{2}+c_{3},~{}~{}~{}~{}~{}J_{\rm eff}^{z}=b+c_{1}+2c_{2}+c_{3},$ (7) $\displaystyle b=J_{\rm AF}\Delta_{\rm AF},~{}~{}~{}~{}~{}c_{1}={J_{\rm AF}^{2}\over 8J_{\rm F}(1-\Gamma_{\rm F})},~{}~{}~{}~{}~{}c_{2}={J_{\rm AF}^{2}\over 8J_{\rm F}},~{}~{}~{}~{}~{}c_{3}={J_{\rm AF}^{2}\over 8J_{\rm F}(1+\Gamma_{\rm F})},$ (8) where $J_{\rm eff}^{\perp}\geq 0$. The very well known exact solution[14] for Eq. (6) shows that the ground state phase diagram of ${\cal H}_{\rm eff}$ consists of the F phase, the TLL phase, and the Néel phase. These phases correspond, respectively, to the F phase, the nTLL phase and the UUDD phase of the original model (1). Thus, the F-nTLL boundary line of the original model (1) is determined by $J_{\rm eff}^{z}=-J_{\rm eff}^{\perp}$, while the nTLL- UUDD boundary line by $J_{\rm eff}^{z}=J_{\rm eff}^{\perp}$. We note that the same effective Hamiltonian with the periodic boundary condition is obtained irrespective of the periodic boundary condition or the twisted boundary condition of the original Hamiltonian (1). The nTLL-$XY1$ boundary can be estimated by considering the energy cost of replacing a $|T^{z}=\pm 1/2\rangle$ pseudo-spin in the ${\cal H}_{\rm eff}$ picture by $|(1/\sqrt{2})(\uparrow\downarrow+\downarrow\uparrow)\rangle$. If this energy cost is negative, a macroscopic number of spin pair states with $|(1/\sqrt{2})(\uparrow\downarrow+\downarrow\uparrow)\rangle$ are generated, which brings about the breakdown of the ${\cal H}_{\rm eff}$ picture. 
On the other hand, if this energy cost is positive, $|(1/\sqrt{2})(\uparrow\downarrow+\downarrow\uparrow)\rangle$ spin-pair states are scarcely generated, resulting in the stability of the ${\cal H}_{\rm eff}$ picture. The replacement of a pseudo-spin $|T^{z}=-1/2\rangle\Rightarrow|(1/\sqrt{2})(\uparrow\downarrow+\downarrow\uparrow)\rangle$ in the $M=0$ ground state leads to the $M=1$ state. Thus, if this cost is positive, the $M=1$ excitation is gapped, which is consistent with the picture of the nTLL state. We note that the replacement of a $|T^{z}=-1/2\rangle$ spin state by a $|T^{z}=+1/2\rangle$ one brings about the $M=2$ state. After some calculations, we obtain this energy cost as $(1/2)\\{J_{\rm F}(1-\Gamma_{\rm F})-J_{\rm AF}\\}$. Thus, the nTLL-$XY1$ boundary line is estimated as $J_{\rm AF}=J_{\rm F}(1-\Gamma_{\rm F})$. The above three boundary lines are depicted in Figs. 1(a”), (b”), and (c”). ## 4 Concluding Remarks We have determined the ground-state phase diagram of the $S\\!=\\!1/2$ ferromagnetic-antiferromagnetic bond-alternating chain described by the Hamiltonian of Eq. (1), mainly by using numerical methods. Discussing the case where the ferromagnetic interactions with the Ising-type anisotropy are much stronger than the antiferromagnetic ones with the $XY$-type anisotropy, we have shown that the phase diagrams (see Fig. 1) contain the F, $XY$1, SD, and UUDD phases as well as the nTLL phase. We have also developed a perturbation theory, which qualitatively explains the numerically obtained phase boundary lines associated with the nTLL phase. This paper, in succession to our previous work[12], reports the appearance of the nTLL phase in an unfrustrated $S\\!=\\!1/2$ chain under no external magnetic field. It should be noted that the SDW2 phase[3] (’SDW’ is an abbreviation for ’spin-density-wave’) does not appear in the ground-state phase diagram of the present model. The reason for this is as follows. In both the nTLL state and the SDW2 state, the nematic four-spin correlation $C_{2}(j)\equiv\langle S_{1}^{+}S_{2}^{+}S_{1+j}^{-}S_{2+j}^{-}\rangle$ and the longitudinal two-spin correlation $C_{z}(j)\equiv\langle S_{1}^{z}S_{1+j}^{z}\rangle$, where $j$ is even, decay according to power laws. The former correlation is dominant in the nTLL state, while the latter is dominant in the SDW2 state. Namely, $\eta_{2}<\eta_{z}$ in the nTLL state, whereas $\eta_{2}>\eta_{z}$ in the SDW2 state, where $\eta_{2}$ and $\eta_{z}$ are the power-decay exponents of $C_{2}(j)$ and $C_{z}(j)$, respectively. Since the TLL relation $\eta_{2}\eta_{z}=1$ holds, it follows that $\eta_{2}>1$ in the SDW2 state. However, the condition $\eta_{2}>1$ is nothing but the necessary condition for the UUDD state (see Appendix). In the zero-magnetization case, the operator coming from the Umklapp process becomes relevant owing to $\eta_{2}>1$, which leads to the UUDD state. Thus, the SDW2 region does not exist in the ground-state phase diagram. On the other hand, in the finite-magnetization case, the operator induced by the Umklapp process is absent, and the SDW2 region appears in the phase diagram. One can see several tricritical points in the phase diagrams given in Fig. 1; these are the F-$XY$1-nTLL, $XY$1-SD-UUDD, and $XY$1-UUDD-nTLL tricritical points in case (a), the F-$XY$1-nTLL, $XY$1-SD-nTLL, and SD-UUDD-nTLL tricritical points in case (b), and the SD-UUDD-nTLL and $XY$1-SD-nTLL tricritical points in case (c). 
According to the discussion of the universality class (see Appendix), on the other hand, the two tricritical points associated with the $XY$1, SD, UUDD, and nTLL phases should merge into a single $XY$1-SD-UUDD-nTLL tetracritical point. In order to obtain this tetracritical point numerically, it is indispensable to develop a new numerical method which is applicable both to the phase transition between the $XY1$ and nTLL phases and to that between the SD and UUDD phases. Since both of these transitions are of the 2D Ising type, finding such a method does not seem impossible. However, this has not yet been accomplished, and thus the problem is beyond the scope of the present study. Furthermore, the nTLL phase may be expected to appear in the ground-state phase diagram of the present system even in the case where the antiferromagnetic interactions are stronger than the ferromagnetic interactions, if $1.0\gg\Gamma_{{\rm F}}\\!\geq\\!0.0$ and $1.0\gg|\Delta_{{\rm AF}}|$. We are now planning to perform this calculation, and the results will be reported in the near future. ## Acknowledgments This work has been partly supported by JSPS KAKENHI, Grant Numbers 16K05419, 16H01080 (J-Physics), 18H04330 (J-Physics), JP20K03866, JP20H05274, and 21H05021. We also thank the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and the Computer Room, Yukawa Institute for Theoretical Physics, Kyoto University for computational facilities. ## Appendix A The Level Spectroscopy Method for the nTLL-UUDD Transition Let us consider the transverse two-spin correlation $C_{1}(j)\equiv\langle S_{1}^{+}S_{1+j}^{-}\rangle$ and the nematic four-spin correlation $C_{2}(j)\equiv\langle S_{1}^{+}S_{2}^{+}S_{1+j}^{-}S_{2+j}^{-}\rangle$, where $j$ is even. Both of them decay according to power laws in the $XY1$ phase, with the relation $\eta_{2}=4\eta_{1}$ [12], where $\eta_{1}$ and $\eta_{2}$ are the power-decay exponents of $C_{1}(j)$ and $C_{2}(j)$, respectively. As is well known, the BKT transition from the $XY1$ phase to the unique gapped phase (the SD phase in the present case) occurs at $\eta_{1}=1/4$ (hence $\eta_{2}=1$), whereas that to the doubly degenerate gapped phase (the UUDD phase in the present case) occurs at $\eta_{1}=1$ (accordingly $\eta_{2}=4$) [15, 16]. In the level spectroscopy method by Nomura and Kitazawa[16] for the BKT transition between the $XY1$ phase and the SD phase, we search for the $\eta_{2}=1$ line. This is clear from the fact that the excitation $\Delta E_{0}^{\rm P}(N,2)$ is used in Eq.(2), since $\Delta E_{0}^{\rm P}(N,2)$ is closely related to $\eta_{2}$ as $\eta_{2}=\lim_{N\to\infty}[N\Delta E_{0}^{\rm P}(N,2)/\pi v_{\rm s}]$, where $v_{\rm s}$ is the spin-wave velocity [15, 16, 18, 19]. On the other hand, in the nTLL phase, $C_{1}(j)$ exhibits exponential decay, while $C_{2}(j)$ exhibits power-law decay. Thus, the role of $\eta_{1}$ in the BKT transitions from the $XY1$ phase is taken over by $\eta_{2}$ in those from the nTLL phase. Namely, the BKT transition from the nTLL phase to the unique gapped phase occurs at $\eta_{2}=1/4$, while that to the doubly degenerate gapped state occurs at $\eta_{2}=1$. Hence, the BKT transition from the nTLL phase to the doubly degenerate gapped phase (the UUDD phase in the present case) can also be determined by Eq.(2), in which the $\eta_{2}=1$ line is searched for. Neither the BKT transition from the $XY1$ phase to the UUDD phase nor that from the nTLL phase to the SD phase occurs on the $\eta_{2}=1$ line. 
This is because the former transition, even if it occurred, would have to occur at $\eta_{1}=1$ ($\eta_{2}=4$), and similarly the latter at $\eta_{2}=1/4$. Accordingly, the $XY$1-SD-UUDD-nTLL tetracritical point should exist on the $\eta_{2}=1$ line determined by Eq.(2). A similar discussion can also be applied to the $S\\!=\\!1$ chain with the XXZ and on-site anisotropies, which has been discussed, for example, by Chen et al. [20]. Namely, the $XY$1-Haldane-Néel-nTLL tetracritical point should exist in this $S\\!=\\!1$ chain. ## References * [1] A. V. Chubukov, Phys. Rev. B 44, 4693 (1991). * [2] T. Vekua, A. Honecker, H.-J. Mikeska and F. Heidrich-Meisner, Phys. Rev. B 76, 174420 (2007). * [3] T. Hikihara, L. Kecke, T. Momoi and A. Furusaki, Phys. Rev. B 78, 144404 (2008). * [4] J. Sudan, A. Lüscher and A. M. Läuchli, Phys. Rev. B 80, 140402(R) (2009). * [5] T. Sakai, T. Tonegawa and K. Okamoto, Phys. Status Solidi B 247, 583 (2010). * [6] M. Sato, T. Hikihara and T. Momoi, Phys. Rev. Lett. 110, 077206 (2013). * [7] O. A. Starykh and L. Balents, Phys. Rev. B 89, 104407 (2014). * [8] N. Büttgen, K. Nawa, T. Fujita, M. Hagiwara, P. Kuhns, A. Prokofiev, A. P. Reyes, L. E. Svistov, K. Yoshimura and M. Takigawa, Phys. Rev. B 90, 134401 (2014). * [9] A. Orlova, E. L. Green, J. M. Law, D. I. Gorbunov, G. Chanda, S. Krämer, M. Horvatić, R. K. Kremer, J. Wosnitza and G. L. J. A. Rikken, Phys. Rev. Lett. 118, 247201 (2017). * [10] A. Parvej and M. Kumar, Phys. Rev. B 96, 054413 (2017). * [11] M. Bosiočić, F. Bert, S. E. Dutton, R. J. Cava, P. J. Baker, M. Požek and P. Mendels, Phys. Rev. B 96, 224424 (2017). * [12] T. Tonegawa, T. Hikihara, K. Okamoto, S. C. Furuya and T. Sakai, J. Phys. Soc. Jpn. 87, 104002 (2018). * [13] V. L. Berezinskii, Sov. Phys. JETP 34, 610 (1971); J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973). * [14] T. Giamarchi, Quantum Physics in One Dimension (Clarendon Press, Oxford, 2003). * [15] A. Kitazawa, K. Nomura and K. Okamoto, Phys. Rev. Lett. 76, 4038 (1996). * [16] K. Nomura and A. Kitazawa, J. Phys. A 31, 7341 (1998). * [17] M. P. Nightingale, Physica A 83, 561 (1976). * [18] J. L. Cardy, J. Phys. A: Math. Gen. 17, L385 (1984). * [19] T. Sakai, Phys. Rev. B 58, 6268 (1998). * [20] W. Chen, K. Hida and B. C. Sanctuary, Phys. Rev. B 67, 104401 (2003).
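For readers who wish to reproduce the perturbative boundary lines, the following minimal sketch (ours; the function names and the scanned parameter values are choices made here and appear nowhere in the paper) evaluates the effective couplings of Eqs. (7) and (8) and classifies the ground state of the effective $XXZ$ chain (6) accordingly. It reproduces only the second-order boundaries shown in Figs. 1(a”), (b”), and (c”), not the level-spectroscopy results.

```python
import numpy as np

def effective_couplings(J_F, J_AF, Gamma_F, Delta_AF):
    """Second-order effective XXZ couplings of Eqs. (7) and (8)."""
    b  = J_AF * Delta_AF
    c1 = J_AF**2 / (8.0 * J_F * (1.0 - Gamma_F))
    c2 = J_AF**2 / (8.0 * J_F)
    c3 = J_AF**2 / (8.0 * J_F * (1.0 + Gamma_F))
    return c1 - 2.0 * c2 + c3, b + c1 + 2.0 * c2 + c3   # (J_perp, J_z)

def perturbative_phase(J_F, J_AF, Gamma_F, Delta_AF):
    """Phase of the effective chain (6), via the exactly solved XXZ phase
    diagram: F for J_z < -J_perp, TLL (-> nTLL) for |J_z| < J_perp, and
    Neel (-> UUDD) for J_z > J_perp.  The H_eff picture itself requires
    J_AF < J_F*(1 - Gamma_F); otherwise the XY1 region takes over."""
    J_perp, J_z = effective_couplings(J_F, J_AF, Gamma_F, Delta_AF)
    if J_AF > J_F * (1.0 - Gamma_F):
        return "XY1"
    if J_z < -J_perp:
        return "F"
    return "nTLL" if J_z < J_perp else "UUDD"

# Scan Delta_AF at J_F = 1.0, J_AF = 0.1, Gamma_F = 0.8 (cf. Fig. 1(b)):
for Delta_AF in (-0.6, -0.12, 0.0, 0.6):
    print(Delta_AF, perturbative_phase(1.0, 0.1, 0.8, Delta_AF))
# prints: F, nTLL, UUDD, UUDD, consistent with the perturbative boundaries
```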
# Fractional Fourier transforms, harmonic oscillator propagators and Strichartz estimates on Pilipović and modulation spaces Joachim Toft Department of Mathematics, Linnæus University, Växjö, Sweden <EMAIL_ADDRESS>, Divyang G. Bhimani Department of Mathematics, Indian Institute of Science Education and Research, Pune, India <EMAIL_ADDRESS>and Ramesh Manna School of Mathematical Sciences, National Institute of Science Education and Research, Bhubaneswar, An OCC of Homi Bhabha National Institute, Jatni 752050, India. <EMAIL_ADDRESS> ###### Abstract. We prove that harmonic oscillator propagators and fractional Fourier transforms are essentially the same. We deduce continuity properties and fixed-time estimates for such operators on modulation spaces, and apply the results to prove Strichartz estimates for such propagators when acting on Pilipović and modulation spaces. In particular, we extend some results by Balhara, Cordero, Nicola, Rodino and Thangavelu. We also show that general forms of fractional harmonic oscillator propagators are continuous on suitable Pilipović spaces. ###### Key words and phrases: Pilipović spaces, Modulation spaces, Wiener amalgam, Bargmann transform, Harmonic oscillator, propagators, Strichartz estimates ###### 2010 Mathematics Subject Classification: primary 46F05; 44A15; 32A25; 35Q41; secondary 32A36 ## 0\. Introduction In this paper we investigate mapping properties of powers of harmonic oscillators, their propagators and fractional Fourier transforms on Pilipović spaces and modulation spaces. In particular, we link fractional Fourier transforms with harmonic oscillator propagators, which allows various properties to be carried over in full between these operators. We also deduce certain continuity properties for fractional Fourier transforms on modulation spaces (including Wiener amalgam spaces). By using the link between the fractional Fourier transform and harmonic oscillator propagators, we at the same time extend certain continuity properties of harmonic oscillator propagators on modulation spaces, given in [8, 9, 15, 16]. These investigations are also related to [13], where, among other results, $L^{p}$ and Hausdorff-Young estimates for fractional Fourier transforms are established. Thereafter we apply such continuity results to extend certain Strichartz estimates for harmonic oscillator propagators in [16] when acting on modulation spaces. In the end we apply our results to certain general classes of time-dependent equations, similar to Schrödinger and heat equations. We prove that several of these equations are ill-posed in the framework of the Schwartz space, Gelfand-Shilov spaces and their distribution spaces, but well-posed in the framework of suitable Pilipović spaces. Harmonic oscillators and their propagators are important in quantum mechanics, e. g. when investigating free particles in quantum systems. An important question concerns continuity for such operators. For the (standard) harmonic oscillator $H_{x}=|x|^{2}-\Delta_{x},\qquad x\in\mathbf{R}^{d},$ the corresponding propagator is given by $P_{\varrho}=e^{-i\varrho H_{x}},$ (0.1) which can also be formulated as $P_{\varrho}=e^{-iH_{x,\varrho}},\qquad H_{x,\varrho}=\varrho(|x|^{2}-\Delta_{x}).$ (0.1)′ Here $\varrho\in\mathbf{R}$. (See Sections 1–4 for more general operators of such forms.) It is well-known that these operators are homeomorphisms on the Schwartz space $\mathscr{S}(\mathbf{R}^{d})$ and its dual $\mathscr{S}^{\prime}(\mathbf{R}^{d})$, the set of tempered distributions. 
(See [35] and Section 1 for notations.) Similar continuity properties hold true with Pilipović spaces $\mathcal{H}_{0,s}(\mathbf{R}^{d})$ and $\mathcal{H}_{s}(\mathbf{R}^{d})$, and their distribution spaces $\mathcal{H}_{0,s}^{\prime}(\mathbf{R}^{d})$ and $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$, in place of $\mathscr{S}(\mathbf{R}^{d})$ and $\mathscr{S}^{\prime}(\mathbf{R}^{d})$. We recall that the Fourier invariant (standard) Gelfand-Shilov spaces are special cases of Pilipović spaces. More precisely we have $\displaystyle\mathcal{H}_{s}(\mathbf{R}^{d})$ $\displaystyle=\mathcal{S}_{s}(\mathbf{R}^{d})\neq\\{0\\},$ $\displaystyle\quad s$ $\displaystyle\geq\frac{1}{2},$ $\displaystyle\qquad\mathcal{H}_{s}(\mathbf{R}^{d})$ $\displaystyle\neq\mathcal{S}_{s}(\mathbf{R}^{d})=\\{0\\},$ $\displaystyle\quad s$ $\displaystyle<\frac{1}{2},$ $\displaystyle\mathcal{H}_{0,s}(\mathbf{R}^{d})$ $\displaystyle=\Sigma_{s}(\mathbf{R}^{d})\neq\\{0\\},$ $\displaystyle\quad s$ $\displaystyle>\frac{1}{2},$ $\displaystyle\qquad\mathcal{H}_{0,s}(\mathbf{R}^{d})$ $\displaystyle\neq\Sigma_{s}(\mathbf{R}^{d})=\\{0\\},$ $\displaystyle\quad s$ $\displaystyle\leq\frac{1}{2}$ (see [48, 53]). Harmonic oscillators and their propagators also possess convenient mapping properties in the setting of suitable modulation spaces, a family of (quasi-)Banach spaces which were introduced in [23] by H. Feichtinger and further developed in [24, 25, 26, 28, 32]. For example, in [37, 38, 39, 40], several continuity results are deduced for Schrödinger propagators with potential terms acting on modulation spaces. Harmonic oscillator propagators are then obtained by choosing the potentials as $c|x|^{2}$ for some positive constant $c$. For example, it is proved in [37, 38, 39] that for the Fourier invariant modulation space $M^{p}(\mathbf{R}^{d})$, the map $e^{iH_{x,\varrho}}\,:\,M^{p}(\mathbf{R}^{d})\to M^{p}(\mathbf{R}^{d})$ (0.2) is continuous. See also [8, 9, 15, 16] for other results concerning mapping properties of propagators on modulation spaces. In Section 3 we give a rigorous proof, based on the Bargmann transform, that the propagator in (0.1) can be identified with a fractional Fourier transform through the formula $e^{-i\frac{\pi}{4}H_{x,\varrho}}=e^{-i\frac{\pi\varrho d}{4}}\mathscr{F}_{\\!\varrho},$ (0.3) for every $\varrho\in\mathbf{R}$. Here we recall that the fractional Fourier transform $\mathscr{F}_{\\!\varrho}$, acting on $\mathscr{S}(\mathbf{R}^{d})$, is given by $\mathscr{F}_{\\!\varrho}f(\xi)=\langle K_{d,\varrho}(\xi,\,\cdot\,),f\rangle,$ where $K_{d,\varrho}(\xi,x)$ is the distribution kernel, given by $K_{d,\varrho}=\bigotimes_{j=1}^{d}K_{\varrho},$ (0.4) with $K_{\varrho}(\xi,x)=\begin{cases}\left(\frac{1-i\cot(\frac{\pi\varrho}{2})}{2\pi}\right)^{\frac{1}{2}}\exp\left(i\cdot\frac{(x^{2}+\xi^{2})\cos({\frac{\pi\varrho}{2}})-2{\xi x}}{2\sin(\frac{\pi\varrho}{2})}\right),&\\!\varrho\in\mathbf{R}\setminus 2\mathbf{Z},\\\\[4.30554pt] \delta(\xi-x),&\\!\varrho\in 4\mathbf{Z},\\\\[4.30554pt] \delta(\xi+x),&\\!\varrho\in 2+4\mathbf{Z},\end{cases}$ (0.5) Here $x,\xi\in\mathbf{R}$. (See e. g. [3, 45] and the references therein. See also Section 1 for more details on the fractional Fourier transform.) 
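As a concrete illustration of (0.4) and (0.5), the following minimal sketch (our own; the grid, truncation and test functions are assumptions made here and appear nowhere in the paper) discretizes the one-dimensional kernel $K_{\varrho}$ and numerically checks two expected properties: the index law $\mathscr{F}_{a}\mathscr{F}_{b}=\mathscr{F}_{a+b}$ and the invariance of the Gaussian $h_{0}(x)=\pi^{-1/4}e^{-x^{2}/2}$, the latter being a special case of (0.3).

```python
import numpy as np

def frft_matrix(x, rho):
    """Quadrature discretization of the kernel K_rho in (0.5), rho not in 2Z."""
    a = 0.5 * np.pi * rho
    c = np.sqrt((1.0 - 1j / np.tan(a)) / (2.0 * np.pi))
    xi, y = x[:, None], x[None, :]
    phase = ((y**2 + xi**2) * np.cos(a) - 2.0 * xi * y) / (2.0 * np.sin(a))
    return c * np.exp(1j * phase) * (x[1] - x[0])   # includes the weight dx

x  = np.linspace(-8.0, 8.0, 801)
h0 = np.exp(-0.5 * x**2) / np.pi**0.25              # Gaussian h_0
g  = np.exp(-0.5 * (x - 1.0)**2) / np.pi**0.25      # shifted (non-invariant) test

F3, F4, F7 = (frft_matrix(x, r) for r in (0.3, 0.4, 0.7))
print(np.max(np.abs(F3 @ (F4 @ g) - F7 @ g)))       # index law; small error
print(np.max(np.abs(F7 @ h0 - h0)))                 # F_rho h_0 = h_0, cf. (0.3)
```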
We observe that $\displaystyle(\mathscr{F}_{1}f)(\xi)$ $\displaystyle=\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbf{R}^{d}}f(x)e^{-i\langle x,\xi\rangle}\,dx$ and $\displaystyle(\mathscr{F}_{3}f)(\xi)$ $\displaystyle=\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbf{R}^{d}}f(x)e^{i\langle x,\xi\rangle}\,dx$ are the (ordinary) Fourier transform and the inverse Fourier transform, respectively. Ideas on fractional Fourier transforms go back to at least 1929 (cf. the historical notes and references in [12]). The first rigorous approach, however, seems to have been given in 1939 by Kober in [41]. Since then numerous applications of fractional Fourier transforms have appeared. For example, they were explicitly introduced in quantum mechanics around 1980 and thereafter applied in optics (see e. g. [22, 44] and the references therein). In quantum mechanics, the formulae (0.4) and (0.5) appear naturally by considering certain rotations in the phase space and their induced actions on quantum observables (see e. g. [15, 16, 18, 19, 20] and Remark 1.10 in Section 1). There are also several applications in signal analysis, e. g. when discussing rotation properties of time-frequency representations in time-frequency analysis, phase retrieval, optics and pattern recognition. (See e. g. [2, 3, 22, 45] and the references therein.) Here we also remark that in some aspects, the theory of the fractional Fourier transform is merely a special case of the metaplectic representation (see [20]). In terms of the Bargmann transform $\mathfrak{V}_{d}$, fractional Fourier transforms take the convenient form $(\mathfrak{V}_{d}(\mathscr{F}_{\\!\varrho}f))(z)=(\mathfrak{V}_{d}f)(e^{-i\frac{\pi\varrho}{2}}z),\qquad z\in\mathbf{C}^{d}.$ (0.6) (See e. g. [5, 53].) The relation (0.6) was obtained for $\varrho=1$ already in [5] by Bargmann. A motivation of the identity (0.3) is given in e. g. [42], where several links between the harmonic oscillator and fractional Fourier transforms are established. We also remark that F. G. Mehler established a formula (afterwards named _Mehler’s formula_) for the operator $e^{-H_{x,\varrho}}$, $\varrho>0$, already in [43]. Observe that $e^{-H_{x,\varrho}}$ is the canonical density operator in statistical physics, see [49, Section 3.4]. Analytic extension of Mehler’s formula shows that the kernel of $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ is essentially the same as the kernel (0.4) and (0.5) of the fractional Fourier transform. As a consequence of (0.3), one may link several results in e. g. [52, 53] concerning mapping properties of fractional Fourier transforms on modulation spaces with analogous results in e. g. [8, 9, 15, 16, 37, 38, 39, 40] concerning mapping properties of the propagators on modulation spaces. Some explicit demonstrations of such transfers are given in Section 3. There it is shown that $e^{iH_{x,\varrho}}$ is a homeomorphism on $M^{q}_{(\omega)}(\mathbf{R}^{d})$ for suitable weights $\omega$, because the fractional Fourier transforms possess the same continuity properties in view of [53, Proposition 7.1]. (See Propositions 3.4 and 3.5 in Section 3.) More generally, in Section 3 we show that if $\varrho\in\mathbf{R}\setminus 2\mathbf{Z}$ and $q\leq p$, then $\mathscr{F}_{\\!\varrho}$ is continuous from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $M^{q,p}_{(\omega)}(\mathbf{R}^{d})$, and from $W^{q,p}_{(\omega)}(\mathbf{R}^{d})$ to $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$, for suitable weights $\omega$. 
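The Mehler link mentioned above is equally easy to test. The sketch below (again our own illustration, in dimension $d=1$ and with our own grid) applies the classical Mehler kernel of $e^{-tH_{x}}$, $t>0$, to the first two Hermite functions and recovers the eigenvalues $e^{-(2n+1)t}$; the formal substitution $t\to i\varrho$ in this kernel is the analytic extension alluded to above.

```python
import numpy as np

def mehler(x, y, t):
    """Classical Mehler kernel of exp(-t H), H = x^2 - d^2/dx^2, d = 1."""
    s, c = np.sinh(2.0 * t), np.cosh(2.0 * t)
    return np.exp(-((x**2 + y**2) * c - 2.0 * x * y) / (2.0 * s)) / np.sqrt(2.0 * np.pi * s)

x  = np.linspace(-8.0, 8.0, 801); dx = x[1] - x[0]
h0 = np.exp(-x**2 / 2) / np.pi**0.25
h1 = np.sqrt(2.0) * x * h0
t  = 0.3
M  = mehler(x[:, None], x[None, :], t)
print(np.max(np.abs(M @ h0 * dx - np.exp(-1.0 * t) * h0)))  # eigenvalue e^{-t}
print(np.max(np.abs(M @ h1 * dx - np.exp(-3.0 * t) * h1)))  # eigenvalue e^{-3t}
```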
By using the link (0.3), we transfer the above continuity properties for fractional Fourier transforms to harmonic oscillator propagators. In fact, the following proposition is obtained by letting the weights in Theorems 3.6 and 3.7 in Section 3 be trivially equal to one. (See also Proposition 0.1′ in Section 3.) ###### Proposition 0.1. Let $\varrho\in\mathbf{R}$ and $p,q\in(0,\infty]$ be such that $q\leq p$. Then the following is true: 1. (1) the map $\displaystyle\mathscr{F}_{\\!\varrho}=e^{-i\frac{\pi}{4}H_{x,\varrho}}\,$ $\displaystyle:\,M^{p,q}(\mathbf{R}^{d})+W^{q,p}(\mathbf{R}^{d})$ $\displaystyle\to M^{q,p}(\mathbf{R}^{d})+W^{p,q}(\mathbf{R}^{d})$ is continuous; 2. (2) if in addition $\varrho\notin\mathbf{Z}$, then the map $\displaystyle\mathscr{F}_{\\!\varrho}=e^{-i\frac{\pi}{4}H_{x,\varrho}}\,$ $\displaystyle:\,M^{p,q}(\mathbf{R}^{d})+W^{q,p}(\mathbf{R}^{d})$ $\displaystyle\to M^{q,p}(\mathbf{R}^{d}){\textstyle\bigcap}W^{p,q}(\mathbf{R}^{d})$ is continuous. There are several types of estimates behind the conclusions in the previous proposition, which are expected to be applicable in non-linear partial differential equations. Examples of such estimates are $\displaystyle\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}}$ $\displaystyle=\|e^{-i\frac{\pi}{4}H_{x,\varrho}}f\|_{M^{q,p}}\lesssim|\sin(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{M^{p,q}},$ $\displaystyle\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}}$ $\displaystyle=\|e^{-i\frac{\pi}{4}H_{x,\varrho}}f\|_{M^{q,p}}\lesssim|\cos(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{W^{q,p}},$ $\displaystyle\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}}$ $\displaystyle=\|e^{-i\frac{\pi}{4}H_{x,\varrho}}f\|_{W^{p,q}}\lesssim|\cos(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{M^{p,q}},$ and $\displaystyle\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}}$ $\displaystyle=\|e^{-i\frac{\pi}{4}H_{x,\varrho}}f\|_{W^{p,q}}\lesssim|\sin(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{W^{q,p}},$ still with $q\leq p$ (see Theorems 3.6 and 3.7). Some of our investigations include more general propagators and fractional Fourier transforms, where $\varrho$ in (0.1)′ and (0.3) is allowed to be any complex number. If $\operatorname{Im}(\varrho)>0$, then $P_{\varrho}$ makes sense neither as a continuous operator on $\mathscr{S}(\mathbf{R}^{d})$, nor on any Fourier invariant Gelfand-Shilov space or its dual. Consequently, in order to investigate such an extended class of propagators, or, even more generally, powers of $H_{x,\varrho}$ and their propagators, i. e. operators of the forms $H_{x,\varrho}^{r}\quad\text{and}\quad P_{\varrho,r}=e^{-iH_{x,\varrho}^{r}},\qquad\varrho\in\mathbf{C},\ r\in\mathbf{R},$ (0.7) other families of function and distribution spaces are needed. It turns out that such continuity discussions can be performed in the framework of certain Pilipović spaces and their distribution spaces. In order to shed further light, we present the following propositions, which are immediate consequences of our investigations. For the fractional Fourier transforms, these conclusions also follow from the analysis in [53]. ###### Proposition 0.2. Let $\varrho\in\mathbf{C}$ and $s\in\overline{\mathbf{R}}_{\flat}$. Then the following is true: 1. (1) if $s<\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are homeomorphisms on $\mathcal{H}_{s}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$; 2. 
(2) if $\operatorname{Im}(\varrho)<0$ and $s\geq\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are continuous injections but not surjections on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$; 3. (3) if $\operatorname{Im}(\varrho)=0$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are homeomorphisms on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$; 4. (4) if $\operatorname{Im}(\varrho)>0$ and $s\geq\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are discontinuous on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$. The same holds true with $s>\frac{1}{2}$, $s\leq\frac{1}{2}$ and $\mathcal{H}_{0,s}$ in place of $s\geq\frac{1}{2}$, $s<\frac{1}{2}$ and $\mathcal{H}_{s}$ at each occurrence. ###### Proposition 0.3. Let $\varrho\in\mathbf{C}$. Then the following is true: 1. (1) if $\operatorname{Im}(\varrho)<0$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are continuous from $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}(\mathbf{R}^{d})$, and $\mathscr{F}_{\\!\varrho}(\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d}))=e^{-i\frac{\pi}{4}H_{x,\varrho}}(\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d}))\subsetneq\mathcal{S}_{1/2}(\mathbf{R}^{d})\text{;}$ 2. (2) if $\operatorname{Im}(\varrho)>0$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are discontinuous from $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$, and $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})\subsetneq\mathscr{F}_{\\!\varrho}(\mathcal{S}_{1/2}(\mathbf{R}^{d}))=e^{-i\frac{\pi}{4}H_{x,\varrho}}(\mathcal{S}_{1/2}(\mathbf{R}^{d}))\subsetneq\mathcal{H}_{0,1/2}^{\prime}(\mathbf{R}^{d}).$ By usual inclusion relations for Pilipović spaces, it follows that Proposition 0.3 is a refinement of (2) and (4) in Proposition 0.2. In fact, consider the inclusions $\displaystyle\mathcal{S}_{s}(\mathbf{R}^{d})$ $\displaystyle\subsetneq\Sigma_{t}(\mathbf{R}^{d})\subsetneq\mathcal{S}_{t}(\mathbf{R}^{d})\subsetneq\mathscr{S}(\mathbf{R}^{d})$ $\displaystyle\subsetneq\mathscr{S}^{\prime}(\mathbf{R}^{d})\subsetneq\mathcal{S}_{t}^{\prime}(\mathbf{R}^{d})\subsetneq\Sigma_{t}^{\prime}(\mathbf{R}^{d})\subsetneq\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d}),\quad\frac{1}{2}\leq s<t,$ (0.8) between the Schwartz space, its distribution space, and all (standard) Fourier invariant Gelfand-Shilov spaces of functions and ultra-distributions. Then Proposition 0.3 (1) shows that if $\operatorname{Im}(\varrho)<0$, then the images of the spaces in (0.8) under $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are strict subspaces of $\mathcal{S}_{1/2}(\mathbf{R}^{d})$, the smallest space in (0.8). If instead $\operatorname{Im}(\varrho)>0$, then Proposition 0.3 (2) shows that the image of this smallest space $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ under $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ is a superspace of $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$, the largest space in (0.8). 
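The mechanism behind Propositions 0.2 and 0.3 can be seen already at the level of Hermite coefficients (a heuristic observation of ours, not part of the formal proofs): using the well-known relation $H_{x}h_{\alpha}=(2|\alpha|+d)h_{\alpha}$ (cf. (1.6) below), one has $e^{-i\varrho H_{x}}h_{\alpha}=e^{-i\varrho(2|\alpha|+d)}h_{\alpha},\qquad|e^{-i\varrho(2|\alpha|+d)}|=e^{\operatorname{Im}(\varrho)(2|\alpha|+d)},$ so that for $\operatorname{Im}(\varrho)<0$ every Hermite coefficient of $e^{-i\varrho H_{x}}f$ gains a factor which decays exponentially in $|\alpha|$ (a strong smoothing effect), whereas for $\operatorname{Im}(\varrho)>0$ the factors grow exponentially in $|\alpha|$, which rules out boundedness on each space in (0.8).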
This implies, roughly speaking, that the (standard) spaces in (0.8) are disqualified when performing detailed continuity investigations of the canonical density operator $e^{-H_{x,\varrho}}$ in statistical physics, and that these spaces cannot be used in continuity investigations of the inverse $e^{H_{x,\varrho}}$ of that operator. On the other hand, Proposition 0.2 (1) shows that Pilipović spaces and their distribution spaces, which are not Gelfand-Shilov spaces of functions and distributions, are suitable for continuity investigations of $e^{-H_{x,\varrho}}$ and $e^{H_{x,\varrho}}$. In the most general case we allow $\varrho\in\mathbf{C}^{d}$, in which case $H_{x,\varrho}$ is defined as $H_{x,\varrho}=\sum_{j=1}^{d}\varrho_{j}(x_{j}^{2}-\partial_{j}^{2}),$ and $\mathscr{F}_{\\!\varrho}$ is a fractional Fourier transform of multiple order $\varrho$ (see e. g. [20]). In Section 3 we show that (0.3) and (0.6) still hold true for such general $\varrho$. Finally, in Section 4 we apply our results to deduce certain types of Strichartz estimates. We recall that Strichartz estimates appear when deducing properties of solutions to Cauchy problems such as the Schrödinger equation $\begin{cases}i\partial_{t}u-H_{x}u=F,\\\\[4.30554pt] u(0,x)=u_{0}(x),\qquad(t,x)\in I\times\mathbf{R}^{d}.\end{cases}$ (0.9) Here $I=[0,\infty)$ or $I=[0,T]$ for some $T>0$, $F$ is a suitable function or (ultra-)distribution on $I\times\mathbf{R}^{d}$ and $u_{0}$ is a suitable function or (ultra-)distribution on $\mathbf{R}^{d}$. It follows that continuity properties of the propagators $\displaystyle(Eu_{0})(t,x)$ $\displaystyle\equiv(e^{-itH_{x}}u_{0})(x),$ $\displaystyle\quad(t,x)$ $\displaystyle\in I\times\mathbf{R}^{d},$ (0.10) $\displaystyle(S_{1}F)(t,x)$ $\displaystyle=\int_{0}^{t}(e^{-i(t-s)H_{x}}F(s,\,\cdot\,))(x)\,ds,$ $\displaystyle\qquad(t,x)$ $\displaystyle\in I\times\mathbf{R}^{d},$ (0.11) and $\displaystyle(S_{2}F)(t,x)$ $\displaystyle=\int_{I}(e^{-i(t-s)H_{x}}F(s,\,\cdot\,))(x)\,ds,$ $\displaystyle\quad(t,x)$ $\displaystyle\in I\times\mathbf{R}^{d},$ (0.12) are essential when finding estimates for solutions to (0.9) (see [16, 30, 50]). Such estimates are called _Strichartz estimates_ (see also Subsection 1.8 for more details). In Section 4 we deduce Strichartz estimates for the operators $E$, $S_{1}$ and $S_{2}$ when acting on modulation spaces or Lebesgue spaces with values in modulation spaces. For example, by straightforward applications of Proposition 0.1, we get the following result, which is a special case of Theorem 4.3 in Section 4. ###### Theorem 0.4. Let $p,q,r_{0}\in(0,\infty]$ be such that $q\leq p$ and $\frac{1}{r_{0}}>d\left(\frac{1}{q}-\frac{1}{p}\right).$ Then $E$ is uniquely extendable to a continuous map $E:M^{p,q}(\mathbf{R}^{d})+W^{q,p}(\mathbf{R}^{d})\to L^{r_{0}}([0,T];M^{q,p}(\mathbf{R}^{d}))\bigcap L^{r_{0}}([0,T];W^{p,q}(\mathbf{R}^{d})).$ Another application of Proposition 0.1 in combination with the Hardy-Littlewood-Sobolev inequality leads to the following special case of Theorem 4.1 in Section 4. ###### Theorem 0.5. 
Let $p,p_{0},q\in(1,\infty]$ and $r_{0}\in(0,\infty)$ be such that $0\leq d\left(\frac{1}{q}-\frac{1}{p}\right)<1,\quad d\left(\frac{1}{q}-\frac{1}{p}\right)\leq 1+\frac{1}{r_{0}}-\frac{1}{p_{0}}.$ Then $S_{1}$ and $S_{2}$ from $C([0,T];M^{1}(\mathbf{R}^{d}))$ to $L^{\infty}([0,T];M^{1}(\mathbf{R}^{d}))$ are uniquely extendable to continuous mappings $\displaystyle S_{j}$ $\displaystyle:\,$ $\displaystyle L^{p_{0}}([0,T];M^{p,q}(\mathbf{R}^{d})+W^{q,p}(\mathbf{R}^{d}))$ $\displaystyle\to L^{r_{0}}([0,T];M^{q,p}(\mathbf{R}^{d})\bigcap W^{p,q}(\mathbf{R}^{d})),$ $j=1,2$. In Section 4 we also discuss well-posedness properties for more general equations, where $H_{x}$ in (0.9) is replaced by $\zeta H_{x,\varrho}^{r}$ for some $\zeta\in\mathbf{C}$, $\varrho\in\mathbf{C}^{d}$ and $r>0$ which satisfy $\operatorname{Im}(\zeta\varrho_{j})>0$ for some $j$. Here we show that such equations are ill-posed, not only in the framework of Schwartz functions and tempered distributions, but also for Gelfand-Shilov functions and distributions, while they are well-posed in the framework of suitable Pilipović spaces and their distribution spaces. ## Acknowledgement The authors are grateful to Marianna Chatzakou, Maurice de Gosson and Alberto Parmeggiani for valuable comments. The first author was supported by Vetenskapsrådet (Swedish Science Council), within the project 2019-04890. The second author is thankful for the research grant (DST/INSPIRE/04/2016/001507) and the third author is thankful for the research grant (DST/INSPIRE/04/2019/001914). ## 1\. Preliminaries In this section we recall some basic facts. We start by discussing Gelfand-Shilov spaces, then spaces of sequences and Pilipović spaces, and thereafter weights and modulation spaces, together with some of their properties. Then we recall the Bargmann transform and some of its properties, and introduce suitable classes of power series expansions and entire functions on $\mathbf{C}^{d}$. Finally, in Subsection 1.8 we recall some facts on Strichartz estimates. ### 1.1. Gelfand-Shilov spaces and their distribution spaces We start by recalling definitions of Fourier invariant (standard) Gelfand-Shilov spaces and their distribution spaces (cf. e. g. [29, 47]). Let $s\geq 0$ and $h\in\mathbf{R}_{+}$ be fixed. Then $\mathcal{S}_{s,h}(\mathbf{R}^{d})$ is the set of all $f\in C^{\infty}(\mathbf{R}^{d})$ such that $\|f\|_{\mathcal{S}_{s,h}}\equiv\sup\frac{|x^{\beta}\partial^{\alpha}f(x)|}{h^{|\alpha|+|\beta|}(\alpha!\,\beta!)^{s}}$ is finite. Here the supremum is taken over all $\alpha,\beta\in\mathbf{N}^{d}$ and $x\in\mathbf{R}^{d}$. Obviously $\mathcal{S}_{s,h}\subseteq\mathscr{S}$ is a Banach space which increases with $h$ and $s$. The _Gelfand-Shilov space_ $\mathcal{S}_{s}(\mathbf{R}^{d})$ ($\Sigma_{s}(\mathbf{R}^{d})$) of Roumieu type (Beurling type) is the inductive limit (projective limit) of $\mathcal{S}_{s,h}(\mathbf{R}^{d})$ with respect to $h$. This implies that $\mathcal{S}_{s}(\mathbf{R}^{d})=\bigcup_{h>0}\mathcal{S}_{s,h}(\mathbf{R}^{d})\quad\text{and}\quad\Sigma_{s}(\mathbf{R}^{d})=\bigcap_{h>0}\mathcal{S}_{s,h}(\mathbf{R}^{d})$ (1.1) is a so-called LB-space and Fréchet space, respectively, with semi-norms $\|\,\cdot\,\|_{\mathcal{S}_{s,h}}$, $h>0$. Let $\mathcal{S}_{s,h}^{\prime}(\mathbf{R}^{d})$ be the ($L^{2}$-)dual of $\mathcal{S}_{s,h}(\mathbf{R}^{d})$. 
If $s\geq\frac{1}{2}$ ($s>\frac{1}{2}$), then the _Gelfand-Shilov distribution space_ $\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})$ ($\Sigma_{s}^{\prime}(\mathbf{R}^{d})$) is the projective limit (inductive limit) of $\mathcal{S}_{s,h}^{\prime}(\mathbf{R}^{d})$ with respect to $h>0$. Hence $\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})=\bigcap_{h>0}\mathcal{S}_{s,h}^{\prime}(\mathbf{R}^{d})\quad\text{and}\quad\Sigma_{s}^{\prime}(\mathbf{R}^{d})=\bigcup_{h>0}\mathcal{S}_{s,h}^{\prime}(\mathbf{R}^{d}).$ (1.1)′ We remark that (0.8) is true with dense embeddings, and that $\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})$ and $\Sigma_{t}^{\prime}(\mathbf{R}^{d})$ are the (strong) duals of $\mathcal{S}_{s}(\mathbf{R}^{d})$ and $\Sigma_{t}(\mathbf{R}^{d})$. On the other hand, if $s<t\leq\frac{1}{2}$, then $\mathcal{S}_{s}(\mathbf{R}^{d})$ and $\Sigma_{t}(\mathbf{R}^{d})$ are trivially equal to $\\{0\\}$ (cf. [29, 46, 47]). From now on we let $\mathscr{F}$ be the Fourier transform, given by $(\mathscr{F}f)(\xi)=\widehat{f}(\xi)\equiv(2\pi)^{-\frac{d}{2}}\int_{\mathbf{R}^{d}}f(x)e^{-i\langle x,\xi\rangle}\,dx$ when $f\in L^{1}(\mathbf{R}^{d})$. Here $\langle\,\cdot\,,\,\cdot\,\rangle$ denotes the usual scalar product on $\mathbf{R}^{d}$. The map $\mathscr{F}$ extends uniquely to homeomorphisms on $\mathscr{S}^{\prime}(\mathbf{R}^{d})$, $\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})$ and $\Sigma_{s}^{\prime}(\mathbf{R}^{d})$, and restricts to homeomorphisms on $\mathscr{S}(\mathbf{R}^{d})$, $\mathcal{S}_{s}(\mathbf{R}^{d})$ and $\Sigma_{s}(\mathbf{R}^{d})$, and to a unitary operator on $L^{2}(\mathbf{R}^{d})$. Next we recall some mapping properties of Gelfand-Shilov spaces under short- time Fourier transforms. Let $\phi\in\mathscr{S}(\mathbf{R}^{d})$ be fixed. For every $f\in\mathscr{S}^{\prime}(\mathbf{R}^{d})$, the _short-time Fourier transform_ $V_{\phi}f$ is the distribution on $\mathbf{R}^{2d}$ defined by the formula $(V_{\phi}f)(x,\xi)=\mathscr{F}(f\,\overline{\phi(\,\cdot\,-x)})(\xi)=(f,\phi(\,\cdot\,-x)e^{i\langle\,\cdot\,,\xi\rangle}).$ (1.2) We recall that if $T(f,\phi)\equiv V_{\phi}f$ when $f,\phi\in\mathcal{S}_{s}(\mathbf{R}^{d})$, then $T$ is uniquely extendable to sequentially continuous mappings $\displaystyle T\,$ $\displaystyle:\,$ $\displaystyle\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})\times\mathcal{S}_{s}(\mathbf{R}^{d})$ $\displaystyle\to\mathcal{S}_{s}^{\prime}(\mathbf{R}^{2d})\bigcap C^{\infty}(\mathbf{R}^{2d}),$ $\displaystyle T\,$ $\displaystyle:\,$ $\displaystyle\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})\times\mathcal{S}_{s}^{\prime}(\mathbf{R}^{d})$ $\displaystyle\to\mathcal{S}_{s}^{\prime}(\mathbf{R}^{2d}),$ and similarly with $\Sigma_{s}$ in place of $\mathcal{S}_{s}$ at each occurrence (cf. [17, 53]). We also note that $V_{\phi}f$ takes the form $V_{\phi}f(x,\xi)=(2\pi)^{-\frac{d}{2}}\int_{\mathbf{R}^{d}}f(y)\overline{\phi(y-x)}e^{-i\langle y,\xi\rangle}\,dy$ (1.2)′ for admissible $f$. There are several characterizations of Gelfand-Shilov spaces and their distribution spaces, e. g. by suitable estimates of their Fourier and Short- time Fourier transforms (cf. [14, 34, 53]). ### 1.2. Spaces of sequences The definitions of Pilipović spaces and spaces of power series expansions are based on certain spaces of sequences on $\mathbf{N}^{d}$, indexed by the extended set $\mathbf{R}_{\flat}=\mathbf{R}_{+}\bigcup\\{\,\flat_{\sigma}\,;\,\sigma\in\mathbf{R}_{+}\,\\},$ of $\mathbf{R}_{+}$. 
We extend the ordering relation on $\mathbf{R}_{+}$ to the set $\mathbf{R}_{\flat}$, by letting $s_{1}<\flat_{\sigma}<s_{2}\quad\text{and}\quad\flat_{\sigma_{1}}<\flat_{\sigma_{2}}$ when $s_{1},s_{2},\sigma_{1},\sigma_{2}\in\mathbf{R}_{+}$ satisfy $s_{1}<\frac{1}{2}\leq s_{2}$ and $\sigma_{1}<\sigma_{2}$. (Cf. [53].) ###### Definition 1.1. Let $s\in\mathbf{R}_{\flat}$ and $r,\sigma\in\mathbf{R}_{+}$. 1. (1) The set $\ell_{0}^{\prime}(\mathbf{N}^{d})$ consists of all formal sequences $a=\\{a(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}\subseteq\mathbf{C}$, and $\ell_{0}(\mathbf{N}^{d})$ is the set of all $a\in\ell_{0}^{\prime}(\mathbf{N}^{d})$ such that $a(\alpha)\neq 0$ for at most finitely many $\alpha\in\mathbf{N}^{d}$; 2. (2) The Banach spaces $\ell_{s;r}^{\infty}(\mathbf{N}^{d})$ and $\ell_{s;r}^{\infty,*}(\mathbf{N}^{d})$ consist of all $a\in\ell_{0}^{\prime}(\mathbf{N}^{d})$ such that their corresponding norms $\displaystyle\|a\|_{\ell_{s;r}^{\infty}}$ $\displaystyle=\begin{cases}\underset{\alpha\in\mathbf{N}^{d}}{\sup}|a(\alpha)e^{r|\alpha|^{\frac{1}{2s}}}|,&s\in\mathbf{R}_{+},\\\\[4.30554pt] \underset{\alpha\in\mathbf{N}^{d}}{\sup}|a(\alpha)r^{-|\alpha|}\alpha!^{\frac{1}{2\sigma}}|,&s=\flat_{\sigma},\end{cases}$ and $\displaystyle\|a\|_{\ell_{s;r}^{\infty,*}}$ $\displaystyle=\begin{cases}\underset{\alpha\in\mathbf{N}^{d}}{\sup}|a(\alpha)e^{-r|\alpha|^{\frac{1}{2s}}}|,&s\in\mathbf{R}_{+},\\\\[4.30554pt] \underset{\alpha\in\mathbf{N}^{d}}{\sup}|a(\alpha)r^{-|\alpha|}\alpha!^{-\frac{1}{2\sigma}}|,&s=\flat_{\sigma},\end{cases}$ respectively, are finite; 3. (3) The space $\ell_{s}(\mathbf{N}^{d})$ ($\ell_{0,s}(\mathbf{N}^{d})$) is the inductive limit (projective limit) of $\ell_{s;r}^{\infty}(\mathbf{N}^{d})$ with respect to $r>0$, and $\ell_{s}^{\prime}(\mathbf{N}^{d})$ ($\ell_{0,s}^{\prime}(\mathbf{N}^{d})$) is the projective limit (inductive limit) of $\ell_{s;r}^{\infty,*}(\mathbf{N}^{d})$ with respect to $r>0$. We also let $\|\,\cdot\,\|_{\ell_{0;N}}$ be the semi-norm $\|a\|_{\ell_{0;N}}\equiv\sup_{|\alpha|\leq N}|a(\alpha)|,\qquad a\in\ell_{0}^{\prime}(\mathbf{N}^{d}),$ (1.3) and $\ell_{0;N}(\mathbf{N}^{d})$ be the Banach space, with norm (1.3), consisting of all $a\in\ell_{0}^{\prime}(\mathbf{N}^{d})$ such that $a(\alpha)=0$ when $|\alpha|\geq N$. Then $\ell_{0}(\mathbf{N}^{d})$ is the inductive limit of $\ell_{0;N}(\mathbf{N}^{d})$ with respect to $N$, and $\ell_{0}^{\prime}(\mathbf{N}^{d})$ is a Fréchet space under the semi-norms in (1.3). (Cf. [53].) In what follows, $(\,\cdot\,,\,\cdot\,)_{\mathscr{H}}$ denotes the scalar product in the Hilbert space $\mathscr{H}$. ###### Remark 1.2. Let $s\in\overline{\mathbf{R}_{\flat}}$. Then the duals of $\ell_{0,s}(\mathbf{N}^{d}),\quad\ell_{s}(\mathbf{N}^{d}),\quad\ell_{s}^{\prime}(\mathbf{N}^{d})\quad\text{and}\quad\ell_{0,s}^{\prime}(\mathbf{N}^{d})$ (1.4) are given by $\ell_{0,s}^{\prime}(\mathbf{N}^{d}),\quad\ell_{s}^{\prime}(\mathbf{N}^{d}),\quad\ell_{s}(\mathbf{N}^{d})\quad\text{and}\quad\ell_{0,s}(\mathbf{N}^{d}),$ respectively, with respect to unique extensions of the form $(\,\cdot\,,\,\cdot\,)_{\ell^{2}(\mathbf{N}^{d})}$ on $\ell_{0}(\mathbf{N}^{d})\times\ell_{0}(\mathbf{N}^{d})$. If $s>0$, then $\ell_{0}(\mathbf{N}^{d})$ is dense in $\ell_{0}^{\prime}(\mathbf{N}^{d})$ and the spaces in (1.4). (See e. g. [53].) ### 1.3. 
Pilipović spaces and spaces of power series expansions on $\mathbf{C}^{d}$ We recall that the Hermite function of order $\alpha\in\mathbf{N}^{d}$ is defined by $h_{\alpha}(x)=\pi^{-\frac{d}{4}}(-1)^{|\alpha|}(2^{|\alpha|}\alpha!)^{-\frac{1}{2}}e^{\frac{1}{2}\cdot{|x|^{2}}}(\partial^{\alpha}e^{-|x|^{2}}).$ It follows that $h_{\alpha}(x)=((2\pi)^{\frac{d}{2}}\alpha!)^{-1}e^{-\frac{1}{2}\cdot{|x|^{2}}}p_{\alpha}(x),$ for some polynomial $p_{\alpha}$ of order $\alpha$ on $\mathbf{R}^{d}$, called the Hermite polynomial of order $\alpha$. The Hermite functions are eigenfunctions of the Fourier transform and of the harmonic oscillator $H_{x,c}\equiv H_{x}+c,\qquad H_{x}\equiv|x|^{2}-\Delta_{x},\qquad x\in\mathbf{R}^{d},$ (1.5) which acts on functions and (ultra-)distributions defined on $\mathbf{R}^{d}$. Here $c\in\mathbf{C}$ is fixed. More precisely, we have $H_{x,c}h_{\alpha}=(2|\alpha|+d+c)h_{\alpha}.$ (1.6) More generally, for any $c\in\mathbf{C}$ and $\varrho=(\varrho_{1},\dots,\varrho_{d})\in\mathbf{C}^{d}$, we let $H_{x,\varrho,c}\equiv\left(\sum_{j=1}^{d}\varrho_{j}(x_{j}^{2}-\partial_{x_{j}}^{2})\right)+c=\left(\sum_{j=1}^{d}\varrho_{j}H_{x_{j}}\right)+c,\qquad x\in\mathbf{R}^{d}.$ (1.7) Evidently, $H_{x,\varrho,c}$ is positive definite when $\varrho\in\mathbf{R}^{d}_{+}\quad\text{and}\quad c>-\sum_{j=1}^{d}\varrho_{j}.$ For convenience we put $H_{x,\varrho,c}=H_{x,\varrho_{0},c}$ when $\varrho=(\varrho_{0},\dots,\varrho_{0})\in\mathbf{C}^{d}$, and observe that $H_{x,\varrho,c}=\varrho H_{x}+c\quad\text{when}\quad\varrho\in\mathbf{C}.$ It is well-known that the set of Hermite functions is a basis for $\mathscr{S}(\mathbf{R}^{d})$ and an orthonormal basis for $L^{2}(\mathbf{R}^{d})$ (cf. [48]). In particular, if $f,g\in L^{2}(\mathbf{R}^{d})$, then $\|f\|_{L^{2}(\mathbf{R}^{d})}^{2}=\sum_{\alpha\in\mathbf{N}^{d}}|c_{h}(f,\alpha)|^{2}\quad\text{and}\quad(f,g)_{L^{2}(\mathbf{R}^{d})}=\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)\overline{c_{h}(g,\alpha)},$ where $\displaystyle f(x)$ $\displaystyle=\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)h_{\alpha}(x)$ (1.8) is the Hermite series expansion of $f$, and $\displaystyle c_{h}(f,\alpha)$ $\displaystyle=(f,h_{\alpha})_{L^{2}(\mathbf{R}^{d})}$ (1.9) is the Hermite coefficient of $f$ of order $\alpha\in\mathbf{N}^{d}$. We let $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ be the set of all formal Hermite series expansions in (1.8), and $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$ be the set of all formal power series expansions $F(z)=\sum_{\alpha\in\mathbf{N}^{d}}c(F,\alpha)e_{\alpha}(z),\qquad e_{\alpha}(z)=\frac{z^{\alpha}}{\sqrt{\alpha!}},\ \alpha\in\mathbf{N}^{d},$ (1.10) on $\mathbf{C}^{d}$. Then the map $\displaystyle T_{\mathcal{H}}\,$ $\displaystyle:$ $\displaystyle\,\\{c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}$ $\displaystyle\mapsto\sum_{\alpha\in\mathbf{N}^{d}}c(\alpha)h_{\alpha}$ (1.11) is bijective from $\ell_{0}^{\prime}(\mathbf{N}^{d})$ to $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$, and $\displaystyle T_{\mathcal{A}}\,$ $\displaystyle:$ $\displaystyle\,\\{c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}$ $\displaystyle\mapsto\sum_{\alpha\in\mathbf{N}^{d}}c(\alpha)e_{\alpha}$ (1.12) is bijective from $\ell_{0}^{\prime}(\mathbf{N}^{d})$ to $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$. We let the topologies of $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ and $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$ be inherited from $\ell_{0}^{\prime}(\mathbf{N}^{d})$ through the mappings $T_{\mathcal{H}}$ and $T_{\mathcal{A}}$, respectively. ###### Definition 1.3. 
Let $s\in\overline{\mathbf{R}_{\flat}}$. 1. (1) The spaces $\mathcal{H}_{0,s}(\mathbf{R}^{d}),\quad\mathcal{H}_{s}(\mathbf{R}^{d}),\quad\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})\quad\text{and}\quad\mathcal{H}_{0,s}^{\prime}(\mathbf{R}^{d}),$ (1.13) and their topologies, are the images under the map $T_{\mathcal{H}}$ of the spaces and their topologies in (1.4), respectively. 2. (2) The spaces $\mathcal{A}_{0,s}(\mathbf{C}^{d}),\quad\mathcal{A}_{s}(\mathbf{C}^{d}),\quad\mathcal{A}_{s}^{\prime}(\mathbf{C}^{d})\quad\text{and}\quad\mathcal{A}_{0,s}^{\prime}(\mathbf{C}^{d}),$ (1.14) and their topologies, are the images under the map $T_{\mathcal{A}}$ of the spaces and their topologies in (1.4), respectively. The spaces $\mathcal{H}_{s}(\mathbf{R}^{d})$ and $\mathcal{H}_{0,s}(\mathbf{R}^{d})$ in Definition 1.3 are called _Pilipović spaces_ of _Roumieu_ respectively _Beurling types_ of order $s$, and $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$ and $\mathcal{H}_{0,s}^{\prime}(\mathbf{R}^{d})$ are called _Pilipović distribution spaces_ of _Roumieu_ respectively _Beurling types_ of order $s$. There are several characterizations of Pilipović spaces, e. g. in terms of estimates of powers of the harmonic oscillator on the involved functions (see e. g. [4, 27, 53]). ###### Remark 1.4. Let $s_{1},s_{2}\in\overline{\mathbf{R}}_{\flat}$. For future references we recall that $\displaystyle\mathcal{H}_{s_{1}}(\mathbf{R}^{d})$ $\displaystyle=\mathcal{S}_{s_{1}}(\mathbf{R}^{d}),$ $\displaystyle\quad\mathcal{H}_{s_{1}}^{\prime}(\mathbf{R}^{d})$ $\displaystyle=\mathcal{S}_{s_{1}}^{\prime}(\mathbf{R}^{d}),$ $\displaystyle\quad s_{1}$ $\displaystyle\geq\frac{1}{2},$ (1.15) $\displaystyle\mathcal{H}_{0,s_{2}}(\mathbf{R}^{d})$ $\displaystyle=\Sigma_{s_{2}}(\mathbf{R}^{d}),$ $\displaystyle\quad\mathcal{H}_{0,s_{2}}^{\prime}(\mathbf{R}^{d})$ $\displaystyle=\Sigma_{s_{2}}^{\prime}(\mathbf{R}^{d}),$ $\displaystyle\quad s_{2}$ $\displaystyle>\frac{1}{2},$ while for the other choices of $s_{1},s_{2}$ we have $\displaystyle\mathcal{H}_{s_{1}}(\mathbf{R}^{d})$ $\displaystyle\neq\mathcal{S}_{s_{1}}(\mathbf{R}^{d})=\\{0\\},$ $\displaystyle\qquad s_{1}$ $\displaystyle<\frac{1}{2},$ (1.16) $\displaystyle\mathcal{H}_{0,s_{2}}(\mathbf{R}^{d})$ $\displaystyle\neq\Sigma_{s_{2}}(\mathbf{R}^{d})=\\{0\\},$ $\displaystyle\qquad 0<s_{2}$ $\displaystyle\leq\frac{1}{2},$ and that $\mathcal{H}_{s_{1}}(\mathbf{R}^{d})$ and $\mathcal{H}_{0,s_{2}}(\mathbf{R}^{d})$ in (1.15) and (1.16) are dense in $\mathscr{S}(\mathbf{R}^{d})$. (See e. g. [47, 53].) Hence, any non-trivial Gelfand-Shilov space and its distribution space, agree with corresponding Pilipović space and its distribution space. In particular, Gelfand-Shilov spaces and their distribution spaces can be characterized in similar ways as Pilipović spaces and their distribution spaces in terms of estimates of their coefficients in their Hermite function expansions. In this context we also recall that the Schwartz space and the set of tempered distributions can be characterized as $\displaystyle f$ $\displaystyle\in\mathscr{S}(\mathbf{R}^{d})$ $\displaystyle\Leftrightarrow$ $\displaystyle\quad|c_{h}(f,\alpha)|$ $\displaystyle\lesssim\langle\alpha\rangle^{-N}$ $\displaystyle\text{for every}\ N\geq 0$ (1.17) and $\displaystyle f$ $\displaystyle\in\mathscr{S}^{\prime}(\mathbf{R}^{d})$ $\displaystyle\Leftrightarrow$ $\displaystyle\quad|c_{h}(f,\alpha)|$ $\displaystyle\lesssim\langle\alpha\rangle^{N}$ $\displaystyle\text{for some}\ N\geq 0.$ (1.18) (See e. g. [48]). 
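The characterizations (1.17) and (1.18) are convenient also numerically. The following sketch (our own illustration; the grid and the test functions are assumptions made here) computes Hermite coefficients by quadrature and contrasts the fast coefficient decay of a Schwartz function with the slow, power-like decay of a discontinuous $L^{2}$ function.

```python
import numpy as np

# Hermite functions h_0, ..., h_40 on a grid, via the standard recurrence
# h_{n+1}(x) = sqrt(2/(n+1)) x h_n(x) - sqrt(n/(n+1)) h_{n-1}(x).
x  = np.linspace(-12.0, 12.0, 4001); dx = x[1] - x[0]
H  = [np.exp(-x**2 / 2) / np.pi**0.25]
H.append(np.sqrt(2.0) * x * H[0])
for n in range(1, 40):
    H.append(np.sqrt(2.0 / (n + 1)) * x * H[n] - np.sqrt(n / (n + 1.0)) * H[n - 1])

f_nice  = np.exp(-x**2) * np.cos(3.0 * x)      # Schwartz function
f_rough = np.sign(x) * H[0]                    # discontinuous, not in S(R)
for f, name in ((f_nice, "Schwartz"), (f_rough, "rough")):
    c = np.array([np.sum(f * h) * dx for h in H])      # c_h(f, n), cf. (1.9)
    print(name, np.max(np.abs(c[20:25])), np.max(np.abs(c[36:41])))
# The 'Schwartz' tails decay rapidly, the 'rough' tails only like a power of n,
# in accordance with (1.17) and (1.18).
```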
Here and in what follows we let $\langle x\rangle=(1+|x|^{2})^{\frac{1}{2}}.$ ### 1.4. Weight functions Next we recall some facts on weight functions. A _weight_ on $\mathbf{R}^{d}$ is a positive function $\omega\in L^{\infty}_{loc}(\mathbf{R}^{d})$ such that $1/\omega\in L^{\infty}_{loc}(\mathbf{R}^{d})$. The set of weights on $\mathbf{R}^{d}$ is denoted by $\mathscr{P}_{\\!A}(\mathbf{R}^{d})$. In the sequel we usually assume that $\omega$ is _moderate_ , or _$v$ -moderate_ for some positive function $v\in L^{\infty}_{loc}(\mathbf{R}^{d})$. This means that $\omega(x+y)\lesssim\omega(x)v(y),\qquad x,y\in\mathbf{R}^{d}.$ (1.19) Here $A\lesssim B$ means that $A\leq cB$ for a suitable constant $c>0$, and we write $A\asymp B$ when $A\lesssim B$ and $B\lesssim A$. We note that (1.19) implies that $\omega$ fulfills the estimates $v(-x)^{-1}\lesssim\omega(x)\lesssim v(x),\quad x\in\mathbf{R}^{d}.$ (1.20) We let $\mathscr{P}_{E}(\mathbf{R}^{d})$ be the set of all moderate weights on $\mathbf{R}^{d}$. In several situations we also deal with weights which are radially symmetric in each phase-space variable $(x_{j},\xi_{j})$. The set of such weights is denoted by $\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$. That is, $\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$ consists of all $\omega\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ such that $\omega(x,\xi)=\omega_{0}(\rho)$ for some $\omega_{0}\in\mathscr{P}_{\\!A}(\mathbf{R}^{d})$, where $\rho_{j}=x_{j}^{2}+\xi_{j}^{2}$. It can be proved that if $\omega\in\mathscr{P}_{E}(\mathbf{R}^{d})$, then $\omega$ is $v$-moderate for some $v(x)=e^{r|x|}$, provided the positive constant $r$ is chosen large enough (cf. [33]). In particular, (1.20) shows that for any $\omega\in\mathscr{P}_{E}(\mathbf{R}^{d})$, there is a constant $r>0$ such that $e^{-r|x|}\lesssim\omega(x)\lesssim e^{r|x|},\quad x\in\mathbf{R}^{d}.$ (1.21) We also let $\mathscr{P}(\mathbf{R}^{d})$ be the set of all weights $\omega$ on $\mathbf{R}^{d}$ such that $\omega$ is moderated by $v(x)=(1+|x|)^{r}$, for some $r\geq 0$. Evidently, $\mathscr{P}(\mathbf{R}^{d})\subseteq\mathscr{P}_{E}(\mathbf{R}^{d})$. We say that $v$ is _submultiplicative_ if $v$ is _even_ and (1.19) holds with $\omega=v$. In the sequel, $v$ always stands for a submultiplicative weight if nothing else is stated. ### 1.5. Modulation spaces and Wiener amalgam spaces Before defining modulation spaces we first address some notions of mixed-norm spaces of Lebesgue type. Let $p,q\in(0,\infty]$ and $r=\min(p,q)$. For any $f\in L^{r}_{\operatorname{loc}}(\mathbf{R}^{2d})$, let $\displaystyle\|f\|_{L^{p,q}}=\|f\|_{L^{p,q}(\mathbf{R}^{2d})}$ $\displaystyle\equiv\|g_{1,f,p}\|_{L^{q}(\mathbf{R}^{d})},$ where $\displaystyle\quad g_{1,f,p}(\xi)$ $\displaystyle\equiv\|f(\,\cdot\,,\xi)\|_{L^{p}(\mathbf{R}^{d})}$ and $\displaystyle\|f\|_{L^{p,q}_{*}}=\|f\|_{L^{p,q}_{*}(\mathbf{R}^{2d})}$ $\displaystyle\equiv\|g_{2,f,q}\|_{L^{p}(\mathbf{R}^{d})},$ where $\displaystyle\quad g_{2,f,q}(x)$ $\displaystyle\equiv\|f(x,\,\cdot\,)\|_{L^{q}(\mathbf{R}^{d})}.$ We also let $L^{p,q}(\mathbf{R}^{2d})$ and $L^{p,q}_{*}(\mathbf{R}^{2d})$ be the quasi-Banach spaces which consist of all $f\in L^{r}_{\operatorname{loc}}(\mathbf{R}^{2d})$ such that $\|f\|_{L^{p,q}}$ and $\|f\|_{L^{p,q}_{*}}$ are finite, respectively. Let $\phi\in\Sigma_{1}(\mathbf{R}^{d})\setminus 0$, $p,q\in(0,\infty]$ and $\omega\in\mathscr{P}_{E}(\mathbf{R}^{2d})$. 
Then the modulation spaces $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$ consist of all $f\in\Sigma_{1}^{\prime}(\mathbf{R}^{d})$ such that $V_{\phi}f\cdot\omega$ belongs to $L^{p,q}(\mathbf{R}^{2d})$ respectively $L^{p,q}_{*}(\mathbf{R}^{2d})$. We equip $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$ with the quasi-norms $f\mapsto\|f\|_{M^{p,q}_{(\omega)}}\equiv\|V_{\phi}f\cdot\omega\|_{L^{p,q}}\quad\text{and}\quad f\mapsto\|f\|_{W^{p,q}_{(\omega)}}\equiv\|V_{\phi}f\cdot\omega\|_{L^{p,q}_{*}},$ (1.22) respectively. For conveniency we also set $M^{p}_{(\omega)}=M^{p,p}_{(\omega)}$, and remark that $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ is one of the most common types of modulation spaces. We also set $M^{p,q}=M^{p,q}_{(\omega)},$ and $M^{p}=M^{p}_{(\omega)}$ when $\omega=1$, and similarly for $W^{p,q}_{(\omega)}$ spaces. Modulation spaces with $\omega\in\mathscr{P}(\mathbf{R}^{2d})$ were introduced by Feichtinger in [23]. The theory was thereafter extended and generalized in several ways (see e. g. [24, 25, 26, 28]). In the following proposition we list some basic properties for modulation spaces. We refer to [23, 25, 28, 32, 51] for the proof. ###### Proposition 1.5. Let $r\in(0,1]$, $p,p_{j},q_{j}\in(0,\infty]$ and $\omega,\omega_{j},v\in\mathscr{P}_{E}(\mathbf{R}^{2d})$, $j=1,2$, be such that $r\leq p,q$, $p_{1}\leq p_{2}$, $q_{1}\leq q_{2}$, $\omega_{2}\lesssim\omega_{1}$, and let $\omega$ be $v$-moderate. Then the following is true: 1. (1) $\Sigma_{1}(\mathbf{R}^{d})\subseteq M^{p,q}_{(\omega)}(\mathbf{R}^{d}),W^{p,q}_{(\omega)}(\mathbf{R}^{d})\subseteq\Sigma_{1}^{\prime}(\mathbf{R}^{d})$ with continuous inclusions. If in addition $p,q<\infty$, then $\Sigma_{1}(\mathbf{R}^{d})$ is dense in $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$. If, more restricted, $\omega\in\mathscr{P}(\mathbf{R}^{2d})$, then similar facts hold true with $\mathscr{S}$ in place of $\Sigma_{1}$ at each occurrence; 2. (2) if $\phi\in M^{r}_{(v)}(\mathbf{R}^{d})\setminus 0$, then $f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})$, if and only if $\|V_{\phi}f\cdot\omega\|_{L^{p,q}}$ is finite, and $f\in W^{p,q}_{(\omega)}(\mathbf{R}^{d})$, if and only if $\|V_{\phi}f\cdot\omega\|_{L^{p,q}_{*}}$ is finite. In particular, $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$ are independent of the choice of $\phi\in M^{r}_{(v)}(\mathbf{R}^{d})\setminus 0$. Moreover, different choices of $\phi$ in (1.22) give rise to equivalent quasi-norms; 3. (3) $M^{p_{1},q_{1}}_{(\omega_{1})}(\mathbf{R}^{d})\subseteq M^{p_{2},q_{2}}_{(\omega_{2})}(\mathbf{R}^{d})$ and $W^{p_{1},q_{1}}_{(\omega_{1})}(\mathbf{R}^{d})\subseteq W^{p_{2},q_{2}}_{(\omega_{2})}(\mathbf{R}^{d})$; 4. (4) if $\omega_{0}(\xi,x)=\omega(-x,\xi)$, then $\mathscr{F}$ is a homeomorphism from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $W^{q,p}_{(\omega_{0})}(\mathbf{R}^{d})$. ###### Remark 1.6. In the framework of Pilipović distribution spaces, the definition of modulation spaces are extended in [53] to include more general weights (which are not necessary moderate). In these approaches the window function $\phi$ is fixed and equal to the Gaussian $\phi(x)=\pi^{-\frac{d}{4}}e^{-\frac{1}{2}|x|^{2}}$. 
###### Remark 1.6.

In the framework of Pilipović distribution spaces, the definition of modulation spaces is extended in [53] to include more general weights (which are not necessarily moderate). In these approaches the window function $\phi$ is fixed and equal to the Gaussian $\phi(x)=\pi^{-\frac{d}{4}}e^{-\frac{1}{2}|x|^{2}}$. For any $\omega\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$, the modulation spaces $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$ then consist of all $f\in\mathcal{H}_{\flat_{1}}^{\prime}(\mathbf{R}^{d})$ such that the corresponding quasi-norms in (1.22) are finite. It is proved in [53] that $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ and $W^{p,q}_{(\omega)}(\mathbf{R}^{d})$ are _quasi-Banach spaces_. If in addition $p,q\geq 1$, then these spaces are _Banach spaces_.

### 1.6. Spaces of entire functions and the Bargmann transform

Let $\Omega\subseteq\mathbf{C}^{d}$ be open. Then $A(\Omega)$ denotes the set of all analytic functions in $\Omega$.

Next we recall some properties of the Bargmann transform (cf. [5, 6]). We set

$\langle z,w\rangle=\sum_{j=1}^{d}z_{j}w_{j}\quad\text{and}\quad(z,w)=\langle z,\overline{w}\rangle,\quad\text{when}\quad z=(z_{1},\dots,z_{d})\in\mathbf{C}^{d}\quad\text{and}\quad w=(w_{1},\dots,w_{d})\in\mathbf{C}^{d},$

and otherwise $\langle\,\cdot\,,\,\cdot\,\rangle$ denotes the duality between test function spaces and their corresponding duals. The Bargmann transform $\mathfrak{V}_{d}f$ of $f\in L^{2}(\mathbf{R}^{d})$ is defined by the formula

$(\mathfrak{V}_{d}f)(z)=\pi^{-\frac{d}{4}}\int_{\mathbf{R}^{d}}\exp\Big{(}-\frac{1}{2}(\langle z,z\rangle+|y|^{2})+2^{\frac{1}{2}}\langle z,y\rangle\Big{)}f(y)\,dy$ (1.23)

(cf. [5]). We note that if $f\in L^{2}(\mathbf{R}^{d})$, then the Bargmann transform $\mathfrak{V}_{d}f$ of $f$ is the entire function on $\mathbf{C}^{d}$ given by

$(\mathfrak{V}_{d}f)(z)=\int_{\mathbf{R}^{d}}\mathfrak{A}_{d}(z,y)f(y)\,dy,\quad\text{or}\quad(\mathfrak{V}_{d}f)(z)=\langle f,\mathfrak{A}_{d}(z,\,\cdot\,)\rangle,$ (1.24)

where the Bargmann kernel $\mathfrak{A}_{d}$ is given by

$\mathfrak{A}_{d}(z,y)=\pi^{-\frac{d}{4}}\exp\Big{(}-\frac{1}{2}(\langle z,z\rangle+|y|^{2})+2^{\frac{1}{2}}\langle z,y\rangle\Big{)}.$

Evidently, the right-hand side in (1.24) makes sense when $f\in\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$ and defines an element in $A(\mathbf{C}^{d})$, since $y\mapsto\mathfrak{A}_{d}(z,y)$ can be interpreted as an element in $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ with values in $A(\mathbf{C}^{d})$.

It was proved in [5] that $f\mapsto\mathfrak{V}_{d}f$ is a bijective and isometric map from $L^{2}(\mathbf{R}^{d})$ to the Hilbert space $A^{2}(\mathbf{C}^{d})\equiv B^{2}(\mathbf{C}^{d})\cap A(\mathbf{C}^{d})$, where $B^{2}(\mathbf{C}^{d})$ consists of all measurable functions $F$ on $\mathbf{C}^{d}$ such that

$\|F\|_{B^{2}}\equiv\Big{(}\int_{\mathbf{C}^{d}}|F(z)|^{2}d\mu(z)\Big{)}^{\frac{1}{2}}<\infty.$ (1.25)

Here $d\mu(z)=\pi^{-d}e^{-|z|^{2}}\,d\lambda(z)$, where $d\lambda(z)$ is the Lebesgue measure on $\mathbf{C}^{d}$. We recall that $A^{2}(\mathbf{C}^{d})$ and $B^{2}(\mathbf{C}^{d})$ are Hilbert spaces, whose scalar products are given by

$(F,G)_{B^{2}}\equiv\int_{\mathbf{C}^{d}}F(z)\overline{G(z)}\,d\mu(z),\quad F,G\in B^{2}(\mathbf{C}^{d}).$ (1.26)

If $F,G\in A^{2}(\mathbf{C}^{d})$, then we set $\|F\|_{A^{2}}=\|F\|_{B^{2}}$ and $(F,G)_{A^{2}}=(F,G)_{B^{2}}$.

In [5] it is also proved that

$\mathfrak{V}_{d}h_{\alpha}=e_{\alpha},\quad\text{where}\quad e_{\alpha}(z)\equiv\frac{z^{\alpha}}{\sqrt{\alpha!}},\quad z\in\mathbf{C}^{d}.$ (1.27)

In particular, the Bargmann transform maps the orthonormal basis $\\{h_{\alpha}\\}_{\alpha\in\mathbf{N}^{d}}$ in $L^{2}(\mathbf{R}^{d})$ bijectively into the orthonormal basis $\\{e_{\alpha}\\}_{\alpha\in\mathbf{N}^{d}}$ of monomials in $A^{2}(\mathbf{C}^{d})$.
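As a simple consistency check of (1.27), consider $\alpha=0$, so that $h_{0}(y)=\pi^{-\frac{d}{4}}e^{-\frac{1}{2}|y|^{2}}$. Completing the square in (1.23) and using the Gaussian integral $\int_{\mathbf{R}^{d}}e^{-|y|^{2}+2^{\frac{1}{2}}\langle z,y\rangle}\,dy=\pi^{\frac{d}{2}}e^{\frac{1}{2}\langle z,z\rangle}$ give

$(\mathfrak{V}_{d}h_{0})(z)=\pi^{-\frac{d}{2}}e^{-\frac{1}{2}\langle z,z\rangle}\int_{\mathbf{R}^{d}}e^{-|y|^{2}+2^{\frac{1}{2}}\langle z,y\rangle}\,dy=1=e_{0}(z).$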
For general $f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ we now set

$\mathfrak{V}_{d}f\equiv(T_{\mathcal{A}}\circ T_{\mathcal{H}}^{-1})f,\qquad f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d}),$ (1.28)

where $T_{\mathcal{H}}$ and $T_{\mathcal{A}}$ are given by (1.11) and (1.12). It follows from (1.27) that $\mathfrak{V}_{d}f$ in (1.28) agrees with $\mathfrak{V}_{d}f$ in (1.23) when $f\in L^{2}(\mathbf{R}^{d})$, and that this is the only way to extend the Bargmann transform to a continuous map from $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ to $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$. From these observations and definitions we get the following. The details are left for the reader.

###### Proposition 1.7.

Let $s\in\overline{\mathbf{R}_{\flat}}$. Then $\mathfrak{V}_{d}$ is a homeomorphism from $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ to $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$, and restricts to homeomorphisms from the spaces in (1.13) to the spaces in (1.14), respectively.

It follows that if $f,g\in L^{2}(\mathbf{R}^{d})$ and $F,G\in A^{2}(\mathbf{C}^{d})$, then

$(f,g)_{L^{2}(\mathbf{R}^{d})}=\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)\overline{c_{h}(g,\alpha)},\qquad(F,G)_{A^{2}(\mathbf{C}^{d})}=\sum_{\alpha\in\mathbf{N}^{d}}c(F,\alpha)\overline{c(G,\alpha)}.$ (1.29)

By the definitions we get the following proposition on duality for Pilipović spaces and their Bargmann images. The details are left for the reader.

###### Proposition 1.8.

Let $s_{1}\in\mathbf{R}_{\flat}$ and $s_{2}\in\overline{\mathbf{R}}_{\flat}$. Then the form $(\,\cdot\,,\,\cdot\,)_{L^{2}(\mathbf{R}^{d})}$ on $\mathcal{H}_{0}(\mathbf{R}^{d})\times\mathcal{H}_{0}(\mathbf{R}^{d})$ is uniquely extendable to sesqui-linear forms on

$\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})\times\mathcal{H}_{s_{2}}(\mathbf{R}^{d}),\qquad\mathcal{H}_{s_{2}}(\mathbf{R}^{d})\times\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d}),\qquad\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})\times\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$

and on

$\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})\times\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d}).$

The (strong) duals of $\mathcal{H}_{s_{2}}(\mathbf{R}^{d})$ and $\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$ are equal to $\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})$ and $\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})$, respectively, through the form $(\,\cdot\,,\,\cdot\,)_{L^{2}(\mathbf{R}^{d})}$. The same holds true if the spaces in (1.13) and the form $(\,\cdot\,,\,\cdot\,)_{L^{2}(\mathbf{R}^{d})}$ are replaced by corresponding spaces in (1.14) and the form $(\,\cdot\,,\,\cdot\,)_{A^{2}(\mathbf{C}^{d})}$, at each occurrence.

If $s\in\overline{\mathbf{R}_{\flat}}$, $f\in\mathcal{H}_{s}(\mathbf{R}^{d})$, $g\in\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$, $F\in\mathcal{A}_{s}(\mathbf{C}^{d})$ and $G\in\mathcal{A}_{s}^{\prime}(\mathbf{C}^{d})$, then $(f,g)_{L^{2}(\mathbf{R}^{d})}$ and $(F,G)_{A^{2}(\mathbf{C}^{d})}$ are defined by the formula (1.29). It follows that

$c_{h}(f,\alpha)=c(F,\alpha)\quad\text{when}\quad F=\mathfrak{V}_{d}f,\ G=\mathfrak{V}_{d}g$ (1.30)

holds for such choices of $f$ and $g$.

###### Remark 1.9.

In [27, 53], the spaces in (1.14), contained in $\mathcal{A}_{0,\flat_{1}}^{\prime}(\mathbf{C}^{d})=A(\mathbf{C}^{d})$, are identified as canonical spaces of analytic functions.
For example, it is there shown that if $\sigma_{1}>0$ and $\sigma_{2}>1$, then

$\mathcal{A}_{\flat_{\sigma_{1}}}(\mathbf{C}^{d})=\\{\,F\in A(\mathbf{C}^{d})\,;\,|F(z)|\lesssim e^{r|z|^{\frac{2\sigma_{1}}{\sigma_{1}+1}}}\ \text{for some $r>0$}\,\\}$

and

$\mathcal{A}_{\flat_{\sigma_{2}}}^{\prime}(\mathbf{C}^{d})=\\{\,F\in A(\mathbf{C}^{d})\,;\,|F(z)|\lesssim e^{r|z|^{\frac{2\sigma_{2}}{\sigma_{2}-1}}}\ \text{for every $r>0$}\,\\}.$

### 1.7. Fractional Fourier transforms

We recall that the (multiple ordered) fractional Fourier transform $\mathscr{F}_{\\!\varrho}$ with respect to $\varrho=(\varrho_{1},\dots,\varrho_{d})\in\mathbf{R}^{d}$ is the operator with kernel given by $K_{d,\varrho}$ in (0.4) and (0.5) (see e. g. [20]). Evidently,

$\mathscr{F}_{\\!\varrho}=\mathscr{F}_{\\!\varrho_{1}}\otimes\cdots\otimes\mathscr{F}_{\\!\varrho_{d}},\qquad\varrho=(\varrho_{1},\dots,\varrho_{d})\in\mathbf{R}^{d},$ (1.31)

and it follows that $\mathscr{F}_{\\!\varrho}$ makes sense as a homeomorphism on each of the spaces

$\mathcal{H}_{0,s}(\mathbf{R}^{d}),\quad\mathcal{H}_{s}(\mathbf{R}^{d}),\quad\mathscr{S}(\mathbf{R}^{d}),\quad\mathscr{S}^{\prime}(\mathbf{R}^{d}),\quad\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})\quad\text{and}\quad\mathcal{H}_{0,s}^{\prime}(\mathbf{R}^{d}),$

and as a unitary operator on $L^{2}(\mathbf{R}^{d})$ (see e. g. [5, 53]). For convenience we put $\mathscr{F}_{\\!\varrho_{0}}=\mathscr{F}_{\\!\varrho}$ when $\varrho=(\varrho_{0},\dots,\varrho_{0})\in\mathbf{R}^{d}$.

###### Remark 1.10.

Apart from the cases when the entries of $\varrho$ in $\mathscr{F}_{\\!\varrho}$ are integers, the formula for the fractional Fourier transform might not look particularly transparent, because the kernel $K_{d,\varrho}(\xi,x)$ is rather involved. However, the formula appears naturally through suitable changes of symplectic coordinates in quantum mechanics. In fact, the symplectic map which rotates $(x,\xi)$ in the phase space $\mathbf{R}^{2d}$ with angle $-\frac{\pi}{2}$ into $(\xi,-x)$ implies that the observables in quantum mechanics (which are operators) should be conjugated by the Fourier transform $\mathscr{F}=\mathscr{F}_{1}$ (see e. g. [15, 16, 18, 19, 20]). It might then be natural to define the fractional Fourier transform $\mathscr{F}_{\\!\varrho}$ to be the operator which should conjugate the quantum observables when the phase space is rotated with the angle $-\varrho\frac{\pi}{2}$. That is,

_Symplectic map_ | _Conjugation of quantum observables_
---|---
Rotation with angle $-\frac{\pi}{2}$ | $\mathscr{F}_{1}$
Rotation with angle $-\varrho\frac{\pi}{2}$ | $\mathscr{F}_{\\!\varrho}$

This gives a unique definition of $\mathscr{F}_{\\!\varrho}$, and after some computations it follows that the kernel of $\mathscr{F}_{\\!\varrho}$ is given by $K_{d,\varrho}$ in (0.4) and (0.5). An equivalent approach, which leads to the same formulae, consists of using the metaplectic representation of the symplectic group and considering metaplectic operators. (See e. g. [15, 16, 18, 19, 20] for more facts on metaplectic representations and corresponding operators.)

A slightly different, but equivalent, way to reach the fractional Fourier transform consists of investigating mapping properties of the Bargmann transform $\mathfrak{V}_{d}$.
It is proved already in [5] that the Bargmann image of $\widehat{f}$ is given by

$(\mathfrak{V}_{d}(\mathscr{F}f))(z)=(\mathfrak{V}_{d}\widehat{f})(z)=(\mathfrak{V}_{d}f)(-iz)=(\mathfrak{V}_{d}f)(e^{-i\frac{\pi}{2}}z).$

That is, the Bargmann image of $\widehat{f}$ is obtained by taking the corresponding image of $f$ and then rotating the argument by the angle $-\frac{\pi}{2}$. In particular, the Fourier transform can be evaluated as

$\mathscr{F}_{1}=\mathfrak{V}_{d}^{-1}\circ U_{1}\circ\mathfrak{V}_{d},\qquad(U_{1}F)(z)=F(e^{-i\frac{\pi}{2}}z).$

It might then be natural to define the fractional Fourier transform as

$\mathscr{F}_{\\!\varrho}=\mathfrak{V}_{d}^{-1}\circ U_{\varrho}\circ\mathfrak{V}_{d},\qquad(U_{\varrho}F)(z)=F(e^{-i\varrho\frac{\pi}{2}}z),$

and a straightforward computation shows that we obtain the same formula for the kernel $K_{d,\varrho}$ of $\mathscr{F}_{\\!\varrho}$ as before.

Due to the previous remark, the Bargmann image of $\mathscr{F}_{\\!\varrho}$ in (0.4), (0.5) and (1.31) takes the form

$(\mathfrak{V}_{d}(\mathscr{F}_{\\!\varrho}f))(z)=(\mathfrak{V}_{d}f)(e^{-\frac{i}{2}{\pi}\varrho_{1}}z_{1},\dots,e^{-\frac{i}{2}{\pi}\varrho_{d}}z_{d}),\qquad f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d}).$ (1.32)

Let $\phi(x)=\pi^{-\frac{d}{4}}e^{-\frac{1}{2}|x|^{2}}$. Then we recall that the Bargmann transform and the short-time Fourier transform can be linked as

$V_{\phi}f(x,\xi)=(2\pi)^{-\frac{d}{2}}e^{-\frac{1}{4}|z|^{2}}e^{-\frac{i}{2}\langle x,\xi\rangle}(\mathfrak{V}_{d}f)(2^{-\frac{1}{2}}\overline{z}),\qquad z=x+i\xi,\ x,\xi\in\mathbf{R}^{d}$ (1.33)

(see (1.28) in [52]). A combination of (1.32) and (1.33) gives that if $\varrho\in\mathbf{R}^{d}$,

$R_{1,\varrho_{j}}(x_{j},\xi_{j})=(\cos\theta_{j})x_{j}+(\sin\theta_{j})\xi_{j},$
$R_{2,\varrho_{j}}(x_{j},\xi_{j})=-(\sin\theta_{j})x_{j}+(\cos\theta_{j})\xi_{j},\qquad\theta_{j}=\textstyle{\frac{\pi\varrho_{j}}{2}},$ (1.34)
$R_{k,\varrho}(x,\xi)=\big{(}R_{k,\varrho_{1}}(x_{1},\xi_{1}),\dots,R_{k,\varrho_{d}}(x_{d},\xi_{d})\big{)},\qquad k=1,2,$
$A_{d,\varrho}(x,\xi)=(R_{1,\varrho}(x,\xi),R_{2,\varrho}(x,\xi)),$
$U_{d,\varrho}(x,\xi)=R_{1,\varrho}(x,\xi)+iR_{2,\varrho}(x,\xi),$

then

$(V_{\phi}(\mathscr{F}_{\\!\varrho}f))(x,\xi)=(2\pi)^{-\frac{d}{2}}e^{-\frac{1}{4}|z|^{2}}e^{-\frac{i}{2}\langle x,\xi\rangle}(\mathfrak{V}_{d}f)(2^{-\frac{1}{2}}\overline{U_{d,\varrho}(z)})=e^{i\frac{1}{4}\Phi_{\varrho}(x,\xi)}V_{\phi}f(A_{d,\varrho}(x,\xi)),\quad z=x+i\xi\in\mathbf{C}^{d}.$ (1.35)

For convenience we set

$A_{d,\varrho_{0}}(x,\xi)=A_{d,\varrho}(x,\xi)\quad\text{and}\quad U_{d,\varrho_{0}}(x,\xi)=U_{d,\varrho}(x,\xi)$

when

$\varrho=(\varrho_{0},\dots,\varrho_{0})\in\mathbf{R}^{d}\quad\text{and}\quad x,\xi\in\mathbf{R}^{d}.$

It is then clear that (1.35) still holds true when $\mathscr{F}_{\\!\varrho}$ is the fractional Fourier transform on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ of order $\varrho\in\mathbf{R}$.
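To illustrate (1.32), let $\varrho=(2,\dots,2)$. Since $\mathfrak{A}_{d}(z,-y)=\mathfrak{A}_{d}(-z,y)$, we have $(\mathfrak{V}_{d}\check{f})(z)=(\mathfrak{V}_{d}f)(-z)$ for $\check{f}(x)=f(-x)$, and hence

$(\mathfrak{V}_{d}(\mathscr{F}_{\\!2}f))(z)=(\mathfrak{V}_{d}f)(e^{-i\pi}z)=(\mathfrak{V}_{d}f)(-z)=(\mathfrak{V}_{d}\check{f})(z),$

so that $\mathscr{F}_{\\!2}f=\check{f}$. This agrees with the classical fact that the square of the Fourier transform is the parity operator.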
As in [53] we observe that if, more generally, $\varrho\in\mathbf{C}^{d}$, then the map

$f\mapsto\big{(}z\mapsto(\mathfrak{V}_{d}f)(e^{-\frac{i}{2}\pi\varrho_{1}}z_{1},\dots,e^{-\frac{i}{2}\pi\varrho_{d}}z_{d})\big{)}$

makes sense as a homeomorphism from $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ into $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$. In similar ways as in [53], we define the fractional Fourier transform $\mathscr{F}_{\\!\varrho}$ by (1.32) when $\varrho\in\mathbf{C}^{d}$. It then follows that $\mathscr{F}_{\\!\varrho}$ is still continuous on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$.

### 1.8. Strichartz estimates

We recall that for a linear operator $R$ acting on suitable functions or (ultra-)distributions on $\mathbf{R}^{d}$, Strichartz estimates appear when deducing properties of solutions to Cauchy problems like the generalized inhomogeneous Schrödinger equation

$\begin{cases}i\partial_{t}u-Ru=F,\\\\ u(0,x)=u_{0}(x),\qquad(t,x)\in I\times\mathbf{R}^{d}.\end{cases}$ (1.36)

(See e. g. [50] and the references therein.) Here $I=[0,\infty)$ or $I=[0,T]$ for some $T>0$, $F$ is a suitable function or (ultra-)distribution on $I\times\mathbf{R}^{d}$, $R$ is a linear operator acting on functions or distributions on $\mathbf{R}^{d}$, and $u_{0}$ is a suitable function or (ultra-)distribution on $\mathbf{R}^{d}$. The solution of (1.36) is formally given by

$u(t,x)=(e^{-itR}u_{0})(x)-i\int_{0}^{t}(e^{-i(t-s)R}F(s,\,\cdot\,))(x)\,ds.$ (1.37)

In particular it follows that continuity properties of the propagator

$(E_{R}f)(t,x)\equiv(e^{-itR}f)(x),\quad(t,x)\in I\times\mathbf{R}^{d},$ (1.38)

as well as of the operator

$(S_{1,R}F)(t,x)=\int_{0}^{t}(e^{-i(t-s)R}F(s,\,\cdot\,))(x)\,ds,\qquad(t,x)\in I\times\mathbf{R}^{d},$ (1.39)

are essential for finding estimates for solutions to (1.36). We observe that the $L^{2}(I\times\mathbf{R}^{d})$ adjoint $E_{R}^{*}$ of $E_{R}$ is given by

$(E_{R}^{*}F)(x)=\int_{I}(e^{isR}F(s,\,\cdot\,))(x)\,ds,\quad x\in\mathbf{R}^{d},$ (1.40)

and that the composition $E_{R}\circ E_{R}^{*}$ of $E_{R}$ and $E_{R}^{*}$ is the operator $S_{2,R}$, similar to $S_{1,R}$, and given by

$(S_{2,R}F)(t,x)=\int_{I}(e^{-i(t-s)R}F(s,\,\cdot\,))(x)\,ds,\quad(t,x)\in I\times\mathbf{R}^{d}.$ (1.41)

We recall that continuity properties of $E_{R}$ (or $E_{R}^{*}$) are strongly linked to continuity properties for $S_{2,R}$ (see [30]). Estimates for the operator $E_{R}$ in (1.38) are called _homogeneous Strichartz estimates_, while estimates for $S_{1,R}$ in (1.39), or even for $S_{2,R}$ in (1.41), are called _inhomogeneous Strichartz estimates_.
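We note that, at least formally, (1.37) is Duhamel's formula for (1.36). In fact, a direct differentiation gives

$i\partial_{t}u(t,\,\cdot\,)=Re^{-itR}u_{0}+F(t,\,\cdot\,)-iR\int_{0}^{t}e^{-i(t-s)R}F(s,\,\cdot\,)\,ds=Ru(t,\,\cdot\,)+F(t,\,\cdot\,),$

and evidently $u(0,\,\cdot\,)=u_{0}$.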
In our situation, the operator $R$ is given by the operator $H_{x,\varrho,c}$ for some $c\in\mathbf{C}$ and $\varrho\in\mathbf{R}^{d}$ or $\varrho\in\mathbf{C}^{d}$, and for such choice of $R$, we put $E=E_{R}$ and $S_{j}=S_{j,R}$, $j=1,2$. Hence

$(Ef)(t,x)\equiv(e^{-itH_{x,\varrho,c}}f)(x),\quad(t,x)\in I\times\mathbf{R}^{d},$ (1.38)′
$(E^{*}F)(x)=\int_{I}(e^{isH_{x,\varrho,c}}F(s,\,\cdot\,))(x)\,ds,\quad x\in\mathbf{R}^{d},$ (1.40)′
$(S_{1}F)(t,x)=\int_{0}^{t}(e^{-i(t-s)H_{x,\varrho,c}}F(s,\,\cdot\,))(x)\,ds,\quad(t,x)\in I\times\mathbf{R}^{d},$ (1.39)′

and

$(S_{2}F)(t,x)=\int_{I}(e^{-i(t-s)H_{x,\varrho,c}}F(s,\,\cdot\,))(x)\,ds,\quad(t,x)\in I\times\mathbf{R}^{d}.$ (1.41)′

In Section 4 we deduce continuity properties for the operator $E$ when acting on modulation spaces, and for $S_{1}$ and $S_{2}$ when acting on Lebesgue spaces with values in modulation spaces.

## 2. Powers of generalized harmonic oscillator propagators on Pilipović spaces

In this section we show that powers of harmonic oscillators, $H_{x,c}$, or more generally $H_{x,\varrho,c}$, are continuous on Pilipović spaces. If in addition $H_{x,\varrho,c}$ is injective, then we show that powers of $H_{x,\varrho,c}$ are in fact homeomorphisms on Pilipović spaces and their distribution spaces. We also consider harmonic oscillator propagators and deduce homeomorphism properties of such operators on Pilipović spaces. In the last part we show that powers of harmonic oscillators are continuous on Hilbert modulation spaces of the form $M^{2,2}_{(\vartheta_{r})}(\mathbf{R}^{d})$, where $\vartheta_{r}(x,\xi)=\langle(x,\xi)\rangle^{r}$.

### 2.1. Continuity of powers of $H_{x,c}$ and their propagators

For any $c\in\mathbf{C}$, it follows that $H_{x,c}$ is continuous on $\mathcal{H}_{0}(\mathbf{R}^{d})$, and that

$H_{x,c}f(x)=\sum_{\alpha\in\mathbf{N}^{d}}\left(2|\alpha|+d+c\right)c_{h}(f,\alpha)h_{\alpha}(x),$ (2.1)

when $f\in\mathcal{H}_{0}(\mathbf{R}^{d})$ is given by (1.8). By duality it follows that $H_{x,c}$ on $\mathcal{H}_{0}(\mathbf{R}^{d})$ is uniquely extendable to a continuous map on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$, and that (2.1) still holds true when $f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ is given by (1.8). In the same way it follows that if $r\geq 0$ is real, then

$H_{x,c}^{r}\,:\,\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)h_{\alpha}\mapsto\sum_{\alpha\in\mathbf{N}^{d}}(2|\alpha|+d+c)^{r}c_{h}(f,\alpha)h_{\alpha}$ (2.2)

and

$e^{\zeta H_{x,c}^{r}}\,:\,\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)h_{\alpha}\mapsto\sum_{\alpha\in\mathbf{N}^{d}}e^{\zeta(2|\alpha|+d+c)^{r}}c_{h}(f,\alpha)h_{\alpha}$ (2.3)

are continuous on $\mathcal{H}_{0}(\mathbf{R}^{d})$, and uniquely extendable to continuous mappings on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$. More generally, suppose that

$c\in\mathbf{C}\setminus\\{\,-2n-d\,;\,n\in\mathbf{N}\,\\}\quad\text{and}\quad r\in\mathbf{R}$ (2.4)

or

$c\in\mathbf{C}\quad\text{and}\quad r\in\overline{\mathbf{R}}_{+}.$ (2.5)

Then the operators (2.2) and (2.3) are continuous on $\mathcal{H}_{0}(\mathbf{R}^{d})$ and on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$.
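For instance, if $f=h_{\alpha}$, then (2.2) and (2.3) reduce to

$H_{x,c}^{r}h_{\alpha}=(2|\alpha|+d+c)^{r}h_{\alpha}\quad\text{and}\quad e^{\zeta H_{x,c}^{r}}h_{\alpha}=e^{\zeta(2|\alpha|+d+c)^{r}}h_{\alpha}.$

In particular, if $c=0$ and $\zeta\in i\mathbf{R}$, then the coefficients $e^{\zeta(2|\alpha|+d)^{r}}$ have modulus one, while $\operatorname{Re}(\zeta)>0$ gives coefficients which grow rapidly with $|\alpha|$. This dichotomy lies behind Propositions 2.1 and 2.2 below.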
The following result extends these continuity properties to other Pilipović spaces.

###### Proposition 2.1.

Let $\zeta\in\mathbf{C}$, $r\in\mathbf{R}$ and $c\in\mathbf{C}$ be as in (2.4) or as in (2.5), and let $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $0<s_{1}\leq\frac{1}{2r}$ and $s_{2}<\frac{1}{2r}$. Then the following is true:

(1) the map (2.2) on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ restricts to continuous mappings on $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on the spaces in (1.13). If (2.4) holds true, then these mappings are homeomorphisms;

(2) the map (2.3) on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ restricts to homeomorphisms on

$\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d}),\quad\mathcal{H}_{s_{2}}(\mathbf{R}^{d}),\quad\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})\quad\text{and on}\quad\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d}).$ (2.6)

If in addition $\zeta\in i\mathbf{R}$, then the map (2.3) is homeomorphic on $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and the spaces in (1.13).

###### Proof.

The assertion (1) follows from (2.2), the definitions of the spaces in (1.13) and the fact that

$\\{c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}\mapsto\\{(2|\alpha|+d+c)^{r}c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}$ (2.7)

is continuous on the sequence spaces in Definition 1.1 (3). The homeomorphism property in the case when (2.4) holds follows from the fact that the map (2.7) is then a continuous bijection. In the same way, (2) follows from (2.3) and the fact that

$\\{c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}\mapsto\\{e^{\zeta(2|\alpha|+d+c)^{r}}c(\alpha)\\}_{\alpha\in\mathbf{N}^{d}}$ (2.8)

is a homeomorphism on the sequence spaces which correspond to the spaces in (2.6). ∎

We also have the following negative result concerning continuity of the operator in (2.3).

###### Proposition 2.2.

Let $\zeta\in\mathbf{C}$ be such that $\operatorname{Re}(\zeta)>0$, $r>0$, $c\in\mathbf{C}$, and let $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $s_{1}>\frac{1}{2r}$ and $s_{2}\geq\frac{1}{2r}$. Then the following is true:

(1) the map (2.3) is discontinuous from $\mathscr{S}(\mathbf{R}^{d})$ to $\mathscr{S}^{\prime}(\mathbf{R}^{d})$;
(2) the map (2.3) is discontinuous from $\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$ to $\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})$;
(3) the map (2.3) is discontinuous from $\mathcal{H}_{s_{2}}(\mathbf{R}^{d})$ to $\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})$.

###### Proof.

We only prove (1) and (2), and only in the case when $s_{1}\in\mathbf{R}$ and $c=0$. The other cases follow by similar arguments and are left for the reader. Since $e^{itH_{x}^{r}}$ is a homeomorphism on all involved spaces when $t$ is real, we may assume that $\zeta>0$ is real.

Let

$f=\sum_{\alpha\in\mathbf{N}^{d}}e^{-(\log(1+|\alpha|))^{2}}h_{\alpha},$

i. e. the Hermite coefficients of $f$ are given by $c_{h}(f,\alpha)=e^{-(\log(1+|\alpha|))^{2}}$. Since $e^{-(\log(1+|\alpha|))^{2}}\lesssim\langle\alpha\rangle^{-N}$ for every $N\geq 1$, it follows that $f\in\mathscr{S}(\mathbf{R}^{d})$ in view of (1.17). On the other hand, by (2.3) we get

$c_{h}(e^{\zeta H_{x}^{r}}f,\alpha)=e^{\zeta(2|\alpha|+d)^{r}}e^{-(\log(1+|\alpha|))^{2}}\gtrsim e^{\zeta(|\alpha|+d)^{r}}\gtrsim\langle\alpha\rangle^{N},$

for every $N\geq 1$. Hence (1.18) shows that $e^{\zeta H_{x}^{r}}f\notin\mathscr{S}^{\prime}(\mathbf{R}^{d})$, while $f\in\mathscr{S}(\mathbf{R}^{d})$. This gives (1).

In order to prove (2), let $s\in\mathbf{R}$ be such that $\frac{1}{2r}<s<s_{1}$, and let $f=\sum_{\alpha\in\mathbf{N}^{d}}e^{-(1+|\alpha|)^{\frac{1}{2s}}}h_{\alpha},$ i. e.
the Hermite coefficients of $f$ are given by $c_{h}(f,\alpha)=e^{-(1+|\alpha|)^{\frac{1}{2s}}}$. Since $e^{-(1+|\alpha|)^{\frac{1}{2s}}}\lesssim e^{-r_{0}(1+|\alpha|)^{\frac{1}{2s_{1}}}}$ for every $r_{0}>0$, it follows that $f\in\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$ in view of the definition of $\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$. On the other hand, by (2.3) we get

$c_{h}(e^{\zeta H_{x}^{r}}f,\alpha)=e^{\zeta(2|\alpha|+d)^{r}}e^{-(1+|\alpha|)^{\frac{1}{2s}}}\gtrsim e^{\zeta(|\alpha|+d)^{r}}\gtrsim e^{r_{0}(1+|\alpha|)^{\frac{1}{2s}}}\gtrsim e^{r_{0}(1+|\alpha|)^{\frac{1}{2s_{1}}}},$

for every $r_{0}>0$. Hence $e^{\zeta H_{x}^{r}}f\notin\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})$, due to the definitions, while $f\in\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$. This gives (2), and the result follows. ∎

### 2.2. Extensions to powers of $H_{x,\varrho,c}$ and their propagators

The previous results can be extended to allow $H_{x,\varrho,c}$ in place of $H_{x,c}$, for more general choices of $\varrho\in\mathbf{C}^{d}$. We notice that

$H_{x,\varrho,c}^{r}\,:\,\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)h_{\alpha}\mapsto\sum_{\alpha\in\mathbf{N}^{d}}(2\langle\alpha,\varrho\rangle+\operatorname{sum}(\varrho)+c)^{r}c_{h}(f,\alpha)h_{\alpha}$ (2.2)′

and

$e^{\zeta H_{x,\varrho,c}^{r}}\,:\,\sum_{\alpha\in\mathbf{N}^{d}}c_{h}(f,\alpha)h_{\alpha}\mapsto\sum_{\alpha\in\mathbf{N}^{d}}e^{\zeta(2\langle\alpha,\varrho\rangle+\operatorname{sum}(\varrho)+c)^{r}}c_{h}(f,\alpha)h_{\alpha}$ (2.3)′

when $f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$, and

$c\in\mathbf{C}\setminus\\{\,-2\langle\alpha,\varrho\rangle-\operatorname{sum}(\varrho)\,;\,\alpha\in\mathbf{N}^{d}\,\\}\quad\text{and}\quad r\in\mathbf{R}$ (2.4)′

or

$c\in\mathbf{C}\quad\text{and}\quad r\in\overline{\mathbf{R}}_{+}.$ (2.5)′

Here and in what follows we let

$\operatorname{sum}(\varrho)=\sum_{j=1}^{d}\varrho_{j},\quad\text{when}\quad\varrho=(\varrho_{1},\dots,\varrho_{d})\in\mathbf{C}^{d}.$

By similar arguments as in the proof of Proposition 2.1 we get the following extension. The details are left for the reader.

###### Proposition 2.1′.

Let $\zeta\in\mathbf{C}$, $r\in\mathbf{R}$, $\varrho\in\mathbf{C}^{d}$ and $c\in\mathbf{C}$ be as in (2.4)′ or as in (2.5)′, and let $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $0<s_{1}\leq\frac{1}{2r}$ and $s_{2}<\frac{1}{2r}$. Then the following is true:

(1) the map (2.2)′ on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ restricts to continuous mappings on $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on the spaces in (1.13). If in addition (2.4)′ holds true, then these mappings are homeomorphisms;
(2) the map (2.3)′ on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ restricts to homeomorphisms on

$\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d}),\quad\mathcal{H}_{s_{2}}(\mathbf{R}^{d}),\quad\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})\quad\text{and on}\quad\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d}).$ (2.6)′

If in addition $\zeta\varrho_{j}^{r}\in i\mathbf{R}$ for every $j$, then the map (2.3)′ is homeomorphic on $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and the spaces in (1.13).

In the same way, similar arguments as in the proof of Proposition 2.2 give the following extension. The details are left for the reader.

###### Proposition 2.2′.

Let $\zeta\in\mathbf{C}$, $r>0$ and $\varrho\in\mathbf{C}^{d}$ be such that $\operatorname{Re}(\zeta\varrho_{j}^{r})>0$ for every $j=1,\dots,d$, let $c\in\mathbf{C}$, and let $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $s_{1}>\frac{1}{2r}$ and $s_{2}\geq\frac{1}{2r}$. Then the following is true:

(1) the map (2.3)′ is discontinuous from $\mathscr{S}(\mathbf{R}^{d})$ to $\mathscr{S}^{\prime}(\mathbf{R}^{d})$;
(2) the map (2.3)′ is discontinuous from $\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})$ to $\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})$;
(3) the map (2.3)′ is discontinuous from $\mathcal{H}_{s_{2}}(\mathbf{R}^{d})$ to $\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d})$.

### 2.3. Powers of harmonic oscillators on Hilbert modulation spaces

Let $r\in\mathbf{R}$, $p,q\in(0,\infty]$, $s>1$, $\omega\in\mathscr{P}_{E,s}(\mathbf{R}^{2d})$ and $\vartheta_{r}(x,\xi)=\langle(x,\xi)\rangle^{r}$. Then it follows from [1, Proposition 1.34′] that $H_{x}^{N}$ is homeomorphic from $M^{p,q}_{(\omega\vartheta_{N})}(\mathbf{R}^{d})$ to $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ when $N$ is an integer. So far, we are not able to prove any such result when the integer $N$ is replaced by a general real number for such general modulation spaces. On the other hand, we have the following for certain Hilbert modulation spaces. See also [11, 21] for similar results. Here we restrict ourselves to weights which are rotationally invariant in each phase space variable, or complex variable. That is, we assume that

$\omega(x,\xi)=\omega_{0}(\rho),\qquad\rho_{j}=x_{j}^{2}+\xi_{j}^{2},$ (2.9)

for some positive function $\omega_{0}$ on ${\overline{\mathbf{R}}}_{+}^{d}$.

###### Theorem 2.3.

Let $r,t\in\mathbf{R}$, $\vartheta_{r}(x,\xi)=\langle(x,\xi)\rangle^{r}$ and suppose that $\omega\in\mathscr{P}_{E}(\mathbf{R}^{2d})$ satisfies (2.9) for some positive function $\omega_{0}$ on ${\overline{\mathbf{R}}}_{+}^{d}$. Then the following is true:

(1) $H_{x}^{r}$ on $\mathcal{H}_{0}(\mathbf{R}^{d})$ is uniquely extendable to a homeomorphism from $M^{2}_{(\omega)}(\mathbf{R}^{d})$ to $M^{2}_{(\omega/\vartheta_{r})}(\mathbf{R}^{d})$, with bounds which are independent of $r$;
(2) $e^{itH_{x}^{r}}$ on $\mathcal{H}_{0}(\mathbf{R}^{d})$ is uniquely extendable to a homeomorphism on $M^{2}_{(\omega)}(\mathbf{R}^{d})$, with bounds which are independent of $t$ and $r$.
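For instance, the weights $\vartheta_{s}$, $s\in\mathbf{R}$, satisfy (2.9), since

$\vartheta_{s}(x,\xi)=(1+\rho_{1}+\cdots+\rho_{d})^{\frac{s}{2}},\qquad\rho_{j}=x_{j}^{2}+\xi_{j}^{2}.$

Hence Theorem 2.3 (2) shows that the propagators $e^{itH_{x}^{r}}$, $t,r\in\mathbf{R}$, constitute a uniformly bounded family of homeomorphisms on each of the spaces $M^{2}_{(\vartheta_{s})}(\mathbf{R}^{d})$, $s\in\mathbf{R}$.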
The following lemma shows that the $M^{2}_{(\omega)}(\mathbf{R}^{d})$ norm, for weights of the form (2.9), can be expressed in terms of Hermite series expansions of the involved functions and distributions. Here we extract the weight function $\nu_{\omega}\colon\mathbf{N}^{d}\to\mathbf{R}_{+}$ from $\omega$ by the formula

$\nu_{\omega}(\alpha)\equiv\left(\alpha!^{-1}\int_{\mathbf{R}^{d}_{+}}r^{\alpha}\omega_{0}(r)^{2}e^{-(r_{1}+\cdots+r_{d})}\,dr\right)^{\frac{1}{2}},\qquad\alpha\in\mathbf{N}^{d}.$ (2.10)

###### Lemma 2.4.

Let $\omega\in\mathscr{P}_{E}(\mathbf{R}^{2d})$ be such that (2.9) holds for some positive function $\omega_{0}$ on ${\overline{\mathbf{R}}}_{+}^{d}$. Also let $\nu_{\omega}$ be given by (2.10). Then $M^{2}_{(\omega)}(\mathbf{R}^{d})$ consists of all $f\in\mathscr{S}^{\prime}(\mathbf{R}^{d})$ such that

$\|f\|_{[\omega]}\equiv\left(\sum_{\alpha\in\mathbf{N}^{d}}|c_{h}(f,\alpha)\nu_{\omega}(\alpha)|^{2}\right)^{\frac{1}{2}}$ (2.11)

is finite. The Hilbert norm $\|\,\cdot\,\|_{[\omega]}$ is equivalent to $\|\,\cdot\,\|_{M^{2}_{(\omega)}}$.

###### Proof.

The result follows by straightforward applications of [53, Theorem 3.5] and the fact that the Bargmann image of $M^{2}_{(\omega)}(\mathbf{R}^{d})$ equals $A^{2}_{(\omega)}(\mathbf{C}^{d})$. The details are left for the reader. ∎

###### Proof of Theorem 2.3.

Let $\|\,\cdot\,\|_{[\omega]}$ be the norm given in (2.11). Then (2.2) shows that

$c_{h}(H_{x}^{r}f,\alpha)=(2|\alpha|+d)^{r}c_{h}(f,\alpha)\quad\text{and}\quad c_{h}(e^{itH_{x}^{r}}f,\alpha)=e^{it(2|\alpha|+d)^{r}}c_{h}(f,\alpha).$

Since $|e^{it(2|\alpha|+d)^{r}}|=1$ and $\omega$ satisfies (2.9), the assertion follows from Lemma 2.4. ∎

###### Remark 2.5.

So far we are not able to extend Theorem 2.3 to more general modulation spaces $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$, even under the assumption (2.9) on the weight $\omega$. On the other hand, suppose that $\omega$ is the same as in Theorem 2.3. Then using standard embeddings for modulation spaces like

$M^{p,q}_{(\omega\vartheta_{\tau_{1}})}(\mathbf{R}^{d})\hookrightarrow M^{2}_{(\omega)}(\mathbf{R}^{d})\hookrightarrow M^{p,q}_{(\omega/\vartheta_{\tau_{2}})}(\mathbf{R}^{d}),$ (2.12)
$\tau_{1}\geq d\max\Big{(}0,\frac{1}{2}-\frac{1}{p},\frac{1}{2}-\frac{1}{q}\Big{)},\qquad\tau_{2}\geq d\max\Big{(}0,\frac{1}{p}-\frac{1}{2},\frac{1}{q}-\frac{1}{2}\Big{)},$

with strict inequalities if $p\neq 2$ or $q\neq 2$, it follows that

$H_{x}^{r_{0}}e^{itH_{x}^{r}}:M^{p,q}_{(\omega)}(\mathbf{R}^{d})\to M^{p,q}_{(\omega/\vartheta_{\theta})}(\mathbf{R}^{d}),\quad\theta=r_{0}+\tau_{1}+\tau_{2}$ (2.13)

is continuous, with bounds which are independent of $r$. In fact, by Theorem 2.3 and (2.12) we have

$H_{x}^{r_{0}}e^{itH_{x}^{r}}M^{p,q}_{(\omega)}(\mathbf{R}^{d})\hookrightarrow H_{x}^{r_{0}}e^{itH_{x}^{r}}M^{2}_{(\omega/\vartheta_{\tau_{1}})}(\mathbf{R}^{d})\hookrightarrow M^{2}_{(\omega/\vartheta_{r_{0}+\tau_{1}})}(\mathbf{R}^{d})\hookrightarrow M^{p,q}_{(\omega/\vartheta_{\theta})}(\mathbf{R}^{d}).$

## 3. General harmonic oscillator propagators and fractional Fourier transforms

In this section we prove that generalized harmonic oscillator propagators of the form $e^{irH_{x,\varrho,c}}$ are essentially fractional Fourier transforms of multiple orders. We use such identities to link some results in [8] with results in [53], especially when such operators act on (weighted) modulation spaces.

### 3.1. Identifications between fractional Fourier transforms and harmonic oscillator type propagators

Let $\varrho\in\mathbf{C}^{d}$ and $c\in\mathbf{C}$.
Then the operator $H_{x,\varrho,c}$ is transformed into the operator

$2H_{\varrho,c,\mathfrak{V}}=2H_{\varrho,\mathfrak{V}}+\operatorname{sum}(\varrho)+c,\quad\text{where}\quad H_{\varrho,\mathfrak{V}}=\sum_{j=1}^{d}\varrho_{j}z_{j}\partial_{z_{j}}.$

That is,

$\mathfrak{V}_{d}\circ H_{x,\varrho,c}=(2H_{\varrho,\mathfrak{V}}+\operatorname{sum}(\varrho)+c)\circ\mathfrak{V}_{d},\quad H_{\varrho,\mathfrak{V}}=\sum_{j=1}^{d}\varrho_{j}z_{j}\partial_{z_{j}},$ (3.1)

or equivalently,

$\mathfrak{V}_{d}H_{x,\varrho,c}f=(2H_{\varrho,\mathfrak{V}}+\operatorname{sum}(\varrho)+c)F,\qquad F=\mathfrak{V}_{d}f.$

For convenience we put

$H_{\varrho_{0},c,\mathfrak{V}}=H_{\varrho,c,\mathfrak{V}}\quad\text{and}\quad H_{\varrho_{0},\mathfrak{V}}=H_{\varrho,\mathfrak{V}},$

when $\varrho=(\varrho_{0},\dots,\varrho_{0})\in\mathbf{C}^{d}$, and we put $H_{\mathfrak{V}}=H_{1,\mathfrak{V}}$. In particular, it follows from (3.1) that $\frac{1}{2}H_{x,-d}$ is transformed into $H_{\mathfrak{V}}$.

It also follows from (3.1) that if $r,\zeta\in\mathbf{C}$ satisfy $\operatorname{Re}(r)\geq 0$, then

$\mathfrak{V}_{d}\circ e^{\zeta H_{x,\varrho,c}^{r}}=e^{\zeta(2H_{\varrho,\mathfrak{V}}+\operatorname{sum}(\varrho)+c)^{r}}\circ\mathfrak{V}_{d},$ (3.2)

as continuous operators from $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$ to $\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d})$. By straightforward computations we get

$H_{\varrho,\mathfrak{V}}e_{\alpha}(z)=\langle\varrho,\alpha\rangle e_{\alpha}(z)$

(see e. g. [5]). This implies that

$e^{\zeta H_{\varrho,\mathfrak{V}}}e_{\alpha}(z)=e^{\zeta\langle\varrho,\alpha\rangle}e_{\alpha}(z)=e_{\alpha}(e^{\zeta\varrho_{1}}z_{1},\dots,e^{\zeta\varrho_{d}}z_{d}),$ (3.3)

for every $\zeta\in\mathbf{C}$, which gives

$(e^{\zeta H_{\varrho,\mathfrak{V}}}F)(z)=F(e^{\zeta\varrho_{1}}z_{1},\dots,e^{\zeta\varrho_{d}}z_{d}),\qquad F\in\mathcal{A}_{0}^{\prime}(\mathbf{C}^{d}).$ (3.4)

###### Remark 3.1.

We observe that the map

$F(z)\mapsto(e^{\zeta H_{\varrho,\mathfrak{V}}}F)(z)=F(e^{\zeta\varrho_{1}}z_{1},\dots,e^{\zeta\varrho_{d}}z_{d})$

is a continuous bijection on the spaces

$\mathcal{A}_{s_{1}}(\mathbf{C}^{d}),\quad\mathcal{A}_{0,s_{2}}(\mathbf{C}^{d}),\quad\mathcal{A}_{s_{1}}^{\prime}(\mathbf{C}^{d}),\quad\mathcal{A}_{0,s_{2}}^{\prime}(\mathbf{C}^{d}),$ (3.5)

when $s_{1}<\frac{1}{2}$ and $s_{2}\leq\frac{1}{2}$. See [53] for the definition and some characterizations of the spaces in (3.5). This is also a consequence of Proposition 2.1.

We recall that the fractional Fourier transform $\mathscr{F}_{\\!\varrho}$ of (multiple) order $\varrho\in\mathbf{C}^{d}$ satisfies

$(\mathfrak{V}_{d}(\mathscr{F}_{\\!\varrho}f))(z)=(\mathfrak{V}_{d}f)(e^{-i\frac{\pi\varrho_{1}}{2}}z_{1},\dots,e^{-i\frac{\pi\varrho_{d}}{2}}z_{d}),\qquad f\in\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d}).$ (3.6)

Hence, a combination of Proposition 2.1′, (3.1), (3.2) and (3.4) with

$e^{-iH_{x,\varrho,c_{1}}}=e^{i(c_{2}-c_{1})}e^{-iH_{x,\varrho,c_{2}}},\qquad c_{1},c_{2}\in\mathbf{C}$

gives the following extension of results given in [42, p. 161].

###### Theorem 3.2.

Let $\varrho\in\mathbf{C}^{d}$, $c\in\mathbf{C}$ and $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $0<s_{1}\leq\frac{1}{2}$ and $s_{2}<\frac{1}{2}$. Then

$e^{-i\frac{\pi}{4}H_{x,\varrho,c}}=e^{-i\frac{\pi}{4}(\operatorname{sum}(\varrho)+c)}\mathscr{F}_{\\!\varrho}$ (3.7)

as operators on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$. The operators in (3.7) restrict to homeomorphisms on the spaces in (2.6).
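As a consistency check of (3.7), let $\varrho=(1,\dots,1)$ and $c=0$, so that $H_{x,\varrho,c}=H_{x}$ and $\operatorname{sum}(\varrho)=d$. On one hand, (2.3) gives

$e^{-i\frac{\pi}{4}H_{x}}h_{\alpha}=e^{-i\frac{\pi}{4}(2|\alpha|+d)}h_{\alpha}=e^{-i\frac{\pi d}{4}}(-i)^{|\alpha|}h_{\alpha}.$

On the other hand, $\mathscr{F}_{1}h_{\alpha}=(-i)^{|\alpha|}h_{\alpha}$, since the Hermite functions are eigenfunctions of the Fourier transform. Hence both sides of (3.7) agree on the basis $\\{h_{\alpha}\\}_{\alpha\in\mathbf{N}^{d}}$.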
Evidently, (3.7) is the same as

$\mathscr{F}_{\\!\varrho}=e^{i\frac{\pi}{4}(\operatorname{sum}(\varrho)+c)}e^{-i\frac{\pi}{4}H_{x,\varrho,c}},$ (3.7)′

and can be used to transfer properties between fractional Fourier transforms and harmonic oscillator propagators. By combining with Theorem 3.2, we get the following extensions of Propositions 0.2 and 0.3 from the introduction. The details are left for the reader.

###### Proposition 0.2′.

Let $I_{d}=\\{1,\dots,d\\}$, $\varrho\in\mathbf{C}^{d}$ and $s\in\overline{\mathbf{R}}_{\flat}$. Then the following is true:

(1) if $s<\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are homeomorphisms on $\mathcal{H}_{s}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$;

(2) if $\operatorname{Im}(\varrho_{j})\leq 0$ for every $j\in I_{d}$, $\operatorname{Im}(\varrho_{j_{0}})<0$ for some $j_{0}\in I_{d}$ and $s\geq\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are continuous injections but not surjections on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$;

(3) if $\operatorname{Im}(\varrho_{j})=0$ for every $j\in I_{d}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are homeomorphisms on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$;

(4) if $\operatorname{Im}(\varrho_{j})>0$ for some $j\in I_{d}$ and $s\geq\frac{1}{2}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are discontinuous on $\mathcal{H}_{s}(\mathbf{R}^{d})$, $\mathscr{S}(\mathbf{R}^{d})$, $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on $\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d})$.

The same holds true with $s>\frac{1}{2}$, $s\leq\frac{1}{2}$ and $\mathcal{H}_{0,s}$ in place of $s\geq\frac{1}{2}$, $s<\frac{1}{2}$ and $\mathcal{H}_{s}$ at each occurrence.

###### Proposition 0.3′.

Let $I_{d}=\\{1,\dots,d\\}$ and $\varrho\in\mathbf{C}^{d}$. Then the following is true:

(1) if $\operatorname{Im}(\varrho_{j})<0$ for every $j\in I_{d}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are continuous from $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}(\mathbf{R}^{d})$, and

$\mathscr{F}_{\\!\varrho}(\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d}))=e^{-i\frac{\pi}{4}H_{x,\varrho}}(\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d}))\subsetneq\mathcal{S}_{1/2}(\mathbf{R}^{d})\text{;}$

(2) if $\operatorname{Im}(\varrho_{j})>0$ for every $j\in I_{d}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are discontinuous from $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$, and

$\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})\subsetneq\mathscr{F}_{\\!\varrho}(\mathcal{S}_{1/2}(\mathbf{R}^{d}))=e^{-i\frac{\pi}{4}H_{x,\varrho}}(\mathcal{S}_{1/2}(\mathbf{R}^{d}))\subsetneq\mathcal{H}_{0,1/2}^{\prime}(\mathbf{R}^{d})\text{;}$
(3) if $\operatorname{Im}(\varrho_{j})>0$ for some $j\in I_{d}$, then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho}}$ are discontinuous from $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$, and

$\mathscr{F}_{\\!\varrho}f\in\mathcal{H}_{0,1/2}^{\prime}(\mathbf{R}^{d})\setminus\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})\quad\text{and}\quad e^{-i\frac{\pi}{4}H_{x,\varrho}}f\in\mathcal{H}_{0,1/2}^{\prime}(\mathbf{R}^{d})\setminus\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$

for some $f\in\mathcal{S}_{1/2}(\mathbf{R}^{d})$.

In the following proposition we point out some auxiliary group properties for fractional Fourier transforms of complex orders, which extend similar results in [42] for fractional Fourier transforms of real orders. The result follows by straightforward applications of (3.6) and the fact that the Bargmann transform is injective. The details are left for the reader.

###### Proposition 3.3.

For any $\varrho\in\mathbf{C}^{d}$, let $\mathscr{F}_{\\!\varrho}$ be acting on $\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})$. Then $\\{\mathscr{F}_{\\!\varrho}\\}_{\varrho\in\mathbf{C}^{d}}$ is a commutative group under composition, with identity element $\mathscr{F}_{0}=\operatorname{Id}_{\mathcal{H}_{0}^{\prime}(\mathbf{R}^{d})}$, and

$\mathscr{F}_{\\!\varrho_{1}}\circ\mathscr{F}_{\\!\varrho_{2}}=\mathscr{F}_{\\!\varrho_{1}+\varrho_{2}},\qquad\mathscr{F}_{\\!\varrho}^{-1}=\mathscr{F}_{\\!-\varrho},\qquad\mathscr{F}_{\\!\varrho+\varrho_{0}}=\mathscr{F}_{\\!\varrho},$ (3.8)
$\varrho,\varrho_{1},\varrho_{2}\in\mathbf{C}^{d},\ \varrho_{0}\in 4\mathbf{Z}^{d}.$

### 3.2. Continuity for one-parameter fractional Fourier transforms and harmonic oscillator propagators on modulation spaces

As an example we shall next transfer mapping properties of $\mathscr{F}_{\\!\varrho}$, $\varrho\in\mathbf{R}$, when acting on certain classes of modulation spaces into analogous properties for harmonic oscillator propagators. For fractional Fourier transforms on modulation spaces we recall the following special case of [53, Proposition 7.1]. We refer to Subsection 1.4 for notations on weight classes.

###### Proposition 3.4.

Let $\varrho\in\mathbf{R}$, $p\in(0,\infty]$ and $\omega\in\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$. Then $\mathscr{F}_{\\!\varrho}$ is an isometric homeomorphism on $M^{p}_{(\omega)}(\mathbf{R}^{d})$.

A combination of Theorem 3.2 and Proposition 3.4 now gives the following. (See [8, Theorem 2.6] and [9, Theorem 1.7] in the case when $p\geq 1$ and $\omega=1$. See also [15] in the case when $p\geq 1$ and $\omega$ is moderated by polynomially bounded weights.)

###### Proposition 3.5.

Let $c\in\mathbf{C}$, $\varrho\in\mathbf{R}$, $p\in(0,\infty]$ and $\omega\in\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$. Then $e^{-i\frac{\pi}{4}H_{x,\varrho,c}}$ is a homeomorphism on $M^{p}_{(\omega)}(\mathbf{R}^{d})$.

We also have the following extensions of [16, Proposition 5.1] and the previous propositions. Here the involved weights are allowed to belong to the general class $\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$, and are linked as

$\omega_{\varrho}(x,\xi)=\omega(A_{d,\varrho}(x,\xi))$ (3.9)

(see Subsections 1.4, 1.7 and Remark 1.6).

###### Theorem 3.6.

Let $\varrho\in\mathbf{R}\setminus 2\mathbf{Z}$, $c\in\mathbf{C}$, $\omega,\omega_{\varrho}\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$ be such that $q\leq p$ and (3.9) hold.
Then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho,c}}$ on $\mathcal{H}_{\flat_{1}}^{\prime}(\mathbf{R}^{d})$ restrict to continuous mappings from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and from $W^{q,p}_{(\omega)}(\mathbf{R}^{d})$ to $W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and

$\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{M^{q,p}_{(\omega_{\varrho})}}\lesssim|\sin(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{M^{p,q}_{(\omega)}},\qquad f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ (3.10)

and

$\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{W^{p,q}_{(\omega_{\varrho})}}\lesssim|\sin(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{W^{q,p}_{(\omega)}},\qquad f\in W^{q,p}_{(\omega)}(\mathbf{R}^{d}).$ (3.11)

###### Theorem 3.7.

Let $\varrho\in\mathbf{R}\setminus(2\mathbf{Z}+1)$, $c\in\mathbf{C}$, $\omega,\omega_{\varrho}\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$ be such that $q\leq p$ and (3.9) hold. Then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho,c}}$ on $\mathcal{H}_{\flat_{1}}^{\prime}(\mathbf{R}^{d})$ restrict to continuous mappings from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and from $W^{q,p}_{(\omega)}(\mathbf{R}^{d})$ to $M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and

$\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{W^{p,q}_{(\omega_{\varrho})}}\lesssim|\cos(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{M^{p,q}_{(\omega)}},\qquad f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ (3.12)

and

$\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{M^{q,p}_{(\omega_{\varrho})}}\lesssim|\cos(\textstyle{\frac{\pi\varrho}{2}})|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{W^{q,p}_{(\omega)}},\qquad f\in W^{q,p}_{(\omega)}(\mathbf{R}^{d}).$ (3.13)

For the proofs of Theorems 3.6 and 3.7 we need the following version of Minkowski's inequality.

###### Lemma 3.8.

Let $\varrho\in\mathbf{R}$, $p,q\in(0,\infty]$ be such that $q\leq p$, let $A_{d,\varrho}$ be as in Subsection 1.7, and let $T_{\varrho}$ from $\Sigma_{1}(\mathbf{R}^{2d})$ to $\Sigma_{1}(\mathbf{R}^{2d})$ be given by

$(T_{\varrho}f)(x,\xi)=f(A_{d,\varrho}(x,\xi)),\qquad f\in L^{q}_{loc}(\mathbf{R}^{2d}).$

Then the following is true:

(1) if $\varrho\notin 2\mathbf{Z}$, then $T_{\varrho}$ extends uniquely to a continuous map from $L^{p,q}(\mathbf{R}^{2d})$ to $L^{q,p}(\mathbf{R}^{2d})$ and from $L^{q,p}_{*}(\mathbf{R}^{2d})$ to $L^{p,q}_{*}(\mathbf{R}^{2d})$, and

$\|T_{\varrho}f\|_{L^{q,p}}\leq|\sin{\textstyle{(\frac{\pi\varrho}{2})}}|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{L^{p,q}},\qquad f\in L^{q}_{loc}(\mathbf{R}^{2d})\text{;}$ (3.14)
(2) if $\varrho\notin 2\mathbf{Z}+1$, then $T_{\varrho}$ extends uniquely to a continuous map from $L^{p,q}(\mathbf{R}^{2d})$ to $L^{p,q}_{*}(\mathbf{R}^{2d})$ and from $L^{q,p}_{*}(\mathbf{R}^{2d})$ to $L^{q,p}(\mathbf{R}^{2d})$, and

$\|T_{\varrho}f\|_{L^{p,q}_{*}}\leq|\cos{\textstyle{(\frac{\pi\varrho}{2})}}|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{L^{p,q}},\qquad f\in L^{q}_{loc}(\mathbf{R}^{2d})\text{.}$ (3.15)

The same holds true with $L^{p,q}_{*}$ and $L^{q,p}_{*}$ in place of $L^{q,p}$ and $L^{p,q}$, respectively, at each occurrence.

###### Proof.

We only prove (1). The assertion (2) follows by similar arguments and is left for the reader.

First suppose that $p<\infty$. Then $q<\infty$. Let $\theta=\frac{\pi\varrho}{2}$, $f\in\mathscr{S}(\mathbf{R}^{2d})$, $p_{0}=p/q\geq 1$, let $h\in\mathscr{S}(\mathbf{R}^{d})$ be such that $\|h\|_{L^{p_{0}^{\prime}}}\leq 1$, and let

$f_{\varrho}(x,\xi)=f(A_{d,\varrho}(x,\xi))\quad\text{and}\quad g_{\varrho}(\xi)=\int_{\mathbf{R}^{d}}|f_{\varrho}(x,\xi)|^{q}\,dx.$

Since $f_{0}=f$ and

$(y,\eta)=A_{d,\varrho}(x,\xi)\quad\Leftrightarrow\quad(x,\xi)=A_{d,-\varrho}(y,\eta),$

we get

$|(g_{\varrho},h)_{L^{2}}|=\left|\iint_{\mathbf{R}^{2d}}|f_{0}(A_{d,\varrho}(x,\xi))|^{q}h(\xi)\,dxd\xi\right|=\left|\iint_{\mathbf{R}^{2d}}|f_{0}(y,\eta)|^{q}h((\sin\theta)y+(\cos\theta)\eta)\,dyd\eta\right|$
$\leq\int_{\mathbf{R}^{d}}\||f_{0}(\,\cdot\,,\eta)|^{q}\|_{L^{p_{0}}}\|h((\sin\theta)\,\cdot\,+(\cos\theta)\eta)\|_{L^{p_{0}^{\prime}}}\,d\eta$
$=|\sin\theta|^{-\frac{d}{p_{0}^{\prime}}}\|h\|_{L^{p_{0}^{\prime}}}\int_{\mathbf{R}^{d}}\||f_{0}(\,\cdot\,,\eta)|^{q}\|_{L^{p_{0}}}\,d\eta\leq|\sin\theta|^{-\frac{d}{p_{0}^{\prime}}}\|f_{0}\|_{L^{p,q}}^{q}.$

By taking the supremum over all possible $h$ with $\|h\|_{L^{p_{0}^{\prime}}}\leq 1$ we obtain

$\|g_{\varrho}\|_{L^{p_{0}}}\leq|\sin\theta|^{-\frac{d}{p_{0}^{\prime}}}\|f_{0}\|_{L^{p,q}}^{q}=|\sin\theta|^{-\frac{d}{p_{0}^{\prime}}}\|f\|_{L^{p,q}}^{q},$

which is the same as (3.14) when $f\in\mathscr{S}(\mathbf{R}^{2d})$. Since $\mathscr{S}(\mathbf{R}^{2d})$ is dense in $L^{p,q}(\mathbf{R}^{2d})$ when $p,q<\infty$, (3.14) follows when $p<\infty$.

Next suppose that $p=\infty$. The result is obviously true when $q=\infty$. Therefore suppose that $q<\infty$, and let $f\in L^{q}_{loc}(\mathbf{R}^{2d})$ and

$g(\xi)=\|f(\,\cdot\,,\xi)\|_{L^{p}}.$

Then

$\int_{\mathbf{R}^{d}}|f_{\varrho}(x,\xi)|^{q}\,dx\leq\int_{\mathbf{R}^{d}}|g((-\sin\theta)x+(\cos\theta)\xi)|^{q}\,dx=|\sin\theta|^{-d}\|g\|_{L^{q}}^{q}=|\sin\theta|^{-d}\|f\|_{L^{\infty,q}}^{q}.$

If we take the supremum over $\xi\in\mathbf{R}^{d}$, then we obtain (3.14) for $p=\infty$, and we have proved (3.14) for any $p\in(0,\infty]$. This gives (1), and the result follows. ∎
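The case $\varrho=1$ explains why Lemma 3.8 may be regarded as a version of Minkowski's inequality. Indeed, then $\theta=\frac{\pi}{2}$, $(T_{1}f)(x,\xi)=f(\xi,-x)$, and

$\|T_{1}f\|_{L^{q,p}}=\|f\|_{L^{p,q}_{*}},$

so that (3.14) in this case reduces to Minkowski's inequality $\|f\|_{L^{p,q}_{*}}\leq\|f\|_{L^{p,q}}$, $q\leq p$, with constant one.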
###### Proof of Theorems 3.6 and 3.7.

By the identity $\mathscr{F}(M^{p,q}_{(\omega_{1})}(\mathbf{R}^{d}))=W^{q,p}_{(\omega_{2})}(\mathbf{R}^{d})$ when $\omega_{1}(x,\xi)=\omega_{2}(\xi,-x)$, and the fact that $|\cos(\frac{\pi(\varrho+1)}{2})|=|\sin(\frac{\pi\varrho}{2})|$, it follows that Theorem 3.7 is the Fourier version of Theorem 3.6. Hence it suffices to prove Theorem 3.6. By (3.7) it also follows that it suffices to prove the norm estimates for $\mathscr{F}_{\\!\varrho}f$ in (3.10) and (3.11).

Let $\phi(x)=\pi^{-\frac{d}{4}}e^{-\frac{1}{2}|x|^{2}}$. Then (1.35) gives

$F_{\varrho}(x,\xi)=|(V_{\phi}(\mathscr{F}_{\\!\varrho}f))(x,\xi)\omega_{\varrho}(x,\xi)|=|(V_{\phi}f)(A_{d,\varrho}(x,\xi))\omega(A_{d,\varrho}(x,\xi))|.$

We also have

$F_{0}(x,\xi)=|(V_{\phi}f)(x,\xi)\omega(x,\xi)|\quad\text{and}\quad F_{\varrho}(x,\xi)=F_{0}(A_{d,\varrho}(x,\xi)).$

By Lemma 3.8 and the identities above we obtain

$\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}_{(\omega_{\varrho})}}\asymp\|F_{\varrho}\|_{L^{q,p}}\leq|\sin{\textstyle{(\frac{\pi\varrho}{2})}}|^{d(\frac{1}{p}-\frac{1}{q})}\|F_{0}\|_{L^{p,q}}\asymp|\sin{\textstyle{(\frac{\pi\varrho}{2})}}|^{d(\frac{1}{p}-\frac{1}{q})}\|f\|_{M^{p,q}_{(\omega)}},$

and (3.10) follows. In a similar way one obtains (3.11). The details are left for the reader, and the result follows. ∎

###### Remark 3.9.

By choosing $\omega=1$ and $p=q^{\prime}\geq 2$, Theorem 3.7 agrees with [16, Proposition 5.1] by Cordero and Nicola.

It is evident that Theorems 3.6 and 3.7 imply the following weighted version of Proposition 0.1 in the introduction.

###### Proposition 0.1′.

Let $\varrho\in\mathbf{R}$, $c\in\mathbf{C}$, $\omega,\omega_{\varrho}\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$ be such that $q\leq p$ and (3.9) holds. Then the following is true:

(1) the map

$\mathscr{F}_{\\!\varrho}=e^{-i\frac{\pi}{4}H_{x,\varrho,c}}\,:\,M^{p,q}_{(\omega)}(\mathbf{R}^{d})+W^{q,p}_{(\omega)}(\mathbf{R}^{d})\to M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d})+W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$

is continuous;

(2) if in addition $\varrho\notin\mathbf{Z}$, then the map

$\mathscr{F}_{\\!\varrho}=e^{-i\frac{\pi}{4}H_{x,\varrho,c}}\,:\,M^{p,q}_{(\omega)}(\mathbf{R}^{d})+W^{q,p}_{(\omega)}(\mathbf{R}^{d})\to M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d}){\textstyle\bigcap}W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$

is continuous.

By choosing $p=q$ in (1) of the previous proposition, we get Propositions 3.4 and 3.5.

### 3.3. Extensions to multiple ordered fractional Fourier transforms

By using similar arguments as in the proofs of Theorems 3.6 and 3.7, it follows that the following extensions hold true. Again we refer to Subsections 1.4 and 1.7 for notations. The details are left for the reader.

###### Theorem 3.6′.

Let $\varrho\in\mathbf{R}^{d}\setminus 2\mathbf{Z}^{d}$, $c\in\mathbf{C}$, $\omega,\omega_{\varrho}\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$ be such that $q\leq p$ and (3.9) hold.
Then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho,c}}$ on $\mathcal{H}_{\flat_{1}}^{\prime}(\mathbf{R}^{d})$ restrict to continuous mappings from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and from $W^{q,p}_{(\omega)}(\mathbf{R}^{d})$ to $W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and

$\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{M^{q,p}_{(\omega_{\varrho})}}\lesssim\big{(}{\textstyle{\prod_{j=1}^{d}}}|\sin(\textstyle{\frac{\pi\varrho_{j}}{2}})|^{\frac{1}{p}-\frac{1}{q}}\big{)}\|f\|_{M^{p,q}_{(\omega)}},\qquad f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ (3.10)′

and

$\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{W^{p,q}_{(\omega_{\varrho})}}\lesssim\big{(}{\textstyle{\prod_{j=1}^{d}}}|\sin(\textstyle{\frac{\pi\varrho_{j}}{2}})|^{\frac{1}{p}-\frac{1}{q}}\big{)}\|f\|_{W^{q,p}_{(\omega)}},\qquad f\in W^{q,p}_{(\omega)}(\mathbf{R}^{d}).$ (3.11)′

###### Theorem 3.7′.

Let $\varrho\in\mathbf{R}^{d}\setminus(2\mathbf{Z}+1)^{d}$, $c\in\mathbf{C}$, $\omega,\omega_{\varrho}\in\mathscr{P}_{\\!A}(\mathbf{R}^{2d})$ and $p,q\in(0,\infty]$ be such that $q\leq p$ and (3.9) hold. Then $\mathscr{F}_{\\!\varrho}$ and $e^{-i\frac{\pi}{4}H_{x,\varrho,c}}$ on $\mathcal{H}_{\flat_{1}}^{\prime}(\mathbf{R}^{d})$ restrict to continuous mappings from $M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ to $W^{p,q}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and from $W^{q,p}_{(\omega)}(\mathbf{R}^{d})$ to $M^{q,p}_{(\omega_{\varrho})}(\mathbf{R}^{d})$, and

$\|\mathscr{F}_{\\!\varrho}f\|_{W^{p,q}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{W^{p,q}_{(\omega_{\varrho})}}\lesssim\big{(}{\textstyle{\prod_{j=1}^{d}}}|\cos(\textstyle{\frac{\pi\varrho_{j}}{2}})|^{\frac{1}{p}-\frac{1}{q}}\big{)}\|f\|_{M^{p,q}_{(\omega)}},\qquad f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})$ (3.12)′

and

$\|\mathscr{F}_{\\!\varrho}f\|_{M^{q,p}_{(\omega_{\varrho})}}=e^{-\frac{\pi}{4}\cdot\operatorname{Im}(c)}\|e^{-i\frac{\pi}{4}H_{x,\varrho,c}}f\|_{M^{q,p}_{(\omega_{\varrho})}}\lesssim\big{(}{\textstyle{\prod_{j=1}^{d}}}|\cos(\textstyle{\frac{\pi\varrho_{j}}{2}})|^{\frac{1}{p}-\frac{1}{q}}\big{)}\|f\|_{W^{q,p}_{(\omega)}},\qquad f\in W^{q,p}_{(\omega)}(\mathbf{R}^{d}).$ (3.13)′

## 4. Applications to Strichartz estimates, and some further continuity properties for certain partial differential equations

In this section we apply results from the previous sections to extend certain Strichartz estimates in [16], with initial data in suitable Wiener amalgam spaces. Thereafter we deduce further continuity properties for a family of equations involving certain Schrödinger equations and heat equations.

### 4.1. Strichartz estimates for certain Schrödinger equations

We shall deduce Strichartz estimates in the framework of the operators $E$, $S_{1}$ and $S_{2}$ in Subsection 1.8 (see (1.38)′–(1.41)′). If $T>0$, then it follows by straightforward estimates that $S_{1}$ and $S_{2}$ are continuous from $C([0,T];M^{1}(\mathbf{R}^{d}))$ to $L^{\infty}([0,T];M^{1}(\mathbf{R}^{d}))$.
By Proposition 3.5 it follows that $E$, $S_{1}$ and $S_{2}$ are uniquely defined and continuous. In the following result we extend the operators $S_{j}$ to act between spaces of the form $L^{r_{0}}([0,T];M^{p,q}_{(\omega)}(\mathbf{R}^{d}))$ and $L^{r_{0}}([0,T];W^{p,q}_{(\omega)}(\mathbf{R}^{d}))$.

###### Theorem 4.1.

Let $p,p_{0},q\in[1,\infty]$ and $r_{0}\in(0,\infty]$ be such that

$0\leq d\left(\frac{1}{q}-\frac{1}{p}\right)<1,\quad d\left(\frac{1}{q}-\frac{1}{p}\right)\leq 1+\frac{1}{r_{0}}-\frac{1}{p_{0}},$ (4.1)

with strict inequalities when $q<p$ and $p_{0}=1$, or when $q<p$ and $r_{0}=\infty$. Also let $S_{1}$ and $S_{2}$ from $C([0,T];M^{1}(\mathbf{R}^{d}))$ to $L^{\infty}([0,T];M^{1}(\mathbf{R}^{d}))$ be given by (1.39)′ and (1.41)′, and let $\omega\in\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$. Then $S_{1}$ and $S_{2}$ are uniquely extendable to continuous mappings

$S_{j}\,:\,L^{p_{0}}([0,T];M^{p,q}_{(\omega)}(\mathbf{R}^{d}))\to L^{r_{0}}([0,T];M^{q,p}_{(\omega)}(\mathbf{R}^{d})),$ (4.2)
$S_{j}\,:\,L^{p_{0}}([0,T];M^{p,q}_{(\omega)}(\mathbf{R}^{d}))\to L^{r_{0}}([0,T];W^{p,q}_{(\omega)}(\mathbf{R}^{d})),$ (4.3)
$S_{j}\,:\,L^{p_{0}}([0,T];W^{q,p}_{(\omega)}(\mathbf{R}^{d}))\to L^{r_{0}}([0,T];M^{q,p}_{(\omega)}(\mathbf{R}^{d})),$ (4.4)

and

$S_{j}\,:\,L^{p_{0}}([0,T];W^{q,p}_{(\omega)}(\mathbf{R}^{d}))\to L^{r_{0}}([0,T];W^{p,q}_{(\omega)}(\mathbf{R}^{d})),$ (4.5)

$j=1,2$.

Theorem 4.1 can also be formulated in the following way (cf. Theorem 0.5 in the introduction).

###### Theorem 0.5′.

Let $p,p_{0},q\in[1,\infty]$ and $r_{0}\in(0,\infty]$ be such that

$0\leq d\left(\frac{1}{q}-\frac{1}{p}\right)<1,\quad d\left(\frac{1}{q}-\frac{1}{p}\right)\leq 1+\frac{1}{r_{0}}-\frac{1}{p_{0}},$

with strict inequalities when $q<p$ and $p_{0}=1$, or when $q<p$ and $r_{0}=\infty$. Also let $\omega\in\mathscr{P}_{\\!A,r}(\mathbf{R}^{2d})$. Then $S_{1}$ and $S_{2}$ from $C([0,T];M^{1}(\mathbf{R}^{d}))$ to $L^{\infty}([0,T];M^{1}(\mathbf{R}^{d}))$ are uniquely extendable to continuous mappings

$S_{j}\,:\,L^{p_{0}}([0,T];M^{p,q}_{(\omega)}(\mathbf{R}^{d})+W^{q,p}_{(\omega)}(\mathbf{R}^{d}))\to L^{r_{0}}([0,T];M^{q,p}_{(\omega)}(\mathbf{R}^{d})\bigcap W^{p,q}_{(\omega)}(\mathbf{R}^{d})),$

$j=1,2$, and

$\|S_{j}F\|_{L^{r_{0}}([0,T];M^{q,p}_{(\omega)}(\mathbf{R}^{d}))}+\|S_{j}F\|_{L^{r_{0}}([0,T];W^{p,q}_{(\omega)}(\mathbf{R}^{d}))}\\\\ \lesssim\min\left(\|F\|_{L^{p_{0}}([0,T];M^{p,q}_{(\omega)}(\mathbf{R}^{d}))},\|F\|_{L^{p_{0}}([0,T];W^{q,p}_{(\omega)}(\mathbf{R}^{d}))}\right),\quad j=1,2.$ (4.6)
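To exemplify the admissible exponents in Theorem 4.1, let $d=1$, $p=2$, $q=1$ and $p_{0}=2$. Then $d(\frac{1}{q}-\frac{1}{p})=\frac{1}{2}$ and (4.1) holds for every finite $r_{0}$, so that (4.2) gives the continuity

$S_{j}\,:\,L^{2}([0,T];M^{2,1}_{(\omega)}(\mathbf{R}))\to L^{r_{0}}([0,T];M^{1,2}_{(\omega)}(\mathbf{R})),\qquad r_{0}<\infty,$

while the endpoint $r_{0}=\infty$ is excluded by the strict inequality requirement.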
Then $T_{1}$ and $T_{2}$ are uniquely extendable to continuous mappings from $L^{p_{0}}[0,T]$ to $L^{r_{0}}[0,T]$. ###### Proof. We only prove the assertions for $T_{1}$. The assertions for $T_{2}$ follow by similar arguments and are left for the reader. First suppose that $1<p_{0}$ and $r_{0}<\infty$, and let $\psi$ be a measurable complex-valued function on $\mathbf{R}$ such that $|\psi(t)|\lesssim|t|^{-\frac{1}{q_{0}}}.$ We observe that if $k$ is an integer, then $\displaystyle|\phi_{q_{0}}(t)|$ $\displaystyle\lesssim|t-k\pi|^{-\frac{1}{q_{0}}}$ when $\displaystyle\quad|t-k\pi|$ $\displaystyle\leq\frac{\pi}{2},$ or $\displaystyle|\phi_{q_{0}}(t)|$ $\displaystyle\lesssim|t-(k+{\textstyle{\frac{1}{2}}})\pi|^{-\frac{1}{q_{0}}}$ when $\displaystyle\quad|t-(k+{\textstyle{\frac{1}{2}}})\pi|$ $\displaystyle\leq\frac{\pi}{2}.$ Since $L^{p}[0,T]$ decreases with $p$, we may assume that equality is attained in the first inequality in (4.7). Then it follows from Lebesgue’s theorem and the Hardy-Littlewood-Sobolev inequality in [35, Theorem 4.5.3] that $f\mapsto\psi*f$ from $C_{0}(\mathbf{R})$ to $C(\mathbf{R})$ is uniquely extendable to a continuous map from $L^{p_{0}}(\mathbf{R})$ to $L^{r_{0}}(\mathbf{R})$. We may now divide $T_{1}$ into a finite sum $T_{1}=\sum_{j=1}^{N}T_{1,j},$ where $(T_{1,j}f)(t)=\int_{a_{j}}^{b_{j}}f(t-s)\psi_{j}(s)\,ds,$ with $\psi_{j}$ being measurable functions on $\mathbf{R}$ which satisfy $|\psi_{j}(t)|\lesssim|t-a_{j}|^{-\frac{1}{q_{0}}}\quad\text{or}\quad|\psi_{j}(t)|\lesssim|t-b_{j}|^{-\frac{1}{q_{0}}},$ for some $a_{j}\leq b_{j}$, $j=1,\dots,N$. Since each $T_{1,j}$ is uniquely extendable to a continuous map from $L^{p_{0}}(\mathbf{R})$ to $L^{r_{0}}(\mathbf{R})$, in view of the previous part of the proof, the asserted continuity for $T_{1}$ follows in the case $1<p_{0}$ and $r_{0}<\infty$. Next suppose that $p_{0}=1$. Then $r_{0}<q_{0}$ and $q_{0}<\infty$, or $r_{0}=q_{0}=\infty$. This implies that $\phi_{q_{0}}\in L^{r_{0}}[0,T]$. By Minkowski’s inequality we obtain $\|T_{1}h\|_{L^{r_{0}}[0,T]}\leq\int_{0}^{T}\|\phi_{q_{0}}(\,\cdot\,-s)\|_{L^{r_{0}}[0,T]}|h(s)|\,ds\leq 2\|\phi_{q_{0}}\|_{L^{r_{0}}[0,T]}\|h\|_{L^{1}[0,T]},$ and the result follows in the case $p_{0}=1$. Finally suppose that $r_{0}=\infty$ and $q_{0}<\infty$. Then $p_{0}^{\prime}<q_{0}$, giving that $\phi_{q_{0}}\in L^{p_{0}^{\prime}}[0,T]$. Hence, Hölder’s inequality gives $|(T_{1}h)(t)|\leq\|\phi_{q_{0}}(t-\,\cdot\,)\|_{L^{p_{0}^{\prime}}[0,T]}\|h\|_{L^{p_{0}}[0,T]}\leq 2\|\phi_{q_{0}}\|_{L^{p_{0}^{\prime}}[0,T]}\|h\|_{L^{p_{0}}[0,T]},$ and the result follows. ∎ ###### Proof of Theorem 4.1. We only prove the continuity for (4.2). The other cases follow by similar arguments and are left for the reader. Since $L^{r_{0}}[0,T]$ decreases with $r_{0}$ and (4.1) obviously holds for some $r_{0}\geq 1$, we may assume that $r_{0}\geq 1$. We also observe that (4.1) implies that $q\leq p$, which makes it possible to apply Theorem 3.6. Let $q_{0}\in\mathbf{R}\cup\\{\infty\\}$ be defined by $\frac{1}{q_{0}}=d\left(\frac{1}{q}-\frac{1}{p}\right)$ and let $\phi_{q_{0}}$ be the same as in Lemma 4.2. Then $q_{0}\in(1,\infty]$.
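Indeed, this is precisely the bookkeeping needed for Lemma 4.2: by (4.1), $\frac{1}{q_{0}}=d\left(\frac{1}{q}-\frac{1}{p}\right)\in[0,1)\quad\text{and}\quad\frac{1}{p_{0}}+\frac{1}{q_{0}}\leq 1+\frac{1}{r_{0}},$ so that (4.7) holds, and the strict-inequality cases in (4.1) (when $q<p$, i.e. $q_{0}<\infty$) match those in Lemma 4.2.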
A combination of Theorem 3.6 and Minkowski’s inequality gives $\displaystyle\|(SF)(t,\,\cdot\,)\|_{M^{q,p}_{(\omega)}}$ $\displaystyle\leq\int_{0}^{T}\|(e^{-i\pi(t-s)H_{x,c}}F(s,\,\cdot\,))\|_{M^{q,p}_{(\omega)}}\,ds$ $\displaystyle\lesssim\int_{0}^{T}\phi_{q_{0}}(t-s)\|F(s,\,\cdot\,)\|_{M^{p,q}_{(\omega)}}\,ds.$ By applying the $L^{r_{0}}[0,T]$ norm to the last inequality and using Lemma 4.2 we get $\|SF\|_{L^{r_{0}}([0,T];M^{q,p}_{(\omega)})}\lesssim\|F\|_{L^{p_{0}}([0,T];M^{p,q}_{(\omega)})},$ and the result follows. ∎ Next we shall apply Theorems 3.6 and 4.1 to deduce continuity properties for the operator $E$ in (1.40)′, and thereby obtain Strichartz estimates for the harmonic oscillator propagator $e^{-itH_{x,c}}$. The first result is the following and is a straight-forward consequence of Theorem 3.6. The details are left for the reader. Here we let $L^{p}_{\text{w}}(\Omega)$ be the weak $L^{p}(\Omega)$ space when $p\in(0,\infty]$, which is often denoted by $L^{p,\infty}(\Omega)$ or $L^{p,*}(\Omega)$ in the literature (see e. g. [7]). ###### Theorem 4.3. Let $c\in\mathbf{C}$, $\omega\in\mathscr{P}_{E}^{r}(\mathbf{R}^{2d})$ and $p,q,r_{0}\in(0,\infty]$ be such that $q\leq p$ and $\frac{1}{r_{0}}=d\left(\frac{1}{q}-\frac{1}{p}\right).$ (4.8) Then $E$ in (1.40)′ is uniquely extendable to a continuous map $E:M^{p,q}_{(\omega)}(\mathbf{R}^{d})+W^{q,p}_{(\omega)}(\mathbf{R}^{d})\to L^{r_{0}}_{{\text{\rm{w}}}}([0,T];M^{q,p}_{(\omega)}(\mathbf{R}^{d}))\bigcap L^{r_{0}}_{{\text{\rm{w}}}}([0,T];W^{p,q}_{(\omega)}(\mathbf{R}^{d})),$ and $\|Ef\|_{L^{r_{0}}_{{\text{\rm{w}}}}([0,T];M^{q,p}_{(\omega)})}+\|Ef\|_{L^{r_{0}}_{{\text{\rm{w}}}}([0,T];W^{p,q}_{(\omega)})}\lesssim\|f\|_{M^{p,q}_{(\omega)}+W^{q,p}_{(\omega)}},\\\\[4.30554pt] f\in M^{p,q}_{(\omega)}(\mathbf{R}^{d})+W^{q,p}_{(\omega)}(\mathbf{R}^{d}).$ (4.9) Here recall that if $\mathcal{B}_{1}$ and $\mathcal{B}_{2}$ are quasi-Banach spaces, then $\mathcal{B}_{1}+\mathcal{B}_{2}$ is the quasi-Banach space $\\{\,f_{1}+f_{2}\,;\,f_{1}\in\mathcal{B}_{1},\ f_{2}\in\mathcal{B}_{2}\,\\}$ equipped with the quasi-norm $\|f\|_{\mathcal{B}_{1}+\mathcal{B}_{2}}\equiv\inf_{f=f_{1}+f_{2}}\left(\|f_{1}\|_{\mathcal{B}_{1}}+\|f_{2}\|_{\mathcal{B}_{2}}\right).$ ###### Remark 4.4. If $p=q$ in Theorem 4.3, then $r_{0}=\infty$ in (4.8). By Theorems 3.6 and 3.7 it follows that $E$ in (1.40)′ is uniquely extendable to a continuous map $E:M^{p}_{(\omega)}(\mathbf{R}^{d})\to L^{\infty}(\mathbf{R};M^{p}_{(\omega)}(\mathbf{R}^{d})),$ and $\|Ef\|_{L^{\infty}(\mathbf{R};M^{p}_{(\omega)})}\lesssim\|f\|_{M^{p}_{(\omega)}},\qquad f\in M^{p}_{(\omega)}(\mathbf{R}^{d}).$ (4.9)′ The next result follows by combining Theorem 4.1 with general techniques in Section 2 in [30] for Strichartz estimates in order to deduce further continuity properties for $E$ and $E^{*}$. ###### Theorem 4.5.
Let $p,p_{0}\in[1,\infty]$ be such that $2\leq p<\frac{2d}{d-1}\quad\text{and}\quad d\left(1-\frac{2}{p}\right)=\frac{2}{p_{0}^{\prime}}.$ (4.10) Then $E^{*}$ in (1.40)′ is uniquely extendable to continuous mappings $\displaystyle E^{*}$ $\displaystyle:\,$ $\displaystyle L^{p_{0}}([0,T];M^{p,p^{\prime}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{2}(\mathbf{R}^{d}),$ (4.11) $\displaystyle E^{*}$ $\displaystyle:\,$ $\displaystyle L^{p_{0}}([0,T];W^{p^{\prime},p}(\mathbf{R}^{d}))$ $\displaystyle\to L^{2}(\mathbf{R}^{d}),$ (4.12) and $E$ in (1.38)′ is uniquely extendable to continuous mappings $\displaystyle E$ $\displaystyle:\,$ $\displaystyle L^{2}(\mathbf{R}^{d})$ $\displaystyle\to L^{p_{0}^{\prime}}([0,T];M^{p^{\prime},p}(\mathbf{R}^{d})),$ (4.13) $\displaystyle E$ $\displaystyle:\,$ $\displaystyle L^{2}(\mathbf{R}^{d})$ $\displaystyle\to L^{p_{0}^{\prime}}([0,T];W^{p,p^{\prime}}(\mathbf{R}^{d})).$ (4.14) ###### Proof. By choosing $r_{0}=p_{0}^{\prime}$, $q=p^{\prime}$ and $\omega=1$, it follows that (4.2) and (4.5) take the forms $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0}}([0,T];M^{p,p^{\prime}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0}^{\prime}}([0,T];M^{p^{\prime},p}(\mathbf{R}^{d}))$ (4.2)′ and $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0}}([0,T];W^{p^{\prime},p}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0}^{\prime}}([0,T];W^{p,p^{\prime}}(\mathbf{R}^{d})).$ (4.5)′ Since the ranges in (4.2)′ and (4.5)′ are the duals of their domains, the result follows by straight-forward applications of the equivalences (2.1)–(2.3) in [30]. ∎ By Theorem 4.5 and the fact that $S_{2}=E\circ E^{*}$, we get the following (see also [30, Corollary 2.1]). ###### Theorem 4.6. Let $p_{j},p_{0,j}\in[1,\infty]$ be such that $2\leq p_{j}<\frac{2d}{d-1}\quad\text{and}\quad d\left(1-\frac{2}{p_{j}}\right)=\frac{2}{p_{0,j}^{\prime}},$ (4.15) $j=1,2$. Then $S_{2}$ in (1.41)′ is uniquely extendable to continuous mappings $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0,1}}([0,T];M^{p_{1},p_{1}^{\prime}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0,2}^{\prime}}([0,T];M^{p_{2}^{\prime},p_{2}}(\mathbf{R}^{d})),$ (4.16) $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0,1}}([0,T];M^{p_{1},p_{1}^{\prime}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0,2}^{\prime}}([0,T];W^{p_{2},p_{2}^{\prime}}(\mathbf{R}^{d})),$ (4.17) $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0,1}}([0,T];W^{p_{1}^{\prime},p_{1}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0,2}^{\prime}}([0,T];M^{p_{2}^{\prime},p_{2}}(\mathbf{R}^{d})),$ (4.18) and $\displaystyle S_{2}$ $\displaystyle:\,$ $\displaystyle L^{p_{0,1}}([0,T];W^{p_{1}^{\prime},p_{1}}(\mathbf{R}^{d}))$ $\displaystyle\to L^{p_{0,2}^{\prime}}([0,T];W^{p_{2},p_{2}^{\prime}}(\mathbf{R}^{d})).$ (4.19) The same holds true with $S_{1}$ in place of $S_{2}$ at each occurrence. ###### Remark 4.7. We observe that (4.19) is the same as (45) in [16]. ###### Remark 4.8. We observe that the conditions (4.10) and (4.15) imply that $p_{0}<2$ and $p_{0,j}<2$. ### 4.2. Continuity properties for a family of equations related to Schrödinger and heat equations Next we consider more general equations, given by (1.36), where $u_{0}$ is a suitable function or ultra-distribution on $\mathbf{R}^{d}$, $F$ is a suitable function or ultra-distribution on $\mathbf{R}^{d+1}$ and $R=H_{x,\varrho,c}^{r}$ for some $r\geq 0$, $\varrho\in\mathbf{C}^{d}$ and $c\in\mathbf{C}$.
That is, we consider $\begin{cases}i\partial_{t}u-H_{x,\varrho,c}^{r}u=F,\\\\[4.30554pt] u(0,x)=u_{0}(x),\qquad(t,x)\in[0,T]\times\mathbf{R}^{d},\end{cases}$ (4.20) where $T>0$ is fixed. Here we observe that (4.20) is a partial differential equation when $r$ is an integer. For general $r$, (4.20) becomes a pseudo-differential equation. By (1.37) it follows that the formal solution is given by $u(t,x)=(e^{-itH_{x,\varrho,c}^{r}}u_{0})(x)-i\int_{0}^{t}(e^{-i(t-s)H_{x,\varrho,c}^{r}}F(s,\,\cdot\,))(x)\,ds.$ (4.21) Hence, questions on well-posedness for the equation (4.20) rely completely on continuity properties of the propagator $\displaystyle(E_{\varrho,r,c}f)(t,x)$ $\displaystyle\equiv(e^{-itH_{x,\varrho,c}^{r}}f)(x),\qquad(t,x)\in[0,T]\times\mathbf{R}^{d},$ (4.22) as well as of the operators $\displaystyle(S_{1,\varrho,r,c}F)(t,x)$ $\displaystyle=\int_{0}^{t}(e^{-i(t-s)H_{x,\varrho,c}^{r}}F(s,\,\cdot\,))(x)\,ds,\qquad(t,x)\in[0,T]\times\mathbf{R}^{d},$ (4.23) and $\displaystyle(S_{2,\varrho,r,c}F)(t,x)$ $\displaystyle=\int_{0}^{T}(e^{-i(t-s)H_{x,\varrho,c}^{r}}F(s,\,\cdot\,))(x)\,ds,\qquad(t,x)\in[0,T]\times\mathbf{R}^{d}.$ (4.24) Here we remark that there are different definitions of well-posed problems in the literature. We say that the problem (4.20) is _well-posed_ if the solution $u=u(t,x)$ depends continuously on the initial data $u_{0}$. If (4.20) fails to be well-posed, then (4.20) is called _ill-posed_. By Proposition 2.2 it follows that the following is true, which shows that the operator in (4.22) easily becomes discontinuous in the framework of classical functions and (ultra-)distribution spaces. The details are left for the reader. ###### Proposition 4.9. Let $r,T>0$, $c\in\mathbf{C}$ and $\varrho\in\mathbf{C}^{d}$ be such that $\operatorname{Im}(\varrho_{j}^{r})>0$ for some $j\in\\{1,\dots,d\\}$. Then the following is true: 1. (1) $E_{\varrho,r,c}$ in (4.22) is discontinuous from $\mathscr{S}(\mathbf{R}^{d})$ to $\mathscr{S}^{\prime}(\mathbf{R}^{d})$; 2. (2) if in addition $r\geq 1$, then $E_{\varrho,r,c}$ in (4.22) is discontinuous from $\mathcal{S}_{1/2}(\mathbf{R}^{d})$ to $\mathcal{S}_{1/2}^{\prime}(\mathbf{R}^{d})$. On the other hand, by Proposition 2.1, it follows that the mappings (4.22), (4.23) and (4.24) are continuous on suitable Pilipović spaces, which is explained in the following result. The details are left for the reader. Here we let $\displaystyle L^{1}([0,T];\mathcal{H}_{0,s}(\mathbf{R}^{d}))$ $\displaystyle=\underset{r>0}{\operatorname{proj\,lim\,}}L^{1}([0,T];\mathcal{H}_{s;r}(\mathbf{R}^{d})),$ $\displaystyle L^{1}([0,T];\mathcal{H}_{s}(\mathbf{R}^{d}))$ $\displaystyle=\underset{r>0}{\operatorname{ind\,lim\,}}L^{1}([0,T];\mathcal{H}_{s;r}(\mathbf{R}^{d})),$ $\displaystyle L^{1}([0,T];\mathcal{H}_{s}^{\prime}(\mathbf{R}^{d}))$ $\displaystyle=\underset{r>0}{\operatorname{proj\,lim\,}}L^{1}([0,T];\mathcal{H}_{s;r}^{\prime}(\mathbf{R}^{d})),$ and $\displaystyle L^{1}([0,T];\mathcal{H}_{0,s}^{\prime}(\mathbf{R}^{d}))$ $\displaystyle=\underset{r>0}{\operatorname{ind\,lim\,}}L^{1}([0,T];\mathcal{H}_{s;r}^{\prime}(\mathbf{R}^{d})),$ where $\mathcal{H}_{s;r}(\mathbf{R}^{d})$ and $\mathcal{H}_{s;r}^{\prime}(\mathbf{R}^{d})$ are the images of $\ell_{s;r}^{\infty}(\mathbf{N}^{d})$ and $\ell_{s;r}^{\infty,*}(\mathbf{N}^{d})$, respectively, under the map $T_{\mathcal{H}}$ in (1.11), also in topological sense. (See also Definition 1.1.) ###### Theorem 4.10.
Let $r,T>0$, $c\in\mathbf{C}$, $\varrho\in\mathbf{C}^{d}$ and let $s,s_{1},s_{2}\in\overline{\mathbf{R}_{\flat}}$ be such that $0<s_{1}\leq\frac{1}{2r}$ and $s_{2}<\frac{1}{2r}$. Then the following is true: 1. (1) $E_{\varrho,r,c}$ in (4.22) is a homeomorphism on the spaces in (2.6); 2. (2) $S_{j,\varrho,r,c}$ in (4.23) and (4.24) are homeomorphisms on the spaces $\displaystyle L^{1}([0,T];\mathcal{H}_{0,s_{1}}(\mathbf{R}^{d})),$ $\displaystyle L^{1}([0,T];\mathcal{H}_{s_{2}}(\mathbf{R}^{d})),$ (4.25) $\displaystyle L^{1}([0,T];\mathcal{H}_{s_{2}}^{\prime}(\mathbf{R}^{d}))$ and $\displaystyle L^{1}([0,T];\mathcal{H}_{0,s_{1}}^{\prime}(\mathbf{R}^{d})).$ As consequences of Proposition 4.9 and Theorem 4.10 we get the following, concerning well-posedness of the equation (4.20). ###### Corollary 4.11. Suppose that $r\geq 1$, $c\in\mathbf{C}$ and $\varrho\in\mathbf{C}^{d}$ are such that $\operatorname{Im}(\varrho_{j}^{r})>0$ for some $j\in\\{1,\dots,d\\}$. Then the following is true: 1. (1) the equation (4.20) is ill-posed in the framework of Schwartz functions, Gelfand-Shilov spaces, and their dual spaces; 2. (2) the equation (4.20) is well-posed for the Pilipović spaces and their dual spaces in (2.6). ###### Remark 4.12. If $c=0$, $r=1$, $F=0$ and $\varrho_{j}=i$ for every $j$ in Corollary 4.11, then (4.20) takes the form $\begin{cases}\partial_{t}u=(-\Delta_{x}+|x|^{2})u,\\\\[4.30554pt] u(0,x)=u_{0}(x),\qquad(t,x)\in[0,T]\times\mathbf{R}^{d}.\end{cases}$ (4.20)′ That is, we obtain a sort of heat equation, where the potential term $|x|^{2}$ is included. The minus sign in front of the Laplace operator $\Delta_{x}$ implies that we are searching for a solution when moving backwards in time. Corollary 4.11 then shows that it is _not meaningful_ to investigate (4.20)′ in the framework of Schwartz spaces, Gelfand-Shilov spaces and their distribution spaces. On the other hand, it follows from the same corollary that it always makes sense to investigate such problems in the framework of Pilipović spaces which are not Gelfand-Shilov spaces, and their distributions. ### 4.3. Some further applications and remarks A question which appears is whether our results are applicable to problems like $\begin{cases}\partial_{t}u-\operatorname{Op}^{w}(a)u=F,\\\\[4.30554pt] u(0,x)=u_{0}(x),\qquad(t,x)\in[0,T]\times\mathbf{R}^{d},\end{cases}$ (4.26) when $a$ is a positive definite quadratic form on $\mathbf{R}^{2d}$. Here $\operatorname{Op}^{w}(a)$ is the _Weyl quantization_ of $a$, i. e. the operator on $\mathscr{S}(\mathbf{R}^{d})$, given by $\operatorname{Op}^{w}(a)f(x)=(2\pi)^{-d}\iint_{\mathbf{R}^{2d}}a({\textstyle{\frac{1}{2}}}(x+y),\xi)f(y)e^{i\langle x-y,\xi\rangle}\,dyd\xi.$ The operator $\operatorname{Op}^{w}(a)$ is continuous on $\mathscr{S}(\mathbf{R}^{d})$ and on any Pilipović space, and it extends uniquely to a continuous map on $\mathscr{S}^{\prime}(\mathbf{R}^{d})$ and on any Pilipović distribution space. (See e. g. [35, 54].) By introducing suitable new symplectic coordinates it follows that $\operatorname{Op}^{w}(a)$ takes the form $\operatorname{Op}^{w}(a)=\sum_{j=1}^{d}\varrho_{j}(x_{j}^{2}-\partial_{x_{j}}^{2})+c,$ for some $\varrho_{j}>0$, $j=1,\dots,d$, and some real constant $c$, in these new coordinates. (See Section 18.6 in [35].) By Proposition 4.9 and Theorem 4.10 it follows that (4.26) is ill-posed in the framework of Schwartz spaces, Gelfand-Shilov spaces and their distribution spaces, but well-posed for other types of Pilipović spaces of functions and distributions.
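As a simple illustration of this normal form (our example; the constants are for illustration only), let $d=1$ and $a(x,\xi)=\alpha x^{2}+\beta\xi^{2}$ with $\alpha,\beta>0$. The linear symplectic change of coordinates $(x,\xi)\mapsto(\lambda x,\lambda^{-1}\xi)$ with $\lambda=(\beta/\alpha)^{1/4}$ gives $a(\lambda x,\lambda^{-1}\xi)=\alpha\lambda^{2}x^{2}+\beta\lambda^{-2}\xi^{2}=\sqrt{\alpha\beta}\,(x^{2}+\xi^{2}),$ so that, by the symplectic covariance of the Weyl calculus, $\operatorname{Op}^{w}(a)$ is conjugated by a metaplectic operator into $\sqrt{\alpha\beta}\,(x^{2}-\partial_{x}^{2})$, i.e. the case $\varrho_{1}=\sqrt{\alpha\beta}$ and $c=0$ above.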
Here we notice that we need to remain in these new symplectic coordinates, since Pilipović spaces which are not Gelfand-Shilov spaces, are not invariant under general changes of symplectic coordinates. (See e. g. [53].)

## References

* [1] A. Abdeljawad, S. Coriasco, J. Toft _Liftings for ultra-modulation spaces, and one-parameter groups of Gevrey type pseudo-differential operators_ , Anal. Appl. 18 (2020), 523–583.
* [2] T. Alieva, M.J. Bastiaans _Wigner distribution and fractional Fourier transform in: B. Boashash (ed.), Time-Frequency Signal Analysis and Processing: A Comprehensive Reference_, Elsevier, Oxford, UK, 2003, pp. 145–152.
* [3] L. B. Almeida _The fractional Fourier transform and time-frequency representations_ , IEEE Trans. on Signal Processing 42 (1994), 3084–3091.
* [4] A. Abdeljawad, C. Fernandez, A. Galbis, J. Toft, R. Üster _Characterizations of a class of Pilipović spaces by powers of harmonic oscillator_ , RACSAM 114 (2020), 131.
* [5] V. Bargmann, _On a Hilbert space of analytic functions and an associated integral transform_ , Comm. Pure Appl. Math. 14 (1961), 187–214.
* [6] V. Bargmann, _On a Hilbert space of analytic functions and an associated integral transform. Part II. A family of related function spaces. Application to distribution theory_ , Comm. Pure Appl. Math. 20 (1967), 1–101.
* [7] J. Bergh, J. Löfström _Interpolation Spaces, An Introduction_ , Springer-Verlag, Berlin Heidelberg New York, 1976.
* [8] D. G. Bhimani _The nonlinear Schrödinger equations with harmonic potential in modulation spaces_ , Discrete Contin. Dyn. Syst. 39 (2019), 5923–5944.
* [9] D. G. Bhimani, R. Balhara, S. Thangavelu _Hermite multipliers on modulation spaces_ , in: Analysis and partial differential equations: perspectives from developing countries, Springer Proc. Math. Stat., 275, Springer, Cham, 2019, pp. 42–64.
* [10] D. G. Bhimani, R. Manna, F. Nicola, S. Thangavelu, S. I. Trapasso _Phase space analysis of the Hermite semigroup and applications to nonlinear global well-posedness_ , Adv. Math. 392 (2021), Paper no. 107995.
* [11] P. Boggiatto, E. Cordero, K. Gröchenig _Generalized anti-Wick operators with symbols in distributional Sobolev spaces_ , Integr. Equ. Oper. Theory 48 (2004), 427–442.
* [12] A. Bultheel, H.M. Sulbaran _An introduction to the fractional Fourier transform and friends_ , Cubo Mat. Educ. 7 (2005), 201–221.
* [13] W. Chen, Z. Fu, L. Grafakos, Y. Wu _Fractional Fourier transforms on Lp and applications_ , Appl. Comput. Harmon. Anal. 55 (2021), 71–96.
* [14] J. Chung, S.-Y. Chung, D. Kim _Characterizations of the Gelfand-Shilov spaces via Fourier transforms_ , Proc. Amer. Math. Soc. 124 (1996), 2101–2108.
* [15] E. Cordero, K. H. Gröchenig, F. Nicola, L. Rodino _Generalized metaplectic operators and the Schrödinger equation with a potential in the Sjöstrand class_ , J. Math. Phys. 55 (2014), 081506.
* [16] E. Cordero, F. Nicola _Metaplectic representation on Wiener amalgam spaces and applications to the Schrödinger equation_ , J. Func. Anal. 254 (2008), 506–534.
* [17] E. Cordero, S. Pilipović, L. Rodino, N. Teofanov _Quasianalytic Gelfand-Shilov spaces with applications to localization operators_ , Rocky Mt. J. Math. 40 (2010), 1123–1147.
* [18] M. de Gosson _The quantum motion of half-densities and the derivation of Schrödinger’s equation_ , J. Phys. A: Math. Gen. 31 (1998), 4239–4247.
* [19] M. de Gosson _Symplectic methods in harmonic analysis and in mathematical physics_ , Pseudo-Differential Operators. Theory and Applications 7, Birkhäuser/Springer Basel AG, Basel, 2011.
* [20] M. de Gosson, F. Luef _Metaplectic group, symplectic Cayley transform, and fractional Fourier transforms_ , J. Math. Anal. and Appl. 416 (2014), 947–968.
* [21] I. Daubechies _Time-frequency localization operators: a geometric phase space approach_ , IEEE Trans. Inform. Theory 34 (1988), 605–612.
* [22] H. Fan, L. Hu _Optical transformation from chirplet to fractional Fourier transformation kernel_ , J. Mod. Opt. 56 (2009), 1227–1229.
* [23] H. G. Feichtinger _Modulation spaces on locally compact abelian groups. Technical report_ , University of Vienna, Vienna, 1983; also in: M. Krishna, R. Radha, S. Thangavelu (Eds) Wavelets and their applications, Allied Publishers Private Limited, New Delhi Mumbai Kolkata Chennai Nagpur Ahmedabad Bangalore Hyderabad Lucknow, 2003, pp. 99–140.
* [24] H. G. Feichtinger _Modulation spaces: Looking back and ahead_ , Sampl. Theory Signal Image Process. 5 (2006), 109–140.
* [25] H. G. Feichtinger, K. Gröchenig _Banach spaces related to integrable group representations and their atomic decompositions, I_ , J. Funct. Anal., 86 (1989), 307–340.
* [26] H. G. Feichtinger, K. Gröchenig _Banach spaces related to integrable group representations and their atomic decompositions, II_ , Monatsh. Math., 108 (1989), 129–148.
* [27] C. Fernandez, A. Galbis-Verdu, J. Toft _The Bargmann transform and powers of harmonic oscillator on Gelfand-Shilov subspaces_ , RACSAM 111 (2017), 1–13.
* [28] Y. V. Galperin, S. Samarah _Time-frequency analysis on modulation spaces $M^{p,q}_{m}$, $0<p,q\leq\infty$_, Appl. Comput. Harmon. Anal. 16 (2004), 1–18.
* [29] I. M. Gelfand, G. E. Shilov _Generalized functions, II-III_ , Academic Press, New York London, 1968.
* [30] J. Ginibre, G. Velo _Smoothing properties and retarded estimates for some dispersive evolution equations_
* [31] T. Gramchev, S. Pilipović, L. Rodino _Classes of degenerate elliptic operators in Gelfand-Shilov spaces in: L. Rodino, M. W. Wong (eds) New developments in pseudo-differential operators_, Operator Theory: Advances and Applications 189, Birkhäuser, Basel, 2009, pp. 15–31.
* [32] K. Gröchenig, _Foundations of Time-Frequency Analysis_ , Birkhäuser, Boston, 2001.
* [33] K. Gröchenig _Weight functions in time-frequency analysis in: L. Rodino, M. W. Wong (Eds) Pseudodifferential Operators: Partial Differential Equations and Time-Frequency Analysis_, Fields Institute Comm., 52 2007, pp. 343–366.
* [34] K. Gröchenig, G. Zimmermann _Spaces of test functions via the STFT_ , J. Funct. Spaces Appl. 2 (2004), 25–53.
* [35] L. Hörmander _The Analysis of Linear Partial Differential Operators_ , vol I–III, Springer-Verlag, Berlin Heidelberg New York Tokyo, 1983, 1985.
A Clique-Based Method for Improving Motif Scanning Accuracy, v3.01 Braslav Rabar, Keti Nižetić and Pavle Goldstein University of Zagreb, Faculty of Science, Mathematics Department ## Abstract ### Background Motif scanning is a very common method in bioinformatics. Its objective is to detect motifs of sufficient similarity to the query, which are then used to determine family membership, or structural or functional features or assignments. Given this variety of uses, the accuracy of motif scanning procedures is of great importance. ### Results We present a new approach for improving motif scanning accuracy, based on analysis of pairwise similarity within the response. Given a set of motifs obtained from a scanning process, we construct an associated weighted graph. We also compute the expected weight of an edge in such a graph. It turns out that restricting results to the maximal clique in the graph, computed with respect to the expected weight, greatly increases precision, hence improves accuracy of the scan. We tested the method on an ungapped motif-characterized protein family from five plant proteomes. The method was applied to three iterative motif scanners - PSI-BLAST, JackHMMer and IGLOSS - with very good results. ### Conclusions We presented a method for improving protein motif scanning accuracy, and have successfully applied it in several situations. The method has wider implications, for general pattern recognition and feature extraction strategies, as long as one can determine the expected similarity between objects under consideration. ## 1 Background Motif scanning - or local similarity search - is a very important part of sequence analysis. It can be used for various purposes - protein family assignment ([2]), secondary structure prediction ([8]) and similar tasks. Motif scanning methods are typically based on a local alignment algorithm - the Smith-Waterman or Viterbi algorithm - with various modifications added, such as approximations and variations in the scoring function or model building. Motif scanning procedures normally take an instance of a motif - or even a profile - as an input - that is, a query - and search for similar patterns in a set of sequences. The output consists of a set of sufficiently similar matches, and these form a set of positives, or - as we call it here - the response. In this paper, we are concerned with the accuracy of motif scanning procedures. Namely, the aim of motif scanning applications is to detect as many significant motifs as possible - these are the true positives - while keeping the number of wrong assignments - or false positives - to a minimum. In other words, accuracy is measured by how closely the response matches the set of biologically relevant sequences in the sample. Typically, motif scanning methods are based on a scoring function - usually a log-likelihood ratio - and the response is generated in two steps: first, by ranking all elements of the sample by their similarity to the query, and, second, by considering all candidates with a similarity score above a certain threshold. Both steps can be a source of errors - an inaccurate ranking scheme, with a low threshold, will generate a huge response, causing a large type I-like error, whereas a high threshold, while testing positive on the strongest examples, might miss the candidates with a slightly weaker signal.
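To fix ideas, the simplest (non-iterative) scanner of this kind slides the query along each sequence and reports every window scoring above the threshold; a minimal sketch (the function and variable names are ours, and we assume Biopython for the BLOSUM50 matrix):

```python
from Bio.Align import substitution_matrices

BLOSUM50 = substitution_matrices.load("BLOSUM50")

def scan(sequence, query, threshold):
    # Return all k-mers of `sequence` whose ungapped BLOSUM50 score
    # against `query` exceeds `threshold`; this set of hits is the response.
    k = len(query)
    hits = []
    for i in range(len(sequence) - k + 1):
        window = sequence[i:i + k]
        score = sum(BLOSUM50[a, b] for a, b in zip(window, query))
        if score > threshold:
            hits.append((i, window, score))
    return sorted(hits, key=lambda h: -h[2])  # rank by similarity to the query
```

Iterative scanners refine this picture by re-estimating the scoring function from the current response, which is exactly where the threshold issues above interact with the ranking.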
Various applications deal with these problems in various ways - for example, JackHMMer ([6]) uses - effectively - several scoring functions, each with its own ranking and threshold, while IGLOSS ([9]) uses detailed parameter estimation to minimize both issues. In this paper, we explore an alternative approach for improving accuracy, based on pairwise similarity. Namely, given a large response, containing - presumably - a large percentage of false positives, we search for subsets in which each pair of elements is sufficiently similar. We then consider the largest of these subsets as the new, modified response. Since two true positives are more likely to be similar than a true and a false positive or two false positives, this is a sensible strategy, and it also turns out to be very robust. Furthermore, we determine what “sufficiently similar” means - that is, we compute the appropriate similarity threshold, in terms of the expected conservation and length of the motif. The algorithm is presented in the framework of graph theory, where the new response is obtained as the maximal clique in a certain graph that is, in turn, derived from the response graph by applying the similarity threshold. As already mentioned, using the maximal clique as the new response greatly increases the accuracy of the search. ## 2 Methods ### 2.1 Response Graph, Derived Graph and Maximal Clique Algorithm The starting point for our analysis is a set of positives from a motif-scanning process. We assume that our motifs are ungapped, so this set, called the response, is just a collection of k-mers. We form an undirected, weighted graph $\Gamma=(V,E)$, with vertices given by the elements of the response, and weights $w(e,f)$, $e,f\in V$, given by $w(e,f)=\sum_{i=1}^{k}B(e_{i},f_{i}),$ where $B(\cdot,\cdot)$ is the BLOSUM50-score for corresponding amino acids. We now form an undirected $\\{0,1\\}$-graph $\bar{\Gamma}=(V,\bar{E})$ using the similarity threshold $st$ from Section 2.3: for $e,f\in V$ and the unordered pair $\\{e,f\\}$ we have $\\{e,f\\}\in\bar{E}\Leftrightarrow w(e,f)-st>0.$ Put simply, two vertices in $\bar{\Gamma}$ are connected if and only if their similarity score is above the threshold. It is now straightforward to apply a maximal-clique algorithm to $\bar{\Gamma}$. We used the standard Bron-Kerbosch algorithm ([3]), implemented in Python. ### 2.2 Accuracy Measures In this subsection, we establish notation and define relevant accuracy measures, and we closely follow ([9, Supplementary material, Section 5]) in the presentation. Sequences in the sample that have been annotated as belonging to test families are marked as condition positive and their number is denoted as $|CP|$, while the rest are marked as condition negative (CN). Now, each application or combination of applications under consideration produces - for a specified similarity level - a list of hits, and their respective sequences are denoted as positive (P) - with the rest of the sample being negative (N) - while $|P|$ and $|N|$ denote the corresponding set sizes.
We then have true positives (TP) and false positives (FP) as $TP=P\cap CP,\ FP=P\cap CN,$ (1) and likewise for true negatives (TN) and false negatives (FN) $TN=N\cap CN,\ FN=N\cap CP.$ (2) The usual way of assessing the diagnostic ability of an application would be to compare sensitivity or true positive rate $TPR=|TP|/|CP|$ (3) and false positive rate $FPR=|FP|/|CN|.$ (4) However, in the present context, there is a serious imbalance between the sizes of the condition positive and condition negative sets: CN is several orders of magnitude larger than CP, so, for any reasonable test outcome, $FPR$ will be close to $0$. Consequently, we consider precision or positive predictive value $PPV=|TP|/|P|,$ (5) and use $PPV$ and $TPR$ as accuracy measures. Finally, we combine these two by considering their harmonic mean, called the F1-score, hence $F1=2\cdot\frac{PPV\cdot TPR}{PPV+TPR},$ (6) and plot threshold-F1 diagrams to assess accuracy. ### 2.3 Similarity Threshold We compute the similarity threshold as the expected BLOSUM ([7]) score of two $k$-mers, sampled from a certain distribution. Namely, given $x=(x_{1},\ldots,x_{k})$ and $y=(y_{1},\ldots,y_{k})$, let $s(x,y)=\sum_{i}B(x_{i},y_{i})$ be their BLOSUM score and $\mathbb{E}[s(x,y)]$ the expected value. Then, somewhat informally, $\mathbb{E}[s(x,y)]=\sum\mathbb{E}[B(x_{i},y_{i})],$ and, averaging over the whole length of the motif, $\mathbb{E}[s(x,y)]=k\mathbb{E}[B(x_{0},y_{0})],$ for some “average” amino acids $x_{0}$ and $y_{0}$. Now, let $e_{i}=(0,\ldots,1,0,\ldots,0),$ with $1$ in the $i$-th position, let $\alpha\in(0,1)$, and let $bg=bg(i)$ be the average distribution of amino acids. Let $f_{i}=\alpha\cdot e_{i}+(1-\alpha)bg$ (7) be an $e_{i}-bg$-mixture of distributions. Here, $\alpha$ is a “conservation parameter”, representing the percentage of the dominant amino acid $i$ in an alignment column sampled from $f_{i}$. Then $\sum_{j,k=1}^{20}B(j,k)f_{i}(j)f_{i}(k)$ is the expected BLOSUM score for two amino acids sampled from $f_{i}$. Setting $\alpha=0.68$ and averaging over the distribution $bg$, we get $\sum_{i=1}^{20}bg(i)\sum_{j,k=1}^{20}B(j,k)f_{i}(j)f_{i}(k)=2.522,$ (8) and, for a motif length $k$, we take the similarity threshold $st$ to be $st=k\cdot 2.5$ (9). We further comment on this in Section 3.2.3. ## 3 Results and Discussion ### 3.1 Tests and Results In order to test the method, we applied it to responses from three iterative motif scanners - PSI-BLAST (PB) ([1]), JackHMMER (JH) ([6]) and IGLOSS (IG) ([9]) - and compared the maximal clique with the original response. As in ([9]), scanners were applied to five plant proteomes - Arabidopsis thaliana (AT, v. TAIR9), Oryza sativa (OS, v. MSU v7), Solanum tuberosum (ST, v. ITAG1), Solanum lycopersicum (SL, v. ITAG2.3) and Beta vulgaris (BV, v. KWS2320) - where we searched for members of an extensively studied, motif-characterized protein family - GDSL lipases. GDSL lipases belong to the lipid-hydrolyzing enzymes that exhibit a GDSL motif. Proteins in this family display fairly low overall sequence similarity, but are reasonably well described by the presence of conserved residues in four conserved blocks (I, II, III, and V) ([11]). Block I contains the main characteristic motif (PROSITE:PS01098) ([10]) from which the main search query of 10 amino acids was constructed. As in [9], the condition positive set was determined by processing the information from the GoMapMan resource [12]. Altogether, we performed approximately $900$ tests.
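Before turning to the results, the clique-filtering step of Sections 2.1 and 2.3 can be summarized in a short sketch (the names are ours; we assume Biopython for the BLOSUM50 matrix and NetworkX for the Bron-Kerbosch enumeration):

```python
import itertools
import networkx as nx
from Bio.Align import substitution_matrices

BLOSUM50 = substitution_matrices.load("BLOSUM50")

def pair_score(a, b):
    # Ungapped BLOSUM50 score of two k-mers of equal length.
    return sum(BLOSUM50[x, y] for x, y in zip(a, b))

def expected_pair_score(alpha=0.68, bg=None):
    # Expected per-residue BLOSUM50 score of two residues drawn from the
    # mixture (7), averaged over the dominant residue i ~ bg; cf. Eq. (8).
    aas = "ARNDCQEGHILKMFPSTWYV"
    if bg is None:
        bg = {a: 1.0 / len(aas) for a in aas}  # uniform background gives ~2.4 (Sec. 3.2.3)
    total = 0.0
    for i in aas:
        f = {a: (alpha if a == i else 0.0) + (1.0 - alpha) * bg[a] for a in aas}
        total += bg[i] * sum(BLOSUM50[j, k] * f[j] * f[k] for j in aas for k in aas)
    return total

def clique_filter(response, per_residue=2.5):
    # response: list of k-mers (hits) returned by a motif scanner.
    k = len(response[0])
    st = per_residue * k  # similarity threshold, Eq. (9)
    g = nx.Graph()
    g.add_nodes_from(range(len(response)))
    for i, j in itertools.combinations(range(len(response)), 2):
        if pair_score(response[i], response[j]) > st:
            g.add_edge(i, j)  # edge of the derived graph
    largest = max(nx.find_cliques(g), key=len)  # Bron-Kerbosch enumeration
    return [response[i] for i in largest]
```

Ties between equally large cliques are broken arbitrarily here; for the short, ungapped motifs considered below, the enumeration is fast in practice.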
We used three search queries:

* FVFGDSLSDA - consensus query, defined above
* FVFnDSLSDA - a single mutation, at a highly conserved site
* vfFGDSLSDn - three substitutions

This was done with all three scanners, for approximately $20$ threshold levels each, and all five proteomes. The average gain in the F1-score was around $0.20$ in the most interesting threshold region (around 1/3 of the x-axis). However, gains can sometimes be spectacular - Table 1 shows results of scans where we more than doubled the F1-score, albeit at a fairly low threshold level. Another feature of our tests that we should comment on is the peaks-and-troughs pattern present in Figure 3. Namely, the corresponding responses are diffuse and inhomogeneous, as a result of a “wrong” query, so the clique algorithm hardly improves PPV at certain threshold levels - hence a “jagged” PPV-curve. We further analyzed this situation, and detected features that indicate that the clique recognizes this “wrongness”. We discuss this in Section 3.2.4. Figures in the last section were obtained by merging the results from all five proteomes. Here, responses are matched by their size (i.e. the number of positives), and the average threshold - scale (for IG) or negative logarithm of the e-value (for PB and JH) - was assigned to the x-axis, with averaged PPV, TPR or F1-score on the y-axis. For the tables, responses were again matched by their sizes, with the last column reporting cumulative results for positives and true positives, and the average threshold. ### 3.2 Discussion #### 3.2.1 Overview Put somewhat abstractly, the aim of our method is to detect - using pairwise comparison - an optimal subset of true positives in a (fairly) large set of hits (positives). This is achieved with the help of a pairwise similarity threshold and a maximal clique algorithm. We have developed this concept in the context of ungapped motif scanning - hence the present method - and tested it in conjunction with three different iterative motif scanners. As stated before, these principles can be applied more generally, provided suitable assumptions are fulfilled. #### 3.2.2 Robustness Before we discuss the way we computed the similarity threshold - in the next section - let us first note that the threshold is rather robust. Namely, as seen from Figure 10(a), a $10\%$ change in the level of the threshold produces only minor changes in terms of accuracy (measured by the F1-score). Hence, gradual changes in the threshold level will affect accuracy gradually. On the other hand, this is in contrast to the iterative approach - as mentioned in the Introduction - where small changes of the threshold (e-value or scale) might produce very different results. This is due to the different nature of the two approaches - there are many more comparisons carried out in the pairwise case, so the effect of the change is smoothed out. Furthermore, to phrase this in the graph theory framework: the pairwise threshold acts on the edges of the response graph, while we are looking for a suitable subset of the vertices; so, while a small change in the threshold level might affect many edges, that will - eventually - add or remove only a couple of vertices from the maximal clique. #### 3.2.3 Similarity Threshold Comments The aim of the similarity threshold is to distinguish between true and false positives by means of the pairwise score.
Namely, protein motifs are characterized by a specific substitution pattern, so it is to be expected that the pairwise score between two true positives will be higher than the one between a true and a false positive, let alone two false positives (note that this naturally leads to the maximal clique approach). However, knowledge of the substitution pattern amounts to a detailed description of the motif under consideration, in which case the type of analysis that we study here - single-query, iterative approximation - becomes almost redundant. Hence, we derived the threshold a priori, as an abstract, average score, depending on a single parameter $\alpha$, the so-called conservation coefficient. More precisely, the threshold was obtained as the expected BLOSUM score of two $k$-mers, sampled from an $\alpha$-convex combination of distributions, assuming the average (i.e. background) distribution of amino acids across the length $k$. The parameter $\alpha$ should be thought of as the relative frequency of the dominant amino acid in the hypothetical motif profile, averaged across the length $k$. Incidentally, in the five proteomes that we considered, profile conservation varied from $68\%$ to $72\%$. There are other strategies to obtain the threshold, as long as appropriate conservation is maintained. For example, one could use a suitable power of the PAM matrix ([4]) instead of the distributions $f_{i}$ in Equation 7. So, taking PAM120 - which yields an average diagonal value of around $2/3$ - and repeating the procedure from Section 2.3 gives the value $st=2.58$ - approximately the same. Likewise, using the uniform distribution - instead of the background - in Equation 8, one obtains $st=2.4$. Furthermore, other similarity measures, rather than the BLOSUM50 matrix, could be used - other BLOSUM matrices, or even other, non-BLOSUM substitution matrices. However, that would involve setting up a new response graph and a new threshold, in parallel. Considering the comments above, one should expect marginal changes, or no changes at all. Finally, we should mention that, although the threshold was obtained as an expected score, it is used as the “minimal allowed” score. This is because the assumptions on the conservation and, especially, the amino acid composition are rather weak. It is possible to tighten these assumptions, and then set the “minimal score” to, say, $-2\sigma$ (i.e. two standard deviations) from the mean. However, tightening the assumptions would again amount to a description of the motif under consideration - hence, not an a priori approach - so we decided not to explore this further. #### 3.2.4 Peaks-and-Troughs As mentioned in Section 3.1, the peaks-and-troughs in Figure 3 were caused by the incorrect query, resulting in an inhomogeneous, diffuse response. This, in turn, produced a maximal clique containing a large number of false positives, yielding a fairly low PPV for some threshold levels. Obviously, this is in contrast with Figure 1, where the “right” - i.e. consensus - query produces a more homogeneous response, and the clique yields consistent improvement. We analyzed this a bit further, in order to detect features that might differentiate between these two situations. First of all, note that most of the scan descriptions used above - “low PPV, varying F1-score”, and figures such as Figure 3 and Figure 1 - are available only a posteriori.
In other words, this information is not available in an exploratory setting, where one wants to assess the validity of the response without knowing the desired outcome. So, we should be looking for some other - “a priori” - features. Homogeneity of the response is a possible candidate, and it will distinguish between these two sets of scans, but, again, homogeneity is a feature that is best measured a posteriori, when some information regarding the variability of the condition positive set is available. So, we looked at stability, that is, the relationship between responses and their respective cliques, for neighbouring threshold levels. A sequence of threshold levels, from low to high, should - in principle - produce a descending family of responses. This is, clearly, the case in a non-iterative setting, where a simple scan results in a fixed ranking scheme - an ordering of the sample with respect to the similarity to the query. In an iterative situation, the similarity function is being optimized, which might produce a different ranking scheme from one iteration to another, and hence a different ranking scheme from one threshold to another. Consequently, a smaller response - produced with a higher threshold - might not be a subset of a larger one, obtained with a lower threshold. Furthermore, a significant deviation from this stability principle indicates, in general, problems with either the sample or the query. Let us analyze from this point of view the series of scans from Figure 3 (we will focus on Arabidopsis thaliana here; some of the scan results - for neighbouring e-values - are presented in Tables 3 and 4; for a complete set of tables, consult the server web-page): as already mentioned, we have scanned with a non-consensus query; quite surprisingly, responses turned out to be rather stable, with sizes $98$ and $81$, and the size of the intersection $80$; however, the corresponding cliques - with sizes $26$ and $24$ - have a single element in the intersection, which should be considered as a significant deviation from the stability principle. Hence, our approach provides another method to assess the validity of a scan, with the clique as a new stability criterion. #### 3.2.5 Pairwise Similarity vs EM-algorithm All the iterative motif scanners that we combined our clique-method with use some form of the expectation maximization algorithm (EM-algorithm) to find the “optimum” - the optimal set of positives, for a given significance level. On the other hand, the maximal clique algorithm also provides the optimal solution for the set of positives - the maximal clique itself. A natural question arises: are these two optima the same? More precisely, given the right parameters - similarity and significance threshold - will these two approaches return the same, or very similar, response? The answer appears to be yes - optimal solutions will be more-or-less the same for the right choice of parameters, with a couple of outliers added or subtracted. This can already be inferred from figures in the next section, where we see stability in TPR across all threshold ranges, and F1-convergence, as the threshold becomes higher. In the opposite direction, iterative scanners will, invariably, accept the maximal clique as the optimal solution, again with minor changes. All this means that these two approaches are complementary and interchangeable, at least in the present context. How general such a framework can be is, at the moment, unclear.
The underlying algorithms - the EM-algorithm and the maximal-clique approach - are very different, so this agreement should not be considered a rule. However, in a very tractable situation - an $n$-dimensional Euclidean space $\mathbb{E}^{n}$ - this can be made more precise, as follows: we have to show that the sets of positives from the two approaches - the EM-algorithm and the maximal clique - are identical; it is well known that in $\mathbb{E}^{n}$ the k-means clustering algorithm is a form of EM-algorithm (see [5]), with clusters (i.e. sets of positives) given by $n$-balls, where $B(x_{0};r)=\\{x\in\mathbb{E}^{n};|x-x_{0}|\leq r\\}$ (10) is an $n$-ball around the point $x_{0}$, with the radius $r$; on the other hand, set $d=2r$, and note that $B=B(x_{0};r)$ can be realized as the largest subset of $\mathbb{E}^{n}$ such that $B=\\{x\in\mathbb{E}^{n};|x-y|\leq d,\forall y\in B\\},$ (11) which yields a clique-like object. Consequently, we see that the considerable improvement in accuracy that we have recorded is not a question of a superior approach, but of a complementary one. Namely, our queries - deliberately - consist of a single string, sometimes not even a consensus query, and the iterative process reaches a local optimum - a set of positives with a fairly low F1-score and a blurred signal. And this is a situation where the maximal clique approach offers the greatest gains, by filtering the response and providing a better foundation for the next step in the analysis.

#### 3.2.6 Availability

http://compbioserv.math.hr/igloss/index.html?clique

## 4 Tables and Figures

### 4.1 BLAST

Figure 1: BLAST FVFGDSLSDA
Figure 2: BLAST FVFNDSLSDA
Figure 3: BLAST VFFGDSLSDN

### 4.2 JackHMMER

Figure 4: HMMER FVFGDSLSDA
Figure 5: HMMER FVFNDSLSDA
Figure 6: HMMER VFFGDSLSDN

### 4.3 IGLOSS

Figure 7: IGLOSS FVFGDSLSDA
Figure 8: IGLOSS FVFNDSLSDA
Figure 9: IGLOSS VFFGDSLSDN

### 4.4 Examples

Figure 10: F1-threshold level. (a) robustness BLAST; (b) robustness IGLOSS

### 4.5 Tables

| | AT | OS | ST | SL | BV | ALL |
|---|---|---|---|---|---|---|
| CP | 118 | 155 | 123 | 108 | 86 | 590 |
| IGLOSS | | | | | | |
| scale | 5.0 | 6.0 | 5.0 | 4.9 | 4.6 | 5.1 |
| TP/P | 105/421 | 111/321 | 95/389 | 94/408 | 64/413 | 469/1952 |
| PPV | 0.2494 | 0.3458 | 0.2442 | 0.2304 | 0.1550 | 0.2403 |
| TPR | 0.8898 | 0.7161 | 0.7724 | 0.8704 | 0.7442 | 0.7949 |
| F1 | 0.3896 | 0.4664 | 0.3711 | 0.3643 | 0.2565 | 0.3690 |
| IGLOSS + CLIQUE | | | | | | |
| TP/P | 101/115 | 106/120 | 87/97 | 87/92 | 60/67 | 441/491 |
| PPV | 0.8783 | 0.8833 | 0.8969 | 0.9457 | 0.8955 | 0.8982 |
| TPR | 0.8559 | 0.6839 | 0.7073 | 0.8056 | 0.6977 | 0.7475 |
| F1 | 0.8670 | 0.7709 | 0.7909 | 0.8700 | 0.7843 | 0.8159 |

Table 1: GDSL lipases IGLOSS vs IGLOSS+CLIQUE (FVFGDSLSDA)

| | AT | OS | ST | SL | BV | ALL |
|---|---|---|---|---|---|---|
| CP | 118 | 155 | 123 | 108 | 86 | 590 |
| IGLOSS | | | | | | |
| scale | 6.6 | 7.6 | 6.6 | 6.5 | 6.2 | 6.7 |
| TP/P | 80/180 | 118/176 | 92/174 | 93/174 | 63/156 | 446/860 |
| PPV | 0.4444 | 0.6705 | 0.5287 | 0.5345 | 0.4038 | 0.5186 |
| TPR | 0.6780 | 0.7613 | 0.7480 | 0.8611 | 0.7326 | 0.7559 |
| F1 | 0.5369 | 0.7130 | 0.6195 | 0.6596 | 0.5207 | 0.6152 |
| IGLOSS + CLIQUE | | | | | | |
| TP/P | 75/92 | 116/124 | 87/95 | 86/90 | 59/66 | 423/467 |
| PPV | 0.8152 | 0.9355 | 0.9158 | 0.9556 | 0.8939 | 0.9058 |
| TPR | 0.6356 | 0.7484 | 0.7073 | 0.7963 | 0.6860 | 0.7169 |
| F1 | 0.7143 | 0.8315 | 0.7982 | 0.8687 | 0.7763 | 0.8004 |

Table 2: GDSL lipases IGLOSS vs IGLOSS+CLIQUE (FVFGDSLSDA)

| | AT | OS | ST | SL | BV | ALL |
|---|---|---|---|---|---|---|
| CP | 118 | 155 | 123 | 108 | 86 | 590 |
| BLAST | | | | | | |
| ev | 226 | 226 | 226 | 226 | 226 | 226 |
| TP/P | 26/98 | 32/51 | 26/109 | 32/108 | 35/87 | 151/453 |
| PPV | 0.2653 | 0.6275 | 0.2385 | 0.2963 | 0.4023 | 0.3333 |
| TPR | 0.2203 | 0.2065 | 0.2114 | 0.2963 | 0.4070 | 0.2559 |
| F1 | 0.2407 | 0.3107 | 0.2241 | 0.2963 | 0.4046 | 0.2895 |
| BLAST + CLIQUE | | | | | | |
| TP/P | 24/35 | 31/38 | 10/40 | 12/41 | 34/37 | 111/191 |
| PPV | 0.6857 | 0.8158 | 0.2500 | 0.2927 | 0.9189 | 0.5812 |
| TPR | 0.2034 | 0.2000 | 0.0813 | 0.1111 | 0.3953 | 0.1881 |
| F1 | 0.3137 | 0.3212 | 0.1227 | 0.1611 | 0.5528 | 0.2843 |

Table 3: BLAST vs BLAST+CLIQUE (VFFGDSLSDN)

| | AT | OS | ST | SL | BV | ALL |
|---|---|---|---|---|---|---|
| CP | 118 | 155 | 123 | 108 | 86 | 590 |
| BLAST | | | | | | |
| ev | 203 | 203 | 203 | 203 | 203 | 203 |
| TP/P | 24/81 | 29/44 | 24/98 | 29/93 | 32/80 | 138/396 |
| PPV | 0.2963 | 0.6591 | 0.2449 | 0.3118 | 0.4000 | 0.3485 |
| TPR | 0.2034 | 0.1871 | 0.1951 | 0.2685 | 0.3721 | 0.2339 |
| F1 | 0.2412 | 0.2915 | 0.2172 | 0.2886 | 0.3855 | 0.2799 |
| BLAST + CLIQUE | | | | | | |
| TP/P | 1/30 | 29/35 | 10/40 | 12/39 | 30/33 | 82/177 |
| PPV | 0.03333 | 0.8286 | 0.2500 | 0.3077 | 0.9091 | 0.4633 |
| TPR | 0.00847 | 0.1871 | 0.0813 | 0.1111 | 0.3488 | 0.1390 |
| F1 | 0.01351 | 0.3053 | 0.1227 | 0.1633 | 0.5042 | 0.2138 |

Table 4: BLAST vs BLAST+CLIQUE (VFFGDSLSDN)

## References

* [1] Altschul SF, Madden TL, Schäffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Research. 1997;25(17):3389-402.
* [2] Bateman A, Coin L, Durbin R, Finn RD, Hollich V, Griffiths-Jones S, et al. The Pfam protein families database. Nucleic Acids Research. 2004;32(D1):D138-41.
* [3] Bron C, Kerbosch J. Algorithm 457: finding all cliques of an undirected graph. Communications of the ACM. 1973;16(9):575-77.
* [4] Dayhoff M, Schwartz R. A Model of Evolutionary Change in Proteins. In: Atlas of Protein Sequence and Structure. 1978:345-52.
* [5] Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. Springer; 2009.
* [6] Finn RD, Clements J, Arndt W, Miller BL, Wheeler TJ, Schreiber F, et al. HMMER web server: 2015 update. Nucleic Acids Research. 2015;43(W1):W30-8.
* [7] Henikoff S, Henikoff JG. Amino acid substitution matrices from protein blocks. Proc Natl Acad Sci U S A. 1992;89(22):10915-9.
* [8] Pirovano W, Heringa J. Protein Secondary Structure Prediction. Methods in Molecular Biology. 2010;609.
* [9] Rabar B, Zagorščak M, Ristov S, Rosenzweig M, Goldstein P. IGLOSS: iterative gapless local similarity search. Bioinformatics. 2019;35(18):3491-2.
* [10] Sigrist CJA, de Castro E, Cerutti L, Cuche BA, Hulo N, Bridge A, et al. New and continuing developments at PROSITE. Nucleic Acids Research. 2013;41(D1):D344-7.
* [11] Vujaklija I, Bielen A, Paradžik T, Bidin S, Goldstein P, Vujaklija D. An effective approach for annotation of protein families with low sequence similarity and conserved motifs: Identifying GDSL hydrolases across the plant kingdom. BMC Bioinformatics. 2016;17(1):1-17.
* [12] Ramšak Ž, Baebler Š, Rotter A, Korbar M, Mozetič I, Usadel B, et al. GoMapMan: integration, consolidation and visualization of plant gene annotations within the MapMan ontology. Nucleic Acids Research. 2014;42(D1):D1167-75.
Notes on a non-thermal fluctuation-dissipation relation in quantum Brownian motion

Xinyi <EMAIL_ADDRESS> NORDITA, Hannes Alfvéns väg 12, SE-106 91 Stockholm, Sweden

Abstract

We review how unitarity and stationarity in the Schwinger-Keldysh formalism naturally lead to a (quantum) generalized fluctuation-dissipation relation (gFDR) that works beyond thermal equilibrium. Non-Gaussian loop corrections are also presented. Additionally, we illustrate the application of this gFDR in various scenarios related to quantum Brownian motion and the generalized Langevin equation.

###### Contents

1. Introduction
2. Effective SK theories
    * 2.1 Open quantum systems
    * 2.2 Stationary regime
    * 2.3 Generating functional for stationary states
    * 2.4 Quadratic effective action
    * 2.5 Green’s function
    * 2.6 Loop corrections
    * 2.7 Thermalization condition
3. Quantum Brownian motion
    * 3.1 Bath in thermal equilibrium
    * 3.2 Multiple baths
    * 3.3 Out-of-equilibrium bath
    * 3.4 Anomalous diffusion
    * 3.5 Anharmonic oscillator
4. Conclusion
5. A Schwinger-Keldysh
    * A.1 Density operator
    * A.2 Time-evolution kernel
    * A.3 Generating functional (A.3.1 Properties)
    * A.4 Perturbation theory
    * A.5 Keldysh basis
    * A.6 Green’s functions
6. B Fluctuation-Dissipation Theorem
7. C Generalized Langevin equation
    * C.1 Solution
    * C.2 Stationary limit
8. D Fourier transform

## 1 Introduction

The Schwinger-Keldysh (SK) formalism [1, 2] is a well-known path integral framework to study quantum non-equilibrium many-body systems [3]. In the classical limit, it reduces to the Martin-Siggia-Rose (MSR) path integral, which is also equivalent to the stochastic Langevin description. One important aspect of the SK approach is that symmetries become a powerful tool to derive important relations in both quantum and classical regimes. For example, thermodynamic equilibrium can be formulated as a symmetry of the SK action [4], which leads to the celebrated fluctuation-dissipation theorem (FDT). Another example is the breaking of time-reversal symmetry, giving rise to fluctuation theorems in non-equilibrium physics [5, 6]. Moreover, the path integral approach provides a systematic, perturbative way to deal with non-linear theories beyond the scope of the standard Langevin approach. These notes focus on the unitarity constraint on the SK two-point correlation functions in the stationary limit, which can be viewed as a generalized fluctuation-dissipation relation (gFDR) that works beyond thermal stationary states and the Markov approximation. While the classical limit of this gFDR is certainly known in the study of aging glassy systems, where it is used to define an effective temperature [7], its use in the broader classical stochastic community does not seem widespread, in contrast to the use of the non-Markovian generalization of the Langevin equation (GLE). The objective of these notes is not to provide an extensive review of existing literature (the present work was conducted mostly independently and with limited awareness of the literature on glassy systems; it is part of my journey to learn about classical and quantum non-equilibrium frameworks). Instead, the aim is to present a self-contained derivation of the gFDR for non-driven non-equilibrium systems and illustrate its application through various examples. The paper is structured as follows. In section 2, we review the SK framework for open quantum systems with one effective degree of freedom. Then, we impose the stationary condition at the generating functional level.
Next we perturbatively derive the gFDR in Fourier space. In section 3, the gFDR is applied to the GLE and quantum Brownian motion (QBM). Finally, we conclude in section 4. In the appendix, we provide additional review material on the SK formalism in section A, FDT in section B, and GLE in section C. ## 2 Effective SK theories ### 2.1 Open quantum systems Consider a quantum system whose action is separable: $S=S_{\text{subsystem}}+S_{\text{environment}}+S_{\text{interaction}}.$ (1) By integrating out the environment degrees of freedom, we obtain an SK effective theory for the subsystem, which is equivalent to the Feynman-Vernon theory [8]. The SK generating functional, see A.3, can be rewritten in terms of the reduced density matrix $\rho_{r}$ and the reduced time evolution kernel $J_{r}$: $\displaystyle Z(t,t_{0};\phi,\phi^{\prime}]$ $\displaystyle=\int_{-\infty}^{\infty}dx_{0}\int_{-\infty}^{\infty}dx_{0}^{\prime}\rho_{r}(x_{0},x_{0}^{\prime},t_{0})\int_{-\infty}^{\infty}dx\,J_{r}(x,x,t,x_{0},x_{0}^{\prime},t_{0};\phi,\phi^{\prime}]$ (2) with $\displaystyle J_{r}(x,x^{\prime},t,x_{0},x_{0}^{\prime},t_{0};\phi,\phi^{\prime}]$ $\displaystyle=\int_{q(t_{0})=x_{0}}^{q(t)=x}\mathcal{D}q\int_{q^{\prime}(t_{0})=x_{0}^{\prime}}^{q^{\prime}(t)=x^{\prime}}\mathcal{D}q^{\prime}\,\exp{\left(\frac{i}{\hbar}A_{\text{CG}}[q,q^{\prime};\phi,\phi^{\prime}]\right)},$ (3) where the subindex CG emphasizes a coarse-grained effective action, and $\phi,\phi^{\prime}$ are the external sources. The mixed parenthesis/bracket notation, as in $Z(t,t_{0};\phi,\phi^{\prime}]$, indicates a function of the arguments before the semicolon and a functional of those after it. This action includes the action that affects only the subsystem, and the influence phase $\Phi[q,q^{\prime}]$, which summarizes all the contributions from the environment, its initial state and its interaction with the subsystem: $\displaystyle A_{\text{CG}}[q,q^{\prime};\phi,\phi^{\prime}]$ $\displaystyle=S_{\text{subsystem}}[q]-S_{\text{subsystem}}[q^{\prime}]+\Phi[q,q^{\prime}]$ $\displaystyle+\int_{t_{0}}^{t}ds\,(q(s)\phi(s)-q^{\prime}(s)\phi^{\prime}(s)).$ (4) Here, it is also assumed that the initial density matrix factorizes as a product of the reduced density matrix and that of the environment. This assumption does not matter here, though, as we are interested only in the stationary regime. ### 2.2 Stationary regime Let us define the _stationary regime_ as the one with the correlators being 1. 1. independent of the initial conditions and distribution $\rho(x_{0},x_{0}^{\prime},t_{0})$, 2. 2. time-translation invariant. ### 2.3 Generating functional for stationary states Consider a quadratic effective action $A_{0}[q,q^{\prime};\phi,\phi^{\prime}]$. Its corresponding time evolution kernel is a double Gaussian path integral. Therefore, it is solvable and splits into the classical path contribution and fluctuations around the classical path: $\displaystyle J_{r}(x,x,t,x_{0},x_{0}^{\prime},t_{0};\phi,\phi^{\prime}]$ $\displaystyle=\int_{q(t_{0})=x_{0}}^{q(t)=x}\mathcal{D}q\int_{q^{\prime}(t_{0})=x_{0}^{\prime}}^{q^{\prime}(t)=x}\mathcal{D}q^{\prime}\,\exp{\left(\frac{i}{\hbar}A_{0}[q,q^{\prime};\phi,\phi^{\prime}]\right)}$ (5) $\displaystyle=C(t,t_{0})\exp{\left(\frac{i}{\hbar}S_{\mathrm{cl}}(x,x_{0},x_{0}^{\prime};\phi,\phi^{\prime}]\right)},$ (6) where the integration over quantum fluctuations, hidden in $C(t,t_{0})$, does not depend on the external sources.
The classical action is separable into a term independent of the initial conditions and a term that is not: $\displaystyle S_{\mathrm{cl}}(x,x_{0},x_{0}^{\prime};\phi,\phi^{\prime}]$ $\displaystyle=S_{\mathrm{cl}}(x,0,0;\phi,\phi^{\prime}]+\delta S(x,x_{0},x_{0}^{\prime};\phi,\phi^{\prime}]$ (7) $\displaystyle\approx S_{\mathrm{cl}}^{\infty}(x,0,0;\phi,\phi^{\prime}],\quad(t\longrightarrow\infty)$ (8) where we imposed the condition of independence of the initial distribution, in the stationary limit. The time-translational invariance condition allows us to take the initial and final times to be minus and plus infinities. Finally, the generating functional for stationary states is simplified to: $\displaystyle Z_{0}^{\infty}[\phi,\phi^{\prime}]$ $\displaystyle\equiv Z_{0}(\infty,-\infty;\phi,\phi^{\prime}]$ (9) $\displaystyle=C(\infty,-\infty)\int_{-\infty}^{\infty}dx\,\exp{\left(\frac{i}{\hbar}S_{\mathrm{cl}}^{\infty}(x,0,0;\phi,\phi^{\prime}]\right)},$ (10) since the integration over the initial distribution simply gives one. For quadratic theories perturbed by an external potential $V(q,q^{\prime})$, see appendix A.4, we can do perturbation theory as usual: $\displaystyle Z_{V}^{\infty}[\phi,\phi^{\prime}]$ $\displaystyle=\exp\left(-\frac{i}{\hbar}\int_{-\infty}^{\infty}dt\,V\left(\frac{\hbar}{i}\frac{\delta}{\delta\phi},\frac{\hbar}{i}\frac{\delta}{\delta\phi^{\prime}}\right)\right)Z_{0}^{\infty}[\phi,\phi^{\prime}].$ (11) ### 2.4 Quadratic effective action In the stationary limit, the most general SK quadratic effective action (the external currents are set to zero here for simplicity) in Keldysh basis is as follows: $\displaystyle A_{0}[q_{r},q_{a}]=$ $\displaystyle\int_{-\infty}^{\infty}dt^{\prime}\,q_{a}(t^{\prime})\left\\{(D_{ar}*q_{r})(t^{\prime})+i(D_{aa}*q_{a})(t^{\prime})\right\\},$ (12) where the star denotes convolution. Note that a $q_{r}-q_{r}$ term is prohibited by unitarity, see appendix A. The kernel $D_{ar}$ contains time derivative operators, and $D_{aa}$ is non-zero and satisfies $D_{aa}(-t)=D_{aa}(t).$ (13) The action above can be identified with the stochastic Martin-Siggia-Rose action, where $q_{a}(t)$ plays the role of the auxiliary field (see also appendix A of [9]). Therefore, it is also equivalent to a generalized stationary Langevin equation with a Gaussian noise $F(t)$: $\displaystyle-(D_{ar}*q_{r})(t)$ $\displaystyle=F(t)$ (14) $\displaystyle\frac{1}{2}\braket{\\{F(t),F(t^{\prime})\\}}$ $\displaystyle=D_{aa}(t-t^{\prime}).$ (15) where the curly brackets denote the anticommutator. ### 2.5 Green’s function Because the action (12) is time-translation invariant, we can Fourier transform it: $\displaystyle\tilde{A}_{0}[\tilde{q}_{r},\tilde{q}_{a}]=$ $\displaystyle\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega\,\tilde{q}_{a}(-\omega)\left\\{\tilde{D}_{ar}(\omega)\tilde{q}_{r}(\omega)+i\tilde{D}_{aa}(\omega)\tilde{q}_{a}(\omega)\right\\},$ (16) from which two Feynman diagrams can be read off, see Fig. 1. The $q_{a}-q_{a}$ diagram is the noise vertex, and the $q_{r}-q_{a}$ diagram is the retarded Green’s function, which is: $\tilde{G}^{R}(\omega)=-\frac{1}{\tilde{D}_{ar}(\omega)},$ (17) since it is the solution of the equation: $-(D_{ar}*G^{R})(t)=\delta(t).$ (18) Of course, the retarded Green’s function must satisfy the causality condition $\displaystyle G^{R}(t<0)=0,$ (19) which implies that $\tilde{G}^{R}(\omega)$ must be analytic in the lower half of the complex $\omega$-plane.
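A familiar special case (our illustration, not taken from the text) is the memoryless damped oscillator, for which $-D_{ar}$ acts as $m\partial_{t}^{2}+\gamma\partial_{t}+m\omega_{0}^{2}$, so that (14) becomes the standard Langevin equation. Solving (18) directly in the time domain gives, in the underdamped case, $G^{R}(t)=\theta(t)\,e^{-\gamma t/2m}\,\frac{\sin(\Omega t)}{m\Omega},\qquad\Omega=\sqrt{\omega_{0}^{2}-\frac{\gamma^{2}}{4m^{2}}},$ which manifestly satisfies (19); independently of the Fourier convention, $|\tilde{G}^{R}(\omega)|^{2}=\big[m^{2}(\omega_{0}^{2}-\omega^{2})^{2}+\gamma^{2}\omega^{2}\big]^{-1}$, and both poles of $\tilde{G}^{R}$ lie at a distance $\gamma/2m$ from the real axis, in the half-plane complementary to the region of analyticity.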
The autocorrelator, or symmetric Green's function (100), is in this limit a simple composite diagram (see Fig. 2):

$\tilde{G}^{S}(\omega)=\left|\tilde{G}^{R}(\omega)\right|^{2}\tilde{D}_{aa}(\omega).$ (20)

For complex retarded Green's functions, we use the identity

$\displaystyle\mathrm{Im}\tilde{G}^{R}(\omega)=\left|\tilde{G}^{R}(\omega)\right|^{2}\mathrm{Im}\tilde{D}_{ar}(\omega),$ (21)

to obtain:

$\boxed{\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\frac{\tilde{D}_{aa}(\omega)}{\mathrm{Im}\tilde{D}_{ar}(\omega)}}.$ (22)

Since $\tilde{G}^{S}(\omega)$ is even in $\omega$, the inverse Fourier transform can be written as:

$\displaystyle G^{S}(t)$ $\displaystyle=\frac{1}{\pi}\int_{0}^{\infty}d\omega\,\tilde{G}^{S}(\omega)\cos(\omega t)$ (23) $\displaystyle=\frac{1}{\pi}\int_{0}^{\infty}d\omega\frac{\tilde{D}_{aa}(\omega)}{\mathrm{Im}\tilde{D}_{ar}(\omega)}\mathrm{Im}\tilde{G}^{R}(\omega-i0^{+})\cos(\omega t),\quad(t\longrightarrow\infty)$ (24)

where the latter expression (to which we added the prescription for the poles of the retarded Green's function) is valid only if the stationarity conditions of section 2.2 are fulfilled. We will see some examples, but let us first generalize the above relation to non-quadratic theories.

Figure 1: Feynman diagrams for the retarded Green's function $G_{ra}$ (left) and the noise kernel $D_{aa}$ (right). The solid line represents $q_{r}$ and the dashed line $q_{a}$.

Figure 2: Feynman diagram for the symmetric Green's function $G_{rr}$.

### 2.6 Loop corrections

Small non-quadratic terms added to the action can be treated perturbatively according to (11). For the retarded Green's function, also known as the propagator, the loop corrections (the self-energy), which we call $\delta\tilde{D}_{ar}(\omega)$, form a geometric series, as shown in Fig. 3:

$\displaystyle\tilde{G}_{r}^{R}(\omega)$ $\displaystyle=\tilde{G}^{R}(\omega)(1+\delta\tilde{D}_{ar}(\omega)\,\tilde{G}^{R}(\omega)+\delta\tilde{D}_{ar}(\omega)^{2}\,\tilde{G}^{R}(\omega)^{2}+\ldots)$ (25) $\displaystyle=\frac{\tilde{G}^{R}(\omega)}{1-\delta\tilde{D}_{ar}(\omega)\,\tilde{G}^{R}(\omega)}.$ (26)

The equation above is known as the Dyson equation. In terms of the kernel $\tilde{D}_{ar}(\omega)$, which is the negative inverse of the propagator, it simplifies to:

$\displaystyle\tilde{D}_{ar,r}(\omega)$ $\displaystyle=\tilde{D}_{ar}(\omega)+\delta\tilde{D}_{ar}(\omega).$ (27)

Note that the additional subindex $r$ stands for renormalized/corrected quantities and should not be confused with the $r$ of the Keldysh variables. The correction to the other kernel, shown as the black circle in Fig. 4, is simply additive:

$\tilde{D}_{aa,r}(\omega)=\tilde{D}_{aa}(\omega)+\delta\tilde{D}_{aa}(\omega).$ (28)

In addition, a new interaction vertex connecting $q_{r}$ with $q_{r}$ can emerge; we call it $\delta Q(\omega)$ and represent it as the diamond in Fig. 4. This type of interaction breaks the tree-level composition rule (22). Nevertheless, we can still write down a generic loop-corrected expression for the symmetric Green's function in closed form, corresponding to Fig. 5:
$\displaystyle\tilde{G}_{r}^{S}(\omega)=\tilde{D}_{aa,r}(\omega)\left|\tilde{G}_{r}^{R}(\omega)\right|^{2}(1+\delta Q(\omega)\tilde{D}_{aa,r}(\omega)\left|\tilde{G}_{r}^{R}(\omega)\right|^{2}).$ (29)

Replacing the modulus squared by the imaginary part via

$\displaystyle\mathrm{Im}\tilde{G}^{R}_{r}(\omega)=\left|\tilde{G}_{r}^{R}(\omega)\right|^{2}\mathrm{Im}\tilde{D}_{ar,r}(\omega),$ (30)

we can rewrite the above expression as:

$\displaystyle\frac{\tilde{G}_{r}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}_{r}(\omega)}$ $\displaystyle=\frac{\tilde{D}_{aa,r}(\omega)}{\mathrm{Im}\tilde{D}_{ar,r}(\omega)}\left(1+\delta Q(\omega)\frac{\tilde{D}_{aa,r}(\omega)}{\mathrm{Im}\tilde{D}_{ar,r}(\omega)}\mathrm{Im}\tilde{G}^{R}_{r}(\omega)\right).$ (31)

This expression is useful when we Taylor-expand the right-hand side (RHS) in the unperturbed quantities and the loop contributions of different orders. At leading order we get

$\boxed{\frac{\tilde{G}_{r}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}_{r}(\omega)}\approx\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}\left(1+\frac{\delta\tilde{D}_{aa}(\omega)}{\tilde{D}_{aa}(\omega)}-\frac{\mathrm{Im}\delta\tilde{D}_{ar}(\omega)}{\mathrm{Im}\tilde{D}_{ar}(\omega)}+\delta Q(\omega)\,\tilde{G}^{S}(\omega)\right)},$ (32)

where the three corrections are shown in Fig. 6.

Figure 3: Renormalized retarded Green's function. The self-energy $\delta\tilde{D}_{ar}(\omega)$ is the white square.

Figure 4: Corrections to the interaction vertices. The solid line represents $q_{r}$ and the dashed line $q_{a}$. The black circle is $\delta\tilde{D}_{aa}(\omega)$ and the diamond is $\delta Q(\omega)$.

Figure 5: The renormalized symmetric Green's function is composed of the renormalized retarded Green's function, see Fig. 3, and the vertex corrections of Fig. 4.

Figure 6: The 1-loop contributions (the white geometric shapes are symbolic only) to the renormalized symmetric Green's function.

### 2.7 Thermalization condition

If thermalization happens, both (22) and (31) must satisfy the FDT (116):

$\displaystyle\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}$ $\displaystyle=\hbar\coth\frac{\hbar\beta\omega}{2},$ (33) $\displaystyle\frac{\tilde{G}_{r}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}_{r}(\omega)}$ $\displaystyle=\hbar\coth\frac{\hbar\beta_{r}\omega}{2}.$ (34)

Therefore

$\displaystyle\coth\frac{\hbar\beta_{r}\omega}{2}$ $\displaystyle\approx\coth\frac{\hbar\beta\omega}{2}\left(1+\Delta(\omega)\right)$ (35)

where $\Delta(\omega)$ is shorthand for the loop contributions to the ratio of the Green's functions; at leading order, see (32), it is:

$\Delta(\omega)=\frac{\delta\tilde{D}_{aa}(\omega)}{\tilde{D}_{aa}(\omega)}-\frac{\mathrm{Im}\delta\tilde{D}_{ar}(\omega)}{\mathrm{Im}\tilde{D}_{ar}(\omega)}+\delta Q(\omega)\,\tilde{G}^{S}(\omega).$ (36)

Note that $\Delta(\omega)$ can depend on the temperature, but we do not make this dependence explicit. On the other hand, the FDT is a non-perturbative result. We must therefore systematically expand the hyperbolic cotangent in the corrections to the inverse temperature, $\beta_{r}=\beta+\delta\beta$, since $\delta\beta$ must absorb the loop contributions in the thermal case. The expansion is then compared order by order with (35). Let us show how this works at leading order.
First, expanding to first order in $\delta\beta$:

$\displaystyle\coth\frac{\hbar\beta_{r}\omega}{2}$ $\displaystyle=\coth\frac{\hbar(\beta+\delta\beta)\omega}{2}$ (37) $\displaystyle\approx\coth\frac{\hbar\beta\omega}{2}\left(1-\hbar\delta\beta\omega\operatorname{csch}\hbar\beta\omega\right).$ (38)

Comparing the RHS of (35) with (38), we see that the thermalization condition at this order implies:

$\delta\beta\overset{!}{=}-\frac{\Delta(\omega)}{\hbar\omega\operatorname{csch}\hbar\beta\omega}.$ (39)

Since $\delta\beta$ must be a constant, there are three possible scenarios for thermalization:

$\displaystyle\Delta(\omega)\begin{cases}\propto\omega\operatorname{csch}\hbar\beta\omega&\text{Quantum thermalization}\\ =0&\text{Quantum thermalization: }\beta_{r}=\beta\\ =\text{constant}&\text{Classical/high-T thermalization}\end{cases}$ (40)

Again, this is a perturbative result, so a positive conclusion about (quantum or classical) thermalization is valid only up to the order studied; higher-order terms could in principle drive the system away from thermalization. A negative statement about thermalization, on the other hand, can be established with perturbative information alone.

## 3 Quantum Brownian motion

In this section, we focus on one of the simplest integrable SK effective theories, namely quantum Brownian motion: a quantum harmonic oscillator coupled linearly to a harmonic bath. The classical action is the sum of the following terms:

$\displaystyle S[x]=$ $\displaystyle\int^{t}_{0}ds\,\frac{1}{2}M\left(\dot{x}(s)^{2}-\Omega_{R}^{2}x(s)^{2}\right)$ (41) $\displaystyle S_{\mathrm{bath}}[\{x_{n}\}]=$ $\displaystyle\int^{t}_{0}ds\sum_{n}\frac{1}{2}m_{n}\left(\dot{x}_{n}(s)^{2}-\omega_{n}^{2}x_{n}(s)^{2}\right)$ (42) $\displaystyle S_{\mathrm{int}}[x,\{x_{n}\}]=$ $\displaystyle-\int^{t}_{0}ds\sum_{n}c_{n}x_{n}(s)x(s)$ (43)

where

$\Omega_{R}^{2}=\Omega^{2}+\delta\Omega^{2},\quad\delta\Omega^{2}=\sum_{n}\frac{c_{n}^{2}}{2m_{n}\omega_{n}^{2}}.$ (44)

In the continuum limit (the finite version is also integrable, but thermalization happens only in the continuum limit), the bath is fully characterized by a spectral density $I(\omega)$, related to the discrete frequencies as follows:

$I(\omega)=\sum_{n}\frac{c_{n}^{2}}{2m_{n}\omega_{n}}\delta\left(\omega-\omega_{n}\right).$ (45)

Traditionally, power-law spectral densities $I(\omega)\propto\omega^{\alpha}$ have been studied: $\alpha=1$ is called an Ohmic bath, while $\alpha<1$ and $\alpha>1$ correspond to sub-Ohmic and super-Ohmic baths, respectively. The Feynman-Vernon/SK approach gives rise to an effective quadratic action (12) with the kernels [10, 11]:

$\displaystyle D_{ar}(t)$ $\displaystyle=-M\delta(t)(\partial^{2}_{t}+\Omega_{R}^{2})-\mu(t)$ (46) $\displaystyle D_{aa}(t)$ $\displaystyle=\nu(t)$ (47)

where $\mu$ and $\nu$ are known as the dissipation and the noise kernel, respectively. For a linearly coupled bath, the dissipation kernel in Fourier space is fully determined by the spectral density of the bath [12, 13] (note that [13] absorbs the $\pi$ into the definition of the spectral density):

$\tilde{\mu}(\omega)=i\pi I(\omega).$ (48)

This problem is solvable for all times [14], and in the stationary regime it is equivalent to the generalized Langevin equation, see appendix C.
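To make the Langevin connection concrete, here is a minimal simulation sketch (our own illustration, with assumed parameter values) for the classical Ohmic limit, $M\ddot{q}+\eta\dot{q}+M\Omega^{2}q=F(t)$ with white noise $\langle F(t)F(t^{\prime})\rangle=2\eta k_{B}T\,\delta(t-t^{\prime})$. The measured position power spectrum should match the classical 1FDT prediction, cf. (53) below, $\tilde{G}^{S}(\omega)=(2\eta k_{B}T/M^{2})/[(\Omega^{2}-\omega^{2})^{2}+(\eta\omega/M)^{2}]$, i.e. $(2k_{B}T/\omega)\,|\mathrm{Im}\tilde{G}^{R}(\omega)|$ up to sign conventions.

```python
import numpy as np
from scipy.signal import welch

# Classical Ohmic limit of QBM (illustrative parameters, not from the paper):
# M q'' + eta q' + M Omega^2 q = F,  <F F> = 2 eta kB T delta(t - t')
M, Omega, eta, kBT = 1.0, 1.0, 0.5, 1.0
dt, nsteps = 1e-2, 1_000_000
rng = np.random.default_rng(0)

q = v = 0.0
traj = np.empty(nsteps)
kick = np.sqrt(2.0 * eta * kBT * dt) / M
for i in range(nsteps):  # Euler-Maruyama integration
    q, v = q + v * dt, v + (-Omega**2 * q - (eta / M) * v) * dt + kick * rng.standard_normal()
    traj[i] = q

# Welch returns a one-sided PSD per Hz; the two-sided spectrum in angular
# frequency is S(w = 2 pi f) = P(f) / 2
f, P = welch(traj, fs=1.0 / dt, nperseg=2**14)
w, S_meas = 2 * np.pi * f[1:], P[1:] / 2

# Classical 1FDT prediction, cf. eq. (53)
S_fdt = (2 * eta * kBT / M**2) / ((Omega**2 - w**2) ** 2 + (eta * w / M) ** 2)

for target in (0.5, 1.0, 2.0):  # agreement is statistical, at the few-% level
    k = np.argmin(np.abs(w - target))
    print(f"w = {w[k]:.2f}:  simulated = {S_meas[k]:.3f}   1FDT = {S_fdt[k]:.3f}")
```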
The Fourier transform of the retarded Green's function is:

$\tilde{G}^{R}(\omega)=\frac{1/M}{-\omega^{2}+\Omega_{R}^{2}+\tilde{\mu}(\omega)/M},$ (49)

and the symmetric Green's function in the stationary limit is obtained from (22):

$\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\frac{\tilde{\nu}(\omega)}{\mathrm{Im}\tilde{\mu}(\omega)}.$ (50)

### 3.1 Bath in thermal equilibrium

If the bath starts in thermal equilibrium, the noise kernel is:

$\displaystyle\nu(t)$ $\displaystyle=\int_{0}^{\infty}d\omega\,\hbar\coth\left(\frac{\hbar\beta\omega}{2}\right)I(\omega)\cos\omega t,$ (51)

which in Fourier space is just the FDT for the bath (sometimes referred to as the 2FDT in the classical literature):

$\frac{\tilde{\nu}(\omega)}{\mathrm{Im}\tilde{\mu}(\omega)}=\hbar\coth\left(\frac{\hbar\beta\omega}{2}\right)\xrightarrow{\hbar\rightarrow 0}\frac{2}{\beta\omega}.$ (52)

If the bath were modeled by a scalar field, $\nu$ and $\mu$ would correspond precisely to the symmetric and retarded Green's functions of the field, see [9]. The gFDR (50) now implies:

$\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\hbar\coth\left(\frac{\hbar\beta\omega}{2}\right)\xrightarrow{\hbar\rightarrow 0}\frac{2}{\beta\omega}.$ (53)

In other words, when the system eventually thermalizes, the final temperature is that of the initial bath. This case is also studied and emphasized in [9]. The FDT above is sometimes called the 1FDT in the classical literature; it differs from the 2FDT in non-equilibrium scenarios.

### 3.2 Multiple baths

A sum of Gaussian random variables is Gaussian; hence, when there are several harmonic baths, the gFDR (50) generalizes to:

$\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\frac{\sum_{i}\tilde{\nu}_{i}(\omega)}{\sum_{i}\mathrm{Im}\tilde{\mu}_{i}(\omega)}.$ (54)

Even when each bath is in thermal equilibrium (i.e. satisfies (52)), it is clear from the above expression that, in general, the system does not thermalize. It was also shown in [5] that the multi-bath setting breaks the thermal symmetry. Only in the classical limit, and when all the baths have the same spectral density, does the particle thermalize, at the average temperature of the baths:

$\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\frac{2k_{B}\bar{T}}{\omega}.$ (55)

This case has been studied numerically for realistic thermal baths in [15]. The above expression can be generalized to effective temperatures for classical out-of-equilibrium baths, see [16].

### 3.3 Out-of-equilibrium bath

The generalized Langevin equation is used to model classical glassy systems with slow relaxation dynamics [17]. The gFDR (50) appears naturally in this context, where an effective temperature [7] defined for the bath encodes the non-equilibrium properties. In this subsection, we apply the gFDR to a relatively recent example, the semiclassical time-glass model of [18], which was solved using fractional calculus. The microscopic model is characterized by the following noise kernel and spectral density (and hence dissipation kernel, through (48)); note that [18] defines a spectral density $J(\omega)$ that differs from ours by a factor of $\pi$, i.e. $J(\omega)=\pi I(\omega)$:

$\displaystyle\nu(t)$ $\displaystyle=2\int_{0}^{\infty}d\omega\frac{(t_{s}\omega)^{1-s}}{\beta\omega}I(\omega)\cos(\omega t)$ (56) $\displaystyle I(\omega)$ $\displaystyle=\frac{\eta}{\pi}\sin\left(\frac{\pi}{2}s\right)\omega^{s},$ (57)

where $0<s<1$.
The integration gives:

$\displaystyle\nu(t)$ $\displaystyle=\frac{2}{\pi}\frac{t_{s}^{1-s}}{\beta}\eta\sin\left(\frac{\pi}{2}s\right)\int_{0}^{\infty}d\omega\cos(\omega t)$ (58) $\displaystyle=\frac{2t_{s}^{1-s}}{\beta}\eta\sin\left(\frac{\pi}{2}s\right)\delta(t).$ (59)

The prefactor of the Dirac delta function is the Fourier transform of the noise kernel; therefore, together with (48), the relation (50) reduces to:

$\displaystyle\frac{\tilde{G}^{S}(\omega)}{\mathrm{Im}\tilde{G}^{R}(\omega)}=\frac{2t_{s}^{1-s}}{\beta\omega^{s}}.$ (60)

The effective frequency-dependent inverse temperature is then:

$\beta_{\text{eff}}(\omega)=(\omega t_{s})^{s-1}\beta.$ (61)

When $s=1$, which corresponds to an Ohmic bath, we recover the classical FDT (118). According to [18], the non-equilibrium time-glass behavior appears only for $s<1$, i.e. in the sub-Ohmic regime.

### 3.4 Anomalous diffusion

The diffusion of a classical Brownian particle is characterized by the late-time behavior of the mean square displacement (MSD):

$MSD(t)=\left\langle[q(t)-q(0)]^{2}\right\rangle.$ (62)

Ford and O'Connell [19] related the MSD to the FDT and studied anomalous diffusion for non-Ohmic baths. Their relation for the time derivative of the MSD can be readily generalized to:

$MSD^{\prime}(t)=\frac{2}{\pi}\int_{0}^{\infty}d\omega\,\tilde{G}^{S}(\omega)\,\omega\sin(\omega t),$ (63)

where the symmetric Green's function is determined by the gFDR (50). Let us apply this method to the non-equilibrium time-glass model of the previous subsection and reproduce the asymptotic results of the supplemental material of [18]. Recall (60) and the retarded Green's function (49) (for a free particle, as in [18]):

$\displaystyle\tilde{G}^{S}(\omega)$ $\displaystyle=\frac{2t_{s}^{1-s}}{\beta\omega^{s}}\frac{\pi I(\omega)M^{-2}}{\omega^{4}+\pi^{2}I(\omega)^{2}M^{-2}}$ (64) $\displaystyle\propto\frac{1}{\omega^{4}+A^{2}\omega^{2s}}$ (65) $\displaystyle\propto\frac{1}{\omega^{2s}(\omega^{4-2s}+A^{2})}$ (66) $\displaystyle\propto\frac{1}{\omega^{2s}},\quad(\omega\approx 0)$ (67)

where $A=M^{-1}\eta\sin\left(\frac{\pi}{2}s\right)$, and we took the small-frequency limit in order to obtain the late-time behavior of the time derivative of the MSD (63), which is

$\displaystyle MSD^{\prime}(t)$ $\displaystyle\propto\int_{0}^{\infty}d\omega\,\frac{\sin(\omega t)}{\omega^{2s-1}}\propto t^{2s-2},$ (68)

hence, for the MSD, we recover the following asymptotic results:

$\displaystyle MSD(t)$ $\displaystyle\propto\log(t),\quad s=1/2,$ (69) $\displaystyle MSD(t)$ $\displaystyle\propto t^{2s-1},\quad s\neq 1/2.$ (70)

### 3.5 Anharmonic oscillator

Now, let us show an example of the loop-corrected gFDR (32). Instead of the harmonic oscillator, consider a quartic anharmonic oscillator subject to the following SK potential:

$\displaystyle V(q,q^{\prime})=\frac{\lambda}{4!}(q^{4}-q^{\prime 4})$ (71)

which in the Keldysh basis (92) becomes

$\displaystyle V(q_{r},q_{a})=\frac{\lambda}{4!}(q_{r}q_{a}^{3}+4q_{r}^{3}q_{a}),$ (72)

where the two contributions are represented diagrammatically in Fig. 7. With these vertices, we can build the leading-order loop corrections shown in Fig. 8, which correspond to the $\delta\tilde{D}_{ar}$, $\delta\tilde{D}_{aa}$ and $\delta Q$ terms in (31), respectively. However, the loop integral with the retarded Green's function vanishes because of causality, and the loop integral with the symmetric Green's function is real (both facts are illustrated numerically below). This means that, at leading order in $\lambda$, the quantum FDT is _not_ corrected.
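The two statements above can be checked numerically. In the sketch below (our own illustration, with assumed Ohmic-oscillator parameters in the classical limit), the coincident-time loop built from the retarded propagator, $G^{R}(0)=\frac{1}{2\pi}\int d\omega\,\tilde{G}^{R}(\omega)$, integrates to zero because all poles lie on one side of the real axis, while the loop built from the symmetric propagator, $G^{S}(0)=\langle q^{2}\rangle$, is real and positive (equipartition gives $k_{B}T/M\Omega^{2}$):

```python
import numpy as np

# Tadpole (coincident-time) loops for an Ohmic oscillator in the classical
# limit; parameters are illustrative, not from the paper.
M, Omega, gamma, beta = 1.0, 1.0, 0.3, 1.0

w = np.linspace(-10_000.0, 10_000.0, 4_000_001)
dw = w[1] - w[0]
GR = (1.0 / M) / (Omega**2 - w**2 + 1j * gamma * w)  # retarded, cf. (49)
GS = (2.0 * gamma * M / beta) * np.abs(GR) ** 2      # classical gFDR, cf. (53)

# G^R(0) vanishes by causality (all poles in one half-plane); the small
# residual comes from truncating the frequency grid.
print("G^R(0) =", GR.sum() * dw / (2 * np.pi))       # ~ 0 + 0j
# G^S(0) = <q^2> is real and positive; equipartition gives kBT/(M Omega^2) = 1
print("G^S(0) =", GS.sum() * dw / (2 * np.pi))       # ~ 1.0
```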
Hence, at leading order in perturbation theory, the anharmonic oscillator thermalizes at the same temperature as the harmonic one, in line with the discussion of subsection 2.7. The loop computation was done explicitly by Hsiang et al. [20], and this is precisely their result. Furthermore, the same authors give a non-perturbative argument for thermalization in [21]. It would therefore be interesting to compute the next-to-leading-order corrections, whose Feynman diagrams are shown in Fig. 9, since these do not obviously vanish and could in principle violate the thermalization condition.

Figure 7: The quartic interaction vertices.

Figure 8: Leading-order corrections. The diagrams with a retarded Green's function loop (the two on the right) vanish because of causality. The corresponding complex-conjugate diagrams are not shown.

Figure 9: Next-to-leading-order corrections. The corresponding complex-conjugate diagrams are not shown.

## 4 Conclusion

In summary, we presented the generalized fluctuation-dissipation relation (gFDR) (22) for the quadratic Schwinger-Keldysh (SK) path integral, together with its loop-corrected generalization (31). The gFDR extends the fluctuation-dissipation theorem (FDT) to non-thermal baths, providing valuable insight into the FDT itself and into its breakdown in various scenarios. We were also able to reproduce the asymptotic behavior of the correlation function and the anomalous diffusion of the time-glass model of [18]. Moreover, the loop-corrected gFDR (31) can be used to test thermalization when the quadratic system is perturbed.

Beyond the cases presented here, equations (22) and (31) apply to other effective SK theories in the stationary limit, such as the SK effective field theory for diffusion [22, 23]. That theory, however, is thermal by construction, so the gFDR reduces to the FDT at all loop orders, as one can check [24]. Another example of an SK effective theory where these equations apply is a non-linear SK diffusion model without an external driving force, which was proposed to model a quantum time-crystal state in [25]. Unitarity and stationarity can also be used to generalize the non-linear quantum FDTs presented in [26] to non-linear gFDRs. Finally, for driven systems, there is some work towards an FDR for non-equilibrium steady states in [27].

### Acknowledgements

We would like to thank K. Zarembo, S. Krishnamurthy and E. Aurell for discussions and for reviewing the manuscript. We would also like to thank L. Cugliandolo for pointing out some relevant existing work. This work was supported by the Knut and Alice Wallenberg Foundation.

## Appendix A Schwinger-Keldysh

In this appendix, we briefly review the Schwinger-Keldysh formalism in the coordinate representation. The reader can learn more from [3, 28, 29].
### A.1 Density operator

Consider a time-evolving quantum statistical system characterized by the density operator:

$\displaystyle\rho(t)$ $\displaystyle=\sum_{n=1}^{N}p_{n}\ket{\psi_{n}(t)}\bra{\psi_{n}(t)},$ (73)

where the statistical weights $p_{n}$ sum to one, and the evolution of the quantum states $\ket{\psi_{n}(t)}$ is governed by the Schrödinger equation:

$i\hbar\frac{d}{dt}\ket{\psi_{n}(t)}=H\ket{\psi_{n}(t)},$ (74)

whose solution is

$\ket{\psi_{n}(t)}=U(t-t_{0})\ket{\psi_{n}(t_{0})}$ (75)

with the evolution operator

$U(t)=e^{-itH/\hbar}.$ (76)

Therefore the evolution of the density operator is:

$\rho(t)=U\left(t-t_{0}\right)\rho(t_{0})U^{\dagger}\left(t-t_{0}\right).$ (77)

### A.2 Time-evolution kernel

The density operator in the coordinate basis is:

$\displaystyle\rho(x,x^{\prime},t)$ $\displaystyle=\sum_{n=1}^{N}p_{n}\,\psi_{n}(x,t)\psi_{n}(x^{\prime},t)^{*}$ (78) $\displaystyle=\int_{-\infty}^{\infty}dx_{0}\,\int_{-\infty}^{\infty}dx_{0}^{\prime}\,J(x,x^{\prime},t,x_{0},x_{0}^{\prime},t_{0})\rho(x_{0},x_{0}^{\prime},t_{0}),$ (79)

where the time-evolution kernel is a double copy of the quantum-mechanical time-evolution kernel:

$\displaystyle J(x,x^{\prime},t,x_{0},x_{0}^{\prime},t_{0})$ $\displaystyle\equiv\langle x|U(t-t_{0})|x_{0}\rangle\langle x^{\prime}|U(t-t_{0})|x^{\prime}_{0}\rangle^{*}$ (80) $\displaystyle=\int_{q(t_{0})=x_{0}}^{q(t)=x}\mathcal{D}q\int_{q^{\prime}(t_{0})=x_{0}^{\prime}}^{q^{\prime}(t)=x^{\prime}}\mathcal{D}q^{\prime}\,\exp{\left(\frac{i}{\hbar}A[q,q^{\prime}]\right)},$ (81)

where the action is:

$A[q,q^{\prime}]=S[q]-S[q^{\prime}].$ (82)

Notice that the time-evolution kernel is time-translation invariant, owing to the unitary evolution.

### A.3 Generating functional

We can probe the system by coupling it linearly to external sources $\phi(t),\phi^{\prime}(t)$:

$\displaystyle J(x,x^{\prime},t,x_{0},x_{0}^{\prime},t_{0};\phi,\phi^{\prime}]$ $\displaystyle=\int_{q(t_{0})=x_{0}}^{q(t)=x}\mathcal{D}q\int_{q^{\prime}(t_{0})=x_{0}^{\prime}}^{q^{\prime}(t)=x^{\prime}}\mathcal{D}q^{\prime}\,\exp{\left(\frac{i}{\hbar}A[q,q^{\prime};\phi,\phi^{\prime}]\right)}$ (83) $\displaystyle A[q,q^{\prime};\phi,\phi^{\prime}]$ $\displaystyle=S[q]-S[q^{\prime}]+\int_{t_{0}}^{t}ds\,(q(s)\phi(s)-q^{\prime}(s)\phi^{\prime}(s)).$ (84)

The partition function of this system is then the generating functional for the correlation functions of the original system:

$\displaystyle Z[\phi,\phi^{\prime}]$ $\displaystyle=\,\int_{-\infty}^{\infty}dx_{0}\,\int_{-\infty}^{\infty}dx_{0}^{\prime}\,\rho(x_{0},x_{0}^{\prime},t_{0})\int_{-\infty}^{\infty}dx\,J(x,x,t,x_{0},x_{0}^{\prime},t_{0};\phi,\phi^{\prime}].$ (85)

In particular, its logarithm

$W[\phi,\phi^{\prime}]\equiv\ln Z[\phi,\phi^{\prime}]$ (86)

generates connected correlators.

#### A.3.1 Properties

The unitary evolution of the density matrix implies the following properties of the SK generating functional:

1. Normalization condition:

$Z[\phi,\phi]=1\longrightarrow W[\phi,\phi]=0$ (87)

2.
Reflection symmetry:

$Z[\phi^{\prime},\phi]=Z[\phi,\phi^{\prime}]^{*}\longrightarrow W[\phi^{\prime},\phi]=W[\phi,\phi^{\prime}]^{*}$ (88)

### A.4 Perturbation theory

For a quadratic action $A_{0}[q,q^{\prime};\phi,\phi^{\prime}]$ perturbed by a non-quadratic potential $V(q,q^{\prime})$:

$\displaystyle A_{V}[q,q^{\prime};\phi,\phi^{\prime}]=A_{0}[q,q^{\prime};\phi,\phi^{\prime}]-\int_{t_{0}}^{t}ds\,V(q(s),q^{\prime}(s)),$ (89)

the corresponding generating functional can be expressed in terms of the unperturbed one:

$\displaystyle Z_{V}[\phi,\phi^{\prime}]$ $\displaystyle=\exp\left(-\frac{i}{\hbar}\int_{t_{0}}^{t}ds\,V\left(\frac{\hbar}{i}\frac{\delta}{\delta\phi},\frac{\hbar}{i}\frac{\delta}{\delta\phi^{\prime}}\right)\right)Z_{0}[\phi,\phi^{\prime}]$ (90) $\displaystyle\approx\left(1-\frac{i}{\hbar}\int_{t_{0}}^{t}ds\,V\left(\frac{\hbar}{i}\frac{\delta}{\delta\phi},\frac{\hbar}{i}\frac{\delta}{\delta\phi^{\prime}}\right)+\ldots\right)Z_{0}[\phi,\phi^{\prime}],$ (91)

where in the last step we expanded the exponential.

### A.5 Keldysh basis

In practice, it is more convenient to work in the so-called Keldysh basis,

$\displaystyle q_{r}$ $\displaystyle=\frac{1}{2}(q+q^{\prime}),\quad q_{a}=q-q^{\prime},$ (92)

which also applies to the external currents. The action (84) then becomes:

$\displaystyle A[q_{r},q_{a};\phi_{r},\phi_{a}]=S[q_{r},q_{a}]+\int_{t_{0}}^{t}ds\,(q_{r}(s)\phi_{a}(s)+q_{a}(s)\phi_{r}(s)).$ (93)

### A.6 Green's functions

The two-point (path-ordered) correlation functions in the Keldysh basis are defined as follows:

$\displaystyle G_{rr}(t_{1},t_{2})$ $\displaystyle\equiv\braket{\mathcal{P}q_{r}(t_{1})q_{r}(t_{2})}=\left(\frac{\hbar}{i}\right)^{2}\left.\frac{\delta^{2}W[\phi_{r},\phi_{a}]}{\delta\phi_{a}(t_{1})\,\delta\phi_{a}(t_{2})}\right|_{\phi_{r,a}=0}$ (94) $\displaystyle\frac{\hbar}{i}G_{ra}(t_{1},t_{2})$ $\displaystyle\equiv\braket{\mathcal{P}q_{r}(t_{1})q_{a}(t_{2})}=\left(\frac{\hbar}{i}\right)^{2}\left.\frac{\delta^{2}W[\phi_{r},\phi_{a}]}{\delta\phi_{a}(t_{1})\,\delta\phi_{r}(t_{2})}\right|_{\phi_{r,a}=0}$ (95) $\displaystyle\frac{\hbar}{i}G_{ar}(t_{1},t_{2})$ $\displaystyle\equiv\braket{\mathcal{P}q_{a}(t_{1})q_{r}(t_{2})}=\left(\frac{\hbar}{i}\right)^{2}\left.\frac{\delta^{2}W[\phi_{r},\phi_{a}]}{\delta\phi_{r}(t_{1})\,\delta\phi_{a}(t_{2})}\right|_{\phi_{r,a}=0}$ (96) $\displaystyle\left(\frac{\hbar}{i}\right)^{2}G_{aa}(t_{1},t_{2})$ $\displaystyle\equiv\braket{\mathcal{P}q_{a}(t_{1})q_{a}(t_{2})}=\left(\frac{\hbar}{i}\right)^{2}\left.\frac{\delta^{2}W[\phi_{r},\phi_{a}]}{\delta\phi_{r}(t_{1})\,\delta\phi_{r}(t_{2})}\right|_{\phi_{r,a}=0}=0.$ (97)

The last correlator vanishes due to the normalization condition (87). In fact, (87) implies that all

$G_{a\ldots a}=0.$ (98)

This identity is sometimes called the _Schwinger-Keldysh collapse rule_ in the literature. The reflection symmetry (88) implies:

$G_{ra}(t_{1},t_{2})=G_{ar}(t_{2},t_{1}),$ (99)

i.e.
it relates the retarded and the advanced Green's functions. The non-vanishing two-point correlators are identified with the symmetric, retarded and advanced Green's functions:

$\displaystyle G_{rr}(t_{1},t_{2})$ $\displaystyle=G^{S}(t_{1},t_{2})\equiv\frac{1}{2}\langle\{q(t_{1}),q(t_{2})\}\rangle$ (100) $\displaystyle G_{ra}(t_{1},t_{2})$ $\displaystyle=G^{R}(t_{1},t_{2})\equiv\frac{i}{\hbar}\,\theta(t_{1}-t_{2})\Delta(t_{1},t_{2})$ (101) $\displaystyle G_{ar}(t_{1},t_{2})$ $\displaystyle=G^{A}(t_{1},t_{2})\equiv-\frac{i}{\hbar}\,\theta(t_{2}-t_{1})\Delta(t_{1},t_{2}),$ (102)

where

$\Delta(t_{1},t_{2})=\langle[q(t_{1}),q(t_{2})]\rangle.$ (103)

Hence, for a quadratic theory, the connected generating functional is simply:

$\displaystyle W_{0}[\phi_{r},\phi_{a}]$ $\displaystyle=\frac{i}{\hbar}\frac{1}{2}\int_{t_{0}}^{t}dt_{1}\int_{t_{0}}^{t}dt_{2}\,\Phi(t_{1})^{T}\mathbf{G}(t_{1},t_{2})\Phi(t_{2})$ (104)

where

$\displaystyle\Phi$ $\displaystyle=\left(\begin{array}{l}\phi_{r}\\ \phi_{a}\end{array}\right),\quad\mathbf{G}=\left(\begin{array}{cc}0&G^{A}\\ G^{R}&\frac{i}{\hbar}G^{S}\end{array}\right).$ (109)

## Appendix B Fluctuation-Dissipation Theorem

When the density matrix is thermal, i.e. a Gibbs state:

$\displaystyle\rho_{\beta}=\frac{1}{Z_{\beta}}e^{-\beta H},\quad Z_{\beta}=\mathrm{Tr}\,\left(e^{-\beta H}\right),$ (110)

the generating functional satisfies the so-called Kubo-Martin-Schwinger (KMS) condition [30, 31]. For thermal two-point correlators, the KMS condition translates into the well-known quantum fluctuation-dissipation theorem (FDT). Let us show how to derive it. First, inserting $U(t_{1}-t_{0})U(t_{0}-t_{2})$ and cyclically permuting the operators under the trace (using $e^{\beta H}U(\tau)e^{-\beta H}=U(\tau+i\hbar\beta)$):

$\displaystyle\langle O_{i}(t_{1})O_{j}(t_{2})\rangle$ $\displaystyle=\frac{1}{Z_{\beta}}\mathrm{Tr}\,[e^{-\beta H}U^{\dagger}(t_{1}-t_{0})O_{i}U(t_{1}-t_{2})O_{j}U(t_{2}-t_{0})]$ $\displaystyle=\frac{1}{Z_{\beta}}\mathrm{Tr}\,[e^{-\beta H}U(t_{0}-t_{2}+i\hbar\beta)O_{j}U(t_{2}-i\hbar\beta-t_{1})O_{i}U(t_{1}-t_{0})]$ $\displaystyle=\langle O_{j}(t_{2}-i\hbar\beta)O_{i}(t_{1})\rangle.$ (111)

Now, applying the operator identities:

$\displaystyle AB$ $\displaystyle=\frac{1}{2}\{A,B\}+\frac{1}{2}[A,B]$ (112) $\displaystyle BA$ $\displaystyle=\frac{1}{2}\{A,B\}-\frac{1}{2}[A,B]$ (113)

and the definitions of the Green's functions in (100), the identity (111) becomes

$\displaystyle G^{S}_{ij}(t_{1}-t_{2})+\frac{1}{2}\Delta_{ij}(t_{1}-t_{2})$ $\displaystyle=G^{S}_{ij}(t_{1}-t_{2}+i\hbar\beta)-\frac{1}{2}\Delta_{ij}(t_{1}-t_{2}+i\hbar\beta).$ (114)

Its Fourier transform is simply:

$\tilde{G}^{S}_{ij}(\omega)=\frac{1}{2}\coth{\frac{\hbar\beta\omega}{2}}\,\tilde{\Delta}_{ij}(\omega),$ (115)

which, in terms of the retarded Green's function, reduces to the familiar quantum FDT:

$\displaystyle\tilde{G}^{S}_{ij}(\omega)$ $\displaystyle=\hbar\coth{\frac{\hbar\beta\omega}{2}}\mathrm{Im}\tilde{G}^{R}_{ij}(\omega)$ (116)

since

$\mathrm{Im}\tilde{G}^{R}(\omega)=\frac{\tilde{\Delta}(\omega)}{2\hbar}.$ (117)

Note that the factor of 2 in the denominator arises because the retarded Green's function is supported only at positive times, see appendix D.
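As a quick sanity check of (115)-(116) (our own numerical illustration, with assumed parameter values), one can verify the thermal ratio for a single harmonic oscillator in a truncated Fock space: the spectral weight of the symmetrized correlator at the transition frequency $\omega_{0}$, relative to that of the commutator, must equal $\coth(\hbar\beta\omega_{0}/2)$.

```python
import numpy as np

# FDT check for a thermal harmonic oscillator in a truncated Fock space.
# Illustrative parameters; N is the truncation dimension.
hbar, m, w0, beta, N = 1.0, 1.0, 1.0, 0.7, 80

n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)                  # annihilation operator
q = np.sqrt(hbar / (2 * m * w0)) * (a + a.T)    # position operator
p = np.exp(-beta * hbar * w0 * (n + 0.5))
p /= p.sum()                                    # Gibbs weights, cf. (110)

# Spectral weights of C_>(t) = Tr[rho q(t) q(0)] at the transition
# frequencies +w0 (de-excitation, m = n-1) and -w0 (excitation, m = n+1)
offdiag = np.diag(q, 1) ** 2
W_plus = np.sum(p[1:] * offdiag)                # sum_n p_n |q_{n,n-1}|^2
W_minus = np.sum(p[:-1] * offdiag)              # sum_n p_n |q_{n,n+1}|^2

# Symmetric vs commutator weight: (W+ + W-)/(W- - W+) should be coth(...)
print("spectral ratio:      ", (W_plus + W_minus) / (W_minus - W_plus))
print("coth(hbar*beta*w0/2):", 1.0 / np.tanh(hbar * beta * w0 / 2))
```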
Finally, in the classical or high-temperature limit, we obtain the classical FDT:

$\displaystyle\tilde{G}^{S}_{ij}(\omega)=\frac{2}{\beta\omega}\mathrm{Im}\tilde{G}^{R}_{ij}(\omega),\quad(\hbar\rightarrow 0\;\text{and/or}\;\beta\rightarrow 0).$ (118)

## Appendix C Generalized Langevin equation

A quantum or classical harmonic oscillator coupled linearly to a harmonic bath is an exactly solvable model [14, 12] and, as discussed in section 3, can be described in terms of the generalized Langevin equation:

$\displaystyle M\ddot{q}(t)+M(\Omega^{2}+\delta\Omega^{2})q(t)+\int_{0}^{t}d\tau\mu(t-\tau)q(\tau)=F(t)$ (119)

with the initial conditions:

$q(0)=q_{0},\quad\dot{q}(0)=\dot{q}_{0},$ (120)

and a Gaussian noise:

$\displaystyle\langle F(t)\rangle$ $\displaystyle=0$ (121) $\displaystyle\frac{1}{2}\braket{\{F(t),F(t^{\prime})\}}$ $\displaystyle=\nu(t-t^{\prime}).$ (122)

### C.1 Solution

It is common in the literature to write the generalized Langevin equation in terms of the damping kernel $\gamma$, also known as the memory function, defined by:

$\mu(t-\tau)=M\frac{\partial}{\partial t}\gamma(t-\tau),$ (123)

so that:

$\ddot{q}(t)+\Omega^{2}q(t)+\int_{0}^{t}d\tau\gamma(t-\tau)\dot{q}(\tau)+\gamma(t)q(0)=F(t)/M.$ (124)

The generalized Langevin equation (124) can be solved using the Laplace transform, which gives:

$(s^{2}+\Omega^{2}+s\hat{\gamma}(s))\hat{q}(s)=sq_{0}+\dot{q}_{0}+\hat{F}(s)/M,$ (125)

from which the solution is:

$\hat{q}(s)=M(s\,q_{0}+\dot{q}_{0})\hat{G}(s)+\hat{F}(s)\hat{G}(s),$ (126)

where the Laplace transform of the (retarded) Green's function is

$\hat{G}(s)=\dfrac{1/M}{s^{2}+\Omega^{2}+s\hat{\gamma}(s)},$ (127)

which, in the time domain, must satisfy the initial conditions

$G(0)=0,\quad\dot{G}(0)=\frac{1}{M}$ (128)

that fully determine it. Back in the time domain, the solution is:

$q(t)=M\left(q_{0}\dot{G}(t)+\dot{q}_{0}G(t)\right)+\int_{0}^{t}d\tau G(t-\tau)F(\tau).$ (129)

### C.2 Stationary limit

The general solution (129) simplifies in the stationary limit, reducing to:

$\displaystyle q_{s}(t)=\int_{0}^{\infty}d\tau G(t-\tau)F(\tau),\quad(t\rightarrow\infty).$ (130)

This means that for non-vanishing initial conditions we require $G(t\longrightarrow\infty)=0$ and $\dot{G}(t\longrightarrow\infty)=0$; if $q_{0}=0$, then $G(t\longrightarrow\infty)=0$ alone is sufficient to reach stationarity. The causality condition of the Green's function (19) allows the integration to be extended to the full real axis. Hence, upon Fourier transforming, the stationary solution (130) is just a product:

$\tilde{q}_{s}(\omega)=\tilde{G}(\omega)\tilde{F}(\omega).$ (131)

The gFDR (50) can then be readily obtained from the definition of the correlation function and the stationary solution (131).

## Appendix D Fourier transform

Consider the following definitions of the Fourier transform:

$\displaystyle\tilde{f}(\omega)$ $\displaystyle\equiv\int_{-\infty}^{\infty}dt\,e^{-i\omega t}f(t)$ (132) $\displaystyle\tilde{f}_{+}(\omega)$ $\displaystyle\equiv\int_{0}^{\infty}dt\,e^{-i\omega t}f(t)$ (133) $\displaystyle\tilde{f}_{-}(\omega)$ $\displaystyle\equiv\int^{0}_{-\infty}dt\,e^{-i\omega t}f(t)$ (134)

such that

$\displaystyle\tilde{f}(\omega)$ $\displaystyle=\tilde{f}_{-}(\omega)+\tilde{f}_{+}(\omega).$ (135)

It follows that:

1. For functions even in time, $f(t)=f(-t)$:

$\tilde{f}_{-}(\omega)=\tilde{f}_{+}(-\omega)\Longrightarrow\tilde{f}(\omega)=\tilde{f}(-\omega)$ (136)

2. For functions odd in time, $f(t)=-f(-t)$:

$\tilde{f}_{-}(\omega)=-\tilde{f}_{+}(-\omega)\Longrightarrow\tilde{f}(\omega)=-\tilde{f}(-\omega)$ (137)

3.
If $\tilde{f}_{-}(\omega)=\tilde{f}_{+}(\omega)$, then $\tilde{f}(\omega)=2\tilde{f}_{+}(\omega)$.

4. If $g(t)=f(t)\theta(t)$, then $\tilde{g}(\omega)=\tilde{f}_{+}(\omega)$.

## References

* [1] J. Schwinger, Brownian motion of a quantum oscillator, Journal of Mathematical Physics 2, 407–432, 1961.
* [2] L. V. Keldysh, Diagram technique for nonequilibrium processes, Sov. Phys. JETP 20, 1018–1026, 1965.
* [3] A. Kamenev, _Field theory of non-equilibrium systems_. Cambridge University Press, 2023.
* [4] L. M. Sieberer, A. Chiocchetta, A. Gambassi, U. C. Täuber and S. Diehl, Thermodynamic equilibrium as a symmetry of the Schwinger-Keldysh action, Physical Review B 92, 2015.
* [5] C. Aron, G. Biroli and L. F. Cugliandolo, Symmetries of generating functionals of Langevin processes with colored multiplicative noise, Journal of Statistical Mechanics: Theory and Experiment 2010, P11018, 2010.
* [6] C. Aron, G. Biroli and L. F. Cugliandolo, (Non) equilibrium dynamics: a (broken) symmetry of the Keldysh generating functional, SciPost Physics 4, 008, 2018.
* [7] L. F. Cugliandolo, The effective temperature, Journal of Physics A: Mathematical and Theoretical 44, 483001, 2011.
* [8] R. Feynman and F. Vernon, The theory of a general quantum system interacting with a linear dissipative system, Annals of Physics 24, 173, 1963.
* [9] J. T. Hsiang, C. H. Chou, Y. Subaşı and B. L. Hu, Quantum thermodynamics from the nonequilibrium dynamics of open systems: Energy, heat capacity, and the third law, Phys. Rev. E 97, 012135, 2018, [arXiv:1703.04970 [quant-ph]].
* [10] A. O. Caldeira and A. J. Leggett, Path integral approach to quantum Brownian motion, Physica A 121, 587–616, 1983.
* [11] B. L. Hu, J. P. Paz and Y.-h. Zhang, Quantum Brownian motion in a general environment: I. Exact master equation with nonlocal dissipation and colored noise, Phys. Rev. D 45, 2843–2861, 1992.
* [12] G. W. Ford, J. T. Lewis and R. F. O'Connell, Quantum Langevin equation, Phys. Rev. A 37, 4419, 1988.
* [13] A. O. Caldeira, _An introduction to macroscopic quantum phenomena and quantum dissipation_. Cambridge University Press, 2014.
* [14] C. Fleming, A. Roura and B. Hu, Exact analytical solutions to the master equation of quantum Brownian motion for a general environment, Annals of Physics 326, 1207–1258, 2011, [arXiv:1004.1603 [quant-ph]].
* [15] H. Ness, A. Genina, L. Stella, C. Lorenz and L. Kantorovich, Nonequilibrium processes from generalized Langevin equations: Realistic nanoscale systems connected to two thermal baths, Physical Review B 93, 174303, 2016.
* [16] F. Zamponi, F. Bonetto, L. F. Cugliandolo and J. Kurchan, A fluctuation theorem for non-equilibrium relaxational systems driven by external forces, Journal of Statistical Mechanics: Theory and Experiment 2005, P09013, 2005.
* [17] L. F. Cugliandolo, J. Kurchan and G. Parisi, Off equilibrium dynamics and aging in unfrustrated systems, Journal de Physique I 4, 1641–1656, 1994.
* [18] R. C. Verstraten, R. F. Ozela and C. M. Smith, Time glass: A fractional calculus approach, Phys. Rev. B 103, L180301, 2021, [arXiv:2006.08786 [cond-mat]].
* [19] G. Ford and R. O'Connell, Anomalous diffusion in quantum Brownian motion with colored noise, Phys. Rev. A 73, 032103, 2006.
* [20] J.-T. Hsiang and B.-L. Hu, Nonequilibrium nonlinear open quantum systems: Functional perturbative analysis of a weakly anharmonic oscillator, Phys. Rev. D 101, 125002, 2020, [arXiv:1912.12803 [hep-th]].
* [21] J.-T. Hsiang and B.-L.
Hu, Fluctuation-dissipation relation from the nonequilibrium dynamics of a nonlinear open quantum system, Phys. Rev. D 101, 125003, 2020, [arXiv:2002.07694 [hep-th]].
* [22] M. Crossley, P. Glorioso and H. Liu, Effective field theory of dissipative fluids, JHEP 09, 095, 2017, [arXiv:1511.03646 [hep-th]].
* [23] X. Chen-Lin, L. V. Delacrétaz and S. A. Hartnoll, Theory of diffusive fluctuations, Phys. Rev. Lett. 122, 091602, 2019, [arXiv:1811.12540 [hep-th]].
* [24] A. Jain, P. Kovtun, A. Ritz and A. Shukla, Hydrodynamic effective field theory and the analyticity of hydrostatic correlators, Journal of High Energy Physics, 2021.
* [25] T. Hayata and Y. Hidaka, Diffusive Nambu-Goldstone modes in quantum time-crystals, 2018, [arXiv:1808.07636 [hep-th]].
* [26] E. Wang and U. W. Heinz, A generalized fluctuation-dissipation theorem for nonlinear response functions, Phys. Rev. D 66, 025008, 2002, [arXiv:hep-th/9809016].
* [27] J.-T. Hsiang and B.-L. Hu, Fluctuation-dissipation relation for open quantum systems in a nonequilibrium steady state, Phys. Rev. D 102, 105006, 2020, [arXiv:2007.00906 [quant-ph]].
* [28] Y. BenTov, Schwinger-Keldysh path integral for the quantum harmonic oscillator, 2021, [arXiv:2102.05029 [hep-th]].
* [29] H. Liu and P. Glorioso, Lectures on non-equilibrium effective field theories and fluctuating hydrodynamics, PoS TASI2017, 008, 2018, [arXiv:1805.09331 [hep-th]].
* [30] R. Kubo, Statistical mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems, J. Phys. Soc. Jap. 12, 570–586, 1957.
* [31] P. C. Martin and J. Schwinger, Theory of many-particle systems. I, Phys. Rev. 115, 1342–1373, 1959.
Caldwell, Vera Gluscevic, and Marc Kamionkowski. Phys. Rev. D, 84:043504, 2011. arXiv:1104.1634. * [226] Seokcheon Lee, Guo-Chin Liu, and Kin-Wang Ng. Cosmic Birefringence Fluctuations and Cosmic Microwave Background $B$-mode Polarization. Phys. Lett. B, 746:406–409, 2015. arXiv:1403.5585, doi:10.1016/j.physletb.2015.05.038. * [227] V. Gluscevic, M. Kamionkowski, and A. Cooray. Derotation of the cosmic microwave background polarization: Full-sky formalism. Phys. Rev. D, 80:023510, 2009. arXiv:0905.1687. * [228] Vera Gluscevic, Duncan Hanson, Marc Kamionkowski, and Christopher M. Hirata. First CMB constraints on direction-dependent cosmological birefringence from WMAP-7. Phys. Rev. D, 86(10):103529, November 2012. arXiv:1206.5546, doi:10.1103/PhysRevD.86.103529. * [229] Bo Feng, Mingzhe Li, Jun-Qing Xia, Xuelei Chen, and Xinmin Zhang. Searching for CPT Violation with Cosmic Microwave Background Data from WMAP and BOOMERANG. Phys. Rev. Lett., 96:221302, 2006. arXiv:astro-ph/0601095, doi:10.1103/PhysRevLett.96.221302. * [230] A. Gruppuso, P. Natoli, N. Mandolesi, A. DeRosa, and F. Paci. WMAP 7 year constraints on CPT violation from large angle CMB anisotropies. JCAP, 2012:023, 2012. arXiv:1107.5548, doi:10.1088/1475-7516/2012/02/023. * [231] Si-Yu Li, Jun-Qing Xia, Mingzhe Li, Hong Li, and Xinmin Zhang. Testing CPT Symmetry with Current and Future CMB Measurements. ApJ, 799:211, 2015. arXiv:1405.5637, doi:10.1088/0004-637X/799/2/211. * [232] Hsien-Hao Mei, Wei-Tou Ni, Wei-Ping Pan, Lixin Xu, and Sperello di Serego Alighieri. New constraints on cosmic polarization rotation from the ACTPol cosmic microwave background B-Mode polarization observation and the BICEP2 constraint update. ApJ, 805:107, 2015. arXiv:1412.8569, doi:10.1088/0004-637X/805/2/107. * [233] Luca Pagano, Paolo de Bernardis, Grazia de Troia, Giulia Gubitosi, Silvia Masi, Alessandro Melchiorri, et al. CMB polarization systematics, cosmological birefringence, and the gravitational waves background. Phys. Rev. D, 80:043522, 2009. arXiv:0905.1651, doi:10.1103/PhysRevD.80.043522. * [234] N. J. Miller, M. Shimon, and B. G. Keating. CMB polarization systematics due to beam asymmetry: Impact on cosmological birefringence. Phys. Rev. D, 79:103002, 2009. arXiv:0903.1116, doi:10.1103/PhysRevD.79.103002. * [235] Brian Keating, Meir Shimon, and Amit Yadav. Self-calibration of cmb polarization experiments. ApJ, 762:L23, 2013. arXiv:1211.5734. * [236] G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, et al. ApJ, 208:19, 2013. arXiv:1212.5226. * [237] Yuto Minami, Hiroki Ochi, Kiyotomo Ichiki, Nobuhiko Katayama, Eiichiro Komatsu, and Tomotake Matsumura. Simultaneous determination of the cosmic birefringence and miscalibrated polarization angles from CMB experiments. PTEP, 2019:083E02, 2019. arXiv:1904.12440, doi:10.1093/ptep/ptz079. * [238] Marc Kamionkowski. How to derotate the cosmic microwave background polarization. Phys. Rev. Lett., 102:111302, 2009. arXiv:0810.1286. * [239] Amit P. S. Yadav, Meir Shimon, and Brian G. Keating. Revealing cosmic rotation. Phys. Rev. D, 86:083002, 2012. arXiv:1010.1957. * [240] T. Namikawa. Testing parity-violating physics from cosmic rotation power reconstruction. Phys. Rev. D, 95:043523, 2017. arXiv:1612.07855. * [241] POLARBEAR Collaboration, Peter A. R. Ade, Kam Arnold, Matt Atlas, Carlo Baccigalupi, Darcy Barron, et al. Polarbear constraints on cosmic birefringence and primordial magnetic fields. Phys. Rev. D, 92:123509, 2015. arXiv:1509.02461. 
* [242] Bicep2 / Keck Array Collaboration, P. A. R. Ade, Z. Ahmed, R. W. Aikin, K. D. Alexander, D. Barkats, et al. BICEP2 / keck array ix: New bounds on anisotropies of cmb polarization rotation and implications for axion-like particles and primordial magnetic fields. Phys. Rev. D, 96:102003, 2017. arXiv:1705.02523, doi:10.1103/PhysRevD.96.102003. * [243] Dagoberto Contreras, Paula Boubel, and Douglas Scott. Constraints on direction-dependent cosmic birefringence from Planck polarization data. JCAP, 1712(12):046, 2017. arXiv:1705.06387, doi:10.1088/1475-7516/2017/12/046. * [244] Alessandro Gruppuso, Diego Molinari, Paolo Natoli, and Luca Pagano. Planck 2018 constraints on anisotropic birefringence and its cross-correlation with CMB anisotropy. 8 2020. arXiv:2008.10334. * [245] Giulia Gubitosi, Marina Migliaccio, Luca Pagano, Giovanni Amelino-Camelia, Alessandro Melchiorri, Paolo Natoli, et al. Using CMB data to constrain non-isotropic Planck-scale modifications to Electrodynamics. JCAP, 1111:003, 2011. arXiv:1106.6049, doi:10.1088/1475-7516/2011/11/003. * [246] Mingzhe Li and Bo Yu. New Constraints on Anisotropic Rotation of CMB Polarization. JCAP, 1306:016, 2013. arXiv:1303.1881, doi:10.1088/1475-7516/2013/06/016. * [247] Sperello di Serego Alighieri, Wei-Tou Ni, and Wei-Ping Pan. New Constraints on Cosmic Polarization Rotation From B-mode Polarization in the Cosmic Microwave Background. ApJ, 792:35, 2014. arXiv:1404.1701, doi:10.1088/0004-637X/792/1/35. * [248] Hua Zhai, Si-Yu Li, Mingzhe Li, and Xinmin Zhang. Joint constraint on primordial gravitational waves and polarization rotation angle with current CMB polarization data. 2019\. arXiv:1910.02395. * [249] T. Mroczkowski, D. Nagai, K. Basu, J. Chluba, J. Sayers, R. Adam, et al. Astrophysics with the Spatially and Spectrally Resolved Sunyaev-Zeldovich Effects. A Millimetre/Submillimetre Probe of the Warm and Hot Universe. Space Sci. Rev., 215:17, February 2019. arXiv:1811.02310, doi:10.1007/s11214-019-0581-2. * [250] R. A. Sunyaev and Y. B. Zeldovich. The Spectrum of Primordial Radiation, its Distortions and their Significance. Comments on Astrophysics and Space Physics, 2:66, March 1970. * [251] R. A. Sunyaev and Y. B. Zeldovich. The Observations of Relic Radiation as a Test of the Nature of X-Ray Radiation from the Clusters of Galaxies. Comments on Astrophysics and Space Physics, 4:173, November 1972\. * [252] R. A. Sunyaev and Ya. B. Zeldovich. The Velocity of clusters of galaxies relative to the microwave background. The Possibility of its measurement. Mon. Not. Roy. Astron. Soc., 190:413–420, 1980. * [253] L. Knox, G. P. Holder, and S. E. Church. Effects of Submillimeter and Radio Point Sources on the Recovery of Sunyaev-Zel’dovich Galaxy Cluster Parameters. ApJ, 612:96–107, September 2004. arXiv:astro-ph/0309643, doi:10.1086/422447. * [254] N. Sehgal, A. Kosowsky, and G. Holder. Constrained Cluster Parameters from Sunyaev-Zel’dovich Observations. ApJ, 635:22–34, December 2005. arXiv:astro-ph/0504274, doi:10.1086/497258. * [255] Nicholas Battaglia, Simone Ferraro, Emmanuel Schaan, and David N. Spergel. Future constraints on halo thermodynamics from combined Sunyaev-Zel’dovich measurements. Journal of Cosmology and Astro-Particle Physics, 2017:040, November 2017. arXiv:1705.05881, doi:10.1088/1475-7516/2017/11/040. * [256] D. Nagai and E. T. Lau. Gas Clumping in the Outskirts of $\Lambda$CDM Clusters. ApJL, 731:L10, April 2011. arXiv:1103.0280, doi:10.1088/2041-8205/731/1/L10. * [257] K. Nelson, E. T. Lau, and D. Nagai. 
Hydrodynamic Simulation of Non-thermal Pressure Profiles of Galaxy Clusters. ApJ, 792:25, September 2014. arXiv:1404.4636, doi:10.1088/0004-637X/792/1/25. * [258] E. T. Lau, D. Nagai, C. Avestruz, K. Nelson, and A. Vikhlinin. Mass Accretion and its Effects on the Self-similarity of Gas Profiles in the Outskirts of Galaxy Clusters. ApJ, 806:68, June 2015. arXiv:1411.5361, doi:10.1088/0004-637X/806/1/68. * [259] C. Avestruz, D. Nagai, E. T. Lau, and K. Nelson. Non-equilibrium Electrons in the Outskirts of Galaxy Clusters. ApJ, 808:176, August 2015. arXiv:1410.8142, doi:10.1088/0004-637X/808/2/176. * [260] K. Basu, M. Sommer, J. Erler, D. Eckert, F. Vazza, B. Magnelli, et al. ALMA-SZ Detection of a Galaxy Cluster Merger Shock at Half the Age of the Universe. ApJ, 829(2):L23, Oct 2016. arXiv:1608.05413, doi:10.3847/2041-8205/829/2/L23. * [261] T. A. Marriage, V. Acquaviva, P. A. R. Ade, P. Aguirre, M. Amiri, J. W. Appel, et al. The Atacama Cosmology Telescope: Sunyaev-Zel’dovich-Selected Galaxy Clusters at 148 GHz in the 2008 Survey. ApJ, 737:61, August 2011. arXiv:1010.1065, doi:10.1088/0004-637X/737/2/61. * [262] K. Vanderlinde, T. M. Crawford, T. de Haan, J. P. Dudley, L. Shaw, P. A. R. Ade, et al. Galaxy Clusters Selected with the Sunyaev-Zel’dovich Effect from 2008 South Pole Telescope Observations. ApJ, 722:1180–1196, October 2010. arXiv:1003.0003, doi:10.1088/0004-637X/722/2/1180. * [263] Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud, M. Ashdown, et al. Planck 2013 results. XXIX. The Planck catalogue of Sunyaev-Zeldovich sources. A$\&$A, 571:A29, November 2014. arXiv:1303.5089, doi:10.1051/0004-6361/201321523. * [264] J. Schaye, R. A. Crain, R. G. Bower, M. Furlong, M. Schaller, T. Theuns, et al. The EAGLE project: simulating the evolution and assembly of galaxies and their environments. MNRAS, 446:521–554, January 2015. arXiv:1407.7040, doi:10.1093/mnras/stu2058. * [265] D. Nelson, A. Pillepich, V. Springel, R. Weinberger, L. Hernquist, R. Pakmor, et al. First results from the IllustrisTNG simulations: the galaxy colour bimodality. MNRAS, 475:624–647, March 2018. arXiv:1707.03395, doi:10.1093/mnras/stx3040. * [266] Jonathan J. Davies, Robert A. Crain, Ian G. McCarthy, Benjamin D. Oppenheimer, Joop Schaye, Matthieu Schaller, et al. The gas fractions of dark matter haloes hosting simulated L⋆ galaxies are governed by the feedback history of their black holes. MNRAS, 485:3783–3793, May 2019. arXiv:1810.07696, doi:10.1093/mnras/stz635. * [267] A. Pillepich, V. Springel, D. Nelson, S. Genel, J. Naiman, R. Pakmor, et al. Simulating galaxy formation with the IllustrisTNG model. MNRAS, 473:4077–4106, January 2018. arXiv:1703.02970, doi:10.1093/mnras/stx2656. * [268] Nicolas B. Cowan, Gil Holder, and Nathan A. Kaib. Cosmologists in Search of Planet Nine: The Case for CMB Experiments. ApJ, 822:L2, May 2016. arXiv:1602.05963, doi:10.3847/2041-8205/822/1/L2. * [269] Eric J. Baxter, Bhuvnesh Jain, Cullen Blake, Gary Bernstein, Mark Devlin, and Gil Holder. Planet X in CMB and Optical Galaxy Surveys. arXiv e-prints, page arXiv:1812.08701, December 2018. arXiv:1812.08701. * [270] D. W. Gerdes, M. Sako, S. Hamilton, K. Zhang, T. Khain, J. C. Becker, et al. Discovery and Physical Characterization of a Large Scattered Disk Object at 92 au. ApJ, 839:L15, April 2017. arXiv:1702.00731, doi:10.3847/2041-8213/aa64d8. * [271] Brian D. Metzger, P. K. G. Williams, and Edo Berger. Extragalactic Synchrotron Transients in the Era of Wide-field Radio Surveys. I. 
Detection Rates and Light Curve Characteristics. ApJ, 806(2):224, Jun 2015. arXiv:1502.01350, doi:10.1088/0004-637X/806/2/224. * [272] Tanmoy Laskar, Kate D. Alexander, Edo Berger, Cristiano Guidorzi, Raffaella Margutti, Wen-fai Fong, et al. First ALMA Light Curve Constrains Refreshed Reverse Shocks and Jet Magnetization in GRB 161219B. ApJ, 862(2):94, Aug 2018. arXiv:1808.09476, doi:10.3847/1538-4357/aacbcc. * [273] X. Chen, J. P. Rachen, M. López-Caniego, C. Dickinson, T. J. Pearson, L. Fuhrmann, et al. Long-term variability of extragalactic radio sources in the Planck Early Release Compact Source Catalogue. A$\&$A, 553:A107, May 2013. arXiv:1302.2114, doi:10.1051/0004-6361/201220517. * [274] G. Madejski and M. Sikora. Gamma-Ray Observations of Active Galactic Nuclei. ARAA, 54:725–760, September 2016. doi:10.1146/annurev-astro-081913-040044. * [275] Roger Blandford, David Meier, and Anthony Readhead. Relativistic Jets in Active Galactic Nuclei. arXiv e-prints, page arXiv:1812.06025, Dec 2018. arXiv:1812.06025. * [276] A. A. Abdo, M. Ackermann, M. Ajello, M. Axelsson, L. Baldini, J. Ballet, et al. A change in the optical polarization associated with a $\gamma$-ray flare in the blazar 3C279. Nature, 463(7283):919–923, Feb 2010. arXiv:1004.3828, doi:10.1038/nature08841. * [277] A. Young. Astrophysics: Cosmic jet engines. Nature, 463:886–887, February 2010. doi:10.1038/463886a. * [278] W. Fong, E. Berger, R. Margutti, and B. A. Zauderer. A Decade of Short-duration Gamma-Ray Burst Broadband Afterglows: Energetics, Circumburst Densities, and Jet Opening Angles. ApJ, 815:102, December 2015. arXiv:1509.02922, doi:10.1088/0004-637X/815/2/102. * [279] Gilbert Holder, Edo Berger, Lindsey Bleem, Thomas M. Crawford, Douglas Scott, and Nathan Whitehorn. Tracking the time-variable Millimeter-wave sky with CMB experiments. In Bulletin of the American Astronomical Society, volume 51, page 331, May 2019. doi:https://baas.aas.org/wp-content/uploads/2019/05/331_holder.pdf. * [280] Brian D. Metzger, Peter K. G. Williams, and Edo Berger. Extragalactic Synchrotron Transients in the era of Wide-field Radio Surveys. I. Detection Rates and Light Curve Characteristics. Astrophys. J., 806(2):224, 2015. arXiv:1502.01350, doi:10.1088/0004-637X/806/2/224. * [281] Gilbert Holder, Edo Berger, Lindsey Bleem, Thomas M. Crawford, Douglas Scott, and Nathan Whitehorn. Tracking the time-variable Millimeter-wave sky with CMB experiments. Bulletin of the American Astronomical Society, 51(3):331, May 2019. * [282] T. Eftekhari, E. Berger, B. D. Metzger, T. Laskar, V. A. Villar, K. D. Alexander, et al. Extragalactic Millimeter Transients in the Era of Next Generation CMB Surveys. 10 2021. arXiv:2110.05494. * [283] P. M. Chichura, A Foster, C Patel, N Ossa-Jaen, PAR Ade, Z Ahmed, et al. Asteroid Measurements at Millimeter Wavelengths with the South Pole Telescope. 2022\. arXiv:2202.01406. * [284] Andrew W. Blain, Ian Smail, R. J. Ivison, J. P. Kneib, and David T. Frayer. Submillimeter galaxies. Phys. Rep., 369(2):111–176, October 2002. arXiv:astro-ph/0202228, doi:10.1016/S0370-1573(02)00134-5. * [285] Caitlin M. Casey, Desika Narayanan, and Asantha Cooray. Dusty star-forming galaxies at high redshift. Phys. Rep., 541(2):45–161, August 2014. arXiv:1402.1456, doi:10.1016/j.physrep.2014.02.009. * [286] J. A. Zavala, I. Aretxaga, J. S. Dunlop, M. J. Michałowski, D. H. Hughes, N. Bourne, et al. The SCUBA-2 Cosmology Legacy Survey: The EGS deep field - II. 
Morphological transformation and multiwavelength properties of faint submillimetre galaxies. MNRAS, 475(4):5585–5602, April 2018. arXiv:1801.07718, doi:10.1093/mnras/sty217. * [287] M. L. Strandet, A. Weiss, C. De Breuck, D. P. Marrone, J. D. Vieira, M. Aravena, et al. ISM Properties of a Massive Dusty Star-forming Galaxy Discovered at z $\sim$ 7\. ApJL, 842(2):L15, June 2017. arXiv:1705.07912, doi:10.3847/2041-8213/aa74b0. * [288] D. P. Marrone, J. S. Spilker, C. C. Hayward, J. D. Vieira, M. Aravena, M. L. N. Ashby, et al. Galaxy growth in a massive halo in the first billion years of cosmic history. Nature, 553(7686):51–54, January 2018. arXiv:1712.03020, doi:10.1038/nature24629. * [289] C. Gruppioni, F. Pozzi, G. Rodighiero, I. Delvecchio, S. Berta, L. Pozzetti, et al. The Herschel PEP/HerMES luminosity function - I. Probing the evolution of PACS selected Galaxies to z $\simeq$ 4\. MNRAS, 432(1):23–52, June 2013. arXiv:1302.5209, doi:10.1093/mnras/stt308. * [290] S. C. Chapman, A. W. Blain, Ian Smail, and R. J. Ivison. A Redshift Survey of the Submillimeter Galaxy Population. ApJ, 622(2):772–796, April 2005. arXiv:astro-ph/0412573, doi:10.1086/428082. * [291] A. L. R. Danielson, A. M. Swinbank, Ian Smail, J. M. Simpson, C. M. Casey, S. C. Chapman, et al. An ALMA Survey of Submillimeter Galaxies in the Extended Chandra Deep Field South: Spectroscopic Redshifts. ApJ, 840(2):78, May 2017. arXiv:1705.03503, doi:10.3847/1538-4357/aa6caf. * [292] Christina C. Williams, Ivo Labbe, Justin Spilker, Mauro Stefanon, Joel Leja, Katherine Whitaker, et al. Discovery of a Dark, Massive, ALMA-only Galaxy at z $\sim$ 5-6 in a Tiny 3 mm Survey. ApJ, 884(2):154, October 2019. arXiv:1905.11996, doi:10.3847/1538-4357/ab44aa. * [293] Caitlin M. Casey et al. Mapping Obscuration to Reionization with ALMA (MORA): 2 mm Efficiently Selects the Highest-redshift Obscured Galaxies. Astrophys. J., 923(2):215, 2021. arXiv:2110.06930, doi:10.3847/1538-4357/ac2eb4. * [294] Asantha Cooray, Jae Calanog, Julie L. Wardlow, J. Bock, C. Bridge, D. Burgarella, et al. HerMES: The Rest-frame UV Emission and a Lensing Model for the z = 6.34 Luminous Dusty Starburst Galaxy HFLS3. ApJ, 790(1):40, July 2014. arXiv:1404.1378, doi:10.1088/0004-637X/790/1/40. * [295] Caitlin M. Casey, Jorge A. Zavala, Manuel Aravena, Matthieu Béthermin, Karina I. Caputi, Jaclyn B. Champagne, et al. Physical Characterization of an Unlensed, Dusty Star-forming Galaxy at z = 5.85. ApJ, 887(1):55, December 2019. arXiv:1910.13331, doi:10.3847/1538-4357/ab52ff. * [296] Caitlin Casey, Peter Capak, Johannes Staguhn, Lee Armus, Andrew Blain, Matthieu Bethermin, et al. Taking Census of Massive, Star-Forming Galaxies formed less than 1 Gyr After the Big Bang. Bulletin of the American Astronomical Society, 51(3):212, May 2019. arXiv:1903.05634. * [297] J. A. Zavala, C. M. Casey, S. M. Manning, M. Aravena, M. Bethermin, K. I. Caputi, et al. The Evolution of the IR Luminosity Function and Dust-obscured Star Formation over the Past 13 Billion Years. ApJ, 909(2):165, March 2021. arXiv:2101.04734, doi:10.3847/1538-4357/abdb27. * [298] P. Ade, J. Aguirre, Z. Ahmed, S. Aiola, A. Ali, D. Alonso, et al. The Simons Observatory: science goals and forecasts. JCAP, 2:056, February 2019. arXiv:1808.07445, doi:10.1088/1475-7516/2019/02/056. * [299] S. W. Henderson, R. Allison, J. Austermann, T. Baildon, N. Battaglia, J. A. Beall, et al. Advanced ACTPol Cryogenic Detector Arrays and Readout. Journal of Low Temperature Physics, 184(3-4):772–779, Aug 2016\. 
arXiv:1510.02809, doi:10.1007/s10909-016-1575-z. * [300] B. A. Benson, P. A. R. Ade, Z. Ahmed, S. W. Allen, K. Arnold, J. E. Austermann, et al. SPT-3G: a next-generation cosmic microwave background polarization experiment on the South Pole telescope. In Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII, volume 9153 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, page 91531P, Jul 2014. arXiv:1407.2973, doi:10.1117/12.2057305. * [301] Roger O’Brient, Peter Ade, Kam Arnold, Jennifer Edwards, Greg Engargiola, William L. Holzapfel, et al. A dual-polarized broadband planar antenna and channelizing filter bank for millimeter wavelengths. Applied Physics Letters, 102(6):063506, Feb 2013. arXiv:1302.0325, doi:10.1063/1.4791692. * [302] Kevork Abazajian, Graeme Addison, Peter Adshead, Zeeshan Ahmed, Steven W. Allen, David Alonso, et al. CMB-S4 Science Case, Reference Design, and Project Plan. arXiv e-prints, page arXiv:1907.04473, Jul 2019. arXiv:1907.04473. * [303] Grace E. Chesmore, Tony Mroczkowski, Jeff McMahon, Shreya Sutariya, Alec Josaitis, and Leif Jensen. Reflectometry Measurements of the Loss Tangent in Silicon at Millimeter Wavelengths. arXiv e-prints, page arXiv:1812.03785, Dec 2018. arXiv:1812.03785. * [304] Edward J. Wollack, Giuseppe Cataldo, Kevin H. Miller, and Manuel A. Quijada. Infrared properties of high-purity silicon. Opt. Lett., 45(17):4935–4938, Sep 2020. URL: http://opg.optica.org/ol/abstract.cfm?URI=ol-45-17-4935, doi:10.1364/OL.393847. * [305] Michael D. Niemack. Designs for a large-aperture telescope to map the CMB 10x faster. Appl. Opt., 55(7):1686, Mar 2016. arXiv:1511.04506, doi:10.1364/AO.55.001686. * [306] Pablo Gómez Toribio, Tony Mroczkowski, Anna Cabré, Carlos De Breuck, Ricardo Bustos, and Rodrigo Reeves. Two decades of km-resolution satellite-based measurements of the precipitable water vapor above the Atacama Desert. arXiv e-prints, page arXiv:2103.03917, March 2021. arXiv:2103.03917. * [307] R. Bustos, M. Rubio, A. Otárola, and N. Nagar. Parque Astronómico de Atacama: An Ideal Site for Millimeter, Submillimeter, and Mid-Infrared Astronomy. PASP, 126(946):1126, Dec 2014. arXiv:1410.2451, doi:10.1086/679330. * [308] Oliver P. Lay and Nils W. Halverson. The Impact of Atmospheric Fluctuations on Degree-Scale Imaging of the Cosmic Microwave Background. ApJ, 543(2):787–798, Nov 2000. arXiv:astro-ph/9905369, doi:10.1086/317115. * [309] J. E. Austermann, J. A. Beall, S. A. Bryan, B. Dober, J. Gao, G. Hilton, et al. Millimeter-Wave Polarimeters Using Kinetic Inductance Detectors for TolTEC and Beyond. Journal of Low Temperature Physics, 193(3-4):120–127, Nov 2018\. arXiv:1803.03280, doi:10.1007/s10909-018-1949-5.
# High-Resolution On-Chip Thin-Film Lithium Niobate Single-Photon Buffer

Cagin Ekici,1 Yonghe Yu,1 Jeremy C. Adcock,1 Alif Laila Muthali,1 Heyun Tan,2 Hao Li,2 Leif Katsuo Oxenløwe,1 Xinlun Cai,2 and Yunhong Ding1,*

1 Center for Silicon Photonics for Optical Communication (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Lyngby, Denmark

2 State Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China

<EMAIL_ADDRESS>

###### Abstract

We experimentally demonstrate a room-temperature, voltage-controlled, short-term quantum photonics memory on a lithium niobate chip. Our chip is capable of resolving 100 ps time steps with 0.74 dB loss per round trip.

## 1 Introduction

Short-term quantum photonics memories, or single-photon buffers, are essential for quantum technologies, since they provide a synchronization scheme for matching independent systems operating at different speeds. To optimize two-photon interference from distant sources in a quantum network, photon buffers with high-resolution configurability are needed to store one photon until the other is transmitted [1]. In addition, entangling quantum operations in photonics are generally probabilistic, and such short-term memories play a crucial role in buffering these probabilistic gates. Furthermore, approaching ideal single-photon sources based on spontaneous parametric pair generation through temporal multiplexing requires low-loss and controllable photon storage [2, 3].

To date, optical buffers based on delay lines [4], slow light [5], and Bragg-scattering four-wave mixing [6] have been introduced. All of these techniques either incur excessive loss, which is unsuitable for quantum applications, or are overly complex. Although atomic-cloud optical memories are strong contenders, they are difficult to integrate and operate only at specific wavelengths. Thin-film lithium niobate (TFLN) integrated photonics platforms are therefore ideal candidates for a single-photon buffer, since they offer voltage-controlled, low-loss, high-speed interferometric switching. In this paper, we experimentally demonstrate an on-chip TFLN single-photon buffer based on a recirculating 1 cm-long loop with a round-trip time of 100 ps, i.e. the overall delay can be controlled with 100 ps time resolution, with storage times of up to 1.4 ns (14 round trips).

## 2 Experimental Setup and Results

The TFLN single-photon buffer was fabricated on a commercial lithium niobate on insulator (LNOI) platform with a top LN thickness of 600 nm. The switch consists of a 4.5 mm-long LN phase modulator driven in push-pull mode, exhibiting a bandwidth of more than 40 GHz, as shown in Fig. 1 (b), and the whole-chip insertion loss is less than 6.2 dB (including the coupling loss).

Fig. 1: (a) Schematic of the experimental setup with a photograph of the TFLN chip containing several buffers. (b) Electro-optic bandwidth (S21) measurement. Abbreviations: FPGA: Field-Programmable Gate Array, VOA: Variable Optical Attenuator, UC: Ultrafast Comparator, EA: Electronic Amplifier, TFLN S-PB: TFLN Single-Photon Buffer.

The experimental setup is shown in Fig. 1 (a). We conduct the experiments using heavily attenuated light from a laser (1550 nm, 40 fs pulse duration) at a 100 MHz repetition rate, i.e. a weak coherent state, instead of true single-photon quantum states.
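As a rough numerical illustration of the figures quoted above (100 ps round-trip time, 0.74 dB fitted loss per round trip, and at most 6.2 dB chip insertion loss), the following minimal sketch, which is not the authors' analysis code, tabulates the expected photon transmission for each storage setting up to the demonstrated 14 round trips.

```python
# Loss-budget sketch for the buffer described above (illustrative only).
# Numbers are those quoted in the text; the insertion loss is taken at
# its stated upper bound of 6.2 dB.
ROUND_TRIP_PS = 100.0      # delay added per circulation of the 1 cm loop
LOSS_DB_PER_TRIP = 0.74    # fitted loss per round trip
INSERTION_LOSS_DB = 6.2    # coupling + on-chip loss, upper bound

for n_trips in range(15):  # 0 ... 14 round trips (up to 1.4 ns storage)
    storage_ps = n_trips * ROUND_TRIP_PS
    total_db = INSERTION_LOSS_DB + n_trips * LOSS_DB_PER_TRIP
    transmission = 10.0 ** (-total_db / 10.0)   # surviving photon fraction
    print(f"{n_trips:2d} trips  {storage_ps:6.0f} ps  "
          f"{total_db:5.2f} dB  T = {transmission:.3f}")
```

Under these assumptions, even the longest demonstrated storage (1.4 ns) keeps the incremental loss near 10 dB on top of the fixed insertion loss, which is what makes the 100 ps-resolution delay practical for synchronization tasks.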
The switch control signals are generated via an FPGA and fed into an ultrafast comparator to obtain fast fall and rise times. Afterwards, the fast signals are amplified to the $V_{\pi}$ of the TFLN switch and applied to the chip through high-speed radio-frequency (RF) probes using micropositioners. After storage and read-out, photons are detected by superconducting nanowire single-photon detectors and recorded by a time-tagger, which produces a real-time histogram of the detection events.

Fig. 2: The experimental results of single-photon storage: (a) Normalized histogram counts as a function of storage time. (b) The peak amplitudes of the normalized histogram counts, revealing the loss for different storage times. (c) Second-order correlation function $g^{(2)}(0)$ of the read-out single photons for different storage times, with error bars.

The experimental results of single-photon storage with our TFLN chip are shown in Fig. 2. Normalized histogram counts for the first 5 round trips are depicted in Fig. 2 (a). The round-trip loss performance of the chip as a function of time is shown in Fig. 2 (b); the peak values after each round trip are fitted by a line with a slope of 0.74 dB per round trip. We also measure the second-order correlation function $g^{(2)}(0)$ after each round trip by adding a 50/50 fiber-optic beam splitter before detection, see Fig. 2 (c). As expected, $g^{(2)}(0)\approx 1$, since our TFLN photonics chip is illuminated by a weak coherent state. Since $g^{(2)}(0)\approx 1$ remains constant for every round trip, we infer that the photon statistics do not change significantly with storage time and that there is no substantial optical background noise, owing to the absence of an optical pump beam [7].

## 3 Conclusion

We present an experimental study of a recirculating on-chip TFLN single-photon buffer enabling single photons to be captured, stored, and read out on demand with 100 ps time-step resolution in a reliable way. Our chip is a robust and scalable architecture operating at room temperature with a low loss of about 0.74 dB per round trip.

## References

* [1] K. Azuma, K. Tamaki, and H.-K. Lo, “All-photonic quantum repeaters,” Nat. Commun. 6, 6787 (2015).
* [2] J. C. Adcock, D. Bacco, and Y. Ding, “Temporal Multiplexing Enhancement with a Silicon Waveguide Single Photon Source,” _CLEO_, JTu3B.1 (2022).
* [3] J. C. Adcock, D. Bacco, and Y. Ding, “Enhancement of a silicon waveguide single photon source by temporal multiplexing,” Quantum Science and Technology 7, 025025 (2022).
* [4] E. F. Burmeister, D. J. Blumenthal, and J. E. Bowers, “A comparison of optical buffering technologies,” Opt. Switching Networking 6, 10–18 (2008).
* [5] R. S. Tucker, P.-C. Ku, and C. J. Chang-Hasnain, “Slow-light optical buffers: Capabilities and fundamental limitations,” J. Lightwave Technol. 23, 4046–4066 (2005).
* [6] S. Clemmen, A. Farsi, S. Ramelow, and A. L. Gaeta, “All-optically tunable buffer for single photons,” Opt. Lett. 43, 2138 (2018).
* [7] C. Kupchak, T. Mittiga, B. Jordaan, M. Namazi, C. Nölleke, and E. Figueroa, “Room-Temperature Single-photon level Memory for Polarization States,” Sci. Rep. 5, 7658 (2015).
| Setting | Input state | Output | Queries to $C[O_{H_{m}}]$ | Queries to $U_{\psi}$ |
| --- | --- | --- | --- | --- |
| Floquet, $H(t)$ (Section VIII.1) | $\sum_{n^{\prime}}c_{n^{\prime}}\ket{\phi_{n^{\prime}}(0)}$, $|c_{n}|\geq\gamma$ | $\ket{\phi_{n}(t)}+\order{\delta}$ | $\frac{\alpha T+\log(1/\Delta)}{\gamma\Delta}\log\left(\frac{1}{\delta}\right)\log\left(\frac{1}{\gamma\delta}\right)$ | $\frac{1}{\gamma}\log\left(\frac{1}{\delta}\right)$ |
| Floquet, $H(t)$ (Section VIII.2) | $\sum_{n^{\prime}}c_{n^{\prime}}\ket{\phi_{n^{\prime}}(0)}$, $|c_{n}|\geq\gamma$ | $\ket{\Phi_{n}}+\order{\delta}$ | $\frac{\alpha T+N+\log(1/\Delta\gamma\delta)}{\gamma\Delta}\log\left(\frac{1}{\delta}\right)\log\left(\frac{1}{\gamma\delta}\right)$ | $\frac{1}{\gamma}\log\left(\frac{1}{\delta}\right)$ |

Table 2: Cost of eigenstate preparation for time-independent and time-periodic Hamiltonians under a promised gap $\Delta$. A preferable Floquet eigenstate, either $\ket{\phi_{n}(t)}\in\mathcal{H}$ or $\ket{\Phi_{n}}\in\mathcal{H}^{\infty}$, can be prepared with nearly optimal query complexity in the parameters. The difference from time-independent cases is at most logarithmic corrections in all the parameters $\gamma,\Delta,\delta$.

### VIII.2 Eigenstate preparation of $\ket{\Phi_{n}}$

The preparation of a preferable Floquet eigenstate $\ket{\Phi_{n}}$ living in the Sambe space is also formulated via the Floquet QPE. Let us assume without loss of generality that the preferable quasienergy $\epsilon_{n}$ is located in $[(-1/2+\Delta)\omega,(1/2-\Delta)\omega)$ (if not, we shift the origin of the BZ by multiplying by a global phase). Then, we can use the Floquet QPE without rounding promise from Section VII.4, where the parameters are chosen as $\varepsilon\leftarrow\Delta/2$ and $\delta\leftarrow\gamma\delta$. We obtain a unitary operation $U_{\mathrm{FQPE}}^{\mathrm{Sambe}}$ such that

$U_{\mathrm{FQPE}}^{\mathrm{Sambe}}\ket{0}_{g}\ket{0}_{b}\ket{0}_{f}\ket{\psi}=\sum_{n^{\prime}}c_{n^{\prime}}\left(\sum_{i=0,1}p_{i}^{n^{\prime}}\ket{g_{i}^{n^{\prime}}}_{g}\ket{(\epsilon_{n^{\prime}})_{bi}}_{b}\right)\ket{\Phi_{n^{\prime}}}+\order{\gamma\delta}.$

The promised gap $\Delta$ and the accuracy of the Floquet QPE ensure that the projection $\Pi_{n,\Delta}\otimes I_{g}\otimes I_{f}$ of Eq. (155) keeps only the preferable component with $\ket{\Phi_{n}}$ in the above state, which allows us to form the block-encoding from $U_{\mathrm{FQPE}}^{\mathrm{Sambe}}$ as in Eq. (157). Again, we can run QAA based on QSVT, which performs the transformation

$\ket{\psi}\to\left(\sum_{i=0,1}p_{i}^{n}\ket{g_{i}^{n}}_{g}\ket{(\epsilon_{n})_{bi}}_{b}\right)\otimes\ket{\Phi_{n}}+\order{\delta}.$ (160)

Discarding the ancilla systems other than $f$ completes the accurate preparation of the preferable Floquet eigenstate $\ket{\Phi_{n}}$ (or, more precisely, the eigenstate of the truncated Floquet Hamiltonian).

The cost is evaluated in a similar way to Section VIII.1. The query complexity in $C[O_{H_{m}}]$ or its inverse amounts to the product of that of the Floquet QPE,

$\order{\frac{\alpha T+N+\log(1/\Delta\gamma\delta)}{\Delta}\log(1/\gamma\delta)},$ (161)

and that of QAA, $\order{q}=\order{\gamma^{-1}\log(1/\delta)}$. The query complexity in the state preparation unitary $U_{\psi}$ or its inverse, used for the counterpart of the parametrized unitary $R_{\Pi_{\psi}}(\theta)$, is $\order{q}=\order{\gamma^{-1}\log(1/\delta)}$. These results are summarized in the last row of Table 2. Like the preparation of $\ket{\phi_{n}(t)}$ in Section VIII.1, the preparation of $\ket{\Phi_{n}}$ can also be performed with nearly optimal query complexity.
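To make this counting concrete, the following sketch (our illustration, not part of the paper) evaluates the total number of queries to $C[O_{H_{m}}]$, i.e. the Floquet QPE cost of Eq. (161) multiplied by the QAA repetition count $\order{\gamma^{-1}\log(1/\delta)}$, with all suppressed big-O constants set to 1; the values of $\alpha T$ and $N$ below are arbitrary examples.

```python
import math

def fqpe_queries(alphaT, N, Delta, gamma, delta):
    # Floquet QPE cost, Eq. (161), with epsilon <- Delta/2, delta <- gamma*delta
    return (alphaT + N + math.log(1.0 / (Delta * gamma * delta))) / Delta \
        * math.log(1.0 / (gamma * delta))

def qaa_rounds(gamma, delta):
    # QAA repetitions q = O(gamma^{-1} log(1/delta))
    return math.log(1.0 / delta) / gamma

alphaT, N, delta = 10.0, 8.0, 1e-3        # illustrative values only
for Delta in (1e-1, 1e-2, 1e-3):          # promised quasienergy gap
    for gamma in (0.5, 0.1):              # lower bound on the overlap |c_n|
        total = fqpe_queries(alphaT, N, Delta, gamma, delta) * qaa_rounds(gamma, delta)
        print(f"Delta={Delta:.0e}, gamma={gamma}: ~{total:.2e} queries to C[O_Hm]")
```

The printout makes the near-optimal scaling visible: shrinking $\Delta$ by a factor of 10 inflates the count essentially linearly, up to the logarithmic corrections.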
The scaling in the gap $\Delta$ is $\order{\Delta^{-1}\log\Delta^{-1}}$ and that in the overlap $\gamma$ is $\order{\gamma^{-1}\log^{2}\gamma^{-1}}$, both of which differ from the optimal algorithm for time-independent cases by at most logarithmic corrections.

## IX Conclusion and Discussion

Throughout the paper, we have focused on the problem of computing quasienergies and Floquet eigenstates of time-periodic Hamiltonians, and have organized efficient quantum algorithms for it. Our algorithms can coherently output pairs of an accurate quasienergy and the corresponding Floquet eigenstate, a procedure we call "Floquet quantum phase estimation (Floquet QPE)", and can also deterministically prepare a preferable gapped Floquet eigenstate based on the Floquet QPE. Compared to time-independent cases, time-periodic systems carry the difficulty of time dependence, or equivalently the infinite dimensionality of the Sambe space. Nevertheless, these quantum algorithms achieve nearly optimal query complexity, differing from the optimal algorithms for time-independent cases by factors at most logarithmic in all the parameters. Note that this efficiency comes from the interplay of nonequilibrium many-body physics and quantum algorithms: the guaranteed accuracy is derived from the Sambe space formalism in Floquet theory, and the Lieb-Robinson bound or the localization of Floquet eigenstates. The additional computational resources introduced by the Sambe space remain small in the QPE and the QAA, which allows us to solve the time-dependent problems almost as efficiently as the time-independent ones.

The computation of energy eigenvalues and eigenstates has been a central problem in fundamental quantum many-body physics, and offers various applications in condensed matter physics and quantum chemistry. Our quantum algorithms are the first to achieve nearly optimal query complexity for its natural extension to time-periodic systems, which will provide deep insight into the complexity of nonequilibrium systems, such as the sampling complexity of time-periodic systems [66]. We also expect that they will provide a powerful tool for exploring nonequilibrium phases of matter. For instance, in Floquet time-crystalline phases [6, 7, 8], every low-entangled state (e.g. a product state) becomes a superposition of Floquet eigenstates which are vulnerable to noise and measurement (e.g. cat states). Since these Floquet eigenstates have equally large weight and a quasienergy gap $\omega/n$ ($n=2,3,\ldots$), our quantum algorithms will be useful for identifying time crystals by confirming these properties. Similarly, various nonequilibrium phenomena such as Floquet many-body localization [14, 15, 16, 17] and Floquet quantum many-body scars [19, 20] are characterized by Floquet eigenstates, and hence they are also within scope. On the other hand, generic nonintegrable time-periodic Hamiltonians are believed to satisfy the Floquet eigenstate thermalization hypothesis [67, 68], in which every Floquet eigenstate is locally indistinguishable from a trivial infinite-temperature state. Although this is an undesired phenomenon in condensed matter physics, known as heating, our quantum algorithms for nearly optimal Floquet eigenstate preparation will then serve as a source of the randomness appearing in time-periodic Hamiltonians [66].

## Acknowledgment

We thank T. N. Ikeda for fruitful discussions. We also thank S. Kitamura for providing useful information about Floquet theory. K. M. is supported by JST PRESTO Grant No. JPMJPR235A.
## References * Oka and Aoki [2009] T. Oka and H. Aoki, Photovoltaic hall effect in graphene, Phys. Rev. B 79, 081406 (2009). * Oka and Kitamura [2019] T. Oka and S. Kitamura, Floquet Engineering of Quantum Materials, Annual Review of Condensed Matter Physics 10, 387 (2019). * Kitagawa _et al._ [2010] T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Topological characterization of periodically driven quantum systems, Phys. Rev. B: Condens. Matter Mater. Phys. 82, 235114 (2010). * Rudner _et al._ [2013] M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Anomalous edge states and the bulk-edge correspondence for periodically driven two-dimensional systems, Physical Review X 3, 031005 (2013). * Harper _et al._ [2019] F. Harper, R. Roy, M. S. Rudner, and S. L. Sondhi, Topology and Broken Symmetry in Floquet Systems, arXiv:1905.01317 [cond-mat.str-el] (2019). * Khemani _et al._ [2016] V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi, Phase Structure of Driven Quantum Systems, Phys. Rev. Lett. 116, 250401 (2016). * Else _et al._ [2016] D. V. Else, B. Bauer, and C. Nayak, Floquet Time Crystals, Phys. Rev. Lett. 117, 090402 (2016). * Khemani _et al._ [2019] V. Khemani, R. Moessner, and S. L. Sondhi, A Brief History of Time Crystals, arXiv:1910.10745 [cond-mat.str-el] (2019). * Abanin _et al._ [2015] D. A. Abanin, W. De Roeck, and F. Huveneers, Exponentially slow heating in periodically driven Many-Body systems, Phys. Rev. Lett. 115, 256803 (2015). * Mori _et al._ [2016] T. Mori, T. Kuwahara, and K. Saito, Rigorous Bound on Energy Absorption and Generic Relaxation in Periodically Driven Quantum Systems, Phys. Rev. Lett. 116, 120401 (2016). * Kuwahara _et al._ [2016] T. Kuwahara, T. Mori, and K. Saito, Floquet–Magnus theory and generic transient dynamics in periodically driven many-body quantum systems, Ann. Phys. 367, 96 (2016). * Abanin _et al._ [2017a] D. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, A Rigorous Theory of Many-Body Prethermalization for Periodically Driven and Closed Quantum Systems, Commun. Math. Phys. 354, 809 (2017a). * Abanin _et al._ [2017b] D. A. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, Effective hamiltonians, prethermalization, and slow energy absorption in periodically driven many-body systems, Phys. Rev. B Condens. Matter 95, 014112 (2017b). * Ponte _et al._ [2015] P. Ponte, Z. Papić, F. m. c. Huveneers, and D. A. Abanin, Many-body localization in periodically driven systems, Phys. Rev. Lett. 114, 140401 (2015). * Lazarides _et al._ [2015] A. Lazarides, A. Das, and R. Moessner, Fate of many-body localization under periodic driving, Phys. Rev. Lett. 115, 030402 (2015). * Abanin _et al._ [2016] D. A. Abanin, W. De Roeck, and F. Huveneers, Theory of many-body localization in periodically driven systems, Ann. Phys. 372, 1 (2016). * Bordia _et al._ [2017] P. Bordia, H. Lüschen, U. Schneider, M. Knap, and I. Bloch, Periodically driving a many-body localized quantum system, Nat. Phys. 13, 460 (2017). * Mukherjee _et al._ [2020] B. Mukherjee, S. Nandy, A. Sen, D. Sen, and K. Sengupta, Collapse and revival of quantum many-body scars via Floquet engineering, Phys. Rev. B 101, 245107 (2020). * Sugiura _et al._ [2021] S. Sugiura, T. Kuwahara, and K. Saito, Many-body scar state intrinsic to periodically driven system, Phys. Rev. Res. 3, L012010 (2021). * Mizuta _et al._ [2020] K. Mizuta, K. Takasan, and N. Kawakami, Exact Floquet quantum many-body scars under Rydberg blockade, Phys. Rev. Res. 2, 033284 (2020). * Abrams and Lloyd [1999] D. S. Abrams and S. 
Lloyd, Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors, Phys. Rev. Lett. 83, 5162 (1999). * Aspuru-Guzik _et al._ [2005] A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon, Simulated quantum computation of molecular energies, Science 309, 1704 (2005). * Sambe [1973] H. Sambe, Steady states and quasienergies of a quantum-mechanical system in an oscillating field, Phys. Rev. A 7, 2203 (1973). * Guérin and Jauslin [2003] S. Guérin and H. R. Jauslin, Control of quantum dynamics by laser pulses: Adiabatic Floquet theory, in _Advances in chemical physics_ (John Wiley & Sons, Inc., 2003) pp. 147–267. * Mikami _et al._ [2016] T. Mikami, S. Kitamura, K. Yasuda, N. Tsuji, T. Oka, and H. Aoki, Brillouin-Wigner theory for high-frequency expansion in periodically driven systems: Application to Floquet topological insulators, Phys. Rev. B 93, 144307 (2016). * Rodriguez-Vega _et al._ [2018] M. Rodriguez-Vega, M. Lentz, and B. Seradjeh, Floquet perturbation theory: formalism and application to low-frequency limit, New J. Phys. 20, 093022 (2018). * Fauseweh and Zhu [2023] B. Fauseweh and J.-X. Zhu, Quantum computing Floquet energy spectra, Quantum 7, 1063 (2023). * Mizuta and Fujii [2023] K. Mizuta and K. Fujii, Optimal Hamiltonian simulation for time-periodic systems, Quantum 7, 962 (2023). * Eckstein _et al._ [2023] T. Eckstein, R. Mansuroglu, P. Czarnik, J.-X. Zhu, M. J. Hartmann, L. Cincio, A. T. Sornborger, and Z. Holmes, Large-scale simulations of Floquet physics on near-term quantum computers, arXiv:2303.02209 [quant-ph] (2023). * Mizuta [2023] K. Mizuta, Optimal and nearly optimal simulation of multiperiodic time-dependent hamiltonians, Phys. Rev. Res. 5, 033067 (2023). * Lloyd [1996] S. Lloyd, Universal Quantum Simulators, Science 273, 1073 (1996). * Low and Chuang [2017] G. H. Low and I. L. Chuang, Optimal Hamiltonian Simulation by Quantum Signal Processing, Phys. Rev. Lett. 118, 010501 (2017). * Low and Chuang [2019] G. H. Low and I. L. Chuang, Hamiltonian simulation by qubitization, Quantum 3, 163 (2019). * Yu. Kitaev [1995] A. Yu. Kitaev, Quantum measurements and the Abelian Stabilizer Problem, arXiv:quant-ph/9511026 [quant-ph] (1995). * Cleve _et al._ [1998] R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca, Quantum algorithms revisited, Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 454, 339 (1998). * Poulin and Wocjan [2009a] D. Poulin and P. Wocjan, Preparing ground states of quantum many-body systems on a quantum computer, Phys. Rev. Lett. 102, 130503 (2009a). * Ge _et al._ [2019] Y. Ge, J. Tura, and J. I. Cirac, Faster ground state preparation and high-precision ground energy estimation with fewer qubits, J. Math. Phys. 60, 022202 (2019). * Lin and Tong [2020] L. Lin and Y. Tong, Near-optimal ground state preparation, Quantum 4, 372 (2020). * Poulin and Wocjan [2009b] D. Poulin and P. Wocjan, Sampling from the thermal quantum gibbs state and evaluating partition functions with a quantum computer, Phys. Rev. Lett. 103, 220502 (2009b). * Bilgin and Boixo [2010] E. Bilgin and S. Boixo, Preparing thermal states of quantum systems by dimension reduction, Phys. Rev. Lett. 105, 170405 (2010). * Riera _et al._ [2012] A. Riera, C. Gogolin, and J. Eisert, Thermalization in nature and on a quantum computer, Phys. Rev. Lett. 108, 080402 (2012). * Chowdhury and Somma [2016] A. N. Chowdhury and R. D. 
Somma, Quantum algorithms for Gibbs sampling and hitting-time estimation (2016), arXiv:1603.02940 [quant-ph] . * Gilyén _et al._ [2019] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics, in _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_, STOC 2019 (Association for Computing Machinery, New York, NY, USA, 2019) pp. 193–204. * Rall [2021] P. Rall, Faster Coherent Quantum Algorithms for Phase, Energy, and Amplitude Estimation, Quantum 5, 566 (2021). * Martyn _et al._ [2021] J. M. Martyn, Z. M. Rossi, A. K. Tan, and I. L. Chuang, Grand Unification of Quantum Algorithms, PRX Quantum 2, 040203 (2021). * Harrow _et al._ [2009] A. W. Harrow, A. Hassidim, and S. Lloyd, Quantum algorithm for linear systems of equations, Phys. Rev. Lett. 103, 150502 (2009). * Brassard _et al._ [2002] G. Brassard, P. Høyer, M. Mosca, and A. Tapp, Quantum amplitude amplification and estimation, Quantum Computation and Information , 53–74 (2002). * Kitaev _et al._ [2002] A. Y. Kitaev, A. Shen, and M. N. Vyalyi, _Classical and Quantum Computation_ (American Mathematical Soc., 2002). * Zener [1934] C. Zener, A theory of the electrical breakdown of solid dielectrics, Proc. R. Soc. Lond. A 145, 523 (1934). * Wannier [1960] G. H. Wannier, Wave functions and effective hamiltonian for bloch electrons in an electric field, Phys. Rev. 117, 432 (1960). * Hone _et al._ [1997] D. W. Hone, R. Ketzmerick, and W. Kohn, Time-dependent floquet theory and absence of an adiabatic limit, Phys. Rev. A 56, 4045 (1997). * Lindner _et al._ [2017] N. H. Lindner, E. Berg, and M. S. Rudner, Universal chiral quasisteady states in periodically driven many-body systems, Phys. Rev. X 7, 011018 (2017). * Rudner and Lindner [2020] M. S. Rudner and N. H. Lindner, The floquet engineer’s handbook, arXiv:2003.08252 [cond-mat.mes-hall] (2020). * Bhatia [2013] R. Bhatia, _Matrix Analysis_ (Springer Science & Business Media, 2013). * Levante _et al._ [1995] T. O. Levante, M. Baldus, B. H. Meier, and R. R. Ernst, Formalized quantum mechanical Floquet theory and its application to sample spinning in nuclear magnetic resonance, Mol. Phys. 86, 1195 (1995). * Lieb and Robinson [1972] E. H. Lieb and D. W. Robinson, The finite group velocity of quantum spin systems, Commun. Math. Phys. 28, 251 (1972). * Nachtergaele and Sims [2006] B. Nachtergaele and R. Sims, Lieb-Robinson bounds and the exponential clustering theorem, Commun. Math. Phys. 265, 119 (2006). * Gong and Hamazaki [2022] Z. Gong and R. Hamazaki, Bounds in nonequilibrium quantum dynamics, International Journal of Modern Physics B 36, 2230007 (2022). * Childs _et al._ [2021] A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu, Theory of trotter error with commutator scaling, Phys. Rev. X 11, 011020 (2021). * Berry _et al._ [2017] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma, Exponential improvement in precision for simulating sparse Hamiltonians, Forum of Mathematics, Sigma 5, e8 (2017). * Grover [1997] L. K. Grover, Quantum Mechanics Helps in Searching for a Needle in a Haystack, Phys. Rev. Lett. 79, 325 (1997). * Høyer [2000] P. Høyer, Arbitrary phases in quantum amplitude amplification, Phys. Rev. A 62, 052304 (2000). * Long [2001] G. L. Long, Grover algorithm with zero theoretical failure rate, Phys. Rev. A 64, 022307 (2001). 
* Note [1] $R_{\Pi_{0}}(\theta)$ is implemented by $\order{\log L}$ Toffoli gates, $\order{\log L}$ additional qubits, and one controlled-phase gate. The implementation of $R_{\Pi_{1}}(\theta)$ exploits the quantum comparator, which transforms $\ket{x}\ket{0}\to\ket{x}\ket{\mathrm{bool}(x>m)}$ for a $b$-bit variable $x$ and a constant $m$.
* Note [2] We note that the factor $\log(1/\delta)$ is added to the query complexity in Ref. [38] for fair comparison. While Ref. [38] executes the QAA so that the state preparation succeeds with some constant probability, we require the success probability to be greater than $1-\order{\delta}$. The cost of achieving this by QAA appears as an additional factor of $\log(1/\delta)$.
* Tangpanitanon _et al._ [2023] J. Tangpanitanon, S. Thanasilp, M.-A. Lemonde, N. Dangniam, and D. G. Angelakis, Signatures of a sampling quantum advantage in driven quantum many-body systems, Quantum Sci. Technol. 8, 025019 (2023).
* D’Alessio and Rigol [2014] L. D’Alessio and M. Rigol, Long-time behavior of isolated periodically driven interacting lattice systems, Phys. Rev. X 4, 041048 (2014).
* Lazarides _et al._ [2014] A. Lazarides, A. Das, and R. Moessner, Equilibrium states of generic quantum systems subject to periodic driving, Phys. Rev. E 90, 012110 (2014).
* Sachdeva [2013] S. Sachdeva, Faster algorithms via approximation theory, Found. Trends Theor. Comput. Sci. 9, 125 (2013).

## Appendix A Exponential tails of Floquet eigenstates

Theorem 3 in the main text states the exponential decay in $l\in\mathbb{Z}$ of every Fourier component $\ket{\phi_{n}^{l}}$ of a Floquet eigenstate. This is related to localization on a one-dimensional lattice under a linear potential, where the Fourier index $l\in\mathbb{Z}$ plays the role of the coordinate. While several references such as Refs. [51, 52, 53] mention the exponential decay of Floquet eigenstates, we could not find an explicit bound that fits our setup. To keep this paper self-contained, we provide a rigorous proof of Theorem 3. The theorem is restated as follows.

###### Theorem 3. (Tails of Floquet eigenstates)

Suppose that a Floquet eigenstate $\ket{\phi_{n}(t)}$, or equivalently $\ket{\Phi_{n}}$, has quasienergy $\epsilon_{n}\in\mathrm{BZ}=[-\omega/2,\omega/2)$ under the Hamiltonian, Eq. (27). Then, every Fourier component decays exponentially as

$\norm{\ket{\phi_{n}^{l}}}\leq\exp\left(-\frac{|l|-1/2}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right).$ (162)

The proof consists of three main steps. In the first step, we show the exponential decay for each eigenstate of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$ instead of the exact one (Proposition A1). The second step is to show that the eigenspace of the truncated Floquet Hamiltonian well approximates the subspace spanned by the exact Floquet eigenstates (Proposition A2). Finally, we combine them and prove that the Floquet eigenstates must also exhibit the exponential decay of Theorem 3. We now provide the first step and its derivation.

###### Proposition A1. (Truncated Floquet eigenstates)

Let $\ket{\tilde{\Phi}_{n}^{L}}\in\mathcal{H}^{L}$ be an eigenstate of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$, satisfying $H_{\mathrm{F}}^{L}\ket{\tilde{\Phi}_{n}^{L}}=\tilde{\epsilon}_{n}^{L}\ket{\tilde{\Phi}_{n}^{L}}$.
We define the projections onto a given Fourier index $l\in\mathbb{Z}$ and onto a set of eigenvalues $E\subset\mathbb{R}$, respectively, by

$P_{l}=\ket{l}\bra{l}_{f}\otimes I,$ (163)

$\tilde{P}^{L}(E)=\sum_{n:\tilde{\epsilon}_{n}^{L}\in E}\ket{\tilde{\Phi}_{n}^{L}}\bra{\tilde{\Phi}_{n}^{L}}.$ (164)

Then, for arbitrary $l\in[L]$,

$\norm{P_{l}\tilde{P}^{L}(E)}\leq\exp\left(-\frac{|l|-\epsilon_{\mathrm{max}}/\omega}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right)$ (165)

is satisfied, with $\epsilon_{\mathrm{max}}=\max_{\epsilon\in E}(|\epsilon|)$.

Proof of Proposition A1.— We start with the case $l\geq 0$. For arbitrary $\lambda\in\mathbb{R}$, we obtain the following inequality:

$\norm{P_{l}\tilde{P}^{L}(E)}=\norm{P_{l}e^{\lambda H_{\mathrm{F}}^{L}}e^{-\lambda H_{\mathrm{F}}^{L}}\tilde{P}^{L}(E)}\leq\norm{P_{l}e^{\lambda H_{\mathrm{F}}^{L}}}\cdot\norm{\sum_{\tilde{\varepsilon}_{n}\in E}e^{-\lambda\tilde{\varepsilon}_{n}}\ket{\tilde{\Phi}_{n}^{L}}\bra{\tilde{\Phi}_{n}^{L}}}\leq e^{\max_{\varepsilon\in E}(|\lambda\varepsilon|)}\sqrt{\norm{\bra{l}_{f}\,e^{2\lambda H_{\mathrm{F}}^{L}}\ket{l}_{f}}}.$ (166)

We now evaluate a bound on $\norm{\bra{l}_{f}\,e^{2\lambda H_{\mathrm{F}}^{L}}\ket{l}_{f}}$. Splitting the Floquet Hamiltonian as $H_{\mathrm{F}}^{L}=H_{\mathrm{Add}}^{L}-H_{\mathrm{LP}}^{L}$ with

$H_{\mathrm{Add}}^{L}=\sum_{|m|\leq M}\sum_{l\in[L];\,l+m\in[L]}\ket{l+m}\bra{l}_{f}\otimes H_{m},$ (167)

$H_{\mathrm{LP}}^{L}=\sum_{l\in[L]}l\omega\ket{l}\bra{l}_{f}\otimes I,$ (168)

we use an interaction picture based on $H_{\mathrm{LP}}^{L}$. The imaginary-time evolution $e^{2\lambda H_{\mathrm{F}}^{L}}$ is rewritten as

$e^{2\lambda H_{\mathrm{F}}^{L}}=e^{-2\lambda H_{\mathrm{LP}}^{L}}\,\mathcal{T}\exp\left(\int_{0}^{2\lambda}H_{\mathrm{Add},I}^{L}(\tau)\differential\tau\right).$ (169)

Here, $H_{\mathrm{Add},I}^{L}(\tau)$ denotes the interaction-picture Hamiltonian,

$H_{\mathrm{Add},I}^{L}(\tau)=e^{\tau H_{\mathrm{LP}}^{L}}H_{\mathrm{Add}}^{L}e^{-\tau H_{\mathrm{LP}}^{L}}=\sum_{|m|\leq M}\sum_{l\in[L];\,l+m\in[L]}\ket{l+m}\bra{l}_{f}\otimes e^{m\omega\tau}H_{m},$ (170)

and its norm is bounded by

$\norm{H_{\mathrm{Add},I}^{L}(\tau)}\leq\sum_{|m|\leq M}e^{m\omega\tau}\alpha=\frac{\sinh\left((2M+1)\omega\tau/2\right)}{\sinh(\omega\tau/2)}\alpha\leq(2M+1)\alpha\sinh 1,$ (171)

for $\tau\in[0,2/\{(2M+1)\omega\}]$. Finally, by setting $\lambda=1/\{(2M+1)\omega\}$, we get the bound,

$\norm{\bra{l}_{f}\,e^{2\lambda H_{\mathrm{F}}^{L}}\ket{l}_{f}}\leq\exp\left(\frac{-2l\omega+2(2M+1)\alpha\sinh 1}{(2M+1)\omega}\right),$ (172)

and combining this inequality with Eq. (166) immediately yields the upper bound, Eq. (165). We obtain the same statement for negative integers $l$ by inserting $e^{-\lambda H_{\mathrm{F}}^{L}}e^{\lambda H_{\mathrm{F}}^{L}}$ instead of $e^{\lambda H_{\mathrm{F}}^{L}}e^{-\lambda H_{\mathrm{F}}^{L}}$ in Eq. (166). $\quad\square$
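Proposition A1 is straightforward to probe numerically. The sketch below is our illustration, not part of the paper: it builds the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}=H_{\mathrm{Add}}^{L}-H_{\mathrm{LP}}^{L}$ for a toy driven qubit $H(t)=(D/2)Z+g\cos(\omega t)X$, so that $H_{0}=(D/2)Z$ and $H_{\pm 1}=(g/2)X$ with $M=1$, and prints the Fourier-component norms of an eigenstate whose quasienergy lies near the BZ center. All parameter values are illustrative.

```python
import numpy as np

# Toy check of the exponential tails (Proposition A1 / Theorem 3).
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
D, g, w, L = 1.0, 0.4, 3.0, 20            # drive parameters and Fourier cutoff
Hm = {0: 0.5 * D * Z, 1: 0.5 * g * X, -1: 0.5 * g * X}

ls = list(range(-L, L + 1))
K = np.zeros((2 * len(ls), 2 * len(ls)))   # truncated Floquet Hamiltonian H_F^L
for i, l in enumerate(ls):
    for j, lp in enumerate(ls):
        blk = Hm.get(l - lp, np.zeros((2, 2))).copy()
        if l == lp:
            blk -= l * w * np.eye(2)       # ladder term -l*omega from H_LP^L
        K[2 * i:2 * i + 2, 2 * j:2 * j + 2] = blk

evals, evecs = np.linalg.eigh(K)
n = int(np.argmin(np.abs(evals)))          # quasienergy nearest the BZ center
phi = evecs[:, n].reshape(len(ls), 2)      # Fourier components |phi_n^l>
for i, l in enumerate(ls):
    if abs(l) <= 6:
        print(f"l = {l:+d}:  ||phi_n^l|| = {np.linalg.norm(phi[i]):.3e}")
```

For these weak-driving parameters, the printed norms should fall by orders of magnitude within a few harmonics, roughly at the rate $e^{-|l|/(2M+1)}$ suggested by Eq. (173) below.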
$\quad\square$

For a single eigenstate $\ket{\tilde{\Phi}_{n}^{L}}$, this proposition gives the inequality for $\ket{\tilde{\phi}_{n}^{l,L}}=(\bra{l}_{f}\otimes I)\ket{\tilde{\Phi}_{n}^{L}}$, $\norm{\ket{\tilde{\phi}_{n}^{l,L}}}\leq\exp\left(-\frac{|l|-\tilde{\epsilon}_{n}^{L}/\omega}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right),$ (173) which establishes the exponential decay. Although we would like to prove a similar statement for the Floquet Hamiltonian $H_{\mathrm{F}}$, we note that the truncation is essential in the above proof. In Eq. (166), we insert the exponential functions $e^{\pm\lambda H_{\mathrm{F}}^{L}}$, and they are well defined only when the exponent is bounded. To extend this result to the Floquet eigenstates, we next show that the eigenspace of the truncated Floquet Hamiltonian approximates the exact one as follows. We then prove Theorem 3 from this result.

###### Proposition A2.

(Relation of eigenspaces) For every Floquet eigenstate $\ket{\Phi_{n}}$ having quasienergy $\epsilon_{n}\in\mathrm{BZ}$, $\lim_{L\to\infty}(1-\tilde{P}^{L}(\mathrm{BZ}^{\varepsilon}))\ket{\Phi_{n}}=0,$ (174) is satisfied for arbitrary $\varepsilon>0$, where $\mathrm{BZ}^{\varepsilon}$ is defined by $\mathrm{BZ}^{\varepsilon}=[-(1/2+\varepsilon)\omega,(1/2+\varepsilon)\omega).$ (175)

Proof of Proposition A2.— From the definition of the projection, Eq. (164), the norm is represented by $\norm{(1-\tilde{P}^{L}(\mathrm{BZ}^{\varepsilon}))\ket{\Phi_{n}}}=\sqrt{\sum_{n^{\prime}:\tilde{\epsilon}_{n^{\prime}}^{L}\notin\mathrm{BZ}^{\varepsilon}}|\braket{\tilde{\Phi}_{n^{\prime}}^{L}}{\Phi_{n}}|^{2}}.$ (176) We evaluate an upper bound on each inner product $|\braket{\tilde{\Phi}_{n^{\prime}}^{L}}{\Phi_{n}}|$. Plugging the truncated Floquet Hamiltonian into it, we obtain $\displaystyle|\braket{\tilde{\Phi}_{n^{\prime}}^{L}}{\Phi_{n}}|$ $\displaystyle\leq$ $\displaystyle\frac{|\braket{\tilde{\Phi}_{n^{\prime}}^{L}}{H_{\mathrm{F}}^{L}}{\Phi_{n}}-\epsilon_{n}\braket{\tilde{\Phi}_{n^{\prime}}^{L}}{\Phi_{n}}|}{|\tilde{\epsilon}_{n^{\prime}}^{L}-\epsilon_{n}|}$ (177) $\displaystyle\leq$ $\displaystyle\frac{\norm{(H_{\mathrm{F}}^{L}-\epsilon_{n})\ket{\Phi_{n}}}}{\varepsilon\omega},$ where $\tilde{\epsilon}_{n^{\prime}}^{L}\notin\mathrm{BZ}^{\varepsilon}$ denotes the eigenvalue corresponding to $\ket{\tilde{\Phi}_{n^{\prime}}^{L}}$. The numerator can be evaluated by $\displaystyle(H_{\mathrm{F}}^{L}-\epsilon_{n})\ket{\Phi_{n}}$ $\displaystyle=\sum_{m}\sum_{\begin{subarray}{c}l\in[L]\\ l+m\in[L]\end{subarray}}\ket{l+m}_{f}H_{m}\ket{\phi_{n}^{l}}-\sum_{l\in[L]}(\epsilon_{n}+l\omega)\ket{l}_{f}\ket{\phi_{n}^{l}}$ $\displaystyle=\sum_{l\in[L]}\ket{l}_{f}\left(\sum_{m}H_{m}\ket{\phi_{n}^{l-m}}-(\epsilon_{n}+l\omega)\ket{\phi_{n}^{l}}\right)$ $\displaystyle\qquad+\sum_{|m|\leq M}\left(\sum_{\begin{subarray}{c}l\in[L]\\ l+m\in[L]\end{subarray}}-\sum_{l\in-m+[L]}\right)\ket{l+m}_{f}H_{m}\ket{\phi_{n}^{l}}.$ (178) The first term is exactly zero due to the eigenvalue equation, Eq. (9), for Floquet eigenstates.
In the second term, we use integration by parts based on the analyticity of $\ket{\phi_{n}(t)}$, which implies $\displaystyle\norm{\ket{\phi_{n}^{l}}}$ $\displaystyle\leq$ $\displaystyle\frac{1}{(|l|\omega)^{2}}\max_{t\in[0,T]}\left(\norm{\derivative[2]{t}\ket{\phi_{n}(t)}}\right)$ $\displaystyle\leq$ $\displaystyle\frac{((2M+1)\alpha+\omega/2)^{2}+M(2M+1)\omega\alpha}{(|l|\omega)^{2}}.$ In the second inequality, we use $\ket{\phi_{n}(t)}=e^{i\epsilon_{n}t}U(t;0)\ket{\phi_{n}(0)}$ and the relations $\norm{H(t)}\leq(2M+1)\alpha$, $\norm{H^{\prime}(t)}\leq M(2M+1)\omega\alpha$, and $\epsilon_{n}\in\mathrm{BZ}$. As a result, we arrive at the inequality $\displaystyle\norm{(H_{\mathrm{F}}^{L}-\epsilon_{n})\ket{\Phi_{n}}}$ $\displaystyle\leq$ $\displaystyle\sum_{|m|\leq M}\sum_{|l|\geq L-M}\norm{H_{m}\ket{\phi_{n}^{l}}}$ (180) $\displaystyle\leq$ $\displaystyle\mathrm{Const.}\times\sum_{l=L-M}^{\infty}l^{-2}$ $\displaystyle\leq$ $\displaystyle\frac{\mathrm{Const.}}{L-M}.$ Here, the constant depends on $M$, $\alpha$, and $\omega$ but not on the cutoff $L$. Combining this with Eqs. (176) and (177), we finally obtain the relation $\displaystyle\norm{(1-\tilde{P}^{L}(\mathrm{BZ}^{\varepsilon}))\ket{\Phi_{n}}}$ $\displaystyle\leq$ $\displaystyle\frac{\mathrm{Const.}}{\varepsilon}\sqrt{\frac{2L\cdot\mathrm{dim}(\mathcal{H})}{(L-M)^{2}}}$ (181) $\displaystyle\xrightarrow{L\to\infty}$ $\displaystyle 0,$ where the coefficient in the first line comes from the dimension of the truncated Sambe space. This completes the proof of Proposition A2. $\quad\square$

Proof of Theorem 3.— For an eigenstate in the Sambe space $\ket{\Phi_{n}}$, corresponding to a Floquet eigenstate $\ket{\phi_{n}(t)}$, we choose an arbitrarily large cutoff $L$ such that $l\in[L]$. Then, the norm of each Fourier component is evaluated by $\displaystyle\norm{\ket{\phi_{n}^{l}}}$ $\displaystyle=$ $\displaystyle\norm{P_{l}\ket{\Phi_{n}}}$ $\displaystyle\leq$ $\displaystyle\norm{\tilde{P}^{L}(\mathrm{BZ}^{\varepsilon})P_{l}}+\norm{(1-\tilde{P}^{L}(\mathrm{BZ}^{\varepsilon}))P_{l}\ket{\Phi_{n}}}.$ The first term is bounded by Eq. (165), regardless of $L$, from Proposition A1. The second term goes to zero as $L\to\infty$ by Proposition A2. As a result, the inequality $\norm{\ket{\phi_{n}^{l}}}\leq\exp\left(-\frac{|l|-1/2-\varepsilon}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right)$ (183) holds for arbitrary $\varepsilon>0$, and this completes the proof of Theorem 3. $\quad\square$

## Appendix B Approximate Floquet eigenstate from the truncated Sambe space

In Section IV, we show that the state $\ket{\tilde{\Phi}_{n}^{L}}\in\mathcal{H}^{L}$, an eigenstate of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$, becomes an approximate eigenstate of the Floquet Hamiltonian $H_{\mathrm{F}}$ by Proposition 5. Namely, the truncated Sambe space provides an accurate estimate of each Floquet eigenstate on the Sambe space, $\ket{\Phi_{n}}\in\mathcal{H}^{\infty}$. Here, we prove that it also gives an accurate estimate of each Floquet eigenstate on the physical space, $\ket{\phi_{n}(t)}$. Let $\tilde{\epsilon}_{n}^{L}$ be an eigenvalue of $H_{\mathrm{F}}^{L}$ for the eigenstate $\ket{\tilde{\Phi}_{n}^{L}}\in\mathcal{H}^{L}$. From the exact relation between $\ket{\phi_{n}(t)}$ and $\ket{\Phi_{n}}$ shown in Eqs.
(5) and (8), the Floquet eigenstate in the physical space is expected to be reproduced by $\ket{\tilde{\phi}_{n}^{L}(t)}\equiv e^{i\tilde{\epsilon}_{n}^{L}t}U(t;0)\sum_{l\in[L]}(\bra{l}_{f}\otimes I)\ket{\tilde{\Phi}_{n}^{L}}.$ (184) Each Floquet eigenstate $\ket{\phi_{n}(t)}$ is characterized as an eigenstate of the time-evolution operator $U(t+T;t)$ with the eigenvalue $e^{-i\epsilon_{n}T}$. The state $\ket{\tilde{\phi}_{n}^{L}(t)}$ provides an accurate estimate of the exact Floquet eigenstate $\ket{\phi_{n}(t)}$ in that it becomes an approximate eigenstate of $U(t+T;t)$, as mentioned in Section II.2. We prove this fact by the following proposition, based on the choice of the cutoff $L$ for Floquet eigenstates (Theorem 2) and the Lieb-Robinson bound in the Sambe space [Eq. (57)].

###### Proposition B1.

We define a state $\ket{\tilde{\phi}_{n}^{L}(t)}\in\mathcal{H}$ by Eq. (184) with the eigenvalue $\tilde{\epsilon}_{n}^{L}$ and the eigenstate $\ket{\tilde{\Phi}_{n}^{L}}$ of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$. Then, it is an approximate eigenstate of the time-evolution operator $U(t+T;t)$ with an error $\varepsilon\in(0,1)$, i.e., $\frac{\norm{\left(U(t+T;t)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(t)}}}{\norm{\ket{\tilde{\phi}_{n}^{L}(t)}}}\leq\varepsilon$ (185) under the choice of the cutoff $L\in\Theta(\alpha T+|\tilde{\epsilon}_{n}^{L}|/\omega+\log(1/\varepsilon))$.

We first focus on the case $t=0$, where the state is given by $\ket{\tilde{\phi}_{n}^{L}(0)}=\sum_{l\in[L]}\ket{\tilde{\phi}_{n}^{l,L}},\quad\ket{\tilde{\phi}_{n}^{l,L}}\equiv(\bra{l}_{f}\otimes I)\ket{\tilde{\Phi}_{n}^{L}},$ (186) and show that this gives an approximate eigenstate of the Floquet operator $U(T;0)$. Applying the Floquet operator in the Sambe space formalism, Eq. (55), to $\ket{\tilde{\phi}_{n}^{L}(0)}$, the numerator of Eq. (185) can be transformed into $\displaystyle U(T;0)\ket{\tilde{\phi}_{n}^{L}(0)}-e^{-i\tilde{\epsilon}_{n}^{L}T}\ket{\tilde{\phi}_{n}^{L}(0)}$ $\displaystyle\quad=\sum_{l\in\mathbb{Z}}\sum_{l^{\prime}\in[L]}\bra{l}\left(e^{-iH_{\mathrm{F}}T}-e^{-iH_{\mathrm{F}}^{L}T}\right)\ket{l^{\prime}}_{f}\ket{\tilde{\phi}_{n}^{l^{\prime},L}}.$ (187) Before proceeding to the proof of Proposition B1, we provide two propositions: one concerns the numerator, via the Lieb-Robinson bound (Proposition B2), and the other concerns the denominator, i.e., the norm of $\ket{\tilde{\phi}_{n}^{L}(0)}$ (Proposition B3). The first proposition says that each amplitude $\bra{l}\left(e^{-iH_{\mathrm{F}}T}-e^{-iH_{\mathrm{F}}^{L}T}\right)\ket{l^{\prime}}_{f}$ in Eq. (187) is exponentially suppressed in the distance between $l$ and $l^{\prime}$.

###### Proposition B2.

Consider a time-periodic Hamiltonian $H(t)$ satisfying Eq. (27). For a Fourier index $l^{\prime}\in[L]$, the inequality $\norm{\braket{l}{(e^{-iH_{\mathrm{F}}t}-e^{-iH_{\mathrm{F}}^{L}t})}{l^{\prime}}_{f}}\leq 2e^{(2M+1)e\alpha t-d(l,l^{\prime})/M}$ (188) is satisfied, where $d(l,l^{\prime})$ is defined by $d(l,l^{\prime})=\begin{cases}2L-|l|-|l^{\prime}|&(l\in[L])\\ |l|-|l^{\prime}|&(l\notin[L]).\end{cases}$ (189)

Proof of Proposition B2.— We use an interaction picture based on $H_{\mathrm{LP}}=\sum_{l\in\mathbb{Z}}l\omega\ket{l}\bra{l}_{f}\otimes I$.
Using the Dyson series expansion under the interaction Hamiltonian $H_{\mathrm{Add},I}(t)=\sum_{|m|\leq M}\sum_{l\in\mathbb{Z}}\ket{l+m}\bra{l}_{f}\otimes e^{im\omega t}H_{m}$, $U_{\mathrm{Add},I}(t)=\sum_{n=0}^{\infty}(-i)^{n}\int_{0}^{t}\differential t_{1}\ldots\int_{0}^{t_{n-1}}\differential t_{n}\prod_{i=1}^{n}H_{\mathrm{Add},I}(t_{i}),$ (190) and that for the truncated Floquet Hamiltonian, we get the relation $(\text{l.h.s. of Eq. (188)})=\norm{\braket{l}{U_{\mathrm{Add},I}(t)-U_{\mathrm{Add},I}^{L}(t)}{l^{\prime}}_{f}}.$ (191) By plugging the completeness relation $\sum_{l_{i}\in\mathbb{Z}}\ket{l_{i}}\bra{l_{i}}_{f}\otimes I$ into the Dyson series for $i=1,2,\ldots,n-1$, each term of the right-hand side represents the transition amplitude via the path $l^{\prime}\equiv l_{n}\to l_{n-1}\to\ldots\to l_{1}\to l_{0}\equiv l$ as $\displaystyle\braket{l}{U_{\mathrm{Add},I}(t)}{l^{\prime}}_{f}$ $\displaystyle\quad=\sum_{n=0}^{\infty}(-i)^{n}\sum_{l_{1},\ldots,l_{n-1}\in\mathbb{Z}}\int_{0}^{t}\differential t_{1}\ldots\int_{0}^{t_{n-1}}\differential t_{n}\prod_{i=1}^{n}\bra{l_{i-1}}H_{\mathrm{Add},I}(t_{i})\ket{l_{i}}_{f}.$ Each transition $l_{i}\to l_{i-1}$ is allowed only when $|l_{i}-l_{i-1}|\leq M$ is satisfied, by assumption. The difference from the truncated one appears only when the path crosses the region $\mathbb{Z}\backslash[L]$, i.e., the low-order terms with $n<d(l,l^{\prime})/M$ in the Dyson series vanish in the difference. Considering the bound on the interaction Hamiltonian $\norm{H_{\mathrm{Add},I}(t)}\leq\max_{t\in[0,T]}\norm{H(t)}\leq(2M+1)\alpha$ and that for the truncated one [28], we arrive at the inequality $\displaystyle(\text{l.h.s. of Eq. (188)})$ $\displaystyle\leq$ $\displaystyle 2\sum_{n=\lceil d(l,l^{\prime})/M\rceil}^{\infty}\frac{t^{n}}{n!}((2M+1)\alpha)^{n}$ $\displaystyle\leq$ $\displaystyle 2\exp\left((2M+1)e\alpha t-\frac{d(l,l^{\prime})}{M}\right).$ In the last line, we use the relation $\sum_{n=n_{0}}^{\infty}(x/n)^{n}\leq e^{ex-n_{0}}$ for arbitrary $x\geq 0$. $\quad\square$

We substitute $t=T$ in the above proposition to evaluate the numerator of Eq. (185) later. Note that this bound comes from the Lieb-Robinson bound in the Sambe space [28], i.e., the decay of the propagation in Fourier indices. Indeed, when the distance $d(l,l^{\prime})$ is greater than $L_{\mathrm{LR}}\in\Theta(\alpha t+\log(1/\varepsilon))$, the transition amplitude becomes smaller than $\varepsilon$. Next, we prove the second proposition, which bounds the norm of the state $\ket{\tilde{\phi}_{n}^{L}(t)}$ in the denominator of Eq. (185).

###### Proposition B3.

The state $\ket{\tilde{\phi}_{n}^{L}(0)}=\sum_{l\in[L]}\ket{\tilde{\phi}_{n}^{l,L}}$ is approximately normalized in the sense that $1-\varepsilon_{\mathrm{norm}}^{L}\leq\norm{\ket{\tilde{\phi}_{n}^{L}(0)}}\leq 1+\varepsilon_{\mathrm{norm}}^{L}$ (194) is satisfied. Here, the value $\varepsilon_{\mathrm{norm}}^{L}$ is defined by $\displaystyle\varepsilon_{\mathrm{norm}}^{L}$ $\displaystyle=$ $\displaystyle 6(2M+1)^{2}\alpha T\log(2eL)$ (195) $\displaystyle\times\exp\left(-\frac{L-|\tilde{\epsilon}_{n}^{L}|/\omega}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right),$ whose scaling is $e^{-\Theta(L-|\tilde{\epsilon}_{n}^{L}|/\omega-\alpha T)}$.

Proof of Proposition B3.— As we define the equivalent Floquet eigenstate under translation by $\ket{\Phi_{n}^{l}}=\mathrm{Add}_{l}\ket{\Phi_{n}}$ [see Eq.
(15)], we define its approximate counterpart by $\ket{\tilde{\Phi}_{n}^{l,L}}=\mathrm{Add}_{l}\ket{\tilde{\Phi}_{n}^{L}}=\sum_{l^{\prime}\in[L]}\ket{l^{\prime}+l}_{f}\ket{\tilde{\phi}_{n}^{l^{\prime},L}}.$ (196) Then, the norm of the state $\ket{\tilde{\phi}_{n}^{L}(0)}$ is evaluated by $\displaystyle\braket{\tilde{\phi}_{n}^{L}(0)}{\tilde{\phi}_{n}^{L}(0)}$ $\displaystyle=$ $\displaystyle\sum_{k,l\in\mathbb{Z}}\braket{\tilde{\Phi}_{n}^{L}}{(\ket{k}\bra{l}_{f}\otimes I)}{\tilde{\Phi}_{n}^{L}}$ (197) $\displaystyle=$ $\displaystyle\sum_{k,l\in\mathbb{Z}}\braket{\tilde{\Phi}_{n}^{L}}{P_{k}\mathrm{Add}_{k-l}}{\tilde{\Phi}_{n}^{L}}$ $\displaystyle=$ $\displaystyle 1+\sum_{l\in\mathbb{Z}\backslash\\{0\\}}\braket{\tilde{\Phi}_{n}^{L}}{\tilde{\Phi}_{n}^{l,L}}.$ We derive the approximate orthogonality of $\ket{\tilde{\Phi}_{n}^{L}}$ and $\ket{\tilde{\Phi}_{n}^{l,L}}$ for $l\neq 0$ based on the fact that they are approximate eigenvectors of $H_{\mathrm{F}}$ with different eigenvalues. As discussed in Proposition 5, the error $\norm{(H_{\mathrm{F}}-\tilde{\epsilon}_{n}^{L})\ket{\tilde{\Phi}_{n}^{L}}}$ is suppressed to $e^{-\Theta(L-|\tilde{\epsilon}_{n}^{L}|/\omega-\alpha T)}$ by Eq. (52). In addition, the translation symmetry of the Floquet Hamiltonian, $\mathrm{Add}_{l}^{\dagger}H_{\mathrm{F}}\mathrm{Add}_{l}=H_{\mathrm{F}}-l\omega$, implies the relation $\norm{(H_{\mathrm{F}}-\tilde{\epsilon}_{n}^{L}+l\omega)\ket{\tilde{\Phi}_{n}^{l,L}}}=\norm{(H_{\mathrm{F}}-\tilde{\epsilon}_{n}^{L})\ket{\tilde{\Phi}_{n}^{L}}},$ (198) which gives the same upper bound as Eq. (52). The inner products appearing in Eq. (197) can be evaluated by $\displaystyle|\braket{\tilde{\Phi}_{n}^{L}}{\tilde{\Phi}_{n}^{l,L}}|$ $\displaystyle=$ $\displaystyle\frac{\left|\braket{\tilde{\Phi}_{n}^{L}}{(\tilde{\epsilon}_{n}^{L}-l\omega-\tilde{\epsilon}_{n}^{L})}{\tilde{\Phi}_{n}^{l,L}}\right|}{|l|\omega}$ (199) $\displaystyle\leq$ $\displaystyle\frac{\norm{(H_{\mathrm{F}}-\tilde{\epsilon}_{n}^{L})\ket{\tilde{\Phi}_{n}^{L}}}}{|l|\omega},$ for $l\in[2L]\backslash\\{0\\}$. We note $\braket{\tilde{\Phi}_{n}^{L}}{\tilde{\Phi}_{n}^{l,L}}=0$ for $l\in\mathbb{Z}\backslash[2L]$ by definition. Using the inequality $\sum_{l=1}^{2L}l^{-1}\leq 1+\log(2L)$, we arrive at the inequality $\displaystyle\left|1-\norm{\ket{\tilde{\phi}_{n}^{L}(0)}}^{2}\right|$ $\displaystyle\leq$ $\displaystyle 2\log(2eL)\times\frac{\norm{(H_{\mathrm{F}}-\tilde{\epsilon}_{n}^{L})\ket{\tilde{\Phi}_{n}^{L}}}}{\omega}$ $\displaystyle=$ $\displaystyle 6(2M+1)^{2}\alpha T\log(2eL)$ $\displaystyle\times\exp\left(-\frac{L-|\tilde{\epsilon}_{n}^{L}|/\omega}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right).$ This immediately implies the relations in Eqs. (194) and (195). $\quad\square$

Now we are ready to prove Proposition B1. We return to showing that the state $\ket{\tilde{\phi}_{n}^{L}(0)}$ constructed from the truncated Sambe space provides an approximate eigenstate of the Floquet operator $U(T;0)$; the proof for generic $t\in\mathbb{R}$ then follows.

Proof of Proposition B1.— Beginning with Eq. (187), we evaluate the bound on its norm given by $\displaystyle\norm{\left(U(T;0)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(0)}}$ $\displaystyle\quad\leq\sum_{l\in\mathbb{Z}}\sum_{l^{\prime}\in[L]}\norm{\bra{l}\left(e^{-iH_{\mathrm{F}}T}-e^{-iH_{\mathrm{F}}^{L}T}\right)\ket{l^{\prime}}_{f}}\cdot\norm{\ket{\tilde{\phi}_{n}^{l^{\prime},L}}}.$ Let us first focus on the summation over $l\in[L]$ in the above.
The bound from Proposition B2 and the bound for the state $\ket{\tilde{\phi}_{n}^{l,L}}$, Eq. (173), indicate the inequality $\displaystyle\sum_{l\in[L]}\sum_{l^{\prime}\in[L]}\norm{\bra{l}\left(e^{-iH_{\mathrm{F}}T}-e^{-iH_{\mathrm{F}}^{L}T}\right)\ket{l^{\prime}}_{f}}\cdot\norm{\ket{\tilde{\phi}_{n}^{l^{\prime},L}}}$ $\displaystyle\quad\leq 8\sum_{l,l^{\prime}=0}^{L}\exp\left((2M+1)e\alpha T-\frac{2L-l-l^{\prime}}{M}\right)$ $\displaystyle\qquad\qquad\qquad\times\exp\left(-\frac{l^{\prime}-|\tilde{\epsilon}_{n}^{L}|/\omega}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right)$ $\displaystyle\quad\leq 16(2M+1)^{2}\exp\left(-\frac{L-|\tilde{\epsilon}_{n}^{L}|/\omega}{2M+1}+2(M+1)e\alpha T\right).$ (202) The sum over $l\in\mathbb{Z}\backslash[L]$ in Eq. (187) is evaluated in a similar way, and results in the same form as Eq. (202). Therefore, $\ket{\tilde{\phi}_{n}^{L}(0)}$ is an approximate eigenstate of $U(T;0)$ in the sense that $\displaystyle\norm{\left(U(T;0)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(0)}}$ $\displaystyle\quad\leq 32(2M+1)^{2}\exp\left(-\frac{L-|\tilde{\epsilon}_{n}^{L}|/\omega}{2M+1}+2(M+1)e\alpha T\right)$ holds. The right-hand side scales as $e^{-\Theta(L-|\tilde{\epsilon}_{n}^{L}|/\omega-\alpha T)}$. Since the norm of $\ket{\tilde{\phi}_{n}^{L}(0)}$ scales as $\norm{\ket{\tilde{\phi}_{n}^{L}(0)}}=1+e^{-\Theta(L-|\tilde{\epsilon}_{n}^{L}|/\omega-\alpha T)}$ by Proposition B3, we can achieve $\frac{\norm{\left(U(T;0)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(0)}}}{\norm{\ket{\tilde{\phi}_{n}^{L}(0)}}}\leq\varepsilon$ (204) under the choice of the cutoff $L\in\Theta(\alpha T+|\tilde{\epsilon}_{n}^{L}|/\omega+\log(1/\varepsilon))$. This completes the proof of Proposition B1 for $t=0$.

Proposition B1 for generic time $t\in[0,T)$ follows easily from the result for $t=0$. The state $\ket{\tilde{\phi}_{n}^{L}(t)}$ constructed from the truncated Sambe space by Eq. (184) is related to $\ket{\tilde{\phi}_{n}^{L}(0)}$ by $\ket{\tilde{\phi}_{n}^{L}(t)}=e^{i\tilde{\epsilon}_{n}^{L}t}U(t;0)\ket{\tilde{\phi}_{n}^{L}(0)}$. Using the relation $U(t+T;t)U(t;0)=U(t;0)U(T;0)$, which is valid for a time-periodic Hamiltonian $H(t)$, the equality $\displaystyle\norm{\left(U(t+T;t)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(t)}}$ $\displaystyle\qquad\qquad\quad=\norm{\left(U(T;0)-e^{-i\tilde{\epsilon}_{n}^{L}T}\right)\ket{\tilde{\phi}_{n}^{L}(0)}}$ (205) is derived. The approximate normalization of $\ket{\tilde{\phi}_{n}^{L}(t)}$ follows immediately from $\norm{\ket{\tilde{\phi}_{n}^{L}(t)}}=\norm{\ket{\tilde{\phi}_{n}^{L}(0)}}$. Thus, the bound in Eq. (185) for generic time $t$ is exactly the same as in the case $t=0$. $\quad\square$

## Appendix C Block-encoding of Floquet Hamiltonian

In the Sambe space formalism, we often use the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$ defined by $H_{\mathrm{F}}^{L}=\sum_{|m|\leq M}\sum_{l\in[L];l+m\in[L]}\ket{l+m}\bra{l}_{f}\otimes H_{m}-\sum_{l\in[L]}l\omega\ket{l}\bra{l}_{f}\otimes I,$ (206) and the main text follows this definition. However, for block-encoding toward QSVT, another truncated Floquet Hamiltonian $H_{\mathrm{F,pbc}}^{L}$, defined by $\displaystyle H_{\mathrm{F,pbc}}^{L}$ $\displaystyle=$ $\displaystyle\sum_{|m|\leq M}\mathrm{Add}_{m}^{[L]}\otimes H_{m}-\sum_{l\in[L]}l\omega\ket{l}\bra{l}_{f}\otimes I,$ (207) $\displaystyle\mathrm{Add}_{m}^{[L]}$ $\displaystyle=$ $\displaystyle\sum_{l\in[L]}\ket{(l\oplus m)_{[L]}}\bra{l}_{f},$ (208) is more practical, as shown in Section V.2.
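As a numerical illustration of the two truncations just defined (their formal equivalence is proven in Section C.2 below), the following sketch builds both $H_{\mathrm{F}}^{L}$ and $H_{\mathrm{F,pbc}}^{L}$ as dense matrices for a driven qubit and checks that the quasienergies in the central Brillouin zone agree, and that the Fourier components of an eigenstate decay away from $l=0$ as in Theorem 3. The model $H(t)=\tfrac{1}{2}\sigma_{z}+\tfrac{1}{2}\cos(\omega t)\,\sigma_{x}$, all parameter values, and the exact indexing convention for $[L]$ are our own illustrative choices, not taken from the text above.

```python
import numpy as np

# Illustrative driven qubit: Fourier components H_0, H_{+1}, H_{-1} (M = 1)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
Hm = {0: 0.5 * sz, 1: 0.25 * sx, -1: 0.25 * sx}
omega, L, dim = 2.0 * np.pi, 12, 2
idx = list(range(-L, L))  # Fourier window [L] with 2L indices (our convention)

def floquet_matrix(periodic: bool) -> np.ndarray:
    """Dense truncated Floquet Hamiltonian, cf. Eq. (206) or Eq. (207)."""
    n = len(idx)
    HF = np.zeros((n * dim, n * dim), dtype=complex)
    for a, l in enumerate(idx):
        # Diagonal "linear potential" block: -l * omega * I
        HF[a*dim:(a+1)*dim, a*dim:(a+1)*dim] -= l * omega * np.eye(dim)
        for m, H in Hm.items():
            lp = l + m
            if periodic:
                lp = (lp + L) % (2 * L) - L   # (l + m) modulo 2L on [L]
            elif not (-L <= lp < L):
                continue                      # open boundary: term dropped
            b = idx.index(lp)
            HF[b*dim:(b+1)*dim, a*dim:(a+1)*dim] += H
    return HF

evals_open = np.linalg.eigvalsh(floquet_matrix(periodic=False))
evals_pbc = np.linalg.eigvalsh(floquet_matrix(periodic=True))
central = lambda e: e[np.abs(e) < omega / 2]   # quasienergies inside BZ
print(central(evals_open))                     # agree up to exponentially ...
print(central(evals_pbc))                      # ... small boundary errors

# Exponential tail (Theorem 3): Fourier-component norms of one eigenstate
w, v = np.linalg.eigh(floquet_matrix(periodic=False))
state = v[:, np.argmin(np.abs(w))].reshape(len(idx), dim)
print(np.round(np.linalg.norm(state, axis=1), 6))  # decays away from l = 0
```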
Here, we provide an efficient block-encoding of $H_{\mathrm{F,pbc}}^{L}$, and prove that using either $H_{\mathrm{F}}^{L}$ or $H_{\mathrm{F,pbc}}^{L}$ results in the same eigenvalues and eigenstates up to negligible errors. Namely, the discussion based on $H_{\mathrm{F}}^{L}$ in the main text is valid, while the actual quantum algorithms run with queries to the block-encoding of $H_{\mathrm{F,pbc}}^{L}$ via $C[O_{H_{m}}]$.

### C.1 Block-encoding under periodic boundary conditions

We briefly review the block-encoding of $H_{\mathrm{F,pbc}}^{L}$ in Ref. [28] and how Proposition 6 in the main text is confirmed. The block-encoding unitaries for the first and the second terms in Eq. (207) are constructed separately, and their combination forms the one for $H_{\mathrm{F,pbc}}^{L}$. Preparing $\order{1}$ ancilla qubits expressed by $\\{\ket{m}_{M}\\}_{|m|\leq M}$, the block-encoding of the first term $O_{1}$ is provided by $\displaystyle O_{1}=G_{M}^{\dagger}\left(\sum_{|m|\leq M}\ket{m}\bra{m}\otimes\mathrm{Add}_{m}^{[L]}\otimes O_{H_{m}}\right)G_{M},$ with a unitary gate $G_{M}$ on this ancilla system such that $G_{M}\ket{0}_{M}=\sum_{|m|\leq M}\sqrt{\frac{\alpha_{m}}{\sum_{|m^{\prime}|\leq M}\alpha_{m^{\prime}}}}\ket{m}_{M}.$ (210) The block-encoding $O_{1}$ embeds the first term of Eq. (207), $(\bra{0}_{M}\bra{0}_{a})O_{1}(\ket{0}_{M}\ket{0}_{a})=\frac{\sum_{|m|\leq M}\mathrm{Add}_{m}^{[L]}\otimes H_{m}}{\sum_{|m^{\prime}|\leq M}\alpha_{m^{\prime}}},$ (211) requiring one query each to $C[O_{H_{m}}]$ and at most $\order{\log L}$ primitive gates. The second term of Eq. (207) has a block-encoding $O_{2}$ such that $\braket{0}{O_{2}}{0}_{f^{\prime}}=\frac{-\sum_{l\in[L]}l\omega\ket{l}\bra{l}_{f}\otimes I}{L\omega},$ (212) with a $\Theta(\log L)$-qubit ancilla system $f^{\prime}$, where we use only $\Theta(\log L)$ primitive gates [28]. The block-encoding of $H_{\mathrm{F,pbc}}^{L}$ is constructed with one query each to $O_{1}$ and $O_{2}$. Introducing two additional qubits $c$, it is defined by $\displaystyle O_{H_{\mathrm{F,pbc}}^{L}}$ $\displaystyle=$ $\displaystyle G_{c}^{\dagger}\biggl{(}\ket{00}\bra{00}_{c}\otimes O_{1}+\ket{01}\bra{01}_{c}\otimes O_{2}\biggr{.}$ $\displaystyle\qquad+(\ket{10}\bra{11}+\ket{11}\bra{10})_{c}\otimes I\biggl{.}\biggr{)}G_{c},$ $\displaystyle G_{c}\ket{00}_{c}$ $\displaystyle=$ $\displaystyle\sqrt{\frac{\sum_{|m|\leq M}\alpha_{m}}{\tilde{\alpha}}}\ket{00}_{c}+\sqrt{\frac{L\omega}{\tilde{\alpha}}}\ket{01}_{c}$ (214) $\displaystyle\quad+\sqrt{\frac{\tilde{\alpha}-\sum_{|m|\leq M}\alpha_{m}-L\omega}{\tilde{\alpha}}}\ket{10}_{c},$ where some identity operators are omitted. The above block-encoding is well-defined for arbitrary $\tilde{\alpha}\geq(2M+1)\alpha+L\omega$, which guarantees $\tilde{\alpha}\geq\sum_{|m|\leq M}\alpha_{m}+L\omega$. Collecting the ancilla system $a$ and the additional $\order{\log L}$ qubits into $a^{\prime}$, this embeds the truncated Floquet Hamiltonian $H_{\mathrm{F,pbc}}^{L}$ with the normalization factor $\tilde{\alpha}$, as stated in Proposition 6. Therefore, QSVT algorithms working with query complexity $q$ in $O_{H_{\mathrm{F,pbc}}^{L}}$ (e.g., the standard QPE based on the truncated Floquet Hamiltonians in the main text) can be executed by $\order{qM}=\order{q}$ queries to $C[O_{H_{m}}]$.

### C.2 Equivalence between different boundary conditions

Here, we show the equivalence of the quantum algorithms working with $H_{\mathrm{F}}^{L}$ and $H_{\mathrm{F,pbc}}^{L}$.
In Section VII, we apply the QPE under the Hamiltonian $H_{\mathrm{F}}^{pL}$ to the input state $\ket{\Psi_{0}^{L}}$, Eq. (88). The input state is approximately a superposition of Floquet eigenstates $\ket{\Phi_{n}^{l}}$ for $l\in[6L]$ according to Eqs. (110) and (111), based on Proposition 10. The calculation relies solely on two facts: each $\ket{\Phi_{n}^{l}}$ is an approximate eigenstate of $H_{\mathrm{F}}^{L}$ satisfying $\norm{(H_{\mathrm{F}}^{L}-(\epsilon_{n}-l\omega))\ket{\Phi_{n}^{l}}}\leq e^{-\Theta(L-\alpha T)}$ by Theorem 2, and this causes only a small error $\delta_{\mathrm{approx}}$ in the QPE result, as in Eq. (115), shown by Proposition D1. To prove the equivalence, it is sufficient to show the former fact for the alternative Floquet Hamiltonian $H_{\mathrm{F,pbc}}^{pL}$ as follows.

###### Proposition C1.

Let $\ket{\Phi_{n}^{l}}\in\mathcal{H}^{\infty}$ be a Floquet eigenstate such that $H_{\mathrm{F}}\ket{\Phi_{n}^{l}}=(\epsilon_{n}-l\omega)\ket{\Phi_{n}^{l}}$. It is an approximate eigenstate of the truncated Floquet Hamiltonian $H_{\mathrm{F,pbc}}^{L}$ in that $\displaystyle\norm{(H_{\mathrm{F,pbc}}^{L}-(\epsilon_{n}-l\omega))\ket{\Phi_{n}^{l}}}$ $\displaystyle\quad\leq 54(2M+1)^{2}\exp\left(-\frac{L-|l|}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right)$ $\displaystyle\quad\leq e^{-\Theta(L-\alpha T)},$ (215) where $H_{\mathrm{F,pbc}}^{L}$ is defined by Eq. (207).

Proof of Proposition C1.— The proof is completely parallel to the one for $H_{\mathrm{F}}^{L}$, shown as Proposition 4 in the main text. Setting $l=0$ for simplicity, we obtain the relation $\displaystyle(H_{\mathrm{F,pbc}}^{L}-\epsilon_{n})\ket{\Phi_{n}}$ $\displaystyle\quad=\sum_{l^{\prime}\in[L]}\ket{l^{\prime}}_{f}\left(\sum_{|m|\leq M}H_{m}\ket{\phi_{n}^{(l^{\prime}\ominus m)_{[L]}}}-(\epsilon_{n}+l^{\prime}\omega)\ket{\phi_{n}^{l^{\prime}}}\right)$ $\displaystyle\quad=\sum_{l^{\prime}\in[L]}\ket{l^{\prime}}_{f}\left(\sum_{|m|\leq M}H_{m}\ket{\phi_{n}^{l^{\prime}-m}}-(\epsilon_{n}+l^{\prime}\omega)\ket{\phi_{n}^{l^{\prime}}}\right)$ $\displaystyle\qquad\quad+\sum_{l^{\prime}\in[L]}\ket{l^{\prime}}_{f}\sum_{|m|\leq M}H_{m}\left(\ket{\phi_{n}^{(l^{\prime}\ominus m)_{[L]}}}-\ket{\phi_{n}^{l^{\prime}-m}}\right).$ The symbol $(l\ominus m)_{[L]}=(l\oplus(-m))_{[L]}$ means $l-m$ modulo $2L$ defined on $[L]$. In the last line, the first term vanishes by the eigenvalue equation $H_{\mathrm{F}}\ket{\Phi_{n}}=\epsilon_{n}\ket{\Phi_{n}}$. In the second term, only the contributions from $|l^{\prime}|\geq L-M$ can survive, since otherwise $(l^{\prime}\ominus m)_{[L]}=l^{\prime}-m$ holds. As a result, the norm is bounded by $\displaystyle\norm{(H_{\mathrm{F,pbc}}^{L}-\epsilon_{n})\ket{\Phi_{n}}}$ $\displaystyle\quad\leq\sum_{|l^{\prime}|\geq L-M}\sum_{|m|\leq M}\left(\norm{\ket{\phi_{n}^{(l^{\prime}\ominus m)_{[L]}}}}+\norm{\ket{\phi_{n}^{l^{\prime}-m}}}\right)$ $\displaystyle\quad\leq 2(2M+1)\alpha\sum_{l^{\prime}\in\mathbb{Z}\backslash[L-2M]}\norm{\ket{\phi_{n}^{l^{\prime}}}}$ $\displaystyle\quad\leq 18(2M+1)^{2}\exp\left(-\frac{L-2M}{2M+1}+\frac{\sinh 1}{2\pi}\alpha T\right),$ where we use the exponential-decay bound on $\norm{\ket{\phi_{n}^{l^{\prime}}}}$ from the main text to derive the last inequality. Using $e^{2M/(2M+1)}<3$, we arrive at the inequality, Eq. (215), for $l=0$. The result for a generic integer $l\in\mathbb{Z}$ is obtained similarly.
$\quad\square$

In the main text, the algorithm in Section VII deals with the truncated Hamiltonian $H_{\mathrm{F,pbc}}^{pL}$ ($p\geq 7$), and each Floquet eigenstate $\ket{\Phi_{n}^{l}}$ for $l\in[6L]$ is involved in the initial state $\ket{\Psi_{0}^{L}}$ as in Eqs. (110) and (111). The above proposition guarantees the relation $\norm{(H_{\mathrm{F,pbc}}^{pL}-(\epsilon_{n}-l\omega))\ket{\Phi_{n}^{l}}}\leq e^{-\Theta(L-\alpha T)}.$ (218) The standard QPE based on $H_{\mathrm{F,pbc}}^{pL}$ then proceeds exactly as described in Section VI. Combined with the efficient block-encoding of $H_{\mathrm{F,pbc}}^{pL}$ in Section C.1, the Floquet QPE algorithms for $(\epsilon_{n},\ket{\Phi_{n}})$ are efficiently executed by the controlled block-encoding $C[O_{H_{m}}]$ and its inverse.

## Appendix D QSVT for approximate eigenstates

In the QPE algorithm of Sec. VII, we use QSVT based on the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$, but we apply it to a state expanded by the set of approximate eigenstates $\\{\ket{\Phi_{n}}\\}$ as in Eq. (114). Here, we show that the influence of this deviation, $\delta_{\mathrm{approx}}$, is bounded by Eq. (115). Let us consider a generic time-independent Hamiltonian $H$ with a spectral decomposition $H=\sum_{n}E_{n}\ket{\phi_{n}}\bra{\phi_{n}}$. The Hamiltonian $H$ is assumed to be normalized as $\norm{H}\leq 1$, and thus $E_{n}\in[-1,1]$. The QSVT based on $H$ applies a degree-$q$ polynomial $f_{q}(H)$ to a state expanded by approximate eigenstates $\\{\ket{\tilde{\phi}_{n}}\\}_{n}$. Denoting their approximate eigenvalues by $\\{\tilde{E}_{n}\in[-1,1]\\}_{n}$, we examine how the QSVT based on $H$ reproduces the one based on $\tilde{H}=\sum_{n=1}^{n_{\mathrm{max}}}\tilde{E}_{n}\ket{\tilde{\phi}_{n}}\bra{\tilde{\phi}_{n}}.$ (219) We summarize the result by the following proposition.

###### Proposition D1.

(Approximate QSVT) Suppose that a given state $\ket{\psi}$ is expanded by $\ket{\psi}=\sum_{n=1}^{n_{\mathrm{max}}}c_{n}\ket{\tilde{\phi}_{n}}$ ($\sum_{n}|c_{n}|^{2}=1$), where the approximate eigenstates $\\{\ket{\tilde{\phi}_{n}}\\}_{n}$ are characterized by $\norm{(H-\tilde{E}_{n})\ket{\tilde{\phi}_{n}}}\leq\eta\quad\text{for some }\tilde{E}_{n}\in[-1,1],$ (220) and $\braket{\tilde{\phi}_{n}}{\tilde{\phi}_{n^{\prime}}}=\delta_{nn^{\prime}}$. Then, the difference between $f_{q}(H)$ and $f_{q}(\tilde{H})$ when applied to $\ket{\psi}$ is bounded by $\norm{f_{q}(H)\ket{\psi}-f_{q}(\tilde{H})\ket{\psi}}\leq q^{2}\eta\sqrt{n_{\mathrm{max}}},$ (221) for any degree-$q$ polynomial $f_{q}$ realized by QSVT.

Proof of Proposition D1.— We first bound the difference in Eq. (221) for a single approximate eigenstate $\ket{\psi}=\ket{\tilde{\phi}_{n}}$.
Using a degree-$(q-1)$ polynomial $g_{q-1}(x,y)$ defined by the factorization $f_{q}(x)-f_{q}(y)=(x-y)g_{q-1}(x,y)$, it is evaluated as follows: $\displaystyle\norm{(f_{q}(H)-f_{q}(\tilde{H}))\ket{\tilde{\phi}_{n}}}$ $\displaystyle\quad=\norm{\sum_{n^{\prime}}g_{q-1}(E_{n^{\prime}},\tilde{E}_{n})(E_{n^{\prime}}-\tilde{E}_{n})\ket{\phi_{n^{\prime}}}\braket{\phi_{n^{\prime}}}{\tilde{\phi}_{n}}}$ $\displaystyle\quad\leq\max_{x\in[-1,1]}(|g_{q-1}(x,\tilde{E}_{n})|)\norm{(H-\tilde{E}_{n})\ket{\tilde{\phi}_{n}}}.$ (222) The mean value theorem gives an upper bound on $g_{q-1}$ by $\displaystyle|g_{q-1}(x,\tilde{E}_{n})|$ $\displaystyle\leq$ $\displaystyle\sup_{x\in(-1,1)}(|f^{\prime}_{q}(x)|)\leq q^{2}.$ (223) The second inequality follows from the fact that achievable degree-$q$ polynomials in QSVT must be normalized as $|f_{q}(x)|\leq 1$ for all $x\in[-1,1]$, and then their derivatives cannot exceed $q^{2}$ by the Markov theorem [69]. Therefore, for a generic state $\ket{\psi}=\sum_{n=1}^{n_{\mathrm{max}}}c_{n}\ket{\tilde{\phi}_{n}}$, the difference $\norm{f_{q}(H)\ket{\psi}-f_{q}(\tilde{H})\ket{\psi}}$ has an upper bound $\sum_{n=1}^{n_{\mathrm{max}}}|c_{n}|q^{2}\eta$. Using the inequality $\sum_{n=1}^{n_{\mathrm{max}}}|c_{n}|\leq\sqrt{n_{\mathrm{max}}}$, which is derived from the Cauchy-Schwarz inequality, we obtain Eq. (221). $\quad\square$

While the above results are stated for QSVT for Hermitian matrices, their extension to generic matrices is immediate. In the case of QPE under the truncated Floquet Hamiltonian in Section VII, we apply QSVT based on $H_{\mathrm{F}}^{pL}$ or $H_{\mathrm{F,pbc}}^{pL}$ to the input state $\ket{\Psi_{0}^{L}}$, which is a superposition of $\ket{\Phi_{n}^{l}}$ by Eqs. (110) and (111). Then, Proposition 10 suggests that the number $n_{\mathrm{max}}$ for the terms other than the negligible state $\ket{\Psi_{\mathrm{neg}}^{L}}$ is bounded by $n_{\mathrm{max}}\leq\mathrm{dim}(\mathcal{H})\times 12L,$ (224) consisting of $\ket{\Phi_{n}^{l}}$ for $n=1,2,\ldots,\mathrm{dim}(\mathcal{H})$ and $l\in[6L]$. The error $\eta$, which provides the upper bound on $\norm{(H_{\mathrm{F}}^{pL}-(\epsilon_{n}-l\omega))\ket{\Phi_{n}^{l}}}$, scales as $e^{-\Theta(L-\alpha T)}$ by Eq. (46). This is also true for the modified Hamiltonian $H_{\mathrm{F,pbc}}^{pL}$, as discussed in Section C. Therefore, when the QPE based on QSVT is executed with query complexity $q_{\mathrm{QPE}}$, the error of regarding $\ket{\Phi_{n}^{l}}$ as an exact eigenstate of the truncated Floquet Hamiltonians amounts to at most $\delta_{\mathrm{approx}}\leq q_{\mathrm{QPE}}^{2}e^{-\Theta(L-\alpha T-N)},$ (225) as shown in Eq. (115).

## Appendix E Extension to Hamiltonians with exponentially-decaying Fourier components

In the main text, we focus on the cases where the Fourier indices in the Hamiltonian $H(t)$ are bounded by $|m|\leq M$ as in Eq. (27). Here, we consider the case where the Hamiltonian is given by $H(t)=\sum_{m=-\infty}^{\infty}H_{m}e^{-im\omega t},\quad\norm{H_{m}}\leq\alpha e^{-|m|/\zeta},$ (226) with positive constants $\alpha,\zeta>0$. For instance, such Hamiltonians can describe a Gaussian wave packet of laser light [28]. Our results in the main text, i.e., the guaranteed quasienergy from the Sambe space and the quantum algorithms for quasienergy and Floquet eigenstates, can be easily extended to these cases. To extend our results in the main text to this class of time-periodic Hamiltonians, we note the following points to be confirmed:
1. (E-1) Exponential decay of $\braket{l}{e^{-iH_{\mathrm{F}}t}}{l^{\prime}}_{f}$ in the distance $|l-l^{\prime}|$ (Lieb-Robinson bound);
2. (E-2) Exponential decay of $\ket{\phi_{n}^{l}}$ in the Fourier index $l$;
3. (E-3) Efficient block-encoding of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$.

(E-1) is required for the Floquet QPE to compute pairs of $(\epsilon_{n},\ket{\phi_{n}(t)})$ as in Section VI. According to Ref. [28], a time-periodic Hamiltonian given by Eq. (226) also possesses a bound, $\norm{\braket{l}{e^{-iH_{\mathrm{F}}t}}{l^{\prime}}_{f}}\leq e^{-\Theta(|l-l^{\prime}|-\alpha T)}$. This results in the proper cutoff $L_{\mathrm{LR}}\in\Theta(\alpha T+\log(1/\varepsilon))$ for the Sambe space formalism of the Floquet operator $U(T;0)$. (E-2) is required for the Floquet QPE to compute pairs of $(\epsilon_{n},\ket{\Phi_{n}})$ as in Section VII. The accuracy of quasienergy and Floquet eigenstates obtained from the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$, i.e., Theorem 2, relies solely on the exponential decay of $\ket{\phi_{n}^{l}}$, which tells us a proper cutoff $L\in\Theta(\alpha T+\log(1/\varepsilon))$ to achieve the allowable error $\varepsilon$. Namely, the proof of (E-2) validates the QPE under $H_{\mathrm{F}}^{L}$ in the Floquet QPE algorithm. Finally, (E-3) is required for the Floquet QPE algorithm to run efficiently with the designated oracles. Our algorithms use the block-encoding of the truncated Floquet Hamiltonian for realizing the Floquet operator $U(T;0)$ (Section VI) or for executing the QPE (Section VII). It should be constructed efficiently by queries to some block-encoding of the Hamiltonian to preserve the efficiency of the time-independent cases.

(E-1) and (E-3) are resolved in Ref. [28]: a time-periodic Hamiltonian $H(t)$ given by Eq. (226) has the Lieb-Robinson bound on the propagation, given by $\norm{\braket{l}{e^{-iH_{\mathrm{F}}t}}{l^{\prime}}_{f}}\leq\exp\left(-\frac{|l-l^{\prime}|}{\zeta^{\prime}}+2\zeta^{\prime\prime}\alpha t+\frac{2}{\zeta^{\prime\prime}}\right),$ (227) with the two constants $\zeta^{\prime}=(1/\zeta-1+e^{-1/\zeta})^{-1}$ and $\zeta^{\prime\prime}=(1-e^{-1/\zeta})^{-1}$. For the efficient block-encoding, we assume that the Hamiltonian $H(t)$ of Eq. (226) is written as $H(t)=\sum_{j=1}^{J}\alpha_{j}(t)H_{j},\quad\alpha_{j}(t+T)=\alpha_{j}(t),$ (228) and that we can construct a block-encoding $O_{H_{j}}$ for each time-independent operator $H_{j}$. Then, in a similar manner to Section V.2, we can construct the modified Floquet Hamiltonian suitable for block-encoding, $H_{\mathrm{F,pbc}}^{L}=\sum_{l\in[L]}\left(\sum_{j=1}^{J}\mathrm{Add}_{l}^{[L]}\otimes(\alpha_{j}^{l}H_{j})-l\omega\ket{l}\bra{l}_{f}\otimes I\right),$ (229) where $\alpha_{j}^{l}=T^{-1}\int_{0}^{T}\differential t\,\alpha_{j}(t)e^{il\omega t}$ denotes the Fourier component of $\alpha_{j}(t)$. The block-encoding of $H_{\mathrm{F,pbc}}^{L}$ can be constructed by one query to each $C[O_{H_{j}}]$ for $j=1,2,\ldots,J$ and some other cheap primitive gates, as in Proposition 6. At the same time, running the algorithms with $H_{\mathrm{F,pbc}}^{L}$ is essentially the same as running with the original Floquet Hamiltonian $H_{\mathrm{F}}^{L}$ under $L\in\Theta(\alpha T+\log(1/\varepsilon))$. The remaining task for the extension is to prove condition (E-2). We conclude this section by showing the counterpart of Theorem 3 as follows.

###### Theorem E1.

Let $\ket{\phi_{n}(t)}$ be a Floquet eigenstate of a time-periodic Hamiltonian with exponentially-decaying Fourier components given by Eq.
(226). Then, its Fourier component $\ket{\phi_{n}^{l}}$ decays exponentially in the Fourier index $l$ as $\norm{\ket{\phi_{n}^{l}}}\leq\exp\left(-\frac{|l|-1/2}{4\zeta}+\frac{\coth(1/4\zeta)}{8\pi\zeta}\alpha T\right),$ (230) when the quasienergy $\epsilon_{n}$ belongs to $\mathrm{BZ}=[-\omega/2,\omega/2)$.

Proof of Theorem E1.— We follow the proof of Theorem 3 in Appendix A. We can prove Proposition A2 in the same way, and this part does not affect the bound itself. The difference appears in the evaluation of $\norm{H_{\mathrm{Add},I}^{L}(\tau)}$ in Eq. (171) within Proposition A1, as follows: $\displaystyle\norm{H_{\mathrm{Add},I}^{L}(\tau)}$ $\displaystyle\leq$ $\displaystyle\sum_{m\in\mathbb{Z}}e^{m\omega\tau}\norm{H_{m}}$ (231) $\displaystyle\leq$ $\displaystyle\alpha\left(2\sum_{m=0}^{\infty}e^{-m(\zeta^{-1}-\omega\tau)}-1\right).$ For $\tau\in[0,1/(2\zeta\omega)]$, this gives the bound $\norm{H_{\mathrm{Add},I}^{L}(\tau)}\leq\alpha\coth(1/4\zeta)$. By choosing the parameter $\lambda=1/(4\zeta\omega)$ in Eq. (166), we arrive at Eq. (230). $\quad\square$

Compared to Theorem 3, the characteristic scale $\zeta\in\order{1}$ plays the role of the maximum Fourier index $M$ in a time-periodic Hamiltonian of Eq. (27). This exponential decay implies that the quasienergy $\epsilon_{n}$ is approximated by an eigenvalue of the truncated Floquet Hamiltonian $H_{\mathrm{F}}^{L}$ with an error up to $e^{-\Theta(L-\alpha T)}$, as in Theorem 2. As a consequence, all the results in the main text are also valid for the class of Hamiltonians given by Eq. (226). The cost of the Floquet QPE for $(\epsilon_{n},\ket{\phi_{n}(t)})$ [Section VI] or $(\epsilon_{n},\ket{\Phi_{n}})$ [Section VII] remains as given by Theorems 9 and 12, respectively. The cost of the Floquet eigenstate preparation discussed in Section VIII is summarized by Table 2. We conclude that the quantum algorithm for computing quasienergy and Floquet eigenstates can achieve nearly optimal query complexity even for time-periodic Hamiltonians having exponentially-decaying Fourier components. This result is reminiscent of Hamiltonian simulation [28], in which real-time dynamics can be simulated with nearly optimal query complexity both for time-periodic Hamiltonians with a finite number of Fourier components and for those with exponentially-decaying Fourier components.
# A benchmark of categorical encoders for binary classification

Federico Matteucci, Vadim Arzamasov, Klemens Böhm
Karlsruhe Institute of Technology
{federico.matteucci, vadim.arzamasov<EMAIL_ADDRESS>

###### Abstract

Categorical encoders transform categorical features into numerical representations that are indispensable for a wide range of machine learning models. Existing encoder benchmark studies lack generalizability because of their limited choice of 1. encoders, 2. experimental factors, and 3. datasets. Additionally, inconsistencies arise from the adoption of varying aggregation strategies. This paper is the most comprehensive benchmark of categorical encoders to date, including an extensive evaluation of 32 configurations of encoders from diverse families, with 48 combinations of experimental factors, and on 50 datasets. The study shows the profound influence of dataset selection, experimental factors, and aggregation strategies on the benchmark’s conclusions --- aspects disregarded in previous encoder benchmarks. Our code is available at https://github.com/DrCohomology/EncoderBenchmarking. This version of the paper is identical to the one accepted at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Track on Datasets and Benchmarks.

## 1 Introduction

Learning from categorical data poses additional challenges compared to numerical data, due to a lack of inherent structure such as order, distance, or kernel. The conventional solution is to transform categorical attributes into a numerical form, i.e., _encode_ them, before feeding them to a downstream Machine Learning (ML) model. Various encoders have been proposed, followed by several benchmark studies. However, their combined results remain inconclusive, as we now describe.

Many factors impact the generalizability [26] of a benchmark of encoders, including: 1. the compared encoders, 2. the number of datasets, 3. the quality metrics, 4. the ML models used, and 5. the tuning strategy. We also hypothesize that 6. the _aggregation strategy_ used to summarize the results of multiple experiments may affect the conclusions of a study. Existing encoder benchmarks, reviewed in Section 2, only partially control for these factors. First, none of these studies uses more than 15 datasets of a given type (regression or classification). Second, despite these studies collectively covering a substantial number of encoders, they often focus on specific encoder families, resulting in comparison gaps between the best encoders. For instance, the best-performing encoders from [28] (Cross-Validated GLMM) and [44] (Mean-Target) have not been studied together yet. Third, the results of existing studies are often not comparable due to variations in the selected quality metrics. For instance, [28] measures quality with ROC AUC, [4] with average precision, and [41] with accuracy. Fourth, existing studies tune ML models in different ways, yielding incompatible evaluations. For instance, [28, 41] do not tune, while [4, 5, 8, 44] tune but do not specify if they tune the ML model on encoded data or if they tune the entire ML pipeline. Last, no benchmark study of categorical encoders explores the impact of aggregation strategies, which is substantial according to our experiments.
For instance, [5] ranks the encoders by average ranking across all datasets, while [28] computes the median ranking with Kemeny-Young aggregation [46]. This study offers a taxonomy and a comprehensive experimental comparison of encoders for binary classification, taking into account the factors just mentioned. In particular, we consider: 1. 32 encoder configurations, including all of the best-performing ones from the literature and three novel encoders; 2. 50 datasets for binary classification; 3. four quality metrics; 4. five widely used ML models; 5. three tuning strategies; 6. 10 aggregation strategies gathered from existing categorical encoder benchmarks and from benchmarking methodology studies [27, 9]. This allows us to provide novel insights into the sensitivity of experimental results to experimental factors. In particular, we demonstrate how replicability [26] may not be ensured even for studies conducted on up to 25 datasets. For those combinations of experimental factors that show reproducible results, we isolate and recommend the best encoders.

Paper outline: Section 2 reviews existing works, Section 3 presents a taxonomy of encoder families, Section 4 describes the experimental setup, and Section 5 features the results.

Table 1: Related work on categorical encoders for binary classification.

| | | Ours | [28] | [4] | [8] | [5] | [44] | [41] |
|---|---|---|---|---|---|---|---|---|
| # Binary classification datasets | | 50 | 10 | 5 | 3 | 2 | 2 | 6 |
| # ML models | | 5 | 5 | 1 | 4 | 2 | 1 | 5 |
| Encoder family | Identifier | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Frequency-based | ✓ | ✓ | | | | | |
| | Contrast | ✓ | | | | | | ✓ |
| | Similarity | ✓ | | ✓ | | ✓ | | |
| | Simple target | ✓ | ✓ | | ✓ | | | ✓ |
| | Binning | ✓ | ✓ | | | | | |
| | Smoothing | ✓ | ✓ | | | ✓ | | ✓ |
| | Data-constraining | ✓ | ✓ | | | | | ✓ |
| Quality metric | Precision-recall based | ✓ | | ✓ | ✓ | ✓ | | |
| | Balanced accuracy | ✓ | ✓ | | ✓ | | | |
| | Accuracy | ✓ | | | ✓ | | ✓ | ✓ |
| Tuning strategy | Full pipeline tuning | ✓ | | | ? | | ✓* | |
| | Model tuning | ✓ | | ✓ | ✓ | | | |
| | No tuning | ✓ | ✓ | | | | | ✓ |
| Aggregation strategy | Heuristic | ✓ | | | | ✓ | | |
| | Friedman-Nemenyi | ✓ | | ✓ | | | ✓ | |
| | Kemeny-Young | ✓ | ✓ | | | | | |

## 2 Related work

Benchmarks of encoders. We focus on binary classification tasks, as they offer a wider range of compatible encoders; indeed, this let us conduct a deeper replicability analysis while keeping the computation feasible. Table 1 summarizes the related work. The other benchmarks often consider only a few datasets and either do not tune the ML model or do not describe the tuning procedure. This limits their applicability and generalizability. Additionally, there are substantial differences in the experimental settings across articles, including the encoders considered, quality metrics employed, and aggregation strategies used to interpret results. Hence, the comparability of these findings is limited. For instance, [28] recommends a data-constraining encoder, [41] both data-constraining and contrast encoders, [5, 4] similarity encoders, [8] an identifier encoder, and [44] a simple target encoder. Other benchmarks of encoders are [36], which focuses on regression tasks and faces similar issues, and [30, 14, 20], which use only a single dataset.

Analysis of benchmarks. When designing our benchmark, we adhered to the best practices discussed in the literature on benchmark design and analysis. In particular, [27] studies how choices of experimental factors impact the experimental results and advocates for benchmarks that consider a large variety of factors.
Similarly, [9] suggests guidelines to mitigate the inconsistencies in the choices of data and evaluation metric. Finally, [2] proposes a methodology to account for variance in the design choices (randomization of sources of variation) and in the post-processing of the experimental results (significant and meaningful improvements).

## 3 Taxonomy of encoders

This section presents the essential terminology and discusses the considered encoders and their corresponding families. Appendix 7.1 provides formal and detailed descriptions of the encoders.

### 3.1 Notation and terminology

Consider a tabular dataset with target $y$ taking values in $\\{0,1\\}$, and let $\mathbf{A}$ be one of its attributes (columns). $\mathbf{A}$ is _categorical_ if it represents qualitative properties and takes values in a finite domain $\Omega_{A}$. Each $\omega\in\Omega_{A}$ is a _level_ of $\mathbf{A}$. Categorical attributes do not support arithmetic operations like addition or multiplication, and their comparison is not based on arithmetic relations. An _encoder_ $E$ replaces a categorical attribute $\mathbf{A}$ with a set of numerical attributes, $E(\mathbf{A})$. We write $E(\Omega_{A})$ to indicate the domain of $E(\mathbf{A})$. Encoders may encode different levels of $\mathbf{A}$ in the same way, or encode different occurrences of the same level in the dataset in different ways. Encoders are either _supervised_ or _unsupervised_: supervised encoders require a target column, while unsupervised encoders rely solely on $\mathbf{A}$. In what follows, $\mathbf{A}$ always denotes the categorical attribute to be encoded.

### 3.2 Unsupervised encoders

Identifier encoders assign a unique vector identifier to each level. The most recognized encoder is One-Hot (OH), the default encoder in most machine learning pipelines [11, 15]. One-Hot is both space-inefficient and ineffective [28, 4, 5]. Alternatives include Ordinal (Ord), which assigns a unique consecutive identifier to each level, and Binary (Bin), which splits the base-2 representation of Ord($\mathbf{A}$) into its digits.

Frequency-based encoders replace levels with some function of their frequency in the dataset. We use Count, which relies on absolute frequencies [28].

Contrast encoders encode levels into ($L-1$)-dimensional vectors so that the encodings of all levels sum up to $\left(0,\dots,0\right)$ [41]. A constant intercept term, $1$, is usually appended to the encoding of each level. Contrast encoders encode levels such that their coefficients represent the level’s effect contrasted against a reference value. A common example is Sum, which contrasts against the target’s average value.

Similarity encoders treat the levels $\omega\in\Omega_{A}$ as strings and map them into a numeric space taking their similarity into account [5, 4]. These encoders are particularly useful for handling "dirty" categorical datasets that may contain typos and redundancies. One example is Min-Hash (MH), which decomposes each level into a set of $n$-grams, sequences of $n$ consecutive letters, and encodes to preserve the Jaccard similarity of the decompositions.

### 3.3 Supervised encoders

Simple target encoders encode levels with a function of the target. Prime examples are Mean-Target (MT) [7], which encodes with the conditional average of $y$ given $\mathbf{A}$, and Weight of Evidence (WoE) [39], which encodes with the logit of MT($\mathbf{A}$). As Mean-Target can lead to severe overfitting [28, 31], it may benefit from regularization.
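To make the Mean-Target mechanics concrete, the sketch below implements plain MT together with a smoothed variant in the spirit of Mean-Estimate. It is a minimal pandas illustration, not the implementation used in this benchmark, and the function name and smoothing-strength parameter `m` are our own choices.

```python
import pandas as pd

def mean_target_encode(train: pd.DataFrame, col: str, target: str,
                       m: float = 0.0) -> pd.Series:
    """Encode a categorical column with the (optionally smoothed) mean target.

    m = 0 gives plain Mean-Target (MT); m > 0 blends each level's mean with
    the global mean, similar in spirit to Mean-Estimate (ME).
    """
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(["mean", "count"])
    # Smoothed estimate: (count * level_mean + m * global_mean) / (count + m)
    encoding = (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)
    # Levels unseen at test time would fall back to the global mean.
    return train[col].map(encoding).fillna(global_mean)

# Toy example
df = pd.DataFrame({"color": ["red", "red", "blue", "blue", "blue", "green"],
                   "y":     [1,     0,     1,      1,      0,      1]})
df["color_mt"] = mean_target_encode(df, "color", "y", m=0.0)   # plain MT
df["color_me"] = mean_target_encode(df, "color", "y", m=10.0)  # heavily smoothed
print(df)
```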
The following families of encoders are regularizations of Mean-Target. We propose Binning encoders, which regularize MT by partitioning either $\Omega_{A}$ or MT($\Omega_{A}$) into bins. Pre-Binned MT (PBMT) partitions $\Omega_{A}$ to maximize the number of bins such that each bin’s relative frequency exceeds a specified threshold, then encodes the binned attribute with MT. Discretized MT (DMT) partitions MT$(\Omega_{A})$ into intervals of equal length, then encodes each level with the lower bound of the interval in which its MT encoding falls.

Smoothing encoders blend MT($\Omega_{A}$) with the overall average target. Notable examples are Mean-Estimate (ME) [22], which uses a weighted average of the two, and the Generalized Linear Mixed Model encoder (GLMM) [28], which encodes with the coefficients of a generalized linear mixed model fitted on the data.

Data-constraining encoders regularize MT$(\mathbf{A})$ by restricting the amount of data used to encode each occurrence of a level in the dataset. CatBoost (CB) [31] first randomly permutes the dataset’s rows, then maps each occurrence of a level $\omega$ to the average target of its previous occurrences. Cross-Validated MT (CVMT) [28] splits the dataset into folds of equal size, then encodes each fold with an MT trained on the other folds. We propose the BlowUp variant of CVMT, BUMT, which trains an MT on each fold and uses all of them to encode the whole dataset. Related variants are Cross-Validated GLMM (CVGLMM) [28] and its BlowUp version (BUGLMM).

## 4 Experimental design

As there is no intrinsic measure of an encoder’s quality, we proxy the latter with the quality of an ML model trained on encoded data. This procedure is in line with the current literature on the topic, discussed in Section 2. Each experiment thus consists of the following steps. First, we fix a combination of factors: a dataset, an ML model, a quality metric, and a tuning strategy. Then, we partition the dataset using a 5-fold stratified cross-validation and pre-process the training folds by:

* • imputing missing values with median for numerical and mode for categorical attributes;
* • scaling the numerical attributes;
* • encoding the categorical attributes.

If tuning is to be applied, we fine-tune the pipeline with nested cross-validation and output the average performance over the outer test folds. We used standard scikit-learn [29] procedures for scaling and missing-value imputation. We conducted experiments using Python 3.8 on an AMD EPYC 7551 machine with 32 cores and 128 GB RAM. We limit each evaluation to 100 minutes to handle the extensive workload. As described in Appendix 7.3.1, out of the 64000 cross-validated evaluations, 61812 finished on time without throwing errors. For the sensitivity, replicability, and encoder comparison analysis, we ignored the missing evaluations. We did so 1. since there is no clearly superior imputation method, and 2. to avoid introducing unnecessary variability in the analysis. Our preliminary experiments confirm that imputing the small number of missing evaluations does not significantly impact our analysis. In what follows, we describe the datasets, ML models, quality metrics, and tuning strategies we use in our experiments. Then, we outline the different aggregation strategies. Appendix 7.2 provides further details about datasets and aggregation strategies.

### 4.1 Encoders

We used the category_encoders (https://contrib.scikit-learn.org/category_encoders/) implementations of Bin, CB, Count, Ord, OH, Sum, and WoE.
We sourced MH from the authors’ implementation [4, 5] (https://dirty-cat.github.io/stable/). We implemented DMT, GLMM, ME, MT, PBMT, CVMT, BUMT, CVGLMM, and BUGLMM. We also added a baseline encoder, Drop, which encodes every level with $1$. For DMT, we experimented with the number of bins: $\\{2,5,10\\}$; for ME, with the regularization strength: $\\{0.1,1,10\\}$; for PBMT, with the minimum frequency: $\\{0.001,0.01,0.1\\}$; and for cross-validated encoders, such as CVMT, with the number of folds: $\\{2,5,10\\}$. We display hyperparameter values with subscripts, e.g., CV2MT.

### 4.2 Datasets

We used binary classification datasets. This allows us to conduct an in-depth analysis using the same ML models and quality metrics. Additionally, certain supervised encoders, e.g., WoE, are specifically designed for binary classification tasks. We chose 50 datasets with categorical attributes from OpenML [42], including the suitable ones from the related work.

Table 2: ML models used in related studies.

| | | Ours | [28] | [4] | [8] | [5] | [44] | [41] |
|---|---|---|---|---|---|---|---|---|
| Model family | Tree ensembles | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
| | Linear | ✓ | ✓ | | ✓ | | | ✓ |
| | SVM | ✓ | ✓ | | | ✓ | | ✓ |
| | k-NN | ✓ | ✓ | | | | | |
| | DT | ✓ | ✓ | | ✓ | | ✓ | |
| | Neural | | | | ✓ | | | ✓ |
| | Naïve Bayes | | | | | | | ✓ |

### 4.3 ML models

We experimented with diverse ML models that process data in different ways: decision trees (DT) and boosted trees (LGBM) exploit orderings, support vector machines (SVM) use kernels, k-nearest neighbors (k-NN) relies on distances, and logistic regression (LogReg) is a "pseudo-linear" model. The LGBM implementation we used is from the LightGBM module (https://lightgbm.readthedocs.io/en/v3.3.5/), while the other models’ implementations are from scikit-learn. Table 2 compares our model choices with related work. We excluded neural models due to their inferior performance on tabular data [15] and the absence of a recommended architecture. We also did not use Naïve Bayes due to its lack of popularity.

### 4.4 Quality metrics and tuning strategies

We assess an encoder’s quality by evaluating an ML model trained on the encoded data. We use four quality metrics: balanced accuracy (BAcc), F1-score (F1), accuracy (Acc), and Area Under the ROC Curve (AUC). We compared three tuning strategies:

* • _no tuning_;
* • _model tuning_: the entire training set is pre-processed before tuning the model;
* • _full tuning_: the entire pipeline is tuned on the training set, with each training fold of the nested cross-validation pre-processed independently.

We used Bayesian search from scikit-optimize (https://scikit-optimize.github.io/stable/) for full tuning, and grid search from scikit-learn for model tuning. Table 4 summarizes the tuning search space for different ML models. To mitigate excessive runtime, we chose not to tune certain ML models and limited the dataset selection to the smallest 30 for full tuning, as Table 3 illustrates.

Table 3: Factors for different tuning strategies.

| | Models | # Datasets |
|---|---|---|
| No tuning | DT, k-NN, LogReg, SVM, LGBM | 50 |
| Model tuning | DT, k-NN, LogReg | 50 |
| Full tuning | DT, k-NN, LogReg, SVM | 30 |

Table 4: Tuning search space.
| | Hyperparameter | Interval | Grid |
|---|---|---|---|
| DT | max_depth | $[2,\dots,5]$ | $\\{2,5,None\\}$ |
| k-NN | n_neighbors | $[2,\dots,10]$ | $\\{2,5,10\\}$ |
| LogReg | C | $[0.2,5]$ | $\\{0,1,10\\}$ |
| SVM | C | $[0.1,2]$ | |
| | gamma | $[0.1,100]$ | |

### 4.5 Aggregating into a consensus ranking

A common practice for summarizing and interpreting the results of benchmark experiments is to aggregate them into a _consensus ranking_ of _alternatives_ (encoders in our case) [10, 27, 15]. To obtain a dataset-independent ranking of encoders, we aggregate the results across different datasets while keeping all other factors fixed. We now present well-known aggregation strategies used in benchmarks.

Heuristics rank alternatives based on an aggregate score. Common aggregation heuristics include mean rank (R-M) [5], median rank (R-Md), mean quality (Q-M), median quality (Q-Md), rescaled mean quality [36, 15] (Q-RM), the number of times the alternative was ranked the best (R-B) or the worst (R-W) [41], and the number of times the alternative’s quality is better than the best quality multiplied by a threshold $\theta\leq 1$ (Q-Th$_{\theta}$).

Friedman-Nemenyi tests [10] (R-Nem$_{p\text{-value}}$). First, one ranks alternatives separately for each dataset and then applies a Friedman test to reject the hypothesis that all encoders have the same average rank. If the hypothesis is rejected, pairwise Nemenyi post-hoc tests are conducted to compare pairs of alternatives. Finally, one uses the results of these post-hoc tests to construct the consensus ranking. This aggregation strategy requires the user to choose a p-value.

Kemeny-Young aggregation [21, 47] (R-Kem) first ranks alternatives separately for each dataset. Then, it determines the consensus ranking that minimizes the sum of distances to the datasets’ rankings. We adopt the approach described in [45], with a distance measure that accommodates ties and missing values in the rankings. We then formulate the optimization problem as a mixed-integer linear program and solve it using the Gurobi solver (https://www.gurobi.com/solutions/gurobi-optimizer/) with an academic license. Kemeny-Young aggregation is much slower than the other aggregation strategies, taking minutes for each aggregation.

## 5 Results

This section summarizes the main results of our study. Appendix 7.3 further discusses the missing evaluations, run time, replicability, and the ranks of the encoders, and studies the effect of tuning on pipeline quality.

### 5.1 Sensitivity analysis

The relative performance of encoders, i.e., the ranking, can depend on the choice of ML model, quality metric, and tuning strategy. Moreover, the choice of an aggregation strategy impacts the consensus ranking. To quantify the influence of these choices, we calculate the similarity between rankings using the Jaccard index $J$ for the sets of best encoders and the Spearman correlation coefficient $\rho$. Intuitively, $J$ measures if two experiments with different factor combinations agree on the best encoders, while $\rho$ takes the entire ranking into account. For both measures, values close to 1 indicate high agreement and low sensitivity. Conversely, values near 0 (or, for $\rho$, negative) suggest low consistency and high sensitivity.
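As an illustration of these two measures, the sketch below compares two hypothetical encoder rankings; it uses scipy's Spearman correlation and a set-based Jaccard index over the top-ranked encoders. The rankings are fabricated for illustration, and the tie-handling conventions are simplified relative to the paper.

```python
from scipy.stats import spearmanr

def jaccard_best(rank_a: dict, rank_b: dict) -> float:
    """Jaccard index between the sets of best (rank-1) encoders."""
    best_a = {e for e, r in rank_a.items() if r == min(rank_a.values())}
    best_b = {e for e, r in rank_b.items() if r == min(rank_b.values())}
    return len(best_a & best_b) / len(best_a | best_b)

# Hypothetical rankings of four encoders under two factor combinations
rank_dt  = {"OH": 1, "Sum": 2, "WoE": 3, "Drop": 4}   # e.g., decision tree
rank_svm = {"Sum": 1, "OH": 2, "Drop": 3, "WoE": 4}   # e.g., SVM

encoders = sorted(rank_dt)
rho, _ = spearmanr([rank_dt[e] for e in encoders],
                   [rank_svm[e] for e in encoders])
print(f"Spearman rho = {rho:.2f}, Jaccard J = {jaccard_best(rank_dt, rank_svm):.2f}")
```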
#### 5.1.1 Sensitivity to experimental factors

Figure 1: Sensitivity as the average similarity between rankings, measured with $\rho$ (upper triangle) and $J$ (lower triangle), computed between individual rankings for varying: (a) ML model, (b) quality metric, (c) tuning strategy, and between consensus rankings for varying (d) aggregation strategy.

We evaluate the sensitivity of encoder rankings on individual datasets with respect to an experimental factor (ML model, quality metric, or tuning strategy) by varying the factor of interest and keeping the other factors fixed, then calculating the similarity between pairs of rankings. After that, we average the result across all combinations of the other factors. Figures 1a, 1b, and 1c show the resulting values, with Spearman’s $\rho$ in the upper triangle and the Jaccard index $J$ in the lower triangle. For example, Spearman’s $\rho$ between encoder rankings for DT and SVM, averaged across all datasets, tuning strategies, and quality metrics, is 0.3. Our findings highlight the high sensitivity of results to experimental factors, for both the full rankings and the best encoders. They also explain why results from other studies are so inconsistent, as choosing different values for any factor will lead to different results.

#### 5.1.2 Sensitivity to aggregation strategy

To evaluate the impact of the aggregation strategy on the consensus ranking, we apply the same procedure as above to consensus rankings instead of rankings on individual datasets. Figure 1d presents the results with the notation from Section 4.5. For example, Spearman’s $\rho$ between consensus rankings obtained with Q-M and Q-Md, averaged across all ML models, tuning strategies, and quality metrics, is 0.8. While some aggregation strategies show strong similarities, different strategies yield very different consensus rankings in general. This is particularly evident for the Jaccard index $J$, indicating the high sensitivity of the best encoders to the rank aggregation strategy.

Figure 2: Replicability as the average similarity of consensus rankings from disjoint subsets of datasets.

### 5.2 Replicability

Replicability is defined as the property of a benchmark to produce consistent results from different data [26]. This definition does not, however, provide a quantifiable notion of replicability. To overcome this, we made the following modeling decisions (sketched in code below). First, we fix a factor combination: ML model, quality metric, tuning strategy, and aggregation strategy. We excluded the R-Nem and R-Kem aggregation strategies due to their slower run time. Second, we model the result of a benchmark on a dataset sample $S$ with the consensus ranking aggregated across $S$. Third, we quantify replicability as the similarity between consensus rankings, averaged over all factor combinations and 100 pairs of equal-sized disjoint sets of datasets. As discussed in Section 5.1, we measure the similarity with $\rho$ and $J$ to capture the similarity between both the rankings and the best encoders. We refer to them as $\rho$-replicability and $J$-replicability, respectively.
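A minimal sketch of this procedure under simplifying assumptions: consensus via the mean-rank heuristic, similarity via $\rho$ only, and a hypothetical `ranks` matrix of per-dataset encoder ranks for one fixed factor combination. With $J$ in place of $\rho$, the same loop yields $J$-replicability.

```python
# Rows of `ranks` are datasets, columns are encoders; entries are the
# per-dataset ranks of the encoders for one factor combination.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def consensus_mean_rank(ranks: pd.DataFrame) -> pd.Series:
    return ranks.mean(axis=0).rank()   # aggregate scores, then re-rank

def rho_replicability(ranks, sample_size, n_pairs=100, seed=0):
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_pairs):
        idx = rng.permutation(len(ranks))        # two disjoint samples
        a = ranks.iloc[idx[:sample_size]]
        b = ranks.iloc[idx[sample_size:2 * sample_size]]
        c_a, c_b = consensus_mean_rank(a), consensus_mean_rank(b)
        sims.append(spearmanr(c_a, c_b).correlation)
    return float(np.mean(sims))

# Hypothetical example: per-dataset rankings of 4 encoders on 20 datasets.
rng = np.random.default_rng(1)
ranks = pd.DataFrame(rng.permuted(np.tile(np.arange(1, 5), (20, 1)), axis=1),
                     columns=["One-Hot", "Sum", "Binary", "Drop"])
print(rho_replicability(ranks, sample_size=10))
```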
Figure 2 shows the outcome for different tuning strategies, conditional on the ML model and the size of the dataset samples; we study additional factors in Appendix 7.3.3. The shaded areas represent a bootstrapped 95% confidence interval. Our findings show an upward trend of $\rho$-replicability as the size of the dataset samples increases. This observation confirms that, in general, considering a larger number of datasets yields more reliable experimental outcomes. It is, however, important to note that this pattern does not always hold for $J$-replicability. This suggests that, for some models, the best encoders might vary significantly even with a relatively large number of datasets. To conclude, the replicability of our results strongly depends on the ML model, with logistic regression exhibiting the highest replicability and decision trees the lowest.

### 5.3 Comparing encoders

Based on the outcome of Section 5.2, we now examine the ranks of encoders limited to decision trees, logistic regression, and all ML models. Figure 3(a) shows the ranks of encoders from the experiments with decision trees across all datasets, quality metrics, and tuning strategies. One-Hot is the best-performing encoder; however, Nemenyi tests at a significance level of 0.05 fail to reject that the average rank of One-Hot is the same as that of the other encoders. Figure 3(b) features the encoder ranks for logistic regression, where four encoders, namely One-Hot, Sum, Binary, and Weight of Evidence, consistently achieve higher ranks compared to the others. Nemenyi tests confirm that this difference in ranks is significant. These results are in line with the ones from Section 5.2, which indicate low replicability of the results for decision trees and higher replicability for logistic regression. Figure 3(c) presents the ranks of encoders across all datasets, ML models, quality metrics, and tuning strategies. Similarly to logistic regression, One-Hot, Sum, Binary, and Weight of Evidence consistently achieve significantly higher average ranks compared to the other encoders, again confirmed by Nemenyi tests. We recommend these four encoders as the preferred choices in practical applications. This conclusion contradicts other studies reporting a suboptimal performance of One-Hot [5, 28]. Our findings also reveal that Drop performs significantly worse than all other encoders, i.e., encoding categorical attributes generally yields better results than dropping them.

Figure 3: Ranks of encoders for (a) decision trees, (b) logistic regression, and (c) all models.

### 5.4 Comparing to related work

In this section, we compare our results with the findings of other studies. To do so, we select subsets of our results that mimic the experimental settings in related work. In [28], CV5GLMM outperformed every competitor for boosted trees and k-NN, while GLMM was recommended for SVMs. However, in our experiments, Sum outperformed GLMM for SVMs, One-Hot did better than CV5GLMM for boosted trees, and CV10GLMM was better than CV5GLMM for k-NN. Next, while in [5] similarity encoders are better than One-Hot for boosted trees, subsequent research reported no significant difference between Min-hash and One-Hot on medium-sized tabular datasets [4]. Our findings are in line with this latter result, as we could not find a performance difference between the two encoders with a t-test at a significance level of 0.05. In [41], Sum is reported as the best encoder on the Adult dataset for boosted trees, while a Data-constraining encoder is reported as the worst. With the same setting, we did not find a significant performance difference for any encoder except for Drop, which performed the worst. On the Bank marketing dataset, [8] showed that One-Hot and Mean-Target outperformed Binary with logistic regression. In our experiments, Binary was slightly worse than One-Hot and Mean-Target.
In [44], Dummy, an identifier encoder similar to One-Hot, was better than Mean-Target on the Tic-tac-toe dataset with boosted trees. We, instead, did not observe any significant difference between One-Hot and Mean-Target for these factors.

## 6 Limitations and conclusions

Limitations. First, we treated encoders as part of the pre-processing, but certain encoders can be an integral component of specific ML models. For instance, CatBoost is derived from the homonymous boosted trees algorithm, which re-encodes the data multiple times during training. Second, we applied a single encoder to all categorical attributes. Using different encoders based on the cardinality of the attribute may sometimes yield favorable results [28, 4]. However, the selection of the optimal encoder for each attribute requires either domain knowledge of the attribute or purpose-built tools, which falls outside the scope of our benchmark and is therefore left as future work. We also did not include neural networks, due to the absence of a recommended architecture and their reported inferior performance to tree-based models on tabular data [15].

Conclusions. In this study, we conducted an extensive evaluation of encoder performance across various experimental factors, including ML models, quality metrics, and tuning strategies. Our results demonstrate a high sensitivity of encoder rankings to these factors, both for the full rankings and the best-performing encoders. This sensitivity explains the inconsistent results among related studies, as different choices in any of these factors can lead to different outcomes.

We also assessed the impact of aggregation strategies on consensus rankings, revealing significant variations in rankings depending on the chosen strategy. This emphasizes the importance of carefully considering the aggregation method when post-processing and interpreting results. Regarding replicability, we defined and quantified it using $\rho$-replicability and $J$-replicability. Our findings indicate that replicability is influenced by factors such as the ML model, with logistic regression exhibiting the highest replicability and decision trees the lowest. Additionally, larger dataset samples tend to yield more reliable experimental outcomes, although this trend does not always hold for $J$-replicability.

Based on our results, we recommend specific encoders for practical applications. For decision trees, Weight of Evidence performed the best, although statistical tests did not show a significant difference from other encoders. For logistic regression, Sum, One-Hot, Binary, and Weight of Evidence consistently achieved higher ranks, with statistically significant differences from other encoders. These findings contradict previous studies, highlighting the importance of considering a broad range of experimental factors. Finally, our comparative analysis with related work revealed discrepancies in encoder performance, suggesting that the breadth of our study may contribute to these differences. This emphasizes the need for caution when interpreting results from studies with more limited experimental settings. Overall, our study provides valuable insights into the sensitivity of encoder performance to experimental factors, as well as recommendations for practical encoder selection across different scenarios.

## Acknowledgments

We thank Dmitriy Simakov for valuable discussions and Natalia Arzamasova for her algorithm and implementation of the PreBinnedEncoder.
This work was supported in part by the German Research Foundation (Deutsche Forschungsgemeinschaft), project Charakterisierung, Modellierung und Homogenisierung von Vernetzungswerken mit Hilfe interpretierbarer Datenanalysemethoden, and by the State of Baden-Württemberg, project Algorithm Engineering für die Scalability Challenge.

## References

* [1] David Aha ‘‘Tic-Tac-Toe Endgame’’, UCI Machine Learning Repository, 1991
* [2] Xavier Bouthillier et al. ‘‘Accounting for Variance in Machine Learning Benchmarks’’ In _MLSys_ mlsys.org, 2021
* [3] Laurent Candillier and Vincent Lemaire ‘‘Nomao’’, UCI Machine Learning Repository, 2012
* [4] Patricio Cerda and Gaël Varoquaux ‘‘Encoding High-Cardinality String Categorical Variables’’ In _IEEE Trans. Knowl. Data Eng._ 34.3, 2022, pp. 1164–1176
* [5] Patricio Cerda, Gaël Varoquaux and Balázs Kégl ‘‘Similarity encoding for learning with dirty categorical variables’’ In _CoRR_ abs/1806.00979, 2018
* [6] ‘‘Congressional Voting Records’’, UCI Machine Learning Repository, 1987
* [7] Don Coppersmith, Se June Hong and Jonathan R. M. Hosking ‘‘Partitioning Nominal Attributes in Decision Trees’’ In _Data Min. Knowl. Discov._ 3.2, 1999, pp. 197–217
* [8] Mwamba Kasongo Dahouda and Inwhee Joe ‘‘A Deep-Learned Embedding Technique for Categorical Features Encoding’’ In _IEEE Access_ 9, 2021, pp. 114381–114391
* [9] Mostafa Dehghani et al. ‘‘The Benchmark Lottery’’ In _CoRR_ abs/2107.07002, 2021
* [10] Janez Demsar ‘‘Statistical Comparisons of Classifiers over Multiple Data Sets’’ In _J. Mach. Learn. Res._ 7, 2006, pp. 1–30
* [11] Keyu Duan et al. ‘‘A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking’’ In _NeurIPS_, 2022
* [12] Bob Evans ‘‘Cylinder Bands’’, UCI Machine Learning Repository, 1995
* [13] Farhad Soleimanian Gharehchopogh and Seyyed Reza Khaze ‘‘Data mining application for cyber space users tendency in blog writing: a case study’’ In _CoRR_ abs/1307.7432, 2013
* [14] Sebastian Gnat ‘‘Impact of Categorical Variables Encoding on Property Mass Valuation’’ In _KES_ 192, Procedia Computer Science Elsevier, 2021, pp. 3542–3550
* [15] Léo Grinsztajn, Edouard Oyallon and Gaël Varoquaux ‘‘Why do tree-based models still outperform deep learning on typical tabular data?’’ In _NeurIPS_, 2022
* [16] C. Harley, R. Reynolds and M. Noordewier ‘‘Molecular Biology (Promoter Gene Sequences)’’, UCI Machine Learning Repository, 1990
* [17] Hans Hofmann ‘‘Statlog (German Credit Data)’’, UCI Machine Learning Repository, 1994
* [18] Ronald Iman and James Davenport ‘‘Approximations of the critical region of the Friedman statistic’’ In _Communications in Statistics-Theory and Methods_ 9, 1980, pp. 571–595
* [19] Andras Janosi, William Steinbrunn, Matthias Pfisterer and Robert Detrano ‘‘Heart Disease’’, UCI Machine Learning Repository, 1988
* [20] Justin M. Johnson and Taghi M. Khoshgoftaar ‘‘Encoding Techniques for High-Cardinality Features and Ensemble Learners’’ In _IRI_ IEEE, 2021, pp. 355–361
* [21] John G. Kemeny ‘‘Mathematics without numbers’’ In _Daedalus_ 88.4 JSTOR, 1959, pp. 577–591
* [22] Daniele Micci-Barreca ‘‘A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems’’ In _SIGKDD Explor._ 3.1, 2001, pp. 27–32
* [23] Erick Moreno-Centeno and Adolfo R. Escobedo ‘‘Axiomatic aggregation of incomplete rankings’’ In _IIE Transactions_ 48.6 Taylor & Francis, 2016, pp. 475–488
* [24] S. Moro, P. Rita and P. Cortez ‘‘Bank Marketing’’, UCI Machine Learning Repository, 2012
* [25] ‘‘Mushroom’’, UCI Machine Learning Repository, 1987
* [26] National Academies of Sciences, Engineering, and Medicine ‘‘Reproducibility and replicability in science’’ National Academies Press, 2019
* [27] Christina Nießl et al. ‘‘Over-optimism in benchmark studies and the multiplicity of design and analysis options when interpreting their results’’ In _WIREs Data Mining Knowl. Discov._ 12.2, 2022
* [28] Florian Pargent, Florian Pfisterer, Janek Thomas and Bernd Bischl ‘‘Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features’’ In _Comput. Stat._ 37.5, 2022, pp. 2671–2692
* [29] F. Pedregosa et al. ‘‘Scikit-learn: Machine Learning in Python’’ In _Journal of Machine Learning Research_ 12, 2011, pp. 2825–2830
* [30] Kedar Potdar, Taher S. Pardawala and Chinmay D. Pai ‘‘A comparative study of categorical variable encoding techniques for neural network classifiers’’ In _International Journal of Computer Applications_ 175.4, 2017, pp. 7–9
* [31] Liudmila Ostroumova Prokhorenkova et al. ‘‘CatBoost: unbiased boosting with categorical features’’ In _NeurIPS_, 2018, pp. 6639–6649
* [32] Ross Quinlan ‘‘Credit Approval’’, UCI Machine Learning Repository
* [33] Ross Quinlan ‘‘Statlog (Australian Credit Approval)’’, UCI Machine Learning Repository
* [34] Ross Quinlan ‘‘Thyroid Disease’’, UCI Machine Learning Repository, 1987
* [35] Jan N. Rijn and Jonathan K. Vis ‘‘Endgame Analysis of Dou Shou Qi’’ In _ICGA Journal_ 37.2 IOS Press, 2014, pp. 120–124
* [36] Diogo Seca and João Mendes-Moreira ‘‘Benchmark of Encoders of Nominal Features for Regression’’ In _WorldCIST (1)_ 1365, Advances in Intelligent Systems and Computing Springer, 2021, pp. 146–155
* [37] Alen Shapiro ‘‘Chess (King-Rook vs. King-Pawn)’’, UCI Machine Learning Repository, 1989
* [38] Peter Sprent and Nigel C. Smeeton ‘‘Applied nonparametric statistical methods’’ CRC Press, 2016
* [39] Gero Szepannek ‘‘On the practical relevance of modern machine learning algorithms for credit scoring applications’’ In _WIAS Report Series_ 29, 2017, pp. 88–96
* [40] Muhammad Usman and Adeel Ahmed ‘‘Dresses_Attribute_Sales’’, UCI Machine Learning Repository, 2014
* [41] Eric Valdez-Valenzuela, Angel Kuri-Morales and Helena Gómez-Adorno ‘‘Measuring the Effect of Categorical Encoders in Machine Learning Tasks Using Synthetic Data’’ In _MICAI (1)_ 13067, Lecture Notes in Computer Science Springer, 2021, pp. 92–107
* [42] Joaquin Vanschoren, Jan N. Rijn, Bernd Bischl and Luís Torgo ‘‘OpenML: networked science in machine learning’’ In _SIGKDD Explor._ 15.2, 2013, pp. 49–60
* [43] J. Wnek ‘‘MONK’s Problems’’, UCI Machine Learning Repository, 1992
* [44] Marvin N. Wright and Inke R. König ‘‘Splitting on categorical predictors in random forests’’ In _PeerJ_ 7 PeerJ Inc., 2019, pp. e6339
* [45] Yeawon Yoo and Adolfo R. Escobedo ‘‘A New Binary Programming Formulation and Social Choice Property for Kemeny Rank Aggregation’’ In _Decis. Anal._ 18.4, 2021, pp. 296–320
* [46] H. Peyton Young ‘‘Condorcet’s theory of voting’’ In _American Political Science Review_ 82.4 Cambridge University Press, 1988, pp. 1231–1244
* [47] H. Peyton Young and Arthur Levenglick ‘‘A consistent extension of Condorcet’s election principle’’ In _SIAM Journal on Applied Mathematics_ 35.2 SIAM, 1978, pp. 285–300
* [48] Maciej Zieba, Jakub M. Tomczak, Marek Lubicz and Jerzy Swiatek ‘‘Boosted SVM for extracting rules from imbalanced data in application to prediction of the post-operative life expectancy in the lung cancer patients’’ In _Appl. Soft Comput._ 14, 2014, pp. 99–108
## 7 Appendix

### 7.1 Encoders

This section presents a reproducible description of the encoders discussed in Section 3, following the structure outlined below. We discuss identifier, frequency-based, contrast, and simple target encoders together in Appendix 7.1.4, as all of these encoders can be explicitly represented as functions. Similarity, binning, smoothing, and data-constraining encoders have dedicated sections. Table 5 contains the notation used in this section.

Table 5: Notation for Section 7.1.

Symbol | Meaning
---|---
$\mathbb{N}_{0}$ | natural numbers including $0$
$(x)_{2}$ | base-2 representation of $x\in\mathbb{N}_{0}$
$X^{n\times d}$ | set of matrices with entries in $X$, $n$ rows and $d$ columns
$\mathds{1}$ | indicator function
$D$ | binary classification dataset
$n$ | number of rows of $D$
$\mathbf{y}\in\{0,1\}^{n}$ | target attribute of $D$
$\Omega_{A}=\{\omega_{l}\}_{l=1}^{L}$ | categorical domain (strings)
$\mathbf{A}\in\Omega_{A}^{n}$ | categorical attribute of $D$ to be encoded
$l_{i}\in\left\{1,\dots,L\right\}$ | such that $\mathbf{A}_{i}=\omega_{l_{i}}$
$E:\mathbf{A}\mapsto\mathbf{M}\in\mathbb{R}^{n\times d}$ | encoder
$\mathbf{M}\in\mathbb{R}^{n\times d}$ | encoding of $\mathbf{A}$, compact notation
$E(\mathbf{A})\in\mathbb{R}^{n\times d}$ | encoding of $\mathbf{A}$ with explicit encoder
$E(\Omega_{A})$ | unique values of rows of $E(\mathbf{A})$
$d=d(E,\mathbf{A})$ | number of columns of $\mathbf{M}$
$\mathbf{M}_{i}$ | $i$-th row of $\mathbf{M}$ if $d>1$, $i$-th component of $\mathbf{M}$ if $d=1$
$l\in\{1,\dots,L\}$ | index of levels
$i,h\in\{1,\dots,n\}$ | row indices of $\mathbf{M}$, $\mathbf{A}$, or $\mathbf{y}$
$j\in\{1,\dots,d\}$ | column index of $\mathbf{M}$

Table 6: Identifier encoders.

| $E(\Omega_{A})$ | $E(\mathbf{A})$
---|---|---
Binary [8] | $\{0,1\}^{[\log_{2}(L)]+1}$ | $\mathbf{M}_{i}=\left(l_{i}\right)_{2}$
Dummy [28, 44] | $\{0,1\}^{L-1}$ | $\mathbf{M}_{ij}=\begin{cases}\mathds{1}(\mathbf{A}_{i}=\omega_{j})&l_{i}\neq L\\ 0&l_{i}=L\end{cases}$
One-Hot [28, 4, 8, 44, 41] | $\{0,1\}^{L}$ | $\mathbf{M}_{ij}=\mathds{1}(\mathbf{A}_{i}=\omega_{j})$
Ordinal [28, 44, 41] | $\mathbb{N}_{0}$ | $\mathbf{M}_{i}=l_{i}$

Table 7: Frequency-based encoders.

| $E(\Omega_{A})$ | $E(\mathbf{A})$
---|---|---
Count [36] | $\mathbb{N}_{0}$ | $\mathbf{M}_{i}=\sum_{j}\mathds{1}\left(\mathbf{A}_{j}=\omega_{l_{i}}\right)$
Frequency [28] | $\mathbb{R}$ | $\mathbf{M}_{i}=\frac{1}{n}\sum_{j}\mathds{1}\left(\mathbf{A}_{j}=\omega_{l_{i}}\right)$

Table 8: Contrast encoders (without intercept).

| $E(\Omega_{A})$ | $E(\mathbf{A})$
---|---|---
Sum [41] | $\mathbb{R}^{L-1}$ | $\mathbf{M}_{ij}=\begin{cases}\mathds{1}(\mathbf{A}_{i}=\omega_{j})&l_{i}\neq L\\ -1&l_{i}=L\end{cases}$
Backward difference [41] | $\mathbb{R}^{L-1}$ | $\mathbf{M}_{ij}=\begin{cases}-\frac{L-j}{L}&l_{i}\leq j\\ \frac{j}{L}&l_{i}>j\end{cases}$
Helmert [41] | $\mathbb{R}^{L-1}$ | $\mathbf{M}_{ij}=\begin{cases}-\frac{1}{j+1}&l_{i}\leq j\\ \frac{j}{j+1}&l_{i}=j+1\\ 0&l_{i}\geq j+2\end{cases}$

Table 9: Simple target encoders.

| $E(\Omega_{A})$ | $E(\mathbf{A})$
---|---|---
Mean-Target [28, 8] | $\mathbb{R}$ | $\mathbf{M}_{i}=MT(\omega_{l_{i}}):=\frac{\sum_{h=1}^{n}y_{h}\mathds{1}(\mathbf{A}_{h}=\omega_{l_{i}})}{\sum_{h=1}^{n}\mathds{1}(\mathbf{A}_{h}=\omega_{l_{i}})}$
Weight of Evidence [39, 28] | $\mathbb{R}$ | $\mathbf{M}_{i}=\log\left(\frac{MT(\omega_{l_{i}})}{1-MT(\omega_{l_{i}})}\right)$

#### 7.1.1 Similarity encoders [5, 4]

Min-Hash treats $\omega\in\Omega_{A}$ as a string, splits it into its set of character-level n-grams (substrings of $n$ consecutive characters), uses a hash function to encode each n-gram into an integer, and finally encodes $\omega$ with the minimum value of the hash function on the set of n-grams. The process is repeated for $d$ hash functions, yielding $\mathbf{M}\in\mathbb{R}^{n\times d}$. The default value of $d$ is 30; the authors report good performance with 300 (https://dirty-cat.github.io/stable/generated/dirty_cat.MinHashEncoder.html).
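A minimal sketch of the idea, assuming salted MD5 digests as the $d$ hash functions (an assumption of ours, not the reference implementation):

```python
# Min-Hash a single level: hash every character n-gram with d salted
# hash functions and keep the minimum per function.
import hashlib

def ngrams(s: str, n: int = 3) -> set:
    return {s[i:i + n] for i in range(len(s) - n + 1)} or {s}

def minhash(level: str, d: int = 30, n: int = 3) -> list:
    return [min(int(hashlib.md5(f"{j}|{g}".encode()).hexdigest(), 16)
                for g in ngrams(level, n))
            for j in range(d)]

# Similar strings share n-grams, so their minima often coincide.
print(minhash("germany")[:3])
print(minhash("germany ")[:3])
```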
#### 7.1.2 Binning encoders

Pre-Binned Mean-Target partitions $\Omega_{A}$ into $B$ buckets $\{P_{b}\}_{b=1}^{B}$ to solve the optimization problem

Maximize $\displaystyle B$
subject to $\displaystyle\frac{1}{n}\sum\limits_{\omega\in P_{b}}\sum\limits_{i=1}^{n}\mathds{1}\left(\mathbf{A}_{i}=\omega\right)\geq\vartheta$ $\displaystyle\forall b\leq B$

where $\vartheta\in[0,1]$ is a user-defined threshold. Each bucket is then treated as a new level and encoded with Mean-Target, yielding an encoding $\mathbf{M}\in\mathbb{R}^{n}$.

Discretized Mean-Target partitions $MT(\Omega_{A})$ into intervals $\{I_{1},\dots,I_{B}\}$ of equal length. Letting $I(l)$ be the interval that contains $MT(\omega_{l})$ (that is, the average target associated with $\omega_{l}$), the encoding is $\mathbf{M}\in\mathbb{R}^{n}:\mathbf{M}_{i}=\inf I(l_{i})$. We experimented with $B=2,5,10$.

#### 7.1.3 Smoothing target encoders

Mean-Estimate [41]. Let $n_{l}=\sum_{i=1}^{n}\mathds{1}\left(\mathbf{A}_{i}=\omega_{l}\right)$ be the number of occurrences of $\omega_{l}$ in $\mathbf{A}$. Then

$\mathbf{M}_{i}=\frac{n_{l_{i}}MT(\omega_{l_{i}})+\frac{w}{n}\sum\limits_{h=1}^{n}y_{h}}{w+n_{l_{i}}}$

where $w$ is a user-defined weight. Common choices are $1$ and $10$.

GLMM [28] fits, for every $\omega_{l}\in\Omega_{A}$, a random intercept model $y_{i}=\beta_{l_{i}}+u_{l_{i}}+\varepsilon_{i}$, where $u_{l}\sim N(0,\tau^{2})$ and $\varepsilon_{i}\sim N(0,\sigma^{2})$. The encoding is $\mathbf{M}\in\mathbb{R}^{n}:\mathbf{M}_{i}=\beta_{l_{i}}$.

#### 7.1.4 Identifier, frequency-based, contrast, and simple target encoders

The descriptions are divided as follows: Table 6 covers identifier encoders, Table 7 frequency-based encoders, Table 8 contrast encoders, and Table 9 simple target encoders.

#### 7.1.5 Data-constraining encoders

CatBoost [41] uses a permutation $\pi$ of $\{1,\dots,n\}$ and encodes with $\mathbf{M}\in\mathbb{R}^{n}$ such that

$\mathbf{M}_{\pi(i)}=\sum\limits_{h\leq\pi(i)}y_{h}\mathds{1}\left(\mathbf{A}_{h}=\omega_{l_{\pi(i)}}\right)$

Cross-Validated MT [28] randomly partitions $\{1,\dots,n\}$ into $k$ folds of equal size. Let $D_{a_{i}}$ be the fold that contains $i$. Then, every fold is encoded with Mean-Target trained on the other $k-1$ folds:

$\mathbf{M}_{i}=\sum\limits_{h=1}^{n}\mathds{1}\left(h\notin D_{a_{i}}\right)\mathds{1}\left(\mathbf{A}_{h}=\omega_{l_{i}}\right)y_{h}$

Common values for $k$ are $2$, $5$, and $10$. Cross-Validated GLMM [28] works in a similar fashion as CVMT: it encodes each fold with GLMM trained on the other $k-1$ folds.

BlowUp Cross-Validated MT randomly partitions $\{1,\dots,n\}$ into $k$ folds $D_{1},\dots,D_{k}$ of roughly equal size. Then, it encodes with $\mathbf{M}\in\mathbb{R}^{n\times k}$ so that the $j$-th column is $\mathbf{A}$ encoded with Mean-Target trained on the $j$-th fold, yielding

$\mathbf{M}_{ij}=\sum\limits_{h=1}^{n}\mathds{1}\left(h\in D_{j}\right)\mathds{1}\left(\mathbf{A}_{h}=\omega_{l_{i}}\right)y_{h}$

We experimented with $k=2,5,10$. BlowUp Cross-Validated GLMM is analogous to BUMT: the $j$-th column of its encoding $\mathbf{M}\in\mathbb{R}^{n\times k}$ is $\mathbf{A}$ encoded with GLMM trained on the $j$-th fold.
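A minimal sketch of CVMT with pandas and scikit-learn; the fallback to the global target mean for levels unseen in the training folds is our implementation choice, which the formula leaves open.

```python
# Cross-validated Mean-Target: each fold is encoded with the level means
# of the target computed on the other k-1 folds.
import pandas as pd
from sklearn.model_selection import KFold

def cv_mean_target(A: pd.Series, y: pd.Series, k: int = 5) -> pd.Series:
    out = pd.Series(index=A.index, dtype=float)
    global_mean = y.mean()                       # fallback for unseen levels
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                     random_state=0).split(A):
        means = y.iloc[train_idx].groupby(A.iloc[train_idx]).mean()
        out.iloc[test_idx] = (A.iloc[test_idx].map(means)
                              .fillna(global_mean).to_numpy())
    return out

A = pd.Series(["a", "b", "a", "b", "a", "c", "b", "a"])
y = pd.Series([1, 0, 1, 1, 0, 1, 0, 1])
print(cv_mean_target(A, y, k=2))
```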
### 7.2 Experimental design

This section provides additional details about the datasets and aggregation strategies we discussed in Section 4. The notation we use in this section is summarized in Table 10.

#### 7.2.1 Datasets

Table 11 lists the datasets used in our experiments. The columns are as follows: ID is the OpenML identifier; $n$ is the number of rows; $d$ is the number of attributes; $d_{cat}$ is the number of categorical attributes; $\max\lvert\Omega_{A}\rvert$ is the maximum categorical attribute cardinality; and the ‘‘ft’’ flag denotes datasets used for full tuning (cf. Section 4.4).

Table 10: Notation for Section 7.2.2.

Symbol | Meaning
---|---
$\bot$ | missing evaluation or rank
$\mathds{1}$ | indicator function
$E_{i}$ | encoder, as in Table 5
$\Phi_{j}:E_{i}\mapsto\mathbb{R}\cup\{\bot\}$ | average cross-validated quality on the $j$-th dataset, all other factors fixed
$\Phi^{\max}_{j}=\max_{i=1,\dots,n}\left\{\Phi_{j}\left(E_{i}\right)\right\}$ | best quality on the $j$-th dataset, all other factors fixed
$\Phi^{\min}_{j}=\min_{i=1,\dots,n}\left\{\Phi_{j}\left(E_{i}\right)\right\}$ | worst quality on the $j$-th dataset, all other factors fixed
$r_{j}:E\mapsto\mathbb{N}_{0}\cup\left\{\bot\right\}$ | ranking obtained from $\Phi_{j}$
$\mathbf{R}^{j}=\left(\mathds{1}\left(r_{j}(E_{i})\leq r_{j}(E_{h})\right)\right)_{i,h=1}^{n}\in\{0,1\}^{n\times n}$ | adjacency matrix of $r_{j}$
$c:E\mapsto\mathbb{N}_{0}\cup\left\{\bot\right\}$ | consensus ranking
$\mathbf{C}\in\{0,1\}^{n\times n}$ | adjacency matrix of $c$
$i,h,k\in\{1,\dots,n\}$ | index of encoders
$j\in\{1,\dots,m\}$ | index of objects to be aggregated

Table 11: Datasets used in the study.
Name | Ref. | ID | $n$ | $d$ | $d_{cat}$ | $\max\lvert\Omega_{A}\rvert$ | ft
---|---|---|---|---|---|---|---
ada_prior | | 1037 | 4562 | 14 | 7 | 40 | ✓
adult | | 1590 | 48842 | 14 | 7 | 42 | ✓
airlines | | 1169 | 539383 | 7 | 4 | 293 |
amazon_employee_access | | 4135 | 32769 | 9 | 9 | 7518 |
Agrawal1 | | 1235 | 1000000 | 9 | 3 | 20 |
Australian | [33] | 40981 | 690 | 14 | 4 | 14 | ✓
bank-marketing | [24] | 1461 | 45211 | 16 | 6 | 12 |
blogger | [13] | 1463 | 100 | 5 | 3 | 5 | ✓
Census-Income-KDD | | 42750 | 199523 | 41 | 27 | 51 |
credit-approval | [32] | 29 | 690 | 15 | 6 | 15 | ✓
credit-g | [17] | 31 | 1000 | 20 | 11 | 10 | ✓
cylinder-bands | [12] | 6332 | 540 | 37 | 17 | 71 | ✓
dresses-sales | [40] | 23381 | 500 | 12 | 11 | 25 | ✓
heart-h | [19] | 51 | 294 | 13 | 6 | 4 |
ibm-employee-attrition | | 43896 | 1470 | 34 | 5 | 9 | ✓
ibm-employee-performance | | 43897 | 1470 | 33 | 5 | 9 | ✓
irish | | 451 | 500 | 5 | 2 | 11 | ✓
jungle_chess_2pcs…_elephant | [35] | 40999 | 2351 | 46 | 2 | 3 | ✓
jungle_chess_2pcs…_lion | [35] | 41007 | 2352 | 46 | 2 | 3 | ✓
jungle_chess_2pcs…_rat | [35] | 41005 | 3660 | 46 | 2 | 3 | ✓
kdd_internet_usage | | 981 | 10108 | 68 | 20 | 129 | ✓
KDDCup09_appetency | | 1111 | 50000 | 230 | 33 | 15416 |
KDDCup09_churn | | 1112 | 50000 | 230 | 33 | 15416 |
KDDCup09_upselling | | 1114 | 50000 | 230 | 33 | 15416 |
KDD98 | | 42343 | 82318 | 477 | 107 | 18543 |
kr-vs-kp | [37] | 3 | 3196 | 36 | 1 | 3 | ✓
kick | | 41162 | 72983 | 32 | 17 | 1063 |
law-school-admission-bianry | | 43890 | 20800 | 11 | 1 | 6 |
molecular-biology_promoters | [16] | 956 | 106 | 57 | 56 | 4 | ✓
monks-problems-1 | [43] | 333 | 556 | 6 | 4 | 4 | ✓
monks-problems-2 | [43] | 334 | 601 | 6 | 4 | 4 | ✓
mv | | 881 | 40768 | 10 | 1 | 3 | ✓
mushroom | [25] | 43922 | 8124 | 22 | 16 | 12 | ✓
national-longitudinal-survey-binary | [6] | 43892 | 4908 | 16 | 4 | 29 | ✓
nomao | [3] | 1486 | 34465 | 118 | 27 | 3 |
nursery | | 959 | 12960 | 8 | 7 | 5 |
open_payments | | 42738 | 73558 | 5 | 4 | 4374 |
porto-seguro | | 41224 | 595212 | 57 | 13 | 104 |
profb | | 470 | 672 | 9 | 3 | 28 | ✓
sick | [34] | 38 | 3772 | 29 | 2 | 5 | ✓
sf-police-incidents | | 42344 | 538638 | 6 | 5 | 21838 |
SpeedDating | | 40536 | 8378 | 120 | 58 | 260 | ✓
students_scores | | 43098 | 1000 | 7 | 2 | 6 | ✓
telco-customer-churn | | 42178 | 7043 | 19 | 11 | 6531 |
thoracic-surgery | [48] | 1506 | 470 | 16 | 3 | 7 | ✓
tic-tac-toe | [1] | 50 | 958 | 9 | 9 | 3 | ✓
Titanic | | 40945 | 1309 | 13 | 6 | 1307 |
vote | [6] | 56 | 435 | 16 | 16 | 3 | ✓
wholesale-customers | | 1511 | 440 | 8 | 1 | 3 | ✓
WMO-Hurricane-Survival-Dataset | | 43607 | 5021 | 22 | 21 | 4173 |

#### 7.2.2 Aggregation strategies

This section presents the mathematical formulations of the aggregation strategies. As Section 4.5 explains, the results are aggregated across datasets, while keeping the other factors (ML model, tuning strategy, and quality metric) fixed.

##### Heuristics.

Heuristics aggregate by ranking encoders according to some score. _Increasing_ heuristics assign the best rank to the encoder with the highest score, while the non-increasing ones assign the best rank to the encoders with the lowest score. Table 12 contains the respective formulas; a sketch of four of them follows. Any missing evaluations ($\bot$) are ignored during the computation.
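A minimal sketch of four of the heuristics, computed over a hypothetical quality matrix `Phi` (rows: datasets, columns: encoders); missing evaluations are NaN and are ignored, as in the text.

```python
# Mean rank (R-M), mean quality (Q-M), rescaled mean quality (Q-RM), and
# theta-best quality (Q-Th) from Table 12; values are illustrative.
import numpy as np
import pandas as pd

Phi = pd.DataFrame(
    {"One-Hot": [.81, .77, .90], "Sum": [.80, .79, np.nan],
     "Drop": [.60, .55, .58]},
    index=["adult", "nursery", "sick"])

ranks = Phi.rank(axis=1, ascending=False)        # rank 1 = best per dataset
mean_rank = ranks.mean()                                             # R-M
mean_quality = Phi.mean()                                            # Q-M
rescaled = (Phi.sub(Phi.min(axis=1), axis=0)
            .div(Phi.max(axis=1) - Phi.min(axis=1), axis=0)).mean()  # Q-RM
theta_best = Phi.ge(Phi.max(axis=1).mul(0.95), axis=0).sum()         # Q-Th

print(pd.DataFrame({"R-M": mean_rank, "Q-M": mean_quality,
                    "Q-RM": rescaled, "Q-Th0.95": theta_best}))
```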
##### Friedman-Nemenyi tests.

The Friedman test is used to rule out the null hypothesis that all encoders have, on average, the same rank. The Friedman statistic adjusted for ties [38, 18] is

$T=\frac{\left(m-1\right)\left(S_{t}-C\right)}{S_{r}-C}$

where $S_{r}=\sum_{i=1}^{n}\sum_{j=1}^{m}r_{j}(E_{i})^{2}$, $S_{t}=\frac{1}{m}\sum_{i=1}^{n}\left(\sum_{j=1}^{m}r_{j}(E_{i})\right)^{2}$, and $C=\frac{1}{4}mn(n+1)^{2}$. Under the null hypothesis that all encoders have the same rank, $T$ approximately follows an $F$-distribution with $n-1$ and $(m-1)(n-1)$ degrees of freedom. If the Friedman hypothesis is rejected, one can compare all pairs of encoders with $n(n-1)/2$ Nemenyi post-hoc tests [10]. Nemenyi tests apply a correction to control the error from testing multiple hypotheses. Two encoders $E_{1}$ and $E_{2}$ are significantly different if

$\frac{1}{m}\sum\limits_{j=1}^{m}\left(r_{j}(E_{1})-r_{j}(E_{2})\right)\geq q_{\alpha}\sqrt{\frac{n(n+1)}{6m}}$

where $\frac{1}{\sqrt{2}}q_{\alpha}$ is the critical value based on a Studentized range statistic [10].

##### Kemeny-Young aggregation. [21, 47, 45, 23]

The consensus’s adjacency matrix $\mathbf{C}$ is a solution to the mixed-integer linear problem

Maximize $\displaystyle\sum\limits_{i,h}\mathbf{S}_{ih}(2\mathbf{C}_{ih}-1)$
subject to $\displaystyle\mathbf{C}_{ih}-\mathbf{C}_{kh}-\mathbf{C}_{ik}\geq-1$ $\displaystyle\forall i\neq h\neq k\neq i$
$\displaystyle\mathbf{C}_{ih}+\mathbf{C}_{hi}\geq 1$ $\displaystyle\forall i<h$
$\displaystyle\mathbf{C}_{ih}\in\{0,1\}$ $\displaystyle\forall i,h$

where $\mathbf{S}=\left(\sum_{j}\frac{\mathbf{R}^{j}_{ih}}{n_{j}(n_{j}-1)}\right)_{i,h=1}^{n}$ is a cost matrix and $n_{j}=\sum_{i}\mathds{1}\left(r_{j}(E_{i})\neq\bot\right)$ is the number of encoders with an evaluation on dataset $j$. This formulation accounts for ties and missing ranks.

Table 12: Scores of heuristics.

| Score of $E$ | Increasing
---|---|---
Mean rank | $\frac{1}{m}\sum_{j}r_{j}(E)$ |
Median rank | median$\left(\left\{r_{j}\left(E\right)\right\}_{j=1}^{m}\right)$ |
Rank best | $\sum\limits_{j=1}^{m}\mathds{1}\left(r_{j}\left(E\right)=1\right)$ | ✓
Rank worst | $\sum\limits_{j=1}^{m}\mathds{1}\left(r_{j}\left(E\right)=\max\limits_{i=1,\dots,n}r_{j}\left(E_{i}\right)\right)$ |
Mean quality | $\frac{1}{m}\sum_{j}\Phi_{j}(E)$ | ✓
Median quality | median$\left(\left\{\Phi_{j}\left(E\right)\right\}_{j=1}^{m}\right)$ | ✓
Rescaled mean quality | $\frac{1}{m}\sum\limits_{j=1}^{m}\frac{\Phi_{j}(E)-\Phi^{\min}_{j}}{\Phi^{\max}_{j}-\Phi^{\min}_{j}}$ | ✓
$\vartheta$-best quality | $\sum\limits_{j=1}^{m}\mathds{1}\left(\Phi_{j}\left(E\right)\geq\vartheta\cdot\Phi^{\max}_{j}\right)$ | ✓

### 7.3 Results

This section complements Section 5.

#### 7.3.1 Missing evaluations

We successfully completed 61812 runs out of 64000, one per combination of encoder, dataset, ML model, tuning strategy, and quality metric. The 2188 failed evaluations are equally distributed among the encoders, while tuning was the greatest influencing factor. Indeed, there were 4303 missing runs with no tuning, 1152 with model tuning, and 32 with full tuning. This is likely due to the bigger datasets used in no tuning and model tuning, cf. Section 4.4 and Table 11. The total runtime for successful evaluations was 108 days.

#### 7.3.2 Run time

We computed two scores for the runtimes of encoders. The first is the time necessary to encode the dataset. The outcome, displayed in Figure 4(a), is that GLMM-based encoders are the slowest. This happens because the bottleneck of these encoders is the fitting of the random intercept model, a problem that we could only partially alleviate with our custom implementation.
As expected, the ML model has no influence on the encoding runtime. The second score is the time necessary to tune the model-encoder pipeline, where each tuning step requires encoding the dataset and then fitting a model. Figure 4(b) tells a similar story as for encoding, with GLMM-based encoders being the slowest. The other encoders, apart from Drop and Mean-Target, all show a similar runtime.

Figure 4: Runtime of (a) encoders and (b) full tuning pipelines.

#### 7.3.3 Replicability

This section extends the replicability analysis of Section 5.2, showing the behavior of different quality metrics in Figure 5(a) and aggregation strategies in Figure 5(b). The quality metrics behave similarly for $\rho$-replicability. The notable exception is the AUC in model tuning, which is significantly better than the other metrics. Regarding $J$-replicability, instead, accuracy is clearly the poorest choice. This hints that accuracy cannot discern the best encoder as well as the other metrics do and that it is more sensitive to the choice of dataset. Among the aggregation strategies, rank best (R-B) shows higher replicability. A possible explanation is that R-B produces consensus rankings with many encoders tied as the best ones and few tiers in general.

Figure 5: Average similarity of consensus rankings from disjoint subsets of datasets, conditional on (a) quality metric and (b) aggregation strategy.

#### 7.3.4 Comparing encoders

This section expands on Section 5.3 and portrays in Figure 6 the distribution of ranks of encoders. The best encoders are evident for LogReg (Sum, OH, WoE, Bin) and k-NN (WoE), confirmed by Nemenyi tests at $0.05$ significance.

Figure 6: Ranks of encoders for (a) DT, (b) LGBM, (c) SVM, (d) k-NN, (e) LogReg, and (f) all models.

#### 7.3.5 Effect of tuning

This section investigates whether tuning leads to improvements in pipeline performance. The tuning strategies are described in Section 4.4. For a pair of tuning strategies, we consider the factors they share and subtract the performance of the pipelines. Figure 7 shows that full tuning is, in general, advantageous over no tuning and slightly better than model tuning.

Figure 7: Performance gain of full tuning over no tuning and model tuning, conditional on (a) ML model, (b) scoring metric, and (c, d) encoder, comparing full tuning against no tuning and against model tuning, respectively.
# The aesthetics of cyber security: How do users perceive them?

Mark Quinlan <EMAIL_ADDRESS> Aaron Ceross <EMAIL_ADDRESS> Andrew Simpson <EMAIL_ADDRESS>

Department of Computer Science, University of Oxford, Wolfson Building, Parks Rd, Oxford, OX1 3QD, Oxfordshire, United Kingdom

###### Abstract

While specific aesthetic philosophies may differ across cultures, all human societies have used aesthetics to support communication and learning. Within the fields of usability and usable security, aesthetics have been deployed for such diverse purposes as enhancing students’ e-learning experiences and optimising user interface design. In this paper, we seek to understand how individual users perceive the visual assets that accompany cyber security information, and how these visual assets and user perceptions underwrite a distinct _cyber security aesthetic_. We ask, (1) _What constitutes cyber security aesthetics, from the perspective of an individual user?_ and (2) _How might these aesthetics affect users’ perceived self-efficacy as they informally learn cyber security precepts?_ To begin answering these questions, we compile an image-set from cyber security web articles and analyse the distinct visual properties and sentiments of these images.

###### keywords: cyber security, aesthetics, visual learning

## 1 Introduction

Visual media, like illustrations lin2018impact and diagrams hattwig2013visual , have accompanied text since the very first written documents nichols1858illustrations , providing clarification, communicating distinct emotions or opinions, and serving more subversive ends like propaganda marland2012political ; cooper2008war ; meyer2008aesthetics . Today, digital technologies have introduced new types and ways of accessing visual media while rapidly integrating media consumption into the daily lives of large segments of the global population david2010impact . Still, contemporary online news articles and blog posts are often accompanied by visual media that serve many of the same communicative purposes as those in the earliest human documents mitchell2005just .

The fields of usable security and cyber security have used visual media to support learning. For example, user interface designers used (and later abandoned) skeuomorphism to generate easily identifiable visual objects that could help users to navigate new interfaces page2014skeuomorphism ; curtis2015rhetoric , and they frequently use colours to draw attention to salient information and features within e-learning platforms tharangie2008kansei ; reyna2013importance . Meanwhile, usable security experts explore how visual cues can aid users outside of formal learning environments — that is, how and when visual media can facilitate informal learning. Informal learning is the primary way in which adults learn about the world around them malcolm2003interrelationships ; ollis2011learning , and it typically occurs when individuals actively choose to seek out new ideas and advice.

Understanding how visual media work to support communication and learning is a complex task, and it is divided amongst many scholarly disciplines. First and foremost amongst these is the ancient philosophical branch of aesthetics, which concerns itself with the nature of perception, taste, and the values of sensory qualities (e.g., beauty).
In the context of this paper, aesthetics entail the perceptual logic that allows individuals to instinctively analyse meaning and appraise quality / truth when consuming visual media — whether as stand-alone objects, as part of a user interface, or as an accompaniment to text. By extension, learned aesthetic preferences may influence how individuals navigate information or environments flavian2009heuristic ; joshi2011aesthetics ; shires2020cyber ; carroll2021usable , and they may play a significant role in informal learning.

There is a body of literature within the computer science usable security field that looks at users’ aesthetic perceptions of technology. For example, work by Fogg et al. fogg2009behavior found that almost half of all users used aesthetic judgements to infer the credibility of a site’s content. Compounding these results, Robinson et al. robinson2020digital found that individuals use aesthetic information to make rapid judgements about content, and Alsudani and Casey alsudani2009effect reported that these judgements occur within about 3.5 seconds. In the sub-area of cyber security aesthetics, ma2006cyber outlined various cyber security visualisation techniques, and shires2020cyber looked at the adaptation of neo-noir aesthetics in cyber security visual media. There is also work within the field of digital exhibits bernal20201 and on the transfer of security aesthetics into cyber security ghertner2020futureproof . Taken together, these precedents suggest that cyber security aesthetics can serve as a pedagogical tool, helping users to parse information and act upon it. Thus, improving our understanding of these aesthetics could help us to improve the efficacy of security advice dissemination.

In this paper, we explore the following research aims:

1. _To understand what cyber security aesthetics consist of, from the perspective of an individual user._
2. _To provide an explorative discussion of how these aesthetics may affect users’ perceived self-efficacy as they informally learn cyber security precepts._

To do so, we report on how we assembled an image-set of cyber security images that reflects what a user typically sees within an informal learning environment. The corpus spans 1,027 images and is derived from English-language news and online magazine articles from the United States, Canada, and the United Kingdom. The images are organised into several classes, which we derived by extracting visual information from the raw images and mapping it to semantically meaningful keywords; we then performed a colour similarity analysis on each class.

The remainder of the paper is organised as follows. In Section 2 we provide the background to, and the motivation for, the work described in this paper. In addition, we define some of the terms of interest. In Section 3 we describe the process used to create the image-set, as well as the data cleaning process we used to develop the image-set into usable images. In Sections 4 and 5 we present and discuss our results, placing them in a broader context. Section 6 presents potential research directions for the broader research community. Finally, Section 7 concludes the paper.

## 2 Background and Motivation

In this section we discuss the background to, and the motivation for, the work described in this paper. As aesthetics is such a broad and sometimes ambiguous term dewey1934art , we begin by providing an overview of aesthetics research in Sections 2.1 and 2.2, establishing its relevance for our research aims.
We then consider the potential efficacy of aesthetics for cyber security in Section 2.3. Section 2.4 then returns to our overarching research aims.

### 2.1 Aesthetics and meaning

In Ancient Greece, aesthetics were first described as a ‘sensation’, or the ability to interact with external stimuli through our bodily senses beardsley1975aesthetics . Later, Kant Kant1892-KANTCO-3 espoused the importance of aesthetics for all human domains, arguing that, without its sense-making power, data would simply remain chaotic, lacking meaning and structure. (This could mean that data acquires a certain aesthetic once created (mathematical aesthetics Kant1892-KANTCO-3 ), or that, to have an aesthetic perspective, one must assemble data fragments into something meaningful regardless of outcome Kant1892-KANTCO-3 .) However, if aesthetics help to render a shared sensible reality, as asserted by Kant Kant1892-KANTCO-3 ; zander2016intuition , then aesthetic perception must be universal, narrative, and standardised. This brings us into the sphere of semiotics — the study of signs, symbols, and symbolisation, or of the devices and practices that help to stabilise meanings.

For the purposes of this paper, we define aesthetics as the perceptual logic that allows individuals to instinctively analyse meaning in visual media, and semiotics as the conventionalised meanings arising from this perceptual process. In other words, where aesthetic objects exist solely for their own purposes Kant1892-KANTCO-3 , informing perceptible meaning walsh1974aesthetic ; wissenburg2012aesthetic without requiring a specific meaning to be understood, semiotic objects contain explicitly built-in meanings, whether skeuomorphic or otherwise carroll2021usable ; barbosa2021semiotics , and can enhance the meaning of the words they are associated with (for example, as a complement to a body of text dewey1934art ).

Both aesthetic and semiotic perspectives remain relevant in contemporary philosophical discourse, as well as in the practice of cyber security. For example, doi:10.1080/10350330.2019.1587843 describe how viewers attempt to derive meaning from key referent objects contained within an image. Insofar as these objects are universally understood, they may yield what Rancière calls ‘a shared sense of perception’ sayers2005jacques . Furthermore, in the context of cyber aesthetics, most interactions at the interface level are directed by symbols and imagery such as icons, pointers, image thumbnails — which themselves can contain semiotic objects — and so forth. All of these objects help to convey a system logic to end users, thus enhancing accuracy and intuitiveness rudner1951semiotic .

Of course, in a nascent field like cyber security, aesthetic systems have not necessarily been formalised into stable semiotic resources. As such, we must not preemptively constrain our analysis to specific image contents, addressing instead the full spectrum of aesthetic objects relevant to cyber security communication. In this case, we define these to be (visible) digital image-objects that may themselves contain semiotically legible signs, and which have been added as an adornment or supplement to relevant cyber security literature. Although this definition presents some limitations (discussed in due course), it allows us to account for the narrative functions of aesthetics, as well as for its use in learning.
### 2.2 Aesthetics and learning Because we are primarily interested in how users may interpret cyber security aesthetics in an informal learning context, our understanding of aesthetics is informed by contributions such as that of chatterjee2014aesthetic , which draws from the humanities and sciences to illustrate how aesthetics influence the choices humans make in their given domains of activity. Earlier work by carper1975fundamental went further, identifying aesthetics as one of four distinct structures human beings use when developing knowledge, the other three being personal, empirical, and ethical. For Carper, the ‘knowing’ of aesthetics takes the other three structures and enhances them into a new understanding, creating meaning from otherwise abstract works carper1975fundamental . Building on Ancient Greek notions of aesthetics, keenan2016use proposed the concept of ‘aesthetic knowledge’, wherein sensory experiences form embedded relationships with phenomena such as colour and shape. According to Keenan, when prior aesthetic knowledge is combined with information (or, in our case, images) from user interfaces and other elements within a digital experience, users can associate prior meanings with this new information and thereby generate unanticipated interactions keenan2016use . Taken as a form of knowledge, aesthetic design can enhance users’ ability to make effective decisions based on a mixture of intuition and explicitly learnt knowledge carroll2010designing . We know that people often make decisions based on intuition rather than analytical inference, ‘sensing’ a correct choice without being able to offer a logical explanation for it zander2016intuition ; we may also expect that aesthetic objects can serve to stimulate this intuition. For example, within human–computer interaction, supplementary visual assets that convey a feeling of uncertainty or ambiguity can help individuals to comprehend uncertainty even when it is not explicitly communicated in words fernandes2018uncertainty . It follows that aesthetic knowledge will impact knowledge acquisition in any given field, including fields where many users rely on informally learnt knowledge (such as cyber security rader2012stories ) or cases where decision makers do not have prior experience with the given situation zander2016intuition . ### 2.3 Aesthetics and self-efficacy Clearly, the way in which we are presented with information visually impacts our understanding of, and subsequent decision making towards, a particular topic. As such, usable security research, user interface design, and cognitive psychology theory have sought to better understand how and why users make aesthetic decisions, and how aesthetic attributes can be designed to achieve certain ends. For instance, some scholars studying the ethics of technological development have proposed tools to help designers build fairness and transparency into digital libraries and interface designs through deliberate aesthetic planning barbosa2021semiotics . Other researchers have explored how particular aesthetic / semiotic interpretations of user interfaces can enable users to complete a given task more efficiently carroll2021usable . This latter effect is particularly interesting in the context of cyber security, given the brunt of responsibility that individual users have to bear for protecting themselves, their devices, and their networks online. 
One important concept implicated in users’ decision making is self-efficacy — a generative capability to organise one’s skill-sets and beliefs towards a desired outcome bandura1999self . According to self-efficacy theory, individual users implicitly judge their own ability to cope with a given situation, thus developing self-efficacy beliefs for a specific domain. These beliefs inform whether individual users will initiate certain behaviours and carry them through to successful outcomes maddux1995self ; bandura1999self . Furthermore, self-efficacy is closely related to motivation: the greater the challenge a user faces, the more self-efficacy they will need to sustain their motivation bandura1999self ; stumpf1987self . Because cyber security is perceived to be both important and complex, users tend to exhibit limited self-efficacy in this domain (as explained by Herley and explored by halevi2016cultural through psychological and cultural means). Many people develop some degree of self-efficacy through their identification with role models: people similar to themselves who display, and thereby make accessible, certain aspirational attitudes, behaviours, or capacities ajzen1991theory . For example, bosma2012entrepreneurship observed that role models in the media can encourage entrepreneurship amongst their viewers. Applying these insights to cyber security aesthetics, we may suggest that researchers can utilise aesthetics to enhance users’ self-efficacy by providing models, structuring and directing behaviour towards goal setting, and measuring progress towards these goals carroll2010designing .

### 2.4 Our expectations for this exploration

To summarise, we expect that users acquire an aesthetic literacy when they are repeatedly exposed to domain-specific content, and that this literacy helps them to navigate and derive meaning from future content. As per our first research aim, we aspire to understand the aesthetics (and thus aesthetic literacies) operative in the domain of cyber security, and so we will imaginatively replicate the process whereby users develop these literacies — that is, repeated exposure to the aesthetic objects of cyber security — by compiling an image-set of cyber security’s primary aesthetic objects, allowing us to appraise and compare them at once. We have defined these objects to be images that may themselves contain legible signs, and which are typically part of a larger piece of content like an online article. As per our second research aim, we will interpret the resulting aesthetics in terms of their likely effects on users’ self-efficacy.

## 3 Methodology

In this section we discuss the research design of our study, which proceeded in five steps:

1. developing the image-scraping tool in Python to extract images from structured data sources;
2. configuring a viable search methodology based on common cyber security terminology;
3. cleaning the initial pool of images to yield a usable image-set;
4. preparing the labels and resources needed for computational image classification; and
5. performing colour analysis to confirm the internal consistency of each image class.

### 3.1 Developing the image-scraper

Web scraping is a popular digital research technique that allows researchers to automatically capture freely available online data — that is, data that does not require privileged access marres2013scraping — via the use of scrapers. Our image-scraper is a simple tool designed to capture images from pages selected by our search methodology (discussed in Section 3.2); a minimal sketch is given below. Rather than incorporate additional system logic to ensure that all images were viable candidates for analysis, we chose to refine the image-set through subsequent data cleaning (discussed in Section 3.3).
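The following is a minimal sketch of such a scraper, not our exact tool; the page handling, filters, and output layout are illustrative assumptions.

```python
# Fetch a page, collect <img> sources, and save those that look usable.
import pathlib
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def scrape_images(page_url: str, out_dir: str = "images") -> None:
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    pathlib.Path(out_dir).mkdir(exist_ok=True)
    for i, img in enumerate(soup.find_all("img")):
        src = img.get("src")
        if not src:
            continue
        url = urljoin(page_url, src)
        if not url.lower().endswith((".jpg", ".jpeg", ".png")):
            continue  # the study keeps only .jpg / .png images
        data = requests.get(url, timeout=30).content
        name = f"img_{i:04d}{pathlib.Path(url).suffix}"
        pathlib.Path(out_dir, name).write_bytes(data)
```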
### 3.2 Deriving relevant images from search terms

To establish the list of search terms needed to guide the image-scraper, we followed the precedent of schatz2017towards , who used Google Trends to automatically collect real search terms employed by the target audience. (schatz2017towards sought to derive a more precise definition of security, and so they collected the terms that individuals used to search for security content.) This focus on user-centred definitions excluded the possibility of replicating the work of humayun2020cyber , who looked at primary studies undertaken within academia. Instead, we followed the Systematic Mapping Study protocol presented by kosar2016protocol . We defined a set of base search terms (for example, ‘cybersecurity’ OR ‘cyber’ AND ‘security’) and then added search terms derived from Google Trends (online OR advice OR protection OR protect OR prevent OR preventative OR tips OR email OR social network OR password OR hack OR hacked OR hacking). All search terms were technology-agnostic — they did not include explicit references to specific products or services. The image-scraper then returned all images that corresponded with content that included these terms within the title or body text. Though not exhaustive, this strategy yields an image-set that adequately represents operative definitions of cyber security, as actualised by users. There is, of course, scope for future improvement.

### 3.3 Cleaning the data

The aforementioned search strategy yielded an initial image-set of 4,784 images, which we then subjected to an initial data cleaning based on the following inclusion / exclusion criteria (to enable consistency):

* • The image must be derived from a news or blog article that directly addresses at least one aspect of cyber security and / or explicitly contains our search terminology. Blog articles were limited to tutorials, editorials, tool demonstrations, and discussions of technical reports. Due to the nature of the assessment and the search methodology, we only retrieved images from English-language sources.
* • The image must be accessible and not hidden behind a paywall or other kind of lockout mechanism, as these obstacles restrict the amount of text that can be retrieved, making it difficult to explain why some images were included in a given article or blog post (that is, the role that the images serve in relation to the text).
* • The image cannot be a corporate logo or advertisement (like the lead slide of a corporate presentation).
* • The image must be at least 360x640 pixels for ease of processing.
* • The image must be in either .jpg or .png format.

Applying these criteria, we reduced the initial pool of 4,784 images to 3,757 usable images. We then counted and removed all duplicates (counting the duplicates allowed us to assess the extent of duplication within cyber security aesthetics) and down-sampled the remaining images to a standard pixel resolution; a sketch of these steps follows. This yielded a final image-set of 1,027 individual images, which we used for analysis. The image-set can be found at https://huggingface.co/datasets/Quinm101/cyberaesthetics.
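A minimal sketch of the counting, de-duplication, and down-sampling steps; the MD5 content hash as the duplicate test is our assumption, as the paper does not prescribe one.

```python
# Assumes a directory of image files; returns the number of duplicates.
import hashlib
import pathlib

from PIL import Image

def clean(in_dir: str, out_dir: str, size=(640, 360)) -> int:
    seen, duplicates = set(), 0
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for path in sorted(pathlib.Path(in_dir).glob("*")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            duplicates += 1      # counted to assess extent of duplication
            continue
        seen.add(digest)
        with Image.open(path) as im:
            if im.width < 640 or im.height < 360:
                continue         # below the minimum-resolution threshold
            im.resize(size).save(out / path.name)
    return duplicates
```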
Figure 1: The colour distance charts for our image class heat-maps. Similarity decreases as the x-axis moves from 0 (blue) to 1 (pink).

### 3.4 Classifying the images

The next step in the process involved feature extraction — a form of quantitative image classification wherein categorising labels are assigned to images based on specific extracted features. For our image-set, we chose to begin with object recognition to identify any potential semiotic objects (or signs) before moving on to semantic categorisation (categorising emotion or other subjective features).

We utilised a variation of the Bag-of-Words (BoW) model to provide human-assigned classifications for our image-set. The BoW model is often used in situations where images require text categorisation but word order is not particularly important. We based our model on work by csurka2004visual , selecting three knowledgeable cyber security researchers to manually locate dominant interest points in individual images (recreating feature extraction) and derive labels that represent these interest points. This solution presented several advantages over an automated set-up, as the expert knowledge allowed for more concise labelling, the ability to label occluded objects that would have been missed by automated methods, and the ability to construct a clearly defined codebook for our classification labels based on prior expertise. However, the contextual awareness these experts brought to the labelling exercise may have introduced some biases. This limitation could be mitigated in future studies by recruiting a wider range of annotators.

### 3.5 Measuring image similarity through colour

To confirm the internal consistency of each image class derived from our classification process, we utilised Weller and Westneat’s weller2019quantitative quantitative, colour-based method for measuring image similarity. This involved transforming each image’s pixels into 3D coordinates to produce a multidimensional colour histogram for each image, then using the earth mover’s distance measure rubner2000earth to compute the pairwise distances between histograms; a simplified sketch follows. We opted for this method over contour-recognition for object classification (as used by gupta2019nose ; waldchen2018plant ) because we had already classified our images according to their dominant features. Colour similarity measures also allow us to more confidently make qualitative assessments relevant to our research aims. Colour similarity heat-maps for each class are shown later in this paper and can be interpreted through Figure 1. Each heat-map represents the relationship any given image has to the other images within its class, with blue cell colours indicating greater similarity and red cell colours indicating lesser similarity.
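A simplified sketch of the colour-distance computation: the full method compares multidimensional histograms under the earth mover's distance rubner2000earth , whereas here, as an approximation of ours, we compare per-channel 1D histograms with scipy's Wasserstein distance and average across the three channels.

```python
# Per-channel colour histograms and an approximate colour distance.
import numpy as np
from PIL import Image
from scipy.stats import wasserstein_distance

def channel_histograms(path: str, bins: int = 32) -> np.ndarray:
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    return np.stack([np.histogram(pixels[:, c], bins=bins, range=(0, 255),
                                  density=True)[0] for c in range(3)])

def colour_distance(path_a: str, path_b: str, bins: int = 32) -> float:
    ha, hb = channel_histograms(path_a, bins), channel_histograms(path_b, bins)
    centres = np.arange(bins)
    return float(np.mean([wasserstein_distance(centres, centres, ha[c], hb[c])
                          for c in range(3)]))
```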
## 4 Results

Using the process described in Sections 3.2 and 3.3, we compiled an image-set that covered a wide swathe of cyber security topics and their associated aesthetics. Small selections of images from each class are shown in Figures 2 and 3. Through the process described in Sections 3.4 and 3.5, we identified 32 distinct and internally consistent image classes in the image-set. These ranged from abstract interpretations of networked security to imagery depicting the binary view of cyber security as an eternal battle between malicious actors and their victims. However, because most of the images (80.6%) were concentrated in just ten major classes (as detailed in Table 1), we restrict our discussion to these classes. We further group these classes into four broad (but not mutually exclusive) categories: Objects, People, Places, and Others.

Table 1: The top ten classes in our image-set.

| Class | Description | No. |
|---|---|---|
| 1 | Physical traditional security semiotics (such as lock, key, or shield) | 290 |
| 2 | Hackerman archetype | 88 |
| 3 | Non-malicious users of cyberspace | 81 |
| 4 | Digital superpositions over cityscapes or skylines | 72 |
| 5 | Physical-digital hybrid workspaces | 69 |
| 6 | Abstract patterns (such as grids) | 64 |
| 7 | Textual content (such as explicit warnings) | 61 |
| 8 | Wall of code (incoherent or standard programming language) | 61 |
| 9 | Disembodied anatomy interacting with a physical device or digital overlay | 42 |
| 10 | Non-security-related skeuomorphism | 32 |

Figure 2: A random selection of images taken from classes 1-5. Figure 3: A random selection of images taken from classes 6-10.

### 4.1 Colour

As one would expect, colour features heavily across all of the classes in our image-set. Whereas the heat-maps address similarity potential across colour and texture (where radical differences may indicate domain-shift within a class samek2016evaluating ), this section concerns itself with the qualitative use of colour across the complete image-set. Although many of the abstract forms seen in Class 6 utilise hues running the gamut from greens to blues, almost always contrasted against dark backgrounds, it is clear from the complete image-set that no 'universal' definition or convention on the usage of colour exists within cyber security, beyond the heavy use of cyan blue. (This was already informally known to the cyber security community through a large image-set in which the cyan blue #235594 was the most commonly found colour; we do not reference that set directly here because it was not cleaned for duplicates and other issues, and offered little in the way of search methodology. It can be found here: https://daylight.berkeley.edu/cybersecurity-imagery/.) All of the heat-maps highlight these similarities in colour. Points of interest emerge in Class 1, where colour is used to denote objects as being of specific importance, ranging from useful to dangerous. We may contrast this with another aspect of cyberspace, the video game, where emphasis is placed on the colour of objects with which the player may interact johnson2017history . This codification of objects again lacks a specific narrative, and objects that are beneficial often share hues with those semiotics deemed dangerous. Between classes, colour analysis alone yields results of limited immediate utility.

Turning our attention to Figure 4 (a colour simplification we sketch in code below), we see that our top-two image classes share a penchant for blacks, blues and whites. With this large amount of blacks and darker hues serving as background, we see in many classes a contrasting effect between the brightly coloured objects, spaces and vertices. In this manner, we hypothesise that colour may be used to draw the user's attention to these objects, which exist in a space with no other domains to draw inspiration from, similar to video games johnson2017history . Colour analysis alone does not allow us to infer further specifics as to these images' self-efficacy potential, that is, how colour may help users decode the messages present in the accompanying textual content. Instead, we look towards the objects, people, places and other aspects of the image. In these cases, we find the heat-maps to be of more use as an accompaniment to assess the credibility of the assertions.

Figure 4: An example of the most commonly used colours within our top-two classes, simplified to 4-bit.
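The colour simplification behind Figure 4 can be sketched as below. Reading "4-bit" as four bits per channel is our assumption about the figure's encoding, and the file name is hypothetical.

```python
# Sketch: quantise each RGB channel to its high bits and count frequent colours.
from collections import Counter
import numpy as np
from PIL import Image

def dominant_colours(path, keep_bits=4, top=8):
    """Quantise each RGB channel to `keep_bits` bits, then count colours."""
    arr = np.asarray(Image.open(path).convert("RGB"))
    shift = 8 - keep_bits
    quantised = (arr >> shift) << shift       # zero out the low-order bits
    counts = Counter(map(tuple, quantised.reshape(-1, 3)))
    return counts.most_common(top)            # ((R, G, B), pixel count) pairs

print(dominant_colours("hackerman_example.png"))  # hypothetical file
```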
### 4.2 Objects

Classes 1 and 10 feature objects associated with physical security, like locks, keys, and shields (and others identified by ghertner2020futureproof ); objects appropriated from non-security domains, such as cameras; and skeuomorphic adaptations of real-world objects, like digitised versions of envelopes. Class 1 contains 290 unique images with extraordinary colour similarity (as per Figure 5), implying consistent use of similar semiotics and colour schemes throughout. Class 10 is much smaller, featuring only 32 unique images, but it is notable for its wider variety of objects and colours (see Figure 6). Figures 2 and 3 highlight some of the images from these classes.

Figure 5: Heat-map highlighting the overall colour differences between images in Class 1 (Traditional physical-digital security semiotics). Figure 6: Heat-map highlighting the overall colour differences between images in Class 10 (Non-security semiotics and skeuomorphism).

### 4.3 People

Classes 2 and 3 feature individuals who are implied to be malicious (Class 2) or non-malicious (Class 3) users of cyberspace. We also include Class 9 within this group, given its emphasis on human anatomy. Class 2 contains 88 unique images with significant colour similarity (see Figure 7), which in this case implies similar compositions — individuals assuming similar stances against similar (dark) background colours. Class 3 consists of 81 unique images and is slightly more varied in its make-up (as per Figure 8). Class 9 is the smallest and most differentiated class in this group, consisting of only 42 unique images with wider colour discrepancies in the heat-map (see Figure 9). Images from Classes 2 and 3 are highlighted in Figure 2, and images from Class 9 in Figure 3.

Figure 7: Heat-map highlighting the overall colour differences between images in Class 2 (Hackerman archetype). Figure 8: Heat-map highlighting the overall colour differences between images in Class 3 (Non-malicious users of cyberspace). Figure 9: Heat-map highlighting the overall colour differences between images in Class 9 (Disembodied anatomy interacting with a physical device or digital overlay).

### 4.4 Places

Classes 4 and 5 feature specific places related to cyberspace and cyber security, such as futuristic urban spaces (Class 4) and workspaces (Class 5). Class 4 contains 72 unique images with significant colour similarity (see Figure 10), once again implying similar compositions and colour palettes. Class 5 consists of 69 unique images and is more varied, as can be seen in Figure 11. Figure 2 highlights some examples from Classes 4 and 5.

Figure 10: Heat-map highlighting the overall colour differences between images in Class 4 (Digital superpositions over cityscapes or skylines). Figure 11: Heat-map highlighting the overall colour differences between images in Class 5 (Physical-digital hybrid workspaces).

### 4.5 Other

Classes 6, 7, and 8 variously encompass imagery of digital patterns, alphanumeric symbols, and other two-dimensional or abstract representations of cyberspace and cyber security.
Class 6 contains 64 unique images with reasonable colour similarity (as per Figure 12); though similar background colours are frequently used to represent mathematically defined patterns and shapes, there is some variation based on other, supporting semiotic attributes in this category. Class 7 consists of 61 images that are slightly more varied, as seen in Figure 13. Meanwhile, Class 8 contains 61 images, and it is the most diverse of the three (as visible in Figure 14). However, the representativeness of Class 8's heat-map is limited by the content of the images, namely incoherent alphanumerical symbols or programming languages on a dark background. We expect that the variation in the heat-map reflects the wide variety of colours used in these different symbols, despite larger compositional similarities. Figure 3 highlights some examples from Classes 6, 7, and 8.

Figure 12: Heat-map highlighting the overall colour differences between images in Class 6 (Abstract patterns).

## 5 Discussion

In this section we argue that the aesthetic classes established in the previous section can help to prime readers' interpretations of associated texts in ways that affect their self-efficacy. Insofar as the images that accompany cyber security texts are created and selected for this specific purpose (that is, for inclusion in these texts), we may say that the observed aesthetic trends are more or less deliberate attempts to frame cyber security in a certain manner. Accordingly, we begin with an analysis of the semantics of cyber security — the colours, shapes, and devices used throughout the image-set — to understand how cyber security is being framed. We proceed to analyse the image classes that feature objects, people, and places. We finish this section by framing these discussions in light of our research aims and what we learnt in Section 2.

### 5.1 The visual vocabulary of cyber security

Our image-set suggests that cyber security exists at the limits of traditional human visibility. Indeed, many of the image classes feature abstractions of objects, situations, individuals, and landscapes rather than concrete subjects. This seems consistent with typical evocations of 'cyberspace' — a term coined in 1982 by science-fiction writer William Gibson gibson1982neuromancer to designate 'a new universe' parallel to the physical but created by the digital benedikt1991cyberspace (as encapsulated in the 'fifth domain' metaphor utilised by the U.S. military branch2020s ). Although the term has since become synonymous with global computer networks such as the Internet, it continues to encapsulate the 'sublime' sensations associated with a new frontier. nye1996american explained that, when users are introduced to powerful new technologies (such as cyberspace or a digital system), they experience a pleasurable yet terrifying sensation that alerts them to the limits of their reality.

Multiple scholars have explored how aesthetic choices can help to acclimate users to the new reality of cyberspace 'environments'. featherstone1996cyberspace , for instance, argued that aesthetics help to evoke imagery of life in the domain of cyberspace, while croon1999making explained that imagery can provide insight into a domain and help to cultivate a form of spatial awareness within it. Within our image-set, we observed heavy use of mathematical aesthetics such as concentric arcs, simple and tileable shapes (such as hexagons, with their high area-to-perimeter ratio), and connecting lines.
Where these devices construct the shape of cyberspace, colour establishes its tone. In our image-set, we observed a preponderance of dark shades and blue hues. This is consistent with the work of shedroff2012make , who researched the use of colour within the domain of science fiction. One possible reason for this preference in science fiction and cyber security is the relative rarity of blues in nature greenspan_2013 ; because this colour is scarce in our physical domain, it effectively communicates cyberspace's distinction from the straightforwardly physical and natural.

Figure 13: Heat-map highlighting the overall colour differences between images in Class 7 (Textual content such as explicit warnings). Figure 14: Heat-map highlighting the overall colour differences between images in Class 8 (Wall of code).

Of the aesthetic classes identified in our image-set, the ones that most effectively represent cyberspace as a techno-spatial domain separate from, but parallel to, our physical reality are Classes 3, 6 and 8, which represent 16% of our total image-set. These classes deploy the aforementioned aesthetic tactics most consistently and legibly.

### 5.2 Physical traditional security semiotics

beasley2010persuasive argue that semiotics researchers are mainly interested in understanding how individual signs, objects, and concepts coalesce into a coherent visual narrative. Objects such as locks, shields, and keys appeared frequently in our image-set, either as digital manifestations of physical objects or, in rare cases, as physical objects in a hybridised physical-digital environment. These objects represent 20% of the total image-set, and a selection can be found in Figure 2. As these objects often pre-date the development of cyberspace, they may initially evoke physical, rather than digital, security, having been grafted onto cyber security only later to represent what security might mean in cyberspace. As is the case with skeuomorphic interface design, wherein new signs or symbols are developed from prior objects (retaining the original's ornamental design cues), the new semiotic interpretation may have superseded the older meaning. Nonetheless, any theoretical constructs associated with the originator sign have likely been carried forward kearney2001continental .

According to ghertner2020futureproof , security could be seen as a form of negation, with security signs suggesting the absence of malicious activity. The lock is a powerful symbol of security in the real world, the key a symbol of legitimate authority, and the shield a symbol indicating defense in the event of an attack. These symbols can also be rendered differently to present a kind of advanced warning system; imagery of a broken lock, key, or shield might alert the user to potential security infractions. (It must be pointed out, however, that we did not come across any broken or damaged semiotic objects in our image-set.)

### 5.3 Hackers: anatomy, gender, and race

Humans were represented in 16.4% of the images in our image-set. Using our codebook, we assigned each of these images to one of two binary categories: the malicious hackerman archetype (8% of our total image-set, and just over 52% of our total human representations) or non-malicious users (7.8% of our total image-set, and just under 48% of our total human representations). An additional 4% of our image-set contained anatomical images, such as those seen in Class 9. These anatomical images were largely restricted to two types: those featuring hands and those featuring eyes.
Our image-set broadly suggests a lack of equal representation in cyber security aesthetics, and thus a lack of diverse role models. Just under 1% of our hackerman archetype images feature feminine-presenting people; all others were either masculine-presenting (63%), implied to be men, or had their secondary-sex characteristics obscured (for instance, by a mask). While we do see a wider variety of individuals amongst the non-malicious users — 63% of these images contained masculine-presenting individuals, 37% contained feminine-presenting individuals, and 20% contained people who appeared to be of a non-white background — this does not necessarily represent the gender or racial/ethnic background ratio of audiences who engage with cyber security content, but that of the organisations that produce cyber security digital media (as per Thomas's thomas2020discursive concept of a 'discursive digital archive'). (Statistics from the U.S. Bureau of Labor Statistics indicate that 18% of cyber security specialists are women jethwani2017can , which may provide a reference point for the 37% of human images featuring feminine-presenting people.)

Our image-set also seems to attribute disparate expertise — and thus responsibility for cyber security — to different individuals. The hackerman, for instance, is presented as a 'lone wolf', whereas non-malicious individuals are more frequently depicted in groups than as solo actors. (This is particularly pronounced in images featuring feminine-presenting people, who in many cases are accompanied by masculine-presenting people.) One could interpret this to mean that individuals lack the talents or other requisite knowledge that the lone hackerman possesses hack , and must therefore work together to counter the hackerman's threats. The media promotes this association by applying the hackerman stereotype to organisations such as 'Anonymous' and online communities such as 4chan, placing these groups in a position to dominate the conversation. In turn, individual users may view these entities as malicious experts, abdicating their own responsibility for cyber security based on their feelings of powerlessness hack .

Finally, our image-set features moral ambiguity and vague representations that could make it difficult for users to derive context from images. For example, while the hackerman is supposed to engage in malicious online activities, he is sometimes presented with morally ambiguous or vigilante imagery like the 'V for Vendetta' mask, complicating the viewer's understanding of his aims. While non-malicious individuals are consistently depicted as benign or neutral, they are frequently engaged in a variety of nondescript tasks, like interacting with hacked devices, responding to being hacked, or performing some kind of professional work (for instance, as cyber security professionals). Images of human anatomy were similar; though hands are an important and highly visible part of the human body jakubietz2005defining that can serve as a heuristic to facilitate learning goldin2005our (hands are often used to model movements or convey information by assuming specific positions), in our image-set they were engaged in a variety of mostly unclear, un-directed movements and positions, often in connection with physical devices like laptops.
Images in which the eye was dominant were similarly varied, but in many cases represented the moral ambiguity of a panopticon (alongside images from other classes that render a retina and cornea from composite imagery) or enjoined the user to pay attention gaines2019machinic . By depriving users of role models, clear contexts and goals, and the means or abilities to achieve such goals, these issues likely undermine individual users' self-efficacy.

We believe that these issues stem, at least in part, from cyber security's reliance on stock photography. Stock photography is characteristically nondescript and visually homogeneous because it must make individual images salient to various use cases — hence the images of individuals in vague contexts and unclear narratives. Furthermore, stock photography in western media features a 'discrimination implied by a well-calculated, almost mandatory inclusion of gender and ethnic minorities' papadopoulou2014seen , yielding nominally inclusive images that nonetheless fail to actually bestow agency on the individuals represented.

### 5.4 Digital-Physical spheres

Representing 7% of our total image-set, hybrid digital-physical representations, such as digital networks superimposed over cityscapes, the earth, or a more abstract sphere, were of some note to us. To understand what these images mean and how they may affect self-efficacy, we consulted work in other domains that use similar styles of visualisation. Most prominently, Sloterdijk, a German philosopher, studied the history of spherical maps, overlays, and designs, tracing these visual tactics as far back as the late 15th Century sloterdijk2011spheres . Sloterdijk argues that these kinds of images arose naturally from our changing understanding of our planet at that time, which was no longer an enclosed space or the centre of the universe, but a single, contingent celestial body. According to this account, disoriented European map makers began to fetishise spherical imagery as a sense-making device capable of conveying that we were no longer living inside a world, but rather on one. Although the overlays in our image-set differ from these earlier precedents in that they overlay networked security graphs instead of shipping lanes, they may nonetheless embody a return to such 'spheric-security' in the face of new metaphysical uncertainties, at least from an aesthetic viewpoint sloterdijk2011spheres . Indeed, a subconscious spheric-security can also be seen in images outside our image-set, like network traffic maps, which often take a spherical view despite not being technically constrained in this manner. According to 10.1093/cybsec/tyv004 , spherical shapes suggest something which can be contained and kept secure, and so they may help to imaginatively 'bound' the sprawling endlessness of network traffic, which always threatens to deviate into the unknown.

### 5.5 Trends and Recommendations

In the above, we explored the attributes of cyber security aesthetics and speculated about the kind of self-efficacy that these attributes afford. In the following, we highlight the trends we observed across classes and the key issues that cyber security aesthetics must address to improve its effect on users' self-efficacy.

#### 5.5.1 A practical philosophy for cyber security

According to the philosophical discourse on aesthetics presented in Section 2, aesthetics provide a basis for savouring sensation, organising sensations into orderly meaning Kant1892-KANTCO-3 , and orienting contexts kikuchi1992philosophic .
Moreover, the meanings and contexts thus constructed can influence our decisions chatterjee2014aesthetic . While cyber security aesthetics seem to fulfil the first element of sensation, adapting a long history of mathematical and even science fiction aesthetics (alongside other disciplinary symbols) to frame cyberspace as an other-worldly future domain that is not restricted to what we currently understand, they fail to provide sufficient context for effective navigation or learning. Instead, they present a dazzling spectacle of abstractions and powerful traditional security semiotics without much in the way of meaning. The hackerman archetype, powerful but elusive, and our other subjects involved in procedural and ambiguous work, encourage users to abdicate responsibility based on their own comparative lack of expertise and perceived powerlessness, while the digital-physical spheres, like Sloterdijk's historical spheres, reify a world that may not match our experience of reality. According to Sloterdijk, humans no longer believe in an all-seeing singularity encompassing us, be it supernatural or human exceptionalism sloterdijk2011spheres . Nonetheless, much user-facing cyber security media depicts cyberspace as a fraught and hostile environment that individuals cannot hope to navigate without expert assistance, a strategy that amounts to 'fear-mongering' and allows those with vested interests in the cyber security field to turn security into an all-important and all-encompassing issue neocleous2008critique .

Solving these problems is no simple task, and so no easy solution presents itself. One high-level, long-term suggestion is to develop a practical philosophy based on narratives. For example, mcsweeney1999security frames security as a form of resilience that individuals can build through everyday tasks that make them feel secure. Where the person performing these tasks is a trusted friend or confidante, security could also be framed as relational security. Breaking cyber security down into small, actionable tasks can significantly improve users' self-efficacy, given the theory of self-efficacy presented in Section 2. It would also allow cyber security aesthetics to play a positive role, enabling individuals and society at large to navigate the rapidly changing digital-physical world represented in our image-set. This possibility could be realised through new and innovative semiotics that are not simply copies of the traditional security landscape, or through a more positivist and standardised form of low-abstraction relational imagery with clear links to cyberspace. A practical example of this can be found in refuse recycling, where Gary Anderson, a student, won a nationwide contest for a new symbol for the then-fledgling recycling initiative with his Möbius-loop-based three-chasing-arrows design. This symbol has since risen to global prominence jones1999gary .

#### 5.5.2 Improving role models for cyber security

Systemic under-representation in certain occupations is a complex and multi-causal problem that needs to be examined using both interdisciplinary and context-specific approaches. (These approaches must also account for under-representation at multiple phases and points in time, including factors that influence the admission, participation, and progression of under-represented individuals in these industries.)
Fortunately, we can begin to bolster all users' self-efficacy through much more straightforward steps. (In our case, self-efficacy is linked to the context in which these images are used for informally learnt cyber security.) For example, insofar as stereotypes can have positive self-efficacy effects in certain contexts czopp2015positive (in the field of education, positive stereotypes have been shown to influence which goals individuals choose to pursue czopp2015positive ), we could co-opt the male-dominated hackerman stereotype and make it more inclusive, extending its connotations of moral ambiguity and power to individuals across demographics. We believe that such role models can help to reduce users' cognitive load when assimilating cyber security knowledge, and that they can make the field as a whole seem more user-friendly.

#### 5.5.3 The paradox of simplification

In the world of user interface design, developers build persuasive and easy-to-use interfaces to pursue a kind of universal simplicity. This objective could also be applied to cyber security aesthetics, where it could help to magnify users' self-efficacy. For instance, cyber security aesthetics could become simpler, cleaner, and more pertinent to the subject matter, or they could feature more rhetorical imagery. Where the digital workspaces, networked landscapes, text walls, and semiotic padlocks in our image-set all pose exploratory questions (prompting internal narrative reflection), rhetorical imagery makes a specific point, is designed with a specific audience in mind, and is focused on narrative integrity above all.

Of course, the problem is not just about exploratory imagery; we have seen that cyber security aesthetics feature many abstractions and visual devices that confuse the core concept being depicted. If cyber security aesthetics are supposed to simplify the complex nature of cyber security, it appears that we must first simplify the aesthetics themselves. However, simplification can itself lead to misunderstanding, as was the case in the Space Shuttle disaster, which was partially attributable to the oversimplification of data in a graph tufte2006beautiful . This is the cyber security aesthetic paradox: simplification can aid as well as hinder understanding in equal measure. To overcome this challenge, we might look to other fields that have developed unique aesthetic norms that enhance learners' self-efficacy. The field of chemistry, for instance, has spent centuries developing a standard aesthetic system that simplifies complex concepts and narratives without rendering them ineffective hoffmann2003thoughts .

## 6 Limitations and future research directions

The scope of this study was limited by the definitions we used and the selection criteria we applied to guide our assembly of the image-set. In our case, this meant focusing on English-language material even though a preliminary search conducted before implementation unearthed a rich catalogue of images in other languages. This also means that our analysis and our findings likely exhibit Anglo-Saxon bias. Nevertheless, we expect that our methodology can be adapted to explore the same research aims in other languages and cultural contexts, enabling a more universal understanding of cyber security aesthetics. Given the background we provided in Section 2, our definition of aesthetics likely exhibits Anglo-Saxon or broadly Euro-American bias, failing to encapsulate the aesthetic philosophies of other cultures.
Our definition may also be limiting for other reasons, as we focused narrowly on the ability of aesthetics to provide context for, and inlays into, the world of cyberspace, making it more interpretable and navigable for informal learners. Other definitions might yield different insights and support other kinds of research questions.

We utilised semi-automated methodologies to classify images based on the semiotic objects within them, and the results are tempered by the respective limitations of these methodologies. Moreover, our results represent a specific snapshot in the security timeline; access to a larger historical image-set would inevitably change the overall results, potentially yielding a more statistically significant sentiment analysis. Furthermore, in order to assemble a unique image-set, we ignored duplicates. However, insofar as aesthetic literacies arise from exposure to images, rather than from the absolute number of unique images, including duplicates could help us to ascertain the effective rate of gender representation, as perceived by viewers.

This study was exploratory: we relied solely on qualitative analysis to predict how users' self-efficacy might be affected by images. We did not consider other metrics that could have enhanced the findings, and we did not engage in other kinds of data collection (like surveys) that could have revealed real users' reactions. Traditionally speaking, image recognition assessments in lab settings involve comprehension tests, eye tracking, and brain-imaging. Knowledge of how users interact with cyber security aesthetics in these terms would allow for a significantly richer analysis of aesthetic effects on self-efficacy.

Finally, because self-efficacy is a fluid construct that may vary based on specific emotional awareness and specific tasks or contexts karademas2007optimism , future research in this field could assess the emotional associations of more granular aesthetic elements, like each colour within each image class. For example, mohammad2013colourful crowd-sourced an inventory of colour-word associations that reveal the specific emotions attached to specific colours, which could be used to analyse the colour trends in our image-set. All of these limitations present myriad opportunities to expand on this work, refining our understanding of cyber security aesthetics and the effects it can have on cyberspace and its users.

## 7 Conclusion

In this paper we have presented work on an image-set of cyber security aesthetics drawn from the kinds of mainstream media articles that individual users encounter on a regular basis. The work was oriented by two aims: (1) to ascertain what cyber security aesthetics consist of, from the perspective of an individual user, and (2) to provide an explorative discussion as to the manner in which these aesthetics may affect users' perceived self-efficacy as they informally learn cyber security precepts. Our findings for (1) indicate that cyber security aesthetics depict a threatening and confusing environment with systemic semiotic and social deficiencies — a distorted vision of cyberspace without clarity of thought. The narrative of cyber security is abstract and opaque, but through informal learning, individual users can assemble a mental representation of this concept, using their senses to intuit meaning and perspective from the visual elements that accompany cyber security texts.
For (2), our findings reveal important obstacles to self-efficacy potential, from the way that participants of cyberspace are portrayed to the moral ambiguity that characterises a significant proportion of the image-set. Nonetheless, as cyber security continues to evolve into a core concept within cyberspace, we believe that these issues can be overcome. Indeed, several of these problems, and the simplification paradox itself, arise from a lack of vision, or perhaps just the lack of usable semiotics available to cyber security content creators. We believe this work represents the first steps towards a more holistic and cohesive cyber security aesthetic vision.

## Statements and Declarations

All authors contributed to the study conception and design. Material preparation and data collection were performed by [Mark Quinlan]. Initial data analysis of the image-set was performed by [Mark Quinlan] and [Aaron Ceross], with all further analysis performed by [Mark Quinlan]. The first draft of the manuscript was written by [Mark Quinlan] and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript, and have no relevant financial or non-financial interests to disclose.

## References

* (1) Lin, F., Yao, M.: The impact of accompanying text on visual processing and hedonic evaluation of art. Empirical Studies of the Arts 36(2), 180–198 (2018)
* (2) Hattwig, D., Bussert, K., Medaille, A., Burgess, J.: Visual literacy standards in higher education: New opportunities for libraries and student learning. portal: Libraries and the Academy 13(1), 61–89 (2013)
* (3) Nichols, J.: Illustrations of the Literary History of the Eighteenth Century, vol. 8. Nichols; Bentley (1858)
* (4) Marland, A.: Political photography, journalism, and framing in the digital age: The management of visual media by the prime minister of Canada. The International Journal of Press/Politics 17(2), 214–233 (2012)
* (5) Cooper, N., Holman, V.: War and visual culture since 1900. Journal of War & Culture Studies 1(3), 219–222 (2008)
* (6) Meyer, B., Verrips, J.: Aesthetics. Routledge (2008)
* (7) David, A., Glore, P.: The impact of design and aesthetics on usability, credibility, and learning in an online environment. Online Journal of Distance Learning Administration 13(4) (2010)
* (8) Mitchell, C., Weber, S., O'Reilly-Scanlon, K.: Just Who Do We Think We Are? Methodologies for Autobiography and Self-Study in Teaching (2005)
* (9) Page, T.: Skeuomorphism or flat design: future directions in mobile device user interface (UI) design education. International Journal of Mobile Learning and Organisation 8(2), 130–142 (2014)
* (10) Curtis, A.: Rhetoric of Flat Design and Skeuomorphism in Apple's iOS Graphical User Interface. University of Rhode Island (2015)
* (11) Tharangie, K., Irfan, C., Marasinghe, C., Yamada, K.: Kansei engineering assessing system to enhance the usability in e-learning web interfaces: Colour basis. In: 16th International Conference on Computers in Education, vol. 1, pp. 145–150 (2008)
* (12) Reyna, J.: The importance of visual design and aesthetics in e-learning. Training and Development 40(5), 28–31 (2013)
* (13) Malcolm, J., Hodkinson, P., Colley, H.: The interrelationships between informal and formal learning. Journal of Workplace Learning (2003). https://doi.org/10.1108/13665620310504783
* (14) Ollis, T.: Learning in social action: The informal and social learning dimensions of circumstantial and lifelong activists. Australian Journal of Adult Learning 51(2), 248–268 (2011)
* (15) Flavián, C., Gurrea, R., Orús, C.: A heuristic evaluation of websites design for achieving the web success. International Journal of Services and Standards 5(1), 17–41 (2009)
* (16) Joshi, D., Datta, R., Fedorovskaya, E., Luong, Q.-T., Wang, J.Z., Li, J., Luo, J.: Aesthetics and emotions in images. IEEE Signal Processing Magazine 28(5), 94–115 (2011)
* (17) Shires, J.: Cyber-noir: Cybersecurity and popular culture. Contemporary Security Policy 41(1), 82–107 (2020)
* (18) Carroll, F.: Usable security and aesthetics: Designing for engaging online security warnings and cautions to optimise user security whilst affording ease of use. In: European Symposium on Usable Security 2021, pp. 23–28 (2021)
* (19) Fogg, B.J.: A behavior model for persuasive design. In: Proceedings of the 4th International Conference on Persuasive Technology, pp. 1–7 (2009)
* (20) Robinson, L., Schulz, J., Blank, G., Ragnedda, M., Ono, H., Hogan, B., Mesch, G., Cotten, S.R., Kretchmer, S.B., Hale, T.M., et al.: Digital inequalities 2.0: Legacy inequalities in the information age (2020)
* (21) Alsudani, F., Casey, M.: The effect of aesthetics on web credibility. People and Computers XXIII: Celebrating People and Technology, 512–519 (2009)
* (22) Ma, K.-L.: Cyber security through visualization. In: Proceedings of the 2006 Asia-Pacific Symposium on Information Visualisation, vol. 60, pp. 3–7 (2006)
* (23) Bernal, V.: The aesthetics of cyber insecurity: Displaying the digital in three American museum exhibits. In: Futureproof, pp. 33–62. Duke University Press (2020)
* (24) Ghertner, D.A., McFann, H., Goldstein, D.M.: Futureproof: Security Aesthetics and the Management of Life. Duke University Press (2020)
* (25) Dewey, J.: Art as Experience. New York: Perigee Books (1934)
* (26) Beardsley, M.C.: Aesthetics from Classical Greece to the Present, vol. 13. University of Alabama Press (1975)
* (27) Kant, I.: The Critique of Judgment. Prometheus Books (1892)
* (28) Zander, T., Öllinger, M., Volz, K.G.: Intuition and insight: Two processes that build on each other or fundamentally differ? Frontiers in Psychology 7, 1395 (2016)
* (29) Walsh, D.: Aesthetic objects and works of art. The Journal of Aesthetics and Art Criticism 33(1), 7–12 (1974)
* (30) Wissenburg, I.: Aesthetic communication: on the possibility of combining semiotics and aesthetics: analysis of 'Delftsche Slaolie' by Jan Toorop. B.S. thesis, Universiteit Utrecht (2012)
* (31) Barbosa, S.D.J., Barbosa, G.D.J., Souza, C.S.d., Leitão, C.F.: A semiotics-based epistemic tool to reason about ethical issues in digital technology design and development. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 363–374 (2021)
* (32) Cheng, L., Pei, J., Danesi, M.: A sociosemiotic interpretation of cybersecurity in U.S. legislative discourse. Social Semiotics 29(3), 286–302 (2019). https://doi.org/10.1080/10350330.2019.1587843
* (33) Sayers, S.: Jacques Rancière (2004) The politics of aesthetics: the distribution of the sensible. Culture Machine, Reviews (2005)
* (34) Rudner, R.: On semiotic aesthetics. The Journal of Aesthetics and Art Criticism 10(1), 67–77 (1951)
* (35) Chatterjee, A.: The Aesthetic Brain: How We Evolved to Desire Beauty and Enjoy Art. Oxford University Press (2014)
* (36) Carper, B.A.: Fundamental Patterns of Knowing in Nursing. Teachers College, Columbia University (1975)
* (37) Keenan, T.M.: The use of aesthetic knowledge in decision making processes in mega projects. PhD thesis, Queensland University of Technology (2016)
* (38) Carroll, F.: Designing (for) experiences in photorealistic VR environments. New Review of Hypermedia and Multimedia 16(1-2), 181–194 (2010)
* (39) Fernandes, M., Walls, L., Munson, S., Hullman, J., Kay, M.: Uncertainty displays using quantile dotplots or CDFs improve transit decision-making. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–12 (2018)
* (40) Rader, E., Wash, R., Brooks, B.: Stories as informal lessons about security. In: Proceedings of the Eighth Symposium on Usable Privacy and Security, pp. 1–17 (2012)
* (41) Bandura, A., Freeman, W.H., Lightsey, R.: Self-efficacy: The Exercise of Control. Springer (1999)
* (42) Maddux, J.E.: Self-efficacy theory. In: Self-efficacy, Adaptation, and Adjustment, pp. 3–33. Springer (1995)
* (43) Stumpf, S.A., Brief, A.P., Hartman, K.: Self-efficacy expectations and coping with career-related events. Journal of Vocational Behavior 31(1), 91–108 (1987)
* (44) Herley, C.E.: So long, and no thanks for the externalities: The rational rejection of security advice by users. In: Proceedings of the 2009 Workshop on New Security Paradigms (NSPW '09), pp. 133–144. Association for Computing Machinery, New York, NY, USA (2009). https://doi.org/10.1145/1719030.1719050
* (45) Halevi, T., Memon, N., Lewis, J., Kumaraguru, P., Arora, S., Dagar, N., Aloul, F., Chen, J.: Cultural and psychological factors in cyber-security. In: Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services, pp. 318–324 (2016)
* (46) Ajzen, I.: The theory of planned behavior. Organizational Behavior and Human Decision Processes 50(2), 179–211 (1991)
* (47) Bosma, N., Hessels, J., Schutjens, V., Van Praag, M., Verheul, I.: Entrepreneurship and role models. Journal of Economic Psychology 33(2), 410–424 (2012)
* (48) Marres, N., Weltevrede, E.: Scraping the social? Issues in live social research. Journal of Cultural Economy 6(3), 313–335 (2013). https://doi.org/10.1080/17530350.2013.772070
* (49) Schatz, D., Bashroush, R., Wall, J.: Towards a more representative definition of cyber-security. Journal of Digital Forensics, Security and Law 12(2), 53–74 (2017). https://doi.org/10.15394/jdfsl.2017.1476
* (50) Humayun, M., Niazi, M., Jhanjhi, N., Alshayeb, M., Mahmood, S.: Cyber-security threats and vulnerabilities: a systematic mapping study. Arabian Journal for Science and Engineering 45(4), 3171–3189 (2020)
* (51) Kosar, T., Bohra, S., Mernik, M.: Protocol of a systematic mapping study for domain-specific languages. Journal of Information and Software Technology 21(C), 77–91 (2016)
* (52) Csurka, G., Dance, C., Fan, L., Willamowski, J., Bray, C.: Visual categorization with bags of keypoints. In: Workshop on Statistical Learning in Computer Vision, ECCV, vol. 1, pp. 1–2, Prague (2004)
* (53) Weller, H.I., Westneat, M.W.: Quantitative color profiling of digital images with earth mover's distance using the R package colordistance. PeerJ 7, e6398 (2019)
* (54) Rubner, Y., Tomasi, C., Guibas, L.J.: The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision 40(2), 99–121 (2000)
* (55) Gupta, A., Thakkar, K., Gandhi, V., Narayanan, P.: Nose, eyes and ears: Head pose estimation by locating facial keypoints. In: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1977–1981. IEEE (2019)
* (56) Wäldchen, J., Mäder, P.: Plant species identification using computer vision techniques: A systematic literature review. Archives of Computational Methods in Engineering 25(2), 507–543 (2018)
* (57) Samek, W., Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R.: Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems 28(11), 2660–2673 (2016)
* (58) Johnson, M.R.: The history of cyberspace aesthetics in video games. In: Cyberpunk and Visual Culture, pp. 139–154. Routledge (2017)
* (59) Gibson, W.: Neuromancer, vol. 1. Aleph (1984)
* (60) Benedikt, M.L.: Cyberspace: First Steps. MIT Press (1991)
* (61) Branch, J.: What's in a name? Metaphors and cybersecurity. International Organization 75(1), 39–70 (2021). https://doi.org/10.1017/S002081832000051X
* (62) Nye, D.E.: American Technological Sublime. MIT Press (1996)
* (63) Featherstone, M., Burrows, R.: Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment. Sage (1996)
* (64) Croon, A.: Making sense of cyberspace: a question of being-with information technology. In: Exploring Cyber Society, pp. 1–9. University of Northumbria at Newcastle (1999)
* (65) Shedroff, N., Noessel, C.: Make It So: Interaction Design Lessons from Science Fiction. Rosenfeld Media (2012)
* (66) Greenspan, S.: Future screens are mostly blue (2013). https://99percentinvisible.org/episode/future-screens-are-mostly-blue/
* (67) Beasley, R., Danesi, M.: Persuasive Signs: The Semiotics of Advertising, vol. 4. Walter de Gruyter (2010)
* (68) Kearney, R., Rasmussen, D.: Continental Aesthetics: Romanticism to Postmodernism: An Anthology (2001)
* (69) Thomas, P.: Discursive trick effects: How raced and gendered semiotics in industry media undermine equal representation in the cybersecurity workforce (2020)
* (70) Jethwani, M.M., Memon, N., Seo, W., Richer, A.: "I can actually be a super sleuth": Promising practices for engaging adolescent girls in cybersecurity education. Journal of Educational Computing Research 55(1), 3–25 (2017)
* (71) Sob, T.: "Hackerman": How diverging cyberspace portrayals influence the ADF's perception of cybersecurity and the cultural ramifications of these judgements. Australian Defense Force - Cyber Security Challenges 1(1) (2021)
* (72) Jakubietz, R.G., Jakubietz, M.G., Kloss, D., Gruenert, J.G.: Defining the basic aesthetics of the hand. Aesthetic Plastic Surgery 29(6), 546–551 (2005)
* (73) Goldin-Meadow, S., Wagner, S.M.: How our hands help us learn. Trends in Cognitive Sciences 9(5), 234–241 (2005)
* (74) Gaines, B.: Machinic eyes: New and post-digital aesthetics, surveillance, and resistance. PhD thesis, Clemson University (2019)
* (75) Papadopoulou, A.: As seen in your prospectus: A critical essay on the representation of ethnic diversity in stock photography (2014)
* (76) Sloterdijk, P.: Spheres Volume I: Bubbles. Microspherology. Los Angeles: Semiotext(e) (2011)
* (77) Hall, P., Heath, C., Coles-Kemp, L.: Critical visualization: a case for rethinking how we visualize risk and security. Journal of Cybersecurity 1(1), 93–108 (2015). https://doi.org/10.1093/cybsec/tyv004
* (78) Kikuchi, J.F., Simmons, H.: Philosophic Inquiry in Nursing (1992)
* (79) Neocleous, M.: Critique of Security. Edinburgh University Press (2008)
* (80) McSweeney, B.: Security, Identity and Interests: A Sociology of International Relations, vol. 69. Cambridge University Press, Cambridge (1999)
* (81) Jones, P., Powell, J.: Gary Anderson has been found! Resource Recycling 18, 25–27 (1999)
* (82) Czopp, A.M., Kay, A.C., Cheryan, S.: Positive stereotypes are pervasive and powerful. Perspectives on Psychological Science 10(4), 451–463 (2015)
* (83) Tufte, E.R.: Beautiful Evidence. Graphics Press (2006)
* (84) Hoffmann, R.: Thoughts on aesthetics and visualization in chemistry. HYLE – International Journal for Philosophy of Chemistry 9(1), 7–10 (2003)
* (85) Karademas, E.C., Kafetsios, K., Sideridis, G.D.: Optimism, self-efficacy and information processing of threat- and well-being-related stimuli. Stress and Health: Journal of the International Society for the Investigation of Stress 23(5), 285–294 (2007)
* (86) Mohammad, S.: Colourful language: Measuring word-colour associations. arXiv preprint arXiv:1309.5942 (2013)
# Analytic Response Relativistic Coupled-Cluster Theory: The first application to indium isotope shifts

B.K. Sahoo Atomic and Molecular Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380009, India <EMAIL_ADDRESS> A.R. Vernon School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom <EMAIL_ADDRESS> <EMAIL_ADDRESS> R.F. Garcia Ruiz School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom EP Department, CERN, CH-1211 Geneva 23, Switzerland <EMAIL_ADDRESS> C.L. Binnersley School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom J. Billowes School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom M.L. Bissell School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom T.E. Cocolios KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium G.J. Farooq-Smith KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium K.T. Flanagan School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom Photon Science Institute, Alan Turing Building, University of Manchester, Manchester M13 9PY, United Kingdom W. Gins KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium R.P. de Groote KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium Department of Physics, University of Jyväskylä, Survontie 9, Jyväskylä, FI-40014, Finland Á. Koszorús KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium G. Neyens EP Department, CERN, CH-1211 Geneva 23, Switzerland KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium K.M. Lynch EP Department, CERN, CH-1211 Geneva 23, Switzerland F. Parnefjord-Gustafsson KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium C.M. Ricketts School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom K.D.A. Wendt Institut für Physik, Johannes Gutenberg-Universität Mainz, D-55128 Mainz, Germany S.G. Wilkins EN Department, CERN, CH-1211 Geneva 23, Switzerland X.F. Yang School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium

###### Abstract

With increasing demand for accurate calculation of isotope shifts of atomic systems for fundamental and nuclear structure research, an analytic energy derivative approach is presented in the relativistic coupled-cluster theory framework to determine the atomic field shift and mass shift factors. This approach allows the determination of expectation values of atomic operators, overcoming fundamental problems that are present in existing atomic physics methods, i.e. it satisfies the Hellmann-Feynman theorem, does not involve any non-terminating series, and is free from the choice of any perturbative parameter. As a proof of concept, the developed analytic response relativistic coupled-cluster theory has been applied to determine mass shift and field shift factors for different atomic states of indium. High-precision isotope-shift measurements of $^{104-127}$In were performed in the 246.8-nm (5p $^{2}P_{3/2}$ $\rightarrow$ 9s $^{2}S_{1/2}$) and 246.0-nm (5p $^{2}P_{1/2}$ $\rightarrow$ 8s $^{2}S_{1/2}$) transitions to test our theoretical results.
An excellent agreement between the theoretical and measured values is found, which is known to be challenging to achieve in multi-electron atoms. The calculated atomic factors allowed an accurate determination of the nuclear charge radii of the ground and isomeric states of the $^{104-127}$In isotopes, providing an isotone-independent comparison of the absolute charge radii.

## 1 Introduction

The removal or addition of neutrons to the nucleus produces changes in the energy of atomic transitions, known as the isotope shift (IS). These small variations, typically less than $10^{-6}$ with respect to the atomic energy levels, can probe fundamental aspects of the electron-nucleus interaction, e.g., the size of the nucleus [1], the existence of new bosons [2, 3], new spin-independent interactions [4, 5] and long-range neutrino-mediated forces [6]. Currently, extensive experimental efforts worldwide have been focused on the development of complementary techniques to perform high-precision measurements of IS in atomic transitions, across different isotopic chains [7, 8, 9, 10]. Alongside the experimental progress, the development of many-body methods plays a central role in these studies as it provides the means to extract nuclear-structure and fundamental-physics parameters from experimental observations [11]. Reliable atomic calculations are critical to establish firm conclusions from high-precision experiments in nuclear [12] and fundamental-physics research [13].

Most of our present knowledge on the nuclear charge radius of unstable nuclei is derived from IS measurements in atomic transitions performed by laser spectroscopy experiments [12]. The calculation of the atomic physics factors needed to decouple many-body electron correlations from nuclear-structure variations presents the main challenge in the interpretation of IS measurements. The coupled-cluster (CC) method is considered the gold standard for treating electron-correlation effects [14]. However, the current methods used to calculate atomic physics operators present serious drawbacks that can generate uncontrolled theoretical uncertainties. The commonly used expectation-value-evaluation (EVE) approach [15, 16], for example, involves non-terminating series, and the finite-field (FF) approach [17] depends on the choice of a perturbation parameter. To overcome these problems, in this work we implement and demonstrate, for the first time in atomic systems, the analytic-response (AR) theory within the CC framework [18] to determine the IS parameters of atomic systems.

The atomic factors involved in the IS measurements can be empirically obtained for even-proton elements [19], where independent charge radii measurements from electron scattering and muonic atoms exist for three or more stable isotopes. However, this is not the case for elements with an odd proton number, where only up to two stable isotopes exist and the accuracy of all charge-radii values obtained from isotope-shift measurements relies on atomic physics calculations. Accurate determination of the charge radii of radioactive isotopes is not only relevant for nuclear structure research, but can provide a deeper insight into nuclear matter [20, 21].
Motivated by the current nuclear structure interest in the study of ISs around proton number $Z$ = 50 [22, 23, 24, 25], our theoretical developments were used to perform, for the first time, ab-initio calculations of atomic factors for the indium (In) atom ($Z$ = 49). The In isotope chain offers a comprehensive laboratory to test these theoretical developments. The long chain of isotopes increases the precision in cancelling out the nuclear contribution to the IS, while the presence of at least one isomeric nuclear state at each mass allows for a mass-independent measure of the field-shift (FS) contribution to the IS. This provides a stringent constraint to test our theoretical calculations by increasing the precision on the experimentally determined atomic factors. Moreover, several atomic transitions are experimentally accessible, and precise data on transitions to high-lying states [26] can be combined with our new measurements and calculations to evaluate the individual atomic level-IS (LIS), allowing a direct study of the IS factors for each level.

## 2 Theory

The IS of an energy level, $i$, between an isotope, $A$, with mass, $m_{A}$, and an isotope, $A^{\prime}$, with mass, $m_{A^{\prime}}$, is given [27] by a product of nuclear and atomic factors as (the factor of $h$ is dropped in the notation of this work unless relevant, i.e. IS $=\delta E_{i}$; however, where values are presented for comparison to experiment the factor is included)

$$\delta E_{i}=F_{i}\,\delta\langle r^{2}\rangle+K_{i}^{\text{MS}}\,\frac{m_{A}-m_{A^{\prime}}}{m_{A}m_{A^{\prime}}},\qquad(1)$$

where $\delta\langle r^{2}\rangle=\langle r^{2}\rangle_{A}-\langle r^{2}\rangle_{A^{\prime}}$ is the difference between the nuclear mean-square charge radii of the two isotopes [28, 29]. Higher-order effects and non-linear corrections to expression (1) are expected to be lower than 1$\%$ [30], and are thus neglected in our present study. The atomic part is factorized in the constants $F_{i}$ and $K_{i}^{\text{MS}}$, which are the FS and mass shift (MS) contributions to the LIS, respectively.
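To make the role of these factors concrete, the short sketch below inverts Eq. (1) to extract $\delta\langle r^{2}\rangle$ from a measured shift. All numerical values are placeholders chosen only to have roughly realistic magnitudes; they are not the factors calculated or measured in this work.

```python
# Illustrative inversion of Eq. (1); all inputs are placeholder values.
def delta_r2(dE_MHz, F_MHz_per_fm2, K_MS_GHz_u, m_A, m_Ap):
    """Solve Eq. (1) for delta<r^2> (fm^2); masses in atomic mass units."""
    mass_term_MHz = K_MS_GHz_u * 1e3 * (m_A - m_Ap) / (m_A * m_Ap)  # GHz*u -> MHz
    return (dE_MHz - mass_term_MHz) / F_MHz_per_fm2

# Placeholder inputs: a 1 GHz shift between A = 115 and A' = 113.
print(delta_r2(dE_MHz=1000.0, F_MHz_per_fm2=1500.0,
               K_MS_GHz_u=300.0, m_A=114.904, m_Ap=112.904))  # ~0.64 fm^2
```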
The FS factor, $F_{i}=\frac{\langle\Psi_{i}|\sum_{e}F(r_{e})|\Psi_{i}\rangle}{\langle\Psi_{i}|\Psi_{i}\rangle}$, for an atomic level, $i$, described by the wave function, $|\Psi_{i}\rangle$, is calculated using the operator defined by

$$F(r_{e})=-\frac{\delta V_{\text{nuc}}(r_{N},r_{e})}{\delta\langle r_{N}^{2}\rangle}\;,\qquad(2)$$

where $r_{N}$ is the nuclear radius ($\langle r_{N}^{2}\rangle$ is its mean-square value) and $r_{e}$ is the electronic coordinate. The electrostatic potential due to the nuclear charge, $V_{\text{nuc}}(r_{N},r_{e})$, is evaluated by assuming a spherically-symmetric Fermi nuclear charge distribution defined by

$$\rho_{\text{nuc}}(r_{N})=\frac{\rho_{0}}{1+e^{(r_{N}-c)/a}}\;,\qquad(3)$$

where $\rho_{0}$ is the normalization factor, $c$ is the half-charge radius, and $a$ is related to the skin thickness [31].

The total MS constant is expressed as the sum of the normal MS (NMS), $K_{i}^{\text{NMS}}=\frac{\langle\Psi_{i}|\sum_{e}H_{NMS}(r_{e})|\Psi_{i}\rangle}{\langle\Psi_{i}|\Psi_{i}\rangle}$, and the specific MS (SMS), $K_{i}^{\text{SMS}}=\frac{\langle\Psi_{i}|\sum_{k,l\geq k}H_{SMS}(r_{kl})|\Psi_{i}\rangle}{\langle\Psi_{i}|\Psi_{i}\rangle}$, for the inter-electronic distance, $r_{kl}=|\vec{r}_{k}-\vec{r}_{l}|$, between the electrons located at $r_{k}$ and $r_{l}$. These constants are obtained using the relativistic expressions of the operators given by [32]

$$H_{NMS}(r_{i})=\vec{p}_{i}^{\;2}-\frac{\alpha_{e}Z}{r_{i}}\vec{\alpha}_{i}^{D}\cdot\vec{p}_{i}-\frac{\alpha_{e}Z}{r_{i}}\left\{(\vec{\alpha}_{i}^{D}\cdot\vec{C}_{i}^{(1)})\vec{C}_{i}^{(1)}\right\}\cdot\vec{p}_{i}\;,\qquad(4)$$

and

$$H_{SMS}(r_{ij})=\vec{p}_{i}\cdot\vec{p}_{j}-\frac{\alpha_{e}Z}{r_{i}}\vec{\alpha}_{i}^{D}\cdot\vec{p}_{j}-\frac{\alpha_{e}Z}{r_{i}}\left\{(\vec{\alpha}_{i}^{D}\cdot\vec{C}_{i}^{(1)})\vec{C}_{i}^{(1)}\right\}\cdot\vec{p}_{j}\;.\qquad(5)$$

In the above expressions, $\vec{p}$ is the momentum operator, $\alpha_{e}$ is the fine structure constant, $Z$ is the atomic number, $\vec{\alpha}^{D}$ is the Dirac matrix and $\vec{C}^{(1)}$ is the Racah operator of rank one. It is worth noting that, in the non-relativistic limit, these expressions reduce to $H_{NMS}(r_{i})=\vec{p}_{i}^{\;2}$ and $H_{SMS}(r_{ij})=\vec{p}_{i}\cdot\vec{p}_{j}$. Since $H_{SMS}$ is a two-body operator, evaluation of $K_{i}^{SMS}$ using the expectation-value expression is computationally cumbersome.
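As an aside, the Fermi distribution of Eq. (3) is straightforward to explore numerically. The sketch below fixes $\rho_{0}$ by normalising the total charge to $Z$ and then evaluates $\langle r_{N}^{2}\rangle$; the values of $c$ and $a$ are illustrative numbers of roughly the right scale for indium, not the fitted parameters used in this work.

```python
# Numerical normalisation of the Fermi distribution in Eq. (3) (illustrative).
import numpy as np
from scipy.integrate import quad

Z, c, a = 49, 5.3, 0.52   # proton number; c and a in fm (assumed values)

def fermi(r):
    """Unnormalised Fermi shape from Eq. (3)."""
    return 1.0 / (1.0 + np.exp((r - c) / a))

# Fix rho_0 so the total charge integrates to Z.
norm, _ = quad(lambda r: 4.0 * np.pi * r**2 * fermi(r), 0.0, 50.0)
rho0 = Z / norm
# Mean-square radius of the charge distribution.
num, _ = quad(lambda r: 4.0 * np.pi * r**4 * rho0 * fermi(r), 0.0, 50.0)
print(f"rho_0 = {rho0:.5f} e/fm^3, <r_N^2> = {num / Z:.2f} fm^2")
```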
Amplitudes of the RCC operators and energies are obtained using the following equations

$\displaystyle\langle\Phi_{0}^{L}|(He^{T})_{c}|\Phi_{0}\rangle=0\;,$ (7)

and

$\displaystyle\langle\Phi_{v}^{L}|(He^{T})_{c}S_{v}|\Phi_{v}\rangle=E_{v}\langle\Phi_{v}^{L}|S_{v}|\Phi_{v}\rangle-\langle\Phi_{v}^{L}|(He^{T})_{c}|\Phi_{v}\rangle,\ \ \ \ $ (8)

where $H$ is the atomic Hamiltonian and the subscript $c$ indicates connected terms. The superscript $L$ over the reference states indicates $L^{\text{th}}$-excited determinants with respect to the reference determinants appearing in the ket states. $E_{0}$ and $E_{v}$ are the exact energies of the states containing the closed core (i.e. for the In+ ion) and the closed core with the valence orbital, $v$, (i.e. for the In atom), respectively. Both the $T$ and $S_{v}$ RCC operators are normal ordered with respect to $|\Phi_{0}\rangle$. For convenience we carry out all the calculations using normal-ordered operators, designated by the subscript $N$. The normal-ordered Hamiltonian is defined as $H_{N}=H-\langle\Phi_{0}|H|\Phi_{0}\rangle$, for the DHF energy $E_{DHF}=\langle\Phi_{0}|H|\Phi_{0}\rangle$, with which the above amplitude-solving equations for the RCC operators are given by

$\displaystyle\langle\Phi_{0}^{L}|\bar{H}_{N}|\Phi_{0}\rangle=0\;,$ (9)

and

$\displaystyle\langle\Phi_{v}^{L}|\bar{H}_{N}S_{v}|\Phi_{v}\rangle=\Delta E_{v}\langle\Phi_{v}^{L}|S_{v}|\Phi_{v}\rangle-\langle\Phi_{v}^{L}|\bar{H}_{N}|\Phi_{v}\rangle\;.\ \ \ \ $ (10)

Here $\bar{H}_{N}=(H_{N}e^{T})_{c}$, $\Delta E_{0}=E_{0}-E_{DHF}$ is the correlation energy of the closed core and $\Delta E_{v}=E_{v}-E_{0}$ is the electron affinity (EA) of the electron in the valence orbital, $v$. We are interested in the EA values in this work, which are evaluated by

$\displaystyle\Delta E_{v}=\langle\Phi_{v}|\bar{H}_{N}\{1+S_{v}\}|\Phi_{v}\rangle\;.$ (11)

It is clear from the above that Eqs. (10) and (11) are coupled. In our calculations we have considered the Dirac-Coulomb-Breit (DCB) interactions in the atomic Hamiltonian, $H^{a}$. Further, we consider only all possible single- and double-excitation configurations in our RCC theory (the RCCSD method). Excitation energies between two states are estimated from the difference of their EA values.

### 3.2 The finite-field approach to isotope shifts

Since all the relevant FS, NMS and SMS operators are scalar, they can be included with the atomic Hamiltonian to estimate their contributions to the energies. By expressing the total Hamiltonian as $H=H^{a}+\lambda_{v}^{O}O$, with the atomic DCB Hamiltonian, $H^{a}$, and $O$ representing one of the FS, NMS or SMS operators for an arbitrary parameter, $\lambda_{v}^{O}$, it is possible to express the energy (here the EA) in the FF approach as

$\displaystyle\Delta E_{v}=\Delta E_{v}^{(0)}+\lambda_{v}^{O}\Delta E_{v}^{(1)}+{\cal O}(\lambda_{v}^{O})^{2}\;.$ (12)

The superscripts (0) and (1) and ${\cal O}(\lambda_{v}^{O})^{2}$ denote the zeroth-, first- and higher-order contributions, respectively; the ${\cal O}(\lambda_{v}^{O})^{2}$ contributions are not of interest here. It clearly follows that

$\displaystyle\langle O\rangle\equiv\Delta E_{v}^{(1)}\simeq\left.\frac{\partial\Delta E_{v}}{\partial\lambda_{v}^{O}}\right|_{\lambda_{v}^{O}=0}.$ (13)

This follows from the Hellmann-Feynman (H-F) theorem [37, 38], but the approach has two major problems.
First, since the behaviors of the FS, NMS and SMS operators are very different, the choice of $\lambda_{v}^{O}$ has to be made separately for each of the FS, NMS and SMS constants to estimate them reliably, and it can also be atomic-state dependent. Second, the ${\cal O}(\lambda_{v}^{O})^{2}$ contributions are merely assumed to be negligible in the FF approach on the basis of the chosen $\lambda_{v}^{O}$ value; they are never actually removed. Since electron-correlation effects usually contribute significantly to these quantities, the IS constants inferred from the FF approach are subject to large numerical uncertainties. Nevertheless, we use $\lambda_{v}^{O}=1\times 10^{-6}$ to determine all the IS constants in the different states, solely to enable a comparative analysis of the results in our study.

### 3.3 The expectation-value-evaluation approach

Several recent works present high-precision results for many properties of atomic systems, e.g. hyperfine-structure constants [36, 39], obtained by employing the RCC theory. These calculations are carried out using the EVE approach. Since the IS constants are the expectation values of the respective operators, we can evaluate them in the EVE approach using the RCC theory expression

$\displaystyle\langle O\rangle\equiv\frac{\langle\Psi_{v}|O|\Psi_{v}\rangle}{\langle\Psi_{v}|\Psi_{v}\rangle}=\frac{\langle\Phi_{v}|\{1+S_{v}^{\dagger}\}e^{T\dagger}O_{N}e^{T}\{1+S_{v}\}|\Phi_{v}\rangle}{\langle\Phi_{v}|\{1+S_{v}^{\dagger}\}e^{T\dagger}e^{T}\{1+S_{v}\}|\Phi_{v}\rangle}$ (14)

by determining the wave functions using the Hamiltonian $H\equiv H^{a}$. The advantage of this approach is that it is possible to analyse the roles of various physical effects in the determination of the properties, whereas the FF approach yields only the final results without explicitly revealing the behavior of the electron-correlation effects. Evidently, this approach too has several shortcomings. First, both the numerator and the denominator of the above expression are non-terminating series. Second, the SMS operator is a two-body operator, so its normal-ordered form has two components (e.g. refer to [35]),

$\displaystyle O_{N}\equiv O_{N}^{1}+O_{N}^{2},$ (15)

where the superscripts 1 and 2 correspond to the effective one-body and two-body parts. For properties described by one-body operators, such as hyperfine-structure constants, we have adopted an iterative procedure to account for contributions from the aforementioned non-terminating series in the numerator and denominator [36]. However, it is impractical to apply a similar technique to the effective two-body terms, as it becomes unmanageable to compute contributions from the two-body components of the SMS operator using a diagrammatic procedure even in the RCCSD approximation. Thus, we estimate these contributions by selecting only the important diagrams representing the two-body components of the SMS operator, based on the knowledge gained from our earlier studies (see the discussions in Ref. [35]). This may lead to large errors in the results. The third notable drawback of the EVE approach is that it does not satisfy the H-F theorem [14]. This can be understood from the simple argument of Thouless [40] that the form of Eq. (14) does not follow the energy-evaluating expression given by Eq. (11).
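The contrast between the FF derivative of Eq. (13) and the expectation value of Eq. (14) can be made concrete on a toy two-level Hamiltonian; the sketch below uses invented matrices purely for illustration. With exact diagonalization the two routes coincide, as the H-F theorem guarantees; the discrepancies discussed above arise only for truncated, non-variational many-body expansions.

```python
import numpy as np

# Toy illustration: FF route of Eq. (13) versus EVE route of Eq. (14)
# on an invented two-level model H(lam) = H0 + lam * O.
H0 = np.array([[0.0, 0.2], [0.2, 1.0]])
O  = np.array([[0.5, 0.1], [0.1, -0.3]])

def ground_energy(lam):
    # lowest eigenvalue of the perturbed Hamiltonian
    return np.linalg.eigh(H0 + lam * O)[0][0]

lam = 1e-6                                   # same magnitude as used in the text
ff = (ground_energy(lam) - ground_energy(-lam)) / (2 * lam)   # finite field

psi0 = np.linalg.eigh(H0)[1][:, 0]           # exact ground state of H0
eve = psi0 @ O @ psi0                        # expectation value

print(ff, eve)  # agree here, since the H-F theorem holds for exact eigenstates;
# for truncated, non-variational wave functions the two routes differ.
```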
### 3.4 The analytic-response approach

The aforementioned problems of (i) unwanted contributions from ${\cal O}(\lambda_{v}^{O})^{2}$ in the FF approach, (ii) the appearance of non-terminating series in the EVE approach, (iii) the analysability of the roles of various physical effects in the determination of properties, and (iv) satisfying the H-F theorem in the determination of the IS constants using the RCC theory, can all be circumvented by adopting the analytic-response (AR) procedure suggested in Ref. [18]. The uniqueness of this approach is that it combines features of both the FF and EVE procedures: Eq. (13) is obtained directly by perturbing the RCC operators due to $O$ as

$\displaystyle T=T^{(0)}+\lambda_{v}^{O}T^{(1)}+{\cal O}(\lambda_{v}^{O})^{2},$ (16)

and

$\displaystyle S_{v}=S_{v}^{(0)}+\lambda_{v}^{O}S_{v}^{(1)}+{\cal O}(\lambda_{v}^{O})^{2},$ (17)

where $T$ and $S_{v}$ are the RCC operators for the total Hamiltonian, $H=H^{a}+\lambda_{v}^{O}O$, and the superscripts $(0)$ and $(1)$ indicate the unperturbed and the first-order perturbed corrections due to $O$, respectively. Substituting the above expanded form of the operators into Eqs. (9) and (10), and then equating the zeroth-order and first-order terms in $\lambda_{v}^{O}$, gives the equations for the unperturbed and perturbed RCC operators, respectively. Similarly, the first-order term from the expansion of Eq. (11) corresponds to the expectation value of the operator $O$. Thus, using the normal-ordered form of the operators, we get

$\displaystyle\langle\Phi_{0}^{L}|\bar{H}^{a}_{N}T^{(1)}|\Phi_{0}\rangle=-\langle\Phi_{0}^{L}|\bar{O}_{N}|\Phi_{0}\rangle,$ (18)

$\displaystyle\langle\Phi_{v}^{L}|(\bar{H}^{a}_{N}-\Delta E_{v}^{(0)})S_{v}^{(1)}|\Phi_{v}\rangle=\Delta E_{v}^{(1)}\langle\Phi_{v}^{L}|S_{v}^{(0)}|\Phi_{v}\rangle-\langle\Phi_{v}^{L}|(\bar{H}^{a}_{N}T^{(1)}+\bar{O}_{N})\{1+S_{v}^{(0)}\}|\Phi_{v}\rangle,$ (19)

and

$\displaystyle\Delta E_{v}^{(1)}=\langle\Phi_{v}|\bar{H}_{N}S_{v}^{(1)}+(\bar{H}_{N}T^{(1)}+\bar{O}_{N})\{1+S_{v}^{(0)}\}|\Phi_{v}\rangle.$ (20)

Here, $\bar{O}_{N}=(O_{N}e^{T^{(0)}})_{c}$, and the superscripts $(0)$ and $(1)$ on the energies indicate the zeroth- and first-order contributions, respectively. The AR equations have the advantages mentioned above. It can be noted that the lowest-order contributions (the DHF results) in the EVE and AR approaches are the same, while they differ in the FF procedure. For the evaluation of the SMS constants, the above equations are modified appropriately, owing to the two-body nature of the SMS operator, as

$\displaystyle\langle\Phi_{0}^{L}|\bar{H}^{a}_{N}T^{(1)}|\Phi_{0}\rangle=-\langle\Phi_{0}^{L}|\bar{O}_{N}^{1}+\bar{O}_{N}^{2}|\Phi_{0}\rangle,$ (21)

$\displaystyle\langle\Phi_{v}^{L}|(\bar{H}^{a}_{N}-\Delta E_{v}^{(0)})S_{v}^{(1)}|\Phi_{v}\rangle=\Delta E_{v}^{(1)}\langle\Phi_{v}^{L}|S_{v}^{(0)}|\Phi_{v}\rangle-\langle\Phi_{v}^{L}|(\bar{H}^{a}_{N}T^{(1)}+\bar{O}_{N}^{1}+\bar{O}_{N}^{2})\{1+S_{v}^{(0)}\}|\Phi_{v}\rangle,$ (22)

and

$\displaystyle\Delta E_{v}^{(1)}=\langle\Phi_{v}|\bar{H}_{N}S_{v}^{(1)}+(\bar{H}_{N}T^{(1)}+\bar{O}_{N}^{1}+\bar{O}_{N}^{2})\{1+S_{v}^{(0)}\}|\Phi_{v}\rangle.$ (23)
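The logic of Eqs. (16)-(20), in which perturbed amplitudes are obtained from linear equations driven by the property operator and the energy derivative is then assembled from them, can be mimicked on the same invented two-level model as above. This is a hedged sketch of the response idea, not of the actual RCC implementation.

```python
import numpy as np

# Response analogue of Eqs. (16)-(20): expand E(lam) for H(lam) = H0 + lam*O
# by solving a linear equation for the first-order state instead of
# differentiating numerically.
H0 = np.array([[0.0, 0.2], [0.2, 1.0]])
O  = np.array([[0.5, 0.1], [0.1, -0.3]])

E, V = np.linalg.eigh(H0)
e0, psi0 = E[0], V[:, 0]

E1 = psi0 @ O @ psi0                   # first-order energy (analogue of Eq. (20))

# (H0 - e0) psi1 = -(O - E1) psi0, solved in the space orthogonal to psi0
P = np.eye(2) - np.outer(psi0, psi0)   # projector off the ground state
A = P @ (H0 - e0 * np.eye(2)) @ P
psi1 = np.linalg.lstsq(A, -P @ (O @ psi0), rcond=None)[0]

lam = 0.01
E_pert = e0 + lam * E1 + lam**2 * (psi0 @ O @ psi1)   # second order via psi1
E_exact = np.linalg.eigh(H0 + lam * O)[0][0]
print(E_pert, E_exact)                 # agree to O(lam^3)
```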
The AR approach also involves a slight computational overhead compared with the FF and EVE approaches, as it requires storing matrix elements of the additional one-body and two-body operators beyond those of the atomic Hamiltonian. Ideally, if the results from all three approaches (FF, EVE and AR) agree with each other, then they can be regarded as very reliable. However, it is difficult to achieve good agreement between the results of these procedures in heavy atomic systems using approximated many-body methods, and because of the large numerical uncertainties associated with the implementation of the EVE and FF approaches. Nonetheless, the results obtained using the AR approach at a given level of approximation in the many-body theory should be treated as the more valid ones, owing to the aforementioned merits of this procedure.

Table 1: Comparison of the FS, NMS and SMS factors of the six states in indium from the FF, EVE and AR approaches obtained using the RCCSD method.

Method | 5P1/2 | 5P3/2 | 6S1/2 | 7S1/2 | 8S1/2 | 9S1/2
---|---|---|---|---|---|---
$F$ (GHz/fm2) | | | | | |
FF | 1.544 | 1.491 | -0.437 | -0.155 | -0.069 | -0.033
EVE | 1.275 | 1.299 | -0.408 | -0.135 | -0.061 | -0.033
AR | 1.435(6) | 1.442(6) | -0.383(1) | -0.1281(5) | -0.0559(25) | -0.0307(5)
$K_{\text{NMS}}$ (GHz u) | | | | | |
FF | 749 | 711 | 364 | 170 | 98 | 63
EVE | 1340 | 375 | 458 | 201 | 113 | 71
AR | 774(41) | 734(37) | 340(5) | 163(2) | 96(1) | 61.7(5)
Experiment† | 768 | 731 | 367 | 171 | 99 | 65
$K_{\text{SMS}}$ (GHz u) | | | | | |
FF | -470 | -403 | 119 | 38 | 17 | 9
EVE | -1048 | -899 | 136 | 42 | 18 | 10
AR | -638(71) | -533(69) | 94(26) | 29(8) | 13(4) | 8.6(5)
Experiment∗ | -536(122) | -507(111) | 169(51) | 55(42) | 24(80) | -13(66)
$\text{LIS}^{113,115}$ (MHz) | | | | | |
Experiment | 277(10) | 272(6) [26] | 17(6) | 12(6) | 9(12) | 2(10)

† Level energies from [41] were used. ∗ To determine $K_{\text{SMS}}$ from Eq. (1), the measured differential ISs, $\delta E^{113,115}$, were combined with the FS factors from the AR approach and $\delta\left\langle r^{2}\right\rangle_{\mu}^{113,115}$ = 0.157(11) fm2 [42].

## 4 Isotope shift measurements

The results of the calculations have been combined with complementary measurements to perform a comprehensive theoretical and experimental study of the FS and SMS constants of the indium atom. Further, they are used to provide accurate nuclear charge radii of 104-127In. As indium has only two naturally occurring isotopes (113,115In), the exotic isotopes were produced at the on-line isotope-separator facility ISOLDE at CERN. To produce the neutron-rich indium isotopes, 115-127In, a beam of 1.4-GeV protons impinged onto the neutron converter of a thick UCx target. The converter suppressed nearby caesium mass contamination and increased the utilizable neutron-rich indium yields [43]. The neutron-deficient indium isotopes, 104-115In, were produced by impinging the protons directly onto a thick LaC2 target [44]. The indium isotopes diffused through the target material and their ionization was enhanced by the use of the resonant ionization ion source RILIS [45]. The indium ions were then accelerated to 40 keV, mass separated, and injected into a gas-filled linear Paul trap (ISCOOL) [46, 47]. Ion bunches of 2 $\mu$s temporal width were then re-accelerated to 40 keV and deflected into the CRIS beamline [48, 49].
The indium ions were then neutralised in a sodium-filled vapor cell, with an efficiency of up to 60% and predicted relative atomic populations of 57% and 37%, respectively, for the 5p 2P3/2 metastable state and the 5p 2P1/2 ground state [50]. The remaining ion fraction was removed by electrostatic deflectors, and the neutralized atom bunch was collinearly overlapped with two pulsed lasers, one for excitation and another for non-resonant ionization. The atoms were resonantly excited using two different UV transitions in separate measurements: the first using 246.8-nm laser light for the 5p 2P3/2 $\rightarrow$ 9s 2S1/2 atomic transition, and the second using 246.0-nm laser light for the 5p 2P1/2 $\rightarrow$ 8s 2S1/2 atomic transition. The resonant laser light was produced by frequency tripling the light from an injection-locked Ti:Sapphire laser system [51]. This laser was seeded using a narrow-band M Squared SolsTiS continuous-wave Ti:Sapphire laser and pumped using a LEE LDP-100MQ Nd:YAG laser, producing pulsed narrow-band 740(738)-nm laser light at 1 kHz. This light was then frequency tripled to 246.8(246.0)-nm light by the use of two non-linear BiB3O6 crystals [52]; 3 mW of laser light was used to saturate both transitions. The excited atoms were then ionized by a non-resonant 532-nm step. The frequency of the resonant first step was scanned and the resulting ions were deflected onto a detector, producing the hyperfine spectra from which the ISs were obtained. The determined IS values are displayed in Table 2.

## 5 Comparison with experiment and evaluation of nuclear mean-squared charge radii

### 5.1 King plot analysis

Since the changes in the mean-square charge radii are independent of the atomic transition, the nuclear dependence can be removed by comparing the ISs of two atomic transitions. A combination of the ISs using Eq. (1), for two atomic transitions, $i$ and $j$, can be expressed as

$\mu_{A,A^{\prime}}\delta E^{A,A^{\prime}}_{j}=\frac{F_{j}}{F_{i}}\mu_{A,A^{\prime}}\delta E^{A,A^{\prime}}_{i}+M_{j}-\frac{F_{j}}{F_{i}}M_{i},$ (24)

with $\mu_{A,A^{\prime}}=\frac{m_{A}m_{A^{\prime}}}{m_{A}-m_{A^{\prime}}}$. Hence, in a 'King' plot [28] of $\mu_{A,A^{\prime}}\delta E^{A,A^{\prime}}_{j}$ versus $\mu_{A,A^{\prime}}\delta E^{A,A^{\prime}}_{i}$, the gradient provides the FS ratio, $F_{j}/F_{i}$, between the two transitions, and the MS differences can be extracted from its intercept.

Figure 1: King plots of the 246.0-nm and 246.8-nm and of the 410.2-nm and 451.1-nm transitions. Inset: the ratio of isomer-shift values allowed a mass-shift-independent determination of $\frac{F_{246.0}}{F_{246.8}}=1.04(9)$. Theoretical values are indicated by $\frac{F_{246.0}}{F_{246.8}}_{AR}$. The shaded area indicates the uncertainty of the fits. Error bars include statistical and systematic uncertainties (indicated by the black part of the error bar).

The King plot obtained for the transitions measured in this work (246.8 nm (5p 2P3/2 $\rightarrow$ 9s 2S1/2) and 246.0 nm (5p 2P1/2 $\rightarrow$ 8s 2S1/2)), together with previous measurements of the 410.2-nm (5p 2P1/2 $\rightarrow$ 6s 2S1/2) and 451.1-nm (5p 2P3/2 $\rightarrow$ 6s 2S1/2) transitions [53], is shown in Fig. 1. The calculations and experimental data agree within 1$\sigma$, using either the AR or the FF approach.

### 5.2 Isomer shifts

The availability of several isomeric nuclear states in the indium isotope chain allows a further test of the theoretical calculations. For isomeric states, the factor $\frac{m_{A}-m_{A^{\prime}}}{m_{A}m_{A^{\prime}}}$ tends to 0, and Eq.
(24) can be approximated as $\frac{\delta E^{m}_{i}}{\delta E^{m}_{j}}=\frac{F_{i}}{F_{j}}$. This approximation corresponds to an uncertainty of up to 0.02 MHz for the excitation energies of the isomers in this work (<400 keV). Therefore, isomer-shift measurements provide a test of the FS factors that is less sensitive to the systematic uncertainties present in the King plot analysis. Previous measurements have not reported values for isomer shifts in the indium atom, as they are relatively small and require particularly high precision [53]. The new measurements reported here allowed the extraction of isomer shifts for the 246.8-nm (5p 2P3/2 $\rightarrow$ 9s 2S1/2) and 246.0-nm (5p 2P1/2 $\rightarrow$ 8s 2S1/2) transitions. The FS ratios extracted from the measured isomer shifts are shown in the inset of Fig. 1. This ratio agrees with the value obtained from the King plots, and is within 1$\sigma$ of the presented theoretical calculations.

### 5.3 Experimental level specific mass shift

Calculations of the SMS are notably challenging. To the authors' knowledge, they have not yet been reported for the indium atom. Moreover, a reliable experimental test is also difficult, as optical measurements provide only the difference of the SMS between two states, and their individual contributions cannot be separated. Yet calculations of the atomic FS and MS factors are typically performed for individual atomic energy levels, with the difference between two states used to determine the atomic factors for a transition used in an IS measurement. In this work, the individual atomic level-IS (LIS) values were determined by combining the IS measurements with measurements of transitions to high-lying atomic states in indium [26]. As the contribution of an atomic state to the IS of a transition decreases with the principal quantum number of the state, in measurements to high-lying Rydberg states the IS contribution from the upper state becomes negligible [54]. This allowed the LIS to be determined for each state, and then the specific-mass-shift contribution to an individual state, $l$, could be evaluated for comparison with the calculations. The new measurements of this work provide access to the 8S1/2 and 9S1/2 states. For example, using the 5p 2P3/2 $\rightarrow$ 5s2 np 2P1/2,3/2 transition (27$\leq$n$\leq$35) ISs measured for 113,115In [26], a LIS of the 5p 2P3/2 state of $\text{LIS}_{\text{P3/2}}^{113,115}$ = 272(6) MHz was reported. Combining this with the IS measured for the 5p 2P3/2 $\rightarrow$ 6s 2S1/2 transition, 255.4(5) MHz [55], gives a LIS of $\text{LIS}_{\text{6s}}^{113,115}$ = 17(6) MHz. This LIS value can in turn be used to determine the LIS of the 5p 2P1/2 state from the 5p 2P1/2 $\rightarrow$ 6s 2S1/2 transition [39], giving $\text{LIS}_{\text{5P1/2}}^{113,115}$ = 277(10) MHz. All of the LIS values determined from the new measurements of this work and from the literature (6S1/2 and 7S1/2 states [39, 55, 56]) are presented in Table 1.
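The chaining of LIS values just described is simple arithmetic with uncertainties combined in quadrature; the short sketch below reproduces the first step, assuming uncorrelated uncertainties (an assumption made here for illustration) and following the sign convention used in the text.

```python
from math import sqrt

# Error-propagated chaining of level isotope shifts (LIS),
# assuming uncorrelated uncertainties combined in quadrature.
lis_p32, u_p32 = 272.0, 6.0     # LIS of 5p 2P3/2 from Rydberg data [26], MHz
is_t, u_t = 255.4, 0.5          # IS of the 5p 2P3/2 -> 6s 2S1/2 transition [55], MHz

# Sign convention as in the text: LIS(6s) = LIS(5p 2P3/2) - IS(transition)
lis_6s = lis_p32 - is_t
u_6s = sqrt(u_p32**2 + u_t**2)
print(f"LIS_6s = {lis_6s:.0f}({u_6s:.0f}) MHz")   # -> 17(6) MHz, as quoted above
```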
The LIS value, $\text{LIS}^{113,115}_{l}$, of a state, $l$, is the sum of the field-shift (volume isotope shift) and mass-shift contributions, given by

$\text{LIS}^{113,115}_{l}=F_{l}\delta\langle r^{2}\rangle^{113,115}+(K_{l}^{\text{NMS}}+K_{l}^{\text{SMS}})\frac{m_{113}-m_{115}}{m_{113}m_{115}}.$ (25)

Using the calculated state FS factors and relativistic normal-mass-shift factors, $K_{l}^{\text{NMS}}$, given in Table 1, and the literature value of $\delta\left\langle r^{2}\right\rangle_{\mu}^{113,115}$ = 0.157(11) fm2 [42], the SMS factors of the individual states, $K_{\text{SMS}}^{\text{Exp}}$, could be evaluated. The experimental results and theoretical calculations are shown in Table 1. The new calculations presented here, adopting the AR approach, agree within 1$\sigma$ with the experimental values, as do the values from the FF approach. In contrast, the EVE results show large discrepancies.

### 5.4 Comparison with nuclear mean-squared charge radii

Combining the IS measurements and the calculated FS and MS constants in Eq. (1), a value of $\delta\left\langle r^{2}\right\rangle^{113,115}$ = 0.163(4) fm2 is obtained for the mean-square charge radius difference between the stable isotopes 113,115In, in good agreement with the muonic-atom result of $\delta\left\langle r^{2}\right\rangle_{\mu}^{113,115}$ = 0.157(11) fm2 [42]. The nuclear charge radii of the exotic indium isotopes were extracted from the measured ISs and the FS and SMS constants calculated with the AR approach. The extracted $\delta\left\langle r^{2}\right\rangle^{115,A}$ values are given in Table 2 and are plotted in Fig. 2. The reported uncertainties of the atomic factors calculated using the AR approach were evaluated from a perturbative estimation of the neglected triples contribution. The atomic masses used were taken from [57]. The values obtained from the FF and EVE approaches are also shown in Fig. 2 for comparison.

Table 2: ISs measured with the 246.0-nm (5p 2P1/2 $\rightarrow$ 8s 2S1/2) and 246.8-nm (5p 2P3/2 $\rightarrow$ 9s 2S1/2) transitions, and $\delta\left\langle r^{2}\right\rangle^{115,A}$ values extracted using the AR approach.
A | I | $IS^{115,A}$ (MHz) | | $\delta\left\langle r^{2}\right\rangle^{115,A}$ (fm2) |
| | 246.0 nm | 246.8 nm | 246.0 nm | 246.8 nm
---|---|---|---|---|---
104 | ($5^{+}$) | -1805(10) | -1753(20) | -1.19(5) | -1.11(5)
105 | $\frac{9}{2}^{+}$ | -1510(10) | -1540(20) | -1.00(5) | -0.97(5)
106 | $7^{+}$ | -1381(10) | -1362(20) | -0.91(4) | -0.86(4)
107 | $\frac{9}{2}^{+}$ | -1166(10) | -1178(20) | -0.77(4) | -0.74(4)
108 | $2^{+}$ | -1033(10) | -978(20) | -0.68(3) | -0.61(3)
108 | $7^{+}$ | -1046(10) | -1011(20) | -0.69(3) | -0.64(3)
109 | $\frac{9}{2}^{+}$ | -835(10) | -855(20) | -0.55(3) | -0.54(3)
110 | $7^{+}$ | | -729(20) | | -0.46(2)
111 | $\frac{9}{2}^{+}$ | -555(30) | -542(20) | -0.37(3) | -0.34(2)
113 | $\frac{9}{2}^{+}$ | -265(5) | -278(5) | -0.175(9) | -0.175(9)
114 | $5^{+}$ | -175(5) | -171(10) | -0.116(5) | -0.109(8)
115 | $\frac{9}{2}^{+}$ | 0 | 0 | 0 | 0
115 | $\frac{1}{2}^{-}$ | 26(8) | 33(5) | 0.018(5) | 0.022(3)
116 | $5^{+}$ | 89(5) | 99(20) | 0.058(5) | 0.06(1)
116 | $8^{-}$ | 86(8) | 99(2) | 0.056(7) | 0.061(4)
117 | $\frac{9}{2}^{+}$ | 243(5) | 265(3) | 0.160(9) | 0.167(8)
117 | $\frac{1}{2}^{-}$ | 261(6) | 282(4) | 0.173(9) | 0.179(8)
118 | $5^{+}$ | 330(5) | 329(2) | 0.22(1) | 0.20(1)
118 | $8^{-}$ | 324(5) | 324(3) | 0.21(1) | 0.20(1)
119 | $\frac{9}{2}^{+}$ | | 475(3) | | 0.30(2)
119 | $\frac{1}{2}^{-}$ | | 488(4) | | 0.30(2)
120 | $(5)^{+}$ | 531(5) | 556(5) | 0.35(2) | 0.35(2)
120 | $(8^{-})$ | 500(5) | 530(2) | 0.33(2) | 0.33(2)
121 | $\frac{9}{2}^{+}$ | | 654(2) | | 0.41(2)
121 | $\frac{1}{2}^{-}$ | | 661(3) | | 0.41(2)
122 | $5^{+}$ | 704(5) | 674(5) | 0.46(3) | 0.41(3)
122 | $8^{-}$ | 687(5) | 658(8) | 0.45(3) | 0.40(3)
123 | $\frac{9}{2}^{+}$ | | 756(3) | | 0.46(3)
123 | $\frac{1}{2}^{-}$ | | 751(2) | | 0.46(3)
124 | $(3)^{+}$ | | 809(10) | | 0.49(3)
124 | $(8^{-})$ | | 810(3) | | 0.49(3)
125 | $\frac{9}{2}^{+}$ | | 941(4) | | 0.58(4)
125 | $\frac{1}{2}^{-}$ | | 926(5) | | 0.57(4)
126 | $3^{+}$ | | 1026(3) | | 0.63(4)
126 | $(8^{-})$ | | 1019(5) | | 0.62(4)
127 | $\frac{9}{2}^{+}$ | 1115(5) | 1129(4) | 0.73(5) | 0.69(4)

Figure 2: a) $\delta\left\langle r^{2}\right\rangle^{115,A}$ values for the 104-127In isotopes extracted from the IS measurements of four optical transitions using the calculated FS and MS factors. The spread in values from each approach is indicated by the colored areas. The shaded area 'Literature' indicates the uncertainty from the literature FS and MS factors [42, 53]. b) $\sqrt{\left\langle r^{2}\right\rangle^{A}}$ compared to the Sn ($Z$ = 50) [58] and Cd ($Z$ = 48) [9] isotopes.

Remarkably, the extracted $\delta\left\langle r^{2}\right\rangle$ values agree for all four optical transitions, which gives confidence in the accuracy of the calculations. The absolute charge radii, $\sqrt{\left\langle r^{2}\right\rangle^{A}}$, obtained using the reference isotope 115In (4.615 fm [42]), are compared to those of the neighbouring isotopic chains Sn ($Z$ = 50) [58] and Cd ($Z$ = 48) [9] in Fig. 2. The effect of the inaccurate calculation of the MS factors in the EVE approach is seen to be significant, causing a large discrepancy between the values extracted from the four transitions. Previously, literature values [53, 42] were normalized to the neighboring tin and cadmium isotopes and to the $\delta\left\langle r^{2}\right\rangle_{\mu}^{113,115}$ value. This introduces large uncertainties (yellow area in Fig. 2) and prevents an independent comparison of the nuclear charge radii with neighbouring elements.
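The extraction that produces Table 2 can be made explicit by inverting Eq. (1), as in the sketch below for the stable pair, reusing the AR level factors of Table 1. The transition-factor convention (upper level minus lower level), the approximate masses, and the pair ordering $\delta X = X_{A} - X_{A^{\prime}}$ are assumptions made here for illustration; with them the magnitude of the 113In entry of Table 2 is recovered.

```python
# Invert Eq. (1): d<r^2> = (dE - K_MS * mu) / F, mu = (m_A - m_A')/(m_A * m_A').
# AR level factors from Table 1; transition factor = upper minus lower level
# (assumed convention); masses in u are approximate.
F = (-0.0307 - 1.442) * 1e3                  # 246.8-nm transition, MHz/fm^2
K = ((61.7 + 8.6) - (734.0 - 533.0)) * 1e3   # NMS + SMS difference, MHz*u

m113, m115 = 112.904, 114.904
dE = 278.0      # IS for the (113, 115) ordering; Table 2 lists -278 with the
                # opposite ordering (115, 113)
mu = (m113 - m115) / (m113 * m115)

dr2 = (dE - K * mu) / F
print(dr2)      # approx. -0.175 fm^2, i.e. <r^2>_113 < <r^2>_115 (cf. Table 2)
```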
Our theoretical calculations have therefore enabled the first independent comparison of absolute charge radii for an odd-proton system around the $Z$ = 50 nuclear closed shell.

## 6 Conclusion

In conclusion, we present in this work a new theoretical method to perform accurate calculations of the FS and MS constants in atomic systems. These constants are critical for separating electronic and nuclear structure effects in the interpretation of IS measurements for fundamental- and nuclear-physics research. This new theoretical method uses an analytic-energy-derivative approach in the RCC framework, and solves fundamental problems related to the evaluation of the operators that have been present in previous atomic-physics calculations. Precise IS measurements in the indium atom were used as an exhaustive experimental test of these theoretical developments. Good agreement was found with all available experimental data. The existence of several isomers and the access to high-lying states in the indium atom allow the separation of the FS from the MS, providing a stringent test of the calculations. Our calculations of the atomic physics factors are essential to extract nuclear charge-radii values from isotope-shift measurements of exotic indium isotopes [59]. These results can be extended to different elements across the nuclear chart. This is especially important for odd-proton nuclei, which rely on atomic theory to extract charge radii from laser-spectroscopy measurements. Our theoretical developments will help to provide deeper insight into the evolution of the nuclear charge radius for different numbers of protons and neutrons, which is of great importance for our understanding of nuclear structure [9, 60, 1, 61] and nuclear matter [20].

## 7 Acknowledgments

This work was supported by ERC Consolidator Grant No. 648381 (FNPMLS); STFC grants ST/L005794/1, ST/L005786/1, ST/P004423/1 and Ernest Rutherford Grant No. ST/L002868/1; GOA 15/010 from KU Leuven, BriX Research Program No. P7/12; the FWO-Vlaanderen (Belgium); the European Union's Grant Agreement 654002 (ENSAR2); the National Key R&D Program of China (Contract No. 2018YFA0404403) and the National Natural Science Foundation of China (No. 11875073); and we acknowledge the financial aid of the Ed Schneiderman Fund at New York University. B. K. S. acknowledges use of the Vikram-100 HPC cluster of the Physical Research Laboratory, Ahmedabad. We would also like to thank the ISOLDE technical group for their support and assistance, the University of Jyväskylä for the use of the injection-locked cavity, and the Physikalisch-Technische Bundesanstalt (PTB) for the use of their voltage divider.

## References

* [1] R. F. Garcia Ruiz, M. L. Bissell, K. Blaum, A. Ekström, N. Frömmgen, G. Hagen, M. Hammen, K. Hebeler, J. D. Holt, G. R. Jansen, M. Kowalska, K. Kreim, W. Nazarewicz, R. Neugart, G. Neyens, W. Nörtershäuser, T. Papenbrock, J. Papuga, A. Schwenk, J. Simonis, K. A. Wendt, and D. T. Yordanov. Unexpectedly large charge radii of neutron-rich calcium isotopes. Nat. Phys., 12(6):594–598, jun 2016. * [2] Julian C Berengut, Dmitry Budker, Cédric Delaunay, Victor V Flambaum, Claudia Frugiuele, Elina Fuchs, Christophe Grojean, Roni Harnik, Roee Ozeri, Gilad Perez, and Yotam Soreq. Probing New Long-Range Interactions by Isotope Shift Spectroscopy. Phys. Rev. Lett., 120(9):091801, feb 2018. * [3] Ulrich D Jentschura and István Nándori. Atomic physics constraints on the $X$ boson. Phys. Rev. A, 97(4):042502, apr 2018.
* [4] Cédric Delaunay, Roee Ozeri, Gilad Perez, and Yotam Soreq. Probing atomic Higgs-like forces at the precision frontier. Phys. Rev. D, 96(9):093001, nov 2017. * [5] Cédric Delaunay, Claudia Frugiuele, Elina Fuchs, and Yotam Soreq. Probing new spin-independent interactions through precision spectroscopy in atoms with few electrons. Phys. Rev. D, 96(11):115002, dec 2017. * [6] Yevgeny V Stadnik. Probing Long-Range Neutrino-Mediated Forces with Atomic and Nuclear Spectroscopy. Phys. Rev. Lett., 120(22):223202, jun 2018. * [7] B. A. Marsh, T. Day Goodacre, S. Sels, Y. Tsunoda, B. Andel, A. N. Andreyev, N. A. Althubiti, D. Atanasov, A. E. Barzakh, J. Billowes, K. Blaum, T. E. Cocolios, J. G. Cubiss, J. Dobaczewski, G. J. Farooq-Smith, D. V. Fedorov, V. N. Fedosseev, K. T. Flanagan, L. P. Gaffney, L. Ghys, M. Huyse, S. Kreim, D. Lunney, K. M. Lynch, V. Manea, Y. Martinez Palenzuela, P. L. Molkanov, T. Otsuka, A. Pastore, M. Rosenbusch, R. E. Rossel, S. Rothe, L. Schweikhard, M. D. Seliverstov, P. Spagnoletti, C. Van Beveren, P. Van Duppen, M. Veinhard, E. Verstraelen, A. Welker, K. Wendt, F. Wienholtz, R. N. Wolf, A. Zadvornaya, and K. Zuber. Characterization of the shape-staggering effect in mercury nuclei. Nat. Phys., page 1, oct 2018. * [8] Florian Gebert, Yong Wan, Fabian Wolf, Christopher N. Angstmann, Julian C. Berengut, and Piet O. Schmidt. Precision isotope shift measurements in calcium ions using quantum logic detection schemes. Phys. Rev. Lett., 115:053003, Jul 2015. * [9] M. Hammen, W. Nörtershäuser, D. L. Balabanski, M. L. Bissell, K. Blaum, I. Budinčević, B. Cheal, K. T. Flanagan, N. Frömmgen, G. Georgiev, Ch. Geppert, M. Kowalska, K. Kreim, A. Krieger, W. Nazarewicz, R. Neugart, G. Neyens, J. Papuga, P.-G. Reinhard, M. M. Rajabali, S. Schmidt, and D. T. Yordanov. From calcium to cadmium: Testing the pairing functional through charge radii measurements of ${}^{100\text{$-$}130}\mathrm{Cd}$. Phys. Rev. Lett., 121:102501, Sep 2018. * [10] S. Raeder, D. Ackermann, H. Backe, R. Beerwerth, J. C. Berengut, M. Block, A. Borschevsky, B. Cheal, P. Chhetri, Ch. E. Düllmann, V. A. Dzuba, E. Eliav, J. Even, R. Ferrer, V. V. Flambaum, S. Fritzsche, F. Giacoppo, S. Götz, F. P. Heßberger, M. Huyse, U. Kaldor, O. Kaleja, J. Khuyagbaatar, P. Kunz, M. Laatiaoui, F. Lautenschläger, W. Lauth, A. K. Mistry, E. Minaya Ramirez, W. Nazarewicz, S. G. Porsev, M. S. Safronova, U. I. Safronova, B. Schuetrumpf, P. Van Duppen, T. Walther, C. Wraith, and A. Yakushev. Probing sizes and shapes of nobelium isotopes by laser spectroscopy. Phys. Rev. Lett., 120:232503, Jun 2018. * [11] M. S. Safronova, D. Budker, D. DeMille, Derek F. Jackson Kimball, A. Derevianko, and Charles W. Clark. Search for new physics with atoms and molecules. Rev. Mod. Phys., 90(2):025008, jun 2018. * [12] B. Cheal, T. E. Cocolios, and S. Fritzsche. Laser spectroscopy of radioactive isotopes: Role and limitations of accurate isotope-shift calculations. Phys. Rev. A, 86(4):042501, oct 2012. * [13] V. V. Flambaum, A. J. Geddes, and A. V. Viatkina. Isotope shift, nonlinearity of king plots, and the search for new particles. Phys. Rev. A, 97:032510, Mar 2018. * [14] R. F. Bishop. An overview of coupled cluster theory and its applications in physics. Theor. Chim. Acta, 80(2-3):95–148, 1991. * [15] B. K. Sahoo and B. P. Das. Relativistic Normal Coupled-Cluster Theory for Accurate Determination of Electric Dipole Moments of Atoms: First Application to the Hg 199 Atom. Physical Review Letters, 120(20):203001, may 2018. * [16] B. K. Sahoo. 
High-precision determination of Lorentz-symmetry-violating parameters in ${\mathrm{Ca}}^{+}$. Phys. Rev. A, 99(5):050501(R), May 2019. * [17] Igor Vasiliev, Serdar Ogut, and James R. Chelikowsky. Ab initio Calculations for the Polarizabilities of Small Semiconductor Clusters. Physical Review Letters, 78(25):4805–4808, jun 1997. * [18] Hendrik J. Monkhorst. Calculation of properties with the coupled-cluster method. International Journal of Quantum Chemistry, 12(S11):421–432, jun 1977. * [19] G. Fricke and K. Heilig. 19-K Potassium. In Nuclear Charge Radii, chapter 19-K Potassium. Springer-Verlag, Berlin/Heidelberg, 2004. * [20] B. Alex Brown. Mirror charge radii and the neutron equation of state. Phys. Rev. Lett., 119:122502, Sep 2017. * [21] Junjie Yang and J. Piekarewicz. Difference in proton radii of mirror nuclei as a possible surrogate for the neutron skin. Phys. Rev. C, 97:014314, Jan 2018. * [22] C Gorges, L V Rodríguez, D L Balabanski, M L Bissell, K Blaum, B Cheal, R F Garcia Ruiz, G Georgiev, W Gins, H Heylen, A Kanellakopoulos, S Kaufmann, M Kowalska, V Lagaki, S Lechner, B Maaß, S Malbrunot-Ettenauer, W Nazarewicz, R Neugart, G Neyens, W Nörtershäuser, P.-G Reinhard, S Sailer, R Sánchez, S Schmidt, L Wehner, C Wraith, L Xie, Z Y Xu, X F Yang, and D T Yordanov. Laser Spectroscopy of Neutron-Rich Tin Isotopes: A Discontinuity in Charge Radii across the N = 82 Shell Closure. Physical Review Letters, 122, 2019. * [23] M. Hammen, W. Nörtershäuser, D. L. Balabanski, M. L. Bissell, K. Blaum, I. Budinčević, B. Cheal, K. T. Flanagan, N. Frömmgen, G. Georgiev, Ch. Geppert, M. Kowalska, K. Kreim, A. Krieger, W. Nazarewicz, R. Neugart, G. Neyens, J. Papuga, P.-G. Reinhard, M. M. Rajabali, S. Schmidt, and D. T. Yordanov. From Calcium to Cadmium: Testing the Pairing Functional through Charge Radii Measurements of 100-130Cd. Phys. Rev. Lett., 121(10):102501, 2018. * [24] M. Rejmund et al. Electromagnetic properties of neutron-rich nuclei adjacent to the Z = 50 shell closure. Physics Letters B, 753:86–90, 2016. * [25] R. F. Garcia Ruiz et al. Laser spectroscopy of exotic indium (Z = 49) isotopes: approaching the N = 50 and N = 82 neutron numbers. CERN-INTC-2017-025, 2017. * [26] R Menges, G Huber, and G Ulm. High Resolution Spectroscopy of Rydberg States in Indium I. Technical report, 1985. * [27] C. J. Foot. Atomic Physics, volume 25. OUP Oxford, 2004. * [28] W. H. King. Isotope Shifts in Atomic Spectra, volume 11. Springer Science & Business Media, 2013. * [29] P. Campbell, I. D. Moore, and M. R. Pearson. Laser spectroscopy for nuclear structure physics. Progress in Particle and Nuclear Physics, 86:127–180, jan 2016. * [30] E. C. Seltzer. K X-Ray Isotope Shifts. Phys. Rev., 188(4):1916, dec 1969. * [31] R. Hofstadter, H. R. Fechter, and J. A. McIntyre. High-Energy Electron Scattering and Nuclear Structure Determinations. Phys. Rev., 92(4):978, Nov 1953. * [32] V M Shabaev and A N Artemyev. Relativistic nuclear recoil corrections to the energy levels of multicharged ions. J. Phys. B At. Mol. Opt. Phys., 27(7):1307–1314, apr 1994. * [33] V. Fock. Bemerkung zum Virialsatz. Z. Phys., 63(11):855, Nov 1930.
* [34] J G Cubiss, A E Barzakh, M D Seliverstov, A N Andreyev, B Andel, S Antalic, P Ascher, D Atanasov, D Beck, J Bieroń, K Blaum, Ch Borgmann, M Breitenfeldt, B A Marsh, S Mitsuoka, P L Molkanov, Y Nagame, D Neidherr, K Nishio, S Ota, D Pauwels, L Popescu, D Radulov, E Rapisarda, J P Revill, M Rosenbusch, R E Rossel, S Rothe, K Sandhu, L Schweikhard, S Sels, V L Truesdale, C Van Beveren, P Van Den Bergh, Y Wakabayashi, P Van Duppen, K D A Wendt, F Wienholtz, B W Whitmore, G L Wilson, R N Wolf, and K Zuber. Charge radii and electromagnetic moments of 195-211At. Phys. Rev. C, 97:21, 2018. * [35] B K Sahoo. Accurate estimate of $\alpha$ variation and isotope shift parameters in Na and Mg+. J. Phys. B At. Mol. Opt. Phys., 43(23):231001, dec 2010. * [36] B. K. Sahoo, D. K. Nandy, B. P. Das, and Y. Sakemi. Correlation trends in the hyperfine structures of ${}^{210,212}\mathrm{Fr}$. Phys. Rev. A, 91:042507, Apr 2015. * [37] H. Hellmann. A combined approximation method for the energy calculation in the many-electron problem. Acta Physicochim. U.R.S.S., 1:913, 1935. * [38] R. P. Feynman. Forces in Molecules. Phys. Rev., 56(4):340–343, aug 1939. * [39] R. F. Garcia Ruiz, A. R. Vernon, C. L. Binnersley, B. K. Sahoo, M. Bissell, J. Billowes, T. E. Cocolios, W. Gins, R. P. de Groote, K. T. Flanagan, A. Koszorus, K. M. Lynch, G. Neyens, C. M. Ricketts, K. D. A. Wendt, S. G. Wilkins, and X. F. Yang. High-Precision Multiphoton Ionization of Accelerated Laser-Ablated Species. Phys. Rev. X, 8(4):041005, oct 2018. * [40] D. J. Thouless. The Quantum Mechanics of Many-body Systems. Dover Publications, Inc., New York, 1972. * [41] A. Kramida, Y. Ralchenko, J. Reader, and NIST ASD Team. NIST Atomic Spectra Database, 2014. * [42] G. Fricke and K. Heilig. 49-In Indium. In Nuclear Charge Radii, chapter 49-In Indium. Springer-Verlag, Berlin/Heidelberg, 2004. * [43] I. Dillmann, M. Hannawald, U. Köster, V. N. Fedoseyev, A. Wöhr, B. Pfeiffer, D. Fedorov, J. Shergur, L. Weissman, W. B. Walters, and K. L. Kratz. Selective laser ionization of N ≥ 82 indium isotopes: The new r-process nuclide 135In. Eur. Phys. J. A, 13(3):281–284, 2002. * [44] U. Köster. Intense radioactive-ion beams produced with the ISOL method. Eur. Phys. J. A, 15(1-2):255–263, 2002. * [45] S Rothe, B A Marsh, C Mattolat, V N Fedosseev, and K Wendt. A complementary laser system for ISOLDE RILIS. J. Phys. Conf. Ser., 312(5):052020, sep 2011. * [46] E. Mané, J. Billowes, K. Blaum, P. Campbell, B. Cheal, P. Delahaye, K. T. Flanagan, D. H. Forest, H. Franberg, C. Geppert, T. Giles, A. Jokinen, M. Kowalska, R. Neugart, G. Neyens, W. Nörtershäuser, I. Podadera, G. Tungate, P. Vingerhoets, and D. T. Yordanov. An ion cooler-buncher for high-sensitivity collinear laser spectroscopy at ISOLDE. Eur. Phys. J. A, 42(3):503–507, 2009. * [47] H. Frånberg, P. Delahaye, J. Billowes, K. Blaum, R. Catherall, F. Duval, O. Gianfrancesco, T. Giles, A. Jokinen, M. Lindroos, D. Lunney, E. Mane, and I. Podadera. Off-line commissioning of the ISOLDE cooler. Nucl. Instruments Methods Phys. Res. Sect. B Beam Interact. with Mater. Atoms, 266(19-20):4502–4504, 2008. * [48] K. T. Flanagan, K. M. Lynch, J. Billowes, M. L. Bissell, I. Budinčević, T. E. Cocolios, R. P. de Groote, S. De Schepper, V. N. Fedosseev, S. Franchoo, R. F. Garcia Ruiz, H. Heylen, B. A. Marsh, G. Neyens, T. J. Procter, R. E. Rossel, S. Rothe, I. Strashnov, H. H. Stroke, and K. D. A. Wendt. Collinear resonance ionization spectroscopy of neutron-deficient francium isotopes. Phys. Rev. Lett., 111:212501, Nov 2013.
* [49] T. E. Cocolios, R. P. de Groote, J. Billowes, M. L. Bissell, I. Budinčević, T. Day Goodacre, G. J. Farooq-Smith, V. N. Fedosseev, K. T. Flanagan, S. Franchoo, R. F. Garcia Ruiz, W. Gins, H. Heylen, T. Kron, R. Li, K. M. Lynch, B. A. Marsh, G. Neyens, R. E. Rossel, S. Rothe, A. J. Smith, H. H. Stroke, K. D. A. Wendt, S. G. Wilkins, and X. Yang. High-resolution laser spectroscopy with the Collinear Resonance Ionisation Spectroscopy (CRIS) experiment at CERN-ISOLDE. Nucl. Instruments Methods Phys. Res. Sect. B Beam Interact. with Mater. Atoms, 376:284–287, 2016. * [50] A. R. Vernon, J. Billowes, C. L. Binnersley, M. L. Bissell, T. E. Cocolios, G. J. Farooq-Smith, K. T. Flanagan, R. F. Garcia Ruiz, W. Gins, R. P. de Groote, Á. Koszorús, K. M. Lynch, G. Neyens, C. M. Ricketts, K. D. A. Wendt, S. G. Wilkins, and X. F. Yang. Simulation of the relative atomic populations of elements 1 ≤ Z ≤ 89 following charge exchange tested with collinear resonance ionization spectroscopy of indium. Spectrochimica Acta Part B: Atomic Spectroscopy, 153:61–83, mar 2019. * [51] V Sonnenschein, I D Moore, S Raeder, M Reponen, H Tomita, and K Wendt. Characterization of a pulsed injection-locked Ti:sapphire laser and its application to high resolution resonance ionization spectroscopy of copper. Laser Phys., 27(8):085701, aug 2017. * [52] M. Bass, P. A. Franken, A. E. Hill, C. W. Peters, and G. Weinreich. Optical Mixing. Phys. Rev. Lett., 8(1):18, jan 1962. * [53] J. Eberz, U. Dinger, G. Huber, H. Lochmann, R. Menges, R. Neugart, R. Kirchner, O. Klepper, T. Kühl, D. Marx, G. Ulm, and K. Wendt. Spins, moments and mean square charge radii of 104-127In determined by laser spectroscopy. Nucl. Phys. A, 464(1):9–28, mar 1987. * [54] K Niemax and L R Pendrill. Isotope shifts of individual nS and nD levels of atomic potassium. Journal of Physics B: Atomic and Molecular Physics, 13(15):L461–L465, aug 1980. * [55] G J Zaal, W Hogervorst, E R Eliel, J Bouma, and J Blok. A high resolution study of the transition $\lambda$ = 451.1 nm in In I using a CW dye laser. J. Phys. B At. Mol. Phys., 11(16):2821–2823, aug 1978. * [56] E. R. Eliel, W. Hogervorst, K. A. H. van Leeuwen, and B. H. Post. A frequency-doubled frequency-stabilized cw ring dye laser for spectroscopy: A study of the $\lambda$ = 293.3 nm transition in In I. Opt. Commun., 36(5):366–368, mar 1981. * [57] Meng Wang, G. Audi, F. G. Kondev, W. J. Huang, S. Naimi, and Xing Xu. The AME2016 atomic mass evaluation (II). Tables, graphs and references. Chinese Physics C, 41(3):030003, mar 2017. * [58] Fred L. Le Blanc, L. Cabaret, E. Cottereau, J. E. Crawford, S. Essabaa, J. Genevey, R. Horn, G. Huber, J. Lassen, J. K. P. Lee, G. L. Le Scornet, J. Lettry, J. Obert, J. Oms, A. Ouchrif, J. Pinard, H. Ravn, B. Roussière, J. Sauvage, and D. Verney. Charge-radius change and nuclear moments in the heavy tin isotopes from laser spectroscopy: Charge radius of 132Sn. Phys. Rev. C, 72(3):034305, 2005. * [59] R F Garcia Ruiz, C L Binnersley, J Billowes, M L Bissell, T E Cocolios, R P De Groote, K T Flanagan, S Franchoo, G Georgiev, A Gottardo, G Hagen, W Gins, K M Lynch, B A Marsh, G Neyens, G S Simpson, H H Stroke, A R Vernon, K Wendt, S G Wilkins, Z Xu, X F Yang, and D T Yordanov. INTC-P5-04 proposal, 2017. * [60] T. D. Morris, J. Simonis, S. R. Stroberg, C. Stumpf, G. Hagen, J. D. Holt, G. R. Jansen, T. Papenbrock, R. Roth, and A. Schwenk. Structure of the lightest tin isotopes. Phys. Rev. Lett., 120:152503, apr 2018. * [61] A. Ekström, G. R. Jansen, K. A. Wendt, G. Hagen, T. Papenbrock, B. D. Carlsson, C.
Forssén, M. Hjorth-Jensen, P. Navrátil, and W. Nazarewicz. Accurate nuclear radii and binding energies from a chiral interaction. Phys. Rev. C, 91:051301, May 2015.
# Multi-scale flow, permeability, and heat transport in low-carbon and traditional building materials

Hannah P. Menke (corresponding author, ORCID: 0000-0002-1445-6354) <EMAIL_ADDRESS>Katherine M. Hood Kamaljit Singh Gabriela M. Medero Julien Maes

1. Institute of GeoEnergy Engineering, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
2. Institute for Sustainable Built Environment, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
3. Narro Associates, Orchard Brae House, 30 Queensferry Rd, Edinburgh, EH4 2HS, United Kingdom
4. Society for the Protection of Ancient Buildings, 37 Spital Square, London, E1 6DY, United Kingdom

This document is the result of research funded by the EPSRC additive manufacturing grant. H.P.M. (first author) contributed equally to this work with J.M. (last author). H.P.M., J.M., and G.M.M. conceptualised this study; K.M.H. and H.P.M. chose the samples; H.P.M. and K.S. did the imaging and analysis; J.M. and H.M. built the software and ran the models; all authors contributed to writing the paper.

###### Abstract

Permeability and heat transport through building materials ultimately dictate their insulating performance over a building's service lifetime. However, characterisation of building materials is challenging because porous building materials are heterogeneous and their macroscopic physical properties (e.g. permeability, thermal, and mechanical behaviour) depend on their micro-scale characteristics, i.e. the local distribution, fabric, and features of the solid components and the connectivity of the spaces between them. Large-scale testing can measure these macro-scale properties, but often does not give insight into the underlying microstructural properties that ultimately lead to optimisation. A knowledge of the 3D structure is therefore required to assist in the development and implementation process for new materials. Experiments combining X-ray microtomography with numerical modelling are an accepted method of studying pore-scale processes and have been used extensively in the oil and gas industry to study highly complex reservoir rocks. However, despite the obvious similarities in structure and application, these techniques have not yet been widely adopted by the building and construction industry. An experimental investigation was performed on the pore structure of several conventional, historic, and innovative building materials using X-ray tomography and direct numerical simulation. Six samples were imaged at between a 4 and 15 $\mu$m resolution inside a micro-CT scanner. The porosity and connectivity were extracted, along with the grain, throat, and pore size distributions, using image analysis. The permeability, velocity, and thermal conductivity were then investigated using GeoChemFoam, our highly versatile, open-source numerical solver. It was found that each material had a unique, heterogeneous and sometimes multi-scale structure that had a large impact on the permeability and thermal conductivity. Furthermore, it was found that the method of including sub-resolution porosity directly affected these bulk-property calculations for both parameters, especially in the materials with high structural heterogeneity.
This is the first multi-scale study of structure, flow and heat transport in building materials, and the workflow could easily be adapted to understand and improve designs in other industries that use porous materials, such as fuel-cell and battery technology, lightweight materials and insulation, and semiconductors.

###### keywords: multi-scale; synthetic building materials; concrete; pore-scale flow and transport; micro-CT; direct numerical simulation; GeoChemFoam; heat transfer; permeability; thermal conductivity

Highlights:

* First multi-scale study of both low-carbon and traditional building materials.
* Micro-CT imaging and image analysis of building materials.
* Darcy-Brinkman-Stokes permeability, flow, and heat transport.
* Conjugate thermal conductivity.

## 1 Introduction

The Intergovernmental Panel on Climate Change (IPCC) has stated that reducing carbon emissions from the built environment, which accounts for almost 40 percent of total carbon emissions globally, will make a meaningful impact on global climate change [27, 17]. The global building stock continues to rise annually and strategies are being developed to mitigate the associated increases in emissions and achieve net-zero carbon, largely by reducing building emissions [40]. Around 25 percent of embedded carbon in construction is generated in the building-material production, transport, and construction stages [14]. One method of reducing carbon during the construction process is the use of recycled or circularly produced materials that require less carbon and energy and are more resource-efficient to produce than traditional building materials [12, 34, 24]. However, as these building materials are assimilated into the construction industry, their properties must be measured both for optimisation during the material development process and for accurate estimation during building design and construction.

Characterisation of building materials is challenging for several reasons. Porous building materials are heterogeneous materials and their macroscopic physical properties (e.g. permeability, thermal, and mechanical) depend on their micro-scale characteristics, i.e. the local distribution and features of the solid components and the connectivity of the spaces between them. For example, the porosity and permeability of building materials are ultimately related to the underlying structure and arrangement of the grains (solid particles), pores, and binding agent. Thus, two concretes made of the same materials but with different sizes of aggregate can have vastly different thermal, transport, and mechanical properties [11]. Porous building materials also often contain a binding agent such as cement, which is itself porous and has different material properties than the aggregate. The average pore-throat size of the cement is often also on a length scale orders of magnitude smaller than that of the pores between the solid grains, which makes it difficult to estimate macro-scale properties of the building material prior to testing. Large-scale testing can measure these macro-scale properties, but often does not give insight into the underlying microstructural properties that ultimately lead to optimisation; a knowledge of the 3D structure and particle arrangement is therefore required to assist in the development, optimisation, and implementation process for new low-carbon building materials.
Experiments combining X-ray microtomography ($\mu$-CT) with numerical modelling are now an accepted method of studying pore-scale processes and have been used extensively in the oil and gas and carbon storage industries to study highly complex reservoir rocks [7, 28]. Pore-scale imaging experiments coupled with simulation are an increasingly important tool used in industry for the prediction of geological and petrophysical properties, including porosity and connectivity [25], mineralogical heterogeneity [18], relative permeability [33, 1], and thermal conductivity [21, 19]. In addition, new numerical techniques such as Darcy-Brinkman-Stokes (DBS) have been implemented to tackle the multi-scale nature of rocks, where processes are modelled explicitly in the pores using the Stokes equations, and implicitly as an averaged volume in the microporous regions [26, 13, 38]. However, despite the obvious similarities in structure and application, these techniques have not yet been widely adopted by the building materials and construction industry.

There have been several recent studies that have combined experimental imaging and modelling workflows to study materials used in the building industry. Bentz et al. [4] showed the applicability of pore-scale analysis to porous building materials by imaging a lime clinker brick and a hard-burned clay brick using $\mu$-CT and confirmed good agreement with the Katz-Thompson relationship between permeability, diffusivity, and pore size. However, their imaging resolutions were insufficient to accurately compute these properties with numerical simulations. Nunes et al. [29] used nuclear magnetic resonance imaging to investigate the influence of pore structure on moisture transport in lime-plaster-brick systems and found that linseed oil application hindered the drying of the brick. Chung et al. [9] evaluated the performance of glass beads on thermal conductivity in insulating concrete and confirmed the results using numerical modelling. Still, they used limited image analysis and did not measure the connectivity of the sample, instead relying on probability functions to describe the structural arrangement of beads and voids. In addition, while these studies have shown the potential of modern pore-scale analysis, none of them have incorporated multi-scale modelling techniques to improve accuracy and reduce computational expense.

Several studies have also attempted to model heat transport in building materials. Bicer et al. [6] and Guo et al. [16] built purely mathematical models that did not include any simulations or pore-scale information. Zhang et al. [43] built a theoretical mesoscale model for the thermal conductivity of concrete that was based on bulk properties and did not consider multi-scale heterogeneity. Zhu et al. [44] built a multi-scale theoretical model for concrete and benchmarked the results against simulations and experiments. However, the simulations were only 2D and did not explicitly model pores (only pores completely filled with aggregate), meaning that the heat conductivity along connected pathways of grains or empty pores could not be accounted for explicitly. Panerai et al. [31] investigated thermal conductivity in a range of fibrous structures using micro-CT imaging and 3D modelling, but did not investigate the effect of the model used for the effective thermal conductivity of the microporous fibres. Xiao et al. [41] used numerical modelling to investigate heat conduction in random porous fibres, but the simulations were limited to 2D. Shen et al.
[35] studied the thermal conductivity of various hybrid steel-fine polypropylene and fiber-reinforced concretes with theoretical models, experiments, and 2D simulations, finding that the touching solid components acted as a thermal bridge. Yet the 2D numerical models limited the complexity of these investigations. Yang et al. [42] performed numerical investigations into the effect of fractures on thermal conductivity. However, their fractures were simple and straight, and their digital rock was not derived from a real 3D sample. Siegert et al. [36] were the first to point out the importance of understanding harmonic versus arithmetic averaging for heat transfer in porous media structures, showing vast differences in effective properties depending on which method was used. Nevertheless, their investigations were limited to meshing of simple, relatively homogeneous samples that did not contain multi-scale porosity. To date, no one has used multi-scale modelling to compare the permeability and thermal conductivity of different building materials with varying structural complexity and material components.

The objective of this study is to showcase the potential of combined pore-scale imaging and multi-scale modelling in use cases relevant to the building industry. (1) First, six samples of building materials are imaged inside a micro-CT scanner at a 4-15 micron resolution. (2) The images are then analysed for pore, throat, and grain size distributions, and the connectivity of the pore space is assessed both with and without including the microporosity. (3) Permeability is then calculated on the images using the Darcy-Brinkman-Stokes method, both including and excluding the microporous phase. (4) Finally, the thermal conductivity inside the materials is estimated. As the microporosity is unresolved in our micro-CT images, its structure is unknown and its thermal conductivity needs to be modelled by correlations. Here, both harmonic and arithmetic averaging of the conductivity of the fluid and solid inside the microporous phase is used and the results compared.

## 2 Sample Selection

Six samples were selected for this study: five traditional building materials from various time periods, and a low-carbon brick (KBriq) made from recycled construction and demolition waste and a proprietary binder by Kenoteq [15]. The five traditional materials were a fired clay brick from a 1930s building, a brick sample (approximately 1900s) drilled from the Mackintosh Building at the Glasgow School of Art (GSA), modern aerated concrete, a wooden beam from a 1930s building, and Bentheimer sandstone. These materials have a range of flow and heat transport properties and heterogeneous pore structures that make them ideal benchmark cases for modern multi-scale numerical solvers.

## 3 Imaging and Analysis Methods

Cylindrical cores with a diameter of 9 mm were drilled from each of the samples, except the aerated concrete, for which a larger piece was extracted to ensure a representative volume due to the large distribution in grain sizes, and Bentheimer, for which a 4 $\mu$m resolution image was downloaded from https://www.digitalrocksportal.org/. The samples were imaged in three dimensions inside an EasyTom $\mu$-CT scanner from Rx Solutions at 100 keV and 10 W at between 6 and 15 $\mu$m resolution. The raw projections were then reconstructed (Fig. 1A) and post-processed using the image processing modules in Avizo 2020.3.1 (www.thermofischer.com).
The high-resolution micron-scale images were filtered using the non-local means filter [8] (Fig. 1B) and segmented using the watershed segmentation algorithm [5] into three phases (pore, microporosity, and solid grain) (Fig. 1C). The pores, grains, and throats were then each separated into individual components using the separate objects module, which applies the watershed technique to the Euclidean distance map of the phase (Fig. 1D), and independently analysed for size distributions (Fig. 1E). The pore space of each image was then analysed for connectivity in the Z direction, and both connected and unconnected pores were identified (Fig. 1F). This connectivity analysis was done twice for each image, once for only the macropores, and again for the macropores and micropores together, giving an estimation of the connectivity of flow pathways both including and excluding the sub-resolution porosity. The image analysis results are summarised in Table 1. This connectivity analysis also included the wooden beam, where the cell walls themselves were assumed to be microporous [39] and thus no solid phase was identified in the segmentation.

Figure 1: The raw image (A), filtered image (B), and segmentation (C) of the KBriq sample, with the primary porosity rendered in light blue, microporosity in dark blue, and solid grains in red. Grains are separated and randomly colored (D), and colored by size (E) with small grains in yellow (0-100 $\mu$m equivalent diameter), medium-sized grains in blue (100-200 $\mu$m), large grains in red (200-500 $\mu$m), and extra-large grains in green (500+ $\mu$m). Connected (blue) and unconnected (red) pores (F).

The average porosity of the microporosity was estimated by using the segmented microporosity as a mask for the grey-scale image. The histogram was then plotted for the microporous phase only. The histogram values were then normalised using the grey-scale values used in the watershed segmentation, where the maximum grey-scale value of the macroporous phase was assigned a porosity of 1 and the minimum grey-scale value of the solid phase was assigned a porosity of 0. A linear relationship between grey-scale value and porosity was assumed for the microporous phase, and the mean value of the porosity histogram was used in the DBS numerical simulations as the porosity of the microporous phase.
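The segmentation and connectivity analysis described above were performed in Avizo; the sketch below shows an equivalent open-source pipeline using scikit-image and SciPy, for readers without access to commercial software. The function names, the four grey-value seed thresholds, and the assumption that Z is the first array axis are ours, not part of the original workflow.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.segmentation import watershed

def three_phase_segmentation(image, t1, t2, t3, t4):
    """Non-local means filtering followed by marker-based watershed into
    pore (1), microporous (2) and solid (3) phases.  t1 < t2 <= t3 < t4 are
    grey-value seeds picked from the histogram; voxels between the seeded
    regions are assigned by flooding the gradient image."""
    sigma = float(np.mean(estimate_sigma(image)))
    filt = denoise_nl_means(image, h=0.8 * sigma, sigma=sigma, fast_mode=True)

    markers = np.zeros(filt.shape, dtype=np.int32)
    markers[filt <= t1] = 1                   # confident macropore voxels
    markers[(filt >= t2) & (filt <= t3)] = 2  # confident microporosity
    markers[filt >= t4] = 3                   # confident solid grains
    labels = watershed(ndi.morphological_gradient(filt, size=3), markers)

    # Average sub-resolution porosity: linear grey-value-to-porosity map,
    # with the macropore grey level -> phi = 1 and the solid level -> phi = 0.
    phi = np.clip((t4 - filt[labels == 2]) / (t4 - t1), 0.0, 1.0)
    return labels, float(phi.mean())

def connected_in_z(phase_mask):
    """Connected fraction of a phase, spanning the first to the last Z-slice."""
    lab, _ = ndi.label(phase_mask)
    spanning = np.intersect1d(lab[0], lab[-1])
    spanning = spanning[spanning > 0]
    return np.isin(lab, spanning).sum() / max(int(phase_mask.sum()), 1)
```

Running `connected_in_z` once on `labels == 1` and once on `(labels == 1) | (labels == 2)` reproduces the two connectivity measures reported in Table 1.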
## 4 Numerical Methods

For each image, a $400^{3}$ voxel sub-volume was extracted and used in all numerical simulations. Permeability was calculated using the DBS approach, both including and excluding the microporous phase. Steady-state heat transfer (thermal conductivity) was then solved with a constant value for the heat conductivity of the solid, $\kappa_{s}=1.4$ $kW/m/K$, and a heat conductivity of the fluid $\kappa_{f}$ ranging from 0.0006 to 666 $kW/m/K$. All simulations used modules in GeoChemFoam 5.0 [22, 20, 23], our numerical toolbox based on OpenFOAM® [30]. The full code can be downloaded free of charge from https://github.com/GeoChemFoam.

### 4.1 Calculation of Permeability

Flow in the images was solved using the simpleDBSFoam module, which implements the DBS approach [38] in which one equation is used to model the flow within both the fully resolved macropores (i.e. voxel porosity equal to 1.0) and the micropores (i.e. voxel porosity lower than 1.0):

$\mu\nabla^{2}\mathbf{u}-\nabla p-\mu k^{-1}\mathbf{u}=0,$ (1)

$\nabla\cdot\mathbf{u}=0,$ (2)

where $\mathbf{u}$ [m.s$^{-1}$] is the fluid velocity, $p$ [Pa] is the fluid pressure, $\mu$ [kg.m$^{-1}$.s$^{-1}$] is the fluid viscosity and $k$ [m$^{2}$] is the local permeability, i.e. the permeability of the computational cell. This coefficient is assigned a large value of $10^{13}$ m$^{2}$ in the pores and a very small value of $10^{-21}$ m$^{2}$ in the solid phase in order to obtain a no-flow, no-slip condition at the fluid-solid interface. The permeability of each microporous voxel is assigned using the Kozeny-Carman relationship, which assumes that the microporosity consists of an even packing of equally-sized elliptical grains. As there is no _a priori_ information about the size of the grains in the microporous phase, the grain size of this phase was taken from literature values of typical grain size distributions for each material. The permeability of the microporous phase was estimated using the equation

$K_{\mu}=\frac{h^{2}}{180}\frac{\phi_{\mu}^{3}}{(1-\phi_{\mu})^{2}},$ (3)

where $\phi_{\mu}$ is the porosity of the microporous phase and $h$ is the average grain radius. Additional information on Darcy-Brinkman-Stokes modelling methods can be found in [26]. To calculate the overall permeability $K$ of the sample, a pressure drop $\Delta P$ is applied between the left and the right boundaries, and the velocity field is calculated. The permeability is then obtained as

$K=-\frac{U_{D}\,\mu\,L}{\Delta P},$ (4)

where $L$ is the length of the sample and $U_{D}$ (m/s) is the Darcy velocity, defined as

$U_{D}=\frac{Q}{A},$ (5)

where $A$ (m$^{2}$) is the cross-sectional area of the domain and $Q$ (m$^{3}$/s) is the flow rate.

### 4.2 Calculation of effective heat conductivity

Heat transport was solved using a simplified temperature equation,

$0=\nabla\cdot\overline{\kappa}\nabla T,$ (6)

where $\overline{\kappa}$ is the local heat conductivity, i.e. the heat conductivity of the computational cell. This coefficient must include the contribution of the pores, micropores and solid present inside the computational cell. Inside the solid phase, $\overline{\kappa}=\kappa_{s}$, and inside the pores, $\overline{\kappa}=\kappa_{f}$. In the microporous phase, $\overline{\kappa}$ depends on the heat conductivities of the fluid and solid, on the porosity $\phi_{\mu}$, and on the underlying structure of the pores inside the microporous phase. Calculating this coefficient accurately would require a high-resolution image of this underlying structure. In the absence of such an image, the local heat conductivity must be modelled. In this work, the results are compared using two different models:

$\kappa_{\mu(harmonic)}=\frac{\kappa_{f}\kappa_{s}}{(1-\phi_{\mu})\kappa_{f}+\phi_{\mu}\kappa_{s}},$ (7)

$\kappa_{\mu(arithmetic)}=(1-\phi_{\mu})\kappa_{s}+\phi_{\mu}\kappa_{f}.$ (8)

Additional information on the numerical methods of the $heatTransportFoam$ solver in GeoChemFoam can be found in [21]. Upscaling heat transport requires calculating the effective heat conductivity coefficient $\kappa_{eff}$ [$kW/m/K$] between the fluid and the solid, defined as in [36]. The effective heat conductivity regroups the contributions of the fluid, solid and microporous phases into one single-field coefficient.
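For concreteness, the two sub-grid conductivity models of Eqs. (7)-(8) can be written as a short function; this is a sketch of the models only (the function name and default argument are ours), not of the GeoChemFoam implementation.

```python
def kappa_micro(kappa_f, kappa_s, phi_mu, model="harmonic"):
    """Effective conductivity of a microporous voxel (Eqs. 7-8).

    kappa_f, kappa_s: fluid and solid conductivities (same units);
    phi_mu: sub-resolution porosity of the voxel (0 < phi_mu < 1).
    """
    if model == "harmonic":
        # Fluid and solid in series: dominated by the lower conductivity.
        return kappa_f * kappa_s / ((1 - phi_mu) * kappa_f + phi_mu * kappa_s)
    # Arithmetic: fluid and solid in parallel (volume-weighted mean).
    return (1 - phi_mu) * kappa_s + phi_mu * kappa_f
```

With, for example, $\kappa_{s}=1.4$, $\kappa_{f}=0.0006$ and $\phi_{\mu}=0.35$, the harmonic model gives $\approx 0.0017$ while the arithmetic model gives $\approx 0.91$, a factor of over 500; this sensitivity is explored in Section 5.3.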
To calculate $\kappa_{eff}$, a temperature drop $\Delta T$ is established between the left and right boundaries, the temperature field inside the domain is calculated, and the effective heat conductivity coefficient is obtained as

$\kappa_{eff}=\frac{Q_{h}L}{A\Delta T},$ (9)

where $Q_{h}$ is the heat flow rate, calculated as the integral of the heat flux $\mathbf{J}=-\overline{\kappa}\nabla T$ across the cross-sectional area $A$.

## 5 Results and Discussion

### 5.1 Image Analysis

The images were first segmented into three phases: macropore, micropore, and solid grain, as shown in Fig. 2. The volume fraction of each phase is shown in Table 1. The connectivity of the porosity was then assessed, first with just the macroporosity and then with the macro+micro porosity combined. In this analysis the aerated concrete shows volume fractions similar to the KBriq, with 0.18-0.20 macroporosity, 0.18-0.38 microporosity, and 0.42-0.64 solid grain. This high macroporosity volume fraction led to a high macroporosity connectivity of 0.96-0.99 for these samples. The GSA Brick and Fired Clay Brick are dominated by microporosity (0.89 and 0.48, respectively), with little macroporosity (0.06 each); the GSA Brick also contains little solid grain (0.05). The lack of macroporosity led to zero macroporosity connectivity in the Fired Clay Brick. However, in the GSA Brick, the porosity was present predominantly as long, thin fractures, leading to a relatively high connected volume fraction of the macroporosity of 0.33. In the wooden beam the porosity was present in laminated layers of pores with low-porosity regions between them. However, the connectivity was high regardless, with a connected macroporosity of 0.97. Bentheimer did not contain microporosity but was almost entirely connected due to its strong homogeneity and large grains.

Figure 2: The raw images (A, E, I, M, Q, U), 3-phase segmentations (B, F, J, N, R, V), connectivity of the macroporosity (C, G, K, O, S, W), and connectivity of the micro+macro porosity (D, H, L, P, T, X) of each sample. In the 3-phase segmentation, macropores are light blue, micropores are dark blue, and solid grains are red. In the connectivity renderings, unconnected pores are red and connected pores are blue. Bentheimer does not have microporosity and its image was obtained pre-segmented.

The images were then analysed for pore, grain, and throat size distributions (Fig. 3). Aerated concrete had the largest grains, with large numbers of grains even at the larger radii (Fig. 3A). The grain size distribution of the KBriq is very similar to that of Aerated Concrete, with a peak in grain radius of around 250 microns. Bentheimer had the smallest variance in grain size, with a peak at 250 microns. The Fired Clay and GSA bricks had a small number of isolated grains that were all under 500 microns. No grains were extracted from the wooden beam as all of its solid components were microporous. When comparing pore sizes (Fig. 3B), Aerated Concrete had the largest pores, with an even distribution stretching from 10-1500 microns. The other samples all had peaks between 100 and 200 microns, except the GSA brick, which had a long tail indicative of the large cracks that can be seen throughout the sample. It is interesting to note that the Wooden Beam shows a peak much larger than would be expected for its small pores, which is due to the long pores stretching from end to end of the sample and the use of pore equivalent diameter rather than maximal ball radius [10].
The throat size distributions are shown in Fig. 3C. The Wooden Beam and Bentheimer have the smallest pore throats, with a peak around 20 microns and a very narrow distribution. Aerated concrete has a very wide distribution with a peak around 100 microns. The rest of the samples peak around 50 microns and have a small tail at larger throat sizes, indicating some heterogeneity within the samples.

Sample | resolution | macro $\phi$ | micro $\phi$ | solid grain | connected $\phi$ | connected $\phi$ | micro $\phi$ | $h$ | $K_{\mu}$
---|---|---|---|---|---|---|---|---|---
 | $\mu$m | vol frac | vol frac | vol frac | macro | macro+micro | avg porosity | $\mu$m | m$^{2}$
Aerated Concrete | 15 | 0.20 | 0.38 | 0.42 | 0.99 | 0.99 | 0.41 | 5 [3] | 2.75 x $10^{-14}$
Fired Clay Brick | 7 | 0.06 | 0.48 | 0.43 | 0.00 | 1.00 | 0.27 | 6.5 [2] | 1.10 x $10^{-14}$
GSA Brick | 7 | 0.06 | 0.89 | 0.05 | 0.33 | 1.00 | 0.34 | 6.5 [2] | 2.12 x $10^{-14}$
Wooden Beam | 7 | 0.34 | 0.66 | 0.0 | 0.97 | 1.0 | 0.72 | N/A | 2.70 x $10^{-20}$ [32]**
KBriq | 8 | 0.18 | 0.18 | 0.64 | 0.955 | 0.996 | 0.35 | 5* | 1.41 x $10^{-14}$
Bentheimer | 4 | 0.26 | 0 | 0.74 | 0.99 | N/A | N/A | N/A | N/A

Table 1: Porosity and properties of the samples measured using image analysis and estimated from literature values. *The KBriq was estimated to have a microporous grain size $h$ of 5 $\mu$m due to its composition of inert construction and demolition waste. **The permeability of the microporous phase of wood is taken to be analogous to the permeability of a cell wall [32].

Figure 3: The grain size distributions (A), pore size distributions (B), and throat size distributions (C) of the samples. Note that the grain size distribution of the wooden beam is excluded because it does not contain grains.

### 5.2 Numerical Simulation: Permeability with Darcy-Brinkman-Stokes

The permeability $K$ was calculated on a $400^{3}$ voxel subvolume of each image, both including and excluding the microporous phase, using the DBS solver in GeoChemFoam. When excluding the microporosity, both the solid and microporous phases were given a porosity of 0.0001 and a permeability of $10^{-20}$ m$^{2}$. When including the microporosity, the average porosity of the microporosity was used along with the Kozeny-Carman-calculated $K_{\mu}$, with the solid values remaining 0.0001 and $10^{-20}$ m$^{2}$. All calculated $\phi$, $K$, $L$, and $U_{D}$ for each scenario are shown in Table 2. The streamlines for each scenario and the probability density functions (PDFs) of velocity are shown in Fig. 4.

It is observed in Aerated Concrete (Fig. 4 A, B, C) that the macroporous regions are highly connected, with fast flow in many of the pores resulting in a moderate peak in the velocity PDF at high velocities. This peak is reduced and flattened with the inclusion of microporosity, indicating that the binder is contributing to flow in some regions. The permeability of Aerated Concrete was the highest of all the samples at 5.4 x $10^{-11}$ m$^{2}$, which is a reflection of the large pore and throat sizes, and it increases by less than 1 percent with the addition of microporosity. The Fired Clay Brick (Fig. 4 D, E) is not connected in the macropores; however, when microporosity is included, a low permeability of 5.9 x $10^{-14}$ m$^{2}$ is calculated, and it is observed that flow is slow through the clay and then increases when an unconnected pore or grain is encountered.
Nevertheless, the difference between the minimum and maximum velocities is not more than one order of magnitude, and thus only one peak is seen in the PDF. The GSA Brick (Fig. 4 F, G, H) shows only a single connected path along the fracture, with a permeability of 2 x $10^{-14}$ m$^{2}$. The fraction of the sample volume containing high velocities is so low that the high velocities barely register on the PDF. When microporosity is included, flow is fast in much of the sample as the micropores connect previously unconnected fracture networks; this is reflected in the 3.5x increase in permeability and in the PDF, which has a strong peak at high velocities and a tail leading to lower velocities. The Wooden Beam (Fig. 4 I, J, K) has pores that stretch from end to end in the direction of flow, resulting in a flow field that is analogous to a bundle of tubes, with a permeability of 5.9 x $10^{-13}$ m$^{2}$. For the Stokes case this results in several small peaks and troughs that depend on the various pore radii, with the few pores that are slightly bigger than the others becoming the preferential flow paths. When microporosity is added, some of the smaller tubes become more connected, another peak is seen at mid-range velocities, and a small 1.2x increase in permeability is observed. Bentheimer (Fig. 4 L, M) does not contain sub-resolution porosity and is well connected, resulting in only a single peak at high velocities and a permeability of 3.8 x $10^{-12}$ m$^{2}$. Finally, the KBriq (Fig. 4 Q, R, S) shows a small peak at high velocities with Stokes flow and a similar permeability of 4 x $10^{-12}$ m$^{2}$. The peak in the PDF is then shifted to lower velocities and widened when microporosity is included, due to the connection of some stagnant pores that then contribute to overall flow; however, these connections change the permeability by less than 1 percent, which is attributed to the already high permeability and relatively homogeneous arrangement of pores and grains. It is therefore concluded that if the macropores are well connected, there are only minor effects on the flow field and permeability. However, if the macropores are poorly connected or disconnected, then microporosity must be included in the flow calculations to solve for permeability.

Sample | $K_{Stokes}$ | $K_{DBS}$ | $\phi_{Stokes}$ | $\phi_{DBS}$ | $L_{Stokes}$ | $L_{DBS}$ | $U_{D(Stokes)}$ | $U_{D(DBS)}$
---|---|---|---|---|---|---|---|---
 | $[m^{2}]$ | $[m^{2}]$ | [-] | [-] | [m] | [m] | $[m.s^{-1}]$ | $[m.s^{-1}]$
Aerated Concrete | 5.386 x $10^{-11}$ | 5.626 x $10^{-11}$ | 0.210 | 0.371 | 4.535 x $10^{-5}$ | 3.483 x $10^{-5}$ | 2.634 x $10^{-3}$ | 2.751 x $10^{-3}$
Fired Clay Brick | 0 (unconnected) | 5.92 x $10^{-14}$ | N/A | 0.365 | N/A | 1.319 x $10^{-6}$ | N/A | 1.215 x $10^{-1}$
GSA Brick | 2.022 x $10^{-14}$ | 7.13 x $10^{-14}$ | 0.039 | 0.359 | 2.033 x $10^{-6}$ | 1.26 x $10^{-6}$ | 1.20 x $10^{-2}$ | 4.230 x $10^{-2}$
Wooden Beam | 5.866 x $10^{-13}$ | 6.856 x $10^{-13}$ | 0.353 | 0.819 | 3.64 x $10^{-6}$ | 2.589 x $10^{-6}$ | 2.103 x $10^{-3}$ | 2.460 x $10^{-3}$
KBriq | 4.033 x $10^{-12}$ | 4.125 x $10^{-12}$ | 0.179 | 0.245 | 1.33 x $10^{-5}$ | 1.161 x $10^{-5}$ | 6.239 x $10^{-3}$ | 6.382 x $10^{-3}$
Bentheimer | 3.750 x $10^{-12}$ | N/A | 0.226 | N/A | 1.150 x $10^{-5}$ | N/A | 5.80 x $10^{-3}$ | N/A

Table 2: Permeability and flow properties of the samples as calculated using GeoChemFoam's Darcy-Brinkman-Stokes flow solver (simpleDBSFoam) in $400^{3}$ subvolumes.
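As a concrete illustration of how the values in Tables 1 and 2 are obtained from Eqs. (3)-(5), the sketch below computes the microporous-phase permeability and the upscaled permeability from a converged flow solution. The function names and the sign convention for the applied pressure drop are ours.

```python
def kozeny_carman(phi_mu, h):
    """Permeability of the unresolved microporous phase (Eq. 3);
    h is the average grain radius in metres, phi_mu the average porosity."""
    return (h**2 / 180.0) * phi_mu**3 / (1.0 - phi_mu)**2

def upscaled_permeability(Q, A, L, mu, dP):
    """Darcy upscaling of a converged DBS solution (Eqs. 4-5).
    Q: flow rate [m^3/s]; A: cross-section [m^2]; L: sample length [m];
    mu: viscosity [Pa s]; dP: applied pressure drop, taken negative in the
    flow direction so that K comes out positive."""
    U_D = Q / A
    return -U_D * mu * L / dP

# e.g. the KBriq row of Table 1: phi_mu = 0.35, h = 5 microns
print(kozeny_carman(0.35, 5e-6))   # ~1.41e-14 m^2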
Figure 4: The velocity streamlines of each sample, both excluding (Stokes) [A, F, I, L, N] and including (DBS) microporosity [B, D, G, J, O]. The probability density functions of $U/U_{avg}$ for each simulation are shown in column 3 [C, E, H, K, M, P].

### 5.3 Numerical Simulations: Steady State Heat Transfer

As the true thermal conductivity of a microporous solid depends on a structure that cannot be known in the sub-resolution region without imaging at another scale, harmonic or arithmetic averaging of that region is a reasonable approximation. However, it has yet to be investigated whether the choice of method changes the overall calculated $\kappa_{eff}$, and indeed which method produces more accurate results under which circumstances. For each sample, two solver configurations were therefore used: (1) $\kappa_{eff}$ calculated with arithmetic averaging of $\kappa_{s}$ and $\kappa_{f}$ for the microporous phase, and (2) $\kappa_{eff}$ calculated with harmonic averaging of $\kappa_{s}$ and $\kappa_{f}$ for the microporous phase. Input $\kappa_{s}$ values for the simulations are shown in Table 3. Heat transfer was then simulated on the samples for fluids ranging from $\kappa_{f}$ = 0.0006 to $\kappa_{f}$ = 666. Fig. 6 shows the heat flux $\mathbf{J}$ plotted for each of the samples for the highest and lowest values of $\kappa_{f}$. The results of each simulation, shown in Fig. 5, are then compared to the theoretical value $\theta_{eff}$, corresponding to the theoretical limit as calculated with the same method:

$\theta_{eff(harmonic)}=\frac{\kappa_{f}\kappa_{s}}{(1-\phi_{total})\kappa_{f}+\phi_{total}\kappa_{s}},$ (10)

$\theta_{eff(arithmetic)}=(1-\phi_{total})\kappa_{s}+\phi_{total}\kappa_{f}.$ (11)

It is important to note that while in typical systems the pore fluid will be dominated by air or water with a low $\kappa_{f}$, in extreme weather events such as floods this may not always be the case, with penetration of mud or contaminant-laden fluids, possibly even metal particulates. We have therefore included a large range of $\kappa_{f}$ values to span the range of possible pore fluids.

Figure 5: The $\kappa_{eff}$ and $\theta_{eff}$ normalised to $\kappa_{s}$ for each of the building materials, using both harmonic and arithmetic averaging of the components of the microporous phase.

Figure 6: The heat flux $\mathbf{J}$ of the simulations for $\kappa_{f}$ = 0.0006 (columns 1 and 3) and $\kappa_{f}$ = 666 (columns 2 and 4) using $\kappa_{\mu(arithmetic)}$ and $\kappa_{\mu(harmonic)}$ for all of the samples. For visual illustration purposes, each simulation is scaled linearly in the arithmetic cases and logarithmically in the harmonic cases, with the same maximum value, where red is high $\mathbf{J}$ and blue is low $\mathbf{J}$.

In Bentheimer (Fig. 5E), it is observed that in all simulations $\kappa_{eff}$ tracks $\theta_{eff(arithmetic)}$. This is because there is no porosity, and thus no fluid, inside the solid phase to modify its conductivity. In addition, both grains and pores are well connected, so the heat transfer can be modelled conceptually as a combination of two stacked phases, which is analogous to arithmetic averaging.
For Aerated Concrete, good agreement is observed between $\kappa_{eff}$ and $\theta_{eff}$ when $\kappa_{f}<\kappa_{s}$. However, when $\kappa_{f}>\kappa_{s}$, the $\kappa_{eff}$ of both simulations tracks $\theta_{eff(arithmetic)}$. This is because the fluid pathways are large and well connected with little tortuosity, so when $\kappa_{f}$ is high, those pathways can be used for heat transport and the porous medium can be effectively modelled arithmetically. However, when $\kappa_{f}$ is low, most of the heat is transported through the grains, and the simulations then follow the exact averaging law used in the solver. Additionally, using the harmonic function pulls $\kappa_{\mu}$ towards the smaller of $\kappa_{f}$ and $\kappa_{s}$; the solid inside the microporous phase thus behaves like a barrier.

For the GSA Brick, good agreement is observed between $\theta_{eff}$ and $\kappa_{eff}$ at all $\kappa_{f}$, except for a small deviation at high $\kappa_{f}$ in the harmonic case, when the fractures act as a heat conduit and some pull towards $\theta_{eff(arithmetic)}$ is seen. However, the fractures are only a small portion of the sample and thus this deviation is small. The Fired Clay Brick shows very similar behavior, where vugs and cracks act as faster heat conduits. However, there are enough of them to impede heat flow in the solid when $\kappa_{f}<\kappa_{s}$, and thus some deviation of $\kappa_{eff(harmonic)}$ from $\theta_{eff(harmonic)}$ is observed in these cases.

In the Wooden Beam, $\theta_{eff}$ and $\kappa_{eff}$ are in agreement when $\kappa_{f}<\kappa_{s}$. However, when $\kappa_{f}$ is high, the harmonic simulations track $\theta_{eff(arithmetic)}$. This is attributed to the long and thin pores that act as direct conduits. Yet, these pores do not stretch completely from end to end, so the heat has to pass through some solid, and in the harmonic case this results in a much lower $\kappa_{eff}$ for the solid phase. For the KBriq, a trend similar to Aerated Concrete is observed when $\kappa_{f}>\kappa_{s}$. Nevertheless, there is a smaller influence of the microporosity when $\kappa_{f}<\kappa_{s}$, which is attributed to a lower overall porosity and thus a greater ability of the solid phase to transmit heat through the grains.

It is thus concluded that in many cases knowledge of both the macro and micro structures is required when choosing how to model heat transfer. In cases where there is no microporosity and the solid and grains are well connected, the arithmetic function should be used. Still, when microporosity is present, the structure of the macropores must be considered. The macropores themselves can act as either barriers or conduits to heat flow, depending on their arrangement and the ratio between $\kappa_{f}$ and $\kappa_{s}$. Furthermore, the use of arithmetic or harmonic averaging of the microporous phase can change the resulting $\kappa_{eff}$ by several orders of magnitude in some cases. It is thus posited that some knowledge of the underlying nanostructure within the microporous phase must be incorporated in order to choose an accurate solver method.
Sample | $\phi$ | $\kappa_{s}$ | $\gamma_{s}$ | $\rho_{s}$ | U (arithmetic) | U (harmonic) | t (arithmetic) | t (harmonic)
---|---|---|---|---|---|---|---|---
 | $[-]$ | $kW/m/K$ | $kJ/kg/K$ | $kg/m^{3}$ | $W/m^{2}/K$ | $W/m^{2}/K$ | hr | hr
Aerated Concrete | 0.372 | 1.4 | 0.96 | 2400 | 0.14 | 0.29 | 5.67 | 11.78
Fired Clay Brick | 0.365 | 1.0 | 1.05 | 1362 | 0.15 | 0.57 | 3.90 | 14.36
GSA Brick | 0.359 | 1.0 | 1.05 | 1362 | 0.16 | 0.62 | 3.97 | 15.85
Wooden Beam | 0.819 | 0.2 | 2.00 | 600 | 1.18 | 1.44 | 7.16 | 8.75
KBriq | 0.179 | 1.4 | 0.96 | 2400 | 0.11 | 0.15 | 5.83 | 7.90
Bentheimer | 0.226 | 3.3 | 0.92 | 2650 | 0.07 | 0.08 | 3.54 | 4.17

Table 3: Thermal and porosity properties of the samples used in the heat transfer simulations. As the makeup of the KBriq is proprietary, the same values for density and specific heat capacity as Aerated Concrete were used. The calculated $U_{eff}$ for air is shown for both arithmetic and harmonic averaging, as well as the time for heat to diffuse through a 0.1 m wall.

### 5.4 Upscaling to real systems

It is then possible to extrapolate these results to a real system by considering a wall of thickness $L$ = 0.1 m. Here the U-value of the system $[\mathrm{W\,m^{-2}\,K^{-1}}]$ can be approximated as $\kappa_{eff}/L$. The time $t$ for heat to diffuse through the wall is equal to $L^{2}/D$, where $L$ is the wall thickness and $D=\kappa_{eff}/[\rho_{s}(1-\theta)\gamma_{s}+\rho_{f}\theta\gamma_{f}]$ is the thermal diffusivity, with $\gamma$ the specific heat capacity in $[\mathrm{kJ\,kg^{-1}\,K^{-1}}]$. The resulting U-values and diffusion times, calculated using the $\kappa_{eff}$ from both arithmetic and harmonic averaging, are summarised in Table 3. Depending on the geometry and the presence of microporosity, the difference in calculated U-value and heat diffusion time ranges from almost no change in the case of Bentheimer (with no microporosity) to an almost 5-fold change in the cases of the Fired Clay and GSA Bricks, where microporosity dominates. This again illustrates the importance of understanding the microporous structure for effective upscaling of these parameters.
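A minimal sketch of this upscaling step is given below. The function name is ours, and the default air properties ($\rho_{f}\approx 1.2$ kg/m$^{3}$, $\gamma_{f}\approx 1.005$ kJ/kg/K) are standard textbook values rather than inputs stated above; consistent units must be supplied by the caller.

```python
def wall_response(kappa_eff, L, rho_s, gamma_s, theta, rho_f=1.2, gamma_f=1.005):
    """U-value and characteristic diffusion time of a wall of thickness L.

    kappa_eff: effective conductivity; theta: total porosity;
    rho [kg/m^3] and gamma [kJ/kg/K] as in Table 3.  Using kW/m/K with
    kJ/kg/K gives the diffusivity D in m^2/s.
    """
    U = kappa_eff / L                                            # transmittance
    C = rho_s * (1 - theta) * gamma_s + rho_f * theta * gamma_f  # volumetric heat capacity
    D = kappa_eff / C                                            # thermal diffusivity
    return U, L**2 / D                                           # t = L^2 / D

# e.g. a 0.1 m wall with the Table 3 properties of the Wooden Beam;
# kappa_eff here is a placeholder, not a value reported in the text.
U, t = wall_response(kappa_eff=0.05, L=0.1, rho_s=600, gamma_s=2.00, theta=0.819)
```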
## 6 Conclusions

A novel workflow for upscaling permeability, flow, and thermal conductivity in a range of porous building materials has been presented. Six building materials were imaged with micro-CT and analysed for microporosity and connectivity. Flow and permeability were then solved on the images both with and without microporosity. Finally, the effective thermal conductivity was solved using both harmonic and arithmetic averaging of the microporous phase, and the U-value and diffusive time were calculated for a 0.1 m thick wall.

A strong dependence on the connectivity of the pore structure is found for both permeability and thermal conductivity. When the pores were well connected, the microporous phase did not affect permeability appreciably. However, in cases where the microporosity connects otherwise disconnected macropores, it must be included to get an accurate measure of permeability and flow. Indeed, in some cases where the macropores are wholly disconnected, permeability cannot be computed without including the microporous phase. Thermal conductivity was also affected by local heterogeneity and the arrangement of pores, where macropores act as either heat conduits or barriers depending on the ratio of thermal conductivity between the microporous solid and liquid phases. When the macropores are not well connected, the choice of arithmetic or harmonic averaging of the thermal conductivity in the sub-resolution porosity can change the effective thermal conductivity by orders of magnitude. Furthermore, this in turn can change the calculated U-values and heat diffusion timescale of the bulk materials by up to half an order of magnitude. It is thus imperative that the nanostructure is investigated to inform this decision, and this will be a target for future work.

This is the first study to combine micro-CT imaging and multi-scale modelling of flow and heat transfer with applications in the sustainable building industry. These techniques open up the possibility of using this workflow to streamline the design of custom materials with optimal permeability and insulation properties for individual use cases. In addition, this workflow could easily be adapted to understand and improve designs in other industries that use porous materials, such as fuel cell and battery technology, lightweight materials and insulation, and semiconductors, as well as multi-scale structures from the natural world such as termite nests [37].

## 7 Acknowledgements

This work was generously funded by the EPSRC project EP/P031307/1. H.P.M. and K.M.H. would also like to thank the Glasgow School of Art for permission to use their samples.

## References

* Armstrong et al. [2016] Armstrong, R.T., McClure, J.E., Berrill, M.A., Rücker, M., Schlüter, S., Berg, S., 2016. Beyond Darcy's law: The role of phase topology and ganglion dynamics for two-fluid flow. Physical Review E 94, 043113.
* Baspinar et al. [2010] Baspinar, M.S., Demir, I., Orhan, M., 2010. Utilization potential of silica fume in fired clay bricks. Waste Management & Research 28, 149-157.
* Bentz et al. [1999] Bentz, D.P., Garboczi, E.J., Haecker, C.J., Jensen, O.M., 1999. Effects of cement particle size distribution on performance properties of portland cement-based materials. Cement and Concrete Research 29, 1663-1671.
* Bentz et al. [2000] Bentz, D.P., Quenard, D., Kunzel, H.M., Baruchel, J., Peyrin, F., Martys, N.S., Garboczi, E., 2000. Microstructure and transport properties of porous building materials. II: Three-dimensional X-ray tomographic studies. Materials and Structures 33, 147-153.
* Beucher and Meyer [1993] Beucher, S., Meyer, F., 1993. The morphological approach to segmentation: the watershed transformation. Mathematical Morphology in Image Processing 34, 433-481.
* Bicer and Devecioglu [2023] Bicer, A., Devecioglu, A.G., 2023. Modelling for determining the thermal conductivity of porous solid materials. Magazine of Concrete Research, 1-8.
* Blunt et al. [2013] Blunt, M.J., Bijeljic, B., Dong, H., Gharbi, O., Iglauer, S., Mostaghimi, P., Paluszny, A., Pentland, C., 2013. Pore-scale imaging and modelling. Advances in Water Resources 51, 197-216.
* Buades et al. [2011] Buades, A., Coll, B., Morel, J.M., 2011. Non-local means denoising. Image Processing On Line 1, 208-212.
* Chung et al. [2016] Chung, S.Y., Han, T.S., Kim, S.Y., Kim, J.H.J., Youm, K.S., Lim, J.H., 2016. Evaluation of the effect of glass beads on the thermal conductivity of insulating concrete using micro-CT images and probability functions. Cement and Concrete Composites 65, 150-162.
* Dong and Blunt [2009] Dong, H., Blunt, M.J., 2009. Pore-network extraction from micro-computerized-tomography images. Physical Review E 80, 036307.
* Dos Santos [2003] Dos Santos, W.N., 2003. Effect of moisture and porosity on the thermal properties of a conventional refractory concrete. Journal of the European Ceramic Society 23, 745-755.
* Etxeberria et al. [2007] Etxeberria, M., Marí, A.R., Vázquez, E., 2007. Recycled aggregate concrete as structural material. Materials and Structures 40, 529-541.
* Faris et al. [2020] Faris, A., Maes, J., Menke, H., 2020. An investigation into the upscaling of mineral dissolution from the pore to the core scale, in: ECMOR XVII, European Association of Geoscientists & Engineers, pp. 1-15.
* Gan et al. [2017] Gan, V.J., Cheng, J.C., Lo, I.M., Chan, C.M., 2017. Developing a CO2-e accounting method for quantification and analysis of embodied carbon in high-rise buildings. Journal of Cleaner Production 141, 825-836.
* Grose [2022] Grose, T.K., 2022. Grown to last. ASEE Prism 31, 28-31.
* Guo et al. [2011] Guo, L., Guo, L., Zhong, L., Zhu, Y., 2011. Thermal conductivity and heat transfer coefficient of concrete. Journal of Wuhan University of Technology-Mater. Sci. Ed. 26, 791-796.
* IEA [2019] IEA, 2019. Global status report for buildings and construction 2019. IEA, Paris, France.
* Lai et al. [2015] Lai, P., Moulton, K., Krevor, S., 2015. Pore-scale heterogeneity in the mineral distribution and reactive surface area of porous rocks. Chemical Geology 411, 260-273.
* Liu and Wu [2016] Liu, Z., Wu, H., 2016. Pore-scale study on flow and heat transfer in 3D reconstructed porous media using micro-tomography images. Applied Thermal Engineering 100, 602-610.
* Maes and Menke [2020] Maes, J., Menke, H.P., 2020. A bespoke OpenFOAM toolbox for multiphysics flow simulations in pore structures, in: Proceedings of the 17th International Conference on Flow Dynamics (ICFD2020), pp. 1-15.
* Maes and Menke [2021a] Maes, J., Menke, H.P., 2021a. GeoChemFoam: Direct modelling of flow and heat transfer in micro-CT images of porous media. arXiv preprint arXiv:2110.03311.
* Maes and Menke [2021b] Maes, J., Menke, H.P., 2021b. GeoChemFoam: Direct modelling of multiphase reactive transport in real pore geometries with equilibrium reactions. Transport in Porous Media. doi:10.1007/s11242-021-01661-8.
* Maes and Menke [2021c] Maes, J., Menke, H.P., 2021c. GeoChemFoam: Operator splitting based time-stepping for efficient volume-of-fluid simulation of capillary-dominated two-phase flow. arXiv preprint arXiv:2105.10576.
* Medero and Chapman [2020] Medero, G., Chapman, S., 2020. Construction units in form of bricks, blocks or tiles made from recyclable materials and by-products, methods of making the construction units and their use. US Patent 10,669,205.
* Menke et al. [2019] Menke, H., Gao, Y., Linden, S., Andrew, M., 2019. Using nano-XRM and high-contrast imaging to inform micro-porosity permeability during Stokes-Brinkman single and two-phase flow simulations on micro-CT images. EarthArXiv.
* Menke et al. [2021] Menke, H.P., Maes, J., Geiger, S., 2021. Upscaling the porosity-permeability relationship of a microporous carbonate for Darcy-scale flow with machine learning. Scientific Reports 11, 1-10.
* Metz et al. [2005] Metz, B., Davidson, O., De Coninck, H., Loos, M., Meyer, L., 2005. IPCC Special Report on Carbon Dioxide Capture and Storage. Cambridge University Press, Cambridge.
* Noiriel [2015] Noiriel, C., 2015. Resolving time-dependent evolution of pore-scale structure, permeability and reactivity using X-ray microtomography. Reviews in Mineralogy and Geochemistry 80, 247-285.
* Nunes et al. [2017] Nunes, C., Pel, L., Kuneckỳ, J., Slížková, Z., 2017. The influence of the pore structure on the moisture transport in lime plaster-brick systems as studied by NMR. Construction and Building Materials 142, 395-409.
* OpenCFD [2016] OpenCFD, 2016. OpenFOAM, The Open Source CFD Toolbox, User Guide. OpenCFD Ltd.
* Panerai et al. [2017] Panerai, F., Ferguson, J.C., Lachaud, J., Martin, A., Gasch, M.J., Mansour, N.N., 2017. Micro-tomography based analysis of thermal conductivity, diffusivity and oxidation behavior of rigid and flexible fibrous insulators. International Journal of Heat and Mass Transfer 108, 801-811.
* Petty and Palin [1983] Petty, J., Palin, M.A., 1983. Permeability to water of the fibre cell wall material of two hardwoods. Journal of Experimental Botany 34, 688-693.
* Reynolds et al. [2017] Reynolds, C.A., Menke, H., Andrew, M., Blunt, M.J., Krevor, S., 2017. Dynamic fluid connectivity during steady-state multiphase flow in a sandstone. Proceedings of the National Academy of Sciences 114, 8187-8192.
* Robayo-Salazar et al. [2017] Robayo-Salazar, R.A., Rivera, J.F., de Gutiérrez, R.M., 2017. Alkali-activated building materials made with recycled construction and demolition wastes. Construction and Building Materials 149, 130-138.
* Shen et al. [2023] Shen, L., Di Luzio, G., Cao, M., Ren, Q., Ren, X., Jiang, M., Zhu, D., Yao, X., 2023. Insights and theoretical model of thermal conductivity of thermally damaged hybrid steel-fine polypropylene fiber-reinforced concrete. Cement and Concrete Composites 138, 105001.
* Siegert et al. [2021] Siegert, M., Gurris, M., Saenger, E.H., 2021. Validation suite for numerical solvers calculating effective thermal conductivity in porous media. Journal of Applied Geophysics 189, 104323.
* Singh et al. [2019] Singh, K., Muljadi, B.P., Raeini, A.Q., Jost, C., Vandeginste, V., Blunt, M.J., Theraulaz, G., Degond, P., 2019. The architectural design of smart ventilation and drainage systems in termite nests. Science Advances 5, eaat8520.
* Soulaine et al. [2016] Soulaine, C., Gjetvaj, F., Garing, C., Roman, S., Russian, A., Gouze, P., Tchelepi, H.A., 2016. The impact of sub-resolution porosity of X-ray microtomography images on the permeability. Transport in Porous Media 113, 227-243.
* Stamm [2002] Stamm, A.J., 2002. Density of wood substance, adsorption by wood, and permeability of wood. The Journal of Physical Chemistry 33, 398-414.
* Twinn et al. [2019] Twinn, R., Desai, K., Box, P., 2019. Net zero carbon buildings: A framework definition.
* Xiao et al. [2023] Xiao, T., Zhang, Q., Yang, X., Hooman, K., Li, G., 2023. Influence of solder condition on effective thermal conductivity of two-directional random fibres: Pore-scale simulation. International Journal of Heat and Mass Transfer 202, 123715.
* Yang et al. [2019] Yang, H., Zhang, L., Liu, R., Wen, X., Yang, Y., Zhang, L., Zhang, K., Askari, R., 2019. Thermal conduction simulation based on reconstructed digital rocks with respect to fractures. Energies 12, 2768.
* Zhang et al. [2015] Zhang, W., Min, H., Gu, X., Xi, Y., Xing, Y., 2015. Mesoscale model for thermal conductivity of concrete. Construction and Building Materials 98, 8-16.
* Zhu et al. [2023] Zhu, J., Wang, Y., Xiao, R., Yang, J., 2023. Multiscale theoretical model of thermal conductivity of concrete and the mesoscale simulation of its temperature field. Journal of Materials in Civil Engineering 35, 04022359.
This is a revised and extended version of the paper [CFL15] which was presented at the 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015) in Bangalore, India. Compared to [CFL15], and in addition to numerous small changes and improvements, motivation and examples as well as proofs of all results have been added to the paper.

# An $\boldsymbol{\omega}$-Algebra for Real-Time Energy Problems

David Cachera (Irisa / École normale supérieure, Rennes, France), Uli Fahrenberg (École Polytechnique, Palaiseau, France) and Axel Legay (Irisa / Inria, Rennes, France)

###### Abstract.

We develop a ∗-continuous Kleene $\omega$-algebra of real-time energy functions. Together with corresponding automata, these can be used to model systems which can consume and regain energy (or other types of resources) depending on available time. Using recent results on ∗-continuous Kleene $\omega$-algebras and computability of certain manipulations on real-time energy functions, it follows that reachability and Büchi acceptance in real-time energy automata can be decided in a static way which only involves manipulations of real-time energy functions.

Most of this work was completed while this author was still employed at Irisa / Inria Rennes.

## 1\. Introduction

_Energy_ and _resource management_ problems are important in areas such as embedded systems or autonomous systems. They are concerned with the following types of questions:

* Can the system reach a designated state without running out of energy before?
* Can the system reach a designated state within a specified time limit without running out of energy?
* Can the system repeatedly accomplish certain designated tasks without ever running out of energy?

Instead of energy, these questions can also be asked using other resources, for example money or fuel.

Figure 1. GPS Block II-F satellite (artist's conception; public domain)

As an example, imagine a satellite like in Fig. 1 which is being sent up into space. In its initial state, when it has arrived at its orbit, its solar panels are still folded, hence no (electrical) energy is generated. Now it needs to unfold its solar panels and rotate itself and its panels into a position orthogonal to the sun's rays (for maximum energy yield). These operations require energy which hence must be provided by a battery, and there may be some operational requirements which state that they have to be completed within a given time limit. To minimize weight, one will generally be interested in using a battery which is as small as possible.

Figure 2. Toy model of the satellite in Fig. 1. Before rotation, the states closed, half and open have rates $0$, $2$ and $5$; after rotation, closed and half have rates $0$ and $4$, and the final state is operational. Each of the four panel-opening transitions has price $-20$, and each of the three rotation transitions has price $-10$.

Figure 2 shows a simple toy model of such a satellite's initial operations. We assume that it opens its solar panels in two steps, so that after the first step they are half open and afterwards fully open, and that it can rotate into orthogonal position at any time. The numbers within the states signify energy gain per time unit, so that for example in the half-open state, the satellite gains $2$ energy units per time unit before rotation and $4$ after rotation. The (negative) numbers at transitions signify the energy cost for taking that transition; hence it costs $20$ energy units to open the solar panels and $10$ to rotate.
Now if the satellite battery has sufficient energy, then we can follow any path from the initial to the final state without spending time in intermediate states. A simple inspection reveals that a battery level of $50$ energy units is required for this. On the other hand, if the battery level is strictly below $20$, then no path is available to the final state. With an initial energy level between these two values, the device has to regain energy by spending time in an intermediate state before proceeding to the next one. The optimal path then depends on the available time and the initial energy. For an initial energy level of at least $40$, the fastest strategy consists in first opening the panels and then spending $2$ time units in the state (open, $5$) to regain enough energy to reach the final state. With the smallest possible battery, storing $20$ energy units, $5$ time units have to be spent in the state (half, $2$) before passing to (half, $4$) and spending another $5$ time units there.

In this paper we will be concerned with models for such systems which, as in the example, allow time to be spent in states to regain energy, some of which is then spent when taking transitions between states. (Instead of energy, other resource types could be modeled, but from now on we will think of the resource as energy.) We call these models _real-time energy automata_. Their behavior thus depends on both the initial energy and the time available; as we have seen in the example, this interplay between time and energy means that even simple models can have rather complicated behaviors. As in the example, we will be concerned with the _reachability_ problem for such models, but also with _Büchi acceptance_: whether there exists an infinite run which visits certain designated states infinitely often.

Our methodology is strictly algebraic, using the theory of semiring-weighted automata [DKV09] and extensions developed in [ÉFL15a, ÉFL15b]. We view the finite behavior of a real-time energy automaton as a function $f(x_{0},t)$ which maps initial energy $x_{0}$ and available time $t$ to a final energy level, intuitively corresponding to the highest output energy the system can achieve when run with these parameters. We define a composition operator on such _real-time energy functions_ which corresponds to concatenation of real-time energy automata and show that with this composition and maximum as operators, the set of real-time energy functions forms a _∗-continuous Kleene algebra_ [Koz94]. This implies that reachability in real-time energy automata can be decided in a static way which only involves manipulations of real-time energy functions.

To be able to decide Büchi acceptance, we extend the algebraic setting to also encompass real-time energy functions which model infinite behavior. These take as input an initial energy $x_{0}$ and time $t$, as before, but now the output is Boolean: true if these parameters permit an infinite run, false if they do not. We show that both types of real-time energy functions can be organized into a _∗-continuous Kleene $\omega$-algebra_ as defined in [ÉFL15a, ÉFL15b]. This entails that Büchi acceptance for real-time energy automata can also be decided in a static way which only involves manipulations of real-time energy functions. The most technically demanding part of the paper is to show that real-time energy functions form a _locally closed semiring_ [DKV09, ÉK02]; generalizing some arguments in [ÉK02, ÉFL15b], it then follows that they form a ∗-continuous Kleene $\omega$-algebra.
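The numeric claims about the toy model can be checked mechanically by encoding Fig. 2 directly; the following Python sketch does this. The state names, the `run` helper, and the schedules are our own illustration; each schedule is a list of (state, waiting time, successor) steps, and an edge can only be taken if the current energy covers its cost.

```python
# Rates per state (the rate of the final state is irrelevant here) and
# edges (src, dst) -> cost, read off Fig. 2; "_r" marks rotated states.
RATE = {"closed": 0, "half": 2, "open": 5,
        "closed_r": 0, "half_r": 4, "operational": 0}
EDGES = {("closed", "half"): 20, ("half", "open"): 20,
         ("closed_r", "half_r"): 20, ("half_r", "operational"): 20,
         ("closed", "closed_r"): 10, ("half", "half_r"): 10,
         ("open", "operational"): 10}

def run(x0, schedule):
    """Apply (state, wait, next) steps from initial energy x0; return the
    final energy, or None if an edge's cost cannot be paid."""
    s, x = "closed", x0
    for state, wait, nxt in schedule:
        assert state == s
        x += RATE[s] * wait            # regain energy while waiting
        cost = EDGES[(s, nxt)]
        if x < cost:
            return None                # transition disabled
        x, s = x - cost, nxt
    return x

# Battery 50: no waiting needed.
assert run(50, [("closed", 0, "half"), ("half", 0, "open"),
                ("open", 0, "operational")]) == 0
# Battery 40: wait 2 time units in (open, 5).
assert run(40, [("closed", 0, "half"), ("half", 0, "open"),
                ("open", 2, "operational")]) == 0
# Battery 20: wait 5 in (half, 2), rotate, wait 5 in (half, 4).
assert run(20, [("closed", 0, "half"), ("half", 5, "half_r"),
                ("half_r", 5, "operational")]) == 0
```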
We conjecture that reachability and Büchi acceptance in real-time energy automata can be decided in exponential time.

### Related work

Real-time energy problems have been considered in [Qua11, BFL+08, BFLM10, BLM14, FJLS11]. These are generally defined on _priced timed automata_ [ATP01, BFH+01], a formalism which is more expressive than ours: it allows for time to be reset and admits several independent time variables (or _clocks_) which can be constrained at transitions. All known decidability results apply to priced timed automata with only _one_ clock; in [BLM14] it is shown that with four clocks, it is undecidable whether there exists an infinite run.

The work which is closest to ours is [BFLM10]. Their models are priced timed automata with one clock and energy updates on transitions, hence a generalization of ours. Using a sequence of complicated ad-hoc reductions, they show that reachability and existence of infinite runs are decidable for their models; whether their techniques apply to general Büchi acceptance is unclear.

Our work is part of a program to make methods from semiring-weighted automata available for energy problems. Starting with [ÉFLQ13], we have developed a general theory of ∗-continuous Kleene $\omega$-algebras [ÉFL15a, ÉFL15b] and shown that it applies to so-called _energy automata_, which are finite (untimed) automata which allow for rather general _energy transformations_ as transition updates. The contribution of this paper is to show that these algebraic techniques can be applied to a real-time setting.

Note that the application of Kleene algebra to real-time and hybrid systems is not a new subject, see for example [HM09, DHMS12]. However, the work in these papers is based on _trajectories_ and _interval predicates_, respectively, whereas our work is on real-time energy _automata_, i.e., at a different level. A more thorough comparison of our work to [HM09, DHMS12] would be interesting future work.

### Acknowledgment

We are deeply indebted to our colleague and friend Zoltán Ésik who taught us all we know about Kleene algebras and ∗-continuity. This work was started during a visit of Zoltán at Irisa in Rennes; unfortunately, Zoltán did not live to see it completed.

## 2\. Real-Time Energy Automata

Let $\mathbbm{R}_{\geq 0}=[0,\infty\mathclose{[}$ denote the set of non-negative real numbers, $[0,\infty]$ the set $\mathbbm{R}_{\geq 0}$ extended with infinity, and $\mathbbm{R}_{\leq 0}=\mathopen{]}-\infty,0]$ the set of non-positive real numbers.

A _real-time energy automaton_ (RTEA) $(S,s_{0},F,T,r)$ consists of a finite set $S$ of _states_, with _initial state_ $s_{0}\in S$, a subset $F\subseteq S$ of _accepting_ states, a finite set $T\subseteq S\times\mathbbm{R}_{\leq 0}\times\mathbbm{R}_{\geq 0}\times S$ of _transitions_, and a mapping $r:S\to\mathbbm{R}_{\geq 0}$ assigning _rates_ to states. A transition $(s,p,b,s^{\prime})$ is written $s\xrightarrow[b]{p}s^{\prime}$; $p$ is called its _price_ and $b$ its _bound_. We assume $b\geq-p$ for all transitions $s\xrightarrow[b]{p}s^{\prime}$. An RTEA is _computable_ if all its rates, prices and bounds are computable real numbers.

A _configuration_ of an RTEA $A=(S,s_{0},F,T,r)$ is an element $(s,x,t)\in C=S\times\mathbbm{R}_{\geq 0}\times\mathbbm{R}_{\geq 0}$.
Let $\mathord{\leadsto}\subseteq C\times C$ be the relation given by $(s,x,t)\leadsto(s^{\prime},x^{\prime},t^{\prime})$ iff $t^{\prime}\leq t$ and there is a transition $s\xrightarrow[b]{p}s^{\prime}$ such that $x+(t-t^{\prime})r(s)\geq b$ and $x^{\prime}=x+(t-t^{\prime})r(s)+p$. Hence $t-t^{\prime}$ time units are spent in state $s$ and afterwards the transition $s\xrightarrow[b]{p}s^{\prime}$ is taken. A _run_ in $A$ is a path in the infinite directed graph $(C,\mathord{\leadsto})$, i.e., a finite or infinite sequence $(s_{1},x_{1},t_{1})\leadsto(s_{2},x_{2},t_{2})\leadsto\dotsm$.

We are ready to state the decision problems for RTEAs with which we will be concerned. Let $A=(S,s_{0},F,T,r)$ be a computable RTEA and $x_{0},t,y\in[0,\infty]$ computable numbers.

###### Problem 1 (State reachability).

Does there exist a finite run $(s_{0},x_{0},t)\leadsto\dotsm\leadsto(s,x,t^{\prime})$ in $A$ with $s\in F$?

###### Problem 2 (Coverability).

Does there exist a finite run $(s_{0},x_{0},t)\leadsto\dotsm\leadsto(s,x,t^{\prime})$ in $A$ with $s\in F$ and $x\geq y$?

###### Problem 3 (Büchi acceptance).

Does there exist $s\in F$ and an infinite run $(s_{0},x_{0},t)\leadsto(s_{1},x_{1},t_{1})\leadsto\dotsm$ in $A$ in which $s_{n}=s$ for infinitely many $n\geq 0$?

Note that the coverability problem only asks for the final energy level $x$ to be _above_ $y$; as we are interested in _maximizing_ energy, this is natural. Also, state reachability can be reduced to coverability by setting $y=0$.

As the Büchi acceptance problem asks for infinite runs, there is no notion of output energy for this problem. Asking the Büchi acceptance question for a _finite_ available time $t<\infty$ amounts to finding (accepting) _Zeno runs_ in the given RTEA, i.e., runs which make infinitely many transitions in finite time. Hence one will usually be interested in Büchi acceptance only for an infinite time horizon. On the other hand, for $t=\infty$, a positive answer to the state reachability problem 1 will consist of a finite run $(s_{0},x_{0},\infty)\leadsto\dotsm\leadsto(s,x,\infty)$. Now as one can delay indefinitely in the state $s\in F$, this yields an infinite _timed run_ in the RTEA. Per our definition of $\leadsto$, however, such an infinite run will _not_ be a positive answer to the Büchi acceptance problem.
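The one-step relation $\leadsto$ is straightforward to implement; the following is a minimal sketch with our own type and function names, in which a successor configuration exists only if the bound is met after waiting.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    src: str
    price: float   # p <= 0
    bound: float   # b >= -p
    dst: str

def step(rate, trans, conf, delay):
    """One ~>-step: wait `delay` time units in trans.src, then take `trans`.

    rate: mapping from states to rates r(s); conf: configuration (s, x, t).
    Returns the successor configuration, or None if delay or bound fail.
    """
    s, x, t = conf
    if s != trans.src or delay > t:
        return None
    x_wait = x + rate[s] * delay       # x + (t - t') r(s)
    if x_wait < trans.bound:
        return None                    # bound not met: transition disabled
    return (trans.dst, x_wait + trans.price, t - delay)
```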
## 3\. Weighted Automata over ∗-Continuous Kleene $\omega$-Algebras

We now turn our attention to the algebraic setting of ∗-continuous Kleene algebras and related structures and review some results on ∗-continuous Kleene algebras and ∗-continuous Kleene $\omega$-algebras which we will need in the sequel.

### 3.1. ∗-Continuous Kleene Algebras

An _idempotent semiring_ [Gol99] $S=(S,\vee,\cdot,\bot,1)$ consists of an idempotent commutative monoid $(S,\vee,\bot)$ and a monoid $(S,\cdot,1)$ such that the distributive and zero laws

$x(y\vee z)=xy\vee xz\qquad\qquad(y\vee z)x=yx\vee zx\qquad\qquad\bot x=\bot=x\bot$

hold for all $x,y,z\in S$. It follows that the product operation distributes over all finite suprema. Each idempotent semiring $S$ is partially ordered by the relation $x\leq y$ iff $x\vee y=y$; sum and product preserve the partial order, and $\bot$ is the least element.

A _Kleene algebra_ [Koz94] is an idempotent semiring $S=(S,\vee,\cdot,\bot,1)$ equipped with an operation ${}^{*}:S\to S$ such that for all $x,y\in S$, $yx^{*}$ is the least solution of the fixed point equation $z=zx\vee y$ and $x^{*}y$ is the least solution of the fixed point equation $z=xz\vee y$ with respect to the order $\leq$.

A _∗-continuous Kleene algebra_ [Koz94] is a Kleene algebra $S=(S,\vee,\cdot,^{*},\bot,1)$ in which the infinite suprema $\bigvee\{x^{n}\mid n\geq 0\}$ exist for all $x\in S$, $x^{*}=\bigvee\{x^{n}\mid n\geq 0\}$ for every $x\in S$, and product preserves such suprema: for all $x,y\in S$,

$y\big(\bigvee_{n\geq 0}x^{n}\big)=\bigvee_{n\geq 0}yx^{n}\quad\text{and}\quad\big(\bigvee_{n\geq 0}x^{n}\big)y=\bigvee_{n\geq 0}x^{n}y\,.$

Examples of ∗-continuous Kleene algebras include the set $P(\Sigma^{*})$ of languages over an alphabet $\Sigma$, with set union as $\vee$ and concatenation as $\cdot$, and the set $P(A\times A)$ of relations over a set $A$, with set union as $\vee$ and relation composition as $\cdot$. These are, in fact, _continuous_ Kleene algebras in the sense that suprema $\bigvee X$ of arbitrary subsets $X$ exist. An important example of a ∗-continuous Kleene algebra which is not continuous is the set $R(\Sigma^{*})$ of _regular_ languages over an alphabet $\Sigma$. This example is canonical in the sense that $R(\Sigma^{*})$ is the _free_ ∗-continuous Kleene algebra over $\Sigma$.

An idempotent semiring $S=(S,\vee,\cdot,\bot,1)$ is said to be _locally closed_ [ÉK02] if for every $x\in S$, there exists $N\geq 0$ such that $\bigvee_{n=0}^{N}x^{n}=\bigvee_{n=0}^{N+1}x^{n}$. In any locally closed idempotent semiring, we may define a ∗-operation by $x^{*}=\bigvee_{n\geq 0}x^{n}$.

###### Lemma 4.

Any locally closed idempotent semiring is a ∗-continuous Kleene algebra.

###### Proof 3.1.

Let $S=(S,\vee,\cdot,\bot,1)$ be a locally closed idempotent semiring. We need to show that for all elements $x,y,z\in S$,

$xy^{*}=\bigvee_{n\geq 0}(xy^{n})\qquad\text{and}\qquad y^{*}z=\bigvee_{n\geq 0}(y^{n}z)\,.$

It is clear that the right-hand sides of the equations are less than or equal to their left-hand sides, so we are left with proving the other inequalities. As $S$ is locally closed, there is $N\geq 0$ such that $y^{*}=\bigvee_{n=0}^{N}y^{n}$, and then by distributivity, $xy^{*}=x\big(\bigvee_{n=0}^{N}y^{n}\big)=\bigvee_{n=0}^{N}(xy^{n})\leq\bigvee_{n\geq 0}(xy^{n})$; similarly, $y^{*}z\leq\bigvee_{n\geq 0}(y^{n}z)$.

### 3.2. ∗-Continuous Kleene $\omega$-Algebras

An _idempotent semiring-semimodule pair_ [ÉK07b, BÉ93] $(S,V)$ consists of an idempotent semiring $S=(S,\vee,\cdot,\bot,1)$ and a commutative idempotent monoid $V=(V,\vee,\bot)$ which is equipped with a left $S$-action $S\times V\to V$, $(s,v)\mapsto sv$, satisfying

$(s\vee s^{\prime})v=sv\vee s^{\prime}v\qquad\quad s(v\vee v^{\prime})=sv\vee sv^{\prime}\qquad\quad\bot v=\bot$

$(ss^{\prime})v=s(s^{\prime}v)\qquad\quad s\bot=\bot\qquad\quad 1v=v$

for all $s,s^{\prime}\in S$ and $v,v^{\prime}\in V$. In that case, we also call $V$ a _(left) $S$-semimodule_.
A _generalized ∗-continuous Kleene algebra_ [ÉFL15a] is an idempotent semiring-semimodule pair $(S,V)$ where $S=(S,\vee,\cdot,^{*},\bot,1)$ is a ∗-continuous Kleene algebra such that for all $x,y\in S$ and all $v\in V$,

$xy^{*}v=\bigvee_{n\geq 0}xy^{n}v\,.$

A _∗-continuous Kleene $\omega$-algebra_ [ÉFL15a] consists of a generalized ∗-continuous Kleene algebra $(S,V)$ together with an infinite product operation $S^{\omega}\to V$ which maps every infinite sequence $x_{0},x_{1},\dotsc$ in $S$ to an element $\prod_{n\geq 0}x_{n}$ of $V$. The infinite product is subject to the following conditions:

* For all $x_{0},x_{1},\dotsc\in S$, $\prod_{n\geq 0}x_{n}=x_{0}\prod_{n\geq 0}x_{n+1}$. (C1)
* Let $x_{0},x_{1},\dotsc\in S$ and let $0=n_{0}\leq n_{1}\leq\dotsm$ be a sequence which increases without bound. Let $y_{k}=x_{n_{k}}\dotsm x_{n_{k+1}-1}$ for all $k\geq 0$. Then $\prod_{n\geq 0}x_{n}=\prod_{k\geq 0}y_{k}$. (C2)
* For all $x_{0},x_{1},\dotsc,y,z\in S$, $\prod_{n\geq 0}(x_{n}(y\vee z))=\bigvee_{x_{0}^{\prime},x_{1}^{\prime},\dotsc\in\{y,z\}}\prod_{n\geq 0}x_{n}x_{n}^{\prime}$. (C3)
* For all $x,y_{0},y_{1},\dotsc\in S$, $\prod_{n\geq 0}x^{*}y_{n}=\bigvee_{k_{0},k_{1},\dotsc\geq 0}\prod_{n\geq 0}x^{k_{n}}y_{n}$. (C4)

Hence the infinite product extends the finite product (C1); it is finitely associative (C2); it preserves finite suprema (C3); and it preserves the ∗-operation (and hence infinite suprema of the form $\bigvee_{n\geq 0}x^{n}$) (C4).

An example of a ∗-continuous Kleene $\omega$-algebra is the structure $(P(\Sigma^{*}),P(\Sigma^{\infty}))$ consisting of the set $P(\Sigma^{*})$ of languages of finite words and the set $P(\Sigma^{\infty})$ of languages of finite or infinite words over an alphabet $\Sigma$. This is, in fact, a _continuous_ Kleene $\omega$-algebra [ÉFL15a] in the sense that the infinite product preserves _all_ suprema. A ∗-continuous Kleene $\omega$-algebra which is _not_ continuous is $(R(\Sigma^{*}),R^{\prime}(\Sigma^{\infty}))$, where $R(\Sigma^{*})$ is the set of regular languages over $\Sigma$, and $R^{\prime}(\Sigma^{\infty})$ contains all subsets of the set $\Sigma^{\infty}$ of finite or infinite words which are finite unions of _finitary_ infinite products of regular languages, see [ÉFL15a].

### 3.3. Matrix Semiring-Semimodule Pairs

For any idempotent semiring $S$ and $n\geq 1$, we can form the matrix semiring $S^{n\times n}$ whose elements are $n\times n$-matrices of elements of $S$ and whose sum and product are given as the usual matrix sum and product. It is known [Koz90] that when $S$ is a ∗-continuous Kleene algebra, then $S^{n\times n}$ is also a ∗-continuous Kleene algebra, with the ∗-operation defined by

$M^{*}_{i,j}=\bigvee_{m\geq 0}\bigvee\big\{M_{k_{1},k_{2}}M_{k_{2},k_{3}}\dotsm M_{k_{m-1},k_{m}}\mid 1\leq k_{1},\dotsc,k_{m}\leq n,\,k_{1}=i,\,k_{m}=j\big\}$ (1)

for all $M\in S^{n\times n}$ and $1\leq i,j\leq n$.
Also, if $n\geq 2$ and $M=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)$, where $a$ and $d$ are square matrices of dimension less than $n$, then

$M^{*}=\begin{pmatrix}(a\vee bd^{*}c)^{*}&(a\vee bd^{*}c)^{*}bd^{*}\\ (d\vee ca^{*}b)^{*}ca^{*}&(d\vee ca^{*}b)^{*}\end{pmatrix}$ (2)

For any idempotent semiring-semimodule pair $(S,V)$ and $n\geq 1$, we can form the matrix semiring-semimodule pair $(S^{n\times n},V^{n})$ whose elements are $n\times n$-matrices of elements of $S$ and $n$-dimensional (column) vectors of elements of $V$, with the action of $S^{n\times n}$ on $V^{n}$ given by the usual matrix-vector product. When $(S,V)$ is a ∗-continuous Kleene $\omega$-algebra, then $(S^{n\times n},V^{n})$ is a generalized ∗-continuous Kleene algebra [ÉFL15a]. By [ÉFL15a, Lemma 17], there is an $\omega$-operation on $S^{n\times n}$ defined by

$M^{\omega}_{i}=\bigvee_{1\leq k_{1},k_{2},\dotsc\leq n}M_{i,k_{1}}M_{k_{1},k_{2}}\dotsm$

for all $M\in S^{n\times n}$ and $1\leq i\leq n$. Also, if $n\geq 2$ and $M=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)$, where $a$ and $d$ are square matrices of dimension less than $n$, then

$M^{\omega}=\begin{pmatrix}(a\vee bd^{*}c)^{\omega}\vee(a\vee bd^{*}c)^{*}bd^{\omega}\\ (d\vee ca^{*}b)^{\omega}\vee(d\vee ca^{*}b)^{*}ca^{\omega}\end{pmatrix}$ (3)

It can be shown [ÉK07a] that the number of semiring computations required in the computation of $M^{*}$ and $M^{\omega}$ in (2) and (3) is $O(n^{3})$ and $O(n^{4})$, respectively.

### 3.4. Weighted automata

Let $(S,V)$ be a ∗-continuous Kleene $\omega$-algebra and $A\subseteq S$ a subset. We write $\langle A\rangle$ for the set of all finite suprema $a_{1}\vee\dotsm\vee a_{m}$ with $a_{i}\in A$ for each $i=1,\dotsc,m$. A _weighted automaton_ [DKV09] over $A$ of dimension $n\geq 1$ is a tuple $(\alpha,M,k)$, where $\alpha\in\{\bot,1\}^{n}$ is the initial vector, $M\in\langle A\rangle^{n\times n}$ is the transition matrix, and $k$ is an integer $0\leq k\leq n$.

Combinatorially, this may be represented as a transition system whose set of states is $\{1,\dotsc,n\}$. For any pair of states $i,j$, the transitions from $i$ to $j$ are determined by the entry $M_{i,j}$ of the transition matrix: if $M_{i,j}=a_{1}\vee\dotsm\vee a_{m}$, then there are $m$ transitions from $i$ to $j$, respectively labeled $a_{1},\dotsc,a_{m}$. The states $i$ with $\alpha_{i}=1$ are _initial_, and the states $\{1,\dotsc,k\}$ are _accepting_.

The _finite behavior_ of a weighted automaton $A=(\alpha,M,k)$ is defined to be

$|A|=\alpha M^{*}\kappa\,,$

where $\kappa\in\{\bot,1\}^{n}$ is the vector given by $\kappa_{i}=1$ for $i\leq k$ and $\kappa_{i}=\bot$ for $i>k$. (Note that $\alpha$ has to be used as a _row_ vector for this multiplication to make sense.) It is clear by (1) that $|A|$ is the supremum of the products of the transition labels along all paths in $A$ from any initial to any accepting state.

The _Büchi behavior_ of a weighted automaton $A=(\alpha,M,k)$ is defined to be

$\|A\|=\alpha\begin{pmatrix}(a\vee bd^{*}c)^{\omega}\\ d^{*}c(a\vee bd^{*}c)^{\omega}\end{pmatrix},$

where $a\in\langle A\rangle^{k\times k}$, $b\in\langle A\rangle^{k\times(n-k)}$, $c\in\langle A\rangle^{(n-k)\times k}$ and $d\in\langle A\rangle^{(n-k)\times(n-k)}$ are such that $M=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)$.
Note that $M$ is split in submatrices $\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)$ precisely so that $a$ contains transitions between accepting states and $d$ contains transitions between non-accepting states. By [ÉFL15a, Thm. 20], $\|A\|$ is the supremum of the products of the transition labels along all infinite paths in $A$ from any initial state which infinitely often visit an accepting state.

## 4. Real-Time Energy Functions

We are now ready to consider the algebra of real-time energy functions. We will build this up inductively, starting from the functions which correspond to simple _atomic_ RTEAs. These can be composed to form _linear_ real-time energy functions, and with additional maximum and star operations, they form a ∗-continuous Kleene algebra. When also taking infinite behaviors into account, we get a ∗-continuous Kleene $\omega$-algebra of real-time energy functions.

Let $L=[0,\infty]_{\bot}$ denote the set of non-negative real numbers extended with a bottom element $\bot$ and a top element $\infty$. We use the standard order on $L$, i.e., the one on $\mathbbm{R}_{\geq 0}$ extended by declaring $\bot\leq x\leq\infty$ for all $x\in L$. $L$ is a complete lattice, whose suprema we will denote by $\vee$ for binary and $\bigvee$ for general supremum. For convenience we also extend the addition on $\mathbbm{R}_{\geq 0}$ to $L$ by declaring that $\bot+x=x+\bot=\bot$ for all $x\in L$ and $\infty+x=x+\infty=\infty$ for all $x\in L\setminus\{\bot\}$. Note that $\bot+\infty=\infty+\bot=\bot$.

Let $\mathcal{F}$ denote the set of monotonic functions $f:L\times[0,\infty]\to L$ (with the product order on $L\times[0,\infty]$) for which $f(\bot,t)=\bot$ for all $t\in[0,\infty]$. We will frequently write such functions in curried form, using the isomorphism $\langle L\times[0,\infty]\to L\rangle\approx\langle[0,\infty]\to L\to L\rangle$.

### 4.1. Linear Real-Time Energy Functions

We will be concerned with the subset of functions in $\mathcal{F}$ consisting of _real-time energy functions_ (RTEFs). These correspond to functions expressed by RTEAs, and we will construct them inductively. We start with atomic RTEFs:

Let $r,b,p\in\mathbbm{R}$ with $r\geq 0$, $p\leq 0$ and $b\geq-p$. An _atomic real-time energy function_ is an element $f$ of $\mathcal{F}$ such that $f(\bot,t)=\bot$, $f(\infty,t)=\infty$, $f(x,\infty)=\infty$, and

$f(x,t)=\begin{cases}x+rt+p&\text{if }x+rt\geq b\,,\\ \bot&\text{otherwise}\end{cases}$

for all $x,t\in\mathbbm{R}_{\geq 0}$. The numbers $r,b$ and $p$ are respectively called the _rate_, _bound_ and _price_ of $f$. We denote by $\mathcal{A}\subseteq\mathcal{F}$ the set of atomic real-time energy functions. These functions arise from RTEAs with a single location, of rate $r$, and a single transition, with price $p$ and bound $b$. Non-negativity of $r$ ensures that atomic RTEFs are monotonic. In our examples, when the bound is not explicitly mentioned it corresponds to the lowest possible one: $b=-p$.

Atomic RTEFs are naturally combined along acyclic paths by means of a composition operator. Intuitively, a composition of two successive atomic RTEFs determines the optimal output energy one can get after spending some time in either one or both locations of the corresponding automaton. This notion of composition is naturally extended to all functions in $\mathcal{F}$, and formally defined as follows (where $\circ$ denotes standard function composition).
The _composition_ of $f,g\in\mathcal{F}$ is the element $f\mathrel{\mathop{\triangleright}}g$ of $\mathcal{F}$ such that

$\forall t\in[0,\infty]:f\mathrel{\mathop{\triangleright}}g(t)=\bigvee_{t_{1}+t_{2}=t}g(t_{2})\circ f(t_{1})$ (4)

Note that composition is written in diagrammatic order. Uncurrying the equation, we see that $f\mathrel{\mathop{\triangleright}}g(x,t)=\bigvee_{t_{1}+t_{2}=t}g(f(x,t_{1}),t_{2})$. Let $\mathbf{1},\bot\in\mathcal{F}$ be the functions defined by $\mathbf{1}(t)(x)=x$ and $\bot(t)(x)=\bot$ for all $x,t$.

###### Lemma 5.

The $\mathrel{\mathop{\triangleright}}$ operator is associative, with $\mathbf{1}$ as neutral and $\bot$ as absorbing elements.

###### Proof 4.1.

Let $f\in\mathcal{F}$. It is clear that $f\mathrel{\mathop{\triangleright}}\bot=\bot\mathrel{\mathop{\triangleright}}f=\bot$. For $f\mathrel{\mathop{\triangleright}}\mathbf{1}$ and $\mathbf{1}\mathrel{\mathop{\triangleright}}f$, we have $f\mathrel{\mathop{\triangleright}}\mathbf{1}(t)(x)=\bigvee_{t_{1}+t_{2}=t}\mathbf{1}(f(x,t_{1}),t_{2})=\bigvee_{t_{1}+t_{2}=t}f(x,t_{1})=f(x,t)$ because of monotonicity of $f$. Similarly, $\mathbf{1}\mathrel{\mathop{\triangleright}}f(t)(x)=\bigvee_{t_{1}+t_{2}=t}f(\mathbf{1}(x,t_{1}),t_{2})=\bigvee_{t_{1}+t_{2}=t}f(x,t_{2})=f(x,t)$ because of monotonicity of $f$. As to associativity, a routine calculation shows that for all $f,g,h\in\mathcal{F}$, $((f\mathrel{\mathop{\triangleright}}g)\mathrel{\mathop{\triangleright}}h)(t)=(f\mathrel{\mathop{\triangleright}}(g\mathrel{\mathop{\triangleright}}h))(t)=\bigvee_{t_{1}+t_{2}+t_{3}=t}h(t_{3})\circ g(t_{2})\circ f(t_{1})$.

Compositions of atomic RTEFs along paths are called _linear_ RTEFs: A _linear real-time energy function_ is a finite composition $f_{1}\mathrel{\mathop{\triangleright}}f_{2}\mathrel{\mathop{\triangleright}}\dots\mathrel{\mathop{\triangleright}}f_{n}$ of atomic RTEFs.

As an example, and also to show that linear RTEFs can have quite complex behavior, we show the linear RTEF associated to one of the paths in the satellite example of the introduction. Consider the linear RTEA with three locations of rates $0$, $2$ and $5$, whose three transitions carry prices $-20$, $-20$, $-10$ and bounds $20$, $20$, $10$, respectively. Its linear RTEF $f$ can be computed as follows:

$f(x,t)=\begin{cases}\bot&\text{if }x<20\text{ or }(20\leq x<40\text{ and }x+2t<44)\\ &\phantom{\text{if }x<20}\text{ or }(x\geq 40\text{ and }x+5t<50)\\ 2.5x+5t-110&\text{if }20\leq x<40\text{ and }x+2t\geq 44\\ x+5t-50&\text{if }x\geq 40\text{ and }x+5t\geq 50\end{cases}$

We show a graphical representation of $f$ on Fig. 3. The left part of the figure shows the boundary between two regions in the $(x,t)$ plane, corresponding to the minimal value $0$ achieved by the function. Below this boundary, no path exists through the corresponding RTEA. Above, the function is linear in $x$ and $t$. The coefficient of $t$ corresponds to the maximal rate in the RTEA; the coefficient of $x$ depends on the relative position of $x$ with respect to (partial sums of) the bounds $b_{i}$.

Figure 3. Graphical representation of the linear RTEF from Example 4.1. (Left: the boundary in the $(x,t)$ plane between the regions $f(x,t)=\bot$ and $f(x,t)\neq\bot$, with slopes $-\infty$, $-1/2$ and $-1/5$; right: the graph of $f$. Plots omitted.)
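As a sanity check on the composition operator (4) and on the closed form computed in Example 4.1, the following small sketch (ours; all names are hypothetical) evaluates the composition numerically. The grid search only approximates the supremum in (4), but at the sample points below the optimum falls exactly on the grid, so the values agree with the piecewise formula.

```python
BOT = None  # stands for the bottom element of L

def atomic(r, p, b):
    """Atomic RTEF with rate r, price p and bound b (r >= 0, p <= 0, b >= -p)."""
    def f(x, t):
        if x is BOT:
            return BOT
        return x + r * t + p if x + r * t >= b else BOT
    return f

def compose(f, g, steps=400):
    """Approximate (f |> g)(x, t) = sup over t1 + t2 = t of g(f(x, t1), t2)."""
    def h(x, t):
        best = BOT
        for i in range(steps + 1):
            t1 = t * i / steps
            y = g(f(x, t1), t - t1)
            if y is not BOT and (best is BOT or y > best):
                best = y
        return best
    return h

# The linear RTEA of Example 4.1: rates 0, 2, 5; prices -20, -20, -10;
# bounds 20, 20, 10.
f = compose(compose(atomic(0, -20, 20), atomic(2, -20, 20)), atomic(5, -10, 10))

def closed_form(x, t):  # the piecewise formula from Example 4.1
    if 20 <= x < 40 and x + 2 * t >= 44:
        return 2.5 * x + 5 * t - 110
    if x >= 40 and x + 5 * t >= 50:
        return x + 5 * t - 50
    return BOT

for x, t in [(25, 12), (30, 10), (45, 3), (10, 100), (20, 11)]:
    a, b = f(x, t), closed_form(x, t)
    assert (a is BOT) == (b is BOT) and (a is BOT or abs(a - b) < 1e-6), (x, t)
```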
### 4.2. Normal Form

Next we need to see that all linear RTEFs can be converted to a _normal form_: A sequence $f_{1},\dotsc,f_{n}$ of atomic RTEFs, with rates, bounds and prices $r_{1},\dotsc,r_{n}$, $b_{1},\dotsc,b_{n}$ and $p_{1},\dotsc,p_{n}$, respectively, is in _normal form_ if

* • $r_{1}<\dotsm<r_{n}$,
* • $b_{1}\leq\dotsm\leq b_{n}$, and
* • $p_{1}=\dotsm=p_{n-1}=0$.

###### Lemma 6.

For any linear RTEF $f$ there exists a sequence $f_{1},\dotsc,f_{n}$ of atomic RTEFs in normal form such that $f=f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}$.

A normal form of the RTEF from Example 4.1 is given by the linear RTEA with rates $0$, $2$ and $5$, transition prices $0$, $0$, $-50$ and bounds $20$, $40$, $50$. It is clear that its energy function is the same as the one of Example 4.1: any run which satisfies the new constraints is equivalent to one which satisfies the old ones, and vice versa.

###### Proof 4.2.

Let $f=f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}$, where $f_{1},\dotsc,f_{n}$ are atomic RTEFs, and assume $f_{1},\dots,f_{n}$ is not in normal form. If there is an index $k\in\{1,\dotsc,n-1\}$ with $r_{k}\geq r_{k+1}$, then we can use the following transformation to remove the state with rate $r_{k+1}$: the two consecutive transitions out of the states with rates $r_{k}$ and $r_{k+1}$, with prices $p_{k}$, $p_{k+1}$ and bounds $b_{k}$, $b_{k+1}$, are replaced by a single transition out of the state with rate $r_{k}$, with price $p_{k}+p_{k+1}$ and bound $\max(b_{k},b_{k+1}-p_{k})$, and the state with rate $r_{k+1}$ is removed.

Informally, any run through the RTEA for $f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}$ which maximizes output energy will spend no time in the state with rate $r_{k+1}$, as this time may as well be spent in the state with rate $r_{k}$ without lowering output energy. To make this argument precise, we prove that this transformation does not change the values of $f$.
Let $f^{\prime}$ denote the function which results from the transformation. Let $x\in L$ and $t\in[0,\infty]$. We show first that $f(x,t)\leq f^{\prime}(x,t)$, which is clear if $f(x,t)=\bot$. If $f(x,t)\neq\bot$, then there is an accepting run through the RTEA corresponding to $f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}$. Hence we have $t_{1}+\dotsm+t_{n}=t$ such that $f(x,t)=x+r_{1}t_{1}+p_{1}+\dotsm+r_{n}t_{n}+p_{n}$ and $x+\dotsm+r_{j}t_{j}\geq b_{j}$ for all $j=1,\dotsc,n$. Let $t_{k}^{\prime}=t_{k}+t_{k+1}$, $t_{k+1}^{\prime}=0$, and $t_{j}^{\prime}=t_{j}$ for all $j\notin\{k,k+1\}$. By $r_{k}\geq r_{k+1}$, we know that $x+\dotsm+r_{k}t_{k}^{\prime}\geq b_{k}$ and $x+\dotsm+r_{k+1}t_{k+1}^{\prime}\geq b_{k+1}$, hence $x+\dotsm+r_{j}t_{j}^{\prime}\geq b_{j}$ for all $j=1,\dotsc,n$. Hence this new run is also accepting, and $x+r_{1}t_{1}^{\prime}+p_{1}+\dotsm+r_{n}t_{n}^{\prime}+p_{n}\geq f(x,t)$. Because $t_{k+1}^{\prime}=0$, this also yields an accepting run through the RTEA for $f^{\prime}$, showing that $f^{\prime}(x,t)\geq f(x,t)$.

The other inequality, $f(x,t)\geq f^{\prime}(x,t)$, is clear if $f^{\prime}(x,t)=\bot$. Otherwise, there is an accepting run through the RTEA for $f^{\prime}$. Hence we have $t_{1}+\dotsm+t_{n}=t$, with $t_{k+1}=0$, such that $f^{\prime}(x,t)=x+r_{1}t_{1}+p_{1}+\dotsm+r_{n}t_{n}+p_{n}$ and $x+\dotsm+r_{j}t_{j}\geq b_{j}$ for all $j=1,\dotsc,n$. But then this is also an accepting run through the RTEA for $f$, showing that $f(x,t)\geq f^{\prime}(x,t)$.

We can hence assume that $r_{1}<\dotsm<r_{n}$. To ensure the last two conditions of Definition 4.2, we use the following transformation: two consecutive transitions with prices $p_{k}$, $p_{k+1}$ and bounds $b_{k}$, $b_{k+1}$ (between the states with rates $r_{k}$, $r_{k+1}$ and $r_{k+2}$) are replaced by a transition with price $0$ and bound $b_{k}$ followed by a transition with price $p_{k}+p_{k+1}$ and bound $\max(b_{k},b_{k+1}-p_{k})$; the rates of the states are unchanged.

Informally, any run through the original RTEA can be copied to the other and vice versa, hence also this transformation does not change the values of $f$. The precise argument is as follows. Let $f^{\prime}$ denote the function which results from the transformation. Let $x\in L$ and $t\in[0,\infty]$. The inequality $f(x,t)\leq f^{\prime}(x,t)$ is again clear if $f(x,t)=\bot$, so assume otherwise. Let $t_{1}+\dotsm+t_{n}=t$ such that $f(x,t)=x+r_{1}t_{1}+p_{1}+\dotsm+r_{n}t_{n}+p_{n}$ and $x+\dotsm+r_{j}t_{j}\geq b_{j}$ for all $j=1,\dotsc,n$. Then this also yields an accepting run through the RTEA for $f^{\prime}$, hence $f^{\prime}(x,t)\geq f(x,t)$. The proof that $f(x,t)\geq f^{\prime}(x,t)$ is similar.
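The proof of Lemma 6 is constructive, and the two transformations translate directly into a normalization procedure for linear RTEFs. The following sketch (ours; atomic RTEFs are modelled as `(rate, price, bound)` triples) first merges consecutive states with non-increasing rates and then pushes all prices onto the last transition; on the path of Example 4.1 it reproduces the normal form given after Lemma 6. Note that the sketch manipulates representations only; equality of the represented functions is exactly what the proof above establishes.

```python
def merge_nonincreasing(atoms):
    """First transformation: while some r_k >= r_{k+1}, replace the two atoms
    by one with rate r_k, price p_k + p_{k+1}, bound max(b_k, b_{k+1} - p_k)."""
    atoms = list(atoms)
    k = 0
    while k < len(atoms) - 1:
        (r1, p1, b1), (r2, p2, b2) = atoms[k], atoms[k + 1]
        if r1 >= r2:
            atoms[k:k + 2] = [(r1, p1 + p2, max(b1, b2 - p1))]
            k = max(k - 1, 0)  # re-check against the left neighbour
        else:
            k += 1
    return atoms

def shift_prices(atoms):
    """Second transformation, applied left to right: replace prices/bounds
    (p_k, b_k), (p_{k+1}, b_{k+1}) by (0, b_k), (p_k + p_{k+1}, max(b_k, b_{k+1} - p_k)).
    Afterwards all prices sit on the last transition and bounds are non-decreasing."""
    atoms = list(atoms)
    for k in range(len(atoms) - 1):
        (r1, p1, b1), (r2, p2, b2) = atoms[k], atoms[k + 1]
        atoms[k], atoms[k + 1] = (r1, 0, b1), (r2, p1 + p2, max(b1, b2 - p1))
    return atoms

def normal_form(atoms):
    return shift_prices(merge_nonincreasing(atoms))

# The satellite path of Example 4.1 normalizes to the RTEA shown after Lemma 6.
assert normal_form([(0, -20, 20), (2, -20, 20), (5, -10, 10)]) == \
    [(0, 0, 20), (2, 0, 40), (5, -50, 50)]
```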
Next we define a total order on normal-form sequences of atomic RTEFs. Using this ordering, we will later be able to show that the semiring of general real-time energy functions is locally closed.

Let $f_{1},\dotsc,f_{n}$ and $f_{1}^{\prime},\dotsc,f_{n^{\prime}}^{\prime}$ be normal-form sequences of atomic RTEFs with rate sequences $r_{1}<\dotsm<r_{n}$ and $r_{1}^{\prime}<\dotsm<r_{n^{\prime}}^{\prime}$, respectively. Then $f_{1},\dotsc,f_{n}$ is _not better than_ $f_{1}^{\prime},\dotsc,f_{n^{\prime}}^{\prime}$, denoted $(f_{1},\dotsc,f_{n})\preceq(f_{1}^{\prime},\dotsc,f_{n^{\prime}}^{\prime})$, if $r_{n}\leq r_{n^{\prime}}^{\prime}$.

Note that $(f_{1},\dotsc,f_{n})\preceq(f_{1}^{\prime},\dotsc,f_{n^{\prime}}^{\prime})$ does not imply $f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}\leq f_{1}^{\prime}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n^{\prime}}^{\prime}$ even for very simple functions. For a counterexample, consider the two following linear RTEFs $f=f_{1}$ and $f^{\prime}=f_{1}^{\prime}\mathrel{\mathop{\triangleright}}f_{2}^{\prime}$, where $f_{1}$ is atomic with rate $4$, price $0$ and bound $0$, and $f_{1}^{\prime}$ and $f_{2}^{\prime}$ are atomic with rates $1$ and $5$, both with price $0$, and with bounds $1$ and $2$, respectively. We have $(f_{1})\preceq(f_{1}^{\prime},f_{2}^{\prime})$, and for $x\geq 2$, $f(x,t)=x+4t$ and $f^{\prime}(x,t)=x+5t$, hence $f(x,t)\leq f^{\prime}(x,t)$. But $f(0,1)=4$, whereas $f^{\prime}(0,1)=\bot$.

###### Lemma 7.

If $f=f_{1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n}$ and $f^{\prime}=f_{1}^{\prime}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n^{\prime}}^{\prime}$ are such that $(f_{1},\dotsc,f_{n})\preceq(f_{1}^{\prime},\dotsc,f_{n^{\prime}}^{\prime})$, then $f^{\prime}\mathrel{\mathop{\triangleright}}f\leq f^{\prime}$.

###### Proof 4.3.

Let $r_{1}<\dotsm<r_{n}$ and $r_{1}^{\prime}<\dotsm<r_{n^{\prime}}^{\prime}$ be the corresponding rate sequences, then $r_{n}\leq r_{n^{\prime}}^{\prime}$; write $p$ and $p^{\prime}$ for the total prices of $f$ and $f^{\prime}$. Transformed to normal form, using that for all indices $i$, $r_{i}\leq r_{n}\leq r_{n^{\prime}}^{\prime}$, the RTEA for $f^{\prime}\mathrel{\mathop{\triangleright}}f$ has locations of rates $r_{1}^{\prime},\dotsc,r_{n^{\prime}}^{\prime}$, intermediate prices $0$, first bound $b_{1}^{\prime}$, and a last transition with price $p+p^{\prime}$ and bound $\max(b_{n^{\prime}}^{\prime},b_{n}-p^{\prime})$; the RTEA for $f^{\prime}$ has the same locations, with last transition of price $p^{\prime}$ and bound $b_{n^{\prime}}^{\prime}$. As $p+p^{\prime}\leq p^{\prime}$ (because $p\leq 0$) and $\max(b_{n^{\prime}}^{\prime},b_{n}-p^{\prime})\geq b_{n^{\prime}}^{\prime}$, it is clear that $f^{\prime}\mathrel{\mathop{\triangleright}}f(x,t)\leq f^{\prime}(x,t)$ for all $x\in L$, $t\in[0,\infty]$.

### 4.3. General Real-Time Energy Functions

We now consider all paths that may arise in a real-time energy automaton. When two locations of an automaton may be joined by two distinct paths, the optimal output energy is naturally obtained by taking the maximum over both paths. This gives rise to the following definition.

Let $f,g\in\mathcal{F}$. The function $f\vee g$ is defined as the pointwise supremum:

$\forall t\in[0,\infty]:(f\vee g)(t)=f(t)\vee g(t)\,.$

###### Lemma 8.

With operations $\vee$ and $\mathrel{\mathop{\triangleright}}$, $\mathcal{F}$ forms a complete lattice and an idempotent semiring, with $\bot$ as unit for $\vee$ and $\mathbf{1}$ as unit for $\mathrel{\mathop{\triangleright}}$.

###### Proof 4.4.

To show completeness, we note that for $\mathcal{X}\subseteq\mathcal{F}$, $\bigvee\mathcal{X}$ is the function defined by $(\bigvee\mathcal{X})(x,t)=\bigvee\{f(x,t)\mid f\in\mathcal{X}\}$, which is monotonic and hence an element of $\mathcal{F}$.
For the semiring axioms, it only remains to show the distributive laws. Let $f,g,h\in\mathcal{F}$ and $t\in[0,\infty]$, then

$(f\mathrel{\mathop{\triangleright}}(g\vee h))(t)=\bigvee_{t_{1}+t_{2}=t}(g\vee h)(t_{2})\circ f(t_{1})=\bigvee_{t_{1}+t_{2}=t}(g(t_{2})\vee h(t_{2}))\circ f(t_{1})$

$=\bigvee_{t_{1}+t_{2}=t}\big(g(t_{2})\circ f(t_{1})\vee h(t_{2})\circ f(t_{1})\big)=\bigvee_{t_{1}+t_{2}=t}g(t_{2})\circ f(t_{1})\vee\bigvee_{t_{1}+t_{2}=t}h(t_{2})\circ f(t_{1})$

$=f\mathrel{\mathop{\triangleright}}g(t)\vee f\mathrel{\mathop{\triangleright}}h(t)=(f\mathrel{\mathop{\triangleright}}g\vee f\mathrel{\mathop{\triangleright}}h)(t)\,.$

Similarly, and using monotonicity of $h$, we see that

$((f\vee g)\mathrel{\mathop{\triangleright}}h)(t)=\bigvee_{t_{1}+t_{2}=t}h(t_{2})\circ(f\vee g)(t_{1})=\bigvee_{t_{1}+t_{2}=t}h(t_{2})\circ(f(t_{1})\vee g(t_{1}))$

$=\bigvee_{t_{1}+t_{2}=t}\big(h(t_{2})\circ f(t_{1})\vee h(t_{2})\circ g(t_{1})\big)=\bigvee_{t_{1}+t_{2}=t}h(t_{2})\circ f(t_{1})\vee\bigvee_{t_{1}+t_{2}=t}h(t_{2})\circ g(t_{1})$

$=f\mathrel{\mathop{\triangleright}}h(t)\vee g\mathrel{\mathop{\triangleright}}h(t)=(f\mathrel{\mathop{\triangleright}}h\vee g\mathrel{\mathop{\triangleright}}h)(t)\,.$

The proof is complete.

Finally, a cycle in an RTEA results in a ∗-operation: Let $f\in\mathcal{F}$. The Kleene star of $f$ is the function $f^{*}\in\mathcal{F}$ such that

$\forall t\in[0,\infty]:f^{*}(t)=\bigvee_{n\geq 0}f^{n}(t)\,.$

Note that $f^{*}$ is defined for all $f\in\mathcal{F}$ because $\mathcal{F}$ is a complete lattice. We can now define the set of general real-time energy functions, corresponding to general RTEAs: The set $\mathcal{E}$ of _real-time energy functions_ is the subsemiring of $\mathcal{F}$ generated by $\mathcal{A}$, i.e., the subset of $\mathcal{F}$ inductively defined by

* • $\mathcal{A}\subseteq\mathcal{E}$,
* • if $f,g\in\mathcal{E}$, then $f\mathrel{\mathop{\triangleright}}g\in\mathcal{E}$ and $f\vee g\in\mathcal{E}$.

We will show below that $\mathcal{E}$ is locally closed, which entails that for each $f\in\mathcal{E}$, also $f^{*}\in\mathcal{E}$, hence $\mathcal{E}$ indeed encompasses all RTEFs.

###### Lemma 9.

For every $f\in\mathcal{E}$ there exists $N\geq 0$ so that $f^{*}=\bigvee_{n=0}^{N}f^{n}$.

###### Proof 4.5.

By distributivity, we can write $f$ as a finite supremum $f=\bigvee_{k=1}^{m}f_{k}$ of linear energy functions $f_{1},\dotsc,f_{m}$. For each $k=1,\dotsc,m$, let $f_{k}=f_{k,1}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{k,n_{k}}$ be a normal-form representation. By re-ordering the $f_{k}$ if necessary, we can assume that $(f_{k,1},\dotsc,f_{k,n_{k}})\preceq(f_{k+1,1},\dotsc,f_{k+1,n_{k+1}})$ for every $k=1,\dotsc,m-1$.

We first show that $f^{*}\leq\bigvee_{0\leq n_{1},\dotsc,n_{m}\leq 1}f_{1}^{n_{1}}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{m}^{n_{m}}$: The expansion of $f^{*}=(\bigvee_{k=1}^{m}f_{k})^{*}$ is an infinite supremum of finite compositions $f_{i_{1}}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{i_{p}}$. By Lemma 7, any occurrence of $f_{i_{j}}\mathrel{\mathop{\triangleright}}f_{i_{j+1}}$ in such compositions with $i_{j}\geq i_{j+1}$ can be replaced by $f_{i_{j}}$. The compositions which are left have $i_{j}<i_{j+1}$ for every $j$, so the claim follows.
Now $\bigvee_{0\leq n_{1},\dotsc,n_{m}\leq 1}f_{1}^{n_{1}}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{m}^{n_{m}}\leq\bigvee_{n=0}^{m}\big(\bigvee_{k=1}^{m}f_{k}\big)^{n}=\bigvee_{n=0}^{m}f^{n}\leq f^{*}$, so with $N=m$ the proof is complete.

###### Corollary 10.

$\mathcal{E}$ is locally closed, hence a ∗-continuous Kleene algebra.

###### Proof 4.6.

For every $f\in\mathcal{E}$ there is $N\geq 0$ so that $f^{*}=\bigvee_{n=0}^{N}f^{n}$ (Lemma 9), hence $\bigvee_{n=0}^{N}f^{n}=\bigvee_{n=0}^{N+1}f^{n}$. Thus $\mathcal{E}$ is locally closed, and by Lemma 4, a ∗-continuous Kleene algebra.

To illustrate, we compute the Kleene star of the supremum $f=f_{1}\vee f_{2}$ of two linear RTEFs, slight modifications of some RTEFs from the satellite example, modified to make the example more interesting: $f_{1}$ is given by a linear RTEA with rates $0$ and $4$, transition prices $0$, $-10$ and bounds $30$, $30$; $f_{2}$ by one with rates $0$, $1$ and $5$, transition prices $0$, $0$, $-50$ and bounds $20$, $40$, $50$. These functions are in normal form and $f_{1}\preceq f_{2}$. Lemma 9 and its proof allow us to conclude that $f^{*}=\mathbf{1}\vee f_{1}\vee f_{2}\vee f_{1}\mathrel{\mathop{\triangleright}}f_{2}$. Figure 4 shows the boundaries of definition of these functions and the regions in the $(x,t)$ plane where each of them dominates the supremum.

Figure 4. Computation of $f^{*}$ from Example 4.3. (In the plot, omitted here, $f^{*}$ equals $\mathbf{1}$, $f_{1}(x,t)=4t+x-10$, $f_{2}(x,t)=5t+5x-210$ or $5t+x-50$, or $f_{1}\mathrel{\mathop{\triangleright}}f_{2}(x,t)=5t+1.25x-72.5$ or $5t+x-60$, depending on the region.)

### 4.4. Infinite Products

Let $\mathbbm{B}=\{\textup{ff},\textup{tt}\}$ denote the Boolean lattice with standard order $\textup{ff}<\textup{tt}$. Let $\mathcal{V}$ denote the set of monotonic functions $v:L\times[0,\infty]\to\mathbbm{B}$ for which $v(\bot,t)=\textup{ff}$ for all $t\in[0,\infty]$.

We define an infinite product operation $\mathcal{F}^{\omega}\to\mathcal{V}$: For an infinite sequence of functions $f_{0},f_{1},\dotsc\in\mathcal{F}$, $\prod_{n\geq 0}f_{n}\in\mathcal{V}$ is the function defined for $x\in L$, $t\in[0,\infty]$ by $\prod_{n\geq 0}f_{n}(x,t)=\textup{tt}$ iff there is an infinite sequence $t_{0},t_{1},\dotsc\in[0,\infty]$ such that $\sum_{n=0}^{\infty}t_{n}=t$ and for all $n\geq 0$, $f_{n}(t_{n})\circ\dotsm\circ f_{0}(t_{0})(x)\neq\bot$. Hence $\prod_{n\geq 0}f_{n}(x,t)=\textup{tt}$ iff in the infinite composition $f_{0}\mathrel{\mathop{\triangleright}}f_{1}\mathrel{\mathop{\triangleright}}\dotsm(x,t)$, all finite prefixes have values $\neq\bot$.

There is a (left) action of $\mathcal{F}$ on $\mathcal{V}$ given by $(f,v)\mapsto f\mathrel{\mathop{\triangleright}}v$, where the composition $f\mathrel{\mathop{\triangleright}}v$ is given by the same formula as composition $\mathrel{\mathop{\triangleright}}$ on $\mathcal{F}$. Let $\bot\in\mathcal{V}$ denote the function given by $\bot(x,t)=\textup{ff}$.

###### Lemma 11.

With the $\mathcal{F}$-action $\mathrel{\mathop{\triangleright}}$, $\vee$ as addition, and $\bot$ as unit, $\mathcal{V}$ is an idempotent left $\mathcal{F}$-semimodule.

###### Proof 4.7.

Similar to the proofs of Lemmas 5 and 8.

Let $\mathcal{U}\subseteq\mathcal{V}$ be the $\mathcal{F}$-subsemimodule generated by $\mathcal{E}\subseteq\mathcal{F}$. Then $\mathcal{U}$ is an idempotent left $\mathcal{E}$-semimodule.

###### Proposition 12.

$(\mathcal{E},\mathcal{U})$ forms a ∗-continuous Kleene $\omega$-algebra.

###### Proof 4.8.
We first show that $(\mathcal{E},\mathcal{U})$ forms a generalized ∗-continuous Kleene algebra: Let $f,g\in\mathcal{E}$ and $v\in\mathcal{U}$, then we need to see that $f\mathrel{\mathop{\triangleright}}g^{*}\mathrel{\mathop{\triangleright}}v=\bigvee_{n\geq 0}f\mathrel{\mathop{\triangleright}}g^{n}\mathrel{\mathop{\triangleright}}v$. The right-hand side is trivially less than or equal to the left-hand side. For the other inequality, as $g$ is ∗-closed (Lemma 9), we have $N\geq 0$ such that $g^{*}=\bigvee_{n=0}^{N}g^{n}$, and then

$f\mathrel{\mathop{\triangleright}}g^{*}\mathrel{\mathop{\triangleright}}v=f\mathrel{\mathop{\triangleright}}\big(\bigvee_{n=0}^{N}g^{n}\big)\mathrel{\mathop{\triangleright}}v=\bigvee_{n=0}^{N}f\mathrel{\mathop{\triangleright}}g^{n}\mathrel{\mathop{\triangleright}}v\leq\bigvee_{n\geq 0}f\mathrel{\mathop{\triangleright}}g^{n}\mathrel{\mathop{\triangleright}}v\,.$

We now need to show that $(\mathcal{E},\mathcal{U})$ satisfies the conditions $(\textrm{C}1)$–$(\textrm{C}4)$ in Section 3.2.

As to $(\textrm{C}1)$, let $f_{0},f_{1},\dotsc\in\mathcal{E}$, $x\in L$, and $t\in[0,\infty]$. Then $f_{0}\mathrel{\mathop{\triangleright}}\prod_{n\geq 0}f_{n+1}(x,t)=\bigvee_{t_{0}+t^{\prime}=t}\prod_{n\geq 0}f_{n+1}(t^{\prime})\circ f_{0}(t_{0})(x)$, which equals $\textup{tt}$ iff there are $t_{0}+t^{\prime}=t$ and $t_{1}+t_{2}+\dotsm=t^{\prime}$ such that $f_{n}(t_{n})\circ\dotsm\circ f_{0}(t_{0})(x)\neq\bot$ for all $n\geq 1$; that is, iff $\prod_{n\geq 0}f_{n}(x,t)=\textup{tt}$.

For $(\textrm{C}2)$, let $f_{0},f_{1},\dotsc\in\mathcal{E}$, $x\in L$, $t\in[0,\infty]$, and let $0=n_{0}\leq n_{1}\leq\dotsm$ be a sequence which increases without bound. Then $\prod_{k\geq 0}(f_{n_{k}}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n_{k+1}-1})(x,t)=\textup{tt}$ iff there are $u_{0}+u_{1}+\dotsm=t$ such that for all $k\geq 0$,

$(f_{n_{k}}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n_{k+1}-1})(u_{k})\circ\dotsm\circ(f_{0}\mathrel{\mathop{\triangleright}}\dotsm\mathrel{\mathop{\triangleright}}f_{n_{1}-1})(u_{0})(x)\neq\bot\,,$

iff there are $u_{0}+u_{1}+\dotsm=t$ such that for all $k\geq 0$ there are $t^{k}_{0},\dotsc,t^{k}_{n_{k+1}-1}$ with $t^{k}_{0}+\dotsm+t^{k}_{n_{1}-1}=u_{0},\dotsc,t^{k}_{n_{k}}+\dotsm+t^{k}_{n_{k+1}-1}=u_{k}$ and $f_{n_{k+1}-1}(t^{k}_{n_{k+1}-1})\circ\dotsm\circ f_{0}(t^{k}_{0})(x)\neq\bot$.

We can use a diagonal-type argument to finish the proof: For every $k$, we have $t^{k+1}_{0},\dotsc,t^{k+1}_{n_{k+2}-1}$ such that $f_{n_{k+2}-1}(t^{k+1}_{n_{k+2}-1})\circ\dotsm\circ f_{0}(t^{k+1}_{0})(x)\neq\bot$. But then also $f_{n_{k+1}-1}(t^{k+1}_{n_{k+1}-1})\circ\dotsm\circ f_{0}(t^{k+1}_{0})(x)\neq\bot$, hence we can update $t^{k}_{0}:=t^{k+1}_{0},\dotsc,t^{k}_{n_{k+1}-1}:=t^{k+1}_{n_{k+1}-1}$. In the limit, we have $t_{0},t_{1},\dotsc$ with $t_{0}+\dotsm+t_{n_{1}-1}=u_{0},\dotsc$, hence $t_{0}+t_{1}+\dotsm=t$, and $f_{n}(t_{n})\circ\dotsm\circ f_{0}(t_{0})(x)\neq\bot$ for all $n\geq 0$.
To show the third condition, we prove that for all $f_{0},f_{1},\dotsc,g_{0},g_{1},\dotsc\in\mathcal{E}$,

$\prod_{n\geq 0}(f_{n}\vee g_{n})=\adjustlimits{\bigvee}_{h_{n}\in\{f_{n},g_{n}\}\;}{\prod}_{n\geq 0}h_{n}\,,$ (5)

which implies $(\textrm{C}3)$. By monotonicity of infinite product, the right-hand side is less than or equal to the left-hand side. To show the other inequality, let $x\in L$ and $t\in[0,\infty]$ and suppose that $\prod_{n\geq 0}(f_{n}\vee g_{n})(x,t)=\textup{tt}$. We show that there is a choice of functions $h_{n}\in\{f_{n},g_{n}\}$ for all $n\geq 0$ such that $\prod_{n\geq 0}h_{n}(x,t)=\textup{tt}$.

Consider the infinite ordered binary tree where each node at level $n\geq 0$ is the source of an edge labeled $f_{n}$ and an edge labeled $g_{n}$, ordered as indicated. We can assign to each node $u$ the composition $h_{u}$ of the functions that occur as the labels of the edges along the unique path from the root to that node. Let us mark a node $u$ if $h_{u}(x,t)\neq\bot$. As $\prod_{n\geq 0}(f_{n}\vee g_{n})(x,t)=\textup{tt}$, each level contains a marked node. Moreover, whenever a node is marked and has a predecessor, its predecessor is also marked. By König's lemma [Kön27] there is an infinite path going through marked nodes. This infinite path gives rise to the sequence $h_{0},h_{1},\dotsc$ with $\prod_{n\geq 0}h_{n}(x,t)=\textup{tt}$.

For $(\textrm{C}4)$, we need to see that for all $f,g_{0},g_{1},\dotsc\in\mathcal{E}$,

$\prod_{n\geq 0}f^{*}\mathrel{\mathop{\triangleright}}g_{n}=\adjustlimits{\bigvee}_{k_{0},k_{1},\dotsc\geq 0\;}{\prod}_{n\geq 0}f^{k_{n}}\mathrel{\mathop{\triangleright}}g_{n}\,.$

Again the right-hand side is less than or equal to the left-hand side because of monotonicity of infinite product. To show the other inequality, we have $N\geq 0$ such that $f^{*}=\bigvee_{k=0}^{N}f^{k}$, and then

$\prod_{n\geq 0}f^{*}\mathrel{\mathop{\triangleright}}g_{n}=\prod_{n\geq 0}\big(\bigvee_{k=0}^{N}f^{k}\big)\mathrel{\mathop{\triangleright}}g_{n}=\prod_{n\geq 0}\big(\bigvee_{k=0}^{N}f^{k}\mathrel{\mathop{\triangleright}}g_{n}\big)=\adjustlimits{\bigvee}_{0\leq k_{0},k_{1},\dotsc\leq N\;}{\prod}_{n\geq 0}f^{k_{n}}\mathrel{\mathop{\triangleright}}g_{n}\leq\adjustlimits{\bigvee}_{k_{0},k_{1},\dotsc\geq 0\;}{\prod}_{n\geq 0}f^{k_{n}}\mathrel{\mathop{\triangleright}}g_{n}\,,$ (6)

where the third equality in (6) holds because of (5).

## 5. Decidability

We can now apply the results of Section 3.4 to see that our decision problems as stated at the end of Section 2 are decidable. Let $A=(S,s_{0},F,T,r)$ be an RTEA, with matrix representation $(\alpha,M,k)$, and $x_{0},t,y\in[0,\infty]$.

###### Theorem 13.

There exists a finite run $(s_{0},x_{0},t)\leadsto\dotsm\leadsto(s,x,t^{\prime})$ in $A$ with $s\in F$ iff $|A|(x_{0},t)>\bot$.

###### Theorem 14.

There exists a finite run $(s_{0},x_{0},t)\leadsto\dotsm\leadsto(s,x,t^{\prime})$ in $A$ with $s\in F$ and $x\geq y$ iff $|A|(x_{0},t)\geq y$.

###### Theorem 15.

There exists $s\in F$ and an infinite run $(s_{0},x_{0},t)\leadsto(s_{1},x_{1},t_{1})\leadsto\dotsm$ in $A$ in which $s_{n}=s$ for infinitely many $n\geq 0$ iff $\|A\|(x_{0},t)=\textup{tt}$.

###### Theorem 16.

Problems 1, 2 and 3 from Section 2 are decidable.

###### Proof 5.1.

We have seen in the examples that RTEFs are _piecewise linear_, i.e., composed of a finite number of (affine) linear functions which are defined on polygonal regions in the $(x,t)$-plane.
It is clear that this is generally the case: atomic RTEFs are piecewise linear by definition, and compositions and maxima of piecewise linear functions are again piecewise linear. Piecewise linear functions can be represented using the (finitely many) corner points of their regions of definition together with their values at these corner points. (In case some regions are non-convex or disconnected, they can be split into finitely many convex regions.) It is clear that computable atomic RTEFs are computable piecewise linear (i.e., all numbers in their finite representation are computable), and that compositions and suprema of computable piecewise linear functions are again computable piecewise linear. Using Lemma 9, we see that all functions in $M^{*}$ are computable piecewise linear. It remains to show that the ω-operation is computable. Let $f$ be a computable RTEF. If $f=\bot$, then $f^{\omega}=\bot$, so we can assume $f\neq\bot$. Then $f$ can be expressed as a finite supremum of linear (computable) RTEFs in normal form; let $A$ be the corresponding RTEA. Let $x\in L$ and $t\in[0,\infty]$. If $x=\bot$, then $f^{\omega}(x,t)=\textup{{ff}}$; if $x=\infty$, then $f^{\omega}(x,t)=\textup{{t\\!t}}$. We can hence assume $x\in\mathbbm{R}_{\geq 0}$. We prove the claim for $t\neq\infty$ first. In that case, $f^{\omega}(x,t)=\textup{{t\\!t}}$ iff there is an infinite sequence $t_{0},t_{1},\dotsc\in\mathbbm{R}_{\geq 0}$ whose partial sums converge to $t$: $\sum_{n=0}^{\infty}t_{n}=t$, and such that for all $n\geq 0$, $f(t_{n})\circ\dotsm\circ f(t_{0})(x)\neq\bot$. By convergence, we have $\lim_{n\to\infty}t_{n}=0$. For $y,u\in\mathbbm{R}_{\geq 0}$, $f(y,u)=a(y,u)u+b(y,u)y+p(y,u)$ for piecewise constant functions $a(y,u)$, $b(y,u)$, $p(y,u)$. Let $\alpha=\sup\\{p(y,u)\mid y,u\in\mathbbm{R}_{\geq 0}\\}$, then $\alpha\leq 0$, and $\alpha<0$ iff the prices along all paths through $A$ are non-zero. We have $\lim_{u\to 0}f(y,u)=b(y,0)y+p(y,0)$. But $b(y,0)=1$ (if no time is available, we cannot delay in any state), hence $\lim_{u\to 0}f(y,u)=y+p(y,0)$ for all $y\in\mathbbm{R}_{\geq 0}$. By $\lim_{n\to\infty}t_{n}=0$, this implies that if $\alpha<0$, then there is $n\geq 0$ such that $f(t_{n})\circ\dotsm\circ f(t_{0})(x)=\bot$. If $\alpha=0$ on the other hand, then we can choose $t_{0}=t$ and $t_{n}=0$ for $n\geq 1$. We have shown that, for $t\neq\infty$, $f^{\omega}(x,t)=\textup{{t\\!t}}$ iff there is a path through $A$ with price $0$, which is decidable. Now we show the claim for $t=\infty$. If there is $t_{0}\in[0,\infty]$ for which $f(x,t_{0})\geq x$, then we can assume $t_{0}>0$ and put $t_{n}=t_{0}$ for all $n$ to show that $f^{\omega}(x,t)=\textup{{t\\!t}}$. We now show that if $f(x,t_{0})<x$ for all $t_{0}\in[0,\infty]$, then $f^{\omega}(x,t)=\textup{{ff}}$. Let $\alpha=\sup\\{f(x,t_{0})-x\mid t_{0}\in[0,\infty]\\}$, then $\alpha<0$ as $[0,\infty]$ is compact. We have $f(x,t_{0})\leq x+\alpha$ for all $t_{0}\in[0,\infty]$. Now entering the RTEA $A$ for $f$ with initial energy lower than $x$ can disable some paths, but will not enable any new behavior, hence for $x^{\prime}\leq x$ and any $t_{1}\in[0,\infty]$, $f(x^{\prime},t_{1})\leq f(x,t_{1})+x^{\prime}-x$. Hence $f(t_{1})\circ f(t_{0})(x)\leq f(x,t_{1})+f(x,t_{0})-x\leq f(x,t_{1})+\alpha\leq x+2\alpha$ for all $t_{0},t_{1}\in[0,\infty]$. By induction, we see that for all infinite sequences $t_{0},t_{1},\dotsc\in[0,\infty]$ and all $n\geq 0$, $f(t_{n})\circ\dotsm\circ f(t_{0})(x)\leq x+n\alpha$. By $\alpha<0$, $f^{\omega}(x,t)=\textup{{ff}}$.
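The decision step in the case $t=\infty$ can be made concrete. The following is a minimal Python sketch (ours, not part of the formal development); it assumes that the slice $t_{0}\mapsto f(x,t_{0})$ has already been computed as finitely many linear pieces, ignores the value $\bot$, and uses a piece representation and names that are illustrative only. Since a linear function attains its supremum over an interval at an endpoint, the supremum over all pieces is exactly computable.

```python
import math

# one piece of the slice t0 -> f(x, t0): value = slope * t0 + intercept
# on the interval [t_lo, t_hi]; t_hi may be math.inf
def sup_value(pieces):
    """Supremum of a piecewise-linear function given by its pieces."""
    best = -math.inf
    for t_lo, t_hi, slope, intercept in pieces:
        if slope > 0 and math.isinf(t_hi):
            return math.inf                  # unbounded increasing piece
        end = t_hi if slope > 0 else t_lo    # sup is at an endpoint
        best = max(best, slope * end + intercept)
    return best

def omega_tt_at_infinity(pieces, x):
    """f^omega(x, inf) = tt iff f(x, t0) >= x for some t0."""
    return sup_value(pieces) >= x

# toy slice f(x, t0) = 4 + min(t0, 2), entered at energy x = 5
pieces = [(0.0, 2.0, 1.0, 4.0), (2.0, math.inf, 0.0, 6.0)]
print(omega_tt_at_infinity(pieces, 5.0))     # True, since sup = 6 >= 5
```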
We have shown that, for $t=\infty$, $f^{\omega}(x,t)=\textup{{t\\!t}}$ iff there exists $t_{0}\in[0,\infty]$ with $f(x,t_{0})\geq x$, which is decidable because $f$ is piecewise linear. ## 6. Conclusion We have developed an algebraic methodology for deciding reachability and Büchi problems on a class of weighted real-time models where the weights represent energy or similar quantities. The semantics of such systems is modeled by real-time energy functions which map initial energy of the system and available time to the maximal final energy level. We have shown that these real-time energy functions form a ∗-continuous Kleene $\omega$-algebra, which entails that reachability and Büchi acceptance can be decided in a static way which only involves manipulations of energy functions. We have seen that the necessary manipulations of real-time energy functions are computable, and in fact we conjecture that our method leads to an exponential-time algorithm for deciding reachability and Büchi acceptance in real-time energy automata. This is due to the fact that operations on real-time energy functions can be done in time linear in the size of their representation, and the representation size of compositions and suprema of real-time energy functions is a linear function of the representation sizes of the operands. In future work, we plan to do a careful complexity analysis which could confirm this result, and to implement our algorithms to see how they fare in practice. This paper constitutes the first application of methods from Kleene algebra to a timed-automata-like formalism. In future work, we plan to lift some of the restrictions of the current model and extend it to allow for time constraints and resets à la timed automata. We also plan to extend this work with action labels, which algebraically means passing from the semiring of real-time energy functions to the one of formal power series over these functions. In applications, this means that instead of asking for existence of accepting runs, one is asking for controllability. ## References * [ATP01] Rajeev Alur, Salvatore La Torre, and George J. Pappas. Optimal paths in weighted timed automata. In Di Benedetto and Sangiovanni-Vincentelli [DBSV01], pages 49–62. * [BÉ93] Stephen L. Bloom and Zoltán Ésik. Iteration Theories: The Equational Logic of Iterative Processes. EATCS Monographs on Theoretical Computer Science. Springer-Verlag, 1993. * [BFH+01] Gerd Behrmann, Ansgar Fehnker, Thomas Hune, Kim G. Larsen, Paul Pettersson, Judi Romijn, and Frits W. Vaandrager. Minimum-cost reachability for priced timed automata. In Di Benedetto and Sangiovanni-Vincentelli [DBSV01], pages 147–161. * [BFL+08] Patricia Bouyer, Uli Fahrenberg, Kim G. Larsen, Nicolas Markey, and Jiří Srba. Infinite runs in weighted timed automata with energy constraints. In Franck Cassez and Claude Jard, editors, FORMATS, volume 5215 of Lect. Notes Comput. Sci., pages 33–47. Springer-Verlag, 2008. * [BFLM10] Patricia Bouyer, Uli Fahrenberg, Kim G. Larsen, and Nicolas Markey. Timed automata with observers under energy constraints. In Karl Henrik Johansson and Wang Yi, editors, HSCC, pages 61–70. ACM, 2010. * [BLM14] Patricia Bouyer, Kim G. Larsen, and Nicolas Markey. Lower-bound-constrained runs in weighted timed automata. Perform. Eval., 73:91–109, 2014. * [CFL15] David Cachera, Uli Fahrenberg, and Axel Legay. An omega-algebra for real-time energy problems. In Prahladh Harsha and G. Ramalingam, editors, FSTTCS, volume 45 of Leibniz Int. Proc. Inf., pages 394–407.
Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015. * [DBSV01] Maria Domenica Di Benedetto and Alberto L. Sangiovanni-Vincentelli, editors. Hybrid Systems: Computation and Control, 4th International Workshop, HSCC 2001, Rome, Italy, March 28-30, 2001, Proceedings, volume 2034 of Lect. Notes Comput. Sci. Springer-Verlag, 2001. * [DHMS12] Brijesh Dongol, Ian J. Hayes, Larissa Meinicke, and Kim Solin. Towards an algebra for real-time programs. In Wolfram Kahl and Timothy G. Griffin, editors, RAMiCS, volume 7560 of Lect. Notes Comput. Sci., pages 50–65. Springer-Verlag, 2012. * [DKV09] Manfred Droste, Werner Kuich, and Heiko Vogler, editors. Handbook of Weighted Automata. EATCS Monographs in Theoretical Computer Science. Springer-Verlag, 2009. * [ÉFL15a] Zoltán Ésik, Uli Fahrenberg, and Axel Legay. Star-continuous Kleene omega-algebras. In Igor Potapov, editor, DLT, volume 9168 of Lect. Notes Comput. Sci., pages 240–251. Springer-Verlag, 2015. * [ÉFL15b] Zoltán Ésik, Uli Fahrenberg, and Axel Legay. ∗-continuous Kleene $\omega$-algebras for energy problems. In Ralph Matthes and Matteo Mio, editors, FICS, volume 191 of Electr. Proc. Theor. Comput. Sci., pages 48–59, 2015. * [ÉFLQ13] Zoltán Ésik, Uli Fahrenberg, Axel Legay, and Karin Quaas. Kleene algebras and semimodules for energy problems. In Dang Van Hung and Mizuhito Ogawa, editors, ATVA, volume 8172 of Lect. Notes Comput. Sci., pages 102–117. Springer-Verlag, 2013. * [ÉK02] Zoltán Ésik and Werner Kuich. Locally closed semirings. Monatsh. Math., 137(1):21–29, 2002. * [ÉK07a] Zoltán Ésik and Werner Kuich. Modern Automata Theory. 2007. http://dmg.tuwien.ac.at/kuich/mat.pdf. * [ÉK07b] Zoltán Ésik and Werner Kuich. On iteration semiring-semimodule pairs. Semigroup Forum, 75:129–159, 2007. * [FJLS11] Uli Fahrenberg, Line Juhl, Kim G. Larsen, and Jiří Srba. Energy games in multiweighted automata. In Antonio Cerone and Pekka Pihlajasaari, editors, ICTAC, volume 6916 of Lect. Notes Comput. Sci., pages 95–115. Springer-Verlag, 2011. * [Gol99] Jonathan S. Golan. Semirings and their Applications. Springer-Verlag, 1999. * [HM09] Peter Höfner and Bernhard Möller. An algebra of hybrid systems. J. Log. Alg. Prog., 78(2):74–97, 2009. * [Kön27] Dénes König. Über eine Schlussweise aus dem Endlichen ins Unendliche. Acta Sci. Math. (Szeged), 3(2-3):121–130, 1927. * [Koz90] Dexter Kozen. On Kleene algebras and closed semirings. In Branislav Rovan, editor, MFCS, volume 452 of Lect. Notes Comput. Sci., pages 26–47. Springer-Verlag, 1990. * [Koz94] Dexter Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. Inf. Comput., 110(2):366–390, 1994. * [Qua11] Karin Quaas. On the interval-bound problem for weighted timed automata. In Adrian Horia Dediu, Shunsuke Inenaga, and Carlos Martín-Vide, editors, LATA, volume 6638 of Lect. Notes Comput. Sci., pages 452–464. Springer-Verlag, 2011.
# Clubs and their applications Vito Napolitano, Olga Polverino, Paolo Santonastaso and Ferdinando Zullo Vito Napolitano, Olga Polverino, Paolo Santonastaso and Ferdinando Zullo, Dipartimento di Matematica e Fisica, Università degli Studi della Campania “Luigi Vanvitelli”, Viale Lincoln, 5, I-81100 Caserta, Italy <EMAIL_ADDRESS> ###### Abstract. Clubs of rank $k$ are well-celebrated objects in finite geometry, introduced by Fancsali and Sziklai in 2006. Interest in them was renewed after their connection with a special type of arcs known as KM-arcs was established. This paper studies clubs of rank $n$ in $\mathrm{PG}(1,q^{n})$. We provide a classification result for $(n-2)$-clubs of rank $n$ and analyze the $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalence of the known subspaces defining clubs; for some of them the problem translates into determining whether or not certain scattered spaces are equivalent. We then give a polynomial description of the known families of clubs via suitable linearized polynomials. Finally, we apply our results to the theory of blocking sets, KM-arcs, polynomials and rank metric codes, obtaining new constructions and classification results. AMS subject classification: 51E20; 51E21; 94B05. Keywords: Club; linear set; linearized polynomial; KM-arc; blocking set; rank metric code. ## 1. Introduction Linear sets have been shown to be a powerful tool in several classification results and constructions of objects in finite geometry and coding theory, such as semifields, blocking sets, translation ovoids, KM-arcs, rank metric codes, etc. For more details on their applications we refer to [24, 34, 36]. The main topic of this paper concerns a special class of linear sets on the projective line called _clubs_. Let $\Lambda=\mathrm{PG}(V,\mathbb{F}_{q^{n}})=\mathrm{PG}(1,q^{n})$, where $V$ is an $\mathbb{F}_{q^{n}}$-vector space of dimension $2$. Let $U$ be an ${\mathbb{F}}_{q}$-subspace of $V$, then the set $L_{U}=\\{\langle{u}\rangle_{\mathbb{F}_{q^{n}}}\colon{u}\in U\setminus\\{{0}\\}\\}$ is said to be an ${\mathbb{F}}_{q}$-_linear set_ of rank $\dim_{{\mathbb{F}}_{q}}(U)$ of $\Lambda$. Another important notion associated with linear sets is that of the _weight of a point_ $P$, which, roughly speaking, measures how much of the linear set is contained in the point. We say that a linear set is _scattered_ if all of its points have weight one, that is, each of its points has the minimum possible weight. The term scattered was first introduced by Blokhuis and Lavrauw in [7], where the authors studied this notion in a more general framework. Scattered linear sets recently gained more attention due to their applications in the theory of Maximum Rank Distance codes and, more generally, of rank metric codes; see [36]. A family quite close to that of scattered linear sets is the one of _clubs_. An $i$-_club_ of rank $k$ is an ${\mathbb{F}}_{q}$-linear set $L_{U}$ of rank $k$ for which all but one of the points have weight one, while the remaining one has weight $i$. In the case in which the rank is $n$ we will simply say that $L_{U}$ is an $i$-_club_. They were originally introduced by Fancsali and Sziklai in [19] (see also [20]) when studying maximal partial $2$-spreads in $\mathrm{PG}(8,q)$.
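To make these definitions concrete, here is a minimal Python sketch (our illustration, not part of the paper's formal development). It takes $q=2$, $n=4$, realizes $\mathrm{GF}(16)$ as $\mathrm{GF}(2)[x]/(x^{4}+x+1)$ with elements encoded as $4$-bit integers, and computes the weight distribution of the linear set defined by $U=\\{(x,\mathrm{Tr}(x))\colon x\in\mathrm{GF}(16)\\}$, where $\mathrm{Tr}$ denotes the trace over $\mathrm{GF}(2)$; all helper names are ours.

```python
from collections import Counter

IRRED = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiplication in GF(16) = GF(2)[x]/(x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:   # degree reached 4: reduce
            a ^= IRRED
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def trace(x):
    """Tr(x) = x + x^2 + x^4 + x^8, with values in GF(2)."""
    return x ^ gf_pow(x, 2) ^ gf_pow(x, 4) ^ gf_pow(x, 8)

# Each nonzero x gives the projective point <(1, Tr(x)/x)>; grouping the
# nonzero vectors of U by this "slope" counts, for each point of the
# linear set, the 2^w - 1 nonzero vectors it contains (w = weight).
slopes = Counter(gf_mul(trace(x), gf_pow(x, 14))  # x^14 = x^(-1) in GF(16)*
                 for x in range(1, 16))
weights = sorted((c + 1).bit_length() - 1 for c in slopes.values())
print(weights)   # [1, 1, 1, 1, 1, 1, 1, 1, 3]
```

The single point of weight $3=n-1$ (namely $\langle(1,0)\rangle$, coming from the kernel of the trace) together with eight points of weight one gives an $(n-1)$-club, matching the example $L_{\mathrm{Tr}_{q^{n}/q}}$ recalled in Section 2.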
Interest in these linear sets was renewed when De Boeck and Van de Voorde in [15] characterized the translation KM-arcs in even characteristic exactly as those that can be described by $i$-clubs, and such a connection enabled them to give new constructions and classifications of KM-arcs. Moreover, $i$-clubs also define linear blocking sets of Rédei type, once we look at the $i$-club as the set of determined directions of an affine pointset. Apart from their interesting combinatorial properties, these pointsets also define Hamming metric codes with _few weights_; see [32]. Furthermore, $i$-clubs have a very natural algebraic description via linearized polynomials, which have been recently investigated in [5] under the name of $1$-_fat polynomials_. In particular, in [5] it has been proved that _exceptional_ $1$-fat polynomials do not exist, which means that the nature of these polynomials relies heavily on the extension field in which we look for them. In this paper we start by providing a classification for $(n-2)$-clubs in $\mathrm{PG}(1,q^{n})$, by first studying a class of linear sets which, up to equivalence, contains all the $(n-2)$-clubs and then using an algebraic result which extends the linear analogue of Vosper’s theorem under certain assumptions. Then we study the $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalence of the known subspaces defining clubs (not necessarily $(n-2)$-clubs), in which scattered spaces play an important role. Next, we find linearized polynomials defining the known families of clubs; in particular, for the $(n-2)$-clubs we find all of them (up to equivalence). Finally, we apply our results to the theory of blocking sets, KM-arcs and rank metric codes, obtaining constructions and classification results. In particular, our results imply, as a special case, the classification results of translation KM-arcs of type $q/4$ provided in [15], but with an explicit description of the associated clubs. The paper is organized as follows: in Section 2 we briefly describe some of the objects and related results we will use. More precisely, we recall linearized polynomials, linear sets and clubs, dual bases and some other algebraic results. In Section 3, we first detect a family of linear sets of rank $h+2$ containing a point of weight $h$; then we show that, up to equivalence, all the $(n-2)$-clubs belong to such a family. We then characterize the linear sets of this family which turn out to be clubs, via an algebraic description which allows us to use a generalization (under certain assumptions) of the linear analogue of Vosper’s theorem. Section 4 is devoted to the study of the $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalence of the known subspaces whose associated linear set defines an $i$-club. Our analysis shows that there is a wide variety of examples. Indeed, for the constructions shown by Gács and Weiner in [21] we show that the number of inequivalent subspaces is related to the number of scattered subspaces in a smaller extension. We also detect some inequivalent subspaces when considering $(n-2)$-clubs. In Section 5 we describe linearized polynomials that define the known families of clubs and, in particular, our description is complete when considering $(n-2)$-clubs, answering an open problem posed in [10].
Section 6 is devoted to the study of blocking sets of Rédei type naturally associated with $i$-clubs and to translation KM-arcs, for which we are able to provide classification results and non-equivalent constructions, as a consequence of the results described for $i$-clubs. In Section 8 we deal with rank metric codes. Using the connection between $q$-systems and linear rank metric codes proved by Randrianarisoa in [37], we are able to classify and construct linear rank metric codes with certain parameters. The interest in these codes is again due to the fact that the number of nonzero weights is only three, and for codes of this kind it seems very difficult to find examples. ## 2. Preliminaries ### 2.1. Linearized polynomials Let $q=p^{h}$, where $h$ is a positive integer and $p$ is a prime. A _linearized polynomial_ (or a $q$-_polynomial_) over ${\mathbb{F}}_{q^{n}}$ is a polynomial of the form $f(x)=\sum_{i=0}^{t}a_{i}x^{q^{i}},$ where $a_{i}\in{\mathbb{F}}_{q^{n}}$ and $t$ is a non-negative integer. Furthermore, if $a_{t}\neq 0$ we say that $t$ is the $q$-_degree_ of $f(x)$. The set of all $q$-polynomials over ${\mathbb{F}}_{q^{n}}$ will be denoted by $\mathcal{L}_{n,q}$. A well-known example of a linearized polynomial is given by the trace, that is $\mathrm{Tr}_{q^{n}/q}(x)=x+x^{q}+\ldots+x^{q^{n-1}}$. Sheekey in [38], in connection with linear sets and rank metric codes, gave the following definition. Let $f(x)\in\mathcal{L}_{n,q}$. Then $f$ is said to be _scattered_ if for every $x,y\in\mathbb{F}_{q^{n}}^{*}$ $\frac{f(x)}{x}=\frac{f(y)}{y}\,\,\Leftrightarrow\,\,\frac{y}{x}\in{\mathbb{F}}_{q},$ or equivalently $\dim_{{\mathbb{F}}_{q}}(\ker(f(x)-mx))\leq 1,\,\,\,\text{for every}\,\,\,m\in\mathbb{F}_{q^{n}}.$ The term _scattered_ arises from the theory of linear sets; see [7]. Scattered polynomials have attracted a lot of attention because of their connections with several objects, such as maximum rank distance codes, as we will see later. For more details on linearized polynomials see [42]. ### 2.2. Linear sets In this paper, we deal with linear sets on the projective line and in some cases also on the projective plane. More precisely, let $V$ be an $r$-dimensional $\mathbb{F}_{q^{n}}$-vector space and let $\Lambda=\mathrm{PG}(V,\mathbb{F}_{q^{n}})=\mathrm{PG}(r-1,q^{n})$. Let $U$ be an ${\mathbb{F}}_{q}$-subspace of $V$, then the set $L_{U}=\\{\langle{u}\rangle_{\mathbb{F}_{q^{n}}}\colon{u}\in U\setminus\\{{0}\\}\\}$ is said to be an ${\mathbb{F}}_{q}$-_linear set_ of rank $\dim_{{\mathbb{F}}_{q}}(U)$. The rank of $L_{U}$ will also be denoted by $\mathrm{Rank}(L_{U})$. The _weight_ of a projective subspace $S=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\subseteq\Lambda$ in $L_{U}$ is defined naturally as $w_{L_{U}}(S)=\dim_{{\mathbb{F}}_{q}}(U\cap W)$. In some cases, linearized polynomials can be used to describe linear sets. Indeed, if $L_{U}$ is a linear set of rank $n$ in $\mathrm{PG}(1,q^{n})$, then there exists a $q$-polynomial $f(x)\in\mathcal{L}_{n,q}$ such that $L_{U}$ is mapped by a collineation of $\mathrm{PGL}(2,q^{n})$ into $L_{f}=L_{U_{f}}$ where $U_{f}=\\{(x,f(x))\colon x\in\mathbb{F}_{q^{n}}\\}.$ Nevertheless, it could be hard to find polynomials describing a fixed linear set.
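As a small illustration of the scatteredness criterion above (ours, with the same toy field $\mathrm{GF}(16)=\mathrm{GF}(2)[x]/(x^{4}+x+1)$ as in the Introduction), the following sketch verifies that the $q$-polynomial $f(x)=x^{q}=x^{2}$ is scattered by checking $\dim_{{\mathbb{F}}_{q}}(\ker(f(x)-mx))\leq 1$ for every $m\in\mathbb{F}_{q^{n}}$.

```python
def gf_mul(a, b, irred=0b10011):       # GF(16) = GF(2)[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= irred
        b >>= 1
    return r

def kernel_dim(f, m):
    """dim over GF(2) of {x : f(x) = m*x}; the kernel has size 2^dim."""
    size = sum(1 for x in range(16) if f(x) == gf_mul(m, x))
    return size.bit_length() - 1

frob = lambda x: gf_mul(x, x)          # the q-polynomial x -> x^2
assert all(kernel_dim(frob, m) <= 1 for m in range(16))
print("x^2 is scattered over GF(16)")
```

Accordingly, the associated linear set $L_{x^{q}}$ has $(q^{n}-1)/(q-1)=15$ points, all of weight one. Let us now recall some basic relations between the size of a linear set, the number of points of a given weight, and its rank.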
If $N_{i}$ denotes the number of points of $\Lambda$ having weight $i\in\\{0,\ldots,k\\}$ in $L_{U}$, the following relations hold: (1) $|L_{U}|\leq\frac{q^{k}-1}{q-1},$ (2) $|L_{U}|=N_{1}+\ldots+N_{k},$ (3) $N_{1}+N_{2}(q+1)+\ldots+N_{k}(q^{k-1}+\ldots+q+1)=q^{k-1}+\ldots+q+1.$ Moreover, the following holds: (4) $w_{L_{U}}(P)+w_{L_{U}}(Q)\leq\mathrm{Rank}(L_{U}),$ for any $P,Q\in\mathrm{PG}(r-1,q^{n})$ with $P\neq Q$. An ${\mathbb{F}}_{q}$-linear set $L_{U}$ of $\mathrm{PG}(1,q^{n})$ for which all of its points have weight one is called a _scattered_ linear set. If all of its points have weight one except for one, which has weight $i$, we call it an $i$-_club_. By the above relations, it is easy to see that a scattered linear set of rank $k$ has $\frac{q^{k}-1}{q-1}$ points and an $i$-club of rank $k$ has size $q^{k-1}+\ldots+q^{i}+1$. We refer to [24] and [34] for comprehensive references on linear sets. ### 2.3. Known examples of clubs Up to now, very few examples of $i$-clubs are known; they have been found and summarized in [15], where the authors investigated such a family of linear sets in connection with KM-arcs; see Section 6. The linear set $L_{\mathrm{Tr}_{q^{n}/q}(x)}$ is an example of an $(n-1)$-club and in [11, Theorem 3.7] it has been proved that every $(n-1)$-club is $\mathrm{PGL}(2,q^{n})$-equivalent to $L_{\mathrm{Tr}_{q^{n}/q}(x)}$. A further important example is the following, which extends the above one. Let $n=\ell m$, $i=m(\ell-1)$, $\gcd(s,m)=1$ and $\sigma\colon x\in\mathbb{F}_{q^{n}}\mapsto x^{q^{s}}\in\mathbb{F}_{q^{n}}$. Then the linear set $L_{T}$, where (5) $T(x)=\mathrm{Tr}_{q^{\ell m}/q^{m}}\circ\sigma(x)\in\mathcal{L}_{n,q},$ is an $i$-club in $\mathrm{PG}(1,q^{n})$ with $i=m(\ell-1)$. Apart from these examples, polynomials defining $i$-clubs are not known, except for [5, Corollary 5.5] for $n=4$ (see also [13]) and [5, Corollary 6.3]. However, in [15] two other examples of clubs were described. ###### Construction 2.1. In [15, Lemma 2.12] it was proved that for any $\lambda\in\mathbb{F}_{q^{n}}^{*}$ such that $\\{1,\lambda,\ldots,\lambda^{n-1}\\}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$, the ${\mathbb{F}}_{q}$-linear set (6) $L_{\lambda}=\\{\langle(t_{1}\lambda+\ldots+t_{n-1}\lambda^{n-1},t_{n-1}+t_{n}\lambda)\rangle_{\mathbb{F}_{q^{n}}}\colon(t_{1},\ldots,t_{n})\in{\mathbb{F}}_{q}^{n}\setminus\\{\mathbf{0}\\}\\},$ is an $(n-2)$-club of rank $n$ in $\mathrm{PG}(1,q^{n})$. ###### Construction 2.2. In [15, Lemma 3.6] the following $i$-clubs were detected. Let $n=rt$, $t,r>1$ and $f$ a scattered polynomial in $\mathcal{L}_{t,q}$. Let (7) $U_{a,b}=\left\\{\left(f(x_{0})-ax_{0},bx_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\\},$ for some fixed $a,b\in{\mathbb{F}}_{q^{t}}$ with $b\neq 0$ and $\\{1,\omega,\ldots,\omega^{r-1}\\}$ an ${\mathbb{F}}_{q^{t}}$-basis of ${\mathbb{F}}_{q^{n}}$. Then $L_{U_{a,b}}$ is an $i$-club, with $i=\left\\{\begin{array}[]{ll}t(r-1),&\text{if}\,\,f(x)-ax\,\,\text{is invertible over}\,\,{\mathbb{F}}_{q^{t}},\\\ t(r-1)+1,&\text{otherwise}.\end{array}\right.$
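Construction 2.1 can be probed computationally in the smallest case $n=4$. The sketch below (ours) takes $q=2$ and $\lambda$ the class of $x$ in $\mathrm{GF}(2)[x]/(x^{4}+x+1)$, so that $\\{1,\lambda,\lambda^{2},\lambda^{3}\\}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$, and verifies that $L_{\lambda}$ is a $2$-club of rank $4$ with $q^{3}+q^{2}+1=13$ points.

```python
from collections import Counter
from itertools import product

def gf_mul(a, b, irred=0b10011):       # GF(16) = GF(2)[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= irred
        b >>= 1
    return r

def gf_inv(a):                         # a^14 = a^(-1) in GF(16)*
    r = 1
    for _ in range(14):
        r = gf_mul(r, a)
    return r

lam = 0b0010                           # the class of x
lam2, lam3 = gf_mul(lam, lam), gf_mul(gf_mul(lam, lam), lam)

# U = {(t1*lam + t2*lam^2 + t3*lam^3, t3 + t4*lam) : ti in GF(2)}
points = Counter()
for t1, t2, t3, t4 in product((0, 1), repeat=4):
    u = ((t1 * lam) ^ (t2 * lam2) ^ (t3 * lam3), t3 ^ (t4 * lam))
    if u == (0, 0):
        continue
    a, b = u                           # normalize the point <(a, b)>
    points[(1, gf_mul(b, gf_inv(a))) if a else (0, 1)] += 1

weights = sorted((c + 1).bit_length() - 1 for c in points.values())
print(len(points), weights)           # 13 points: twelve of weight 1, one of weight 2
```

In accordance with [15, Lemma 2.12], the output reports $13$ points, twelve of weight one and a single point of weight two (namely $\langle(1,0)\rangle_{\mathbb{F}_{q^{n}}}$, arising from the vectors with $t_{3}=t_{4}=0$).

### 2.4. Algebraic preliminaries

In this section we will first recall the notion of dual bases and their properties; then we will briefly describe the linear analogues of the Cauchy-Davenport inequality and of Vosper’s theorem, together with some related results.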
We start by recalling that the trace function from $\mathbb{F}_{q^{n}}$ over ${\mathbb{F}}_{q}$ is a linear map that defines a nondegenerate symmetric bilinear form as follows: $(a,b)\in{\mathbb{F}}_{q^{n}}\times{\mathbb{F}}_{q^{n}}\mapsto\mathrm{Tr}_{q^{n}/q}(ab)\in{\mathbb{F}}_{q}.$ This allows us to give the following definition. Two ordered ${\mathbb{F}}_{q}$-bases $\mathcal{B}=(\xi_{0},\ldots,\xi_{n-1})$ and $\mathcal{B}^{*}=(\xi_{0}^{*},\ldots,\xi_{n-1}^{*})$ of ${\mathbb{F}}_{q^{n}}$ are said to be _dual bases_ if $\mathrm{Tr}_{q^{n}/q}(\xi_{i}\xi_{j}^{*})=\left\\{\begin{array}[]{ll}1&\text{if }i=j,\\\ 0&\text{if }i\neq j.\end{array}\right.$ For any ${\mathbb{F}}_{q}$-basis $\mathcal{B}=(\xi_{0},\ldots,\xi_{n-1})$ there exists a unique dual basis $\mathcal{B}^{*}=(\xi_{0}^{*},\ldots,\xi_{n-1}^{*})$ of $\mathcal{B}$; see e.g. [25, Definition 2.30]. When $\mathcal{B}$ is a polynomial basis, using the minimal polynomial over ${\mathbb{F}}_{q}$ of the element defining the basis, we can easily construct its dual basis. ###### Corollary 2.3. [31, Corollary 2.7] Let $\lambda\in\mathbb{F}_{q^{n}}$ such that $\mathcal{B}=(1,\lambda,\ldots,\lambda^{n-1})$ is an ordered ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$. Let $f(x)=a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+x^{n}$ be the minimal polynomial of $\lambda$ over ${\mathbb{F}}_{q}$. Then the dual basis $\mathcal{B}^{*}$ of $\mathcal{B}$ is $\mathcal{B}^{*}=(\delta^{-1}\gamma_{0},\ldots,\delta^{-1}\gamma_{n-1}),$ where $\delta=f^{\prime}(\lambda)$ and $\gamma_{i}=\sum_{j=1}^{n-i}\lambda^{j-1}a_{i+j}$, for every $i\in\\{0,\ldots,n-1\\}$. In the case in which the minimal polynomial of $\lambda$ has two or three terms, the dual basis has an even simpler description. ###### Corollary 2.4. [31, Corollary 2.9] Let $\lambda\in\mathbb{F}_{q^{n}}$ such that $\mathcal{B}=(1,\lambda,\ldots,\lambda^{n-1})$ is an ordered ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$. * • If $f(x)=x^{n}-d$ is the minimal polynomial of $\lambda$ over ${\mathbb{F}}_{q}$, then the dual basis of $\mathcal{B}$ is $\mathcal{B}^{*}=\left(\frac{\lambda^{n}}{nd},\frac{\lambda^{n-1}}{nd},\ldots,\frac{\lambda}{nd},\frac{1}{nd}\right).$ * • If $f(x)=x^{n}-cx^{k}-1$ is the minimal polynomial of $\lambda$ over ${\mathbb{F}}_{q}$, then the dual basis of $\mathcal{B}$ is $\mathcal{B}^{*}=(\delta^{-1}\lambda^{k-1},\delta^{-1}\lambda^{k-2},\ldots,\delta^{-1},\delta^{-1}\lambda^{n-1},\ldots,\delta^{-1}\lambda^{k}),$ where $\delta=\frac{n\lambda^{n-1}-ck\lambda^{k-1}}{-c+\lambda^{n-k}}$. We also need the following result, which states that every hyperplane in ${\mathbb{F}}_{q^{n}}$ admits a polynomial basis, up to a scalar. ###### Proposition 2.5. Let $S$ be an $(n-1)$-dimensional ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$. Let $\gamma\in{\mathbb{F}}_{q^{n}}^{*}$ such that ${\mathbb{F}}_{q}(\gamma)={\mathbb{F}}_{q^{n}}$. Then there exists $c\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S=c\langle 1,\gamma,\ldots,\gamma^{n-2}\rangle_{{\mathbb{F}}_{q}}$. ###### Proof. Since $\\{1,\gamma,\ldots,\gamma^{n-1}\\}$ is an ${\mathbb{F}}_{q}$-basis of ${\mathbb{F}}_{q^{n}}$, the ${\mathbb{F}}_{q}$-subspace $T=\langle 1,\gamma,\ldots,\gamma^{n-2}\rangle_{{\mathbb{F}}_{q}}$ is a hyperplane of $\mathbb{F}_{q^{n}}$. By [22, Proposition 3.5.16], for any hyperplane $S$ of $\mathbb{F}_{q^{n}}$ there exists $c\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S=cT$, and the assertion is proved. ∎
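Corollary 2.3 is easy to verify numerically. The following sketch (ours) does so for $q=2$, $n=4$ and $\lambda$ a root of $f(x)=x^{4}+x+1$, in which case $\delta=f^{\prime}(\lambda)=4\lambda^{3}+1=1$ in characteristic two, so no inversion of $\delta$ is needed.

```python
def gf_mul(a, b, irred=0b10011):       # GF(16) = GF(2)[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= irred
        b >>= 1
    return r

def trace(x):                          # Tr(x) = x + x^2 + x^4 + x^8
    t, y = 0, x
    for _ in range(4):
        t ^= y
        y = gf_mul(y, y)
    return t

lam = 0b0010                           # a root of x^4 + x + 1
coeffs = [1, 1, 0, 0, 1]               # a_0, ..., a_4 (monic: a_4 = 1)
basis = [1, lam, gf_mul(lam, lam), gf_mul(gf_mul(lam, lam), lam)]

# gamma_i = sum_{j=1}^{n-i} lam^(j-1) * a_{i+j}, and delta = 1 here
dual = []
for i in range(4):
    g = 0
    for j in range(1, 5 - i):
        if coeffs[i + j]:
            g ^= basis[j - 1]
    dual.append(g)

# duality: Tr(xi_i * xi_j^*) = 1 if i = j and 0 otherwise
assert all(trace(gf_mul(basis[i], dual[j])) == (1 if i == j else 0)
           for i in range(4) for j in range(4))
print(dual)   # [9, 4, 2, 1], i.e. (1 + lam^3, lam^2, lam, 1)
```

Note that the same dual basis is predicted by the second item of Corollary 2.4, since $x^{4}+x+1=x^{4}-x-1$ in characteristic two. We now recall some algebraic results which will be used in the rest of the paper.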
The first two correspond to the linear analogue of the Cauchy-Davenport inequality and to the linear analogue of Vosper’s theorem. Let us denote by $S\cdot T$ the set of products of elements of the subsets $S$ and $T$ of $\mathbb{F}_{q^{n}}$. ###### Theorem 2.6. [2, Theorem 3] Let $S$ be an ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$. Then * i) either for every ${\mathbb{F}}_{q}$-subspace $T\subseteq{\mathbb{F}}_{q^{n}}$ we have $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})\geq\min\\{\dim_{{\mathbb{F}}_{q}}(S)+\dim_{{\mathbb{F}}_{q}}(T)-1,n\\},$ * ii) or there exists a positive integer $t>1$ that divides $n$, such that for every ${\mathbb{F}}_{q}$-subspace $T\subseteq{\mathbb{F}}_{q^{n}}$ satisfying $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})<\dim_{{\mathbb{F}}_{q}}(S)+\dim_{{\mathbb{F}}_{q}}(T)-1,$ we have that $\langle S\cdot T\rangle_{{\mathbb{F}}_{q}}$ is also an ${\mathbb{F}}_{q^{t}}$-subspace. The pairs $(S,T)$ satisfying the equality in i) of Theorem 2.6 with $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=\dim_{{\mathbb{F}}_{q}}(S)+\dim_{{\mathbb{F}}_{q}}(T)-1$ are called _critical pairs_. The classification of critical pairs is a hard problem in general and few results are known. The next result classifies them when $n$ is a prime number. ###### Theorem 2.7. [1, Theorem 3] Suppose $n$ is a prime number. Let $S,T$ be ${\mathbb{F}}_{q}$-subspaces of ${\mathbb{F}}_{q^{n}}$ such that $2\leq\dim_{{\mathbb{F}}_{q}}(S),\dim_{{\mathbb{F}}_{q}}(T)$ and $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})\leq n-2$. If $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=\dim_{{\mathbb{F}}_{q}}(S)+\dim_{{\mathbb{F}}_{q}}(T)-1,$ then $S=g\langle 1,a,\ldots,a^{\dim_{{\mathbb{F}}_{q}}(S)-1}\rangle_{{\mathbb{F}}_{q}}$ and $T=g^{\prime}\langle 1,a,\ldots,a^{\dim_{{\mathbb{F}}_{q}}(T)-1}\rangle_{{\mathbb{F}}_{q}}$, for some $g,g^{\prime},a\in{\mathbb{F}}_{q^{n}}$. Here we recall a result from [30], a consequence of which is the classification of critical pairs in the case in which one of the subspaces has dimension $2$; see [30, Proposition 6.3]. ###### Lemma 2.8. [30, Lemma 3.1] Let $S$ be an ${\mathbb{F}}_{q}$-subspace of $\mathbb{F}_{q^{n}}$ of dimension $k\geq 2$ and let $\mu\in\mathbb{F}_{q^{n}}\setminus{\mathbb{F}}_{q}$. Let $t=\dim_{{\mathbb{F}}_{q}}({\mathbb{F}}_{q}(\mu))$. * (a) If $\dim_{{\mathbb{F}}_{q}}(S\cap\mu S)=k$, then $S$ is an ${\mathbb{F}}_{q}(\mu)$-subspace. * (b) Suppose that $\dim_{{\mathbb{F}}_{q}}(S\cap\mu S)=k-1$. * (b1) If $t\geq k$ then $S=b\langle 1,\mu,\ldots,\mu^{k-1}\rangle_{{\mathbb{F}}_{q}}$, for some $b\in\mathbb{F}_{q^{n}}^{*}$, and $t\neq k$. * (b2) If $t\leq k-1$, write $k=t\ell+m$ with $m<t$; then $m>0$ and $S=\overline{S}\oplus b\langle 1,\mu,\ldots,\mu^{m-1}\rangle_{{\mathbb{F}}_{q}}$, where $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$, $b\in\mathbb{F}_{q^{n}}^{*}$ and $b{\mathbb{F}}_{q^{t}}\cap\overline{S}=\\{0\\}$. In particular, $t$ divides $n$. ## 3. Classification of $(n-2)$-clubs In this section we are going to classify $(n-2)$-clubs in $\mathrm{PG}(1,q^{n})$. We will first detect a larger family of linear sets which contains all the $(n-2)$-clubs (up to equivalence); then we will describe how to find the number of points of a given weight in terms of the intersection of certain subspaces.
Using the above-mentioned results together with the results on critical pairs over finite fields, we are able to completely classify $(n-2)$-clubs. First observe that for $n=3$ all $2$-clubs are $\mathrm{P\Gamma L}(2,q^{3})$-equivalent to $L_{\mathrm{Tr}_{q^{3}/q}}$, while for $n=4$ there are only two non-$\mathrm{P\Gamma L}(2,q^{4})$-equivalent $2$-clubs (namely $L_{x^{q}-x^{q^{3}}}$ and the one from Construction 2.1) and the only $3$-club is $L_{\mathrm{Tr}_{q^{4}/q}}$. So, in this section we will assume $n\geq 5$. Let us start by studying the weight distribution of a class of linear sets which contains the clubs we are interested in. ###### Proposition 3.1. Let $S$ be an $h$-dimensional ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$ such that $1\in S$ and $h\leq n-2$. Let $(a,b)\in{\mathbb{F}}_{q^{n}}^{2}$, with $a\notin S$ and $b\notin{\mathbb{F}}_{q}$. Let $U=(S\times\\{0\\})+\langle(1,1)\rangle_{{\mathbb{F}}_{q}}+\langle(a,b)\rangle_{{\mathbb{F}}_{q}}\subseteq{\mathbb{F}}_{q^{n}}^{2}.$ Then (8) $L_{U}=\\{\langle(s+\alpha+\beta a,\alpha+\beta b)\rangle\colon s\in S,\alpha,\beta\in{\mathbb{F}}_{q},(s,\alpha,\beta)\neq(0,0,0)\\}$ is an ${\mathbb{F}}_{q}$-linear set of rank $h+2$ in $\mathrm{PG}(1,q^{n})$, with $w_{L_{U}}(\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}})=h$ and $w_{L_{U}}(\langle(0,1)\rangle_{{\mathbb{F}}_{q^{n}}})=1$. ###### Proof. Since $b\notin{\mathbb{F}}_{q}$, we have that $(a,b)\notin S\times\\{0\\}+\langle(1,1)\rangle_{{\mathbb{F}}_{q}}$, so the sum defining $U$ is direct and $\dim_{{\mathbb{F}}_{q}}(U)=h+2$, that is $\mathrm{Rank}(L_{U})=h+2$. From $b\notin{\mathbb{F}}_{q}$ it also follows that $\alpha+\beta b=0$ if and only if $\alpha=\beta=0$. Thus, $w_{L_{U}}(\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}})=h$. Finally, $s+\alpha+\beta a=0$ implies $\beta=0$. Indeed, if $\beta\neq 0$, then $a=-\beta^{-1}(s+\alpha)\in S$, a contradiction. So, $w_{L_{U}}(\langle(0,1)\rangle_{{\mathbb{F}}_{q^{n}}})=1$. ∎ Note that, since the point $\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$ has weight $h$ in $L_{U}$ and the rank of $L_{U}$ is $h+2$, by (4) all the other points can have weight at most two in $L_{U}$. In the next result we show how to find the points of weight two. ###### Theorem 3.2. Let $S$ be an $h$-dimensional ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$ such that $1\in S$ and $3\leq h\leq n-2$. Consider $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}\subseteq{\mathbb{F}}_{q^{n}}^{2},$ with $a\notin S$ and $b\notin{\mathbb{F}}_{q}$. Then the set of points of weight $2$ in $L_{U}$ different from $\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$ is $\\{P_{s}=\langle(-s+a,b)\rangle_{{\mathbb{F}}_{q^{n}}}\colon s\in S\cap(a+bS)\\}$ and the size of such a set is $\lvert S\cap(a+bS)\rvert$. In particular, $L_{U}$ is an $h$-club of rank $h+2$ if and only if $a\notin S+bS$. ###### Proof. From Proposition 3.1, $L_{U}$ is a linear set of rank $h+2$, with $w_{L_{U}}(\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}})=h$ and $w_{L_{U}}(\langle(0,1)\rangle_{{\mathbb{F}}_{q^{n}}})=1$. First note that, by (4), $w_{L_{U}}(P)\leq 2$ for every point $P$ different from $\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$. Let us consider the map $\Phi\colon S\cap(a+bS)\longrightarrow\\{P\in L_{U}\colon w_{L_{U}}(P)=2,\,P\neq\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}\\},\qquad s\longmapsto P_{s}=\langle(-s+a,b)\rangle_{{\mathbb{F}}_{q^{n}}}.$ First, we prove that $\Phi$ is well-defined.
If $S\cap(a+bS)=\emptyset$, there is nothing to prove. Therefore, suppose that $S\cap(a+bS)\neq\emptyset$ and let $\overline{s}\in S\cap(a+bS)$, so $\overline{s}=a+b\overline{s}^{\prime}$, for some $\overline{s}^{\prime}\in S$. Then $(-\overline{s}+a,b)\in U$ and $(-\overline{s}+a,b)=b(-\overline{s}^{\prime},1)$, where $(-\overline{s}^{\prime},1)=((-1-\overline{s}^{\prime})+1,1)\in U$. Since $b\notin{\mathbb{F}}_{q}$, the point $P_{\overline{s}}$ has weight 2 in $L_{U}$. Now, we show that $\Phi$ is a one-to-one correspondence. Indeed, we only need to prove that $\Phi$ is surjective. If $L_{U}$ has no point of weight 2, this is clear. Otherwise, let $P$ be a point of weight $2$ in $L_{U}\setminus\\{\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}\\}$. So $P=\langle(s_{1}+\alpha_{1}+\beta_{1}a,\alpha_{1}+\beta_{1}b)\rangle_{{\mathbb{F}}_{q^{n}}}=\langle(s_{2}+\alpha_{2}+\beta_{2}a,\alpha_{2}+\beta_{2}b)\rangle_{{\mathbb{F}}_{q^{n}}},$ for some $s_{1},s_{2}\in S$ and $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\in{\mathbb{F}}_{q}$, with $(s_{1}+\alpha_{1}+\beta_{1}a,\alpha_{1}+\beta_{1}b)$ and $(s_{2}+\alpha_{2}+\beta_{2}a,\alpha_{2}+\beta_{2}b)$ ${\mathbb{F}}_{q^{n}}$-linearly dependent via a scalar $t\in{\mathbb{F}}_{q^{n}}\setminus{\mathbb{F}}_{q}$, that is $(s_{1}+\alpha_{1}+\beta_{1}a,\alpha_{1}+\beta_{1}b)=t(s_{2}+\alpha_{2}+\beta_{2}a,\alpha_{2}+\beta_{2}b)$. This implies $(s_{1}+\alpha_{1}+\beta_{1}a)(\alpha_{2}+\beta_{2}b)=(s_{2}+\alpha_{2}+\beta_{2}a)(\alpha_{1}+\beta_{1}b),$ and so (9) $s_{1}\alpha_{2}-s_{2}\alpha_{1}=a(\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1})-b(\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1})+b(s_{2}\beta_{1}-s_{1}\beta_{2}).$ Now, we observe that $\gamma=\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1}\neq 0$, that is, $(\alpha_{1},\beta_{1})$ and $(\alpha_{2},\beta_{2})$ are ${\mathbb{F}}_{q}$-linearly independent. Indeed, suppose that $(\alpha_{1},\beta_{1})$ and $(\alpha_{2},\beta_{2})$ are ${\mathbb{F}}_{q}$-proportional; then there exists $\lambda\in{\mathbb{F}}_{q}$ such that $\alpha_{2}+\beta_{2}b=\lambda(\alpha_{1}+\beta_{1}b)=\lambda t(\alpha_{2}+\beta_{2}b)$. This implies that $\lambda t=1$ and so $t\in{\mathbb{F}}_{q}$, a contradiction. Therefore (9) implies (10) $s_{1}\frac{\alpha_{2}}{\gamma}-s_{2}\frac{\alpha_{1}}{\gamma}=a+b\left(s_{2}\frac{\beta_{1}}{\gamma}-s_{1}\frac{\beta_{2}}{\gamma}-1\right).$ Let $\overline{s}=s_{1}\frac{\alpha_{2}}{\gamma}-s_{2}\frac{\alpha_{1}}{\gamma}\in S$; since $s_{2}\frac{\beta_{1}}{\gamma}-s_{1}\frac{\beta_{2}}{\gamma}-1\in S$ (recall $1\in S$), (10) yields $\overline{s}\in S\cap(a+bS)$. Since $(s_{1}+\alpha_{1}+\beta_{1}a,\alpha_{1}+\beta_{1}b),(s_{2}+\alpha_{2}+\beta_{2}a,\alpha_{2}+\beta_{2}b)\in U$, then $\frac{\alpha_{2}}{\gamma}(s_{1}+\alpha_{1}+\beta_{1}a,\alpha_{1}+\beta_{1}b)-\frac{\alpha_{1}}{\gamma}(s_{2}+\alpha_{2}+\beta_{2}a,\alpha_{2}+\beta_{2}b)=(\overline{s}-a,-b)$ belongs to $U$ and defines again the point $P$; so $P=\langle(\overline{s}-a,-b)\rangle_{{\mathbb{F}}_{q^{n}}}=\Phi(\overline{s})$, and hence $\Phi$ is surjective. In particular, if there exists a point of weight 2 in $L_{U}$ then $S\cap(a+bS)\neq\emptyset$. Therefore, $L_{U}$ has only one point of weight greater than $1$ if and only if $S\cap(a+bS)=\emptyset$, which happens if and only if $a\notin S+bS$. This completes the proof. ∎ Even when the linear sets of Theorem 3.2 are not $h$-clubs, they have an interesting weight distribution and size. ###### Corollary 3.3.
Let $S$ be an $h$-dimensional ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$ such that $1\in S$ and $3\leq h\leq n-2$. Consider $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}\subseteq{\mathbb{F}}_{q^{n}}^{2},$ with $a\notin S$ and $b\notin{\mathbb{F}}_{q}$. If $a\in S+bS$, then $L_{U}$ has $q^{j}$ points of weight $2$, with $j=\dim_{{\mathbb{F}}_{q}}(S\cap bS)=2h-\dim_{{\mathbb{F}}_{q}}(S+bS)$, and $\lvert L_{U}\rvert=q^{h+1}+q^{h}-q^{j+1}+1.$ ###### Proof. Since $a\in S+bS$, we have that $S\cap(a+bS)\neq\emptyset$; being a coset of $S\cap bS$, it has size $q^{j}$. So, by Theorem 3.2, the number of points of weight 2 is $\lvert S\cap(a+bS)\rvert=q^{j}$. Finally, by (2) and (3), we get the assertion. ∎ The next result shows that $h$-clubs of rank $h+2$ (up to the action of $\mathrm{PGL}(2,q^{n})$ on $\mathrm{PG}(1,q^{n})$) are as described in (8). ###### Lemma 3.4. Let $L_{W}$ be an $h$-club of rank $h+2\leq n$ in $\mathrm{PG}(1,q^{n})$. Then $L_{W}$ is $\mathrm{PGL}(2,q^{n})$-equivalent to a linear set $L_{U}$, where $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}$, for some $h$-dimensional ${\mathbb{F}}_{q}$-subspace $S$ containing $1$, with $a\notin S$, $b\notin{\mathbb{F}}_{q}$ and $a\notin S+bS$. ###### Proof. Let $P=\langle\mathbf{v}_{1}\rangle_{{\mathbb{F}}_{q^{n}}}\in L_{W}$ be the point of weight $h$, with $\mathbf{v}_{1}\in W$. Let $Q=\langle\mathbf{v}_{2}\rangle_{{\mathbb{F}}_{q^{n}}}\in L_{W}\setminus\\{P\\}$, with $\mathbf{v}_{2}\in W$, and let $\psi\in\mathrm{PGL}(2,q^{n})$ be the projectivity of $\mathrm{PG}(1,q^{n})$ induced by the unique ${\mathbb{F}}_{q^{n}}$-isomorphism $f$ of ${\mathbb{F}}_{q^{n}}^{2}$ that maps $\mathbf{v}_{1}$ to $(1,0)$ and $\mathbf{v}_{2}$ to $(1,1)$. Then $\psi(L_{W})=L_{U}$ where $U=f(W)$ and $(1,0),(1,1)\in U$. Clearly, $\psi(P)=\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$, $\psi(Q)=\langle(1,1)\rangle_{{\mathbb{F}}_{q^{n}}}$ and $w_{L_{U}}(\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}})=h$. Let $S^{\prime}=U\cap\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$. We can write $S^{\prime}=S\times\\{0\\}$, where $S$ is an ${\mathbb{F}}_{q}$-subspace of $\mathbb{F}_{q^{n}}$, $\dim_{{\mathbb{F}}_{q}}(S^{\prime})=\dim_{{\mathbb{F}}_{q}}(S)=h$ and $1\in S$. Let $(a,b)\in U$ be such that $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}$, so (11) $U=\\{(s+\alpha+\beta a,\alpha+\beta b)\colon s\in S,\alpha,\beta\in{\mathbb{F}}_{q}\\}.$ Since $(a,b)\notin(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}$, either $a\notin S$ or $b\notin{\mathbb{F}}_{q}$. Suppose that $b\in{\mathbb{F}}_{q}$. Then $(-b+a,0)\in U$ (choosing $s=0$, $\alpha=-b$ and $\beta=1$ in (11)). So $-b+a\in S$ and then $a\in S$ (since $b\in{\mathbb{F}}_{q}$ and $1\in S$), a contradiction. Therefore, $b\notin{\mathbb{F}}_{q}$. Suppose now that $a\in S$. Then, for all $\alpha,\beta\in{\mathbb{F}}_{q}$, choosing $s=-\alpha-\beta a\in S$ in (11) shows that $(0,\alpha+\beta b)\in U$; hence $\langle(0,1)\rangle_{{\mathbb{F}}_{q^{n}}}$ is a point different from $\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$ that has weight $2$ in $L_{U}$, a contradiction since $L_{U}$ is an $h$-club. Finally, by Theorem 3.2, we get $a\notin S+bS$. ∎ ###### Remark 3.5. Note that if $\dim_{{\mathbb{F}}_{q}}(S)=h<n/2$, then for every $b\in{\mathbb{F}}_{q^{n}}\setminus{\mathbb{F}}_{q}$ we have $\dim_{{\mathbb{F}}_{q}}(S+bS)<n$, and hence there exists $a\in{\mathbb{F}}_{q^{n}}\setminus(S+bS)$.
Therefore, the ${\mathbb{F}}_{q}$-subspace $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}$ defines an $h$-club of rank $h+2$. Our aim is to give a complete classification of $(n-2)$-clubs of rank $n$. By Lemma 3.4, we may assume that $(n-2)$-clubs are as described in (8), and we start by proving under which conditions on $S$, $a$ and $b$ the linear sets of Lemma 3.4 are clubs. ###### Theorem 3.6. Let $S$ be an $(n-2)$-dimensional ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}$, such that $1\in S$. Consider $U=(S\times\\{0\\})\oplus\langle(1,1)\rangle_{{\mathbb{F}}_{q}}\oplus\langle(a,b)\rangle_{{\mathbb{F}}_{q}}\subseteq{\mathbb{F}}_{q^{n}}^{2},$ with $a\notin S$ and $b\notin{{\mathbb{F}}_{q}}$. Let $T=\langle 1,b\rangle_{{\mathbb{F}}_{q}}$ and ${\mathbb{F}}_{q^{t}}={\mathbb{F}}_{q}(b)$. Then exactly one of the following three cases occurs: 1. (1) $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=n$ and $L_{U}$ is not an $(n-2)$-club. In this case $L_{U}$ contains $q^{n-4}$ points of weight 2 and $\lvert L_{U}\rvert=q^{n-1}+q^{n-2}-q^{n-3}+1$. 2. (2) $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=n-1$, and in this case $L_{U}$ is an $(n-2)$-club of rank $n$ if and only if $a\notin\langle S\cdot T\rangle_{{\mathbb{F}}_{q}}$. So, if $a\in\langle S\cdot T\rangle_{{\mathbb{F}}_{q}}$, then $L_{U}$ is an ${\mathbb{F}}_{q}$-linear set having $q^{n-3}$ points of weight 2 and $\lvert L_{U}\rvert=q^{n-1}+1$. If instead $a\notin\langle S\cdot T\rangle_{{\mathbb{F}}_{q}}$: 1. (2.1) if $t\geq n-2$, there exists $c\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S=c\langle 1,b,\ldots,b^{n-3}\rangle_{{\mathbb{F}}_{q}}$, $L_{U}=\left\\{\left\langle\left(\sum_{i=0}^{n-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b\ \right)\right\rangle_{{\mathbb{F}}_{q^{n}}}\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q}\mbox{ not all zero}\right\\},$ and $t=n$. 2. (2.2) if $t\leq n-3$, there exist $\ell\in\mathbb{N}$ and $c\in{\mathbb{F}}_{q^{n}}^{*}$ such that $n=t(\ell+1)$, with $t\geq 3$, and $S=\overline{S}\oplus c\langle 1,b,\ldots,b^{t-3}\rangle_{{\mathbb{F}}_{q}}$, with $\overline{S}$ an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c{\mathbb{F}}_{q^{t}}\cap\overline{S}=\\{0\\}$, and $L_{U}=\left\\{\left\langle\left(\overline{s}+\sum_{i=0}^{t-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b\right)\right\rangle_{{\mathbb{F}}_{q^{n}}}\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\mbox{ not all zero}\right\\}.$ 3. (3) $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=n-2$; then $n$ is even, ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{2}}$ and $S$ is an ${\mathbb{F}}_{q^{2}}$-subspace. Moreover, there exists $c\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S=c\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$ and $L_{U}=\left\\{\left\langle\left(\sum_{i=0}^{n/2-2}cx_{i}\gamma^{i}+\alpha+\beta a,\alpha+\beta b\ \right)\right\rangle_{{\mathbb{F}}_{q^{n}}}\colon x_{i}\in{\mathbb{F}}_{q^{2}},\alpha,\beta\in{\mathbb{F}}_{q}\mbox{ not all zero}\right\\},$ for some $\gamma\in{\mathbb{F}}_{q^{n}}^{*}$ such that ${\mathbb{F}}_{q^{2}}(\gamma)={\mathbb{F}}_{q^{n}}$. Moreover, $L_{U}$ is an $(n-2)$-club of rank $n$ if and only if $a\notin c\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$.
If $a\in c\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$, then all the points of $L_{U}$ different from $\langle(1,0)\rangle_{{\mathbb{F}}_{q^{n}}}$ have weight 2 and $\lvert L_{U}\rvert=q^{n-2}+1$. In particular, if $n$ is prime, Cases (2.2) and (3) cannot occur. ###### Proof. Case (1): This case is a consequence of Theorem 3.2. Indeed, note that $\langle S\cdot T\rangle_{{\mathbb{F}}_{q}}=S+bS={\mathbb{F}}_{q^{n}}$ and hence $a\in S+bS$, so that Theorem 3.2 implies that $L_{U}$ is not an $(n-2)$-club. Also, by Corollary 3.3 the weight distribution and the size of $L_{U}$ are determined. Case (2): In the case that $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=n-1$, we have $\dim_{{\mathbb{F}}_{q}}(S\cap bS)=n-3$, and so, by Lemma 2.8, we need to analyze the following two cases: * • if $t\geq n-2$, there exists $c\in{\mathbb{F}}_{q^{n}}$ such that $S=c\langle 1,b,\ldots,b^{n-3}\rangle_{{\mathbb{F}}_{q}}$. Moreover, by Lemma 2.8 we have $t\neq n-2$, so $t>n-2$; since $t$ divides $n$ and $n\geq 5$, this yields $t=n$. * • If $t<n-2$, we write $n-2=t\ell+m$, with $0<m<t$. In this case, since $t\mid n$, then $t\mid m+2$ and so there exists $s\in\mathbb{N}$ such that $m+2=st$. Therefore, since $m<t$, then $2>t(s-1)$. Because of $t\geq 2$, $s=1$, hence $m=t-2$ and $n=t(\ell+1)$. Also, since $t\mid m+2$ and $0<m<t$, we have $t\geq 3$. Case (3): Note that $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})=n-2$ implies $S\cap bS=S$. From (a) of Lemma 2.8, we get that $S$ is an ${\mathbb{F}}_{q}(b)$-subspace and hence $t\mid n-2$. Since $t\mid n$, then $t\mid 2$ and $t=2$. Hence $\dim_{{\mathbb{F}}_{q^{2}}}(S)=n/2-1$ and, fixing $\gamma\in{\mathbb{F}}_{q^{n}}^{*}$ such that ${\mathbb{F}}_{q^{n}}={\mathbb{F}}_{q^{2}}(\gamma)$, we can now apply Proposition 2.5 to obtain the desired form. In the case $a\in c\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$ we have that $a\in S+bS=S$ and so by Corollary 3.3 the weight distribution and the size of $L_{U}$ are determined. Finally, suppose that $n$ is a prime number. Then only the first case of Theorem 2.6 can occur, which means that $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})\in\\{n-1,n\\}$, corresponding to Cases (1) and (2.1). ∎ As a consequence of Lemma 3.4 and Theorem 3.6, we get the following classification theorem for $(n-2)$-clubs. ###### Corollary 3.7. Let $L$ be a linear set of rank $n$ of $\mathrm{PG}(1,q^{n})$. Then $L$ is an $(n-2)$-club if and only if it is $\mathrm{PGL}(2,q^{n})$-equivalent to a linear set $L_{U}$ such that $U$ has the following form: $n=t(\ell+1)$, with $\ell\in\mathbb{N}$, and $U=\left\\{\left(\overline{s}+\sum_{i=0}^{t-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b\ \right)\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\right\\},$ with $a,b,c\in{\mathbb{F}}_{q^{n}}^{*}$, $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ such that * • ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{t}}$, * • $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c{\mathbb{F}}_{q^{t}}\cap\overline{S}=\\{0\\}$, * • $1\in\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * • $a\notin\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$. ###### Proof. Theorem 3.6 gives a complete characterization of subspaces defining $(n-2)$-clubs. So, to prove the assertion, it is enough to observe that the subspace $U$ in Case (2.1) of Theorem 3.6 is obtained choosing $\ell=0$, that of Case (2.2) choosing $3\leq t\leq n-3$, and finally that of Case (3) choosing $t=2$.
∎ The known examples of $(n-2)$-clubs are described in the following remarks. ###### Remark 3.8. Let $U$ be defined as follows: $U=\left\\{\left(\sum_{i=0}^{n-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b\right)\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q}\right\\},$ with $a,b,c\in{\mathbb{F}}_{q^{n}}^{*}$ such that * • ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{n}}$, * • $a\notin c\langle 1,b,\ldots,b^{n-3},b^{n-2}\rangle_{{\mathbb{F}}_{q}}$. Choosing $c=1$ and $a=b-b^{n-1}$, and applying the projectivity of $\mathrm{PG}(1,q^{n})$ induced by the matrix $\left(\begin{matrix}b&-a\\\ 0&1\end{matrix}\right),$ we obtain the family of linear sets described in Construction 2.1. ###### Remark 3.9. The families of $i$-clubs described in Construction 2.2 give $(n-2)$-clubs in the following cases: * • $n=2r$, $r>1$, $f(x)=a_{0}x+a_{1}x^{q}\in{\mathbb{F}}_{q^{2}}[x]$ with $a_{1}\neq 0$ and $U_{a,b}=\left\\{\left(a_{0}x_{0}+a_{1}x_{0}^{q}-ax_{0},bx_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{2}}\right\\},$ for some fixed $a,b\in{\mathbb{F}}_{q^{2}}$ with $b\neq 0$ and $\\{1,\omega,\ldots,\omega^{r-1}\\}$ an ${\mathbb{F}}_{q^{2}}$-basis of ${\mathbb{F}}_{q^{n}}$. Then $L_{U_{a,b}}$ is an $(n-2)$-club if $a_{0}x+a_{1}x^{q}-ax$ is invertible over ${\mathbb{F}}_{q^{2}}$. * • $n=3r$, $r>1$, $f(x)=a_{0}x+a_{1}x^{q}+a_{2}x^{q^{2}}\in{\mathbb{F}}_{q^{3}}[x]$ such that either $a_{1}=0$ and $a_{2}\neq 0$, or $a_{1}\neq 0$ and $\mathrm{N}_{q^{3}/q}(a_{2}/a_{1})\neq 1$, and $U_{a,b}=\left\\{\left(a_{0}x_{0}+a_{1}x_{0}^{q}+a_{2}x_{0}^{q^{2}}-ax_{0},bx_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{3}}\right\\},$ for some fixed $a,b\in{\mathbb{F}}_{q^{3}}$ with $b\neq 0$ and $\\{1,\omega,\ldots,\omega^{r-1}\\}$ an ${\mathbb{F}}_{q^{3}}$-basis of ${\mathbb{F}}_{q^{n}}$. Then $L_{U_{a,b}}$ is an $(n-2)$-club if $a_{0}x+a_{1}x^{q}+a_{2}x^{q^{2}}-ax$ is not invertible over ${\mathbb{F}}_{q^{3}}$. ###### Remark 3.10. The condition that the polynomial $f$ in (7) be scattered can be written down explicitly. Indeed, linearized polynomials which are scattered over ${\mathbb{F}}_{q^{2}}$ and over ${\mathbb{F}}_{q^{3}}$ are exactly those that define linear sets of pseudoregulus type. In particular, in the case of ${\mathbb{F}}_{q^{3}}$, $f$ is scattered if and only if there does not exist an element $\alpha\in{\mathbb{F}}_{q^{3}}$ such that $\dim_{{\mathbb{F}}_{q}}(\ker(f(x)-\alpha x))=2.$ By [25, Theorem 2.24], this is possible if and only if $f(x)-\alpha x=\gamma\mathrm{Tr}_{q^{3}/q}(\beta x),$ for some nonzero $\beta,\gamma\in{\mathbb{F}}_{q^{3}}$. Clearly, this can happen if and only if $a_{1}\neq 0$ and $\mathrm{N}_{q^{3}/q}(a_{2}/a_{1})=1$. ## 4. Equivalence of clubs In this section we will deal with the $\mathrm{\Gamma L}$-equivalence issue for the known families of clubs. We start by studying when two examples of Construction 2.2 are $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent. ###### Theorem 4.1. Let $n=rt$, $t,r>1$, and let $f_{1},f_{2}\in\mathcal{L}_{t,q}$ be scattered $q$-polynomials.
Let $U_{1}=\left\\{\left(f_{1}(x_{0})-a_{1}x_{0},b_{1}x_{0}+\sum_{i=1}^{r-1}x_{i}\omega_{1}^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\\},$ and $U_{2}=\left\\{\left(f_{2}(x_{0})-a_{2}x_{0},b_{2}x_{0}+\sum_{i=1}^{r-1}x_{i}\omega_{2}^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\\},$ for some fixed $a_{1},a_{2},b_{1},b_{2}\in{\mathbb{F}}_{q^{t}}$ with $b_{1},b_{2}\neq 0$ and $\\{1,\omega_{1},\ldots,\omega_{1}^{r-1}\\}$ and $\\{1,\omega_{2},\ldots,\omega_{2}^{r-1}\\}$ two ${\mathbb{F}}_{q^{t}}$-bases of ${\mathbb{F}}_{q^{n}}$. Then $U_{1}$ and $U_{2}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent if and only if the spaces $\overline{U}_{f_{1}}=\\{(f_{1}(x_{0})-a_{1}x_{0},b_{1}x_{0}):x_{0}\in{\mathbb{F}}_{q^{t}}\\}$ and $\overline{U}_{f_{2}}=\\{(f_{2}(x_{0})-a_{2}x_{0},b_{2}x_{0}):x_{0}\in{\mathbb{F}}_{q^{t}}\\}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{t})$-equivalent via an element $\phi\in\operatorname{\Gamma\mathrm{L}}(2,q^{t})$ such that $\phi$ fixes the subspace $\langle(0,1)\rangle_{{\mathbb{F}}_{q^{t}}}$. ###### Proof. Denote $\overline{S}_{1}=\langle\omega_{1},\ldots,\omega_{1}^{r-1}\rangle_{{\mathbb{F}}_{q^{t}}}$ and $\overline{S}_{2}=\langle\omega_{2},\ldots,\omega_{2}^{r-1}\rangle_{{\mathbb{F}}_{q^{t}}}$. Suppose that $U_{1}$ and $U_{2}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent. Then there exist a matrix $\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\in\mathrm{GL}(2,q^{n})$ and $\rho\in\operatorname{Aut}({\mathbb{F}}_{q^{n}})$ such that (12) $\begin{pmatrix}A&B\\\ C&D\end{pmatrix}U_{1}^{\rho}=U_{2}.$ Since $\langle(0,1)\rangle_{{\mathbb{F}}_{q^{n}}}$ is the only point in $L_{U_{1}}$ and $L_{U_{2}}$ having weight greater than 1, we get $B=0$, $D\neq 0$ and $D\overline{S}_{1}^{\rho}=\overline{S}_{2}.$ Moreover, (12) is satisfied if and only if for every $x_{0}\in{\mathbb{F}}_{q^{t}}$ and $s_{1}\in\overline{S}_{1}$ there exist $y_{0}\in{\mathbb{F}}_{q^{t}}$ and $s_{2}\in\overline{S}_{2}$ such that (13) $C(f_{1}^{\rho}(x^{\rho}_{0})-a_{1}^{\rho}x^{\rho}_{0})+D(b^{\rho}_{1}x^{\rho}_{0}+s^{\rho}_{1})=b_{2}y_{0}+s_{2}$ and (14) $A(f_{1}^{\rho}(x^{\rho}_{0})-a_{1}^{\rho}x^{\rho}_{0})=f_{2}(y_{0})-a_{2}y_{0}.$ Note that $\\{1,\omega_{2},\ldots,\omega_{2}^{r-1}\\}$ is an ${\mathbb{F}}_{q^{t}}$-basis of ${\mathbb{F}}_{q^{n}}$; this allows us to write $C=\sum_{i=0}^{r-1}C_{i}\omega_{2}^{i}$ and $D=\sum_{i=0}^{r-1}D_{i}\omega_{2}^{i}$, with the $C_{i}$'s and $D_{i}$'s in ${\mathbb{F}}_{q^{t}}$. Now, (13) reads as follows $\sum_{i=0}^{r-1}C_{i}\omega_{2}^{i}(f_{1}^{\rho}(x^{\rho}_{0})-a_{1}^{\rho}x^{\rho}_{0})+\sum_{i=0}^{r-1}D_{i}\omega_{2}^{i}(b^{\rho}_{1}x^{\rho}_{0}+s^{\rho}_{1})=b_{2}y_{0}+s_{2}$ and since $Ds^{\rho}_{1}\in\overline{S}_{2}$, comparing the components along $1$ in the above equality yields (15) $C_{0}(f_{1}^{\rho}(x^{\rho}_{0})-a_{1}^{\rho}x^{\rho}_{0})+D_{0}(b^{\rho}_{1}x^{\rho}_{0})=b_{2}y_{0}.$ Note also that $A\in{\mathbb{F}}_{q^{t}}$, because of Equation (14). Moreover, note that $(C_{0},D_{0})\neq(0,0)$: otherwise (15) would force $y_{0}=0$ for every $x_{0}$, and then (14) would give $A=0$, contradicting the invertibility of the matrix (recall $B=0$). Suppose that $D_{0}=0$. Then (15) together with (14) implies that $f_{2}(y_{0})=\frac{A}{C_{0}}b_{2}y_{0}+a_{2}y_{0}$, for every $y_{0}\in{\mathbb{F}}_{q^{t}}$. So $f_{2}$ is not a scattered $q$-polynomial, a contradiction. So $D_{0}\neq 0$. This means that the spaces $\overline{U}_{f_{1}}$ and $\overline{U}_{f_{2}}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{t})$-equivalent via the map of $\operatorname{\Gamma\mathrm{L}}(2,q^{t})$ defined by the matrix $\begin{pmatrix}A&0\\\ C_{0}&D_{0}\end{pmatrix}$ and the restriction of the automorphism $\rho$ to ${\mathbb{F}}_{q^{t}}$.
Conversely, suppose that $\overline{U}_{f_{1}}$ and $\overline{U}_{f_{2}}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{t})$-equivalent via an element $\phi\in\operatorname{\Gamma\mathrm{L}}(2,q^{t})$ such that $\phi(\langle(0,1)\rangle_{{\mathbb{F}}_{q^{t}}})=\langle(0,1)\rangle_{{\mathbb{F}}_{q^{t}}}$. Then $\phi$ can be represented by a matrix $\begin{pmatrix}A_{0}&0\\\ C_{0}&D_{0}\end{pmatrix}\in\mathrm{GL}(2,q^{t})$ and $\rho\in\operatorname{Aut}({\mathbb{F}}_{q^{t}})$ such that (16) $\begin{pmatrix}A_{0}&0\\\ C_{0}&D_{0}\end{pmatrix}\overline{U}_{f_{1}}^{\rho}=\overline{U}_{f_{2}}.$ Moreover, $\overline{S}^{\rho}_{1}$ is an $(r-1)$-dimensional ${\mathbb{F}}_{q^{t}}$-subspace of ${\mathbb{F}}_{q^{n}}$. By Proposition 2.5 there exists $D^{\prime}\in{\mathbb{F}}_{q^{n}}^{*}$ such that $D^{\prime}\overline{S}_{1}^{\rho}=\overline{S}_{2}$. Let us write $D^{\prime}$ as an ${\mathbb{F}}_{q^{t}}$-linear combination of $\\{1,\omega_{2},\ldots,\omega_{2}^{r-1}\\}$, namely $D^{\prime}=\sum_{i=0}^{r-1}D_{i}^{\prime}\omega_{2}^{i}$. Then $\frac{D_{0}}{D_{0}^{\prime}}D^{\prime}\overline{S}_{1}^{\rho}=\overline{S}_{2}$ also holds. Set $D=\frac{D_{0}}{D_{0}^{\prime}}D^{\prime}$. Therefore, since $D=D_{0}+\sum_{i=1}^{r-1}\frac{D_{0}D_{i}^{\prime}}{D_{0}^{\prime}}\omega_{2}^{i}$ and $b_{1}^{\rho}x_{0}^{\rho}\sum_{i=1}^{r-1}\frac{D_{0}D_{i}^{\prime}}{D_{0}^{\prime}}\omega_{2}^{i}\in\overline{S}_{2}$, the image of $U_{1}$ via the invertible map $\Phi$ represented by the matrix $\begin{pmatrix}A_{0}&0\\\ C_{0}&D\end{pmatrix}$ and the automorphism $\rho$ (whose action is extended to $\mathbb{F}_{q^{n}}$) is contained in $U_{2}$. Since $U_{1}$ and $U_{2}$ have the same dimension, we obtain that $\Phi(U_{1})=U_{2}$ and hence $U_{1}$ and $U_{2}$ are $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent. ∎ Due to the above result, we know that there are several $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-inequivalent examples of $i$-clubs in $\mathrm{PG}(1,q^{n})$; we refer to Table 1 of [31] for a list of all the known families of scattered polynomials (for the more recent examples see [27, 26, 33]). ###### Remark 4.2. The number of inequivalent subspaces of the recent family associated with the scattered polynomials presented in [33], which extends those in [27, 26], is computed in [33, Theorem 4.12] and a lower bound is given in [33, Theorem 4.14]. For the number of inequivalent subspaces associated with the more classical family of Lunardon-Polverino polynomials see [41]. Now we proceed by showing that ${\mathbb{F}}_{q}$-subspaces of different types (according to the behaviour of $\dim_{{\mathbb{F}}_{q}}(\langle S\cdot T\rangle_{{\mathbb{F}}_{q}})$, cf. Theorem 3.6) obtained in Corollary 3.7 are $\Gamma\mathrm{L}(2,q^{n})$-inequivalent. To this aim the following lemma is needed. ###### Lemma 4.3. Consider the following ${\mathbb{F}}_{q}$-subspaces of ${\mathbb{F}}_{q^{n}}$: 1. (1) $S_{1}=\left\\{\sum_{i=0}^{n-3}c_{1}x_{i}b_{1}^{i}\colon x_{i}\in{\mathbb{F}}_{q}\right\\},$ with $b_{1},c_{1}\in{\mathbb{F}}_{q^{n}}^{*}$ such that ${\mathbb{F}}_{q}(b_{1})={\mathbb{F}}_{q^{n}}$, 2.
(2) if $n=t(\ell+1)$, with $\ell\in\mathbb{N}$, $S_{2}=\left\{\overline{s}+\sum_{i=0}^{t-3}c_{2}x_{i}b_{2}^{i}\colon x_{i}\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\right\},$ with $b_{2},c_{2}\in{\mathbb{F}}_{q^{n}}^{*}$ such that ${\mathbb{F}}_{q}(b_{2})={\mathbb{F}}_{q^{t}}$, with $3\leq t\leq n-3$, and $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ with $c_{2}{\mathbb{F}}_{q^{t}}\cap\overline{S}=\{0\}$. 3. (3) if $n$ is even, $S_{3}=\left\{\sum_{i=0}^{n/2-2}c_{3}x_{i}b_{3}^{i}\colon x_{i}\in{\mathbb{F}}_{q^{2}}\right\},$ with $c_{3},b_{3}\in{\mathbb{F}}_{q^{n}}^{*}$, such that ${\mathbb{F}}_{q^{2}}(b_{3})={\mathbb{F}}_{q^{n}}$. Then there cannot exist $\lambda\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S_{i}=\lambda S_{j}^{\rho}$, for some $\rho\in\operatorname{Aut}({\mathbb{F}}_{q^{n}})$ and $i,j\in\{1,2,3\}$ with $i\neq j$. ###### Proof. Let $n$ be any positive integer for which we can consider $S_{i}$ and $S_{j}$, for $i,j\in\{1,2,3\}$ with $i\neq j$. Suppose that there exists $\lambda\in{\mathbb{F}}_{q^{n}}^{*}$ such that $S_{i}=\lambda S_{j}^{\rho}$ with $\rho\in\operatorname{Aut}({\mathbb{F}}_{q^{n}})$. Without loss of generality, we may assume that $i<j$ and that $\rho$ is the identity (up to replacing the $c_{i}$’s, the $b_{i}$’s, $\gamma$ and $\overline{S}$), and hence $S_{i}=\lambda S_{j}$. Now, we give a case-by-case analysis: * (i=1,j=2) Here, we have that $\lambda\overline{S}\subseteq S_{1}$. It follows that $S_{1}^{\perp}\subseteq(\lambda\overline{S})^{\perp}$ and $\dim_{{\mathbb{F}}_{q}}((\lambda\overline{S})^{\perp})=n-\ell t=t$. So, $(\lambda\overline{S})^{\perp}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of $\mathbb{F}_{q^{n}}$ of dimension one. Consider the ordered basis $\mathcal{B}=(1,b_{1},\ldots,b_{1}^{n-1})$ and its dual basis $\mathcal{B}^{*}=(\mu_{0}^{*},\mu_{1}^{*},\ldots,\mu_{n-1}^{*})$. It follows that $S_{1}^{\perp}=c_{1}^{-1}\langle\mu_{n-2}^{*},\mu_{n-1}^{*}\rangle_{{\mathbb{F}}_{q}}$ and by Lemma 2.3 we have that $\mu_{n-2}^{*}=\delta^{-1}(a_{n-1}+b_{1}),$ and $\mu_{n-1}^{*}=\delta^{-1},$ where $f(x)=a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+x^{n}$ is the minimal polynomial of $b_{1}$ over ${\mathbb{F}}_{q}$ and $\delta=f^{\prime}(b_{1})$. Now, since $c_{1}^{-1}\mu_{n-2}^{*},c_{1}^{-1}\mu_{n-1}^{*}\in(\lambda\overline{S})^{\perp}$ and since $(\lambda\overline{S})^{\perp}$ is a $1$-dimensional ${\mathbb{F}}_{q^{t}}$-subspace, it follows that $\frac{\mu_{n-2}^{*}}{\mu_{n-1}^{*}}=a_{n-1}+b_{1}\in{\mathbb{F}}_{q^{t}},$ that is, $b_{1}\in{\mathbb{F}}_{q^{t}}$, a contradiction. * (i=1,j=3) Since $S_{1}=c_{1}\langle 1,b_{1},\ldots,b_{1}^{n-3}\rangle_{{\mathbb{F}}_{q}}$, then $S_{1}\cap b_{1}S_{1}=c_{1}\langle b_{1},\ldots,b_{1}^{n-3}\rangle_{{\mathbb{F}}_{q}}$. Since $S_{3}$ is an ${\mathbb{F}}_{q^{2}}$-subspace, $S_{1}$ has to be an ${\mathbb{F}}_{q^{2}}$-subspace as well, and consequently $S_{1}\cap b_{1}S_{1}$ is an ${\mathbb{F}}_{q^{2}}$-subspace too. Then $2$ divides $\dim_{{\mathbb{F}}_{q}}(S_{1}\cap b_{1}S_{1})=n-3$, a contradiction since $n$ is even. * (i=2,j=3) Since $S_{2}=\overline{S}\oplus c_{2}\langle 1,b_{2},\ldots,b_{2}^{t-3}\rangle_{{\mathbb{F}}_{q}}$, then $S_{2}\cap b_{2}S_{2}=\overline{S}\oplus c_{2}\langle b_{2},\ldots,b_{2}^{t-3}\rangle_{{\mathbb{F}}_{q}}$. Since $S_{3}$ is an ${\mathbb{F}}_{q^{2}}$-subspace, we can argue as in the previous case, obtaining a contradiction with the fact that $n$ is even.
∎ As a consequence of Lemma 4.3, we show that any two of the ${\mathbb{F}}_{q}$-subspaces obtained in Theorem 3.7 in different cases are $\Gamma\mathrm{L}(2,q^{n})$-inequivalent. ###### Theorem 4.4. Consider the following ${\mathbb{F}}_{q}$-subspaces: * (1) Let $U_{1}=\left\{\left(\sum_{i=0}^{n-3}c_{1}x_{i}b_{1}^{i}+\alpha+\beta a_{1},\alpha+\beta b_{1}\right)\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q}\right\},$ with $a_{1},b_{1},c_{1}\in{\mathbb{F}}_{q^{n}}^{*}$ such that: * – ${\mathbb{F}}_{q}(b_{1})={\mathbb{F}}_{q^{n}}$, * – $1\in c_{1}\langle 1,b_{1},\ldots,b_{1}^{n-3},b_{1}^{n-2}\rangle_{{\mathbb{F}}_{q}}$, * – $a_{1}\notin c_{1}\langle 1,b_{1},\ldots,b_{1}^{n-3},b_{1}^{n-2}\rangle_{{\mathbb{F}}_{q}}$, * (2) if $n=t(\ell+1)$, with $\ell\in\mathbb{N}$, let $U_{2}=\left\{\left(\overline{s}+\sum_{i=0}^{t-3}c_{2}x_{i}b_{2}^{i}+\alpha+\beta a_{2},\alpha+\beta b_{2}\right)\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\right\},$ with $a_{2},b_{2},c_{2}\in{\mathbb{F}}_{q^{n}}^{*}$, $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ such that: * – ${\mathbb{F}}_{q}(b_{2})={\mathbb{F}}_{q^{t}}$, with $3\leq t\leq n-3$, * – $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c_{2}{\mathbb{F}}_{q^{t}}\cap\overline{S}=\{0\}$, * – $1\in\overline{S}\oplus c_{2}\langle 1,b_{2},\ldots,b_{2}^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * – $a_{2}\notin\overline{S}\oplus c_{2}\langle 1,b_{2},\ldots,b_{2}^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * (3) if $n$ is even, let $U_{3}=\left\{\left(\sum_{i=0}^{n/2-2}c_{3}x_{i}\gamma^{i}+\alpha+\beta a_{3},\alpha+\beta b_{3}\right)\colon x_{i}\in{\mathbb{F}}_{q^{2}},\alpha,\beta\in{\mathbb{F}}_{q}\right\},$ with $a_{3},b_{3},c_{3},\gamma\in{\mathbb{F}}_{q^{n}}^{*}$, such that: * – ${\mathbb{F}}_{q}(b_{3})={\mathbb{F}}_{q^{2}}$, * – ${\mathbb{F}}_{q^{2}}(\gamma)={\mathbb{F}}_{q^{n}}$, * – $1\in c_{3}\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$, * – $a_{3}\notin c_{3}\langle 1,\gamma,\ldots,\gamma^{n/2-2}\rangle_{{\mathbb{F}}_{q^{2}}}$. Then any two ${\mathbb{F}}_{q}$-subspaces $U_{i}$ and $U_{j}$ (with $i\neq j$) are $\Gamma\mathrm{L}(2,q^{n})$-inequivalent. ###### Proof. Suppose that $U_{i}$ and $U_{j}$ are $\Gamma\mathrm{L}(2,q^{n})$-equivalent. Then there exist a matrix $\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)\in\mathrm{GL}(2,q^{n})$ and $\rho\in\mathrm{Aut}(\mathbb{F}_{q^{n}})$ such that (17) $\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)U_{i}^{\rho}=U_{j}.$ Since $\langle(1,0)\rangle_{\mathbb{F}_{q^{n}}}$ is the only point in $L_{U_{i}}$ and $L_{U_{j}}$ having weight greater than 1, and the elements of $\mathrm{\Gamma L}(2,q^{n})$ preserve the weight of points, we get $\left(\begin{matrix}A&B\\ C&D\end{matrix}\right)\left(\begin{matrix}1\\ 0\end{matrix}\right)=\left(\begin{matrix}\mu\\ 0\end{matrix}\right),$ for some nonzero $\mu\in{\mathbb{F}}_{q^{n}}$. This implies that $C=0$, and therefore (17) implies $AS_{i}^{\rho}=S_{j}$, where $S_{i}$ and $S_{j}$ are as in Lemma 4.3. By applying Lemma 4.3 we obtain that such an $A$ cannot exist, and hence $U_{i}$ and $U_{j}$ cannot be $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent. ∎ The above result allows us to prove the existence of new examples of $(n-2)$-clubs. ###### Corollary 4.5. If $6\nmid n$, the constructions of Case (2) in Corollary 3.7 are new. ###### Proof. The previously known examples of $(n-2)$-clubs are those described in Constructions 2.1 and 2.2.
By Remark 3.8, Construction 2.1 falls in Case (1) of Corollary 3.7, while, by Remark 3.9, Construction 2.2 gives an $(n-2)$-club only when either $2$ or $3$ divides $n$. ∎ ###### Remark 4.6. By Theorem 4.4, the $(n-2)$-clubs of Construction 2.2 (cf. Remark 3.9) are not $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalent to the family of $(n-2)$-clubs associated with $U_{1}$ in Theorem 4.4 (that is, Construction 2.1). ## 5\. Polynomials defining clubs In this section, we provide a polynomial description for the known families of clubs and for those we have found in the previous section. As before, we will assume that $n\geq 5$, since for $n\leq 4$ we already have a polynomial description of clubs. We start with the family of $(n-2)$-clubs of Construction 2.1. ###### Theorem 5.1. Let $\lambda\in\mathbb{F}_{q^{n}}^{*}$ be such that $\{1,\lambda,\ldots,\lambda^{n-1}\}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$. Then the ${\mathbb{F}}_{q}$-linear set $L_{\lambda}$ defined as in Construction 2.1 is $\mathrm{PG}\mathrm{L}(2,q^{n})$-equivalent to $L_{U_{p}}$ with $U_{p}=\{(x,p(x))\colon x\in{\mathbb{F}}_{q^{n}}\}$ and $p(x)=\mathrm{Tr}_{q^{n}/q}(c_{n-2}x)+\lambda\mathrm{Tr}_{q^{n}/q}(c_{n-1}x),$ where $\omega\in\mathbb{F}_{q^{n}}$ is such that $(1,\lambda,\ldots,\lambda^{n-3},\lambda^{n-2}+\omega,\omega\lambda)$ is an ordered ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$ and $(c_{0},\ldots,c_{n-1})$ is its dual basis. In particular, if $q$ is odd then we may choose $\omega=\lambda^{n-2}$ and in this case $c_{i}=\frac{1}{\delta}\sum_{j=0}^{n-i-1}\lambda^{j}a_{i+j+1},$ for $i\in\{n-2,n-1\}$, where $f(x)=a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+x^{n}$ is the minimal polynomial of $\lambda$ over ${\mathbb{F}}_{q}$ and $\delta=f^{\prime}(\lambda)$. ###### Proof. Consider $U_{\lambda}=\{(t_{1}\lambda+\ldots+t_{n-1}\lambda^{n-1},t_{n-1}+t_{n}\lambda)\colon(t_{1},\ldots,t_{n})\in{\mathbb{F}}_{q}^{n}\}.$ Note that an $\omega$ as in the statement exists, since the set $Y=\left\{\frac{-\alpha_{0}-\ldots-\alpha_{n-2}\lambda^{n-2}}{\alpha_{n-2}+\lambda\alpha_{n-1}}\colon\alpha_{0},\ldots,\alpha_{n-1}\in{\mathbb{F}}_{q},(\alpha_{n-2},\alpha_{n-1})\neq(0,0)\right\}$ does not cover all the elements of $\mathbb{F}_{q^{n}}$. Indeed, $\lvert Y\rvert=\lvert L_{U_{\lambda}}\rvert-1=q^{n-1}+q^{n-2}$. Now, applying $\left(\begin{array}{cc}\lambda^{-1}&\omega\\ 0&1\end{array}\right)$ to $U_{\lambda}$ we obtain the following subspace $U=\{(t_{1}+\ldots+t_{n-1}\lambda^{n-2}+\omega(t_{n-1}+t_{n}\lambda),t_{n-1}+t_{n}\lambda)\}.$ Since $(c_{0},\ldots,c_{n-1})$ is the dual basis of $(1,\lambda,\ldots,\lambda^{n-3},\lambda^{n-2}+\omega,\omega\lambda)$, we have $p(t_{1}+\ldots+t_{n-1}\lambda^{n-2}+\omega(t_{n-1}+t_{n}\lambda))=t_{n-1}+\lambda t_{n},$ that is, $U=U_{p}$ and the first part of the assertion is proved. The second part follows from the fact that when $q$ is odd we may choose $\omega=\lambda^{n-2}$, so that $U$ can be written as follows $\{(t_{1}+\ldots+2t_{n-1}\lambda^{n-2}+t_{n}\lambda^{n-1},t_{n-1}+t_{n}\lambda)\colon(t_{1},\ldots,t_{n})\in{\mathbb{F}}_{q}^{n}\}.$ Since now the first component can be expressed through a polynomial basis, the assertion follows by Corollary 2.3. ∎ ###### Remark 5.2. In the case in which the minimal polynomial of $\lambda$ has degree two or three, we may use Corollary 2.4 to obtain nicer expressions for $p(x)$.
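The dual-basis formula $c_{i}=\frac{1}{\delta}\sum_{j=0}^{n-i-1}\lambda^{j}a_{i+j+1}$ appearing in Theorem 5.1 can be verified numerically for small parameters. The following self-contained Python sketch (an illustration added here, not part of the original argument; the choice of $q=2$, $n=4$ and of the modulus $x^{4}+x+1$ is ours) checks the defining property $\mathrm{Tr}_{q^{n}/q}(c_{i}\lambda^{j})=\delta_{ij}$ of the dual basis of $(1,\lambda,\ldots,\lambda^{n-1})$.

```python
# Numerical sanity check of c_i = delta^{-1} * sum_j lambda^j a_{i+j+1}
# over GF(2^4), with lambda a root of f(x) = x^4 + x + 1 (our own choice).
N, MOD = 4, 0b10011          # f(x) = x^4 + x + 1
A = [1, 1, 0, 0, 1]          # coefficients a_0, ..., a_4 of f (a_4 = 1)

def mul(a, b):               # multiplication in GF(2^4)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def power(a, e):             # square-and-multiply exponentiation
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def tr(x):                   # Tr_{2^4/2}(x) = x + x^2 + x^4 + x^8
    return x ^ mul(x, x) ^ power(x, 4) ^ power(x, 8)

lam = 0b0010                 # lambda itself (the class of x)
# here delta = f'(lambda) = 4*lambda^3 + 1 = 1 in characteristic 2
c = [0] * N
for i in range(N):
    for j in range(N - i):
        if A[i + j + 1]:
            c[i] ^= power(lam, j)
# dual-basis property: Tr(c_i * lambda^j) = 1 exactly when i = j
for i in range(N):
    for j in range(N):
        assert tr(mul(c[i], power(lam, j))) == (1 if i == j else 0)
print("dual basis of (1, lambda, lambda^2, lambda^3):", [bin(x) for x in c])
```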
Regarding Construction 2.2, we can give the corresponding polynomial description, which relies on the associated scattered polynomial (for $q=2$ see also [16]). ###### Theorem 5.3. Let $n=tr$, with $t,r>1$, and let $f\in\mathcal{L}_{t,q}$ be a scattered polynomial. Let $U_{a,b}=\left\{\left(f(x_{0})-ax_{0},bx_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\},$ for some fixed $a,b\in{\mathbb{F}}_{q^{t}}$ with $b\neq 0$ and $\{1,\omega,\ldots,\omega^{r-1}\}$ an ${\mathbb{F}}_{q^{t}}$-basis of ${\mathbb{F}}_{q^{n}}$. Then $L_{U_{a,b}}$ is $\mathrm{PG}\mathrm{L}(2,q^{n})$-equivalent to $L_{U_{p}}$ with $U_{p}=\{(x,p(x))\colon x\in{\mathbb{F}}_{q^{n}}\}$ and $p(x)=\mathrm{Tr}_{q^{n}/q^{t}}(f(x)-x).$ ###### Proof. First observe that $U_{a,b}$ is $\mathrm{GL}(2,q^{n})$-equivalent to $U=\left\{\left(f(x_{0})-ax_{0},x_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i}\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\}.$ Then let $(c_{0},\ldots,c_{r-1})$ be the dual basis of $(1,\omega,\ldots,\omega^{r-1})$. Noting that if $x=\sum_{i=0}^{r-1}x_{i}\omega^{i}$ then $x_{0}=\mathrm{Tr}_{q^{n}/q^{t}}(c_{0}x)$, and applying $\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)$, we obtain that $U_{a,b}$ is $\mathrm{GL}(2,q^{n})$-equivalent to $U^{\prime}=\{(x,f(\mathrm{Tr}_{q^{n}/q^{t}}(c_{0}x))-a\mathrm{Tr}_{q^{n}/q^{t}}(c_{0}x))\colon x\in{\mathbb{F}}_{q^{n}}\},$ since $f\circ\mathrm{Tr}_{q^{n}/q^{t}}=\mathrm{Tr}_{q^{n}/q^{t}}\circ f$, and by applying $\left(\begin{array}{cc}c_{0}&0\\ 0&1\end{array}\right)$ to $U^{\prime}$ we obtain that $U^{\prime}$ is $\mathrm{GL}(2,q^{n})$-equivalent to $U_{p}$ and hence the assertion. ∎ Now, we give the polynomial description of any possible $(n-2)$-club. ###### Theorem 5.4. Let $L_{W}$ be an $(n-2)$-club in $\mathrm{PG}(1,q^{n})$. There exist $b\in\mathbb{F}_{q^{n}}\setminus{\mathbb{F}}_{q}$ and $\xi,\eta\in\mathbb{F}_{q^{n}}^{*}$ such that $L_{W}$ is $\mathrm{PGL}(2,q^{n})$-equivalent to $L_{U_{p}}$, where $p(x)=\mathrm{Tr}_{q^{n}/q}(\xi x)+b\mathrm{Tr}_{q^{n}/q}(\eta x),$ with $\langle\xi,\eta\rangle_{{\mathbb{F}}_{q}}^{\perp}=\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$, where $t=[{\mathbb{F}}_{q}(b):{\mathbb{F}}_{q}]$, $c\in\mathbb{F}_{q^{n}}^{*}$ and $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of $\mathbb{F}_{q^{n}}$ of dimension $n/t-1$. ###### Proof. Let $L_{W}$ be an $(n-2)$-club; then by Corollary 3.7 $L_{W}$ is $\mathrm{PGL}(2,q^{n})$-equivalent to $L_{U}$, where $n=t(\ell+1)$, with $\ell\in\mathbb{N}_{0}$, and $U=\left\{\left(\overline{s}+\sum_{i=0}^{t-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b\right)\colon x_{i},\alpha,\beta\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\right\},$ with $a,b,c\in{\mathbb{F}}_{q^{n}}^{*}$, $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ such that * • ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{t}}$, * • $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c{\mathbb{F}}_{q^{t}}\cap\overline{S}=\{0\}$, * • $1\in\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * • $a\notin\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$. Now, we prove the existence of $\omega\in\mathbb{F}_{q^{n}}^{*}$ such that $(s_{1},\ldots,s_{\ell t},c,cb,\ldots,cb^{t-3},1+\omega,a+\omega b)$ is an ordered ${\mathbb{F}}_{q}$-basis, where $\{s_{1},\ldots,s_{\ell t}\}$ is an ${\mathbb{F}}_{q}$-basis of $\overline{S}$.
Indeed, $\omega$ exists if and only if the set $\left\{\frac{d_{1}s_{1}+d_{2}s_{2}+\ldots+d_{n-2}cb^{t-3}+d_{n-1}+ad_{n}}{d_{n-1}+d_{n}b}\colon d_{1},\ldots,d_{n}\in{\mathbb{F}}_{q},(d_{n-1},d_{n})\neq(0,0)\right\}$ does not cover all the elements of ${\mathbb{F}}_{q^{n}}$, and this is equivalent to the existence of a point of the form $\langle(1,\omega)\rangle_{\mathbb{F}_{q^{n}}}$ which is not in $L_{U}$. This is always the case, since $L_{U}$ has size $q^{n-1}+q^{n-2}+1$, which is less than $q^{n}-1$. Finally, applying $\left(\begin{array}{cc}1&\omega\\ 0&1\end{array}\right)$ to $U$ we obtain $U_{p}$, where $p(x)=\mathrm{Tr}_{q^{n}/q}(\xi_{n-2}x)+b\mathrm{Tr}_{q^{n}/q}(\xi_{n-1}x)$ and $(\xi_{0},\ldots,\xi_{n-1})$ is the dual basis of $(s_{1},\ldots,s_{\ell t},c,cb,\ldots,cb^{t-3},1+\omega,a+\omega b)$. This is due to the fact that every element $\delta$ of $\mathbb{F}_{q^{n}}$ may be written as $\delta=\overline{s}+\sum_{i=0}^{t-3}cx_{i}b^{i}+\alpha+\beta a$ for some $\overline{s}\in\overline{S}$, $x_{0},\ldots,x_{t-3},\alpha,\beta\in{\mathbb{F}}_{q}$, and $p(\delta)=\alpha+b\beta$. Therefore, the $\xi$ and $\eta$ of the statement correspond to $\xi_{n-2}$ and $\xi_{n-1}$, respectively. ∎ ## 6\. Blocking sets and KM-arcs A well-studied topic in finite geometry is the theory of blocking sets. A _blocking set_ $\mathcal{B}$ in $\mathrm{PG}(2,q)$ is a set of points with the property that every line meets $\mathcal{B}$ in at least one point. An easy example is given by a line, which is also called a _trivial blocking set_. An important class of blocking sets is given by the blocking sets of Rédei type. If $\mathcal{B}$ is a blocking set in $\mathrm{PG}(2,q)$ of size $q+t$ with $t<q$ for which there exists a line $\ell$ meeting $\mathcal{B}$ in exactly $t$ points, then $\mathcal{B}$ is called a blocking set of _Rédei type_. Related to the celebrated linearity conjecture are the linear blocking sets. An _${\mathbb{F}}_{q}$ -linear blocking set_ $L_{W}$ in $\mathrm{PG}(2,q^{n})$ is any ${\mathbb{F}}_{q}$-linear set of rank $n+1$, and those of Rédei type are the ones that can be described as follows: let $\ell$ be the Rédei line of $\mathcal{B}$, consider $L_{U}=\ell\cap L_{W}$ and let $v\in W\setminus\langle U\rangle_{{\mathbb{F}}_{q^{n}}}$. Then $L_{W}=L_{U\oplus\langle v\rangle_{{\mathbb{F}}_{q}}}$ and $L_{U}$ is an ${\mathbb{F}}_{q}$-linear set of rank $n$ contained in the line $\ell$. The Rédei line $\ell$ is unique if $L_{U}$ is a strictly ${\mathbb{F}}_{q}$-linear set which is not an $(n-1)$-club, equivalently if $L_{U}$ is not $\mathrm{P\Gamma L}(2,q^{n})$-equivalent to $L_{\mathrm{Tr}_{q^{n}/q}}$. We are now interested in studying those associated with $i$-clubs. ###### Remark 6.1. Let $\ell_{\infty}$ be the line in $\mathrm{PG}(2,q^{n})=\mathrm{PG}(V,{\mathbb{F}}_{q^{n}})$ with equation $X_{2}=0$ and consider $L_{U}\subseteq\ell_{\infty}$ to be an $i$-club with $i\leq n-2$. Consider $v\in V\setminus\langle U\rangle_{{\mathbb{F}}_{q^{n}}}$ and $W=U\oplus\langle v\rangle_{{\mathbb{F}}_{q}}$. Then $L_{W}$ is an ${\mathbb{F}}_{q}$-linear blocking set of Rédei type in $\mathrm{PG}(2,q^{n})$ with size $q^{n}+q^{n-1}+\ldots+q^{i}+1$ in which all but one of the points have weight one and the remaining one has weight $i$. Moreover, for any line $\ell$ we have $w_{L_{W}}(\ell)\in\{1,2,i+1,n\}.$ In particular, since $i<n-1$, there exists only one line having weight $n$ and exactly $q^{n-i}$ lines having weight $i+1$.
Moreover, $L_{W}$ has a $(q+1)$-secant line, since $L_{U}$ contains at least one point of weight one. Using the classification proved for $(n-2)$-clubs, we obtain a classification of small linear blocking sets of Rédei type with some constraints on the weights of the points. Recall that a blocking set $\mathcal{B}$ in $\mathrm{PG}(2,q^{n})$ is said to be _small_ if $|\mathcal{B}|\leq 3(q^{n}+1)/2$. ###### Theorem 6.2. Suppose that $\mathrm{char}(\mathbb{F}_{q^{n}})>2$ and $n\geq 5$. Let $\mathcal{B}$ be a small minimal blocking set in $\mathrm{PG}(2,q^{n})$ of Rédei type and suppose that for every line $s$ different from the Rédei line $\ell_{\infty}$ we have $|s\cap\mathcal{B}|\in\{1,q+1,q^{n-2}+1\},$ and that for each of these values there exists at least one line meeting $\mathcal{B}$ in exactly that number of points. Then $\mathcal{B}$ is $\mathrm{PGL}(3,q^{n})$-equivalent to $L_{\overline{U}}$, where $\overline{U}$ has the following form: $n=t(\ell+1)$, with $\ell\in\mathbb{N}_{0}$, and $\overline{U}=\left\{\left(\overline{s}+\sum_{i=0}^{t-3}cx_{i}b^{i}+\alpha+\beta a,\alpha+\beta b,\delta\right)\colon x_{i},\alpha,\beta,\delta\in{\mathbb{F}}_{q},\overline{s}\in\overline{S}\right\},$ with $a,b,c\in{\mathbb{F}}_{q^{n}}^{*}$, $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ such that * • ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{t}}$, * • $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c{\mathbb{F}}_{q^{t}}\cap\overline{S}=\{0\}$, * • $1\in\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * • $a\notin\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$. ###### Proof. Since $\mathcal{B}$ is a small minimal blocking set in $\mathrm{PG}(2,q^{n})$ of Rédei type with a $(q+1)$-secant line, $\mathcal{B}$ is an ${\mathbb{F}}_{q}$-linear set, that is, $\mathcal{B}=L_{W}$, see [8, Theorem 4.3 (iv)] (which relies on the celebrated results in [3, 4]). Since $L_{W}$ is an ${\mathbb{F}}_{q}$-linear blocking set of Rédei type, we have $W=U\oplus\langle v\rangle_{{\mathbb{F}}_{q}}$ and $L_{U}$ is an ${\mathbb{F}}_{q}$-linear set of rank $n$ contained in the Rédei line $\ell_{\infty}$. Since every line different from the Rédei line has weight $1$, $2$ or $n-1$ in $L_{W}$, it follows that the points of $L_{U}$ have weight either $1$ or $n-2$; as $n\geq 5$, this means that $L_{U}$ is an $(n-2)$-club, and the assertion now follows by Corollary 3.7. ∎ ###### Remark 6.3. Since [8, Theorem 4.3 (iv)] needs the assumption $\mathrm{char}(\mathbb{F}_{q^{n}})>2$, we had to add such a condition in the above statement. However, if we add the assumption that $\mathcal{B}$ is linear, then the above result holds in every characteristic. Moreover, from Construction 2.2, we have examples of linear blocking sets of Rédei type of the following form. ###### Remark 6.4. Let $n=rt$, with $t,r>1$, and let $f\in\mathcal{L}_{t,q}$ be a scattered polynomial. Let $U_{a,b}=\left\{\left(f(x_{0})-ax_{0},bx_{0}+\sum_{i=1}^{r-1}x_{i}\omega^{i},0\right)\colon x_{i}\in{\mathbb{F}}_{q^{t}}\right\},$ for some fixed $a,b\in{\mathbb{F}}_{q^{t}}$ with $b\neq 0$ and $\{1,\omega,\ldots,\omega^{r-1}\}$ an ${\mathbb{F}}_{q^{t}}$-basis of ${\mathbb{F}}_{q^{n}}$.
Let $i=\left\{\begin{array}{ll}t(r-1),&\text{if}\,\,f(x)-ax\,\,\text{is invertible over}\,\,{\mathbb{F}}_{q^{t}},\\ t(r-1)+1,&\text{otherwise}.\end{array}\right.$ Then $W=U_{a,b}\oplus\langle(0,0,1)\rangle_{{\mathbb{F}}_{q}}$ defines an ${\mathbb{F}}_{q}$-linear blocking set of Rédei type with size $q^{n}+q^{n-1}+\ldots+q^{i}+1$. We now prove that the family of blocking sets described in the above remark contains several inequivalent examples. To this aim we recall the following result. ###### Lemma 6.5. [9, Proposition 2.3] Let $L_{W}$ be an ${\mathbb{F}}_{q}$-linear blocking set with a $(q+1)$-secant line. Then $L_{W}=L_{W^{\prime}}$ if and only if there exists $\lambda\in\mathbb{F}_{q^{n}}^{*}$ such that $W^{\prime}=\lambda W$. Let $L_{W}$ and $L_{W^{\prime}}$ be two ${\mathbb{F}}_{q}$-linear blocking sets of Rédei type with Rédei lines $\ell_{1}$ and $\ell_{2}$, respectively, and let $\ell_{1}\cap L_{W}=L_{U}$ and $\ell_{2}\cap L_{W^{\prime}}=L_{U^{\prime}}$. Then, by the above lemma, $L_{W}$ and $L_{W^{\prime}}$ are $\mathrm{P\Gamma L}(3,q^{n})$-equivalent if and only if $W$ and $W^{\prime}$ are $\mathrm{\Gamma L}(3,q^{n})$-equivalent. Moreover, if $L_{W}$ and $L_{W^{\prime}}$ have only one Rédei line, then they are $\mathrm{P\Gamma L}(3,q^{n})$-equivalent if and only if $U$ and $U^{\prime}$ are $\mathrm{\Gamma L}(2,q^{n})$-equivalent. Therefore, from Theorem 4.1 we have the following result. ###### Corollary 6.6. The number of $\mathrm{P\Gamma L}(3,q^{n})$-inequivalent blocking sets of the family described in Remark 6.4 coincides with the number of $\mathrm{\Gamma L}(2,q^{t})$-inequivalent scattered ${\mathbb{F}}_{q}$-subspaces in ${\mathbb{F}}_{q^{t}}^{2}$ of dimension $t$. KM-arcs were originally introduced in [23] by Korchmáros and Mazzocca. A _KM-arc_ $\mathcal{A}$ of type $t$ is a set of $q+t$ points in $\mathrm{PG}(2,q)$ such that every line contains $0,2$ or $t$ points of $\mathcal{A}$. When $2<t<q$, it has been proved in [23, Theorem 2.5] that if a KM-arc of type $t$ exists, then $q$ is even and $t\mid q$. Moreover, $\mathcal{A}$ is called a _translation_ KM-arc if there exists a line $\ell$ of $\mathrm{PG}(2,q)$ such that the group of elations with axis $\ell$ and fixing $\mathcal{A}$ acts transitively on the points of $\mathcal{A}\setminus\ell$. In [15], De Boeck and Van de Voorde proved several important results on translation KM-arcs and pointed out that the main questions in studying these objects are for which values of $q$ and $t$ a KM-arc of type $t$ exists in $\mathrm{PG}(2,q)$ and which nonequivalent ones exist. Using the link between translation KM-arcs and $i$-clubs established in [15, Theorem 2.2], we will give a classification of translation KM-arcs of type $2^{n-2}$ in $\mathrm{PG}(2,2^{n})$. Although this result has already been established in [15, Theorem 4.12], our result gives a more direct expression of these sets. To this aim, let us recall [15, Theorem 2.2]. ###### Construction 6.7. Let $\ell\colon X_{2}=0\subset\mathrm{PG}(2,2^{n})=\mathrm{PG}(V,{\mathbb{F}}_{2^{n}})$ and consider $L_{U}\subset\ell$ to be an $i$-club. Let $v\in V\setminus\langle U\rangle_{{\mathbb{F}}_{2^{n}}}$ and let $U^{\prime}=U\oplus\langle v\rangle_{{\mathbb{F}}_{2}}$. Consider the following set of points $\mathcal{A}(U,v)=(L_{U^{\prime}}\setminus\ell)\cup(\ell\setminus L_{U}).$ In [15, Theorem 2.1] it has been proved that $\mathcal{A}(U,v)$ is a translation KM-arc of type $2^{i}$ in $\mathrm{PG}(2,2^{n})$. Actually, all the translation KM-arcs can be constructed in this way.
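The type of the KM-arc produced by Construction 6.7 can be confirmed by pure counting: an $i$-club of rank $n$ covers all but $2^{i}$ points of $\ell$, so $\ell\setminus L_{U}$ contributes exactly $t=2^{i}$ points to $\mathcal{A}(U,v)$. A short Python sketch (an added illustration; the function names are our own) verifies this identity over a range of parameters.

```python
# Counting check for Construction 6.7 (q = 2): an i-club of rank n has
# 2^(n-1) + ... + 2^i + 1 points, so ell \ L_U consists of
# (2^n + 1) - (2^(n-1) + ... + 2^i + 1) = 2^i points, i.e. the type t = 2^i.
def club_size(n, i, q=2):
    # number of points of an i-club of rank n in PG(1, q^n)
    return sum(q**j for j in range(i, n)) + 1

for n in range(5, 13):
    for i in range(1, n - 1):
        line_points = 2**n + 1                  # points of PG(1, 2^n)
        assert line_points - club_size(n, i) == 2**i
print("ell \\ L_U always contributes exactly t = 2^i points")
```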
###### Theorem 6.8. [15, Theorem 2.2] Every translation KM-arc of type $2^{i}$ in $\mathrm{PG}(2,2^{n})$ can be constructed as in Construction 6.7 by using an $i$-club. The first example of a KM-arc of type $2^{m(\ell-1)}$ in $\mathrm{PG}(2,2^{\ell m})$ was presented in [23] and can be described via the club $L_{T}$, where $T(x)$ is as in Equation (5); see [15, Theorem 3.2]. The clubs arising from Construction 2.2 define the KM-arcs found by Gács and Weiner in [21]. We are now able to prove the following classification result for translation KM-arcs of type $2^{n-2}$ in $\mathrm{PG}(2,2^{n})$. These KM-arcs have already been classified in [15, Theorem 4.12], but our result gives a more explicit description. ###### Theorem 6.9. Let $\mathcal{A}$ be a translation KM-arc of type $2^{n-2}$ in $\mathrm{PG}(2,2^{n})$. Then $\mathcal{A}$ is $\mathrm{PGL}(3,2^{n})$-equivalent to $\mathcal{A}(U,(0,0,1))$, where $U$ is described as in Corollary 3.7. ###### Proof. Since $\mathcal{A}$ is a translation KM-arc of type $2^{n-2}$ in $\mathrm{PG}(2,2^{n})$, by Theorem 6.8 $\mathcal{A}$ is $\mathrm{PGL}(3,2^{n})$-equivalent to $\mathcal{A}(U,(0,0,1))$, where $L_{U}$ is an $(n-2)$-club. Using the classification of $(n-2)$-clubs provided in Corollary 3.7, we obtain the assertion. ∎ ## 7\. Linearized polynomials with conditions on their value set The problem of estimating the size of the value set of a polynomial over a finite field, or of finding polynomials with large/small value set, is a classical problem in the theory of polynomials over finite fields; see [29]. For linearized polynomials, we obtain more information by considering the quotient with $x$. More precisely, let $f(x)$ be a linearized polynomial; then the value set we are interested in is $\mathcal{V}\left(\frac{f(x)}{x}\right)=\left\{\frac{f(\alpha)}{\alpha}\colon\alpha\in{\mathbb{F}}_{q^{n}}^{*}\right\}.$ If $f(x)$ is a $q$-polynomial having ${\mathbb{F}}_{q}$ as maximum field of linearity, i.e. $f(x)$ does not define an ${\mathbb{F}}_{q^{i}}$-linear map from ${\mathbb{F}}_{q^{n}}$ to ${\mathbb{F}}_{q^{n}}$ for any $1<i<n$ with $i\mid n$, then by the results in [3, 4] we have the following bounds (18) $q^{n-1}+1\leq\left|\mathcal{V}\left(\frac{f(x)}{x}\right)\right|\leq\frac{q^{n}-1}{q-1}.$ This is indeed connected with the directions determined by $f$. Consider the projective plane $\mathrm{PG}(2,q^{n})$ as the projective closure of $\mathrm{AG}(2,q^{n})$ via the line $\ell_{\infty}(=\mathrm{PG}(1,q^{n}))$, which, without loss of generality, we may assume to have equation $X_{2}=0$. Let $f(x)$ be any linearized polynomial in $\mathcal{L}_{n,q}$; then $\mathcal{V}\left(\frac{f(x)}{x}\right)=\mathcal{D}_{f}=\left\{\frac{f(x)-f(y)}{x-y}\colon x,y\in\mathbb{F}_{q^{n}},x\neq y\right\}$ is the set of directions determined by the graph of $f$, that is, by the set $\mathcal{G}_{f}=\{\langle(x,f(x),1)\rangle_{\mathbb{F}_{q^{n}}}\colon x\in\mathbb{F}_{q^{n}}\}\subseteq\mathrm{AG}(2,q^{n})$. In the case in which $f(x)$ is a $q$-polynomial having ${\mathbb{F}}_{q}$ as maximum field of linearity, by (18) we have $q^{n-1}+1\leq\left|\mathcal{D}_{f}\right|\leq\frac{q^{n}-1}{q-1}.$ If $\mathcal{D}_{f}$ is a scattered ${\mathbb{F}}_{q}$-linear set in $\mathrm{PG}(1,q^{n})$, then $\left|\mathcal{V}\left(\frac{f(x)}{x}\right)\right|=\frac{q^{n}-1}{q-1}$, whereas if $f(x)=\mathrm{Tr}_{q^{n}/q}(x)$ or $f(x)$ is as in [30], then $\left|\mathcal{V}\left(\frac{f(x)}{x}\right)\right|=q^{n-1}+1$. So, both bounds in (18) are sharp.
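The attainment of the lower bound in (18) by the trace map can be checked directly for small parameters. The following self-contained sketch (again an added illustration, with our own choice of $q=2$, $n=4$ and modulus $x^{4}+x+1$) enumerates $\mathcal{V}(\mathrm{Tr}_{q^{n}/q}(x)/x)$ and finds exactly $q^{n-1}+1=9$ values.

```python
# Enumerating V(Tr(x)/x) over GF(2^4); same GF arithmetic as in the
# earlier sketch, repeated here so that the block is self-contained.
N, MOD = 4, 0b10011                 # modulus x^4 + x + 1

def mul(a, b):                      # GF(2^4) multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def tr(x):                          # Tr_{2^4/2}(x) = x + x^2 + x^4 + x^8
    return x ^ mul(x, x) ^ power(x, 4) ^ power(x, 8)

# a^(2^4 - 2) = a^(-1) for a != 0, so Tr(a)/a = Tr(a) * a^(-1)
values = {mul(tr(a), power(a, 2**N - 2)) for a in range(1, 2**N)}
print(len(values) == 2**(N - 1) + 1)   # True: |V(Tr(x)/x)| = 9 = q^(n-1)+1
```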
Two polynomials $f$ and $g$ are said to be _equivalent_ if there exists $\varphi\in\mathrm{\Gamma L}(3,q^{n})$ such that $\varphi(\mathcal{G}_{f})=\mathcal{G}_{g}$, see [10]. Using this definition of equivalence, in [10, Problem 4] the authors asked to classify and to find more examples, up to equivalence, of $q$-polynomials $f(x)\in\mathcal{L}_{n,q}$ such that in the multiset $\{f(\alpha)/\alpha\colon\alpha\in\mathbb{F}_{q^{n}}^{*}\}$ there is a unique element which is represented more than $q-1$ times, namely $q^{i}-1$ times for some $i\in\{1,\ldots,n-1\}$. We call these polynomials _$i$ -club polynomials_ because they correspond to those polynomials for which $L_{f}$ is an $i$-club. To our knowledge, the only previously known examples of $i$-club polynomials were the one presented in (5) and some examples with $n\leq 5$ (see [9, 13, 5]). We first exhibit the new examples of club polynomials. ###### Proposition 7.1. Let $\lambda\in\mathbb{F}_{q^{n}}^{*}$ be such that $\{1,\lambda,\ldots,\lambda^{n-1}\}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$, let $\omega\in\mathbb{F}_{q^{n}}$ be such that $(1,\lambda,\ldots,\lambda^{n-3},\lambda^{n-2}+\omega,\omega\lambda)$ is an ordered ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$, and let $(b_{0},\ldots,b_{n-1})$ be its dual basis. Then $p(x)=\mathrm{Tr}_{q^{n}/q}(b_{n-2}x)+\lambda\mathrm{Tr}_{q^{n}/q}(b_{n-1}x)$ is an $(n-2)$-club polynomial. Let $n=rt$, with $t,r>1$, and let $f\in\mathcal{L}_{t,q}$ be a scattered $q$-polynomial. Then $p(x)=\mathrm{Tr}_{q^{n}/q^{t}}(f(x)-x)$ is an $i$-club polynomial with $i=\left\{\begin{array}{ll}t(r-1),&\text{if}\,\,f(x)-x\,\,\text{is invertible over}\,\,{\mathbb{F}}_{q^{t}},\\ t(r-1)+1,&\text{otherwise}.\end{array}\right.$ ###### Proof. This follows from Theorems 5.1 and 5.3, together with the fact that if $L_{f}$ is an $i$-club, then $f(x)$ is an $i$-club polynomial. ∎ We are now able to give a classification of $(n-2)$-club polynomials. ###### Theorem 7.2. Let $f\in\mathcal{L}_{n,q}$ be an $(n-2)$-club polynomial. Then $f$ is equivalent to $p(x)=\mathrm{Tr}_{q^{n}/q}(\xi x)+b\mathrm{Tr}_{q^{n}/q}(\eta x)$, where $b,\xi$ and $\eta$ are as in Theorem 5.4. ###### Proof. The proof again follows from the fact that $f$ is an $(n-2)$-club polynomial if and only if $U_{f}$ defines an $(n-2)$-club. This can happen if and only if $U_{f}$ is $\mathrm{GL}(2,q^{n})$-equivalent to $U_{g}$, where $g$ is one of the polynomials described in Theorem 5.4. ∎ ## 8\. Linear rank metric codes Rank metric codes were introduced by Delsarte [18] in 1978 as subsets of matrices and have been intensively investigated in recent years because of their applications; we refer to [39, 36]. In this section we will be interested in rank metric codes in ${\mathbb{F}}_{q^{m}}^{n}$. The _rank_ (weight) $w(v)$ of a vector $v=(v_{1},\ldots,v_{n})\in{\mathbb{F}}_{q^{m}}^{n}$ is defined as the dimension of the vector space generated over ${\mathbb{F}}_{q}$ by its entries, i.e. $w(v)=\dim_{{\mathbb{F}}_{q}}(\langle v_{1},\ldots,v_{n}\rangle_{{\mathbb{F}}_{q}})$. A _(linear vector) rank metric code_ $\operatorname{\mathcal{C}}$ is an ${\mathbb{F}}_{q^{m}}$-subspace of ${\mathbb{F}}_{q^{m}}^{n}$ endowed with the rank distance, defined as $d(x,y)=w(x-y),$ where $x,y\in{\mathbb{F}}_{q^{m}}^{n}$. Let $\operatorname{\mathcal{C}}\subseteq{\mathbb{F}}_{q^{m}}^{n}$ be a linear rank metric code.
We will write that $\operatorname{\mathcal{C}}$ is an $[n,k,d]_{q^{m}/q}$ code (or $[n,k]_{q^{m}/q}$ code) if $k=\dim_{{\mathbb{F}}_{q^{m}}}(\operatorname{\mathcal{C}})$ and $d$ is its minimum distance, that is $d=\min\{d(x,y)\colon x,y\in\operatorname{\mathcal{C}},x\neq y\}.$ By the classification of the ${\mathbb{F}}_{q^{m}}$-linear isometries of ${\mathbb{F}}_{q^{m}}^{n}$ (see [6]), we say that two rank metric codes $\operatorname{\mathcal{C}},\operatorname{\mathcal{C}}^{\prime}\subseteq{\mathbb{F}}_{q^{m}}^{n}$ are _(linearly) equivalent_ if and only if there exists a matrix $A\in\mathrm{GL}(n,q)$ such that $\operatorname{\mathcal{C}}^{\prime}=\operatorname{\mathcal{C}}A=\{vA:v\in\operatorname{\mathcal{C}}\}$. Moreover, we will say that $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{C}}^{\prime}$ are _semilinearly equivalent_ if and only if there exist a matrix $A\in\mathrm{GL}(n,q)$ and an automorphism $\rho\in\mathrm{Aut}({\mathbb{F}}_{q^{m}})$ such that $\operatorname{\mathcal{C}}^{\prime}=\operatorname{\mathcal{C}}^{\rho}A=\{v^{\rho}A:v\in\operatorname{\mathcal{C}}\}$, where the action of $\rho$ is extended entrywise. Most of the codes we will consider are _non-degenerate_, i.e. those for which the columns of any generator matrix of $\operatorname{\mathcal{C}}$ are ${\mathbb{F}}_{q}$-linearly independent. Denote by $\mathfrak{C}[n,k,d]_{q^{m}/q}$ the set of all linear $[n,k,d]_{q^{m}/q}$ rank metric codes in ${\mathbb{F}}_{q^{m}}^{n}$. The geometric counterparts of rank metric codes are the systems. An $[n,k,d]_{q^{m}/q}$ _system_ $U$ is an ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{m}}^{k}$ of dimension $n$, such that $\langle U\rangle_{{\mathbb{F}}_{q^{m}}}={\mathbb{F}}_{q^{m}}^{k}$ and $d=n-\max\left\{\dim_{{\mathbb{F}}_{q}}(U\cap H)\mid H\textnormal{ is an ${\mathbb{F}}_{q^{m}}$-hyperplane of }{\mathbb{F}}_{q^{m}}^{k}\right\}.$ Moreover, two $[n,k,d]_{q^{m}/q}$ systems $U$ and $U^{\prime}$ are _equivalent_ if there exists an ${\mathbb{F}}_{q^{m}}$-isomorphism $\varphi\in\mathrm{GL}(k,{\mathbb{F}}_{q^{m}})$ such that $\varphi(U)=U^{\prime}.$ We denote the set of equivalence classes of $[n,k,d]_{q^{m}/q}$ systems by $\mathfrak{U}[n,k,d]_{q^{m}/q}$. ###### Theorem 8.1. (see [37]) Let $\operatorname{\mathcal{C}}$ be a non-degenerate $[n,k,d]_{q^{m}/q}$ rank metric code and let $G$ be one of its generator matrices. Let $U\subseteq{\mathbb{F}}_{q^{m}}^{k}$ be the ${\mathbb{F}}_{q}$-span of the columns of $G$. The rank weight of an element $xG\in\operatorname{\mathcal{C}}$, with $x\in{\mathbb{F}}_{q^{m}}^{k}$, is (19) $w(xG)=n-\dim_{{\mathbb{F}}_{q}}(U\cap x^{\perp}),$ where $x^{\perp}=\{y\in{\mathbb{F}}_{q^{m}}^{k}\colon x\cdot y=0\}.$ In particular, (20) $d=n-\max\left\{\dim_{{\mathbb{F}}_{q}}(U\cap H)\colon H\mbox{ is an }{\mathbb{F}}_{q^{m}}\mbox{-hyperplane of }{\mathbb{F}}_{q^{m}}^{k}\right\}.$ Actually, the above result allows us to give a one-to-one correspondence between equivalence classes of non-degenerate $[n,k,d]_{q^{m}/q}$ codes and equivalence classes of $[n,k,d]_{q^{m}/q}$ systems, see [37]. The system $U$ and the code $\operatorname{\mathcal{C}}$ in Theorem 8.1 are said to be _associated_. Moreover, the semilinear inequivalence of linear rank metric codes can also be read on the associated systems via the action of $\mathrm{\Gamma L}(k,q^{m})$ on the ${\mathbb{F}}_{q}$-subspaces of ${\mathbb{F}}_{q^{m}}^{k}$. ###### Theorem 8.2.
(see [38] and [40]) Let $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{C}}^{\prime}$ be two linear $[n,k,d]_{q^{m}/q}$ non-degenerate rank metric codes and let $U$ and $U^{\prime}$ be systems associated with $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{C}}^{\prime}$, respectively. Then $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{C}}^{\prime}$ are semilinearly equivalent if and only if $U$ and $U^{\prime}$ are $\mathrm{\Gamma L}(k,q^{m})$-equivalent. ### 8.1. $2$-dimensional linear rank metric codes Let $U$ be an ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}^{2}$ of dimension $n$ such that $\langle U\rangle_{{\mathbb{F}}_{q^{n}}}={\mathbb{F}}_{q^{n}}^{2}$. Then $U$ is an $[n,2,d]_{q^{n}/q}$ system where $d=n-\max\{w_{L_{U}}(P)\colon P\in\mathrm{PG}(1,q^{n})\}$. Up to the action of $\mathrm{GL}(2,q^{n})$, as already said in Section 2.2, we can assume that $U=\{(x,f(x))\colon x\in\mathbb{F}_{q^{n}}\},$ for some $f\in\mathcal{L}_{n,q}$. Then a code associated with $U$ is the code $\mathcal{C}_{f}$ whose generator matrix is $G=\left(\begin{array}{cccc}\xi_{1}&\xi_{2}&\cdots&\xi_{n}\\ f(\xi_{1})&f(\xi_{2})&\cdots&f(\xi_{n})\end{array}\right),$ where $\xi_{1},\ldots,\xi_{n}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$. This means that a code associated with $U$ is given by the evaluation of the polynomials in $\langle x,f(x)\rangle_{\mathbb{F}_{q^{n}}}$. Let us determine the weight distribution of the codes associated with $i$-clubs. ###### Proposition 8.3. Let $U$ be an ${\mathbb{F}}_{q}$-subspace of ${\mathbb{F}}_{q^{n}}^{2}$ of dimension $n$ such that $\langle U\rangle_{{\mathbb{F}}_{q^{n}}}={\mathbb{F}}_{q^{n}}^{2}$. If $L_{U}$ is an $i$-club, then the weight distribution of an associated code is the following: * • $q^{n}-1$ codewords of weight $n-i$; * • $(q^{n}-1)(q^{n-1}+\ldots+q^{i})$ codewords of weight $n-1$; * • $(q^{n}-1)(q^{n}-q^{n-1}-\ldots-q^{i})$ codewords of weight $n$. Conversely, let $\operatorname{\mathcal{C}}$ be a linear $[n,2,n-i]_{q^{n}/q}$ non-degenerate rank metric code with $q^{n}-1$ codewords of weight $n-i$ and all the remaining nonzero codewords having weight either $n-1$ or $n$; then any system associated with $\mathcal{C}$ defines an $i$-club. ###### Proof. The proof follows by Theorem 8.1 and from the fact that an $i$-club has size $q^{n-1}+\ldots+q^{i}+1$, with one point of weight $i$, while all the remaining points have weight one. ∎ Maximum rank distance codes of dimension $2$ are exactly those that have only nonzero codewords of weight $n-1$ and $n$. However, the codes associated with $i$-clubs are very _close_ to them and for this reason they are certainly of interest. Moreover, they are codes with only three nonzero weights, see [35]. Therefore, we call the examples of rank metric codes as in Proposition 8.3 _$i$ -club rank metric codes_. We now list examples of $i$-club rank metric codes using the polynomial descriptions we found in Section 5. Examples of $(n-1)$-club rank metric codes. Up to equivalence, they are all of the form $\mathcal{C}_{f}$ with $f(x)=\mathrm{Tr}_{q^{n}/q}(x)$. This is a consequence of [11, Theorem 3.7]. Examples of $(n-2)$-club rank metric codes.
Up to equivalence, they are all of the form $\mathcal{C}_{f}$ with $f(x)=\mathrm{Tr}_{q^{n}/q}(\xi_{n-2}x)+b\mathrm{Tr}_{q^{n}/q}(\xi_{n-1}x)$, where $n=t(\ell+1)$ and there exist $a,b,c,\omega\in{\mathbb{F}}_{q^{n}}^{*}$, $\overline{S}\subseteq{\mathbb{F}}_{q^{n}}$ such that: * • ${\mathbb{F}}_{q}(b)={\mathbb{F}}_{q^{t}}$, with $1\leq t\leq n$, * • $\overline{S}$ is an ${\mathbb{F}}_{q^{t}}$-subspace of dimension $\ell$ such that $c{\mathbb{F}}_{q^{t}}\cap\overline{S}=\{0\}$, * • $a\notin\overline{S}\oplus c\langle 1,b,\ldots,b^{t-2}\rangle_{{\mathbb{F}}_{q}}$, * • $(s_{1},\ldots,s_{\ell t},c,cb,\ldots,cb^{t-3},1+\omega,a+\omega b)$ is an ordered ${\mathbb{F}}_{q}$-basis and $(\xi_{0},\ldots,\xi_{n-1})$ is its dual basis. This follows by Theorem 5.4. Examples of $i$-club rank metric codes. Let $n=rt$ with $t,r>1$. Examples of $i$-club rank metric codes are the codes $\mathcal{C}_{f}$ with $f(x)=\mathrm{Tr}_{q^{n}/q^{t}}(g(x)-x)$, where $g(x)\in\mathcal{L}_{t,q}$ is a scattered $q$-polynomial. This follows by Theorem 5.3. ### 8.2. $3$-dimensional linear rank metric codes with minimum distance $1$ As a consequence of the classification of the linear Rédei blocking sets of Theorem 6.2, we also obtain a classification of rank metric codes with the following properties. Let $\operatorname{\mathcal{C}}$ be a linear $[n+1,3,1]_{q^{n}/q}$ non-degenerate rank metric code with at least one codeword of weight $n-1$, and let $U$ be any system associated with $\operatorname{\mathcal{C}}$. If there exists a $2$-dimensional ${\mathbb{F}}_{q^{n}}$-subspace $W$ such that $\dim_{{\mathbb{F}}_{q}}(U\cap W)=2$ and $\dim_{\mathbb{F}_{q^{n}}}(\langle U\cap W\rangle_{\mathbb{F}_{q^{n}}})=2$, then we say that $U$ is _$q$ -non-degenerate_. ###### Theorem 8.4. Let $\operatorname{\mathcal{C}}$ be a linear $[n+1,3,1]_{q^{n}/q}$ $q$-non-degenerate rank metric code. Then any system associated with $\operatorname{\mathcal{C}}$ defines a linear blocking set of Rédei type in $\mathrm{PG}(2,q^{n})$. Moreover: * • If $\operatorname{\mathcal{C}}$ has more than $q^{n}-1$ codewords of weight one, then $\operatorname{\mathcal{C}}$ is semilinearly equivalent to the code generated by $G=\left(\begin{array}{ccccc}\xi_{1}&\xi_{2}&\cdots&\xi_{n}&0\\ \mathrm{Tr}_{q^{n}/q}(\xi_{1})&\mathrm{Tr}_{q^{n}/q}(\xi_{2})&\cdots&\mathrm{Tr}_{q^{n}/q}(\xi_{n})&0\\ 0&0&\cdots&0&1\end{array}\right),$ where $\xi_{1},\ldots,\xi_{n}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$. * • If $n=4$, then $\operatorname{\mathcal{C}}$ is semilinearly equivalent to the code generated by $G=\left(\begin{array}{ccccc}\xi_{1}&\xi_{2}&\cdots&\xi_{n}&0\\ f(\xi_{1})&f(\xi_{2})&\cdots&f(\xi_{n})&0\\ 0&0&\cdots&0&1\end{array}\right),$ where $\xi_{1},\ldots,\xi_{n}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$ and $f$ is one of the polynomials described in [31, Theorem A.3]. * • If $\operatorname{\mathcal{C}}$ has exactly $q^{n}-1$ codewords of weight one, $(q^{n}-1)q^{2}$ codewords of weight two and all the remaining nonzero codewords have weight greater than or equal to $n$, then $\operatorname{\mathcal{C}}$ is semilinearly equivalent to the code generated by $G=\left(\begin{array}{ccccc}\xi_{1}&\xi_{2}&\cdots&\xi_{n}&0\\ f(\xi_{1})&f(\xi_{2})&\cdots&f(\xi_{n})&0\\ 0&0&\cdots&0&1\end{array}\right),$ where $\xi_{1},\ldots,\xi_{n}$ is an ${\mathbb{F}}_{q}$-basis of $\mathbb{F}_{q^{n}}$ and $f$ is as described in Theorem 5.4. ###### Proof. Let $U$ be any system associated with $\operatorname{\mathcal{C}}$.
Then $\dim_{{\mathbb{F}}_{q}}(U)=n+1$ and, since the minimum distance is one, by Theorem 8.1 there exists at least one $\mathbb{F}_{q^{n}}$-subspace $W$ of dimension $2$ such that $\dim_{{\mathbb{F}}_{q}}(U\cap W)=n$. Therefore, $L_{U}$ is a linear blocking set of Rédei type with $\ell=\mathrm{PG}(W,\mathbb{F}_{q^{n}})$ as a Rédei line. Moreover, since $\operatorname{\mathcal{C}}$ is $q$-non-degenerate, there exists an $\mathbb{F}_{q^{n}}$-subspace $\overline{W}$ of dimension $2$ such that $\dim_{{\mathbb{F}}_{q}}(U\cap\overline{W})=2$ and $\dim_{\mathbb{F}_{q^{n}}}(\langle U\cap\overline{W}\rangle_{\mathbb{F}_{q^{n}}})=2$, that is, $L_{U}$ has a $(q+1)$-secant line. * • If $\operatorname{\mathcal{C}}$ has more than $q^{n}-1$ codewords of weight one, then $L_{U}$ has more than one Rédei line. Moreover, since there exists a codeword of weight $n-1$, by [14, Theorem 4.1] we have that $L_{U}$ has size at least $q^{n}+q^{n-1}+1$. In [28, Theorem 5], it has been proved that an ${\mathbb{F}}_{q}$-linear blocking set in $\mathrm{PG}(2,q^{n})$ of Rédei type having size at least $q^{n}+q^{n-1}+1$ and with at least two Rédei lines is $\mathrm{PGL}(3,q^{n})$-equivalent to $L_{U^{\prime}}$, where $U^{\prime}=U_{\mathrm{Tr}_{q^{n}/q}}\oplus\langle(0,0,1)\rangle_{{\mathbb{F}}_{q}}.$ By Lemma 6.5, $U$ and $U^{\prime}$ are $\mathrm{GL}(3,q^{n})$-equivalent, and hence the assertion. * • This point follows from the classification result of [9, Section 4], together with Lemma 6.5 and Theorem 8.2. * • Because of the conditions on the weight distribution of $\operatorname{\mathcal{C}}$, we have that $L_{U\cap W}$ is an $(n-2)$-club contained in the line $\mathrm{PG}(W,\mathbb{F}_{q^{n}})$. Therefore, the result follows from Theorem 5.4 and again from Lemma 6.5 and Theorem 8.2. ∎ ## 9\. Conclusions and open problems In this paper we have investigated clubs of rank $n$ in $\mathrm{PG}(1,q^{n})$. We have provided a classification result for $(n-2)$-clubs, and we have analyzed the $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-equivalence of the known subspaces defining clubs; for some of them the problem translates into determining whether or not certain scattered spaces are equivalent. We have then determined the linearized polynomials defining the known families of clubs, and finally we have applied our results to the theory of blocking sets, KM-arcs and rank metric codes. Here, we list some open problems and questions that naturally arise: * • To classify $i$-clubs for $i<n-2$. * • Can a coding-theoretical approach be used for the first problem? * • Recently, De Boeck and Van de Voorde in [17] proved that $2$-clubs do not exist in $\mathrm{PG}(1,q^{5})$. Can this result be extended to $\mathrm{PG}(1,q^{n})$ for any value of $n\geq 5$? * • As for the blocking sets, are KM-arcs constructed from two $\operatorname{\Gamma\mathrm{L}}(2,q^{n})$-inequivalent clubs $\mathrm{P\Gamma L}(3,q^{n})$-inequivalent? * • Can we find other conditions on the parameters of a linear rank metric code in such a way that results on linear blocking sets can still be used to classify them? * • Similarly to what happens for the traditional KM-arcs, can linear sets be used to construct examples of the generalized KM-arcs introduced in [12]? ## Acknowledgement The authors were supported by the project “VALERE: VAnviteLli pEr la RicErca” of the University of Campania “Luigi Vanvitelli” and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM). ## References * [1] C. Bachoc, O. Serra and G. Zémor: An analogue of Vosper’s theorem for extension fields, _Math. Proc. Camb. Philos. Soc._ 163(3) (2017), 423–452.
* [2] C. Bachoc, O. Serra and G. Zémor: Revisiting Kneser’s theorem for field extensions, _Combinatorica_ 38(4) (2018), 759–777. * [3] S. Ball: The number of directions determined by a function over a finite field, _J. Combin. Theory Ser. A_ 104 (2003), 341–350. * [4] S. Ball, A. Blokhuis, A. E. Brouwer, L. Storme and T. Szőnyi: On the number of slopes of the graph of a function defined on a finite field, _J. Combin. Theory Ser. A_ 86(1) (1999), 187–196. * [5] D. Bartoli, G. Micheli, G. Zini and F. Zullo: $r$-fat linearized polynomials over finite fields, _J. Combin. Theory Ser. A_ 109 (2022), 105609. * [6] T.P. Berger: Isometries for rank distance and permutation group of Gabidulin codes, _IEEE Trans. Inform. Theory_ 49(11) (2003), 3016–3019. * [7] A. Blokhuis and M. Lavrauw: Scattered spaces with respect to a spread in $\mathrm{PG}(n,q)$, _Geom. Dedicata_ 81 (2000), 231–243. * [8] A. Blokhuis, P. Sziklai, and T. Szőnyi: Blocking sets in projective spaces, _Current research topics in Galois geometry_ (2011), 61–84. * [9] G. Bonoli and O. Polverino: $\mathbb{F}_{q}$-linear blocking sets in $\mathrm{PG}(2,q^{4})$, _Innov. Incidence Geom._ 2 (2005), 35–56. * [10] B. Csajbók, G. Marino and O. Polverino: A Carlitz type result for linearized polynomials, _Ars Math. Contemp._ 16(2) (2019), 585–608. * [11] B. Csajbók, G. Marino and O. Polverino: Classes and equivalence of linear sets in $\mathrm{PG}(1,q^{n})$, _J. Combin. Theory Ser. A_ 157 (2018), 402–426. * [12] B. Csajbók and Zs. Weiner: Generalizing Korchmáros–Mazzocca arcs, _Combinatorica_ 41(5) (2021), 601–623. * [13] B. Csajbók and C. Zanella: Maximum scattered ${\mathbb{F}}_{q}$-linear sets of $\mathrm{PG}(1,q^{4})$, _Discrete Math._ 341 (2018), 74–80. * [14] J. De Beule and G. Van de Voorde: The minimum size of a linear set, _J. Combin. Theory Ser. A_ 164 (2019), 109–124. * [15] M. De Boeck and G. Van de Voorde: A linear set view on KM-arcs, _J. Algebraic Combin._ 44(1) (2016), 131–164. * [16] M. De Boeck and G. Van de Voorde: Elation KM-arcs, _Combinatorica_ 39(3) (2019), 501–544. * [17] M. De Boeck and G. Van de Voorde: The weight distributions of linear sets in $\mathrm{PG}(1,q^{5})$, _Finite Fields Appl._ 82 (2022), 102034. * [18] P. Delsarte: Bilinear forms over a finite field, with applications to coding theory, _J. Combin. Theory Ser. A_ 25 (1978), 226–241. * [19] Sz. Fancsali and P. Sziklai: About maximal partial $2$-spreads in PG$(3m-1,q)$, _Innov. Incidence Geom._ 4(1) (2006), 89–102. * [20] Sz. Fancsali and P. Sziklai: Description of the clubs, _Annales Univ. Rolando Eötvös_, 2009. * [21] A. Gács and Zs. Weiner: On $(q+t)$-arcs of type $(0,2,t)$, _Des. Codes Cryptogr._ 29(1–3) (2003), 131–139. * [22] Y.J. Ionin and M.S. Shrikhande: Combinatorics of symmetric designs, Cambridge University Press, 2006. * [23] G. Korchmáros and F. Mazzocca: On $(q+t)$-arcs of type $(0,2,t)$ in a desarguesian plane of order $q$, _Math. Proc. Camb. Philos. Soc._ 108(3) (1990), 445–459. * [24] M. Lavrauw and G. Van de Voorde: Field reduction and linear sets in finite geometry, In: Topics in Finite Fields, _AMS Contemporary Math_, vol. 623, pp. 271–293. American Mathematical Society, Providence (2015). * [25] R. Lidl and H. Niederreiter: Finite fields, volume 20 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, second edition, 1997. * [26] G. Longobardi, G. Marino, R. Trombetti and Y. Zhou: A large family of maximum scattered linear sets of $\mathrm{PG}(1,q^{n})$ and their associated MRD codes, arXiv:2102.08287.
* [27] G. Longobardi and C. Zanella: Linear sets and MRD-codes arising from a class of scattered linearized polynomials, _J. Algebraic Combin._ (2021), https://doi.org/10.1007/s10801-020-01011-9. * [28] G. Lunardon and O. Polverino: Blocking sets of size $q^{t}+q^{t-1}+1$, _J. Combin. Theory Ser. A_ 90 (2000), 148–158. * [29] G. Mullen and M. Zieve: Value sets of polynomials, In: _Handbook of finite fields_, CRC Press, Boca Raton, FL, 2013, pp. 232–235. * [30] V. Napolitano, O. Polverino, P. Santonastaso and F. Zullo: Classifications and constructions of minimum size linear sets, arXiv:2201.02003. * [31] V. Napolitano, O. Polverino, P. Santonastaso and F. Zullo: Linear sets on the projective line with complementary weights, _Discrete Math._ 345(7) (2022), 112890. * [32] V. Napolitano, O. Polverino, P. Santonastaso and F. Zullo: Two pointsets in $\mathrm{PG}(2,q^{n})$ and the associated codes, _Adv. Math. Commun._ (2022). * [33] A. Neri, P. Santonastaso and F. Zullo: Extending two families of maximum rank distance codes, _Finite Fields Appl._ 81 (2022), 102045. * [34] O. Polverino: Linear sets in finite projective spaces, _Discrete Math._ 310(22) (2010), 3096–3107. * [35] O. Polverino, P. Santonastaso, J. Sheekey and F. Zullo: On the linearity of rank metric codes, _in preparation_. * [36] O. Polverino and F. Zullo: Connections between scattered linear sets and MRD-codes, _Bulletin of the ICA_ 89 (2020), 46–74. * [37] T.H. Randrianarisoa: A geometric approach to rank metric codes and a classification of constant weight codes, _Des. Codes Cryptogr._ 88(7) (2020), 1331–1348. * [38] J. Sheekey: A new family of linear maximum rank distance codes, _Adv. Math. Commun._ 10(3) (2016), 475–488. * [39] J. Sheekey: MRD codes: constructions and connections, _Combinatorics and finite fields: Difference sets, polynomials, pseudorandomness and applications_, Radon Series on Computational and Applied Mathematics 23, K.-U. Schmidt and A. Winterhof (eds.), De Gruyter (2019). * [40] J. Sheekey and G. Van de Voorde: Rank-metric codes, linear sets, and their duality, _Des. Codes Cryptogr._ 88(4) (2020), 655–675. * [41] W. Tang, Y. Zhou and F. Zullo: On the automorphism groups of Lunardon-Polverino scattered linear sets, arXiv:2201.12777. * [42] B. Wu and Z. Liu: Linearized polynomials over finite fields revisited, _Finite Fields Appl._ 22 (2013), 79–100.
# Characterization of magnetic properties, including magnetocaloric effect, of RE5Pt2In4 (RE = Gd-Tm) compounds Altifani Rizky Hayyu<EMAIL_ADDRESS>M. Smoluchowski Institute of Physics, Jagiellonian University, prof. Stanisława Łojasiewicza 11, PL-30-348 Kraków, Poland Stanisław Baran<EMAIL_ADDRESS>M. Smoluchowski Institute of Physics, Jagiellonian University, prof. Stanisława Łojasiewicza 11, PL-30-348 Kraków, Poland Andrzej Szytuła M. Smoluchowski Institute of Physics, Jagiellonian University, prof. Stanisława Łojasiewicza 11, PL-30-348 Kraków, Poland ###### Abstract The RE5Pt2In4 (RE = Gd-Tm) rare earth compounds have been investigated by means of X-ray diffraction (XRD) as well as by DC and AC magnetometric measurements. The compounds crystallize in an orthorhombic crystal structure of the Lu5Ni2In4-type (Pbam space group, No. 55). With decreasing temperature, the intermetallics undergo a transition from the paramagnetic to a ferro-/ferrimagnetic (RE = Gd, Tb, Ho, Er) or antiferromagnetic (RE = Tm) state. In the case of Dy5Pt2In4, the ferromagnetic state is reached through an intermediate antiferromagnetic order present in a limited temperature range. The critical temperatures of magnetic ordering range from 4.1 K (RE = Tm) up to 108 K (RE = Tb). For the majority of the investigated compounds, a cascade of additional magnetic transitions is found below the respective critical temperatures of magnetic ordering. The magnetic moments are found solely on the rare earth atoms, while the moments of the remaining Pt and In atoms are absent or too small to be detected in the presence of the strong rare earth moments. The magnetocaloric (MCE) performance of RE5Pt2In4 (RE = Gd-Tm) is found to be quite good, especially for the compounds with RE = Ho and Er. The maximum magnetic entropy change ($-\Delta S_{M}^{max}$) reaches 11.8 (RE = Ho) or 11.4 J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$ (RE = Er) under a magnetic flux density change of 0-9 T. Under the same conditions, the relative cooling power (RCP) and refrigerant capacity (RC) equal 607 and 495 J$\cdot$kg${}^{-1}$ (RE = Ho) or 434 and 341 J$\cdot$kg${}^{-1}$ (RE = Er). keywords: rare earth intermetallics, magnetic properties, magnetization, magnetocaloric effect, magnetic entropy change, relative cooling power ## I Introduction Rare earth intermetallic compounds have been attracting researchers’ interest due to a number of intriguing physical phenomena including multiple magnetic transitions, metamagnetism, large magnetocaloric effect (MCE), spin glass state, heavy fermion behavior, superconductivity, and many more. A review paper by Gupta and Suresh [1], reporting the current state of knowledge for the RTX (R = rare earths, T = Sc, Ti, Mn, Fe, Co, Ni, Cu, Ru, Rh, Pd, Ag, Os, Ir, Pt, Au, and X = Al, Ga, In, Si, Ge, Sn, As, Sb, Bi) series of intermetallics, may provide an idea of the multitude of interesting phenomena observed in rare earth intermetallic compounds. Nowadays, compounds with a complex crystal structure and with magnetic properties resulting from multiple magnetic sublattices are of particular interest. The RE5Pt2In4 (RE = Gd-Tm) indides are a good example of such a family of compounds, as they crystallize in an orthorhombic structure of the Lu5Ni2In4-type (Pbam space group, No. 55) [2]. In this structure, reported for the first time by Zaremba et al. [3], the rare earth atoms occupy three different Wyckoff sites, namely, the 2(a) site and two 4(g) sites with different atomic positional parameters. Up to now, the magnetic properties of RE5Pt2In4 (RE = Gd-Tm) have remained unexplored.
However, such properties, including magnetic structures, have been reported for the isostructural RE5Ni2In4 [4, 5, 6, 7, 8, 9, 10] and RE5Pd2In4 [11] (RE = Tb-Tm). Both the Ni- and Pd-based intermetallics are found to order magnetically at low temperatures, with the critical temperatures of magnetic ordering ranging from about 4 K in the Tm-based compounds up to 125 K reported for Tb5Ni2In4. The magnetic moments are carried solely by the rare earth atoms (Ni, Pd and In remain non-magnetic or have magnetic moments too small to be detectable in the presence of the strong rare earth moments). Below the respective critical temperature, a cascade of temperature-induced magnetic transitions is observed for most of the compounds. Neutron diffraction data reveal that these complex magnetic properties can be attributed to either different ordering temperatures in different rare earth sublattices or order-order magnetic transitions. The low-temperature magnetic structures show a variety of forms including ferro- and antiferromagnetic spin arrangements, as well as the coexistence of both types of ordering in selected compounds. The reported magnetic structures include both commensurate and incommensurate ones. The magnetocaloric effect (MCE) in rare earth-based compounds has recently been of great interest due to possible applications in refrigeration [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Until now, the MCE in the RE5T2In4 (RE = rare earth element; T = transition metal) intermetallics has only been reported for RE5Ni2In4 (RE = Dy, Ho, and Er) [10]. It has been found that for RE = Ho and Er, the maximum entropy changes exceed 10 J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$ under a magnetic flux density change of 0-7 T. The maximum entropy changes are reached in the vicinity of the respective Curie temperatures, which are close to 20 K. The complex magnetic properties reported for the isostructural RE5Ni2In4 and RE5Pd2In4 (RE = Tb-Tm) have inspired us to undertake the current study concentrated on RE5Pt2In4 (RE = Gd-Tm). We report here not only the basic magnetic properties, like magnetic transition temperatures, effective magnetic moments, the moments under applied magnetic field, critical and coercivity fields, etc., but also the magnetocaloric effect studied under magnetic flux density changes up to 0-9 T. As a result, we determine a number of parameters important from the point of view of potential applications in low-temperature refrigeration. The parameters include the temperature averaged magnetic entropy change (TEC), relative cooling power (RCP), and refrigerant capacity (RC). The experimental results reported in this work are compared with those reported previously for the isostructural RE5Ni2In4 and RE5Pd2In4 (RE = Tb-Tm). ## II Materials and methods The samples have been prepared by arc melting of high-purity elements (at least 99.9 wt %) under a titanium-gettered argon atmosphere. The elements have been taken in a stoichiometric ratio. The obtained ingots have been remelted a few times in order to improve their homogeneity. No annealing has been applied, as our previous tests have shown that annealing leads to the appearance of impurity phases. The crystal structure of the obtained samples has been examined by X-ray powder diffraction at room temperature using a PANalytical X’Pert PRO diffractometer (Cu K$\alpha$ radiation, Bragg-Brentano geometry, measured angle interval of $2\theta=10-100^{\circ}$, $2\theta$ step = 0.033${}^{\circ}$, 150 s/step). The X-ray diffraction data have been processed using the FullProf program package [23, 24].
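For reference, we note how the magnetocaloric parameters introduced above are obtained from isothermal magnetization data. The sketch below (illustrative Python/numpy code of our own, not the actual analysis pipeline used in this work) implements the Maxwell-relation estimate $\Delta S_{M}(T,H)=\int_{0}^{H}(\partial M/\partial T)_{H^{\prime}}\,dH^{\prime}$ together with the standard definitions of RCP and RC; with $M$ in A$\cdot$m${}^{2}\cdot$kg${}^{-1}$ and $\mu_{0}H$ in T, the entropy change comes out in J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$.

```python
import numpy as np

def entropy_change(T, H, M):
    """-DeltaS_M(T) for a field change 0 -> H[-1], via the Maxwell relation
    DeltaS_M(T, H) = int_0^H (dM/dT)_{H'} dH'.

    T : (nT,) temperatures [K]; H : (nH,) fields [T];
    M : (nT, nH) magnetization [A m^2 / kg] on a regular grid.
    Returns -DeltaS_M in J kg^-1 K^-1 (positive near a ferromagnetic T_C).
    """
    dMdT = np.gradient(M, T, axis=0)     # (dM/dT) at each field point
    return -np.trapz(dMdT, H, axis=1)    # integrate over the field

def rcp_rc(T, mdS):
    """RCP = peak * delta_T_FWHM; RC = integral of -DeltaS_M over the
    FWHM interval (assumes a single well-defined peak)."""
    half = mdS >= mdS.max() / 2.0        # points inside the FWHM region
    rcp = mdS.max() * (T[half].max() - T[half].min())
    rc = np.trapz(mdS[half], T[half])
    return rcp, rc
```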
For the DC and AC magnetic measurements, the powder samples have been encapsulated in plastic containers and fixed with varnish in order to prevent grain rotation or movement during the measurement. A Vibrating Sample Magnetometer (VSM) option of the Physical Properties Measurement System (PPMS) by Quantum Design has been utilized for the DC magnetic measurements. Magnetic susceptibility data have been collected over a wide temperature range of 1.85–390 K using both the Zero Field Cooling (ZFC) and Field Cooling (FC) regimes. Every time before collecting a ZFC curve, the sample has been heated to a temperature exceeding the respective critical temperature of magnetic ordering, then demagnetized by an oscillating magnetic field, and finally cooled down to 1.9 K. The ZFC and FC curves collected at a low magnetic field of 50 Oe have been used to determine the magnetic transition temperatures, while from the ZFC data taken at 1 kOe the effective magnetic moments ($\mu_{eff}$) and paramagnetic Curie temperatures ($\theta_{p}$) have been derived. In order to follow the thermal evolution of the magnetic order, isothermal magnetization curves have been collected in magnetic fields up to 90 kOe (9 T) at a number of selected temperatures. Before collecting a magnetization curve, the sample has been demagnetized using the procedure described above. AC magnetic susceptibility measurements have been performed using the AC Measurement System (ACMS) option of the PPMS. The data have been collected under an oscillating magnetic field of 2 Oe amplitude at a number of selected frequencies between 100 and 5000 Hz. The temperature interval has covered 1.9–300 K. The crystal structure has been visualized (see Fig. 1) with the use of the VESTA program [25].

## III Results

### III.1 Crystal structure

Figure 1: Perspective view of the crystal structure of RE5Pt2In4 showing its layered nature.

X-ray powder diffraction data collected for the RE5Pt2In4 (RE = Gd-Tm) samples at room temperature confirm that the compounds crystallize in an orthorhombic crystal structure of the Lu5Ni2In4-type (Pearson symbol oP22, space group Pbam, No. 55, $Z=2$). This result is in agreement with the previous report [2]. The Lu5Ni2In4-type structure consists of distorted trigonal and square prismatic slabs of the REPt and REIn compositions [3]. The structure is a layered one, with atoms situated on the mirror planes at $z=0$ and $z=\frac{1}{2}$. The layers formed by the rare earth atoms are separated by the layers containing Pt and In (see Fig. 1). The rare earth atoms occupy three different Wyckoff sites, namely, one 2a site (0,0,0) and two 4g sites $(x,y,0)$ with different atom positional parameters. The Pt atoms occupy one 4h site $(x,y,\frac{1}{2})$, while the In atoms are located at two other 4h sites with different atom positional parameters.

Figure 2: X-ray diffraction patterns collected at room temperature for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er and (f) RE = Tm. The solid circles denote the experimental data, while the black lines show the Rietveld refinement results. The vertical bars indicate Bragg reflection positions, whereas the difference between the experimental data and the refinement is presented as the blue line at the bottom of each subfigure.

Table 1: Crystallographic data obtained from Rietveld refinement of the X-ray powder diffraction patterns collected at room temperature for RE5Pt2In4 (RE = Gd-Tm; Lu5Ni2In4-type structure, space group $Pbam$, No. 55).
The agreement factors $R_{profile}$, $R_{F}$, $R_{Bragg}$, and $\chi^{2}$ characterizing the quality of the refinements are listed at the bottom of the table.

RE | Gd | Tb | Dy | Ho | Er | Tm
---|---|---|---|---|---|---
$a$ [Å] | 18.2045(20) | 18.1218(14) | 18.0551(13) | 17.9991(15) | 17.9447(11) | 17.8935(18)
$b$ [Å] | 8.0461(9) | 8.0140(6) | 7.9886(5) | 7.9731(6) | 7.9524(5) | 7.9212(8)
$c$ [Å] | 3.6771(5) | 3.6500(5) | 3.6270(3) | 3.6115(4) | 3.5942(2) | 3.5776(5)
$V$ [Å$^{3}$] | 538.61(23) | 530.08(16) | 523.13(14) | 518.28(16) | 512.92(12) | 507.08(20)
RE1 at 2a (0, 0, 0) | 0 | 0 | 0 | 0 | 0 | 0
RE2 at 4g (x, y, 0), $x$ | 0.2181(10) | 0.2192(8) | 0.2200(7) | 0.2180(9) | 0.2190(7) | 0.2184(8)
RE2 at 4g (x, y, 0), $y$ | 0.2394(28) | 0.2459(21) | 0.2399(19) | 0.2389(24) | 0.2470(18) | 0.2471(23)
RE3 at 4g (x, y, 0), $x$ | 0.4204(12) | 0.4181(10) | 0.4168(8) | 0.4215(11) | 0.4168(7) | 0.4181(9)
RE3 at 4g (x, y, 0), $y$ | 0.1238(21) | 0.1215(17) | 0.1150(15) | 0.1196(19) | 0.1244(14) | 0.1218(18)
Pt at 4h (x, y, $\frac{1}{2}$), $x$ | 0.3056(8) | 0.3031(6) | 0.3028(5) | 0.3043(6) | 0.3044(5) | 0.3012(8)
Pt at 4h (x, y, $\frac{1}{2}$), $y$ | 0.0217(20) | 0.0236(13) | 0.0265(12) | 0.0259(14) | 0.0257(12) | 0.0295(17)
In1 at 4h (x, y, $\frac{1}{2}$), $x$ | 0.5695(14) | 0.5678(9) | 0.5645(8) | 0.5652(10) | 0.5691(9) | 0.5606(11)
In1 at 4h (x, y, $\frac{1}{2}$), $y$ | 0.2152(26) | 0.2102(17) | 0.2175(17) | 0.2111(17) | 0.2056(15) | 0.2094(19)
In2 at 4h (x, y, $\frac{1}{2}$), $x$ | 0.8459(14) | 0.8494(10) | 0.8492(9) | 0.8511(11) | 0.8484(8) | 0.8516(12)
In2 at 4h (x, y, $\frac{1}{2}$), $y$ | 0.0753(27) | 0.0759(18) | 0.0748(18) | 0.0714(20) | 0.0778(18) | 0.0759(24)
$R_{profile}$ [%] | 2.76 | 3.07 | 2.40 | 3.59 | 3.07 | 4.39
$R_{F}$ [%] | 8.19 | 9.55 | 6.08 | 6.71 | 4.88 | 5.95
$R_{Bragg}$ [%] | 12.5 | 11.3 | 9.34 | 10.9 | 6.96 | 9.70
$\chi^{2}$ | 7.30 | 6.87 | 6.46 | 10.4 | 8.14 | 16.3

Fig. 2 shows the X-ray powder diffraction patterns of RE5Pt2In4 (RE = Gd-Tm) collected at room temperature, together with the results of the Rietveld refinement. The refined parameters of the crystal structure are listed in Table 1.

Figure 3: Reciprocal magnetic susceptibility, with the fitted line representing the Curie-Weiss law, for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er, and (f) RE = Tm. The low-temperature behavior measured at 50 Oe (ZFC and FC regimes) is shown in the upper insets, while the isothermal magnetization vs. external magnetic field at selected temperatures is presented in the lower insets.

Figure 4: AC magnetic susceptibility of RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er, and (f) RE = Tm, taken at frequencies between 100 Hz and 5000 Hz. $\chi^{\prime}$ and $\chi^{\prime\prime}$ refer to the real and imaginary components, respectively.

### III.2 Magnetic properties

#### III.2.1 DC and AC magnetic susceptibility data

The results of the DC magnetic measurements of RE5Pt2In4 (RE = Gd–Tm) are presented in Fig. 3 and summarized in Table 2. At a low magnetic field of 50 Oe, all compounds show transitions from the para- to a magnetically ordered state with decreasing temperature: for RE = Gd, Tb, Ho and Er a rapid increase of the magnetic susceptibility, characteristic of a para- to ferromagnetic transition, is visible, while for RE = Tm a maximum typical of a para- to antiferromagnetic transition is found (see the upper insets in Figs. 3a-f). The case of Dy5Pt2In4 is more complicated, as the ferromagnetic transition at $T_{C}=80$ K is preceded by an intermediate antiferromagnetic state manifesting itself by a maximum at $T_{N}=93$ K.
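For illustration, the way such transition temperatures are read off a $\chi(T)$ curve can be sketched in a few lines of code. The snippet below is a minimal illustration only; the array names, the smoothing window, and the peak criteria are our assumptions, not the analysis code actually used. Maxima of $\chi$ mark antiferromagnetic-type anomalies, while extrema of $\mathrm{d}\chi/\mathrm{d}T$ locate the inflection points used for Curie-type transitions.

```python
import numpy as np

def transition_temperatures(T, chi, smooth=5):
    """Locate candidate magnetic transitions in a ZFC chi(T) curve.

    T must be sorted in ascending order. Returns the temperatures of
    local maxima of chi (antiferromagnetic-type anomalies) and of local
    minima of dchi/dT (inflection points, i.e. the steepest rise of chi
    on cooling, used for Curie-type transitions).
    """
    # light boxcar smoothing to suppress point-to-point noise
    kernel = np.ones(smooth) / smooth
    chi_s = np.convolve(chi, kernel, mode="same")
    dchi = np.gradient(chi_s, T)

    # local maxima of chi
    maxima = T[1:-1][(chi_s[1:-1] > chi_s[:-2]) & (chi_s[1:-1] > chi_s[2:])]
    # local minima of dchi/dT (chi rises most steeply as T decreases)
    inflections = T[1:-1][(dchi[1:-1] < dchi[:-2]) & (dchi[1:-1] < dchi[2:])]
    return maxima, inflections
```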
Below the critical temperatures of magnetic ordering, a number of additional magnetic transitions are detected from either additional maxima or inflection points. The magnetic transition temperatures are listed in Table 2. Large discrepancies between the ZFC and FC curves, visible especially for RE = Gd–Ho below the corresponding Curie temperatures, are magnetic domain-related effects, indicating the presence of a ferromagnetic component of the magnetic order. The discrepancy found for RE = Tm, both below and above the Néel temperature of 4.1 K, can be attributed to a very small amount of a ferromagnetic impurity phase, whose content is too small to be detectable by other experimental techniques such as X-ray powder diffraction (XRD).

The maxima visible in the AC magnetic susceptibility vs. temperature curves (see Fig. 4) coincide with the magnetic transition temperatures derived from the DC magnetic data (see Table 2). Some discrepancies between the DC and AC data can be attributed to the different experimental conditions, namely, the AC data have been collected under an oscillating field of 2 Oe and no applied DC field, while the DC data have been recorded under a constant field of 50 Oe. Although relatively low, the latter field may influence the temperatures of magnetic transitions that are sensitive to the applied magnetic field. It is worth noting that for Dy5Pt2In4, the Néel temperature derived from the DC data coincides with a distinct maximum observed in $\chi^{\prime}_{ac}$ at 93 K (there is no anomaly at this temperature in $\chi^{\prime\prime}_{ac}$), while the Curie point of 80 K is accompanied by maxima in both $\chi^{\prime}_{ac}$ and $\chi^{\prime\prime}_{ac}$.

Table 2: Parameters characterizing the magnetic order: $T_{C}$ (Curie temperature), $T_{N}$ (Néel temperature), $T_{t}$ (temperatures of additional anomalies), $\theta_{p}$ (paramagnetic Curie temperature), $\mu_{eff}$ (effective magnetic moment), $\mu$ [$\mu_{B}$] (magnetic moment in the ordered state), $H_{cr}$ [kOe] (critical field) and $H_{coer}$ [kOe] (coercivity field) for RE5Pt2In4 (RE = Gd–Tm), as derived from DC and/or AC magnetometric measurements. The indices $i$ and $m$ indicate whether the transition temperature corresponds to an inflection point or to a maximum in the $\chi(T)$ curve, respectively. The transition temperatures are determined from the ZFC data, except for a few cases based on the FC data, marked explicitly by an additional index $f$. For each compound, the transition temperatures are given in the order $\chi_{dc}$ / $\chi^{\prime}_{ac}$ / $\chi^{\prime\prime}_{ac}$; the critical and coercivity fields are listed in kOe for several selected temperatures.

RE = Gd: $T_{C}$ = 76i / 74m / 73m; $T_{t}$ = 22.5i ($\chi_{dc}$); $\theta_{p}$ = +72 K; $\mu_{eff}$ = 7.87 (exp.) / 7.94 (theor.); $\mu$ = 3.86 (exp.) / 7.00 (theor.); $H_{cr}$, $H_{coer}$: T = 1.9 K: 0.16, 0.11; T = 40.0 K: 0.05, 0.02; T = 68.0 K: 0.02, –.

RE = Tb: $T_{C}$ = 108i / 106m / 104m; $T_{t}$ = 83i ($\chi_{dc}$); $\theta_{p}$ = +48 K; $\mu_{eff}$ = 9.55 / 9.72; $\mu$ = 4.34 / 9.00; $H_{cr}$, $H_{coer}$: T = 1.9 K: 44, 23.7; T = 20.0 K: 1.7, 1.9; T = 100.0 K: 0.03, –.

RE = Dy: $T_{C}$ = 80i / 84m / 80m; $T_{N}$ = 93m / 93m / –; $T_{t}$ = 19.6m,f and 71i ($\chi_{dc}$), 14.1m ($\chi^{\prime}_{ac}$); $\theta_{p}$ = +24.6 K; $\mu_{eff}$ = 10.61 / 10.65; $\mu$ = 5.74 / 10.00; $H_{cr}$, $H_{coer}$: T = 1.9 K: 24.3 and 45, 22.1; T = 12.0 K: 13.5, 11.5; T = 25.0 K: 2.2, 2.4; T = 75.0 K: 0.1, 0.03; T = 87.0 K: 0.5, –.

RE = Ho: $T_{C}$ = 23.5i / 22.7m / 22.1m; $T_{t}$ = 9.1m,f and 13.5i ($\chi_{dc}$), 8.1m ($\chi^{\prime}_{ac}$); $\theta_{p}$ = +10.1 K; $\mu_{eff}$ = 10.62 / 10.61; $\mu$ = 6.98 / 10.00; $H_{cr}$, $H_{coer}$: T = 1.9 K: 2.9, 1.4; T = 9.0 K: 0.22, 0.23; T = 15.0 K: 0.08, 0.03; T = 20.0 K: 0.02, –.

RE = Er: $T_{C}$ = 12.6i / 12.1m / 11.9m; $T_{t}$ = 6.9i and 8.8i ($\chi_{dc}$), 7.1m ($\chi^{\prime\prime}_{ac}$); $\theta_{p}$ = +11.1 K; $\mu_{eff}$ = 9.44 / 9.59; $\mu$ = 3.24 / 9.00; $H_{cr}$, $H_{coer}$: T = 1.9 K: 0.96, 0.3; T = 7.0 K: 0.21, 0.09; T = 9.5 K: 0.08, 0.02.

RE = Tm: $T_{N}$ = 4.1m / 4.2m / 4.2m; $\theta_{p}$ = +4.6 K; $\mu_{eff}$ = 7.45 / 7.57; $\mu$ = 3.54 / 7.00; $H_{cr}$, $H_{coer}$: T = 1.9 K: 9.2, 0.03; T = 3.0 K: 7.6, 0.04; T = 4.0 K: 0.07, 0.02.

$i$ – inflection point; $m$ – maximum; $f$ – determined from the FC curve.

The reciprocal magnetic susceptibility curves, collected at H = 1 kOe, become linear at high enough temperatures, as predicted by the Curie-Weiss law [26]:

$\chi=\frac{C}{T-\theta_{p}}$ (1)

where $C$ is the Curie constant, related to the effective magnetic moment ($\mu_{eff}$), while $\theta_{p}$ is the paramagnetic Curie temperature. The values of $\mu_{eff}$ and $\theta_{p}$, as derived from fitting the linear dependence to $\chi_{dc}^{-1}(T)$ in the high-temperature region, are listed in Table 2. The values of $\mu_{eff}$ are very close to those predicted for the free RE3+ ions. It is worth noting that for all investigated compounds, the determined paramagnetic Curie temperatures are positive, indicating the predominant role of ferromagnetic interactions.

#### III.2.2 Magnetization

The lower insets in Figs. 3a-f show magnetization curves for RE5Pt2In4 (RE = Gd-Tm) taken in applied magnetic fields up to 90 kOe at a number of selected temperatures. The shape of the curves, especially those collected at lower temperatures, testifies to the coexistence of ferro- and antiferromagnetic components of the magnetic structures, as both metamagnetic transitions and magnetic hysteresis are observed. The critical fields, corresponding to the metamagnetic transitions, have been determined from inflection points in the primary magnetization curves, i.e. the curves recorded directly after the zero-field cooling (ZFC) procedure. The values of the critical fields derived from the experimental data are listed in Table 2. The appearance of a metamagnetic transition confirms the existence of an antiferromagnetic component, which is suppressed by the application of a high enough external magnetic field. The ferromagnetic component of the magnetic structure manifests itself in a non-zero coercivity field. The observed values of the coercivity fields are also reported in Table 2.
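Returning briefly to the Curie-Weiss analysis of Sec. III.2.1: the extraction of $\mu_{eff}$ and $\theta_{p}$ from Eq. 1 reduces to a straight-line fit of $\chi^{-1}(T)$. The following is a minimal sketch, assuming the susceptibility is given in CGS units of emu per mole of RE ions and the fit is restricted to an illustrative high-temperature window; it is not the actual fitting code used in this work.

```python
import numpy as np

def curie_weiss_fit(T, chi_mol, T_min=150.0):
    """Fit chi = C/(T - theta_p) to the high-temperature (T >= T_min) data.

    chi_mol: DC susceptibility in emu/mol per RE ion (assumed units).
    In the linear regime 1/chi = T/C - theta_p/C, so a straight-line fit
    of 1/chi vs. T gives slope = 1/C and intercept = -theta_p/C.
    Returns (mu_eff in Bohr magnetons, theta_p in K).
    """
    mask = T >= T_min
    slope, intercept = np.polyfit(T[mask], 1.0 / chi_mol[mask], 1)
    C = 1.0 / slope
    theta_p = -intercept * C
    mu_eff = 2.828 * np.sqrt(C)  # mu_eff = sqrt(8C) mu_B for CGS molar units
    return mu_eff, theta_p
```

A positive fitted $\theta_{p}$, as found for all compounds in Table 2, then signals predominantly ferromagnetic interactions.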
It is worth noting that both the Tb- and Dy-based compounds show very high values of the critical and coercivity fields at 1.9 K, testifying to stronger magnetic interactions than in the other investigated compounds. Such a result coincides with the highest ordering temperatures being observed for RE = Tb and Dy. The case of Dy5Pt2In4 is very special, as the critical field initially decreases with increasing temperature, reaching 0.1 kOe at 75 K, and afterward increases again to 0.5 kOe at 87 K. Such a result is in agreement with the susceptibility data (see subsection III.2.1), which suggest suppression of the ferromagnetic component of the Dy5Pt2In4 magnetic structure at $T_{C}=80$ K with further development of the antiferromagnetic one, which exists up to $T_{N}=93$ K.

The determined values of the magnetic moments in the ordered state, as derived from the magnetization data taken at 1.9 K and 90 kOe, are significantly lower than the values expected for free RE3+ ions (for example, for Tb5Pt2In4 it is 4.34 $\mu_{B}$, which is only 48% of the theoretical value for Tb3+ of 9.00 $\mu_{B}$; for the other compounds, see Table 2). Nevertheless, one should take into account that the magnetization curves collected at 1.9 K are far from saturation even at the relatively high magnetic field of 90 kOe.

#### III.2.3 Magnetocaloric effect

##### Magnetic entropy change and temperature averaged magnetic entropy change (TEC)

Figure 5: Magnetization vs. temperature curves under various magnetic flux density values up to 9 T for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er, and (f) RE = Tm.

Figure 6: Temperature dependence of the magnetic entropy change $-\Delta S_{M}$, as derived from the $M(H,T)$ data, under various magnetic flux density changes $\Delta\mu_{0}H$ up to 0–9 T, for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er, and (f) RE = Tm.

Figure 7: The values of $-\Delta S_{M}^{max}$, TEC(3 K), TEC(5 K), and TEC(10 K) under various magnetic flux density changes $\Delta\mu_{0}H$ up to 0–9 T for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er, and (f) RE = Tm. For RE = Tb and Dy, the magnetic entropy changes corresponding to the low- and high-temperature maxima are indicated by black and brown lines, respectively.

Considering the magnetic entropy ($S_{M}$) as a function of magnetic flux density ($\mu_{0}H$) and temperature ($T$), its differential $\mathrm{d}S_{M}$ can be expressed as:

$\mathrm{d}S_{M}=\left(\frac{\partial S_{M}}{\partial(\mu_{0}H)}\right)_{T}\mathrm{d}(\mu_{0}H)+\left(\frac{\partial S_{M}}{\partial T}\right)_{(\mu_{0}H)}\mathrm{d}T$ (2)

Taking into account one of the Maxwell relations, namely $\left(\frac{\partial S_{M}}{\partial(\mu_{0}H)}\right)_{T}=\left(\frac{\partial M}{\partial T}\right)_{(\mu_{0}H)}$, and inserting it into Eq. 2 leads to:

$\mathrm{d}S_{M}=\left(\frac{\partial M}{\partial T}\right)_{(\mu_{0}H)}\mathrm{d}(\mu_{0}H)+\left(\frac{\partial S_{M}}{\partial T}\right)_{(\mu_{0}H)}\mathrm{d}T$ (3)

where $M$ denotes the magnetization. By applying the isothermal condition ($\mathrm{d}T=0$) and integrating Eq. 3 over the magnetic flux density, one gets:
$\Delta S_{M}(T,\Delta\mu_{0}H)=\int_{0}^{\mu_{0}H_{max}}\left(\frac{\partial M(\mu_{0}H,T)}{\partial T}\right)_{(\mu_{0}H)}\,\mathrm{d}\mu_{0}H$ (4)

where $\Delta\mu_{0}H$ is the change of the magnetic flux density (usually calculated with respect to an initial flux density equal to zero), while $\left(\frac{\partial M(\mu_{0}H,T)}{\partial T}\right)_{(\mu_{0}H)}$ is the derivative of the magnetization with respect to temperature at a fixed magnetic flux density $\mu_{0}H$. Details of the mathematical formalism related to the magnetocaloric effect can be found, for example, in the book by Tishin and Spichkin [27].

Fig. 5 shows the magnetization vs. temperature $M(T)$ curves collected at a number of fixed magnetic flux density values up to 9 T for the RE5Pt2In4 (RE = Gd–Tm) compounds. Based on these data, the corresponding magnetic entropy changes have been calculated using Eq. 4. The results are presented in Fig. 6. The maximum entropy changes around the corresponding magnetic transition temperatures reach 3.7, 3.4, 6.3, 11.8, 11.4, and 10.2 J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$ under a magnetic flux density change of 0–9 T for RE = Gd, Tb, Dy, Ho, Er, and Tm, respectively. The values for other magnetic flux density changes are listed in Table 3.

The cases of Tb5Pt2In4 and Dy5Pt2In4 need an extra comment, as two distinct maxima are found in the magnetic entropy change vs. temperature plots (see Figs. 6b and 6c). The high-temperature maximum ($\sim$110 K for RE = Tb and $\sim$95 K for RE = Dy), which dominates for low magnetic flux density changes, corresponds to the transition from the para- to the ferromagnetic state. The low-temperature maximum ($\sim$45 K for RE = Tb and $\sim$25 K for RE = Dy), which dominates for high magnetic flux density changes, indicates an extra transformation of the magnetic structure. The magnetic entropy changes corresponding to the low- and high-temperature maxima are shown separately in Figs. 6b and 6c.

The temperature averaged magnetic entropy change (TEC) is another parameter used for evaluating the MCE. TEC is defined by the following formula [28]:

$TEC(\Delta T_{\mathrm{lift}},\Delta\mu_{0}H)=\frac{1}{\Delta T_{\mathrm{lift}}}\max_{T_{\mathrm{mid}}}\left\{\int_{T_{\mathrm{mid}}-\frac{\Delta T_{\mathrm{lift}}}{2}}^{T_{\mathrm{mid}}+\frac{\Delta T_{\mathrm{lift}}}{2}}\Delta S_{\mathrm{M}}(T,\Delta\mu_{0}H)\,\mathrm{d}T\right\}$ (5)

where $T_{mid}$ is the center temperature of the temperature span $\Delta T_{lift}$, determined as the value that maximizes the integral appearing in Eq. 5. The TEC values of RE5Pt2In4 (RE = Gd–Tm), calculated for temperature spans of 3, 5 and 10 K, are presented in Fig. 7. It is worth noting that the TEC values for RE = Tb and Dy are always related to the dominating maximum, i.e. to the high-temperature maximum for low magnetic flux density changes and to the low-temperature one for high magnetic flux density changes (see Figs. 7b and 7c).

##### Relative Cooling Power (RCP) and Refrigerant Capacity (RC)

Besides $\Delta S_{M}^{max}$ and TEC, the relative cooling power (RCP) [29] and the refrigerant capacity (RC) [30] are other important parameters that can be used to assess the MCE performance. RCP is defined by the equation:

$RCP=-\Delta S_{M}^{max}\times\delta T_{FWHM}$ (6)

where $\delta T_{FWHM}$ is the full width at half maximum of the entropy change vs. temperature curve.
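Numerically, Eqs. 4–6 amount to differentiating the isofield $M(T)$ curves, integrating over the field, and scanning a temperature window. The following is a minimal sketch under assumed array conventions (a regular field-temperature grid of magnetization in SI-like units); it is not the actual analysis code used in this work.

```python
import numpy as np

def entropy_change(T, H, M):
    """Eq. 4: Delta S_M(T) for a field change from 0 to H[-1].

    T: temperatures [K], shape (nT,); H: flux densities [T], shape (nH,),
    assumed to start at 0; M: magnetization [A m^2 kg^-1], shape (nH, nT).
    Returns Delta S_M in J kg^-1 K^-1 (negative around a ferromagnetic
    transition; the figures plot -Delta S_M).
    """
    dMdT = np.gradient(M, T, axis=1)   # (dM/dT) at each fixed field
    return np.trapz(dMdT, H, axis=0)   # integrate over the field change

def tec(T, minus_dS, lift):
    """Eq. 5: temperature-averaged entropy change for a span 'lift' [K],
    evaluated on -Delta S_M(T) and maximized over the window center."""
    best = 0.0
    for Tmid in T:
        win = (T >= Tmid - lift / 2.0) & (T <= Tmid + lift / 2.0)
        if win.sum() > 1:
            best = max(best, np.trapz(minus_dS[win], T[win]) / lift)
    return best

def rcp(T, minus_dS):
    """Eq. 6: RCP = |Delta S_M|_max x FWHM of the -Delta S_M(T) peak
    (a single-peak curve is assumed for this sketch)."""
    peak = minus_dS.max()
    inside = T[minus_dS >= peak / 2.0]
    return peak * (inside.max() - inside.min())
```

The refrigerant capacity RC (Eq. 7, defined next) follows analogously, by integrating $-\Delta S_{M}$ only between the two FWHM crossing points.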
RC is calculated from the following integral:

$RC=\int_{T_{1}}^{T_{2}}|\Delta S_{M}|\,\mathrm{d}T$ (7)

where $T_{1}$ and $T_{2}$ denote the lower and upper limits of the FWHM temperature range, respectively.

Table 3: The temperatures of maximum entropy change, the maximum entropy changes $-\Delta S_{\mathrm{M}}^{\mathrm{max}}$, and the RCP and RC values under magnetic flux density changes $\Delta\mu_{0}H$ = 0–2 T, 0–5 T, 0–7 T and 0–9 T for RE5Pt2In4 (RE = Gd–Tm) and isostructural compounds. For each material, $-\Delta S_{\mathrm{M}}^{\mathrm{max}}$ [J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$], RCP [J$\cdot$kg${}^{-1}$] and RC [J$\cdot$kg${}^{-1}$] are given for flux density changes of 0–2 / 0–5 / 0–7 / 0–9 T.

Materials | Temp. for $-\Delta S_{\mathrm{M}}^{\mathrm{max}}$ [K] | $-\Delta S_{\mathrm{M}}^{\mathrm{max}}$ (0–2/0–5/0–7/0–9 T) | RCP (0–2/0–5/0–7/0–9 T) | RC (0–2/0–5/0–7/0–9 T) | Ref.
---|---|---|---|---|---
Dy5Ni2In4 | 19 | 1.8 / 3.6 / 4.7 / – | 49 / 178 / 286 / – | 37 / 130 / 209 / – | [10]
Ho5Ni2In4 | 103 | 2.6 / 7.1 / 10.1 / – | 84 / 298 / 458 / – | 66 / 234 / 352 / – | [10]
Er5Ni2In4 | 20 | 3.3 / 7.7 / 10.2 / – | 71 / 248 / 377 / – | 52 / 180 / 273 / – | [10]
Gd5Pt2In4 | 78 | 1.0 / 2.2 / 3.0 / 3.7 | 58 / 172 / 261 / 359 | 48 / 139 / 209 / 290 | this work
Tb5Pt2In4 | 45, 110 | 0.8 / 1.7 / 2.5 / 3.4 | 82 / 198 / 303 / 428 | 57 / 165 / 248 / 340 | this work
Dy5Pt2In4 | 25, 95 | 1.1 / 2.9 / 4.7 / 6.3 | 45 / 290 / 250 / 363 | 33 / 201 / 180 / 263 | this work
Ho5Pt2In4 | 23 | 2.5 / 6.9 / 9.5 / 11.8 | 94 / 302 / 451 / 607 | 82 / 254 / 373 / 495 | this work
Er5Pt2In4 | 14 | 3.6 / 7.5 / 9.6 / 11.4 | 79 / 218 / 328 / 434 | 63 / 175 / 256 / 341 | this work
Tm5Pt2In4 | 8 | 3.2 / 7.7 / 9.2 / 10.2 | 34 / 125 / 189 / 260 | 27 / 98 / 150 / 205 | this work

Table 3 lists the determined values of RCP and RC for RE5Pt2In4 (RE = Gd–Tm) under selected changes of the magnetic flux density. It is worth noting that for Dy5Pt2In4, the RCP and RC values under a magnetic flux density change of 0–5 T are higher than those under 0–7 T. This unusual behavior is due to the fact that under $\Delta\mu_{0}H$ = 0–5 T, the low- and high-temperature maxima in $-\Delta S_{M}(T)$ are of similar height and overlap, leading to a FWHM temperature interval significantly wider than that at higher magnetic flux density changes, where the low-temperature maximum dominates.

## IV Discussion

This work reports the results of X-ray diffraction as well as DC and AC magnetic measurements for the RE5Pt2In4 (RE = Gd-Tm) compounds. The X-ray diffraction data confirm that RE5Pt2In4 (RE = Gd-Tm) have an orthorhombic crystal structure of the Lu5Ni2In4-type. The crystal structure is a typical two-layered one, with layers formed by the rare earth atoms $(z=0)$ separated by layers containing the remaining Pt and In atoms $(z=\frac{1}{2})$. The refined lattice parameters and atomic coordinates, presented in Table 1, are in good agreement with the previously reported ones [2].

The results of the magnetometric measurements reveal that all investigated compounds order magnetically with decreasing temperature. The shapes of the magnetic susceptibility vs. temperature curves (see Figs. 3, 4) suggest a para- to ferromagnetic transition for RE = Gd, Tb, Ho and Er, while for RE = Tm a para- to antiferromagnetic transition is found. The case of RE = Dy is more complicated, as the ferromagnetic state is reached through an intermediate antiferromagnetic state occurring in a relatively narrow temperature range.
Anomalies in the $\chi(T)$ curves, observed below the respective critical temperatures of magnetic ordering for all compounds except RE = Tm, suggest the presence of additional temperature-induced magnetic transitions. Such behavior has also been observed in the isostructural RE5Ni2In4 [4, 5, 6, 7, 8, 9, 10]. It can be attributed to the complexity of the crystal structure: as the rare earth atoms occupy three nonequivalent Wyckoff sites, different interactions compete. The complex magnetic properties manifest themselves also in the coexistence of ferro- and antiferromagnetic contributions to the magnetic structure, as confirmed by the shapes of the magnetization vs. applied magnetic field curves, which show both the presence of coercivity fields, characteristic of ferromagnetic order, and metamagnetic transitions, characteristic of antiferromagnetic ordering (see the lower insets in Figs. 3a-f). It has to be mentioned that the exact thermal evolution of the magnetic structures cannot be determined solely from the magnetic data; further neutron diffraction studies are required.

Figure 8: Critical temperatures of magnetic ordering ($T_{C,N}$) along with paramagnetic Curie temperatures $\theta_{p}$ vs. the de Gennes factor for RE5T2In4 (RE = rare earth element, T = Ni, Pd, Pt). The data for T = Ni have been taken from [4, 5, 6, 8, 9], while those for T = Pd from [11]. The dotted lines indicate the theoretical de Gennes scaling for RE5Pt2In4, taking the characteristic temperatures of Gd5Pt2In4 as reference.

The reciprocal magnetic susceptibilities of RE5Pt2In4 (RE = Tb-Tm) follow the Curie-Weiss law at high temperatures (see Fig. 3). The deviations from linearity observed at lower temperatures indicate the influence of the crystalline electric field (CEF). The determined paramagnetic Curie temperatures are positive, revealing that ferromagnetic interactions are dominant. The obtained values of the effective magnetic moments are close to those predicted for the free RE3+ ions. The small discrepancies between the experimental and theoretical values do not exceed the systematic errors of the experiment. Therefore, the magnetism of RE5Pt2In4 (RE = Tb-Tm) is strictly related to the rare earth magnetic moments, while the magnetic moments of the remaining Pt and In elements are either zero or too small to be detectable when accompanied by the high rare earth moments.

According to the crystallographic data reported in Tables 1 and 4 in [2], the rare earth interatomic distances in RE5Pt2In4 (RE = Tb-Tm) exceed 3.3 Å. They are therefore large enough to exclude any direct interactions, and indirect interactions of the RKKY type are expected in this family of compounds. One of the predictions of the RKKY theory is the so-called de Gennes scaling, which assumes direct proportionality between the critical temperature of magnetic ordering and the de Gennes factor, defined as $(g_{J}-1)^{2}J(J+1)$, where $g_{J}$ is the Landé factor and $J$ is the total angular momentum of the RE3+ rare earth ion. Figure 8 presents the critical temperatures of magnetic ordering for RE5T2In4 (T = Ni, Pd, Pt) plotted against the de Gennes factor. In addition, the paramagnetic Curie temperatures, which are also a measure of the strength of the magnetic interactions, are shown. The lack of the theoretically predicted proportionality is further evidence of the significant influence of the CEF on the magnetic properties of RE5T2In4 (RE = rare earth element, T = Ni, Pd, Pt).
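The de Gennes factors entering Fig. 8 follow directly from the Hund's-rules ground states of the free RE3+ ions. A small sketch (standard $g_{J}$ and $J$ values; the 76 K reference is the $T_{C}$ of Gd5Pt2In4 from Table 2):

```python
# Hund's-rules ground states of the free RE3+ ions: (g_J, J)
ions = {
    "Gd": (2.0, 3.5),    # 8S7/2
    "Tb": (1.5, 6.0),    # 7F6
    "Dy": (4 / 3, 7.5),  # 6H15/2
    "Ho": (1.25, 8.0),   # 5I8
    "Er": (1.2, 7.5),    # 4I15/2
    "Tm": (7 / 6, 6.0),  # 3H6
}

def de_gennes(g, J):
    """de Gennes factor (g_J - 1)^2 J (J + 1)."""
    return (g - 1.0) ** 2 * J * (J + 1.0)

# Ordering temperatures predicted by de Gennes scaling,
# normalized to T_C = 76 K for Gd5Pt2In4 (Table 2).
dG_Gd = de_gennes(*ions["Gd"])  # 15.75
for re, (g, J) in ions.items():
    print(re, round(de_gennes(g, J), 2), round(76 * de_gennes(g, J) / dG_Gd, 1))
```

The scaled prediction for Tb5Pt2In4 ($\sim$51 K) falls far below the observed $T_{C}$ of 108 K, which makes the deviation from de Gennes scaling seen in Fig. 8 explicit.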
It is worth noting that the critical temperature of magnetic ordering of a particular Ni-based compound is in most cases higher than those of its Pd- and Pt-based analogues. This finding coincides with the increase of the interatomic distances with increasing atomic number of the d-electron element (compare the lattice parameters for T = Ni [3, 31, 5], Pd [32] and Pt [2]). The increase of the interatomic distances therefore leads to a weakening of the magnetic interactions.

Figure 9: The plots of $M^{2}$ versus $\mu_{0}H/M$ at selected temperatures for RE5Pt2In4: (a) RE = Gd, (b) RE = Tb, (c) RE = Dy, (d) RE = Ho, (e) RE = Er and (f) RE = Tm.

The type of magnetic transition (first or second order) can be derived from the shapes of the Arrott plots ($M^{2}$ vs. $\mu_{0}H/M$) collected at selected temperatures [33]. A positive slope of the Arrott curve corresponds to a second-order phase transition (SOPT), while a negative slope corresponds to a first-order phase transition (FOPT). Fig. 9 shows the Arrott plots for RE5Pt2In4 (RE = Gd-Tm). For RE = Gd, Ho, Er and Tm only positive slopes are found, indicating magnetic transitions of the second-order type. The situation is more complicated for RE = Tb and Dy, as positive slopes characteristic of SOPT are found below the critical temperature of magnetic ordering, except at the lowest temperature of 2 K, where a negative slope characteristic of FOPT is observed within a limited range. This behavior is consistent with the shapes of the magnetization vs. temperature curves (see Fig. 5). For low values of the applied magnetic field, the magnetization initially increases with decreasing temperature, but finally undergoes a sudden drop below 20 K. Such behavior indicates that the high-temperature ferro-/ferrimagnetic structure transforms into an antiferromagnetic one with decreasing temperature. According to the shape of the Arrott plots, this phase transition is of the first-order type (FOPT).

Table 4: Comparison of the magnetocaloric performance under a magnetic flux density change of 0–7 T for RE5Pt2In4 (RE = Gd-Tm), the isostructural RE5Ni2In4 (RE = Dy, Ho, Er) and other selected ternary rare earth-based indides. $T_{cr}$ denotes the critical temperature of the magnetic ordering (Curie or Néel temperature).

Material | $T_{cr}$ [K] | $-\Delta S_{\mathrm{M}}^{\mathrm{max}}$ [J$\cdot$kg${}^{-1}\cdot$K${}^{-1}$] | RCP [J$\cdot$kg${}^{-1}$] | RC [J$\cdot$kg${}^{-1}$] | Ref.
---|---|---|---|---|---
Gd5Pt2In4 | 76 | 3.0 | 261 | 209 | this work
Tb5Pt2In4 | 108 | 2.5 | 303 | 248 | this work
Dy5Pt2In4 | 93 | 4.7 | 250 | 180 | this work
Ho5Pt2In4 | 23.5 | 9.5 | 451 | 373 | this work
Er5Pt2In4 | 12.6 | 9.6 | 328 | 256 | this work
Tm5Pt2In4 | 4.1 | 9.2 | 189 | 150 | this work
Dy5Ni2In4 | 105 | 4.7 | 286 | 209 | [10]
Ho5Ni2In4 | 31 | 10.1 | 458 | 352 | [10]
Er5Ni2In4 | 21 | 10.2 | 377 | 273 | [10]
Gd11Co4In9 | 86 | 10.95 | 538.1 | 405.9 | [34]
Tb11Co4In9 | 95∗ | 4.43 | 376.8 | 274.4 | [35]
Dy11Co4In9 | 37 | 4.66 | 213.9 | 165.9 | [34]
Ho11Co4In9 | 20 | 12.29 | 475.2 | 357.4 | [34]
Er11Co4In9 | 5.4∗ | 12.80 | 416.1 | 320.0 | [35]
Gd11Ni4In9 | 91 | 3.58 | 269.0 | 206.4 | [36]
Dy11Ni4In9 | 18 | 6.02 | 194.9 | 144.7 | [36]
Ho11Ni4In9 | 13.5 | 12.44 | 353.0 | 269.2 | [36]
Gd6Co2.2In0.8 | 76 | 11.84 | 814.23 | 633.55 | [37]
Tb6Co2.2In0.8 | 32 | 8.96 | 394.15 | 284.32 | [37]
Dy6Co2.2In0.8 | 50 | 9.59 | 517.01 | 389.77 | [37]
Ho6Co2.2In0.8 | 18 | 20.83 | 626.36 | 466.07 | [37]

Table 4 contains a comparison of the MCE performance under a magnetic flux density change of 0–7 T for RE5Pt2In4 (RE = Gd-Tm), the isostructural RE5Ni2In4 (RE = Dy, Ho, Er) and other selected ternary rare earth-based indides. For a given rare earth element, the MCE performance of a member of the RE5Pt2In4 (RE = Gd-Tm) family of compounds is similar to that of its RE5Ni2In4 and RE11T4In9 (T = Co, Ni) analogues. It is worth mentioning that the MCE performance of RE5Pt2In4 (RE = Ho and Er) reported in this work is quite high and comparable to that of the best known low-temperature magnetocaloric materials [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 34, 39, 40, 36, 10, 37].

## V Conclusions

The RE5Pt2In4 (RE = Gd-Tm) compounds crystallize in an orthorhombic crystal structure of the Lu5Ni2In4-type (Pbam space group), in which the rare earth atoms occupy three nonequivalent sites. The compounds show complex magnetic properties and quite good MCE performance at low temperatures. With decreasing temperature all investigated compounds undergo a transition to a magnetically ordered state, in most cases followed by a cascade of extra magnetic transitions appearing below the respective critical temperature of magnetic ordering. The rare earth atoms are found to possess magnetic moments, while the moments of the remaining Pt and In atoms are absent or too small to be detected in the presence of the strong rare earth moments. The MCE performance of RE5Pt2In4 (RE = Gd-Tm) is comparable to that of the best known low-temperature magnetocaloric materials, especially for the compounds with RE = Ho and Er.

## Declaration of Competing Interest

The authors declare no conflict of interest.

## Acknowledgements

This research was supported in part by the Excellence Initiative – Research University Program at the Jagiellonian University in Kraków. The research was partially carried out with the equipment purchased thanks to the financial support of the European Regional Development Fund in the framework of the Polish Innovation Economy Operational Program (contract no. POIG.02.01.00-12-023/08).

## References

* Gupta and Suresh [2015] S. Gupta, K. Suresh, Review on magnetic and related properties of RTX compounds, Journal of Alloys and Compounds 618 (2015) 562–606. 10.1016/j.jallcom.2014.08.079.
* R. Zaremba et al. [2007] R. Zaremba, U. Ch. Rodewald, R. Pöttgen, Rare Earth-Rich Indides RE5Pt2In4 (RE = Sc, Y, La–Nd, Sm, Gd–Tm, Lu), Monatshefte für Chemie 138 (2007) 819–822. 10.1007/s00706-007-0702-6.
* V. I. Zaremba et al. [1991] V. I. Zaremba, Ya. M. Kalychak, P. Yu. Zavalii, V. A. Bruskov, Crystal structures of the compounds R5Ni2In4 (R = Ho, Er, Tm, Lu), Kristallografiya 36 (1991) 1415–1418.
* Yu. B. Tyvanchuk et al. [2010] Yu. B. Tyvanchuk, B. Penc, A. Szytuła, A. Zarzycki, Magnetic Properties of Ho5Ni2In4, Acta Phys. Pol. A 117 (2010) 599–600. 10.12693/APhysPolA.117.599.
* A. Provino et al. [2012] A. Provino, Y. Mudryk, D. Paudyal, P. V. Smetana, V. K. Pecharsky, K. A. Gschneidner, Jr., J. D. Corbett, Crystal structure of Tb5Ni2In4 and Y5Ni2In4, and magnetic properties of Dy5Ni2In4, J. Appl. Phys. 111 (2012) 07E122. 10.1063/1.3673432.
* Gondek et al. [2012] Ł. Gondek, J. Przewoźnik, J. Czub, Yu. Tyvanchuk, A. Szytuła, A. Arurlaj, Crystal and magnetic properties of Er5Ni2In4 at low temperatures, Intermetallics 21 (2012) 10–17. 10.1016/j.intermet.2011.09.007.
* A. Szytuła et al. [2013] A. Szytuła, Yu. Tyvanchuk, S. Baran, J. Przewoźnik, Ya. M. Kalychak, Magnetic and thermal properties of Tm5Ni2In4, Intermetallics 43 (2013) 99–102. 10.1016/j.intermet.2013.07.014.
* Szytuła et al. [2014] A. Szytuła, S. Baran, D. Kaczorowski, W. Sikora, A. Hoser, Magnetic ordering in Tm5Ni2In4, Journal of Alloys and Compounds 617 (2014) 149–153. 10.1016/j.jallcom.2014.07.190.
* Ritter et al. [2015] C. Ritter, A. Provino, P. Manfrinetti, V. K. Pecharsky, K. A. Gschneidner, S. K. Dhar, Magnetic structures of R5Ni2In4 and R11Ni4In9 (R = Tb and Ho): strong hierarchy in the temperature dependence of the magnetic ordering in the multiple rare-earth sublattices, Journal of Physics: Condensed Matter 27 (2015) 476001. 10.1088/0953-8984/27/47/476001.
* Zhang et al. [2018] Z. Zhang, X. Dong, Q. Wang, L. Li, Investigation of the crystal structure, magnetic phase transition and magnetocaloric effect in RE5Ni2In4 (RE = Dy, Ho and Er) compounds, Intermetallics 100 (2018) 136–141. 10.1016/j.intermet.2018.06.012.
* Baran et al. [2021] S. Baran, A. Deptuch, M. Reehuis, Y. Tyvanchuk, F. Yokaichiya, A. Szytuła, Complex magnetic ordering in RE5Pd2In4 (RE = Tb-Tm) compounds investigated by neutron diffraction and magnetometric measurements, Journal of Alloys and Compounds 877 (2021) 160171. 10.1016/j.jallcom.2021.160171.
* Li and Yan [2023] L. Li, M. Yan, Recent progress in the development of RE2TMTM’O6 double perovskite oxides for cryogenic magnetic refrigeration, Journal of Materials Science & Technology 136 (2023) 1–12. 10.1016/j.jmst.2022.01.041.
* Guo et al. [2022] D. Guo, L. M. Moreno-Ramírez, J.-Y. Law, Y. Zhang, V. Franco, Excellent cryogenic magnetocaloric properties in heavy rare-earth based HRENiGa2 (HRE = Dy, Ho, or Er) compounds, Science China Materials (2022). 10.1007/s40843-022-2095-6.
* Xu et al. [2022] P. Xu, L. Hu, Z. Zhang, H. Wang, L. Li, Electronic structure, magnetic properties and magnetocaloric performance in rare earths (RE) based RE2BaZnO5 (RE = Gd, Dy, Ho, and Er) compounds, Acta Materialia 236 (2022) 118114. 10.1016/j.actamat.2022.118114.
* Zhang et al. [2022] Y. Zhang, S. Li, L. Hu, X. Wang, L. Li, M. Yan, Excellent magnetocaloric performance in the carbide compounds RE2Cr2C3 (RE = Er, Ho, and Dy) and their composites, Materials Today Physics 27 (2022) 100786. 10.1016/j.mtphys.2022.100786.
* Ma et al. [2021] Z. Ma, X. Dong, Z. Zhang, L. Li, Achievement of promising cryogenic magnetocaloric performances in La1-xPrxFe12B6 compounds, Journal of Materials Science & Technology 92 (2021) 138–142. 10.1016/j.jmst.2021.02.055.
* Zhang [2019] Y. Zhang, Review of the structural, magnetic and magnetocaloric properties in ternary rare earth RE2T2X type intermetallic compounds, Journal of Alloys and Compounds 787 (2019) 1173–1186. 10.1016/j.jallcom.2019.02.175.
* Li and Yan [2020] L. Li, M. Yan, Recent progresses in exploring the rare earth based intermetallic compounds for cryogenic magnetic refrigeration, Journal of Alloys and Compounds 823 (2020) 153810. 10.1016/j.jallcom.2020.153810.
* Zhang et al. [2022] Y. Zhang, Y. Tian, Z. Zhang, Y. Jia, B. Zhang, M. Jiang, J. Wang, Z. Ren, Magnetic properties and giant cryogenic magnetocaloric effect in B-site ordered antiferromagnetic Gd2MgTiO6 double perovskite oxide, Acta Materialia 226 (2022) 117669. 10.1016/j.actamat.2022.117669.
* Xu et al. [2021] P. Xu, Z. Ma, P. Wang, H. Wang, L. Li, Excellent cryogenic magnetocaloric performances in ferromagnetic Sr2GdNbO6 double perovskite compound, Materials Today Physics 20 (2021) 100470. 10.1016/j.mtphys.2021.100470.
* Li et al. [2020] L. Li, P. Xu, S. Ye, Y. Li, G. Liu, D. Huo, M. Yan, Magnetic properties and excellent cryogenic magnetocaloric performances in B-site ordered RE2ZnMnO6 (RE = Gd, Dy and Ho) perovskites, Acta Materialia 194 (2020) 354–365. 10.1016/j.actamat.2020.05.036.
* Zhang et al. [2022] Y. Zhang, J. Zhu, S. Li, J. Wang, Z. Ren, Achievement of giant cryogenic refrigerant capacity in quinary rare-earths based high-entropy amorphous alloy, Journal of Materials Science & Technology 102 (2022) 66–71. 10.1016/j.jmst.2021.06.028.
* Rodríguez-Carvajal [1993] J. Rodríguez-Carvajal, Recent advances in magnetic structure determination by neutron powder diffraction, Physica B 192 (1993) 55–69. 10.1016/0921-4526(93)90108-I.
* Rodríguez-Carvajal [2001] J. Rodríguez-Carvajal, Recent developments of the program FullProf, Newsletter of the Commission for Powder Diffraction of the IUCr 26 (2001) 12–19.
* Momma and Izumi [2011] K. Momma, F. Izumi, VESTA3 for three-dimensional visualization of crystal, volumetric and morphology data, Journal of Applied Crystallography 44 (2011) 1272–1276. 10.1107/S0021889811038970.
* C. Kittel [2004] C. Kittel, Introduction to Solid State Physics, 8th ed., John Wiley and Sons, New Jersey, USA, 2004.
* A. M. Tishin and Y. I. Spichkin [2003] A. M. Tishin, Y. I. Spichkin, The Magnetocaloric Effect and its Applications, Institute of Physics Publishing, Bristol and Philadelphia, 2003.
* L. D. Griffith et al. [2018] L. D. Griffith, Y. Mudryk, J. Slaughter, V. K. Pecharsky, Material-based figure of merit for caloric materials, J. Appl. Phys. 123 (2018) 034902. 10.1063/1.5004173.
* K. A. Gschneidner, Jr. and V. K. Pecharsky [2000] K. A. Gschneidner, Jr., V. K. Pecharsky, Magnetocaloric materials, Annu. Rev. Mater. Sci. 30 (2000) 387–429. 10.1146/annurev.matsci.30.1.387.
* M. E. Wood and W. H. Potter [1985] M. E. Wood, W. H. Potter, General analysis of magnetic refrigeration and its optimization using a new concept: maximization of refrigerant capacity, Cryogenics 25 (1985) 667–683. 10.1016/0011-2275(85)90187-0.
* Yu. B. Tyvanchuk et al. [2008] Yu. B. Tyvanchuk, U. C. Rodewald, Ya. M. Kalychak, R. Pöttgen, Rare earth–nickel–indides Dy5Ni2In4 and RE4Ni11In20 (RE = Gd, Tb, Dy), Journal of Solid State Chemistry 181 (2008) 878–883. 10.1016/j.jssc.2008.01.035.
* Sojka et al. [2008] L. Sojka, M. Dashkevych, B. Belan, M. Manyako, V. Davydov, L. Akselrud, Ya. M. Kalychak, Crystal structure of alloys R5Pd2In4 (R = Y, Tb, Dy, Ho, Er, Tm, Lu), Ukr. Chem. J. 74 (2008) 90–94.
* B. K. Banerjee [1964] B. K. Banerjee, On a generalised approach to first and second order magnetic transitions, Phys. Lett. 12 (1964) 16–17. 10.1016/0031-9163(64)91158-8.
* Zhang et al. [2020] Z. Zhang, P. Wang, N. Wang, X. Wang, P. Xu, L. Li, Structural and cryogenic magnetic properties of rare earth rich RE11Co4In9 (RE = Gd, Dy and Ho) intermetallic compounds, Dalton Trans. 49 (2020) 8764–8773. 10.1039/D0DT01212B.
* Baran et al. [2023] S. Baran, A. R. Hayyu, Yu. Tyvanchuk, A. Szytuła, Magnetocaloric performance of RE11Co4In9 (RE = Tb, Er), Phase Transitions 96 (2023) 115–125. 10.1080/01411594.2022.2149398.
* Zhang et al. [2021a] Z. Zhang, P. Wang, Y. Jia, X. Wang, L. Li, Crystal structure, magnetic phase transitions and magnetocaloric effect (MCE) in layer-like RE11Ni4In9 (RE = Gd, Dy and Ho) compounds, Journal of Alloys and Compounds 851 (2021a) 155863. 10.1016/j.jallcom.2020.155863.
* Zhang et al. [2021b] Z. Zhang, I. Muts, L. Li, R. Pöttgen, Magnetic properties and magnetocaloric performances of the rare earth-rich indides RE6Co2.2In0.8 (RE = Gd, Tb, Dy and Ho) with Ho6Co2Ga-type structure, Intermetallics 136 (2021b) 107254. 10.1016/j.intermet.2021.107254.
* Baran et al. [2021] S. Baran, Yu. Tyvanchuk, A. Szytuła, Crystal structure and magnetic properties of R11Co4In9 (R = Tb, Dy, Ho and Er) compounds, Intermetallics 130 (2021) 107065. 10.1016/j.intermet.2020.107065.
* Gschneidner, Jr. et al. [2005] K. A. Gschneidner, Jr., V. K. Pecharsky, A. O. Tsokol, Recent developments in magnetocaloric materials, Rep. Prog. Phys. 68 (2005) 1479–1539. 10.1088/0034-4885/68/6/R04.
* J. Lyubina [2017] J. Lyubina, Magnetocaloric materials for energy efficient cooling, Journal of Physics D: Applied Physics 50 (2017) 053002. 10.1088/1361-6463/50/5/053002.
# Metallicities for more than 10 million stars derived from $Gaia$ BP/RP spectra

Table 3 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/.

T. Xylakis-Dornbusch1 (ORCID: 0000-0002-1296-2907), N. Christlieb1 (0000-0002-4043-2727), T. T. Hansen2 (0000-0001-6154-8983), T. Nordlander3,4 (0000-0001-5344-8069), K. B. Webber5,6 (0000-0002-9762-4308), J. Marshall5,6 (0000-0003-0710-9474)

1 Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, 69117 Heidelberg, Germany (e-mail: <EMAIL_ADDRESS>)
2 Department of Astronomy, Stockholm University, AlbaNova University Center, SE-106 91 Stockholm, Sweden
3 Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia
4 ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia
5 George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA
6 Department of Physics & Astronomy, Texas A&M University, 4242 TAMU, College Station, TX 77843, USA

###### Abstract

Context. The third $Gaia$ Data Release, which includes BP/RP spectra for 219 million sources, has opened a new window on the exploration of the chemical history and evolution of the Milky Way. As shown in many studies, the wealth of information encapsulated in these data is far greater than their low resolving power ($R\sim 50$) would at first glance suggest. We focus on the use of these data for the detection of ‘new’ metal-poor stars, which are hard to find yet essential for understanding, among other things, several aspects of the origin of the Galaxy, star formation and the creation of the elements.

Aims. We strive to refine a metal-poor candidate selection method that was developed with simulated $Gaia$ BP/RP spectra, with the ultimate objective of providing the community with both a recipe to select stars for medium/high resolution observations and a catalogue of stellar metallicities.

Methods. We used a dataset composed of GALAH DR3 and SAGA database stars in order to verify our selection method and adjust it to real-world data. For that purpose, we used dereddening as a means to tackle the issue of extinction, and then we applied our fine-tuned method to select metal-poor candidates, which we thereafter observed and analysed.

Results. We were able to infer metallicities for GALAH DR3 and SAGA stars with color excesses $E(B-V)<1.5$ with an uncertainty of $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.36$, which is good enough for the purpose of identifying new metal-poor stars. Further, we selected 26 metal-poor candidates via our method for observations. As the spectral analysis showed, 100% of them had $\mathrm{[Fe/H]}<-2.0$, 57% had $\mathrm{[Fe/H]}<-2.5$ and 8% had $\mathrm{[Fe/H]}<-3.0$. We inferred metallicities for these stars with an uncertainty of $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.31$, as confirmed by comparison with the spectroscopic $\mathrm{[Fe/H]}$. Finally, we assembled a catalogue of metallicities for 10 861 062 stars.

###### Key Words.: stars: Population II - Catalogs - Surveys

## 1 Introduction

The oldest stars that are still alive today and are located nearby have metallicities $<-3$ (extremely metal-poor (EMP) stars; Beers & Christlieb 2005). EMP stars are rare and difficult to find.
They are the descendants of the first generation of stars. Hence, EMP stars carry information that can shed light on the properties of their predecessors, as well as on how the latter exploded and ended their lives. Consequently, finding a large number of new EMP stars for which detailed studies of their chemical composition can be conducted is of the essence, since that would provide constraints on the assembly of the Galaxy, on the initial mass function of the first stars, and on the nucleosynthesis processes that formed the heavy elements.

The $Gaia$ Survey (Gaia Collaboration et al., 2016) released in 2022 the low-resolution (R$\sim 50$) $Gaia$ BP/RP spectra for 219 million sources (De Angeli et al., 2023), and there have already been many studies that provide metallicity estimates for several thousands to millions of objects by extracting information from BP/RP spectra, often complemented by additional data from $Gaia$ itself (for example Radial Velocity Spectrometer (RVS) spectra; Katz et al. 2023) or from other surveys. Bellazzini et al. (2023) derived metallicities for $\sim 700\,000$ stars, and Andrae et al. (2023a) delivered a catalog of stellar parameters, including the metallicity, using a Bayesian forward-modelling approach (Bailer-Jones et al., 2013). Yao et al. (2024) used a classification algorithm (XGBoost; Chen & Guestrin 2016) to identify 188 000 very metal-poor star candidates. Rix et al. (2022) used the machine learning algorithm XGBoost to estimate $\mathrm{[M/H]}$ for 2 million stars, with 18 000 of them in the very metal-poor regime. Andrae et al. (2023b) produced a new catalogue, improving on that of Rix et al. (2022). The new catalogue was assembled by training the XGBoost algorithm on stellar parameters taken from Data Release 17 (DR17) of the Sloan Digital Sky Survey's (SDSS) APOGEE survey (Abdurro’uf et al., 2022), and from Li et al. (2022), who derived stellar parameters for 400 extremely and ultra metal-poor stars. Andrae et al. (2023b) delivered a catalogue for $\sim$ 175 million stars, with a mean precision of 0.1 dex for $\mathrm{[M/H]}$. Zhang et al. (2023) used a forward model to estimate the effective temperature, surface gravity, metallicity, distance, and extinction for 220 million stars. In order to do so, they used $Gaia$ XP-based data-driven models along with 2MASS (Skrutskie et al., 2006) and WISE (Schlafly et al., 2019) photometry. The forward model was trained and validated on stellar parameters from the LAMOST survey (Wang et al., 2022), yielding $\mathrm{[Fe/H]}$ with a typical uncertainty of 0.15 dex. Martin et al. (2023) used the BP/RP spectra to derive synthetic photometry of the Ca H & K region, based on the narrow-band photometry of the Pristine Survey (Starkenburg et al., 2017). They updated the Pristine metallicity inference model so that it is exclusively based on $Gaia$ magnitudes ($G$, $G_{BP}$, and $G_{RP}$), and produced a catalogue of metallicities for more than 52 million stars. Martin et al. (2023) show that their photometric metallicities are accurate down to $\mathrm{[Fe/H]}\sim\,-3.5$, and thus very well suited for the study of the metal-poor Galaxy. Another study that takes advantage of the BP/RP spectra in order to derive stellar parameters and/or metallicities is Cunningham et al. (2023).

Xylakis-Dornbusch et al. (2022) (Paper I) developed an empirical method based on flux-ratios of synthetic $Gaia$ BP/RP spectra for the purpose of identifying new metal-poor stars.
Specifically, the flux-ratios were those of the Ca H & K lines to the H$\beta$ region ($fr_{\mathrm{CaHK/H\beta}}$, with $388\,\mathrm{nm}<\lambda<401\,\mathrm{nm}$ and $479\,\mathrm{nm}<\lambda<501\,\mathrm{nm}$), and of the G-band region to the Ca near-infrared triplet (Ca NIR) ($fr_{\mathrm{G/CaNIR}}$, with $420\,\mathrm{nm}<\lambda<444\,\mathrm{nm}$ and $846\,\mathrm{nm}<\lambda<870\,\mathrm{nm}$). It was shown that, for roughly constant $fr_{\mathrm{G/CaNIR}}$, $fr_{\mathrm{CaHK/H\beta}}$ declines exponentially as the metallicity increases. This work is a follow-up to Paper I, aiming at verifying the metal-poor star selection recipe presented therein by applying it to $Gaia$ DR3 BP/RP spectra.

The paper is laid out in the following manner: in Section 2 we describe the dataset we used for the purpose of the validation of the method in Paper I, as well as how we addressed the issue of extinction, which was not dealt with in our previous work. We close the Section with a description of the modifications we made, compared to Paper I, to the selection procedure and metallicity estimation of the metal-poor candidate stars. Next, we present in Section 3 the results of the method verification, including the expected success rate in selecting stars that are very metal-poor ($\mathrm{[Fe/H]}<-2$) and below that threshold, and the purity of that ensemble. In Section 4 we investigate the plausibility of OBA stars being selected as metal-poor stars via our method. Furthermore, in Section 5 we describe the application of our fine-tuned recipe by selecting candidate metal-poor stars and subsequently observing them; we then present the results of our observations.

## 2 Methods

For the verification of the selection process we used stellar parameters from high- and medium-resolution surveys/studies, and the respective flux-dereddened $Gaia$ BP/RP spectra. The software GaiaXPy (available at https://gaia-dpci.github.io/GaiaXPy-website/; version 2.0.1, DOI: 10.5281/zenodo.7566303) was used to generate the $Gaia$ BP/RP spectra, while dust_extinction (https://github.com/karllark/dust_extinction) and dustmaps (https://github.com/gregreen/dustmaps; Green 2018) were used to deredden the spectral flux.

### 2.1 Dataset

The dataset we used for this work is composed of two different cross-matches with $Gaia$ BP/RP externally calibrated spectra (Montegriffo et al., 2023; Gaia Collaboration et al., 2023, 2016): the first one with the Stellar Abundances for Galactic Archaeology (SAGA) database (Suda et al., 2008, 2011; Yamada et al., 2013; Suda et al., 2017), and the second with the Galactic Archaeology with HERMES data release 3 (GALAH DR3) (Buder et al., 2021). Both datasets together comprise 21 812 stars. We applied quality cuts on this dataset by finding correlations between falsely identified metal-poor stars and quality parameters, and ended up with 20 850 stars. Since this procedure could only be done after the application of our method to the dataset, we elaborate on it both in this section and in the results section. The quality cuts we applied were twofold: one with respect to the quality of the stellar parameters of the dataset, and another stemming from the quality of the $Gaia$ BP/RP spectra themselves, as well as from the effect of reddening.
Concerning the first, stars for which there was no reliable metallicity estimate from GALAH were dropped (i.e. only stars with flag_fe_h = 0 were kept). The mean uncertainty in the iron abundance for the remaining GALAH stars is 0.12 dex. We did not use any quality flag for the SAGA stars, but we resorted to the provided iron abundance uncertainties ($\overline{\sigma}_{SAGA}\sim 0.17$ dex). The GALAH $\mathrm{[Fe/H]}$ were computed using $\mathrm{A(Fe)_{\odot}=7.38}$ (for details see Buder et al. 2021), while the SAGA database utilizes the Asplund et al. (2009) solar chemical composition, that is, $\mathrm{A(Fe)_{\odot}=7.50}$. We consider this difference in the normalization of the metallicities of the two components comprising our dataset to be negligible, since we are not aiming at delivering high-precision iron abundances, but rather intend to identify metal-poor stars. Further, as appears in the Kiel diagram (Figure 1), the final dataset we used spans from dwarf to giant stars, with most of the GALAH stars having disk-like kinematics (Buder et al., 2021) and a mean distance of $\overline{D}\sim 1.9$ kpc (distances taken from Bailer-Jones et al. 2018), and the SAGA stars having $\overline{D}\sim 1.8$ kpc (distances taken from Fouesneau et al. 2023) and belonging to the Galactic halo.

Regarding the spectra quality, we set a limit on the blending fraction $\beta$ of the BP/RP spectra and on the color excess ($E(B-V)$). The former was defined by Riello et al. (2021) as “…the sum of the number of blended transits in BP and RP divided by the sum of the number of observations in BP and RP”. We slightly modified it, specifically as

$\beta=\frac{\mathrm{bp\_n\_blended\_transits}+\mathrm{rp\_n\_blended\_transits}+\mathrm{bp\_n\_contaminated\_transits}+\mathrm{rp\_n\_contaminated\_transits}}{\mathrm{bp\_n\_transits}+\mathrm{rp\_n\_transits}}$

We set $\beta\leq 0.5$. Finally, complementary to our work in Paper I, we include objects in our dataset whose reddening is well above $E(B-V)=0.06$ (see Figure 2), which mandates that we tackle the issue of extinction.

Figure 1: Kiel diagram of the dataset.

Figure 2: Histogram of the reddening distribution of our dataset.

### 2.2 Reddening

As a first approach, we aimed at finding a reddening-independent index, similar to Bonifacio et al. (2000a). Since the region of the H$\beta$ line is part of the $fr_{\mathrm{CaHK/H\beta}}$ ratio (see Paper I for details), we decided to test whether the Strömgren $\beta$ index withstands extinction, replacing the H$\beta$ region in $fr_{\mathrm{CaHK/H\beta}}$ with it. The results were not those we anticipated: the $\beta$ index changed with extinction, even though it showed a sensitivity to effective temperature. As we were not able to define a reddening-independent metallicity calibration, we instead sought to implement reddening corrections for the metallicity calibration by means of dereddening the spectra. Therefore, we used the dust maps of Schlegel et al. (1998) (SFD), re-calibrated by Schlafly & Finkbeiner (2011), the extinction model of Fitzpatrick (1999) and $R_{v}=3.1$ to deredden the externally calibrated BP/RP spectra. We repeated the above procedure using the extinction model of Cardelli et al. (1989) and found that the resulting flux-ratios have minimal differences from those calculated with the Fitzpatrick (1999) model. We chose the SFD maps because they cover the entire sky.
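For illustration, the dereddening step can be sketched with the two packages cited above. This is a minimal example rather than the actual pipeline: the 0.86 rescaling of the SFD values follows Schlafly & Finkbeiner (2011), the wavelength grid and the window-integration definition of the flux-ratios are our assumptions, and SFDQuery requires the SFD map to have been downloaded beforehand (via dustmaps.sfd.fetch()).

```python
import astropy.units as u
import numpy as np
from astropy.coordinates import SkyCoord
from dust_extinction.parameter_averages import F99
from dustmaps.sfd import SFDQuery

def deredden_spectrum(wave_nm, flux, ra_deg, dec_deg):
    """Correct a sampled, externally calibrated BP/RP spectrum for dust."""
    coord = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg, frame="icrs")
    # SFD E(B-V), rescaled per Schlafly & Finkbeiner (2011)
    ebv = 0.86 * SFDQuery()(coord)
    ext = F99(Rv=3.1)  # Fitzpatrick (1999) extinction curve
    # fraction of flux transmitted at each wavelength for this E(B-V)
    return flux / ext.extinguish(wave_nm * u.nm, Ebv=ebv)

def flux_ratio(wave_nm, flux, window_num, window_den):
    """Ratio of flux integrated over two wavelength windows (in nm), e.g.
    window_num=(388, 401) and window_den=(479, 501) for fr_CaHK/Hbeta."""
    w, f = np.asarray(wave_nm), np.asarray(flux)
    sel_n = (w >= window_num[0]) & (w <= window_num[1])
    sel_d = (w >= window_den[0]) & (w <= window_den[1])
    return np.trapz(f[sel_n], w[sel_n]) / np.trapz(f[sel_d], w[sel_d])
```

Applying flux_ratio to the dereddened flux in the four windows quoted in the introduction yields the $fr_{\mathrm{CaHK/H\beta}}$ and $fr_{\mathrm{G/CaNIR}}$ values used in the following sections.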
Considering the fact that the SFD maps account for the foreground dust, our stars need to be distant enough, or at a Galactic latitude high enough, for the distance dependence to be neglected. The SAGA stars are halo stars, and thus distant enough (D $\geq 1$ kpc; Schlafly & Finkbeiner 2011). In total, 81% of the stars in our dataset are either at a distance D $\geq 1$ kpc or at a latitude $\mid b\mid>30\degree$. For the remaining 19%, we calculated the reddening correction from Bonifacio et al. (2000b). For most of the stars we found no or a very small ($<0.001$ mag) correction. Only for 4% of the total sample did we find $E(B-V)$ corrections $\geq 0.02$ mag, so applying such a correction would have a negligible effect on the distribution in the dereddened flux-ratio plane (Figure 3).

### 2.3 Application of the method

In Paper I we provided coefficients for different pairs of $\mathrm{T_{eff}}$ and $\log g$ for the estimation of $\mathrm{[Fe/H]}$. We calibrated the coefficients for application to the real data, but the results did not correspond to the theoretical expectations. Further, the problem of acquiring well-defined effective temperatures and surface gravities for millions of stars, so that the metal-poor ones among them could be identified, became apparent. We therefore decided to use only quantities that can be directly derived from the spectra, i.e. the flux-ratios. The plane of the $fr_{\mathrm{CaHK/H\beta}}$ and $fr_{\mathrm{G/CaNIR}}$ flux-ratios (see Figure 3) enables us to find the loci of metal-poor ($\mathrm{[Fe/H]}<-1.0$) and further metal-deficient stars. The grey lines in Figure 3 represent different metallicity regimes, with the stars below the dashed-dotted and dotted lines being metal-poor ($\mathrm{[Fe/H]}<-1$) and very metal-poor ($\mathrm{[Fe/H]}<-2$), respectively.

## 3 Results

The results in the right panel of Figure 3 depict a clear correlation between the metallicity and the $fr_{\mathrm{CaHK/H\beta}}$ and $fr_{\mathrm{G/CaNIR}}$ flux-ratios. The left panel shows the flux-ratios before dereddening, and the right panel shows the dereddened values. We overplot a dashed-dotted (Cutoff1) and a dotted line (Cutoff2) to designate the flux-ratio areas where objects with $\mathrm{[Fe/H]}_{ref}\leq-1$ and $\mathrm{[Fe/H]}_{ref}\leq-2$, respectively, are primarily found ($\mathrm{[Fe/H]}_{ref}$ is the reference $\mathrm{[Fe/H]}$). By selecting metal-poor stars in that way, we found that there was a correlation between a high blending fraction $\beta$ and contaminants, i.e. stars with $\mathrm{[Fe/H]}_{ref}>-1$. We chose the $\beta$ limit such that there is a balance between acceptable contamination and completeness, since a greater $\beta$ means a greater number of stars. We define the completeness as the ratio of the number of selected stars below a certain metallicity threshold to the total number of stars in the dataset that have $\mathrm{[Fe/H]}_{ref}\leq\mathrm{threshold}$; the success rate as the percentage of the selected stars that have $\mathrm{[Fe/H]}_{ref}$ below a certain specified value; and the contamination as the percentage of selected stars that have a metallicity above the specified threshold. The results in Figure 3 were generated after the application of the quality cuts described above. By choosing all the stars below Cutoff1 in Figure 3, we are able to recover from the GALAH-SAGA sample more than 98% of the stars with $\mathrm{[Fe/H]}\leq-2$, all of the ultra metal-poor stars ($\mathrm{[Fe/H]}\leq-4$), and 70% of the stars with $\mathrm{[Fe/H]}<-1$.
The results in Figure 3 were generated after the application of the quality cuts described above. By choosing all the stars below Cutoff1 in Figure 3, we are able to recover from the GALAH-SAGA sample more than 98% of the stars with $\mathrm{[Fe/H]}\leq-2$, all the ultra metal-poor stars ($\mathrm{[Fe/H]}\leq-4$), and 70% of the stars with $\mathrm{[Fe/H]}<-1$. We record a success rate of $\sim 80\%$, 44%, and 20% for stars with $\mathrm{[Fe/H]}\leq-1$, $-1.5$, and $-2$, respectively. When we select stars below Cutoff2 we make a trade-off between the success rate and the completeness: we still recover more than 90% and 94% of the very and extremely metal-poor stars, respectively, but lose about 40% of those with $-2<\mathrm{[Fe/H]}\leq-1$ compared to the other cutoff. The success rate increases significantly, to $\sim 99\%$, 95%, and 60% for stars with $\mathrm{[Fe/H]}\leq-1$, $-1.5$, and $-2$, respectively (summarized in Figure 4). For comparison, we selected all stars that fall below the same dotted and dashed-dotted lines as before, but without dereddening (left panel of Figure 3), and calculated the statistics as above. Even though the completeness for the different metallicity categories is fairly similar, and in some cases even slightly better, the success rate is much lower and consequently the contamination is much higher.

Figure 3: Flux-ratios computed from the raw (left panel) and dereddened (right panel) fluxes of the $Gaia$ BP/RP spectra. The color-coding reflects the metallicity of the stars of the dataset we used. Below the dashed-dotted and dotted lines are the flux-ratio areas where stars with $\mathrm{[Fe/H]}\leq-1$ and $\mathrm{[Fe/H]}\leq-2$, respectively, are primarily found.

Figure 4: Completeness, success rate, and contamination of the stars that were selected from below the dashed-dotted line (left panel) and the dotted line (right panel). The stars were selected from the dereddened flux-ratio plane.

Further, we find that by selecting the metal-poor candidates through the flux-ratio plane, we can extrapolate the theoretical method described in Paper I to a broader parameter space. Specifically, in Paper I the recipe was developed for FGK stars in the effective temperature range of 4800-6300 K, whereas in this study we retrieve metal-poor stars with $4636\,K\leq\mathrm{T_{eff}}\leq 7150\,K$.

Finally, we estimated the iron abundances of our dataset as follows. First, we randomly sampled our GALAH-SAGA dataset and split it into two equal parts. We divided the flux-ratios of the first sub-dataset into $fr\mathrm{{}_{G/CaNIR}}$ bins. Then, we split each of those bins into metallicity bins, for which we calculated the mean $fr_{\mathrm{CaHK/H\beta}}$. Next, we found best fits to the sets of $fr_{\mathrm{CaHK/H\beta}}$ - $\mathrm{[Fe/H]}$ pairs (Figure 5), which we subsequently used to estimate the iron abundance of the second sub-dataset. We used the following function to perform the fits: $\mathrm{[Fe/H]}=-a\cdot fr\mathrm{{}_{CaHK/H\beta}}^{\,b}+c$ (1) where $a$, $b$, and $c$ are the coefficients of the best fit, which are shown in Table 1. The respective results of the metallicity estimation are presented in Figure 6. We were able to infer $\mathrm{[Fe/H]}$ with an uncertainty of $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.36$ dex. This precision is sufficient for reliably identifying metal-poor stars.

Table 1: Coefficients of the best fit.

$a$ | $b$ | $c$ | $fr\mathrm{{}_{G/CaNIR}}$
---|---|---|---
17.497139 | 1.119316 | 2.506009 | [1.3-1.8)
22.219935 | 1.512732 | 2.800237 | [1.8-2.3)
29.101554 | 2.624439 | 1.18252 | [2.3-2.8)
32.268827 | 3.351201 | 0.815609 | [2.8-3.3]

Note: the $fr\mathrm{{}_{G/CaNIR}}$ values are the ranges of applicability of each set of coefficients.

Figure 5: Best fits to the $fr_{\mathrm{CaHK/H\beta}}$ - $\mathrm{[Fe/H]}$ pairs. The different line colors convey the $fr\mathrm{{}_{G/CaNIR}}$ range of applicability.
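Equation (1) with the Table 1 coefficients amounts to a simple piecewise calibration. The sketch below (function and variable names are our own) applies it; as a consistency check, it reproduces, for example, the catalogued $\mathrm{[Fe/H]}_{inf}=-1.27$ of the corresponding entry of Table 3 below to within rounding.

```python
import numpy as np

# (fr_G/CaNIR bin edges, a, b, c) from Table 1; the last bin is closed at 3.3.
COEFFS = [
    (1.3, 1.8, 17.497139, 1.119316, 2.506009),
    (1.8, 2.3, 22.219935, 1.512732, 2.800237),
    (2.3, 2.8, 29.101554, 2.624439, 1.18252),
    (2.8, 3.3, 32.268827, 3.351201, 0.815609),
]

def infer_feh(fr_cahk_hbeta, fr_g_canir):
    """[Fe/H]_inf = -a * fr_CaHK/Hbeta**b + c (Eq. 1) within the matching bin."""
    for lo, hi, a, b, c in COEFFS:
        if lo <= fr_g_canir < hi or (hi == 3.3 and fr_g_canir == 3.3):
            return -a * fr_cahk_hbeta**b + c
    return np.nan  # outside the range of applicability, 1.3 <= fr_G/CaNIR <= 3.3

# e.g. infer_feh(0.440737, 2.951817) gives about -1.26, matching the
# corresponding Table 3 entry ([Fe/H]_inf = -1.27) to within rounding.
```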
Figure 6: Metallicity estimation of a subset of the GALAH-SAGA dataset. The points that have a black circle around them are located below the black dotted line in the flux-ratio plane (Figure 3). The color-coding reflects the effective temperature of the stars. We plot the inferred and reference $\mathrm{[Fe/H]}$ on the x- and y-axis, respectively.

## 4 OBA stars

OBA stars are young hot stars that can present emission lines in their spectra. When OB stars are highly reddened, they can appear as K-type stars; hence, good reddening values are essential in order to tell them apart from metal-poor FGK stars. Also, young or accreting stars can show emission lines in various spectral regions, including the Ca II H&K absorption lines. Consequently, the emission in the Ca II H&K lines results in a net weak absorption line, masking those stars as metal-poor. We therefore wish to test to what degree such stars are expected to contaminate a selected metal-poor candidate sample. We selected a random subset of 200 stars from the OBA stars golden sample (European Space Agency & DPAC Consortium 2022). Of those, 193 stars have an externally calibrated BP/RP spectrum, and 173 have a blending fraction $\beta\leq 0.5$. We dereddened the externally calibrated spectra as described in Section 2.2 and subsequently computed the flux-ratios. In Figure 7 we plot the flux-ratios of the OBA golden sample subset. In order to show the effect of extinction, which depends on the color excess rather than on the flux-ratios, we use logarithmic axes. The effect of extinction is demonstrated with an orange arrow, whose nock and point represent the flux-ratios before and after dereddening, respectively, for $E(B-V)\approx 0.3$ mag. As can be seen, none of the 173 stars appear in the region of the flux-ratio plane frequented by the metal-poor stars (Figure 3). However, since the location of the stars on the flux-ratio plane depends on the extinction, we caution the reader that highly reddened OBA stars with underestimated color-excess values could appear in the region of the metal-poor stars (yellow area in Figure 7) and hence contaminate a sample of metal-poor stars selected via this method.

Figure 7: Flux-ratios of OBA stars. The solid and dotted lines represent Cutoff1 and Cutoff2, respectively, while the yellow shaded area designates the region that is populated by very metal-poor (VMP) stars (see Figure 3). The color-coding indicates the Galactic latitude $b$ of each star. As can be seen, most of the stars are located on the Galactic plane ($\mid b\mid\leq 10\degree$). The orange arrow illustrates the effect of extinction for a color excess $E(B-V)\approx 0.3$ mag. The nock and the point of the arrow represent the flux-ratios before and after dereddening, respectively.

## 5 Observational metal-poor star candidate verification

In order to verify our metal-poor candidate selection method, as well as the metallicity estimation presented herein, we selected a sample of stars from $Gaia$ DR3 that had not been observed before. We opted for fairly bright giant stars, in order to achieve a signal-to-noise ratio (SNR) sufficient for deriving precise $\mathrm{[Fe/H]}$.
Further, the location of the telescope to be used was known beforehand; hence we used the following selection criteria: G = 12-13 mag, RA = 16-02 h, Dec = 0$\degree$ to +20$\degree$, $\mid b\mid>20\degree$, and $\beta\leq 0.5$, which yielded 90 798 stars. Next, we computed the flux-ratios. From the 90 798 stars, we chose those with flux-ratios $1\leq fr\mathrm{{}_{G/CaNIR}}\leq 5$, which left us with 70 509 stars. Then, we selected all the stars below a more stringent cut than Cutoff2, namely a line shifted parallel to Cutoff2 by 0.1 in $fr\mathrm{{}_{CaHK/H\beta}}$. That cutoff left us with 77 stars, of which 10 had already been observed at high resolution and their metallicities are, or will be, reported in the literature. It is worth noting that all 10 stars that appear in the literature are metal-poor. The reason we used a more stringent cut is that there is a clear correlation between the inferred metallicity and the position of a star on the flux-ratio plane. We opted to observe candidates with the lowest predicted metallicities, so had we used Cutoff2, most of the stars above the more stringent cutoff would not have made it into the final target list owing to their higher estimated $\mathrm{[Fe/H]}_{inf}$. We show the distribution of the inferred $\mathrm{[Fe/H]}_{inf}$ for metal-poor candidates located between Cutoff2 and our chosen cutoff in Figure 10. Finally, we estimated the $\mathrm{[Fe/H]}$ for the remaining 67 stars, and our final target list comprised 32 stars with $\mathrm{[Fe/H]}_{inf}\leq-2.35$, of which we managed to observe 26. Of the 35 stars that were not included in the target list, 8 were outside the metallicity inference range ($fr\mathrm{{}_{G/CaNIR}}>3.3$). The distribution of the inferred metallicities for the remaining 27 metal-poor candidates that were not included in the final target list is shown in Figure 11.

### 5.1 Observations and metallicity determinations

The targets were observed at McDonald Observatory with the Harlan J. Smith 2.7 m telescope and the TS23 echelle spectrograph (Tull et al., 1995). The spectra were obtained using a 1.2” slit and 1x1 binning, yielding a resolving power of $R\sim 60,000$ and covering a wavelength range of 3600-10000 Å. The 26 stars were observed over four nights in August 2023. The data were reduced using standard IRAF packages (Tody, 1986, 1993), including corrections for bias, flatfield, and scattered light. Table 2 lists the Gaia DR3 ID, right ascension, declination, Heliocentric Julian Date (HJD), exposure times, the signal-to-noise ratio per pixel (SNR) at 5000 $\AA$, and heliocentric radial velocities. The heliocentric radial velocities were determined via cross-correlation with a spectrum of the standard star HD 182488 ($V_{hel}=-21.2$ km s$^{-1}$; Soubiran et al. 2018) obtained on the same run.
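As a generic illustration of such a cross-correlation step (not the exact IRAF-based procedure used here), a radial velocity can be measured from the lag of the cross-correlation peak on a uniform log-wavelength grid; the resampling, sign convention, and function names below are our own assumptions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_from_crosscorr(wave, flux, t_wave, t_flux, n=20000):
    """Cross-correlate a spectrum against a template (e.g. the RV standard)
    on a shared uniform ln(lambda) grid; returns the velocity shift in km/s
    (positive = spectrum redshifted relative to the template)."""
    ln_grid = np.linspace(np.log(max(wave[0], t_wave[0])),
                          np.log(min(wave[-1], t_wave[-1])), n)
    f = np.interp(ln_grid, np.log(wave), flux)
    t = np.interp(ln_grid, np.log(t_wave), t_flux)
    f -= f.mean()
    t -= t.mean()
    cc = np.correlate(f, t, mode="full")
    shift = np.argmax(cc) - (n - 1)   # lag in pixels of the ln(lambda) grid
    dlnlam = ln_grid[1] - ln_grid[0]
    return C_KMS * shift * dlnlam     # v ~ c * Delta ln(lambda)
```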
We determined stellar parameters ($\mathrm{T_{eff}}$, $\log g$, $\mathrm{[Fe/H]}$, and $v_{t}$) for the observed stars from a combination of photometry and equivalent width (EW) measurements of Fe i and Fe ii lines, using the software smhr (https://github.com/andycasey/smhr; Casey, 2014) to run the radiative transfer code MOOG (https://github.com/alexji/moog17scat; Sneden, 1973; Sobeck et al., 2011), assuming local thermodynamic equilibrium. We used one-dimensional, plane-parallel, $\alpha$-enhanced ($\mathrm{[\alpha/Fe]=+0.4}$) stellar model atmospheres computed from the ATLAS9 grid (Castelli & Kurucz, 2003), line lists from linemake (https://github.com/vmplacco/linemake; Placco et al., 2021), and solar abundances from Asplund et al. (2009). $\mathrm{T_{eff}}$ for the stars was determined from the dereddened $Gaia$ $G$, $BP$, $RP$ (Anders et al., 2022; Gaia Collaboration et al., 2018) and 2MASS $K$ magnitudes (Cutri et al., 2003) using the color-$\mathrm{T_{eff}}$ relations of Mucciarelli et al. (2021). For the $K$ magnitudes, we used the extinction coefficient from McCall (2004). The $\log g$ values were then determined by requiring ionisation equilibrium between the Fe i and Fe ii lines, and $v_{t}$ by requiring no correlation of the Fe i line abundances with reduced EW. The final $\mathrm{[Fe/H]_{spec}}$ of each star is taken as the mean abundance of the Fe i lines, and the uncertainty as the standard deviation thereof. The final stellar parameters are listed in Table 2.

### 5.2 Results

The stellar parameters of the observed stars are shown in Table 2. As the parameters show, all observed stars are metal-poor FGK stars. The uncertainty in our metallicity inference is $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.31$ dex, which agrees with the uncertainty in deriving metallicities for the GALAH-SAGA sample ($\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.36$ dex), as described above. Figure 8 shows $\mathrm{[Fe/H]}_{inf}$ versus the spectroscopically determined $\mathrm{[Fe/H]_{spec}}$. Further, 100% of the observed stars are very metal-poor, 58% have $\mathrm{[Fe/H]}<-2.5$, and 8% are extremely metal-poor (EMP). Lastly, we did not have any contamination from OBA stars, which agrees with our findings in Section 4.

Figure 8: $\mathrm{[Fe/H]}_{inf}$ versus $\mathrm{[Fe/H]}_{spec}$. The solid grey line is the one-to-one line, and the dashed grey lines designate the 1$\sigma$ uncertainty of the inferred metallicities ($\sigma_{\mathrm{[Fe/H]}_{inf}}=0.31$ dex).

Table 2: Stellar parameters and observation log of the observed metal-poor candidates.
Gaia DR3 ID | RA (J2000) | Dec (J2000) | HJD | exp time | SNR | $V_{hel}\pm\sigma$ | $\mathrm{[Fe/H]}_{inf}$ | $G$ | $\mathrm{T_{eff}}$ | $\log g$ | $\mathrm{[Fe/H]}_{spec}$ | $\sigma_{\mathrm{[Fe/H]}_{spec}}$ | $v_{t}$ | $E(B-V)$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | (hrs) | ($\degree$) | | (s) | 5000 Å | (km s$^{-1}$) | (dex) | (mag) | (K) | (dex) | (dex) | (dex) | (km s$^{-1}$) | (mag)
4560234719702983552 | 17 00 58.86 | +18 12 48.76 | 2460177.61 | 3x1200 | 38 | $-200.2\pm 0.7$ | -3.49 | 12.95 | 5512 | 1.81 | -2.67 | 0.12 | 2.26 | 0.067
2770306858573498880 | 23 54 36.90 | +13 43 33.61 | 2460175.83 | 3x900 | 43 | $-91.8\pm 0.9$ | -3.19 | 12.75 | 5280 | 2.42 | -2.69 | 0.14 | 1.63 | 0.043
2740778202499153280 | 00 09 13.91 | +03 34 27.64 | 2460175.88 | 3x1200 | 29 | $-63.3\pm 0.7$ | -3.17 | 12.52 | 5172 | 2.18 | -2.89 | 0.12 | 1.58 | 0.024
2783063972298129280 | 00 42 31.57 | +18 34 52.58 | 2460175.93 | 3x1200 | 23 | $-349.7\pm 0.7$ | -3.13 | 12.53 | 5463 | 1.92 | -2.49 | 0.16 | 1.98 | 0.058
38721161695303808 | 03 55 46.04 | +13 28 40.99 | 2460178.96 | 3x1200 | 11 | $59.1\pm 1.1$ | -2.99 | 12.89 | 5666 | 3.09 | -2.18 | 0.21 | 1.95 | 0.298
1788649988097920768 | 21 20 25.52 | +19 16 40.19 | 2460175.75 | 3x1200 | 35 | $2.1\pm 1.4$ | -3.24 | 12.84 | 5868 | 1.95 | -3.01 | 0.13 | 2.65 | 0.078
2739719922558093440 | 23 50 58.88 | +02 36 12.99 | 2460176.84 | 3x600 | 38 | $15.4\pm 1.3$ | -2.89 | 12.10 | 6488 | 3.40 | -2.85 | 0.15 | 1.77 | 0.033
3268830653286376704 | 03 15 35.79 | +02 25 49.29 | 2460176.96 | 3x1200 | 27 | $203.3\pm 0.7$ | -2.72 | 12.81 | 5114 | 2.18 | -2.59 | 0.18 | 1.49 | 0.095
4446252678577892224 | 16 34 16.18 | +08 49 40.19 | 2460178.63 | 3x900 | 22 | $-12.1\pm 0.8$ | -2.71 | 12.27 | 4995 | 1.88 | -2.48 | 0.15 | 1.81 | 0.065
2719036833232602752 | 22 57 03.19 | +12 58 25.60 | 2460175.80 | 3x600 | 41 | $-242.8\pm 0.5$ | -2.68 | 12.15 | 5316 | 1.90 | -2.34 | 0.12 | 2.06 | 0.047
4229999872631438848 | 20 32 11.41 | +01 02 05.16 | 2460177.71 | 3x1200 | 37 | $-240.6\pm 0.5$ | -2.67 | 12.82 | 5221 | 2.71 | -2.49 | 0.17 | 1.36 | 0.097
1757147197551005952 | 21 09 25.50 | +11 48 44.80 | 2460175.70 | 3x1200 | 26 | $71.0\pm 1.3$ | -2.86 | 12.60 | 5440 | 1.23 | -2.80 | 0.11 | 2.29 | 0.108
1730672812979631104 | 20 56 02.56 | +02 07 13.56 | 2460178.69 | 3x1200 | 14 | $-160.8\pm 1.0$ | -2.58 | 12.84 | 5194 | 1.09 | -2.35 | 0.21 | 2.38 | 0.107
2814304091236720000 | 23 27 36.00 | +15 23 54.70 | 2460177.80 | 3x1200 | 31 | $-81.5\pm 0.9$ | -2.56 | 12.96 | 5096 | 1.95 | -2.76 | 0.12 | 2.05 | 0.054
4561199025759521920 | 17 02 37.48 | +19 17 21.35 | 2460177.66 | 3x1200 | 31 | $-174.7\pm 0.5$ | -2.54 | 12.79 | 5267 | 2.57 | -2.38 | 0.18 | 1.73 | 0.088
4503007613380083328 | 17 53 57.89 | +17 45 19.99 | 2460176.64 | 3x1200 | 50 | $-100.9\pm 1.0$ | -2.69 | 12.42 | 6030 | 4.22 | -2.78 | 0.15 | 1.72 | 0.081
4449403019908847488 | 16 48 40.46 | +13 32 43.83 | 2460176.61 | 3x600 | 36 | $195.9\pm 0.7$ | -2.45 | 12.15 | 5214 | 2.33 | -2.65 | 0.17 | 1.48 | 0.060
2574400790177777408 | 01 58 30.71 | +11 18 42.38 | 2460177.93 | 3x1200 | 38 | $-101.6\pm 1.0$ | -2.42 | 12.58 | 6547 | 4.04 | -2.23 | 0.13 | 1.62 | 0.116
2554217295745049856 | 00 39 13.34 | +04 23 33.16 | 2460176.87 | 3x1200 | 38 | $-145.0\pm 0.9$ | -2.39 | 12.81 | 6235 | 3.51 | -2.32 | 0.10 | 1.55 | 0.027
2580053787477560576 | 01 19 10.86 | +10 07 08.75 | 2460177.85 | 3x1200 | 40 | $-29.0\pm 0.6$ | -2.45 | 12.55 | 5139 | 2.33 | -2.47 | 0.14 | 1.68 | 0.070
2732958716319826048 | 22 33 41.08 | +14 49 05.58 | 2460178.73 | 3x900 | 20 | $26.8\pm 0.9$ | -2.35 | 12.23 | 5636 | 2.59 | -2.54 | 0.17 | 1.24 | 0.069
2756350516963035904 | 23 40 19.64 | +05 34 00.66 | 2460176.80 | 3x1200 | 33 | $111.7\pm 1.7$ | -2.35 | 12.91 | 6235 | 2.40 | -2.73 | 0.12 | 1.50 | 0.081
2698131578134995456 | 21 34 27.89 | +04 04 38.23 | 2460177.76 | 3x900 | 39 | $-299.1\pm 0.6$ | -2.35 | 12.27 | 5294 | 2.90 | -2.51 | 0.15 | 1.39 | 0.054
4432234794379005952 | 16 34 54.04 | +02 06 14.94 | 2460178.60 | 3x900 | 17 | $85.5\pm 0.8$ | -2.35 | 12.23 | 5257 | 1.60 | -2.30 | 0.16 | 1.99 | 0.062
2706127364131151744 | 22 41 26.08 | +05 07 30.91 | 2460178.77 | 3x1000 | 21 | $-139.7\pm 2.1$ | -2.83 | 12.65 | 5648 | 2.56 | -3.29 | 0.14 | 2.09 | 0.077
1733398605383859840 | 21 09 34.21 | +05 14 05.85 | 2460176.69 | 3x1200 | 30 | $-52.3\pm 0.7$ | -2.74 | 12.70 | 5200 | 1.26 | -2.62 | 0.11 | 1.97 | 0.112

## 6 Catalogue of stellar $\mathrm{[Fe/H]}$

For the purpose of providing the community with a catalogue of metallicities, we used the following criteria from “The Milky Way Halo High-Resolution Survey” (Christlieb et al., 2019) of the 4-metre Multi-Object Spectroscopic Telescope ($4MOST$; De Jong et al., 2019), combined with those developed for this work, to select stars from $Gaia$ DR3:

* $\mid b\mid>10\degree$
* $0.15\,\mathrm{mag}\leq\mathrm{(BP-RP)}_{0}<1.1\,$mag
* blending fraction $\beta\leq 0.5$
* $1.3\leq fr\mathrm{{}_{G/CaNIR}}\leq 3.3$
* $E(B-V)\leq 1.5\,$mag

The above criteria yielded 10 861 062 stars, for which we estimated the metallicity; these cuts can be expressed as a simple filter, as sketched below. 225 498 stars in this catalogue have $\mathrm{[Fe/H]}_{inf}<-2.0$. Further, 2236 stars in our catalogue have $\mathrm{[Fe/H]}_{inf}<-5.0$, which suggests that these stars probably have emission lines rather than being truly metal-poor. We cross-matched the stars of our catalogue that have $\mathrm{[Fe/H]}_{inf}<-2.0$ with the Gaia OBA golden sample (European Space Agency & DPAC Consortium 2022), and found that 104 of those stars are indeed OBA stars. Of those OBA contaminants, 8 have an estimated metallicity $\mathrm{[Fe/H]}_{inf}<-5.0$ in the catalogue. Finally, a sample of the catalogue is shown in Table 3.
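Assuming the flux-ratios, the dereddened color, and $E(B-V)$ have already been computed and stored alongside the Gaia columns, the cuts listed above reduce to one boolean mask; the column names in this minimal sketch are our own, not Gaia archive identifiers.

```python
import pandas as pd

def select_catalogue_sources(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the catalogue selection cuts listed above to Gaia DR3 sources."""
    mask = (
        (df["b"].abs() > 10.0)                             # Galactic latitude (deg)
        & (df["bp_rp_0"] >= 0.15) & (df["bp_rp_0"] < 1.1)  # dereddened (BP-RP)_0 (mag)
        & (df["beta"] <= 0.5)                              # blending fraction
        & (df["fr_g_canir"] >= 1.3) & (df["fr_g_canir"] <= 3.3)
        & (df["ebv"] <= 1.5)                               # E(B-V) (mag)
    )
    return df[mask]
```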
Table 3: Sample of the metallicities catalogue.

source_id | RA (J2000) | DEC (J2000) | $E(B-V)$ | $fr\mathrm{{}_{CaHK/H\beta}}$ | $fr\mathrm{{}_{G/CaNIR}}$ | $\mathrm{[Fe/H]}_{inf}$ | $G$ | $G_{BP}$ | $G_{RP}$
---|---|---|---|---|---|---|---|---|---
 | (°) | (°) | (mag) | | | (dex) | (mag) | (mag) | (mag)
1736084918450522624 | 311.672374 | 6.454326 | 0.086352 | 0.395785 | 3.040589 | -0.65 | 14.839189 | 15.116240 | 14.387665
1736086121041383936 | 311.640473 | 6.501470 | 0.080705 | 0.386366 | 2.951276 | -0.53 | 12.410907 | 12.699553 | 11.959683
1736086464640617472 | 311.605184 | 6.517305 | 0.079810 | 0.384081 | 3.279787 | -0.51 | 12.523436 | 12.753575 | 12.055906
1736086769579627008 | 311.686028 | 6.547942 | 0.081594 | 0.440737 | 2.951817 | -1.27 | 14.591786 | 14.878093 | 14.132823
1736089213417835264 | 311.740409 | 6.639877 | 0.088562 | 0.383000 | 2.963972 | -0.50 | 14.826837 | 15.120335 | 14.363322
1736089934974194688 | 311.614227 | 6.572804 | 0.078512 | 0.320748 | 2.815581 | 0.09 | 12.647016 | 12.943399 | 12.197193
1736090381648971776 | 311.524228 | 6.550632 | 0.083777 | 0.382586 | 2.909278 | -0.49 | 13.581572 | 13.875311 | 13.124882
1736093847685000960 | 311.868064 | 6.611197 | 0.092579 | 0.410637 | 3.239939 | -0.84 | 14.073834 | 14.347256 | 13.636151
1736099693138064384 | 311.706790 | 6.756967 | 0.085753 | 0.351283 | 2.938921 | -0.17 | 13.712016 | 14.000586 | 13.259321
1736099693138065408 | 311.698148 | 6.754874 | 0.084601 | 0.376363 | 2.852484 | -0.42 | 15.257664 | 15.554148 | 14.791958

Note: the color-excess values, $E(B-V)$, are taken from Schlegel et al. (1998), re-calibrated by Schlafly & Finkbeiner (2011). The full catalogue is available at the CDS.

## 7 Comparison to other catalogues

As already described in the introduction, many studies have taken advantage of the wealth of information encapsulated in the Gaia BP/RP spectra and have provided the community with catalogues of stellar atmospheric parameters. In particular, the catalogues of Andrae et al. (2023b) and Martin et al. (2023) have been shown to work very well in the metal-poor regime. We used the GALAH-SAGA verification sub-dataset (Figure 6) to compare the metallicities we estimated with those of Andrae et al. (2023b) and Martin et al. (2023). The $\mathrm{[Fe/H]}_{inf}$ we estimated for this sub-dataset are independent of the fitting procedure. Figure 9 shows the performance of each catalogue. At first glance, the catalogue of Martin et al. (2023) appears to perform better in the metal-poor regime than ours and that of Andrae et al. (2023b). However, the accuracy of the inferred metallicities is comparable across all three catalogues: for $\mathrm{[Fe/H]}_{ref}<-2$ the iron abundances of Martin et al. (2023) and Andrae et al. (2023b) have $\sigma\sim 0.39$ dex and are 0.1 dex better than ours, while for $\mathrm{[Fe/H]}_{ref}<-3$ the standard deviation of the estimated metallicities is the same in all three catalogues, namely $\sim 0.36$ dex. In the metal-rich regime, our metallicities have uncertainties that are $\sim 0.2$ dex higher than those of the other two catalogues, whose performances are similar ($\sigma\sim 0.24$ dex).

Figure 9: Comparison of our derived metallicities with those from the Andrae et al. (2023b) (XGBOOST) and Martin et al. (2023) ($\mathrm{CaHK}_{synth}$) catalogues. Top: from left to right, the metallicities of our, the XGBOOST, and the $\mathrm{CaHK}_{synth}$ catalogues are plotted, respectively, against the GALAH-SAGA validation dataset ($\mathrm{[Fe/H]}_{ref}$).
The color-coding reflects the effective temperature of the stars. Middle and bottom: as in the top panels, but with the color-coding depicting the color excess and the surface gravity, respectively. The solid black line shows the one-to-one line, while the dashed lines show a $\sigma=0.36$ dex uncertainty.

## 8 Summary

We applied the metal-poor star candidate selection recipe described in Paper I (Xylakis-Dornbusch et al., 2022) to $Gaia$ DR3 BP/RP spectra. In order to do so, we updated the selection method: instead of using effective-temperature and surface-gravity information, we only used the flux-ratios $fr\mathrm{{}_{G/CaNIR}}$ and $fr\mathrm{{}_{CaHK/H\beta}}$ determined in Paper I to estimate the metallicity of stars. We addressed extinction by dereddening the spectra before computing the flux-ratios, and found that the method can be applied to stars with color excesses $E(B-V)\leq 1.5$. We then used BP/RP spectra, through a cross-match between $Gaia$ DR3 and GALAH DR3 as well as with the SAGA database, to validate the selection method. We were able to estimate $\mathrm{[Fe/H]}$, solely with the use of the flux-ratios, with an uncertainty of $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.36$ dex. Next, we assessed to what degree OBA stars could contaminate a metal-poor candidate sample selected via the method described herein. We found that such contamination is not very likely, as long as one has reliable color-excess values at one's disposal to perform the dereddening of the spectra. Subsequently, we selected stars from $Gaia$ DR3 via our updated selection procedure for spectroscopic validation. We observed 26 stars, of which 100% had $\mathrm{[Fe/H]}<-2.0$, 58% had $\mathrm{[Fe/H]}<-2.5$, and 8% had $\mathrm{[Fe/H]}<-3.0$. We had inferred metallicities for this sample of stars prior to the observations with an uncertainty of $\sigma_{\mathrm{[Fe/H]}_{inf}}\sim 0.31$ dex. Finally, we assembled a catalogue of metallicities for 10 861 062 stars, of which 225 498 have $\mathrm{[Fe/H]}_{inf}<-2.0$.

###### Acknowledgements.

We thank the anonymous referee for their comments, which helped improve this manuscript. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 (“The Milky Way System”, subproject A04). This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. This work was supported by computational resources provided by the Australian Government through the National Computational Infrastructure (NCI) under the National Computational Merit Allocation Scheme and the ANU Merit Allocation Scheme (project y89). TXD acknowledges support from the Heidelberg Graduate School for Physics (HGSFP). TTH acknowledges support from the Swedish Research Council (VR 2021-05556). This work made use of the Third Data Release of the GALAH Survey (Buder et al. 2021). The GALAH Survey is based on data acquired through the Australian Astronomical Observatory, under programs: A/2013B/13 (The GALAH pilot survey); A/2014A/25, A/2015A/19, A2017A/18 (The GALAH survey phase 1); A2018A/18 (Open clusters with HERMES); A2019A/1 (Hierarchical star formation in Ori OB1); A2019A/15 (The GALAH survey phase 2); A/2015B/19, A/2016A/22, A/2016B/10, A/2017B/16, A/2018B/15 (The HERMES-TESS program); and A/2015A/3, A/2015B/1, A/2015B/19, A/2016A/22, A/2016B/12, A/2017A/14 (The HERMES K2-follow-up program).
We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This paper includes data that has been provided by AAO Data Central (datacentral.org.au). This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work has made use of the Python package $Gaia$XPy, developed and maintained by members of the $Gaia$ Data Processing and Analysis Consortium (DPAC) and in particular, Coordination Unit 5 (CU5), and the Data Processing Centre located at the Institute of Astronomy, Cambridge, UK (DPCI).

## References

* Abdurro'uf et al. (2022) Abdurro'uf, Accetta, K., Aerts, C., et al. 2022, ApJS, 259, 35
* Anders et al. (2022) Anders, F., Khalatyan, A., Queiroz, A. B. A., et al. 2022, A&A, 658, A91
* Andrae et al. (2023a) Andrae, R., Fouesneau, M., Sordo, R., et al. 2023a, A&A, 674, A27
* Andrae et al. (2023b) Andrae, R., Rix, H.-W., & Chandra, V. 2023b, ApJS, 267, 8
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
* Bailer-Jones et al. (2013) Bailer-Jones, C. A. L., Andrae, R., Arcay, B., et al. 2013, A&A, 559, A74
* Bailer-Jones et al. (2018) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, AJ, 156, 58
* Beers & Christlieb (2005) Beers, T. C. & Christlieb, N. 2005, ARA&A, 43, 531
* Bellazzini et al. (2023) Bellazzini, M., Massari, D., De Angeli, F., et al. 2023, A&A, 674, A194
* Bonifacio et al. (2000a) Bonifacio, P., Caffau, E., & Molaro, P. 2000a, A&AS, 145, 473
* Bonifacio et al. (2000b) Bonifacio, P., Monai, S., & Beers, T. C. 2000b, AJ, 120, 2065
* Buder et al. (2021) Buder, S., Sharma, S., Kos, J., et al. 2021, MNRAS, 506, 150
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
* Casey (2014) Casey, A. R. 2014, PhD thesis, Australian National University, Canberra
* Castelli & Kurucz (2003) Castelli, F. & Kurucz, R. L. 2003, in Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, Vol. 210, A20
* Chen & Guestrin (2016) Chen, T. & Guestrin, C. 2016, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16 (ACM)
* Christlieb et al. (2019) Christlieb, N., Battistini, C., Bonifacio, P., et al. 2019, The Messenger, 175, 26
* Cunningham et al. (2023) Cunningham, E. C., Hunt, J. A. S., Price-Whelan, A. M., et al. 2023, arXiv e-prints, arXiv:2307.08730
* Cutri et al. (2003) Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al. 2003, VizieR Online Data Catalog, II/246
* De Angeli et al. (2023) De Angeli, F., Weiler, M., Montegriffo, P., et al. 2023, A&A, 674, A2
* De Jong et al. (2019) De Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, The Messenger, 175, 3
* European Space Agency & DPAC Consortium (2022) European Space Agency (ESA) & DPAC Consortium. 2022, Gaia DR3 source IDs of O, B, and A-type stars
* Fitzpatrick (1999) Fitzpatrick, E. L. 1999, PASP, 111, 63
* Fouesneau et al. (2023) Fouesneau, M., Frémat, Y., Andrae, R., et al. 2023, A&A, 674, A28
* Gaia Collaboration et al. (2018) Gaia Collaboration, Babusiaux, C., van Leeuwen, F., et al. 2018, A&A, 616, A10
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1
* Gaia Collaboration et al. (2023) Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1
* Green (2018) Green, G. 2018, The Journal of Open Source Software, 3, 695
* Katz et al. (2023) Katz, D., Sartoretti, P., Guerrier, A., et al. 2023, A&A, 674, A5
* Li et al. (2022) Li, H., Aoki, W., Matsuno, T., et al. 2022, ApJ, 931, 147
* Martin et al. (2023) Martin, N. F., Starkenburg, E., Yuan, Z., et al. 2023, arXiv e-prints, arXiv:2308.01344
* McCall (2004) McCall, M. L. 2004, AJ, 128, 2144
* Montegriffo et al. (2023) Montegriffo, P., De Angeli, F., Andrae, R., et al. 2023, A&A, 674, A3
* Mucciarelli et al. (2021) Mucciarelli, A., Bellazzini, M., & Massari, D. 2021, A&A, 653, A90
* Placco et al. (2021) Placco, V. M., Sneden, C., Roederer, I. U., et al. 2021, Research Notes of the American Astronomical Society, 5, 92
* Riello et al. (2021) Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3
* Rix et al. (2022) Rix, H.-W., Chandra, V., Andrae, R., et al. 2022, ApJ, 941, 45
* Schlafly & Finkbeiner (2011) Schlafly, E. F. & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Schlafly et al. (2019) Schlafly, E. F., Meisner, A. M., & Green, G. M. 2019, ApJS, 240, 30
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
* Sneden (1973) Sneden, C. A. 1973, PhD thesis, University of Texas, Austin
* Sobeck et al. (2011) Sobeck, J. S., Kraft, R. P., Sneden, C., et al. 2011, AJ, 141, 175
* Soubiran et al. (2018) Soubiran, C., Jasniewicz, G., Chemin, L., et al. 2018, A&A, 616, A7
* Starkenburg et al. (2017) Starkenburg, E., Martin, N., Youakim, K., et al. 2017, MNRAS, 471, 2587
* Suda et al. (2017) Suda, T., Hidaka, J., Aoki, W., et al. 2017, PASJ, 69, 76
* Suda et al. (2008) Suda, T., Katsuta, Y., Yamada, S., et al. 2008, PASJ, 60, 1159
* Suda et al. (2011) Suda, T., Yamada, S., Katsuta, Y., et al. 2011, MNRAS, 412, 843
* Tody (1986) Tody, D. 1986, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 627, Instrumentation in astronomy VI, ed. D. L. Crawford, 733
* Tody (1993) Tody, D. 1993, in Astronomical Society of the Pacific Conference Series, Vol. 52, Astronomical Data Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes, 173
* Tull et al. (1995) Tull, R. G., MacQueen, P. J., Sneden, C., & Lambert, D. L. 1995, PASP, 107, 251
* Wang et al. (2022) Wang, C., Huang, Y., Yuan, H., et al. 2022, ApJS, 259, 51
* Xylakis-Dornbusch et al. (2022) Xylakis-Dornbusch, T., Christlieb, N., Lind, K., & Nordlander, T. 2022, A&A, 666, A58
* Yamada et al. (2013) Yamada, S., Suda, T., Komiya, Y., Aoki, W., & Fujimoto, M. Y. 2013, MNRAS, 436, 1362
* Yao et al. (2024) Yao, Y., Ji, A. P., Koposov, S. E., & Limberg, G. 2024, MNRAS, 527, 10937
* Zhang et al. (2023) Zhang, X., Green, G. M., & Rix, H.-W. 2023, MNRAS, 524, 1855

## Appendix A Additional figures

We present additional figures, which are described in Section 5. We show the inferred metallicity distribution of candidate metal-poor stars that are located on the flux-ratio plane between Cutoff2 and the more stringent cut we used to select stars for observations. Lastly, we show the metallicity distribution of metal-poor candidates below the stringent cutoff that were not included in the target list.
Figure 10: Distribution of $\mathrm{[Fe/H]}_{inf}$ for metal-poor candidates located between Cutoff2 and the cutoff we used to select candidates for observations.

Figure 11: Distribution of $\mathrm{[Fe/H]}_{inf}$ of the 27 metal-poor candidates that were not included in the final target list.
# A Survey on Computing Schematic Network Maps: The Challenge to Interactivity

Hsiang-Yun Wu TU Wien, Austria <EMAIL_ADDRESS>Benjamin Niedermann University of Bonn, Germany <EMAIL_ADDRESS>Shigeo Takahashi University of Aizu, Japan <EMAIL_ADDRESS>Martin Nöllenburg TU Wien, Austria <EMAIL_ADDRESS>

###### Abstract

Schematic maps are in daily use to show the connectivity of subway systems and to help travellers plan their journeys effectively. This study surveys up-to-date algorithmic approaches in order to give an overview of the state of the art in schematic network mapping. The study investigates the hypothesis that the choice of algorithmic approach is often guided by the requirements of the mapping application. For example, an algorithm that computes globally optimal solutions for schematic maps is capable of producing results for printing, but it is not suitable for computing instant layouts due to its long running time. Our analysis and discussion, therefore, focus on the computational complexity of the problem formulation and the running times of the schematic map algorithms, including algorithmic network layout techniques and station labeling techniques. The correlation between problem complexity and running time is then visually depicted using scatter plot diagrams. Moreover, since metro maps are common metaphors for data visualization, we also investigate online tools and application domains using metro map representations for analytics purposes, and finally summarize potential future opportunities for schematic maps.

###### Index Terms: Metro Maps, Graph Drawing, Metaphors

## I Introduction

A metro map is a schematic visual representation of an underlying transit network that depicts the connectivity between metro stations and lines of a public transportation system [30]. With well-designed metro maps, travellers can effectively identify their locations, find their way, or perform route planning on a complex transportation system in a big city, such as London, Paris, or Tokyo. These maps are especially helpful since travellers often look for a quick answer to the shortest or cheapest path from station $A$ to $B$, how to transfer from $A$ to $B$, and how many stations are left before reaching $B$ [33]. To support these tasks, Henry Beck introduced the so-called Tube Map of the London Underground and proposed several drawing criteria to achieve this [14]; many extended versions have since been evaluated [34][35]. Nonetheless, the need for automatic drawing algorithms is still increasing due to the high cost and limited adaptability of hand-drawn maps. Automatic schematization is considered a difficult problem because several of its subproblems, including layout schematization, line crossing minimisation, and map label placement, have been studied and proved to be computationally hard [26]. Two state-of-the-art reports, from 2007 and 2014, have surveyed similar approaches before [50][26]. Hence we focus on relatively new approaches from the five years after Nöllenburg's report [26] and discuss the correlation between computational complexity and interactivity, as well as the potential techniques that can be used in different research domains.

### I-A Problem Definition

The initial idea of a metro map lies in simplifying the layout geometry to facilitate users' comprehensive understanding of the connectivity between metro stations.
This allows us to formulate the drawing problem as a network visualization problem, which is often studied to untangle visual clutter in a layout and thereby improve its readability [51].

#### Metro Map Problem

Let us formulate the metro map problem by introducing an undirected graph $G=(V,E)$ embedded in the plane $\mathbb{R}^{2}$. Each vertex $v\in V$ represents a metro station, and an edge $e=(v_{i},v_{j})\in E$ indicates the physical connection between two metro stations. In addition, a line $l\in L$ is a metro line containing a set of metro stations and edges that are defined by the corresponding metro system. Note that $L$ forms a set cover of the edges, which implies that each edge $e$ belongs to at least one line $l\in L$. We call $MG=(G,L)$ a _Metro Graph_, as previously defined by Nöllenburg [26]. The input of the problem is the connectivity of a Metro Graph $MG$ together with its geographically accurate embedding, and the solution aims to find a schematic layout of $MG$ that maximally satisfies several user-defined aesthetic drawing criteria.

### I-B Our Taxonomy Design

In this study, we investigate the trade-off between computation time and layout quality, as well as application requirements. This is motivated by our observation that a user who is trying to create a map will accept longer computation times for an exact solution if the generated map is intended for print, but will not be patient if the application is expected to be interactive. To investigate this assumption systematically, we analyze several factors in this survey (see Tables I and III) that influence the computational complexity and optimality requirements of a problem, together with the corresponding running time of the algorithmic approach (Table II). Our two primary topics cover algorithmic network layout techniques and station labeling techniques. The correlation between problem complexity and time complexity is then depicted visually using scatter plot diagrams. This is done by categorizing publications along two primary coordinates, problem complexity and running time. These comprise (1) the range from local to global optimality in terms of the incorporated aesthetic criteria and objective functions, and (2) the range of suitability from static to interactive visualizations based on the computation speed. The values on the first coordinate are computed using the scoring Tables I and III. For simplicity, we assume that the degree of interactivity of a schematic map algorithm, its running time in other words, is potentially linearly correlated with the globality of the criteria selected by the proposed approach. In other words, we expect the developed techniques to strike a balance between the efficiency of the algorithms and the quality of the schematic maps generated by those approaches. To guide readers along this assumption, we focus our discussion on these two aspects in the following sections.

### I-C Tasks and Design Rules

It is known that drawing criteria are often designed based on users' effectiveness in accomplishing tasks on a map. These tasks are similar to tasks on graphs, such as _Topology-Based Tasks_, _Attribute-Based Tasks_, and _Browsing Tasks_, since travellers mostly find shortest or cheapest paths, identify a station of a line, and navigate along a specific route. We revisit Nöllenburg's list [26] of design principles and investigate, in the coming sections, which of these serve as dominant constraints for generating a globally optimal map.
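To make the Metro Graph $MG=(G,L)$ of Section I-A concrete, the following minimal sketch stores the embedded stations, the undirected edges, and the line cover, and checks the set-cover property; the class and field names are our own, not taken from any of the surveyed papers.

```python
from dataclasses import dataclass, field

@dataclass
class MetroGraph:
    """MG = (G, L): an embedded station graph G plus a set of metro lines L."""
    positions: dict = field(default_factory=dict)  # station -> (x, y) embedding
    edges: set = field(default_factory=set)        # undirected edges as frozensets
    lines: dict = field(default_factory=dict)      # line name -> sequence of edges

    def add_edge(self, u, v, line):
        self.edges.add(frozenset((u, v)))
        self.lines.setdefault(line, []).append((u, v))

    def is_set_cover(self):
        """Every edge e must belong to at least one line l in L."""
        covered = {frozenset(e) for seq in self.lines.values() for e in seq}
        return self.edges <= covered
```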
In this survey, we primarily focus on two directions in this field: network layout techniques (N), summarized in Table I, and labeling techniques (L), summarized in Table III. Note that the selected criteria are ordered in the sense that a criterion with a higher score influences the global structure of the layout, while one with a lower score affects the layout in a local fashion. This scoring scheme will then be used as an indicator to guide readers through the diagrams created in the coming sections. Based on the aforementioned scoring scheme, we derive our novel taxonomy of metro map techniques in this survey paper.

The remainder of this paper is structured as follows. In Section II, we summarize relevant schematic network layout algorithms, and in Section III, research approaches integrating map features, mainly text and image labels. In Section IV, we introduce several tools that are accessible online and demonstrate the usability of the collected techniques in different research domains. In Section V, we discuss our observations, and finally, in Section VI, we conclude this paper and list several future directions for this topic.

## II Algorithmic Network Layout Techniques

Figure 1: Network map visualization, including (a) an example of the Vienna metro map [27], and (b) layout techniques with respect to their globality (local to global) and their potential for interactivity (slow to fast); blue markers denote papers before 2014 and red markers papers from 2014 onward.

TABLE I: The point system for evaluating the criterion effectiveness.

ID | S | Description (x-coordinate in Figs. 1-3)
---|---|---
(N1) | | Combinatorial property
(N1.1) | 4 | Overall combinatorial optimization.
(N1.2) | 3 | Combinatorial criteria for sets of vertices or edges.
(N1.3) | 2 | Combinatorial criteria for pairs of vertices or edges.
(N1.4) | 1 | Combinatorial criteria for single vertices or edges.
(N2) | | Geometry property
(N2.1) | 4 | Uniform geometric optimization.
(N2.2) | 3 | Geometric criteria for sets of vertices or edges.
(N2.3) | 2 | Geometric criteria for pairs of vertices or edges.
(N2.4) | 1 | Geometric criteria for single vertices or edges.
(N3) | | Approach optimality
(N3.1) | 4 | Global optimality and exact global optimization.
(N3.2) | 3 | Global optimality, but local optimization.
(N3.3) | 2 | Local optimality, but global optimality for sub-problems.
(N3.4) | 1 | Local optimality and local optimization.

TABLE II: The point system for evaluating the time effectiveness.

ID | S | Time complexity (y-coordinate in Figs. 1-3)
---|---|---
T1 | 4 | The layout is computed in less than a second.
T2 | 3 | The layout is computed in a few seconds.
T3 | 2 | The layout is computed within a few minutes.
T4 | 1 | The layout is computed in several hours or more.

In this section we classify the different network layout algorithms proposed in the literature over the past 15 years along three criteria that determine the degree of globality of the algorithmic techniques. These criteria comprise combinatorial properties (N1) of the underlying graph structure (e.g., topology preservation), geometric properties (N2) of the input network and the schematic layout (e.g., directional deviations and line straightness), and the degree of optimality (N3) of the computed solutions, i.e., whether a global optimum is computed or just a local one. Table I lists the three criteria, which are ranked by scores ranging from 1 (high locality) to 4 (high globality). Additionally, we assess each technique by the required computational resources and the resulting degree of suitability for interactive applications (summarized in Table II). Each of the papers discussed in the following paragraphs thus receives two scores that we use as coordinates in the two-dimensional scatter plot of Figure 1.
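The two coordinates can be computed mechanically from the scoring tables. The sketch below illustrates this with matplotlib; the per-paper criterion scores shown are purely hypothetical placeholders for demonstration, not the scores actually assigned in this survey.

```python
import matplotlib.pyplot as plt

papers = {  # name: ({criterion: score from Table I}, time score from Table II)
    "MIP [27]":           ({"N1": 3, "N2": 3, "N3": 4}, 1),  # placeholder scores
    "hill climbing [39]": ({"N1": 2, "N2": 2, "N3": 3}, 2),
    "force-based [18]":   ({"N1": 2, "N2": 2, "N3": 1}, 3),
}

def globality(criteria):
    """x-coordinate: average of the N1-N3 scores (1 = local ... 4 = global)."""
    return sum(criteria.values()) / len(criteria)

xs = [globality(c) for c, _ in papers.values()]
ys = [t for _, t in papers.values()]  # y-coordinate: slow (1) ... fast (4)
fig, ax = plt.subplots()
ax.scatter(xs, ys)
for name, x, y in zip(papers, xs, ys):
    ax.annotate(name, (x, y))
ax.set_xlabel("local to global")
ax.set_ylabel("slow to fast")
plt.show()
```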
We start with four classic papers that serve as representatives of the main algorithmic techniques that were already discussed in the 2014 survey of Nöllenburg [26]. Since our focus is on new results from 2014 onward, we refer to the 2014 survey for a comprehensive discussion of the earlier literature. The force-based algorithm of Hong et al. [18] considers combinatorial and geometric parameters of individual and pairs of edges and vertices in its objective function, e.g., the slope of edges or the preservation of the local network topology at each vertex. The objective function considers the sum of several forces acting upon each vertex to push it toward a state of locally minimal energy. Layouts are computed within a few seconds. Stott et al. [39] define an explicit global objective function for metro maps, which takes smaller values for maps with higher layout quality. This quality function measures, e.g., angles at each vertex or between neighboring edges, local edge length differences, or the octilinearity of edges, which corresponds to a score of 2 in these criteria. The optimization itself is performed by a hill-climbing algorithm that moves vertices to the best position in a local neighborhood but evaluates the global map quality. The reported computation times range between a few minutes and a few hours. The mixed-integer linear programming (MIP) approach of Nöllenburg and Wolff [27] defines combinatorial and geometric constraints among pairs of edges and vertices to guarantee, e.g., the correct network topology, octilinear edges, or a bounded amount of angular distortion, as well as the minimization of bends along lines and subpaths or of the total network length. Their model, however, guarantees a globally optimal solution. To achieve this highest degree of optimality, one needs to invest computation times ranging from several minutes for simple network maps to several hours for more complex instances. Finally, Wang and Chi [48] define an energy function to represent the layout quality, with the goal of minimizing this energy function. As in the previous methods, this comprises aspects defined on pairs of vertices and edges, such as edge lengths, angles, or octilinearity. Their optimization aims for global optimality, using an iterative conjugate-gradient method for least-squares minimization intended to converge to a global minimum. Their algorithms typically run in less than a second.

Among the more recent papers, we mostly find approaches that improve and extend one of the classic techniques, such as MIP, local optimization heuristics, force-based iterative methods, or stroke-based incremental algorithms. Chivers and Rodgers [8] present a hybrid force-based method that defines both a set of classic spring-embedder forces [13] and a magnetic force field that models octilinearity [41]. Their hybrid idea is to first let the spring forces find a well-distributed layout and then let the magnetic forces become dominant to achieve octilinear edges. By the nature of their method they aim for locally optimal stable layouts, but convergence is quite fast, and typical metro maps can be computed in less than a second.
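The hybrid idea of combining spring forces with a time-weighted octilinearity force can be sketched as follows. The force definitions, weights, and step size are our own simplifications for illustration (vertex repulsion, for instance, is omitted); this is not the exact formulation of [8] or [41].

```python
import math
import numpy as np

def octilinear_force(p, q):
    """Displacement that pulls endpoint q of edge (p, q) toward the nearest
    multiple of 45 degrees, keeping the edge length fixed."""
    d = q - p
    angle = math.atan2(d[1], d[0])
    target = round(angle / (math.pi / 4)) * (math.pi / 4)
    r = float(np.hypot(d[0], d[1]))
    return p + r * np.array([math.cos(target), math.sin(target)]) - q

def layout(pos, edges, iters=500, ideal_len=1.0, step=0.05):
    """pos: vertex -> (x, y); edges: list of vertex pairs."""
    pos = {v: np.asarray(p, dtype=float) for v, p in pos.items()}
    for t in range(iters):
        w_mag = t / iters                          # magnetic force dominates late
        disp = {v: np.zeros(2) for v in pos}
        for u, v in edges:
            d = pos[v] - pos[u]
            spring = (np.linalg.norm(d) - ideal_len) * d / (np.linalg.norm(d) + 1e-9)
            disp[u] += 0.5 * (1 - w_mag) * spring  # pull edge toward ideal length
            disp[v] -= 0.5 * (1 - w_mag) * spring
            disp[v] += w_mag * 0.5 * octilinear_force(pos[u], pos[v])
        for v in pos:
            pos[v] += step * disp[v]               # small step keeps it stable
    return pos
```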
A second and very fast approach with similar properties is given by van Dijk and Lutz [45]. They propose a method based on linearized least-squares minimization as an approximation of a non-linear model for computing linear cartograms, which are network drawings with prescribed edge lengths. This idea is then applied to metro map layout as one problem instance, where the lengths and slopes of edges are soft constraints to be optimized. While several typical constraints of metro maps are not considered in their algorithm, it is an extremely fast technique with running times in the range of a few milliseconds. Wang and Peng [49] provide another system using a global energy function whose minima correspond to locally optimal metro layouts in their model. The optimization technique is least-squares minimization. It supports both octilinear and curvilinear layouts and is designed for interactive editing by a user who can modify the positions of a few _handle_ vertices. It is very fast and requires only a few milliseconds to compute medium-sized metro maps.

Three papers take up the idea of decomposing the network into paths (also called _strokes_), computing schematic representations of the strokes, and then composing them into a single network again; see, e.g., [20]. Ti and Li [42] and Ti et al. [43] both propose an approach that first detects and enlarges areas of the geographically accurate representation of the metro network that have high vertex or edge density, and then applies multifocal fisheye transformations to obtain a more uniform spatial distribution of the underlying network. In a second step, strokes are identified and locally schematized in an octilinear fashion. While no running times are reported in the papers, these methods are typically quite fast and run within at most a few seconds. On the other hand, the schematization only minimizes local distances to the input geometry, and no explicit global objective function is optimized. The algorithm by van Dijk et al. [46] produces schematic maps using strokes that are represented as circular arcs rather than octilinear paths. They first simplify sequences of edges of the same metro line into longer paths, even across junctions, to reduce the overall complexity of the layout, and then find locally optimal representations of these paths/strokes by circular arcs, guided by the Fréchet distance between paths and circular arcs. Chivers and Rodgers [9] extend the search space of the hill-climbing local search technique [39] by parameterizing the grid spacing, the local neighborhood size for node movements, and the number of iterations. The constraints and objectives are the same as those of Stott et al. [39]. With their improvements they gain a speed-up factor between 5 and 8 compared to the earlier multi-criteria hill-climbing technique [39] and obtain the final (labeled) layouts within 5 to 60 minutes.

Finally, the MIP model has been improved and accelerated by several authors. Oke and Siddiqui [28] relax some of the integrality constraints of Nöllenburg and Wolff [27] and drop one term of the objective function. Their model improves the running time by up to one order of magnitude, to computation times of a few minutes.
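As a heavily simplified sketch of the MIP idea, the following toy model (our own reduction, assuming the pulp package; it is not the full model of [27] or [28]) assigns one of eight octilinear directions to each edge of a single line, minimizing the angular deviation from the geographic input plus the bends between consecutive edges. The coordinate-assignment, spacing, and planarity constraints of the real models are omitted.

```python
import math
import pulp

def octilinear_directions(points):
    """points: geographic polyline of one metro line; returns, per edge, one of
    the eight octilinear sectors (0..7, i.e. multiples of 45 degrees)."""
    n = len(points) - 1
    geo = [math.atan2(points[i + 1][1] - points[i][1],
                      points[i + 1][0] - points[i][0]) for i in range(n)]

    def deviation(i, d):  # angular cost of assigning sector d to edge i
        diff = (geo[i] - d * math.pi / 4) % (2 * math.pi)
        return min(diff, 2 * math.pi - diff)

    prob = pulp.LpProblem("octilinear_line", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(n), range(8)), cat="Binary")
    bend = pulp.LpVariable.dicts("bend", range(n - 1), lowBound=0)

    for i in range(n):                      # exactly one direction per edge
        prob += pulp.lpSum(x[i][d] for d in range(8)) == 1
    for i in range(n - 1):                  # bend[i] >= circular sector distance
        for d in range(8):
            for e in range(8):
                dist = min(abs(d - e), 8 - abs(d - e))
                prob += bend[i] >= dist * (x[i][d] + x[i + 1][e] - 1)

    prob += (pulp.lpSum(deviation(i, d) * x[i][d]
                        for i in range(n) for d in range(8))
             + pulp.lpSum(bend.values()))   # deviation plus bend minimization
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [next(d for d in range(8) if pulp.value(x[i][d]) > 0.5) for i in range(n)]
```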
Onda et al. [29] also modify the existing MIP model [27]. They split the computation into two phases in order to speed up the computation of large instances such as the Tokyo metro map. In the first phase they generate a rough layout that satisfies all hard constraints in an incremental face-by-face manner. The second phase optimizes the directions of short subpaths while keeping the directions of the remaining layout fixed. A layout of the Tokyo network was computed within 5 hours. Certain topological structures can be highlighted by Wu et al., who allow for straightening a user-specified path [55] or deforming a cyclic path into a circle [52] in order to emphasize meaningful structures. This is achieved by introducing additional constraints into the standard MIP model, which increases the time for finding the corresponding solutions by a small amount. Fink et al. [12] use the MIP technique to model the layout of concentric metro maps, where the network consists of polylines composed of radial or circular segments. The optimization considers, among others, bend minimization, minimization of distinct radial slopes, and uniformity of edge lengths. The reported running times are in the range of several minutes for small input maps like Vienna and Montreal.

## III Labeling Techniques

Figure 2: Labeled network maps, including (a) an example of the Vienna metro map with image labels [53], and (b) labeling techniques with respect to their globality and interactivity.

TABLE III: The point system for evaluating the criterion effectiveness.

ID | S | Description (x-coordinate in Fig. 2)
---|---|---
(L1) | | Adjustment of metro maps
(L1.1) | 4 | The layout and label placement is done simultaneously.
(L1.2) | 3 | The layout is uniformly scaled to fit in labels.
(L1.3) | 2 | The directions of the edges are preserved.
(L1.4) | 1 | The layout is not changed when the labeling is created.
(L2) | | Placement of labels
(L2.1) | 4 | Uniform placement.
(L2.2) | 3 | Criteria for sets of labels.
(L2.3) | 2 | Criteria for pairs of labels.
(L2.4) | 1 | Criteria for single labels only.
(L3) | | Optimality
(L3.1) | 4 | Global optimality and exact global optimization.
(L3.2) | 3 | Global optimality, but local optimization.
(L3.3) | 2 | Local optimality, but global optimality for sub-problems.
(L3.4) | 1 | Local optimality and local optimization.

In this section we discuss the current state of algorithmic labeling techniques for metro maps. Similarly to Section II, we classify the approaches with respect to their degree of globality and interactivity; see also Figure 2. We used three criteria to decide on the locality of a labeling approach: adjustment of the metro map (L1), placement of labels (L2), and optimality (L3) (see also Table III). For each criterion and each labeling approach we determined the highest score that suits the approach. The overall degree of globality of a labeling approach is then the average over its three assigned scores. In order to decide on the degree of interactivity of each labeling approach, we applied a similar approach as in Section II, using Table II. We assigned each approach to the lowest category that suits it best. Altogether, we obtain a measure of both the degree of globality and the degree of interactivity of a labeling approach, which we depict in Figure 2. In the following we discuss the individual results in increasing order of their degree of interactivity.

Nöllenburg and Wolff [27] present an approach that creates the layout of the metro map and the label placement simultaneously. To that end, they present a mixed-integer linear programming formulation that optimally creates a metro map with respect to a global cost function.
They particularly require that labels between two crossings lie on the same side of the metro line. However, their approach comes with a high running time; e.g., for the metro system of Sydney they report running times of up to 12 hours. Stott et al. [39] present an iterative approach that creates the layout and label placement in an integrated way using hill climbing based on a global multi-criteria function. Among other criteria, they prefer labels on the same side of the metro line. They report running times between 2 and 120 minutes. In contrast, Wu et al. [53] assume that the layout is given when placing labels. Still, they scale the map to increase free space, which they use for label placement based on a MIP approach. By considering the layout and labeling steps independently, they achieve running times of up to a few minutes. Yoshida et al. [56] scale single edges of the map while preserving the directions of the edges. The main idea is to create free space in dense parts of the map, while leaving other parts of the map unchanged. Similarly to Wu et al. [53], Niedermann and Haunert [25] systematically scale the map layout to fit in the labeling. For the labeling procedure they propose a dynamic programming approach that labels a single metro line optimally. They use this method as a subroutine of their heuristic, which labels the map within a few minutes. Wu et al. [55] label the stations with photos instead of text labels. They place the photos alongside two boundaries of the rectangular map and associate them with their stations via connecting curves, also called _leaders_. They aim at crossing-free solutions that minimize the length of the leaders. Wu et al. [52] generalize this approach to all four sides of the map. Wu et al. [54] separate the labels from the metro map less strictly. They present a genetic algorithm that places overlap-free text labels and photos within the free spaces of the metro map, connecting the labels to their stations using straight-line leaders. At the core of the genetic algorithm, which systematically explores different orders of placing labels, the actual placement is done by a simple and fast greedy procedure. Hong et al. [18] separate the placement of the labels from the computation of the layout. They describe the labeling problem as a conflict graph based on label candidates and their occlusions. The label placement corresponds to an independent set of vertices within the graph, which they determine using simulated annealing. Although they maximize the number of labels, they cannot guarantee that each station gets a label. Wang and Chi [48] also separate the placement of labels from the computation of the metro map layout. They describe the labeling problem as a multi-criteria energy function, which they optimize locally. Their approach particularly avoids occlusions between labels and prefers labels on the same side of the metro line. They report running times below one second, which makes their approach applicable to interactive real-time scenarios. Wang and Peng [49] use the same labeling approach for their interactive editing system. Chivers and Rodgers [7] present an approach that, after fixing the metro map layout, selects for each station a label out of eight candidates using a simple greedy approach; a sketch of such a greedy scheme follows below. They penalize label occlusions and prefer similarly placed labels of adjacent stations.
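The following minimal sketch illustrates greedy candidate-based labeling in the spirit of [7]: each station has eight candidate label boxes, one per octilinear direction, and the first candidate that does not overlap an already placed label wins. The geometry helpers, box sizes, and preference order are our own assumptions, not the cost model of the cited paper.

```python
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rect = (xmin, ymin, xmax, ymax)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def greedy_labeling(stations, w=2.0, h=0.8, gap=0.2):
    """stations: dict name -> (x, y); returns name -> chosen label rectangle."""
    placed = {}
    for name, (x, y) in stations.items():
        for dx, dy in OFFSETS:              # candidates in order of preference
            cx = x + dx * (gap + w / 2)
            cy = y + dy * (gap + h / 2)
            rect = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
            if all(not overlaps(rect, r) for r in placed.values()):
                placed[name] = rect         # first conflict-free candidate wins
                break                       # a station may remain unlabeled
    return placed
```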
Altogether, we observe that with an increasing degree of interactivity, the globality of the approaches decreases. In particular, those approaches that are applicable to interactive real-world scenarios typically assume a fixed metro map layout. While this makes it possible to compute the label placement much faster than with approaches that integrate the creation of the layout, the given metro map layout may not host all labels. This is either resolved by allowing occlusions [48][49] or by labeling only a (possibly large) subset of stations [18]. Hence, those approaches are far from achieving the same labeling quality as approaches with a smaller degree of interactivity. We therefore deem the development of labeling algorithms with the potential for a high degree of interactivity a still open and important research problem.

## IV Applications and On-line Tools

Schematic network maps have been widely employed for representing abstract relationships between subjects. In such cases, the schematic maps serve as visual metaphors that effectively systematize the underlying structures hidden behind the target information space. In this section, we first focus on techniques for visualizing abstract information spaces using schematic map metaphors and then list representative online tools for designing such maps.

Figure 3: Visualization using schematic maps as metaphors, including (a) an example of fruit maps, and (b) applications with respect to their globality and interactivity.

### IV-A Metro Maps as Visual Metaphors

The effects of using schematic maps as visual metaphors have been investigated, though several case studies are at an early stage. Nesbitt [23] conducted preliminary studies to evaluate the capabilities of metro map metaphors in understanding abstract data and showed that they can provide a better mental model of the target data as a first impression. Pontikakis and Twaroch [32] also examined the potential use of metro maps to represent the topological connections between points of interest, especially when the underlying topographic maps are unavailable. Metro map metaphors have often been employed to explore the underlying structure of an information space. In this case, each line corresponds to a chain of subjects such as events, topics, plans, or articles. Stott et al. [40] visualized the timelines of project plans and their dependencies as a metro map by employing an early version of their hill-climbing algorithm [39]. Aguiregoitia et al. [6] applied this idea to software project visualization, where a different set of tasks is aligned with each metro line according to the software development process. A set of documents is another target for which a schematic map can effectively be employed as a visual metaphor. Shahaf et al. [37] proposed an algorithm for constructing a metro map of documents in such a way that the topology of the document network reflects the desired sparseness inherent in ordinary railway networks. This technique was devised to visualize chains of scientific papers to clarify the evolution of research fields [36], and was further extended to a scalable scheme that enables level-of-detail control in metro map metaphors [38]. As an application of schematic network metaphors to scientific data, Wahabzada et al. [47] successfully visualized plant disease dynamics by tracking spatiotemporal disease patterns with imaging techniques, and composed metro map metaphors by taking advantage of [27].
Cancer pathways [16] were also visualized as schematic metro maps in the context of multiobjective optimization [28]. Schematic network metaphors also allow us to visualize the structure of a low-dimensional space obtained through dimensionality reduction techniques. Neumayer et al. [24] employed a metro map metaphor to describe the configuration of segmented characteristic regions over the 2-dimensional plane obtained through self-organizing maps. Gorban et al. [15] applied this idea in the context of dimensionality reduction based on elastic energy minimization to schematize the distribution of data samples with a metro map metaphor. Several manually designed metro map metaphors are available on-line. Examples include a galaxy map that details relationships among planets [11], a science map composed of scientists and their fields of science [31], a human anatomy diagram consisting of organ systems as metro lines [17], a logistics map of supply chains [1], a map of leading names and domains on the Internet [5], and a map of digital workplace and marketing technology vendors [2]. The most timely example is the Brexit tube maps [10], which model Brexit issues and sectors as stations and lines, respectively, to elucidate how they relate to one another.

### IV-B Accessible On-line Tools

On-line tools often allow us to customize travel routes within precomputed schematic network maps, or even to manually design such schematic maps ourselves. Mapway [4] and ExploreMetro [3] are mobile apps for finding routes to destinations on the metro maps of major cities. Metro Map Maker [44] provides a web-based tool for designing schematic network maps, in which stations can be placed manually and metro lines drawn by referring to a regular grid on a screen canvas. Several software libraries for designing interactive schematic maps are also available, including the Unfolding Map Library [22] for Processing and Java, a library for interactive SVG metro maps with JointJS [21], and a jQuery plug-in for subway map visualization [19].

## V Discussion and Guidance for Practitioners

Our hypothesis about the correlation between problem complexity and running time is visually supported: the scatter plots show an approximately diagonal distribution of algorithmic approaches from top-left to bottom-right. This strongly suggests that researchers seek a compromise between the constraints selected for their problem formulation and the techniques used, in order to balance the global scope of the solution against the running time of the algorithm. Undoubtedly, the final goal of schematic mapping algorithms is an ultimate technique that places us in the top-right corner of the plots, representing high-quality maps computed instantaneously. As shown in Figure 1, the blue markers represent approaches investigated before 2014, and the red markers depict techniques developed after 2014. A tendency from left to right and from bottom to top can be observed, indicating that researchers are heading towards the desired goal. While some techniques [29], [28] accelerate the conventional approach by Nöllenburg and Wolff [27], which we consider the most global optimization method in this paper, their running times are still not suitable for interactive applications. This also confirms that the schematic map problem remains fundamentally difficult, so that a certain quality loss must be tolerated in order to achieve fast performance [48].
More sophisticated algorithms are expected to be developed, opening further opportunities. As summarized in Section IV, several researchers have begun to use the maps generated by schematization techniques to visualize their research results, especially when the data involves entity connectivity and spatial relationships between entities. The previous plots can also serve as visual guidance, helping those researchers select an appropriate technique for their research purposes.

## VI Future Research Directions

In this study, we have compiled the state of the art in schematic network visualization and have given a comprehensive overview of automatic network layout algorithms and map labeling algorithms, as well as tools and applications for use in multidisciplinary domains. Our taxonomy demonstrates that most methods carefully design their solutions in compliance with the problem space in order to provide effective approaches of reasonable quality. Nonetheless, no benchmark has so far been provided to enable standardized testing of schematic map production, although the Sydney metro map has been studied by most of the conventional map layout algorithms. Developing such a repository would help promote the use of these algorithms not only in cartography but also in other scientific domains. Another challenge for future work in this field is to develop advanced methodology that handles global constraints efficiently, in order to improve the scalability of the algorithms. This can be integrated with interaction techniques together with appropriate user scenarios. For this purpose, we plan to further investigate visual representations and interaction schemes in accordance with the task taxonomy for map users.

## Acknowledgment

The project leading to this submission has received funding from the EU Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant No. 747985.

## References

* [1] 3PL subway map. https://www.supplychainmedia.eu/print/visuals/subway-maps/3pl-subway-map/.
* [2] Digital workplace & marketing technology vendor map. https://www.realstorygroup.com/vendormap/.
* [3] ExploreMetro. https://www.exploremetro.com/.
* [4] Mapway. https://www.mapway.com/.
* [5] Web trend map 4: Coolest gift for geeks. https://ia.net/topics/wtm4.
* [6] A. Aguirregoitia, J. J. D. Cosín, and C. Presedo. Software project visualization using task oriented metaphors. Journal of Software Engineering and Applications, 3(11):1015–1026, 2010.
* [7] D. Chivers and P. Rodgers. Gesture-based input for drawing schematics on a mobile device. In Information Visualisation (IV’11), pages 127–134, 2011.
* [8] D. Chivers and P. Rodgers. Octilinear force-directed layout with mental map preservation for schematic diagrams. In T. Dwyer, H. Purchase, and A. Delaney, editors, Diagrammatic Representation and Inference (Diagrams’14), pages 1–8. Springer Berlin Heidelberg, 2014.
* [9] D. Chivers and P. Rodgers. Improving search-based schematic layout by parameter manipulation. Int. J. Software Engineering and Knowledge Engineering, 25(6):961–991, 2015.
* [10] S. de Groot and M. J. Roberts. Brexit mapping. http://www.brexitmapping.com.
* [11] J. Diaz. We are just a tiny station in the milky way subway map. https://gizmodo.com/we-are-just-a-tiny-station-in-the-milky-way-subway-map-5454587.
* [12] M. Fink, M. Lechner, and A. Wolff. Concentric metro maps. In Proceedings of the Schematic Mapping Workshop (SMW’14), 2014. Poster.
* [13] T. M. J. Fruchterman and E. M. Reingold.
Graph drawing by force-directed placement. Software: Practice and Experience, 21(11):1129–1164, 1991.
* [14] K. Garland. Mr. Beck’s Underground Map. Capital Transport Publishing, 1994.
* [15] A. N. Gorban, N. R. Sumner, and A. Y. Zinovyev. Beyond the concept of manifolds: Principal trees, metro maps, and elastic cubic complexes. In Principal Manifolds for Data Visualization and Dimension Reduction, pages 219–237. Springer, 2008.
* [16] W. C. Hahn and R. A. Weinberg. A subway map of cancer pathways, 2002. https://www.nature.com/nrc/poster/subpathways/index.html [Subway map designed by C. Bentley.]
* [17] B. P. C. Ho. A doctor created a human anatomy diagram in the style of a subway map and it’s friggin’ gorgeous. http://digg.com/2018/human-anatomy-subway-map.
* [18] S.-H. Hong, D. Merrick, and H. A. D. do Nascimento. Automatic visualisation of metro maps. Journal of Visual Languages and Computing, 17(3):203–224, 2006.
* [19] N. Kalyani. Subway map visualization jQuery plugin. https://github.com/techbubble/subwayMap.
* [20] Z. Li and W. Dong. A stroke-based method for automated generation of schematic network maps. Int. J. Geographical Information Science, 24(11):1631–1647, 2010.
* [21] P. Maszczynski. Creating an interactive SVG metro map with JointJS. https://www.netvlies.nl/tips-updates/applicaties/cases/creating-an-interactive-svg-metro-map-with-jointjs/.
* [22] T. Nagel. Unfolding map library. http://unfoldingmaps.org/.
* [23] K. V. Nesbitt. Getting to more abstract places using the metro map metaphor. In Information Visualisation, IV ’04, pages 488–493, 2004.
* [24] R. Neumayer, R. Mayer, G. Pölzlbauer, and A. Rauber. The metro visualisation of component planes for self-organising maps. In Int. Joint Conference on Neural Networks, pages 2788–2793, 2007.
* [25] B. Niedermann and J. Haunert. An algorithmic framework for labeling network maps. Algorithmica, 80(5):1493–1533, 2018.
* [26] M. Nöllenburg. A survey on automated metro map layout methods. In Schematic Mapping Workshop, Essex, UK, 2014. https://i11www.iti.kit.edu/extra/publications/n-asamm-14.pdf.
* [27] M. Nöllenburg and A. Wolff. Drawing and labeling high-quality metro maps by mixed-integer programming. IEEE Transactions on Visualization and Computer Graphics, 17(5):626–641, 2011.
* [28] O. Oke and S. Siddiqui. Efficient automated schematic map drawing using multiobjective mixed integer programming. Computers & Operations Research, 61:1–17, 2015.
* [29] M. Onda, M. Moriguchi, and K. Imai. Automatic drawing for Tokyo metro map. In European Workshop on Computational Geometry (EuroCG’18), pages 62:1–62:6, 2018.
* [30] M. Ovenden. Metro Maps of the World. Capital Transport Publishing, 2003.
* [31] P. Plait. Metrocontextual science map. http://blogs.discovermagazine.com/badastronomy/2010/08/31/metrocontextual-science-map/#.XFEYW88zbfB.
* [32] E. Pontikakis and F. Twaroch. Schematic maps as an alternative to point coverages when topographic maps are not available. In Information Visualisation, IV ’06, pages 297–303, 2006.
* [33] M. J. Roberts. Underground Maps Unravelled: Explorations in Information Design. Self-published, 2012.
* [34] M. J. Roberts, H. Gray, and J. Lesnik. Preference versus performance: Investigating the dissociation between objective measures and subjective ratings of usability for schematic metro maps and intuitive theories of design. International Journal of Human-Computer Studies, 98:109–128, 2017.
* [35] M. J. Roberts, E. J. Newton, F. D. Lagattolla, S. Hughes, and M. C. Hasler.
Objective versus subjective measures of Paris metro map usability: Investigating traditional octolinear versus all-curves schematics. International Journal of Human-Computer Studies, 71(3):363–386, 2013.
* [36] D. Shahaf, C. Guestrin, and E. Horvitz. Metro maps of science. In Knowledge Discovery and Data Mining, KDD ’12, pages 1122–1130, 2012.
* [37] D. Shahaf, C. Guestrin, and E. Horvitz. Trains of thought: Generating information maps. In World Wide Web, WWW ’12, pages 899–908, 2012.
* [38] D. Shahaf, J. Yang, C. Suen, J. Jacobs, H. Wang, and J. Leskovec. Information cartography: Creating zoomable, large-scale maps of information. In Knowledge Discovery and Data Mining, KDD ’13, pages 1097–1105, 2013.
* [39] J. Stott, P. Rodgers, J. C. Martínez-Ovando, and S. G. Walker. Automatic metro map layout using multicriteria optimization. IEEE Transactions on Visualization and Computer Graphics, 17(1):101–114, 2011.
* [40] J. M. Stott, P. Rodgers, R. A. Burkhard, M. Meier, and M. T. J. Smis. Automatic layout of project plans using a metro map metaphor. In Information Visualisation, IV ’05, pages 203–206, 2005.
* [41] K. Sugiyama and K. Misue. Graph drawing by the magnetic spring model. Journal of Visual Languages & Computing, 6(3):217–231, 1995.
* [42] P. Ti and Z. Li. Generation of schematic network maps with automated detection and enlargement of congested areas. International Journal of Geographical Information Science, 28(3):521–540, 2014.
* [43] P. Ti, Z. Li, and Z. Xu. Automated generation of schematic network maps adaptive to display sizes. The Cartographic Journal, 52(2):168–176, 2015.
* [44] S. Turner. Metro map maker. https://metromapmaker.com/.
* [45] T. C. van Dijk and D. Lutz. Realtime linear cartograms and metro maps. In Advances in Geographic Information Systems (SIGSPATIAL’18), pages 488–491. ACM, 2018.
* [46] T. C. van Dijk, A. van Goethem, J.-H. Haunert, W. Meulemans, and B. Speckmann. Map schematization with circular arcs. In M. Duckham, E. Pebesma, K. Stewart, and A. U. Frank, editors, Geographic Information Science (GIScience’14), pages 1–17. Springer International Publishing, 2014.
* [47] M. Wahabzada, A.-K. Mahlein, C. Bauckhage, U. Steiner, E.-C. Oerke, and K. Kersting. Metro maps of plant disease dynamics: automated mining of differences using hyperspectral images. PLOS ONE, 10(1):1–20, 2015.
* [48] Y.-S. Wang and M.-T. Chi. Focus+context metro maps. IEEE Transactions on Visualization and Computer Graphics, 17(12):2528–2535, 2011.
* [49] Y.-S. Wang and W.-Y. Peng. Interactive metro map editing. IEEE Transactions on Visualization and Computer Graphics, 22(2):1115–1126, 2016.
* [50] A. Wolff. Drawing subway maps: A survey. Informatik – Forschung und Entwicklung, 22(1):23–44, 2007.
* [51] A. Wolff. Graph drawing and cartography. In R. Tamassia, editor, Handbook of Graph Drawing and Visualization, chapter 23, pages 697–736. CRC Press, 2013.
* [52] H.-Y. Wu, S.-H. Poon, S. Takahashi, M. Arikawa, C.-C. Lin, and H.-C. Yen. Designing and annotating metro maps with loop lines. In Information Visualization, IV ’15, pages 9–14. IEEE, 2015.
* [53] H.-Y. Wu, S. Takahashi, D. Hirono, M. Arikawa, C.-C. Lin, and H.-C. Yen. Spatially efficient design of annotated metro maps. Computer Graphics Forum, 32(3):261–270, 2013.
* [54] H.-Y. Wu, S. Takahashi, C.-C. Lin, and H.-C. Yen. A zone-based approach for placing annotation labels on metro maps. In Smart Graphics (SG’11), volume 6815 of LNCS, pages 91–102. Springer-Verlag, 2011.
* [55] H.-Y. Wu, S. Takahashi, C.-C. Lin, and H.-C. Yen. Travel-route-centered metro map layout and annotation.
Computer Graphics Forum, 31(3):925–934, 2012.
* [56] Y. Yoshida, K. Maruyama, T. Kawagoe, H.-Y. Wu, M. Arikawa, and S. Takahashi. Progressive annotation of schematic railway maps. In Information Visualisation, IV ’18, pages 373–378. IEEE, 2018.
$\Theta\cdot\Gamma_{\oplus} \vdash s[\mathtt{q}][\mathtt{p}]\oplus\mathtt{m}_k\langle w\rangle.Q$, where $\Gamma_4 \vdash s[\mathtt{q}] : \mathtt{p}\oplus\{\mathtt{m}'_k(S'_k).T'_k\}_{k\in K}$, $\Gamma_3 \vdash w : S'_k$, $S'_k \not\leqslant \mathbf{end}$, and $\Theta\cdot\Gamma_2,\, s[\mathtt{q}]:T'_k \vdash Q$ (by (27) and Appendix §0.D(7)) (37)

$\Gamma = \Gamma_0, \Gamma_1, \Gamma_2, \Gamma_3, \Gamma_4$ (by (27), (32), and (37)) (38)

$\Gamma_1 = s[\mathtt{p}] : T$ with $T \leqslant \mathtt{q}\&\{\mathtt{m}_i(S_i).T_i\}_{i\in I}$ (by (32) and Fig. 7, rule [T-Sub]) (39)

$\Gamma_4 = s[\mathtt{q}] : T'$ with $T' \leqslant \mathtt{p}\oplus\{\mathtt{m}'_k(S'_k).T'_k\}_{k\in K}$ (by (37) and Fig. 7, rule [T-Sub]) (40)

$\Gamma \leqslant \Gamma'' = \Gamma_0, \Gamma'_1, \Gamma_2, \Gamma_3, \Gamma'_4$, where $\Gamma'_1 = s[\mathtt{p}] : \mathtt{q}\&\{\mathtt{m}_i(S_i).T_i\}_{i\in I}$ and $\Gamma'_4 = s[\mathtt{q}] : \mathtt{p}\oplus\{\mathtt{m}'_k(S'_k).T'_k\}_{k\in K}$ (by (38), (39), (40), and Def. 7) (44)

$\forall s \in \Gamma : G_s \sqsubseteq_s \Gamma''_s$ (by (18), (44), and Lem. 26) (45)

$k \in K \subseteq I$ and $S'_k \leqslant S_k$ (by (44), (45), and Lem. 24) (46)

$\Gamma'' \to \Gamma''' = \Gamma_0,\, s[\mathtt{p}]:T_k,\, \Gamma_2, \Gamma_3,\, s[\mathtt{q}]:T'_k$ (by (44), (46), and Def. 8) (47)

$\forall s \in \Gamma : \exists G'''_s : G'''_s \sqsubseteq_s \Gamma'''_s$ (by (45), (47), and §0.B.5) (48)

We can now use $\Gamma'''$ to type $P'$:

$\Theta\cdot\Gamma_0,\, x_k:S_k,\, s[\mathtt{p}]:T_k \vdash P_k$ (by (46), (37), and (32)) (49)

$\Gamma_3 \vdash w : S_k$ (by (37) (for $\Gamma_3 \vdash w : S'_k$), (46), transitivity of $\leqslant$, and [T-Sub]) (52)

$\Gamma_0, \Gamma_3,\, s[\mathtt{p}]:T_k$ defined (by (37), (32), and (27)) (53)

$\Theta\cdot\Gamma_0, \Gamma_3,\, s[\mathtt{p}]:T_k \vdash P_k\{w/x_k\}$ (by (49), (52), (53), and Appendix §0.D) (54)

$\Theta\cdot\Gamma''' \vdash P'$, since $\Theta\cdot\Gamma_0, \Gamma_3,\, s[\mathtt{p}]:T_k \vdash P_k\{w/x_k\}$ and $\Theta\cdot\Gamma_2,\, s[\mathtt{q}]:T'_k \vdash Q$ (by (54), (37), (47), (48), and (22)) (57)

We conclude this case by showing that there exists some $\Gamma'$ that satisfies the statement:

$\exists \Gamma' : \Gamma \to \Gamma' \leqslant \Gamma'''$ (by (18), (44), (47), and Appendix §0.C) (58)

$\Theta\cdot\Gamma' \vdash P'$ (by (57), (58), and Appendix §0.D)

$\forall s \in \Gamma' : \exists G'_s : G'_s \sqsubseteq_s \Gamma'_s$ (by (18), (58), and §0.B.5) (59)

* • Case [R-Ctx]: By inversion of the rule and Def. 2, we have to prove the statement in the following sub-cases:

1. $P = Q \mid R$ and $P' = Q' \mid R$ and $Q \to Q'$
2. $P = (\nu s')Q$ and $P' = (\nu s')Q'$ and $Q \to Q'$
3. $P = \mathbf{def}\; D\; \mathbf{in}\; Q$ and $P' = \mathbf{def}\; D\; \mathbf{in}\; Q'$ and $Q \to Q'$

Cases 1 and 3 are easily proved using the induction hypothesis. Therefore, here we focus on case 2.

Since $\Theta\cdot\Gamma \vdash P$, there exist $\Gamma'$ and $G$ such that $G \sqsubseteq_{s'} \Gamma'$, $s' \not\in \Gamma$, and $\Theta\cdot\Gamma, \Gamma' \vdash Q$ (by sub-case 2 and Appendix §0.D(4)) (62)

There exist $\Gamma''$ and $\Gamma'''$ such that: $s' \not\in \Gamma''$; $\Gamma \mathrel{\to^{*}} \Gamma''$; $\Gamma' \mathrel{\to^{*}} \Gamma'''$; $\forall s \in \Gamma'' : \exists G''_s : G''_s \sqsubseteq_s \Gamma''_s$; and $\Theta\cdot\Gamma'', \Gamma''' \vdash Q'$ (by (62) and the induction hypothesis) (68)

$\exists G'$ such that $G' \sqsubseteq_{s'} \Gamma'''$ (by (62), (68), and §3.4) (69)

$\Theta\cdot\Gamma'' \vdash P'$, with $G' \sqsubseteq_{s'} \Gamma'''$, $s' \not\in \Gamma''$, and $\Theta\cdot\Gamma'', \Gamma''' \vdash Q'$ (by (68), (69), and sub-case 2) (72)

Hence, we obtain the thesis by (68) and (72).
* ###### Proof

From the hypothesis $P \mathrel{\to^{*}} P'$, we know that $P = P_0 \to P_1 \to \cdots \to P_n = P'$ (for some $n$). The proof proceeds by induction on $n$. The base case $n = 0$ is straightforward: we have $P = P'$, thus $P'$ is well-typed; furthermore, since the term $\boldsymbol{\mathtt{err}}$ is not typeable, $P'$ cannot contain such a term. In the inductive case $n = m + 1$, we already know (by the induction hypothesis) that $P_m$ is well-typed. By applying §4.2, we can conclude that $P_{m+1} = P'$ is also well-typed and does not contain any $\boldsymbol{\mathtt{err}}$ subterms. A generic mechanized sketch of this induction pattern is shown below.
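The induction above is generic: one-step preservation (§4.2) is iterated along the reduction chain. The following Lean sketch is our own illustration of that pattern, with processes, reduction, and well-typedness abstracted to parameters `P`, `R`, and `WT`; it is not a formalization of the calculus itself.

```lean
-- A minimal, self-contained sketch (not the paper's formalization):
-- `Steps R` is the reflexive-transitive closure of one-step reduction `R`.
variable {P : Type}

inductive Steps (R : P → P → Prop) : P → P → Prop
  | refl {p : P} : Steps R p p
  | head {p q r : P} : R p q → Steps R q r → Steps R p r

-- If every single step preserves well-typedness, so does any finite chain,
-- by induction on the length of the chain (as in the proof above).
theorem preservation_star {R : P → P → Prop} {WT : P → Prop}
    (step : ∀ {p q : P}, WT p → R p q → WT q) :
    ∀ {p q : P}, Steps R p q → WT p → WT q := by
  intro p q hsteps
  induction hsteps with
  | refl => exact id                                   -- base case: zero steps
  | head hpq _ ih => exact fun hp => ih (step hp hpq)  -- peel off one step
```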
## Appendix 0.F Proofs for Session Fidelity and Process Properties

* ###### Proof

The proof structure is similar to Thm. 5.4 in [18]: by induction on the derivation of the reduction of $\Gamma$, we infer the contents of $\Gamma$ and then the shape of $P$ and its sub-processes $P_{\mathtt{p}}$, showing that they can mimic the reduction of $\Gamma$. Most cases hold by applying the induction hypothesis.

* • Case $\Gamma \mathrel{\xrightarrow{s[\mathtt{p}][\mathtt{q}]\mathtt{m}}} \Gamma'$: in this case, the process $P_{\mathtt{p}}$ playing role $\mathtt{p}$ in session $s$ is a selection on $s[\mathtt{p}]$ towards $\mathtt{q}$ (possibly within a process definition), while the process $P_{\mathtt{q}}$ playing role $\mathtt{q}$ in session $s$ is a branching on $s[\mathtt{q}]$ from $\mathtt{p}$ (possibly within a process definition). Therefore, by [R-$\oplus\&$] in Fig. 2, $P$ can correspondingly reduce to $P'$ by transmitting either a basic value $v$ or a channel endpoint $s'[\mathtt{p}']$ from $\mathtt{p}$ to $\mathtt{q}$ in session $s$ (possibly after a finite number of transitions under rule [R-$X$]). The resulting continuation process $P'$ is typed by $\Gamma'$. The assertion that there exists $G'$ such that $G' \sqsubseteq_s \Gamma'$ follows from $\Gamma \mathrel{\xrightarrow{s[\mathtt{p}][\mathtt{q}]\mathtt{m}}} \Gamma'$, $G \sqsubseteq_s \Gamma$, and §3.4.
Appendix §​ 0.F below says that if a process ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ satisfies the assumptions of session fidelity (Ex. 14) then all its reductums will satisfy such assumptions, too. This means that if ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ enjoys session fidelity, then all its reductums enjoy session fidelity, too. propositionlemSingleSessionPersistentAssume ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}}}$, where ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}}$, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}\equiv\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\Pi_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$, and ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}=\bigcup_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ such that, for each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$, we have 
${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$. Further, assume that each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ is either ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$ (up to $\equiv$), or only plays ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}$ in ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$, by ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$. Then, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}\to{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$ implies $\exists{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}},{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G^{\prime}}$ such that ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}\\!\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\to^{\\!*}}_{\\!\\!\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}}\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$ and ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}}}$, with ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G^{\prime}}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}}}$, 
${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}\equiv\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\Pi_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$, and ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}=\bigcup_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ such that, for each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$, we have ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$; furthermore, each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ is ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$ (up to $\equiv$), or only plays ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}$ in ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$, by ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$. 
###### Proof Straightforward from the proof of §​ 4.2, which accounts for all possible transitions from ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ to ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$, and in all cases yields the desired properties for its typing context ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$. [Process Deadlock-Freedom]lemmalemProcessDF Assume ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}}}$, where ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}}$, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}\equiv\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\Pi_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$, and ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}=\bigcup_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ such that for each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$, we have 
${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$. Further, assume that each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ is either ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$ (up to $\equiv$), or only plays ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}$ in ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$, by ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$. Then, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ is deadlock-free. ###### Proof By the assumption ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}}$ and §​ 0.B.6, ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}$ is ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$-deadlock- free. 
Consider any ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$ such that ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}\\!\mathrel{\to{}^{\\!\\!\\!*}}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}}\\!\not{\\!\\!\\!\to}$ with ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{0}}\\!\to\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{1}}\\!\to\\!\cdots\\!\to\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{n}}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}}\\!\not{\\!\\!\\!\to}$ (for some $n$) with each reduction ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i}}\\!\to\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i+1}}$ ($i\\!\in\\!0...n\\!-\\!1$). By Appendix §​ 0.F, we know that each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i}}$ is well-typed and its typing context ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{i}}$ is such that ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\to^{\\!*}}_{\\!\\!\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{i}}$; moreover, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i}}$ satisfies the single-session requirements of Ex. 14. Now observe that, since the process ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{n}}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}}\\!\not{\\!\\!\\!\to}$ cannot reduce further, by the contrapositive of Ex. 14, we obtain ${{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{n}}}\\!\\!\not\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\to}_{\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}$. Furthermore, since ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}$ is ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$-deadlock- free, by Def. 
11, we have $\forall{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}\\!\in\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{n}}$: ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{n}}\\!\left({\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}}\right)}\\!\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\leqslant}}\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathbf{end}}$. Therefore, by [T-${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$], we have ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}\equiv{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$, which (by Def. 15(1)) is the thesis. [Process Liveness]lemmalemProcessLive Assume ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{\\!{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}}}$, where ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}}$, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}\equiv\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\Pi_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in I}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$, and ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}=\bigcup_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}\in 
I}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ such that for each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$, we have ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\emptyset}\\!}\cdot{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}\mathrel{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\vdash}}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}}}$. Further, assume that each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ is either ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbf{0}}$ (up to $\equiv$), or only plays ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}$ in ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$, by ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$. Then, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ is live. ###### Proof By the assumption ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{{\color[rgb]{0.43,0.21,0.1}\definecolor[named]{pgfstrokecolor}{rgb}{0.43,0.21,0.1}G}}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\sqsubseteq}_{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}}}$ and §​ 0.B.6, ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}$ is ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$-live. The proof proceeds by contradiction: assume that ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ is _not_ live. 
Since (by hypothesis) each parallel component of ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ only plays one role ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}$ in session ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}$, this means that there are ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}$ such that ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{0}}\\!\to\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{1}}\\!\to\\!\cdots\\!\to\\!{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{n}}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}\\!\equiv\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}}}\\!\left[{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}}\right]$ where either: * • ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}}[{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}}}}}]\mathbin{\\!\oplus\\!}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathtt{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathtt{m}}}}\langle{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}w}}\rangle\vphantom{x}\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!.\\!}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}}}}}$ (for some ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathtt{m}},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}w},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}}$), and $\not\exists{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}^{\prime}}$: ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}\\!\mathrel{\to{}^{\\!\\!\\!*}}\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}^{\prime}}}\\!\left[{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}}}\right]$. 
By Appendix §​ 0.F, we know that each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i}}$ is well-typed and its typing context ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{i}}$ is such that ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\to^{\\!*}}_{\\!\\!\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma_{i}}$; moreover, each ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P_{i}}$ satisfies the single-session requirements of Ex. 14. Therefore, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$ satisfies the single-session requirements of Ex. 14, and is typed by some ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$ such that ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\to^{\\!*}}_{\\!\\!\\!{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}}}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$. Hence, by inversion of typing, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}$ is typed by some ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}$ (part of ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$) where ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}_{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}}\\!\left({\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}}\right)}$ is a (possibly recursive) internal choice towards ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}$, including a choice ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{m}}}}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}({{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}S}})}$ (where ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}S}$ types the message payload ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}w}$). 
Therefore, we have ${{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}}\\!\\!\mathrel{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\xrightarrow{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}:{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\oplus}}{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{m}}}}{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}({{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}S}})}}}}}$. Now, recall that (for the sake of the proof by contradiction) we are assuming that no sequence of reductions of ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$ can fire the top-level selection of ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}$; this means that no parallel component of ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}$ ever exposes an external choice by role ${\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}$ including message label ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathtt{m}}$; correspondingly, there is at least one fair path beginning with ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$ (yielded by §​ 4.2) that never fires a transmission label ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}[{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}}}}}]}\mathtt{{\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{m}}^{\prime}}$ (for any ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\mathtt{m}}^{\prime}$). But then, such a fair path starting from ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma^{\prime}}$ is not live, hence (by Def. 
13) we obtain that ${\color[rgb]{0,0,0.9}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0.9}\Gamma}$ is _not_ live, a desired contradiction; * • ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q}={\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}s}}[{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{p}}}}}}]}}[{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}{\boldsymbol{{\color[rgb]{0.5,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.5,0.0,0.0}\mathtt{q}}}}}}}}}]\mathbin{\\!\&\\!}\\{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathtt{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\mathtt{m}}_{\mathnormal{i}}}}}({x_{i}})\vphantom{x}\mathbin{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\\!.\\!}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}_{i}}}}\\}_{i\in I}}$ (for some $I$, ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}{\mathtt{m}}_{\mathnormal{i}}},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}x_{i}},{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}_{i}}$), and $\not\exists{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}^{\prime}},k\\!\in\\!I,{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}w}$: ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P^{\prime}}\mathrel{\to{}^{\\!\\!\\!*}}{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\mathbb{C}^{\prime}}}\\!\left[{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}Q^{\prime}_{k}}\mathord{\left\\{{\nicefrac{{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}w}}}{{x_{k}}}}\right\\}}}\right]$. The proof is similar to the previous case, and reaches a similar contradiction. In summary, we have shown that assuming ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ is not live leads to a contradiction. Consequently, we can conclude that ${\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}P}$ is live. * ###### Proof Apply Appendix §​ 0.F and 0.F.
# SystemC Model of Power Side-Channel Attacks Against AI Accelerators: Superstition or not?

This work has been funded by the German Ministry of Education and Research (BMBF) via the project VE-Jupiter (16ME0234).

Andrija Nešković1, Saleh Mulhem1, Alexander Treff2, Rainer Buchty1, Thomas Eisenbarth2, and Mladen Berekovic1
1Institute of Computer Engineering and 2Institute for IT Security, University of Lübeck, Lübeck, Germany
{andrija.neskovic, saleh.mulhem, a.treff, rainer.buchty, thomas.eisenbarth, <EMAIL_ADDRESS>

###### Abstract

As training artificial intelligence (AI) models is a lengthy and hence costly process, leakage of such a model’s internal parameters is highly undesirable. In the case of AI accelerators, side-channel information leakage opens up the threat scenario of extracting the internal secrets of pre-trained models. Therefore, sufficiently elaborate methods for design verification as well as fault and security evaluation at the electronic system level are in demand. In this paper, we propose estimating information leakage from the early design steps of AI accelerators to aid in a more robust architectural design. We first introduce the threat scenario before diving into SystemC as a standard method for early design evaluation and how this can be applied to threat modeling. We present two successful side-channel attack methods executed via SystemC-based power modeling: correlation power analysis and a template attack, both leading to total information leakage. The presented models are verified against an industry-standard netlist-level power estimation to prove general feasibility and determine accuracy. Consequently, we explore the impact of additive noise in our simulation to establish indicators for early threat evaluation. The presented approach is again validated via a model-vs-netlist comparison, showing high accuracy of the achieved results. This work is hence a solid step towards fast attack deployment and, subsequently, the design of attack-resilient AI accelerators.

###### Index Terms: Artificial Intelligence, Accelerators, Side-channel Attacks, SystemC, Power Modeling.

## I Introduction

The use of Artificial Intelligence (AI) is rapidly growing across all emerging technologies. One of the most important aspects is accelerating the AI inference process and building corresponding hardware accelerators. An accelerator design’s fault-tolerance mechanisms and other safety features are usually evaluated in the pre-silicon phase, whereas evaluation of the accelerator’s physical security is performed in the post-silicon phase. Side-channel attacks, especially power attacks, are considered a serious security threat that can leave AI hardware components vulnerable. Recent studies in the domain of AI accelerator design show that side-channel leakage of AI accelerators can be exploited to reveal industrial secrets such as the AI model architecture and its parameters [1, 2, 3]. For instance, power attacks have been deployed successfully to reverse-engineer the AI model [1, 4]. Power attacks can lead to the AI model being copied and then distributed as counterfeit intellectual property (IP). Therefore, there is a strong need to evaluate an AI accelerator’s side-channel attack resistance in the early design steps (EDS) of integrated circuit (IC) design, such as security evaluation at the register-transfer level (RTL), using gate-level netlists, or even earlier. Evaluation and investigation of security issues in EDS provide insight into the robustness of the later fabricated IC.
Thus, design decisions made at high abstraction levels have a significant impact on the whole design process.

Figure 1: AI Accelerator IC Design Process (adapted from [5]).

Tools and platforms considering security evaluation in EDS were developed mainly to detect hardware Trojan circuitry [6] or to check security rules in ICs [7]. Recent work confirms the importance of security evaluation in EDS by demonstrating static and dynamic information flow analysis using Virtual Prototypes (VPs) [8, 9]. Simulators targeting side-channel evaluation have been studied in [10]. However, the reviewed tools consider only software implementations of cryptographic algorithms executed on general-purpose hardware (i.e., microcontrollers). The evaluation of side-channel attacks (SCA) and their impact on ICs is still an open issue in EDS: to the best of our knowledge, no simulator exists targeting SCA evaluation of dedicated AI hardware accelerators. Yet SCA evaluation in the EDS of an IC is required, together with existing reliability and safety tests, to ensure the IC’s dependability [11]. Our work demonstrates previously shown power SCA by utilizing a dedicated power estimation model, with the goal of evaluating the worst-case resilience of an AI accelerator’s design at the electronic system level (ESL). This approach was positively verified by comparing the SystemC model’s behavior with a technology-synthesized gate-level netlist.

### I-A Why a SystemC Model?

SystemC is a solid candidate [12] for performing security evaluation in EDS, as it is one of the industry standards for hardware/software modeling at high abstraction levels. In particular, SystemC is C++-based and was originally conceived for hardware/software co-design, simulation, and functional verification [13]. Over time, new design aspects such as fault evaluation [14] and power modeling [15, 16] were also addressed using SystemC. The security assessment of IC designs has recently received more attention [7], especially via SystemC in EDS [12]. With proper power estimation models, SystemC can be utilized to simulate power attacks against ICs even at the ESL. To clarify how to deploy SystemC in EDS, Fig. 1 shows the top-down hardware design process of AI accelerators (a modified version of the double-roof model described in [5, 17]). Starting from the ESL, the requirements for AI accelerators are specified and synthesized into a system model (most likely represented by a VP [18]). The requirements for lower abstraction levels are derived from this specification and implementation. At every abstraction level, the specification is transformed into an implementation with a synthesis step. This work presents a SystemC model of an AI accelerator at the ESL.

### I-B Paper Contribution

In this paper, we show how to evaluate power attacks against AI accelerators in EDS. For this, we build a SystemC model of systolic-array-based AI accelerator hardware at the ESL. Using SystemC, the activation count of components can be annotated with a power-consumption model to generate power traces covering the hardware of AI accelerators. We demonstrate correlation power analysis (CPA) and template attacks (TA) based on our SystemC model in ideal conditions and explore the limits of these attacks in a noisy environment. Finally, we compare our model-based traces against power traces from a state-of-the-art netlist-level simulation to demonstrate the feasibility of the proposed model.
## II Related Work

The target of this work is to evaluate the benefits of using SystemC models to analyze side-channel information leakage at the ESL in EDS. Therefore, power side-channel attacks against AI accelerators are briefly discussed in this section, as well as modeling approaches utilizing SystemC in different areas.

### II-A Power Attacks vs. AI Hardware Accelerator

Several power attack scenarios against AI hardware accelerators have been proposed [19, 4]. Power attacks exploit power consumption leakage from an accelerator executing a pre-trained AI model (henceforth simply the AI model) to reveal its internal secrets. In particular, an attacker uses an evaluation board attached to a targeted device [20], i.e., the AI hardware accelerator, and captures power consumption traces for some data input. The attacker applies statistical analysis, e.g., Simple Power Analysis (SPA), Differential Power Analysis (DPA), or Correlation Power Analysis (CPA), to the data input and the power traces to recover the internal secrets of the AI model. For instance, DPA was deployed to extract the AI model’s secret parameters in [19]. In [4], CPA was applied against a systolic-array-based hardware accelerator for Deep Neural Networks (DNNs).

### II-B Power Consumption Modeling Challenges

The power consumption of any CMOS computing platform comprises two components: static (leakage) power consumption $P_{static}$ and dynamic (switching) power consumption $P_{dynamic}$ [21]. The total power of a computing platform can be modeled by:

$P_{total}=P_{dynamic}+P_{static}$ (1)

$P_{static}$ is the product of the leakage current and the supply voltage [22], while $P_{dynamic}$ indicates and quantifies transistor switching. Thus, $P_{dynamic}$ provides a distinctive current profile. Therefore, power attacks mainly rely on $P_{dynamic}$, which is considered the Achilles heel of any CMOS computing platform [23]. SystemC can be utilized to count the activations of hardware components at different abstraction levels, which plays a crucial role in simulating power attacks in EDS. Using SystemC to estimate a computing platform’s power consumption poses several challenges: SystemC was used in [24] to estimate the power consumption of different processor configurations based on pre-computed power values of components such as memories, register files, function units, etc. The proposed power model exhibited a 15% prediction error. In [15], a black-box power model was introduced for digital signal processors (DSPs) in SystemC. The proposed power model does not require detailed insight into the individual components of the probed computing platform. The black-box power model exhibited a prediction error of less than 4%. This prediction error is caused by the lack of information about the static power dissipation $P_{static}$ of the targeted manufacturing technology. Therefore, the power consumption of a computing platform is rather difficult to model in SystemC. Our work, however, introduces a SystemC model that considers only the dynamic power consumption $P_{dynamic}$ to enable simulating power attacks.

### II-C SystemC for Security Evaluation

Utilizing a SystemC VP to evaluate security-critical systems on chips has already been demonstrated in [16]. Beyond that, SystemC was proven successful for power-attack evaluation of cryptographic applications in [12], such as RSA-based public-key cryptosystems and elliptic-curve cryptography. The approach in [12] solely relies on a dedicated dynamic power consumption model, the so-called input-dependent model.
This model covers arithmetic operations by assuming that there is no difference between simulated hardware and C++ operators. The input-dependent model covers bit-shifts and comparisons as well, but lacks power modeling of registers, multiplexers, and other hardware components. In order to extend SystemC power-attack analysis beyond cryptographic applications, this paper introduces power estimation models that cover systolic arrays. By utilizing these power models, CPA and power-template attacks (TA) against AI accelerators are simulated. In the following sections, we first build a system model of a systolic-array-based AI accelerator and extend and modify the power-consumption models proposed in [12, 16] to also cover additional components present in an AI hardware accelerator. Then, we perform the proposed attacks. Finally, we verify our power estimation model and the validity of the attacks performed thereon against a state-of-the-art netlist-level power estimation tool.

## III AI Accelerator ESL Model

Figure 2: Block diagram of TPU including the threat model.

To perform a security evaluation in EDS, an ESL model of an AI accelerator is required. For our approach, we use SystemC as the modeling language and start with a loosely-timed SystemC model of a systolic array to simulate the behavior. By annotating this model with input-dependent power estimation capabilities, all components for SCA simulation can be provided.

### III-A Systolic Array for Acceleration

The inference process of AI applications requires frequent data access. Such data-read operations from memory are very costly and time-consuming and should therefore be avoided on edge accelerators in order to minimize power consumption and maximize performance. This can be addressed by using so-called systolic array architectures, which feature a number of benefits [25]. Instead of accessing memory after every arithmetic operation, the systolic design approach utilizes multiple processing elements (_PEs_) to avoid frequent memory access. Each _PE_ performs a multiply-accumulate operation (MAC) as shown in Fig. 2. The partial result of a _PE_ is directly passed to another _PE_ without memory access. The realization of an array of _PEs_ in hardware can accelerate matrix multiplication, which is essential for accelerating the desired AI algorithm. The matrix multiplication of $A=(a_{ij})_{3\times 3}$ and $B=(b_{ij})_{3\times 3}$ results in a matrix $C=(c_{ij})_{3\times 3}$. A systolic array accelerates such a matrix multiplication; a resulting element ($c_{11}$), for instance, is calculated sequentially over $3$ clock cycles by performing 3 MACs in 3 different _PEs_ as follows [4]:

$\begin{array}{rlr}Reg_{11}&=a_{11}\times b_{11}+0&(t=1)\\ Reg_{21}&=a_{12}\times b_{21}+Reg_{11}&\\ &=a_{12}\times b_{21}+a_{11}\times b_{11}&(t=2)\\ Reg_{31}&=a_{13}\times b_{31}+Reg_{21}&\\ &=a_{13}\times b_{31}+a_{12}\times b_{21}+a_{11}\times b_{11}&(t=3)\end{array}$ (2)

where $Reg_{ij}$ is the partial-sum register of $\emph{PE}_{ij}$, as shown in Fig. 2. In our model, the weights and inputs are represented as 8-bit integers, and the partial-sum results as 18-bit integers.

### III-B SystemC Model of the Accelerator

For our approach, we focus on a loosely-timed SystemC model. Here, the behavior of the AI accelerator is represented by a SystemC _module_ that performs accelerated calculations and mimics the timing and power characteristics of a real hardware accelerator. Fig. 2 shows the architecture of the modeled system.
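To make the schedule of Eq. 2 concrete, the following minimal C++ sketch mimics one weight-stationary column of chained _PEs_: each cycle, one MAC is performed and the partial sum is handed to the next _PE_'s register without any memory access. It illustrates the dataflow only; the names and the plain-C++ form are ours, not the paper's SystemC implementation:

```cpp
#include <array>
#include <cstdint>
#include <iostream>

int main() {
    // 8-bit inputs a and pre-loaded weights b, as in the model of Sec. III-A.
    std::array<std::array<int8_t, 3>, 3> a = {{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}};
    std::array<std::array<int8_t, 3>, 3> b = {{{9, 8, 7}, {6, 5, 4}, {3, 2, 1}}};

    // Reg[k] stands for the partial-sum register of PE_{k+1,1}
    // (18-bit in the model; a 32-bit integer is used here for brevity).
    std::array<int32_t, 3> Reg = {0, 0, 0};

    // c11 over three cycles, mirroring Eq. (2):
    // t=1: a11*b11, t=2: +a12*b21, t=3: +a13*b31.
    for (int t = 0; t < 3; ++t) {
        int32_t incoming = (t == 0) ? 0 : Reg[t - 1];  // passed PE-to-PE
        Reg[t] = a[0][t] * b[t][0] + incoming;         // one MAC per cycle
    }
    std::cout << "c11 = " << Reg[2] << '\n';  // 1*9 + 2*6 + 3*3 = 30
    return 0;
}
```

The same pattern runs in parallel for every row and column of the array, which is why the full product becomes available in parts across multiple cycles.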
The SystemC model easily realizes the individual multiply-accumulate and register operations required during inference by using dedicated data types. The matrix multiplication is performed over several cycles, depending on the dimension of the matrix, by utilizing all _PEs_ in parallel. The result is therefore available in several parts across multiple cycles, as described in Eq. 2. Furthermore, the proposed adversary is implemented as another SystemC _module_ which is able to send input to and receive output from the AI accelerator. Lastly, all activity during inference is tracked by a dedicated resource handler, shown in Fig. 3, which implements the power-estimation model described in the following section.

Figure 3: SystemC Implementation Overview.

## IV Dynamic Power Consumption Model

As the dynamic power consumption ($P_{dynamic}$) is the power consumed during logical transitions [22], the dynamic power-estimation model of a systolic array can be built based on the operations performed by every _PE_. Here, the SystemC model should implement a dedicated resource handler to generate power traces while the calculation is performed [12]. From the ESL perspective, every single operation performed in hardware consumes a certain amount of power, measured by the so-called power expense, which depends on the hardware architecture, the type of operation, and the inputs of the operation. In the following, we utilize the input-dependent power model proposed in [12] and extend it to include hardware components such as registers and MAC units.

### IV-A Power Model of a Single Processing Element

The extended version of the input-dependent model relies on the inputs of the hardware components and their computational/storage effort $CE$. If the inputs of a hardware component are zero, we consider its contribution to the dynamic power consumption negligible, and its computational/storage effort $CE$ is zero. Otherwise, its contribution is not negligible, and its computational/storage effort $CE$ depends on the number of ones in the input. The $CE$ reflects the switching activity of the component and can be described by utilizing a bit-flipping power model. Several cases of single-bit flipping have to be considered, and a power expense is assigned to every case as follows: the transitions $0\rightarrow 0$ and $1\rightarrow 1$ require zero power expense; $0\rightarrow 1$ requires one power expense; and $1\rightarrow 0$ requires zero power expense, as flipping one bit from $0$ to $1$ consumes much more power than from $1$ to $0$ [12]. The proposed dynamic power consumption model of a single _PE_ estimates the power expense of the MAC component by breaking it down into arithmetic operations. Additionally, the expense of accessing the register is considered.

MAC Component Power Model: The computational expenses of the MAC component can be broken down into the switching activity of the binary arithmetic operations performed during the calculation, namely multiplication and addition. Counting the flipping of single bits during the calculation provides an estimate of the power expense of the performed MAC operation. Binary multiplication can be considered a series of additions; therefore, the power model of the multiplication is based on the power expenses of a binary adder shown in Table I.
TABLE I: Power Expenses of a Binary Adder

| Input Bits $(a,b,c)$ | Output Bits $(c,s)$ | $state$ expense | $CE$ | $PM_{BA}$ |
|---|---|---|---|---|
| (0,0,0) | (0,0) | 0 | 0 | 0 |
| (1,0,0) | (0,1) | 0 | 1 | 1 |
| (0,1,0) | (0,1) | 0 | 1 | 1 |
| (1,1,0) | (1,0) | 1 | 2 | 3 |
| (0,0,1) | (0,1) | 1 | 0 | 1 |
| (1,0,1) | (1,0) | 0 | 1 | 1 |
| (0,1,1) | (1,0) | 0 | 1 | 1 |
| (1,1,1) | (1,1) | 0 | 2 | 2 |

Register Power Model: $PM_{Reg}$ denotes the power expense of the register-access power model, which can be modeled based on the bit-switching activity inside the register every time a new value is written. Therefore, the old and new states of the register ($Reg_{old}$ and $Reg_{new}$) are compared, and the number of switches is counted by using the Hamming Distance (HD):

$PM_{Reg}=H\!D\left(Reg_{old},Reg_{new}\right).$ (3)

The power consumption model of one PE: The total power consumption of a _PE_ ($PM_{\emph{PE}}$) is the sum of $PM_{MAC}$ and $PM_{Reg}$, i.e.,

$PM_{\emph{PE}}=PM_{MAC}+PM_{Reg}.$ (4)

### IV-B Resource Handler

In the SystemC implementation, the power estimation is performed by the resource handler. The proposed resource handler relies on the total dynamic power consumption of all _PEs_, where the _PEs_ consume power depending on the performed MAC operation and the register write operation. These operations are modeled separately and combined to produce a power trace of the whole calculation. We adapt the resource handler proposed in [12] to the AI accelerator model and add our _PE_ power model to it. Fig. 3 illustrates how the resource handler generates power traces of the AI accelerator during inference. In the following sections, we will show how the power traces generated by the resource handler can be used by an adversary to perform power SCA.

## V Threat Model

The proposed threat model is equivalent to a practical one in which the adversary has physical access to an AI edge device [4]. Regarding the system model, we assume that the adversary has the following capabilities during the attack:

* The adversary has knowledge about the targeted platform or device.
* The adversary has knowledge about the internal structure of the AI accelerator.
* The adversary cannot directly access or read the secret information (weights).
* The adversary can input any data into the AI accelerator.
* The adversary can observe the device's inference results and obtain power traces of the performed operations.

This scenario can be classified as a grey-box approach [26], where the target of the adversary is to reveal the _PEs_' parameters. These parameters are highly valuable, since they represent the weights of a trained NN. Fig. 2 shows an overview of the threat model. The weight parameters are pre-loaded into the systolic array for inference. The information leak is caused by the power trace of the inference calculations; thus, the adversary can attack the weights via SCA.

## VI SCA Simulation using SystemC

The described model of an AI accelerator, extended with power estimation capabilities, enables the modelling of SCA. Having defined the threat model, we can simulate side-channel attacks targeting the secret parameters of a trained neural network at the ESL. In order to simulate realistic scenarios, the CPA approach is considered, as it has been proven successful on real hardware [4]. In addition, we revisit template attacks, which are considered the most objective method to assess the leakage of a device under test [27, 28].
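These simulations operate on traces produced by the power model of Section IV. To make that model concrete, the following plain-C++ sketch charges each full-adder evaluation according to Table I and each register write according to Eq. 3. The ripple-carry, shift-and-add decomposition of the MAC, the unsigned arithmetic, and the function names (`ripple_add`, `pm_mac`, `pm_reg`, `pm_pe`) are simplifying assumptions for illustration, not the exact accounting of our resource handler.

```cpp
#include <bitset>
#include <cstdint>

// Power expense of a single full adder per Table I, indexed by
// (a + 2*b + 4*c_in). Each entry is PM_BA = state expense + CE.
static const int kAdderExpense[8] = {0, 1, 1, 3, 1, 1, 1, 2};

// Ripple-carry addition of two words, accumulating the Table I expense
// of every full-adder cell that is evaluated.
uint32_t ripple_add(uint32_t x, uint32_t y, int bits, long& expense) {
    uint32_t sum = 0, carry = 0;
    for (int i = 0; i < bits; ++i) {
        uint32_t a = (x >> i) & 1, b = (y >> i) & 1;
        expense += kAdderExpense[a + 2 * b + 4 * carry];
        sum   |= (a ^ b ^ carry) << i;
        carry  = (a & b) | (a & carry) | (b & carry);
    }
    return sum;  // any carry beyond `bits` is truncated (18-bit register)
}

// PM_MAC: shift-and-add multiplication modelled as a series of adders,
// followed by accumulation of the incoming partial sum.
long pm_mac(uint8_t act, uint8_t w, uint32_t psum_in, uint32_t& result) {
    long expense = 0;
    uint32_t prod = 0;
    for (int i = 0; i < 8; ++i)
        if ((w >> i) & 1)
            prod = ripple_add(prod, static_cast<uint32_t>(act) << i, 18, expense);
    result = ripple_add(prod, psum_in, 18, expense) & 0x3FFFF;  // 18 bits
    return expense;
}

// PM_Reg (Eq. 3): Hamming distance between old and new register contents.
long pm_reg(uint32_t reg_old, uint32_t reg_new) {
    return std::bitset<18>((reg_old ^ reg_new) & 0x3FFFF).count();
}

// PM_PE (Eq. 4): total per-PE expense for one MAC plus register write.
long pm_pe(uint8_t act, uint8_t w, uint32_t psum_in, uint32_t& reg) {
    uint32_t reg_new;
    long e = pm_mac(act, w, psum_in, reg_new) + pm_reg(reg, reg_new);
    reg = reg_new;
    return e;
}
```

Summing `pm_pe` over all _PEs_ in a cycle yields one sample of the kind of power trace the resource handler emits.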
### VI-A Adversary Simulation in SystemC

The power estimation model of every _PE_ is a combination of the power estimation models of the single operations performed by the _PE_. Since the static power consumption of the device is of no interest for the above-mentioned SCAs, the focus lies on dynamic power consumption. The adversary has access to the modeled power trace and can thus perform the attacks as if the hardware were real.

### VI-B Correlation Power Analysis

CPA-based attacks have been proven successful against hardware implementations of cryptographic functions [29]. Compared to less complex power analysis attacks, such as SPA or DPA, CPA shows a more robust behaviour. To perform a CPA, a leakage model needs to be defined. The most common approach is to calculate the correlation coefficient between the power trace and a Hamming Distance (HD) or Hamming Weight (HW) estimation of a certain calculation performed by the observed system.

Figure 4: CPA provides multiple weight candidates for $b_{11}$.

Every $\emph{PE}_{ij}$ of the systolic array performs a multiply-accumulate operation and stores the result of the operation in a register. The adversary assumes a correlation between the power traces and the HD model of the _PEs_' registers. To reveal a secret parameter, the adversary calculates the HD estimation ($\hat{H}_{n,b_{k}}$) for all possible transitions of the $Reg_{ij}$ register by

$\hat{H}_{n,b_{k}}=H\!D\left(Reg_{ij}^{t},Reg_{ij}^{t+1}\right).$ (5)

The correlation coefficient ($\rho\left(b_{k}\right)$) of all estimations and the recorded power traces is calculated as follows:

$\rho\left(b_{k}\right)=\frac{\sum_{n=0}^{N-1}\left(P_{n}-\bar{P}\right)\left(\hat{H}_{n,b_{k}}-\bar{H}_{b_{k}}\right)}{\sqrt{\sum_{n=0}^{N-1}\left(P_{n}-\bar{P}\right)^{2}}\sqrt{\sum_{n=0}^{N-1}\left(\hat{H}_{n,b_{k}}-\bar{H}_{b_{k}}\right)^{2}}},$ (6)

where $P_{n}$ and $\bar{P}$ are the power trace and its average value, and $\hat{H}_{n,b_{k}}$ and $\bar{H}_{b_{k}}$ are the HD estimation and its average value. The true value of the parameter produces the highest correlation; thus, the adversary can reveal it by comparing all of the correlation coefficients as follows:

$\hat{b}=\arg\max_{b_{k}}\left(\left|\rho\left(b_{k}\right)\right|\right).$ (7)

Since the HD model is not unique across all possible transitions, multiple candidates can produce a similar correlation coefficient (values that differ from the true value by a bit shift, e.g., 23, 46, 92). This imposes certain constraints when revealing the parameters, since the attack produces multiple candidates, as shown in Fig. 4. Nevertheless, the attack can reduce the search space drastically.

### VI-C Template Attack

Template attacks are a very powerful type of side-channel analysis [30]. As a subset of profiling attacks, template attacks are composed of two phases: a profiling phase and an attack phase. In the profiling phase, the adversary profiles the data-dependent power consumption and noise behaviour of a target device handling sensitive data. In the attack phase, the adversary then reveals the sensitive data based on the prior knowledge of the device profile. In the profiling phase of a template attack, the adversary has full control over a target device and can, e.g., arbitrarily set the weights. This scenario can be easily simulated with our implemented SystemC model. Having created the templates for the individual _PEs_, the adversary can launch the attack by iterating over the individual _PEs_.
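A heavily simplified sketch of the two phases is shown below: profiling reduces to averaging traces into one mean-trace template per weight candidate, and the attack phase picks the candidate whose template is closest (in squared distance) to the observed traces. A full template attack would additionally model the noise (co)variance [30]; the `simulate` hook, standing in for a profiling run of the SystemC model with a chosen weight, is an assumed interface.

```cpp
#include <array>
#include <cfloat>
#include <cstdint>
#include <vector>

// One power trace: per-cycle expense values from the resource handler.
using Trace = std::vector<double>;

// Profiling phase: with full control of the device, record several traces
// per candidate weight and average them into a mean-trace template.
// All traces are assumed to have the same length.
std::array<Trace, 256> build_templates(Trace (*simulate)(uint8_t weight),
                                       int traces_per_weight) {
    std::array<Trace, 256> tmpl{};
    for (int w = 0; w < 256; ++w) {
        Trace mean;
        for (int n = 0; n < traces_per_weight; ++n) {
            Trace t = simulate(static_cast<uint8_t>(w));
            if (mean.empty()) mean.assign(t.size(), 0.0);
            for (size_t i = 0; i < t.size(); ++i) mean[i] += t[i];
        }
        for (double& v : mean) v /= traces_per_weight;
        tmpl[w] = mean;
    }
    return tmpl;
}

// Attack phase: score a handful of traces with an unknown but fixed
// weight against every template; the smallest squared distance wins.
uint8_t match(const std::array<Trace, 256>& tmpl,
              const std::vector<Trace>& attack_traces) {
    double best = DBL_MAX;
    uint8_t best_w = 0;
    for (int w = 0; w < 256; ++w) {
        double d = 0.0;
        for (const Trace& t : attack_traces)
            for (size_t i = 0; i < t.size(); ++i) {
                double diff = t[i] - tmpl[w][i];
                d += diff * diff;
            }
        if (d < best) { best = d; best_w = static_cast<uint8_t>(w); }
    }
    return best_w;
}
```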
In this attack, a small number (10–20) of traces with unknown but fixed weights leads to a successful recovery. As the parameters for building the template differ from _PE_ to _PE_, the adversary cannot re-use the same template to reveal all of the parameters. Nevertheless, the additional effort to reveal all of the parameters is only that of building a template for each of the _PEs_. The acquired traces can be re-used; thus, the adversary requires no additional power traces, neither for the profiling nor for the attack.

## VII Attack Results and Impact of Additive Noise

Figure 5: Correlation coefficient against additive noise.

Since the model does not consider any measurement noise, the attacker is able to reveal all hidden parameters of the systolic array with the CPA. Fig. 4 shows the simulation results of the attack against the first parameter. The multiple peaks observed are caused by bit-shifted true values. Since the HD of bit-shifted values is the same, these weight candidates cause very similar correlation levels. The CPA of a real computing platform is most certainly influenced by measurement noise; therefore, these results should be considered the best-case scenario (from the attacker's perspective).

The template attack successfully recovers all nine weights from the processing elements with a very low number of attack traces (fewer than 15). Since we assume an adversary in a chosen-plaintext scenario, the attacker can freely decide which inputs are sent to the systolic array. By setting entire input columns to zero, the impact of most processing elements that store the pre-loaded weights is eliminated. This allows the attacker to selectively enable only a small subset (i.e., single columns) of processing elements. Just like with the CPA, bit-shifted values of the correct weight produce a high score. Therefore, it is possible to obtain multiple candidates as a result of the attack on the leftmost column. After recovering the weights from the leftmost column, the attacker can build templates that include the recovered weights. This reduces the uncertainty when attacking the middle or rightmost _PEs_, so that bit-shifted values of the correct weights do not produce false positives.

TABLE II: Impact of additive noise on attacks

| Revealed Parameters | Template Attack: SNR | Template Attack: # Attack Traces | Correlation Power Attack: SNR | Correlation Power Attack: Correlation Coefficient |
|---|---|---|---|---|
| 9/9 | $\geq$ 2.0 | 15 | > 4.0 | 0.561–0.775 |
| 8/9 | – | – | 3.5–4.0 | 0.444–0.561 |
| 0/9 | < 2.0 | – | < 3.5 | < 0.444 |

### VII-A Impact of Additive Noise on CPA

In applied cryptography, analysing the impact of additive noise on power attacks is essential [31, 32]. The backbone of such an analysis is the signal-to-noise ratio (SNR) of the leaked information [20]. Here, additive noise is used: the measurement noise is modelled by adding random values $R_{n}$ with average $\bar{R}=0$ to the power trace $P_{n}$ at each estimation point, i.e., $P_{n}+R_{n}$. The noise can be increased gradually to have a bigger impact on the power estimation value, and a fixed SNR can be set to produce noisy power traces. With this model, a threshold evaluation of the CPA's success is possible. Multiple experiments with additive noise are performed to investigate the influence of noise on the correlation coefficients. A comparison of the correlation under different amounts of additive noise is shown in Table II and illustrated in Fig. 5.
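A minimal sketch of this noise injection is shown below, assuming the SNR is defined as the ratio of the trace's variance to the noise variance (the exact convention is an assumption on our part):

```cpp
#include <cmath>
#include <random>
#include <vector>

// Add zero-mean Gaussian noise to a power trace so that the resulting
// SNR (trace variance over noise variance) equals `snr`.
// Assumes a non-constant trace (var > 0) and snr > 0.
void add_noise(std::vector<double>& trace, double snr, std::mt19937& rng) {
    double mean = 0.0, var = 0.0;
    for (double p : trace) mean += p;
    mean /= trace.size();
    for (double p : trace) var += (p - mean) * (p - mean);
    var /= trace.size();

    std::normal_distribution<double> noise(0.0, std::sqrt(var / snr));
    for (double& p : trace) p += noise(rng);  // P_n + R_n, with E[R] = 0
}
```

Lowering `snr` scales up the noise standard deviation, which is how the threshold behaviour reported in Table II is swept.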
The results show how an increasing noise level impacts the correlation coefficient, ultimately making the correct candidate indistinguishable from the other candidates. For too low SNRs, the CPA cannot successfully reveal the weights of the AI accelerator. Increasing the number of traces an attacker acquires increases the chance of a successful attack. This can give an indication of how many traces an attacker would require in a post-silicon attack.

### VII-B Impact of Additive Noise on Template Attacks

Several experiments have been conducted to study the impact of additive noise on template attacks, where both the profiling and attack traces are affected. The SNR is again used to describe the magnitude of the noise. The experiments show that recovering the weights remains as easy as without noise. Even when increasing the impact of noise, i.e., decreasing the SNR to as low as 2.0, the template attack proves to be successful with as few as 15 attack traces per targeted parameter. Consequently, template attacks are applicable at lower SNR values; i.e., template attacks are much less affected by noise if the same noise level is present during the profiling phase as well as the attack phase. Template attacks therefore pose a serious threat to implementations, even in a noisy environment. By taking advantage of input tuning (as a chosen-plaintext attack), an adversary could theoretically attack systolic arrays of any size and reveal the secret parameters. Here, a noisier environment requires the attacker to use a larger number of traces when building the template.

## VIII Model Verification

Figure 6: Systolic Array Implemented for Verification.

It was previously shown [33] that a time-based power estimation on a gate-level netlist comes quite close to post-silicon measurements. To achieve industry-grade results, we also use the Synopsys tool suite for our experiments. The verification starts with a Verilog implementation of a single PE, as well as of the whole $(3\times 3)$ systolic array, as shown in Fig. 6. The design is synthesized using Synopsys DesignCompiler [34] to generate a gate-level netlist. A pre-defined test bench is used to stimulate the netlist and gather a value-change dump (VCD) using Synopsys VCS [35]. Lastly, Synopsys PrimePower [33] creates a power trace based on the VCD in a time-based power analysis. These power traces are considered to be noise-free reference traces of the real hardware. We use these traces to verify the input-dependent power model utilized in our SystemC simulation of a systolic array. Consequently, a statistical comparison between the reference power traces and the power traces collected at the SystemC level is performed. The comparison is divided into four main experiments as follows:

Figure 7: Pearson's Correlation Coefficient between Trace Sets.

### VIII-A First Experiment

For a single PE with the same random inputs, we generate two sets of power traces (20 000 traces each): one collected at the SystemC level and one from the gate-level netlist. Then, we use Pearson's correlation coefficient (PCC) to test whether there is a linear correlation between them. The PCC lies in $[-1,+1]$, with $PCC=0$ indicating no linear correlation. The results show that there is a positive correlation between the traces, as seen in Fig. 7a. A PCC value of up to 0.65 confirms that the proposed SystemC model is linearly associated with the power consumption tendency of a real hardware implementation.
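For reference, the PCC used in these comparisons, which is the same statistic as the $\rho$ of Eq. 6 applied to two trace sets, can be computed as follows (a minimal sketch; `pcc` is an illustrative name):

```cpp
#include <cmath>
#include <vector>

// Pearson's correlation coefficient between two equally long series,
// e.g. SystemC-level estimates vs. PrimePower reference traces.
double pcc(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size();  // assumed equal to y.size() and > 0
    double mx = 0.0, my = 0.0;
    for (size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;

    double cov = 0.0, vx = 0.0, vy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        cov += (x[i] - mx) * (y[i] - my);
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
    }
    return cov / (std::sqrt(vx) * std::sqrt(vy));
}
```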
### VIII-B Second Experiment

For a single PE with two distinct sets of random inputs, we generate two sets of power traces (20 000 traces each), collected at the SystemC level and from the gate-level netlist. The goal of this experiment is to exclude a false-positive correlation for a single PE. Here, we observe a correlation coefficient close to 0, as shown in Fig. 7b. This confirms that there is no false-positive correlation between the power traces.

### VIII-C Third Experiment

The design of the whole systolic array is more complex. A test bench with full coverage of all possible input/weight combinations for all PEs would produce an enormous number of traces. Therefore, we fixed the weights in the PEs and stimulated the systolic array with 20 000 random input samples. An equivalent test bench is implemented in SystemC to produce comparable traces. As expected, the two trace sets are most strongly linearly correlated when modelling smaller pieces of hardware. Naturally, modelling larger hardware at a high abstraction level brings a drop in accuracy, and the PCC is lower, as shown in Fig. 7c. Therefore, Spearman's correlation coefficient (SCC) is used in this experiment to interpret the direction of the association between the trace sets. The sign of the SCC value indicates whether the same trends are expected between the two trace sets. When evaluating the SCC between the two sets, we observe a positive SCC of $+0.27$. This indicates a moderate monotonic (linear or non-linear) relationship between them. In other words, the SCC shows that power traces collected at the SystemC level tend to increase when the reference power traces increase.

### VIII-D Fourth Experiment

Similar to the second experiment, we aim to exclude a false-positive correlation result between the traces of the whole systolic array. With two distinct sets of random inputs, we observe both a PCC and an SCC close to 0, as shown in Fig. 7d. This indicates that there are no false positives in the power traces collected at the SystemC level. In conclusion, it can be said with high confidence that the proposed power estimation model follows the same trends as a state-of-the-art netlist-level power estimation.

## IX Conclusion

This paper presents power side-channel attacks against AI accelerator architectures at the electronic system level. Our approach features AI accelerator models with a corresponding dynamic power-consumption model to simulate the behaviour of systolic-array-based AI accelerators using SystemC. Our findings show that SystemC-based power attacks are possible and sufficiently resemble real-world threat scenarios. Our experiments successfully simulate SystemC-based power side-channel attacks against AI accelerators leading to full secret extraction: while correlation power analysis shows certain limitations in noisy conditions, template attacks pose a significant risk because of their ability to adapt to noise. To verify the SystemC power estimation model, several experiments were performed comparing power traces computed from synthesized netlists with those of the proposed model. The results show that the proposed model follows the same trends as a gate-level netlist power estimation. Our stated goal of earliest-possible threat analysis and subsequent design suggestions was thus successfully achieved and demonstrated. This work is hence an essential step in design-space exploration for security from a design/hardware perspective, and, with regard to the presented methods and procedures, to the best of our knowledge the first.
As a future step, raising the complexity further, we would like to extend this approach from a systolic array to a full system model.

## References

* [1] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse Engineering of Neural Network Architectures through Electromagnetic Side Channel,” in _Proceedings of the 28th USENIX Conference on Security Symposium_, ser. SEC’19. USA: USENIX Association, 2019, pp. 515–532.
* [2] H. Chabanne, J.-L. Danger, L. Guiga, and U. Kühne, “Side channel attacks for architecture extraction of neural networks,” _CAAI Transactions on Intelligence Technology_, vol. 6, no. 1, pp. 3–16, 2021.
* [3] Y.-S. Won, S. Chatterjee, D. Jap, A. Basu, and S. Bhasin, “WaC: First Results on Practical Side-Channel Attacks on Commercial Machine Learning Accelerator,” in _Proceedings of the 5th Workshop on Attacks and Solutions in Hardware Security_, ser. ASHES ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 111–114. [Online]. Available: https://doi.org/10.1145/3474376.3487284
* [4] K. Yoshida, M. Shiozaki, S. Okura, T. Kubota, and T. Fujino, “Model Reverse-Engineering Attack against Systolic-Array-Based DNN Accelerator Using Correlation Power Analysis,” _IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences_, vol. E104.A, no. 1, pp. 152–161, 2021.
* [5] A. Gerstlauer, C. Haubelt, A. D. Pimentel, T. P. Stefanov, D. D. Gajski, and J. Teich, “Electronic system-level synthesis methodologies,” _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_, vol. 28, no. 10, pp. 1517–1530, 2009.
* [6] H. Salmani and M. Tehranipoor, “Analyzing circuit vulnerability to hardware trojan insertion at the behavioral level,” in _2013 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS)_, 2013, pp. 190–195.
* [7] K. Xiao, A. Nahiyan, and M. Tehranipoor, “Security rule checking in ic design,” _Computer_, vol. 49, no. 8, pp. 54–61, 2016.
* [8] M. Hassan, V. Herdt, H. M. Le, D. Große, and R. Drechsler, “Early soc security validation by vp-based static information flow analysis,” in _2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)_, 2017, pp. 400–407.
* [9] M. Goli, M. Hassan, D. Große, and R. Drechsler, “Security validation of vp-based socs using dynamic information flow tracking,” _it - Information Technology_, vol. 61, no. 1, pp. 45–58, 2019. [Online]. Available: https://doi.org/10.1515/itit-2018-0027
* [10] N. Veshchikov and S. Guilley, “Use of simulators for side-channel analysis,” in _2017 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)_, 2017, pp. 104–112.
* [11] B. Bauer, M. Ayache, S. Mulhem, M. Nitzan, J. Athavale, R. Buchty, and M. Berekovic, “On the dependability lifecycle of electrical/electronic product development: The dual-cone v-model,” _Computer_, vol. 55, no. 09, pp. 99–106, Sep. 2022.
* [12] J. Treus and P. Herber, “Early analysis of security threats by modeling and simulating power attacks in systemc,” in _2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring)_, 2020, pp. 1–5.
* [13] R. Drechsler, Ed., _Advanced Formal Verification_, 2004.
* [14] B.-A. Tabacaru, M. Chaari, W. Ecker, T. Kruse, and C. Novello, “Fault-effect analysis on system-level hardware modeling using virtual prototypes,” in _2016 Forum on Specification and Design Languages (FDL)_, 2016, pp. 1–7.
* [15] G. Onnebrink, S. Schürmans, F. Walbroel, R. Leupers, G. Ascheid, X. Chen, and Y. Harn, “Black box power estimation for digital signal processors using virtual platforms,” in _Proceedings of the 2016 Workshop on Rapid Simulation and Performance Evaluation: Methods and Tools_, ser. RAPIDO ’16. New York, NY, USA: Association for Computing Machinery, 2016. [Online]. Available: https://doi.org/10.1145/2852339.2852345
* [16] F. Menichelli, R. Menicocci, M. Olivieri, and A. Trifiletti, “High-level side-channel attack modeling and simulation for security-critical systems on chips,” _IEEE Transactions on Dependable and Secure Computing_, vol. 5, no. 3, pp. 164–176, 2008.
* [17] J. Teich, “Embedded System Synthesis and Optimization,” 2000.
* [18] R. Leupers, G. Martin, R. Plyaskin, A. Herkersdorf, F. Schirrmeister, T. Kogel, and M. Vaupel, “Virtual platforms: Breaking new grounds,” in _2012 Design, Automation & Test in Europe Conference & Exhibition (DATE)_, 2012, pp. 685–690.
* [19] A. Dubey, R. Cammarota, and A. Aysu, “MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection,” in _2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)_. Los Alamitos, CA, USA: IEEE Computer Society, Dec. 2020, pp. 197–208. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/HOST45689.2020.9300276
* [20] A. Biryukov, D. Dinu, and J. Großschädl, “Correlation Power Analysis of Lightweight Block Ciphers: From Theory to Practice,” in _Applied Cryptography and Network Security_, M. Manulis, A.-R. Sadeghi, and S. Schneider, Eds. Cham: Springer International Publishing, 2016, pp. 537–557.
* [21] D. Harris and S. Harris, _Digital Design and Computer Architecture_. Morgan Kaufmann, 2010.
* [22] B. Jacob, S. W. Ng, and D. T. Wang, “Chapter 29 - power and leakage,” in _Memory Systems_, B. Jacob, S. W. Ng, and D. T. Wang, Eds. San Francisco: Morgan Kaufmann, 2008, pp. 847–864. [Online]. Available: https://www.sciencedirect.com/science/article/pii/B978012379751350031X
* [23] R. Soares, V. Lima, R. Lellis, P. Finkenauer Jr, and V. Camargo, “Hardware countermeasures against power analysis attacks: a survey from past to present,” _Journal of Integrated Circuits and Systems_, vol. 16, no. 2, pp. 1–12, 2021.
* [24] S. A. A. Shah, J. Wagner, T. Schuster, and M. Berekovic, “A lightweight-system-level power and area estimation methodology for application specific instruction set processors,” in _2014 24th International Workshop on Power and Timing Modeling, Optimization and Simulation (PATMOS)_, 2014, pp. 1–5.
* [25] H. T. Kung, “Why systolic architectures?” _Computer_, vol. 15, no. 1, pp. 37–46, 1982.
* [26] Y. Xiang, Y. Xu, Y. Li, W. Ma, Q. Xuan, and Y. Liu, “Side-Channel Gray-Box Attack for DNNs,” _IEEE Transactions on Circuits and Systems II: Express Briefs_, vol. 68, no. 1, pp. 501–505, 2021.
* [27] F. Durvaux, F. Standaert, and N. Veyrat-Charvillon, “How to certify the leakage of a chip?” in _Advances in Cryptology - EUROCRYPT 2014 - 33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Copenhagen, Denmark, May 11-15, 2014. Proceedings_, ser. Lecture Notes in Computer Science, P. Q. Nguyen and E. Oswald, Eds., vol. 8441. Springer, 2014, pp. 459–476. [Online]. Available: https://doi.org/10.1007/978-3-642-55220-5_26
* [28] O. Bronchain, J. M. Hendrickx, C. Massart, A. Olshevsky, and F. Standaert, “Leakage certification revisited: Bounding model errors in side-channel security evaluations,” in _Advances in Cryptology - CRYPTO 2019 - 39th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 18-22, 2019, Proceedings, Part I_, ser. Lecture Notes in Computer Science, A. Boldyreva and D. Micciancio, Eds., vol. 11692. Springer, 2019, pp. 713–737. [Online]. Available: https://doi.org/10.1007/978-3-030-26948-7_25
* [29] E. Brier, C. Clavier, and F. Olivier, “Correlation Power Analysis with a Leakage Model,” in _Cryptographic Hardware and Embedded Systems - CHES 2004_, M. Joye and J.-J. Quisquater, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 16–29.
* [30] S. Chari, J. R. Rao, and P. Rohatgi, “Template attacks,” in _Cryptographic Hardware and Embedded Systems - CHES 2002_, B. S. Kaliski, Ç. K. Koç, and C. Paar, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003, pp. 13–28.
* [31] L. Lerman, N. Veshchikov, S. Picek, and O. Markowitch, “On the construction of side-channel attack resilient s-boxes,” in _International Workshop on Constructive Side-Channel Analysis and Secure Design_. Springer, 2017, pp. 102–119.
* [32] F.-X. Standaert, T. G. Malkin, and M. Yung, “A unified framework for the analysis of side-channel key recovery attacks,” in _Annual International Conference on the Theory and Applications of Cryptographic Techniques_. Springer, 2009, pp. 443–461.
* [33] “Synopsys PrimePower,” https://www.synopsys.com/implementation-and-signoff/signoff/primepower.html, accessed: 2023-05-17.
* [34] “Synopsys DesignCompiler,” https://www.synopsys.com/implementation-and-signoff/rtl-synthesis-test/dc-ultra.html, accessed: 2023-05-17.
* [35] “Synopsys VCS,” https://www.synopsys.com/verification/simulation/vcs.html, accessed: 2023-05-17.
# Payroll Tax Incidence: Evidence from Unemployment Insurance

Audrey Guo

Santa Clara University. Contact: <EMAIL_ADDRESS>. I would like to thank Katarzyna Bilicka, Maria Fitzpatrick, Tatiana Homonoff, John Ifcher, Kris Mitchener, and seminar participants at Georgetown Law, UC Davis, Santa Clara University, the IIPF Annual Congress, and the All-California Labor Conference for helpful comments. Any views expressed are those of the author and not those of the U.S. Census Bureau. The Census Bureau's Disclosure Review Board and Disclosure Avoidance Officers have reviewed this information product for unauthorized disclosure of confidential information and have approved the disclosure avoidance practices applied to this release. This research was performed at a Federal Statistical Research Data Center under FSRDC Project Number 1632. (CBDRB-FY22-P1632-R9305-R9674) This research uses data from the Census Bureau's Longitudinal Employer Household Dynamics Program, which was partially supported by National Science Foundation Grants SES-9978093, SES-0339191, and ITR-0427889; National Institute on Aging Grant AG018854; and grants from the Alfred P. Sloan Foundation.

(March 2023)

###### Abstract

Economic models assume that payroll tax burdens fall fully on workers, but where does tax incidence fall when taxes are firm-specific and time-varying? Unemployment insurance in the United States has the key feature of varying both across employers and over time, creating the potential for labor demand responses if tax costs cannot be fully passed on to worker wages. Using state policy changes and matched employer-employee job spells from the LEHD, I study how employment and earnings respond to payroll tax increases for highly exposed employers. I find significant drops in employment growth driven by lower hiring, and minimal evidence of pass-through to earnings. The negative employment effects are strongest for young and low-earning workers.

## 1 Introduction

Employer taxes on labor earnings are used to fund a variety of social insurance programs, and policymakers frequently view payroll tax cuts as a way to stimulate employment. In the theory of tax incidence, the resulting employment and wage change depends on the elasticities of labor demand and labor supply; because labor demand is assumed to be much more elastic than labor supply (especially in the long run), most models assume payroll tax costs are borne entirely by workers, resulting in little to no effect on employment.¹ As evidenced by early estimates from Brittain (1971) and Gruber (1997). However, there may be imperfect pass-through in the presence of downward wage rigidities, leading to a decline in employment rather than wages.² Using data from a large payroll processing firm, Grigsby et al. (2021) find very low rates of wage cuts, providing strong evidence for downward nominal wage rigidity. Using Census Bureau data, Murray (2021) estimates that downward nominal wage rigidity accounted for at least 23% of excess job destruction during the financial crisis of 2008. Hamermesh (1979) points out that if the entire burden of a payroll tax could be shifted onto wages, employers would not so vehemently oppose tax increases. Furthermore, payroll tax changes that only affect a subset of firms may prevent employers from fully passing costs through to wages.³ Likewise, Saez et al. (2012) and Saez et al. (2019) found that payroll tax changes only affecting a subset of workers within a firm also failed to result in full pass-through.
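As a point of reference, the textbook competitive-market benchmark behind this assumption (a standard result, not one derived in this paper) is

$\text{worker share}=\frac{\varepsilon_{D}}{\varepsilon_{D}+\varepsilon_{S}},$

where $\varepsilon_{D}$ and $\varepsilon_{S}$ denote the labor demand and labor supply elasticities in absolute value. The worker share approaches one as labor demand becomes perfectly elastic or labor supply perfectly inelastic, in which case wages fall by the full amount of the tax and employment is unchanged; downward wage rigidity blocks exactly this wage adjustment and pushes the response onto employment instead.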
In this paper, I exploit state Unemployment Insurance (UI) tax increases to estimate payroll tax incidence in the United States. The third largest payroll tax in the U.S., UI is an employer-specific and time-varying tax that is administered at the state level. Employer tax rates are an increasing function of the amount of UI benefits paid out to previous employees, and in 2010 and 2011, state-legislated UI tax increases raised the payroll tax costs for highly exposed employers. I use administrative data from the US Census Bureau's Longitudinal Employer-Household Dynamics (LEHD) to study the impact of these state tax increases on worker earnings and employment growth. I identify a subset of highly exposed employers, defined as those with a history of previous layoffs, and use a difference-in-differences research design to compare the outcomes of employers based in treatment versus control states.

I estimate minimal pass-through of state tax increases to earnings, and instead find negative impacts on employment growth. For each $100 increase in payroll taxes per worker, employment growth declined by 0.43 percentage points over the course of the year. Event study estimates found the largest effects in the second and third years after the initial policy changes, which suggests that UI tax increases hindered the economic recovery after the Great Recession. These results are further confirmed using a triple-difference research design that compares high exposure and low exposure employers within the same state, to control for potential differences in state economic conditions in the wake of the Great Recession. I test whether this drop in employment growth is driven by increased separations or reduced hiring, and find clear evidence of the latter. The quarterly share of new hires fell by up to 0.2 percentage points for each $100 tax increase, while the effect on the separation share was small and statistically insignificant. These findings align with firm tax incentives, as each additional new hire would incur another instance of UI tax; on the other hand, additional separations provide no UI tax savings, since low UI tax bases cause most earnings to exceed the tax base by the end of the first quarter.

Heterogeneity analysis shows that the negative employment effects are stronger for young (under age 25) and low-earning workers. Low-earning workers also experienced larger drops in earnings growth than the average worker, but still well below full pass-through. Additionally, single-establishment firms were more likely to reduce hiring than multi-establishment firms, suggesting that cash constraints may also play an important role.

The main contribution of this paper is providing a U.S. context in which to study payroll tax incidence, as the lack of variation in federal payroll taxes has caused US-based studies to be largely absent from the recent literature. Given differences in labor market institutions, estimates of tax incidence in one country may not generalize in the same way to another. State-level UI tax increases provide a setting similar to Ku et al. (2020), who found that regional payroll tax increases in Norway lowered employment because firms were only partially able to pass the costs on to workers. Saez et al. (2021) find that the positive employment effects from a payroll tax cut for young workers persisted even after workers aged out of eligibility. Likewise, I estimate a gradual and sustained decrease in employment growth due to payroll tax increases.
Benzarti and Harju (2021a) found that a payroll tax cut in Finland helped treated firms weather the Great Recession, despite having minimal impact on employment beforehand, suggesting a cyclical component to firm responses. My findings of a negative employment response to tax increases following the Great Recession confirm the symmetry of this effect. Another contribution is the estimation of heterogeneous effects across worker and employer characteristics. Benzarti and Harju (2021b) found that firms facing greater payroll taxes substituted away from low-skilled and manual workers. I find greater drops in labor demand for low-earning and young workers, but not for those with less education. My finding that single-establishment firms were especially likely to decrease hiring in response to higher UI taxes is consistent with the finding in Lobel (2021) that small firms were more responsive to payroll tax cuts. Biró et al. (2022) estimate heterogeneity in payroll tax incidence by firm productivity, finding that lower productivity firms experienced employment responses while higher productivity firms passed tax cuts on to workers. While I do not observe firm productivity, my sample of high exposure employers likely overlaps with lower productivity firms, as they suffered layoffs prior to the tax increases.

Finally, my paper contributes to the small literature on UI tax incidence by revisiting the question using recent data and exogenous policy changes from multiple states. Early studies by Anderson and Meyer (1997) and Anderson and Meyer (2000) found evidence of pass-through for industry-specific UI taxes but not employer-specific rates, although the variation in employer-specific rates included both high and low exposure employers together. My findings also complement the results in Johnston (2021), which found little effect of annual changes in employer-specific rates on worker earnings, but large impacts on hiring.

The next section provides institutional details on US Unemployment Insurance. Section 3 describes the research design, and Section 4 discusses the data and construction of my analysis sample. Section 5 presents the main employer-level results, and Section 6 presents a worker-level analysis focusing on low-earning workers. Section 7 concludes.

## 2 Institutional Background

Unemployment insurance in the U.S. is funded solely through payroll taxes on employers.⁴ With the exception of New Jersey, which also charges workers a uniform payroll tax of 0.5%. Unlike Social Security or Medicare taxes, which are uniform across all employers, UI is experience-rated and therefore employer-specific. UI tax revenues reached a height of 1 percent of total wages in 2012, but this masks considerable heterogeneity across employers. Employers that lay off workers who then claim UI benefits are assigned a higher future tax rate, up to a statutory maximum.⁵ Guo and Johnston (2021) provides a brief history of U.S. unemployment insurance, which was modeled after the experience rating in workers' compensation. Since its inception, policymakers have recognized the tradeoff between disincentivizing layoffs and the cyclical tax burdens that dynamic experience rating would create. These employer-specific rates are calculated annually, and Appendix Figure A.1 illustrates a sample tax schedule from the state of Florida, where the maximum tax rate is capped at 5.4%.
Appendix Figure A.2 plots total UI contributions over time as a percentage of total wages, and reveals the cyclical pattern of UI tax collections, where tax rates are highest in the years following recessions. Due to this time-varying nature of UI tax rates, businesses may have more difficulty passing the costs on to their workers, and changes in UI tax liabilities have the potential to influence employment as well as earnings.

Another unique aspect of UI is the low tax base in most states. Throughout the last half-century, taxable wages have not kept up with inflation, leading to an erosion of the tax base. In 2009, the year before the first UI tax increases, the median UI tax base was only $10,000, with the median state's taxable wages covering only 28% of total payroll. This low tax base effectively turns UI into a lump-sum tax per worker, independent of earnings. Thus UI taxation is regressive in the sense that seasonal, part-time, and low-wage workers are taxed at a higher effective tax rate (as a share of earnings), and these types of workers become relatively more costly for a business facing high UI tax rates. To illustrate, at a $10,000 tax base and a 5.4% maximum rate, the $540 annual tax per worker amounts to 5.4% of earnings for a part-time worker earning $10,000 a year, but only about 1% for a full-time worker earning $50,000. Huang (2022) finds that higher state taxable wage bases increase teenage employment rates, and Duggan et al. (2022) find that higher tax bases increase labor demand for low-wage part-time workers.

There is substantial variation in UI tax schedules across states, and a tax increase in one state raises payroll costs for incumbent employers without affecting tax regimes in any other. Many state UI programs are underfunded, leading them to deplete their UI trust funds during recessions. During the Great Recession, 36 states depleted their UI trust funds and were forced to take out loans from the federal government; this occurred again for 18 states during the height of the COVID-19 pandemic. While many states recognized the need to update their UI tax schedules to replenish their trust funds following the Great Recession, the timing and the size of the legislated tax increases varied greatly. At the same time, there were proposals to increase the federally mandated UI tax base (of $7,000), which would have given states a way to obtain more tax revenue without needing to legislate their own UI tax policy. California, for example, did not legislate any UI tax increases despite a UI trust fund debt of over $10 billion.

## 3 Research Design

In 2010 and 2011, a total of sixteen states implemented UI tax increases in order to replenish their trust funds. Coupled with the mechanical increase in employer-specific tax rates due to experience rating, these tax hikes could amount to a 2-5% increase in effective tax rates. For my LEHD research sample of 23 states and the District of Columbia, Figure 1 graphs the distribution of maximum tax increases (tax base × maximum rate) from 2009 to 2011. Among this subsample, I exclude 7 states from my analysis due to baseline differences in UI tax regimes. Six of these excluded states automatically index UI tax bases to wage growth, resulting in incremental and predictable tax increases every year throughout my sample period (and thus greater tax costs at baseline than the treatment states). I also exclude Pennsylvania, which experienced a UI tax decrease in 2011-12. From the remaining states, I identify 9 treatment states whose statutory maximum UI taxes rose by more than $100 from 2009 to 2011, leaving a remainder of 8 control states with minimal tax changes during this window.
From 2009 to 2011, treatment states experienced per-capita maximum UI tax increases ranging from just under $200 to over $600, and the average increase was $378. In percentage terms, maximums increased by 67% on average, and the largest tax increase was experienced by South Carolina (in both level and percentage terms).⁶ Appendix Figure A.3 plots maximum and average UI taxes over time for each control state, and Appendix Figures A.4 and A.5 plot them for treatment and excluded states, respectively. Figure 2 plots the locations of these 17 LEHD states and the change in maximum UI taxes from 2009 to 2011.

In order to focus on employers for whom these state UI tax increases would be binding, I create a sample of high exposure employers that are likely to be close to the maximum tax rate following the Great Recession. I first estimate event study regressions to test for differential pre-trends in treatment states relative to control states. I then estimate pooled difference-in-differences regressions to estimate a continuous treatment effect from UI tax increases. To establish a consistent sample for both regression specifications, I restrict to the period 2008 to 2013, to allow for at least two years of pre-change data and three years of post-change data.

Estimating an event study requires defining a single starting date for each state tax increase. While some state increases occurred all at once, other states legislated a more gradual increase from 2010 to 2012. The pooled difference-in-differences estimation will incorporate this variation in magnitude and timing, but for the event study I simply define event time relative to the quarter before the initial tax increase occurs ($event=0$ in either 2009:Q4 or 2010:Q4 for treatment states, and event time always equals zero for control states). I then estimate the following regression specification:

$Y_{fst}=\alpha_{f}+\sum_{k=-8,k\neq-1}^{13}\beta_{k}\mathbb{1}(event=k)+NAICS*\delta_{t}+\gamma minwage_{st}+\epsilon_{fst}$ (1)

Here $f$ indexes employer (SEIN), $s$ indexes state, $t$ indexes year-quarter, and $k$ indexes the quarter relative to the policy change. I group all quarters more than two years before the policy change together into one estimate $k=-8$, and all quarters more than three years after the policy change together into $k=13$. To allow for potential anticipation effects, the treatment effects are estimated relative to the outcome two quarters prior to the tax change (i.e., if a state increased taxes in 2010, the baseline would be Q3 of 2009, and if a state increased taxes in 2011, the baseline would be Q3 of 2010). Thus the excluded quarter is $k=-1$, and the first quarter of the tax change is labeled $k=1$.⁷ States with 2010 increases are Arkansas, Maine, Maryland, Tennessee, and West Virginia; states with 2011 increases are Illinois, Indiana, Oklahoma, and South Carolina. Coefficients $\beta_{-8}$ to $\beta_{-2}$ test for differential pre-trends prior to the policy changes. Fixed effects are included for the SEIN, as well as for the year-quarter interacted with 2-digit NAICS sector, as some sectors such as construction were disproportionately impacted by the Great Recession. The primary outcomes of interest include mean log quarterly earnings for stable employees (excluding hires and separations), as well as year-over-year employment growth and hiring rates.
I also estimate a pooled difference-in-differences regression with a continuous treatment variable, $Taxchange_{st}$, defined as the dollar change in maximum UI tax relative to 2009 (and defined to equal zero in years 2008 and 2009). The baseline regression specification is as follows: $Y_{fst}=\alpha_{f}+\sum_{q=1}^{4}\beta_{q}*Taxchange_{st}+NAICS*\delta_{t}+\gamma minwage_{st}+\epsilon_{fst}$ (2) Here $f$ indexes employer, $s$ indexes state, and $t$ indexes year-quarter. I estimate four coefficients of interest $\beta_{1}$ to $\beta_{4}$ to allow for differential responses by calendar quarter, because UI tax burdens are largest in Q1. The identifying assumption is that in the absence of UI tax policy changes, employers in the same 2-digit NAICS sector in treatment states and control states would have evolved similarly in the years following the Great Recession. It is worth noting that after the Great Recession, there was also a significant federal extension of UI benefit duration beyond the typical 26 weeks, through both the Emergency Unemployment Compensation (EUC) program and the Extended Benefits (EB) program. Benefit duration reached a potential maximum of 99 weeks in 2010-2011, before dropping in 2012 and coming abruptly to an end at the close of 2013. Farber and Valletta (2015) and Farber et al. (2015) found little impact of these benefit extensions on job finding, as they primarily caused fewer workers to leave the labor force and led to long-term unemployment. Thus, I assume the extension of UI benefits does not differentially impact worker labor supply during my analysis period. ## 4 Data This project uses the Census Bureau’s Longitudinal Employer-Household Dynamics (LEHD), an administrative employer-employee matched dataset of quarterly earnings, directly sourced from state UI records. Each employer in the LEHD is assigned a state EIN number (SEIN), and this will be my firm definition. A SEIN is also the level at which UI tax rates are assigned to employers. It can encompass multiple establishments within a state, but no employment outside the state. And just as large firms may have multiple EINs associated with their business, these firms may also own multiple SEINs within a given state. To construct my sample, I include employers from 9 treatment states and 8 control states, covering the time period from 2008 to 2013 (allowing for an unbalanced panel). I also drop the Retail Trade and Accommodation & Food Services sectors (NAICS 44-45 and NAICS 72). These two sectors experience high turnover and low UI take-up rates, such that it is very difficult to classify employers into high versus low experience rating. The exclusion of these sectors should not affect the generalizability of my results, as they rank among the sectors with the lowest average UI tax costs, just above Healthcare and Education (Guo and Johnston (2021)). The first outcome of interest is average quarterly earnings, either calculated for stable workers (which excludes hires and separations) or for all workers. Since no information on hours is collected, incidence on earnings may also reflect changes in hours. In additional analyses in Section 6, I identify low- earning workers in the LEHD to separately estimate the impacts on these workers with the highest relative UI tax burdens. To study employment outcomes, I count hires and separations based on the beginning and end of employment spells, and divide by employment to define quarterly shares of new hires and separations. 
I also measure quarterly employment growth using Davis-Haltiwanger-Schuh (DHS) growth rates.

### 4.1 Calculating UI Tax Exposure

State UI tax increases affected either the maximum tax rate or the tax base, both of which require a sufficiently high experience rating to be binding. Because I do not directly observe employers' payroll tax rates in the LEHD, I impute exposure to UI tax increases based on separation history. Employers who never separate workers into non-employment are inferred to have a low UI tax rate (low exposure), and employers who separate a significant share of workers into non-employment are inferred to have a tax rate close to the state maximum (high exposure). It is not layoffs themselves that impact employer-specific tax rates, but rather the amount of UI benefits claimed by laid-off workers.

In order to identify separations that could be eligible for UI benefits, I first limit the scope of consideration to workers with at least 2 consecutive quarters of earnings, and high-quarter earnings of at least $1500 (the minimum requirement to be eligible for UI benefits in most states). I also ignore workers younger than age 18 (to abstract from job separations driven by schooling) or older than age 60 (to abstract from retirement timing). I then impute a layoff as a job separation in quarter $t$ that results in at least one quarter of zero earnings from quarters $t+1$ to $t+3$. Separations that were the result of moves to a different state will not be captured in the LEHD data, so these could also be miscoded as layoffs. I also drop employers with fewer than 20 or greater than 500 workers at baseline (2009:Q3), to minimize measurement error in this imputation.

Then for each SEIN, I aggregate the number of imputed layoffs over two separate pre-periods, and use these two cumulative shares to define inclusion in my analysis sample as a high exposure firm. The first pre-period runs from 2006:Q1 to 2007:Q4, and the second runs from 2008:Q1 to 2009:Q2 (the Great Recession began in December 2007 and ended in June 2009).⁸ I use the three-year period before the first state tax increases in order to best capture layoffs that could lead to potential UI tax increases. Layoffs that occurred prior to 2006 are unlikely to impact tax rates in 2010 and later, because most firms will have already finished experiencing tax increases. For example, layoffs during the 2008-09 Great Recession caused tax rates to peak in 2011-12. In both treatment and control states, I restrict the sample to SEINs that in each of the two pre-periods experienced total layoffs of 33-100% of their 2009:Q3 baseline employment. I also reserve a subset of SEINs with imputed layoffs of less than 15% from 2006:Q1-2007:Q4, and less than 10% from 2008:Q1-2009:Q2. Given the likelihood of overestimating layoffs, an SEIN with an imputed layoff rate of less than 10% very likely experienced zero or very few UI benefit claims. This set of employers, which is expected to be the least affected by UI tax increases due to their distance from the maximum tax rate, will act as a "placebo" or low exposure sample in triple-difference analyses.

Separation into non-employment is still an imperfect predictor of eventual experience rating, because of imperfect take-up rates and substantial variation in UI duration. Thus I view this exercise as accepting that there will be Type I error, in favor of minimizing Type II error. My results are also robust to using alternative thresholds to define this sample of high exposure employers.
In new work using administrative data from Washington, Lachowska et al. (2021) find that the take-up of UI among workers plays as important a role as the separation rate in determining a given employer's future UI tax rate. Thus the measurement error introduced by inadvertently including some employers with low UI tax exposure may attenuate my estimates.

### 4.2 Summary Statistics

LABEL:summstats reports summary statistics for my sample of firms in treatment and control states at baseline in the third quarter of 2009. They are very similar in size, with average employment of roughly 67 workers. Average layoff histories also match extremely well, with the average employer having experienced cumulative layoffs in 2006-07 that amounted to 57% of their 2009:Q3 employment, and 50% for the period of 2008:Q1 to 2009:Q2. One area of mismatch, however, is annual earnings, with control states paying higher wages than treatment states. This is likely due to state-level differences in cost of living and minimum wages. Federal minimum wage increases occurred in July of 2007, 2008, and 2009, and treatment states were more likely to have faced binding minimum wage increases. Thus I also include a control for the state minimum wage in all of my regression specifications. This table also shows that the construction sector is overrepresented among high-exposure firms; nationally the industry only accounts for 10% of establishments, while in my analysis sample construction employers make up over 20%.

## 5 Results

I first establish that increases to the statutory maximum UI tax translated into increases in actual taxes paid. In the absence of employer-specific tax rates, I estimate an event study regression using aggregate data from the public-use Quarterly Census of Employment and Wages (QCEW). The observation level is a 4-digit industry, state, and quarter, and effective UI tax rates are calculated by dividing quarterly UI contributions by taxable quarterly payroll. Using construction industries as a proxy for high exposure employers (as they have the highest average tax costs), Figure 3 shows that the policy changes in treatment states led to an increase in UI tax costs, predominantly in the first two quarters of the calendar year (Q1 coincides with event times -7, -3, 1, 5, and 9). On average, Q1 tax rates in treatment states increased by 0.83, 1.1, and 1.0 percentage points over the three years following a policy change, relative to control states. Estimates from a difference-in-differences specification in Appendix LABEL:erateDD find that each $100 increase in maximum UI taxes increased effective UI tax rates by 0.16 percentage points in the first quarter of the year. However, this average effect at the industry level masks substantial heterogeneity in effective tax rates that occurs at the worker level, especially for workers who earn below the taxable wage base. This will be explored further in Section 6.

### 5.1 Event Study

To study the impact on earnings, Figure 4 plots event study estimates from Equation 1 using log quarterly earnings as the outcome. Log earnings are calculated by taking the firm-level average of log quarterly earnings for all stable employees continuously employed from quarter $t-1$ to $t+1$. By excluding new hires and separations, this ignores any potential drops in earnings due to incomplete job spells.
The baseline estimates in Panel A show that, even after controlling for state minimum wages, employers in treatment states paid lower wages than those in control states prior to the policy change, and had a pattern of lower first quarter earnings relative to the rest of the year. This persistent seasonality in earnings is likely driven by regional differences in climate. Two of the largest control states, California and Florida, are known to have very mild winters and thus less seasonal employment in high exposure industries such as construction. Panel B shows that including state-by-calendar-quarter fixed effects helps to smooth out these seasonal patterns. After controlling for state-specific seasonality, we still observe small drops in average earnings in the first quarter (when tax burdens are greatest), but the magnitudes are very small and not statistically significant (implying at most a 1.2% drop in Q1 earnings). This therefore suggests that UI tax increases have a minimal impact on workers' average quarterly earnings.

Turning next to the employment margin, Panel A of Figure 5 plots baseline estimates of year-over-year employment growth, and Panel B includes state-by-calendar-quarter fixed effects. Employment growth shows a sustained decline in treatment states after the tax increase, with a 2 percentage point drop by the end of the first year, and a 4 percentage point drop after the second. This growing treatment effect over time could be driven by two factors. First, looking at a breakdown of taxes by treatment state in Figure A.4, we see that some states experienced a gradual tax increase that peaked in 2012 (either a year or two after the initial increase). Second, average tax rates were also highest in 2011-2012, as layoffs from the Great Recession took a few years to translate into higher tax rates. Panel B shows that this pattern is also robust to the inclusion of state-specific calendar quarter fixed effects, which account for differences in geographic seasonality.

### 5.2 Difference-in-Differences

I next use a difference-in-differences estimation strategy to estimate average treatment magnitudes. In these specifications, I now use a continuous treatment measure equal to the change in the maximum UI tax schedule between time $t$ and 2009, and equal to zero prior to 2009. I estimate specifications described by Equation 2, which include employer (SEIN) and industry-by-time fixed effects, as well as a control for the state minimum wage. LABEL:earntable reports estimates for three earnings outcomes, with unweighted estimates in the odd columns and estimates weighted by firm employment in the even columns. The baseline specifications are unweighted, since the analysis sample is already restricted to SEINs with between 20 and 500 workers in 2009, although weighting generally has little impact on the estimated magnitudes. Columns 1 and 2 calculate the employer's average quarterly earnings across stable workers who were employed for the full quarter, while Columns 3 and 4 include all workers when calculating the average. Both outcomes show a small increase in log quarterly earnings after the tax change in all quarters except for Q1, consistent with the observed pre-trend in Figure 4.
The absence of an increase in the Q1 DD coefficient implies a maximum tax incidence of 40% for every additional dollar of UI tax (given that $100 is roughly 1% of mean quarterly earnings), although this effect is likely driven by seasonal employment patterns and is not robust to a triple-difference specification. In Columns 5 and 6 I estimate an alternative measure, which is how likely workers are to experience an earnings increase in a given quarter. Because quarterly earnings vary throughout the year (especially when inclusive of hours), I define a simple indicator for whether the worker earned more in the current quarter than in the previous two quarters (ignoring any workers with less than 3 quarters of tenure). The firm-level outcome is then calculated as the share of workers satisfying the previous definition. This measure is expected to be correlated with the share actually receiving raises, but estimates are likely to be attenuated by measurement error due to the quarterly frequency. These estimates provide further evidence for a minor degree of tax incidence in the first quarter of the year; for every $100 increase in per-capita maximum UI taxes, the share of workers receiving a raise in earnings in Q1 falls by 0.4 percentage points.

Given the evidence against full tax incidence on earnings, I next investigate employment-related outcomes. Table 3 reports estimates for three different outcomes, and estimates are robust to weighting by firm employment (even columns). I define year-over-year employment growth as the DHS growth rate of quarterly employment relative to the same quarter of the previous year, $\frac{Emp_{t}-Emp_{t-4}}{\frac{1}{2}(Emp_{t}+Emp_{t-4})}$, which creates a symmetric growth measure that allows for entry and exit (a minimal implementation sketch is given at the end of this subsection). The baseline unweighted specification in Column 1 estimates that UI tax increases lower year-over-year employment growth, and the effect compounds throughout the year. A $100 increase in per-capita UI taxes lowers employment growth by 0.43 percentage points by the end of the year, and the magnitude is even larger when weighted by firm employment. This implies that in a state like Indiana, which experienced a $500 jump in maximum UI taxes in 2011, employment growth fell by 2.2 percentage points in high-exposure firms, relative to similar counterparts in control states.

To decompose this employment change into hires and separations, I define the quarterly new-hire share as the number of new hires divided by employment, and the quarterly separation share as the number of separations divided by employment. The estimates show a negative impact on hiring, with no statistically significant effect on separations. Column 3 finds that a $100 increase in per-capita maximum UI taxes lowers both the first-quarter and last-quarter hiring rate by 0.17 percentage points.

This pattern of negative employment growth and hiring is consistent with the mechanism of greater UI tax burdens. At the start of the calendar year all earnings up to the tax base are eligible for UI taxes, and thus create a large payroll tax burden for the employer. Each additional worker hired incurs an additional UI tax burden, whereas additional separations do not generate a tax savings and could even push future costs higher if the separated workers claim UI benefits. Towards the end of the calendar year, most if not all existing workers have exceeded the UI tax base and no longer cost their employer any additional payroll tax. However, the earnings of new hires still incur taxes up to the same tax base, resulting in a much higher effective tax rate for new hires; this could explain the observed reluctance to hire in the last quarter of the year. Although I am unable to observe this, another possible explanation for the drop in labor demand is substitution to contract workers, who would not be subject to UI taxation (for example, Kugler et al. (2017) found that payroll tax cuts in Colombia increased rates of formal employment among affected workers).
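As referenced above, a minimal sketch of the DHS growth measure, assuming a balanced quarterly firm panel with hypothetical column names:

```python
import pandas as pd

def dhs_growth(df: pd.DataFrame) -> pd.Series:
    """Year-over-year DHS growth rate of quarterly employment.

    Assumes one row per employer (sein) per consecutive integer-coded
    year-quarter (yearq), with employment in `emp`; zero employment rows
    are needed for the measure to capture entry and exit.
    """
    df = df.sort_values(["sein", "yearq"])
    emp_lag = df.groupby("sein")["emp"].shift(4).fillna(0)  # same quarter, prior year
    emp = df["emp"].fillna(0)
    denom = 0.5 * (emp + emp_lag)
    # The DHS rate is bounded in [-2, 2]: +2 for entrants, -2 for exiters.
    return ((emp - emp_lag) / denom).where(denom > 0, 0.0)
```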
### 5.3 Heterogeneity by Worker and Employer Characteristics

To allow for heterogeneous responses for different types of workers, Table 4 analyzes employment outcomes for three specific groups of workers. First, I study the impact on the hiring rate of low-earning workers, defined as those who earned less than $3000 in the quarter after being hired (inflation adjusted to 2015 dollars; the $3000 threshold was chosen to coincide with the federal poverty level for a single-person household, $11,770 in 2015). Column 1 shows that a $100 increase in the maximum UI tax lowered the Q1 share of low-earning new hires by 0.1 percentage points; benchmarked to the sample mean of 6.5%, this indicates a larger negative response than for the overall share of new hires from the previous table.

Next I calculate year-over-year employment growth rates for two subgroups of interest: workers younger than 25 years of age, and those with only a high school education or less (conditional on being 25 or older; outcomes for workers older than age 55 were also estimated, but the results did not warrant submission for disclosure). Since small firms may not always employ these types of workers, growth rates are defined to equal zero if a firm did not employ any of these workers in either the current or previous calendar quarter, resulting in a larger difference between the weighted and unweighted estimates when weighting by firm employment. Recall the previous estimate of a $100 increase in maximum UI taxes lowering employment growth in Q4 by 0.43 percentage points (Column 1 of Table 3). For young workers, the impact is an even larger decline of 0.66 percentage points for each $100 increase. Growth rates for low-educated workers are more similar to the overall effect, with an estimated decline of 0.39 percentage points by the end of the year. This suggests that young workers who are just entering the workforce are the most impacted by falls in labor demand due to UI tax increases.

To test for heterogeneous responses by different types of employers, I classify employers based on their characteristics at baseline (2009:Q3). I focus on three observable characteristics: median worker earnings, multi-establishment status, and firm age. The latter two are also potential proxies for whether the employer would face cash or financing constraints, as a surprise increase in UI taxes could restrict available cash flow. Recall that all firms in the analysis sample are already age 5 or older by 2010, as they needed a long enough layoff history to be identified as high exposure. The main outcome of interest is the new hire share, and the estimation augments the difference-in-differences specification described by Equation 2. In each specification, I add four additional interaction terms (one per calendar quarter), interacting UI tax changes with one of the following indicators defined at baseline: median quarterly earnings below $6000, single-establishment firm (both in-state and nationally), or firm age less than 20 years.

Figure 6 plots estimates for each of the three characteristics. The estimates show little difference in hiring whether or not an employer has a majority of low-earning workers. However, the single-establishment interactions show a large negative response for firms that only have a single location (seen in the interactions for Q1, Q3, and Q4), while multi-establishment firms are essentially unaffected. This indicates that single-establishment firms with less access to resources are the most likely to lower hiring in response to UI tax increases. Among these employers, a $100 maximum tax hike lowered hiring by 0.23pp, 0.22pp, and 0.41pp in quarters 1, 3, and 4, respectively. Finally, the last set of markers shows that younger firms are also more likely than older firms to reduce hiring in response to a UI tax increase, although the magnitudes are smaller than those for single-establishment firms.

Appendix Figure A.6 plots estimates for the separations share as well. While I estimated no overall effect on separations, the quarterly interactions suggest that single-establishment firms exposed to a tax increase were actually less likely to experience separations in the first two quarters of the year, relative to multi-establishment firms. A potential mechanism could be a greater effort towards worker retention in the absence of hiring.

### 5.4 Triple-Difference Design and Robustness

In order to control for state-specific factors that could be confounding the estimates, I also conduct a difference-in-difference-in-differences estimation comparing the main sample of high-exposure employers to the placebo sample that experienced minimal layoffs. Because UI tax increases are more binding for employers near the maximum tax rate, we can use employers with minimal layoffs – and therefore low exposure to UI tax increases – to control for state-level variation that may influence both tax policy and employment outcomes.

Figure 7 plots event study estimates for year-over-year employment growth using this triple-difference design on the expanded sample of both high-tax and low-tax employers. It estimates Equation 1 with not only the event-time treatment dummies, but also a set of treatment dummies interacted with whether the employer is classified as high-tax. These interactions become the new coefficients of interest, while the baseline event-time estimates control for confounding conditions in treatment states. The estimates show no differential pre-trend prior to the tax increases, and a sustained drop in growth after, reaching negative 5 percentage points by the end of the second year. This provides further evidence of a causal relationship between UI tax increases and drops in labor demand.

Table 5 reports estimates of a pooled triple-difference regression specification studying earnings, hiring, and separations. While the Taxchange*High coefficients for the new hire share and separation share are consistent with those from the baseline difference-in-differences specification, we no longer observe an impact on log earnings or the share receiving raises.
This implies that the previously observed negative impact on Q1 earnings and raises was due to underlying differences between treatment and control states unrelated to UI tax increases, and that the wage incidence of UI tax increases is minimal. Table 6 reports estimates for employment growth outcomes. The triple-difference interactions are all statistically significant and larger in magnitude than the original difference-in-differences estimates. Within just the first quarter of the year, employment growth drops by 0.5 percentage points for every $100 UI tax increase, and this negative effect grows to -0.74pp by the end of the year. The employment growth of young workers was even more negatively impacted, with each $100 in tax increases lowering end-of-year growth by 0.9pp. These results are also robust to the inclusion of state-by-year-quarter fixed effects, reported in Appendix Table A.2.

My results are also robust to a number of sensitivity checks of sample composition (estimates from these robustness checks have not been disclosed from the Census RDC, but would be available for additional disclosure should they be required). First, while I opted to exclude the retail trade and food/hospitality sectors due to a higher likelihood of measurement error, both the difference-in-differences and triple-difference estimates are robust to the inclusion of these employers. Second, because I do not observe actual employer-specific tax rates, I measure expected UI tax costs based on imputed layoff histories. Small changes to the cutoff of layoff shares that warrant inclusion in my main analysis sample had no qualitative effect on my estimates. Additionally, the results are not driven by the inclusion of any one state in the analysis sample, nor does the restriction to a balanced panel impact the estimates. Finally, to address potential concerns with the differential timing of state UI tax increases, I have estimated separate event study regressions including only 2010 treatment states or only 2011 treatment states, respectively. This ensures that each group of treatment states is only compared to the control states, and there is no longer an issue of staggered treatment, since all tax increases occur at the same time. Treatment effects from this exercise are qualitatively similar to those using the pooled sample.

## 6 Incidence for Low-Earning Workers

Due to low UI tax bases (the median tax base in 2009 was only $10,000), low-wage workers face higher effective tax rates and thus bear a larger relative tax burden than their higher-earning counterparts. They may also have less bargaining power and are more likely to work part-time. I identify existing workers of highly exposed firms, and estimate the impact of UI tax increases on their subsequent earnings trajectories. The added precision of a worker-level analysis allows for the identification of earnings impacts within this subgroup of interest.

Focusing on the main analysis sample of high-exposure firms, I construct a worker-level sample consisting of workers employed by the firm for at least a year, with annual earnings between $5000 and $24,000 in 2009 (because the LEHD reports total earnings and not wage rates, this definition may pick up high-wage workers who work very few hours, although this should be fairly uncommon; additionally, since UI taxes are based on earnings and not wages, part-time high-wage workers are equally costly to an employer from a payroll tax standpoint).
To ensure these workers make up a non-negligible share of the firms' employment, I also drop any firms that did not employ 10 or more of these low-wage workers as of 2009:Q3. This subsample includes approximately 549,000 low-wage workers, employed by 13,000 employers (44% of the highly exposed employers in the main analysis sample). I estimate worker-level event study regressions, where event time is defined in the same way as in Equation 1:

$EarnGrowth_{ifst}=\alpha_{f}+\sum_{k=-8,\,k\neq-1}^{8}\beta_{k}\,Taxchange_{s}\cdot\mathbb{1}(t-Increase_{s}=k)+\delta_{t}+\rho\log(earn_{ifs,t-4})+\gamma\,minwage_{st}+\epsilon_{ifst}$ (3)

Here $i$ indexes the worker, $f$ the employer, $s$ the state, $t$ the year-quarter, and $k$ the quarter relative to the policy change. I group all quarters more than two years before the policy change together into one estimate $k=-8$, and I follow workers for up to three years after the policy change, through the end of 2012. $Taxchange_{s}$ is defined for treatment states to equal the average maximum UI tax increase between 2009 and 2010-11 (in hundreds), and equal to zero for control states. The main outcome of interest, $EarnGrowth_{ifst}$, is defined as the year-over-year earnings growth relative to the same quarter in the previous year. Thus inclusion in the sample requires the worker to have been employed at the firm a year earlier, limiting the analysis to a more stable subset of workers. A sketch of how this specification could be implemented is given at the end of this section.

Figure 8 plots the estimates of the event-time interactions with $Taxchange_{s}$; thus the magnitude of each estimate reflects the effect of a $100 UI tax increase. In the first quarter after the tax increase (event time = 1), each $100 UI tax increase lowered earnings growth by 0.63% for continuing workers in treated firms, relative to their counterparts in control firms. This negative impact is small and precisely estimated, and does not last beyond the first year. This allows us to rule out negative earnings impacts beyond 1% in Q1 and 0.9% in Q3, for each $100 increase. Since 1% of quarterly earnings equates to $48 on average among this subsample, together these estimates imply a maximum pass-through rate of 90% ($90 out of $100) for existing low-wage workers. Therefore, even amongst a subgroup that bears the largest effective tax burden, we find evidence that short-run tax increases result in less than full tax incidence on the earnings of existing workers.
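To make the specification concrete, the following is a minimal sketch of how an event-study regression like Equation 3 could be estimated; the dataframe and column names are hypothetical simplifications, and spelling out the firm fixed effects as dummies is purely illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical worker-quarter dataframe: earn_growth (YoY growth, %),
# taxchange (max-tax increase in $100s; 0 in control states), event_k
# (quarters relative to the state's increase), sein, yearq, state,
# log_earn_lag4 (log earnings four quarters earlier), minwage.
df = pd.read_parquet("worker_panel.parquet")      # placeholder path

# Pool everything more than two years pre-treatment at k = -8, as in
# Equation 3, and use k = -1 as the omitted reference period.
df["event_k"] = df["event_k"].clip(lower=-8)

formula = (
    "earn_growth ~ C(event_k, Treatment(reference=-1)):taxchange"
    " + log_earn_lag4 + minwage + C(sein) + C(yearq)"
)
# C(sein) spells out firm fixed effects for clarity; at Census scale one
# would absorb them with a high-dimensional fixed-effects routine instead.
res = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(res.params.filter(like="taxchange"))
```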
## 7 Conclusion

The employer-specific and time-varying aspects of UI payroll taxation provide researchers with extensive policy variation that can be used to understand how firms respond to temporary payroll tax changes. Given previous proposals of cutting employer payroll taxes to stimulate employment during recessions, there is debate over whether such tax cuts would have the intended effect or whether they would simply increase firm profits. I estimate significant negative employment responses to unexpected tax increases, which suggests that temporary payroll tax cuts are likely to have a positive impact on employment as well, especially for young and low-earning workers who face higher relative tax burdens, and for smaller employers with less access to financial resources. In response to state maximum UI tax increases in the years following the Great Recession, UI tax burdens rose for employers with large layoff histories, with minimal evidence of tax incidence on workers. With little ability to pass tax increases on to workers, employers lowered hiring rates and employment growth, and this effect grew over time.

Using the Q1 treatment estimates of hiring rates from Table 3, I calculate an implied labor demand elasticity of -1.1. This measure is obtained by dividing the Q1 DD coefficient estimate of -0.170 by 0.161, the first-quarter effective UI tax increase estimated using the QCEW in Appendix Table A.1. An alternative measure can be calculated by dividing the triple-difference drop in Q4 employment growth of -0.74 by 0.31, the percentage of average annual earnings to which a $100 tax increase is equivalent; this equates to an end-of-year labor demand elasticity of -2.4. These elasticity estimates fall within the range of elasticities from the labor demand literature, although they are on the lower end of those from recent payroll tax studies (Ku et al. (2020) estimate a labor demand elasticity of -3.6 using place-based payroll tax increases in Norway, while Benzarti and Harju (2021a) estimate elasticities of -2.9 to -4.16 in response to firm-specific increases in Finland; in the U.S. setting, Johnston (2021) estimates a UI tax labor demand elasticity of -4, while Guo (2021) estimates an elasticity of -1.1).

The tax incidence of unemployment insurance payroll taxes has been an understudied topic in the payroll taxation literature. The experience rating of employer-specific tax rates creates the benefit of reducing avoidable layoffs (particularly temporary layoffs) and the associated externalities imposed upon the UI system. However, the cyclical pattern in which tax rates are highest in years following economic recessions also slows hiring and employment growth during the recovery. More work is needed to understand the unintended consequences and inherent tradeoffs of US unemployment insurance's current design.

## References

* Anderson, P. M. and Meyer, B. D. (1997), The effects of firm specific taxes and government mandates with an application to the U.S. unemployment insurance program, Journal of Public Economics 65(2), 119–145.
* Anderson, P. M. and Meyer, B. D. (2000), The effects of the unemployment insurance payroll tax on wages, employment, claims and denials, Journal of Public Economics 78(1), 81–106.
* Benzarti, Y. and Harju, J. (2021a), Can payroll tax cuts help firms during recessions?, Journal of Public Economics 200, 104472.
* Benzarti, Y. and Harju, J. (2021b), Using payroll tax variation to unpack the black box of firm-level production, Journal of the European Economic Association 19(5), 2737–2764.
* Biró, A., Branyicki, R., Lindner, A., Márk, L. and Prinz, D. (2022), Firm heterogeneity and the impact of payroll taxes, IFS Working Paper.
* Brittain, J. A. (1971), The incidence of social security payroll taxes, The American Economic Review 61(1), 110–125.
* Duggan, M., Guo, A. and Johnston, A. C. (2022), Would broadening the UI tax base help low-income workers?, AEA Papers and Proceedings 112, 107–11.
* Farber, H. S., Rothstein, J. and Valletta, R. G. (2015), The effect of extended unemployment insurance benefits: Evidence from the 2012-2013 phase-out, American Economic Review 105(5), 171–76.
* Farber, H. S. and Valletta, R. G. (2015), Do extended unemployment benefits lengthen unemployment spells? Evidence from recent cycles in the U.S. labor market, Journal of Human Resources 50(4), 873–909.
* Grigsby, J., Hurst, E. and Yildirmaz, A. (2021), Aggregate nominal wage adjustments: New evidence from administrative payroll data, American Economic Review 111(2), 428–71.
* Gruber, J. (1997), The incidence of payroll taxation: Evidence from Chile, Journal of Labor Economics 15(S3), S72–S101.
* Guo, A. (2021), The effects of state business taxes on plant closures: Evidence from unemployment insurance taxation and multi-establishment firms, forthcoming at Review of Economics and Statistics.
* Guo, A. and Johnston, A. C. (2021), The finance of unemployment compensation and its consequence, Public Finance Review 49(3), 392–434.
* Hamermesh, D. S. (1979), New estimates of the incidence of the payroll tax, Southern Economic Journal 45(4), 1208–1219.
* Huang, P.-C. (2022), Employment effects of the unemployment insurance tax base, forthcoming at Journal of Human Resources.
* Johnston, A. C. (2021), Unemployment insurance taxes and labor demand: Quasi-experimental evidence from administrative data, American Economic Journal: Economic Policy 13(1), 266–93.
* Ku, H., Schönberg, U. and Schreiner, R. C. (2020), Do place-based tax incentives create jobs?, Journal of Public Economics 191, 104105.
* Kugler, A. D., Kugler, M. D. and Herrera-Prada, L. O. (2017), Do payroll tax breaks stimulate formality? Evidence from Colombia's reform, Economía 18(1), 3–40.
* Lachowska, M., Sorkin, I. and Woodbury, S. A. (2021), Firms and unemployment insurance take-up, NBER Working Paper 30266.
* Lobel, F. (2021), The unequal incidence of payroll taxes with imperfect competition: Theory and evidence, Working Paper.
* Murray, S. (2021), Downward nominal wage rigidity's effect on employment through job destruction: Quasi-experimental evidence from the Great Recession, Working Paper.
* Saez, E., Matsaganis, M. and Tsakloglou, P. (2012), Earnings determination and taxes: Evidence from a cohort-based payroll tax reform in Greece, The Quarterly Journal of Economics 127(1), 493–533.
* Saez, E., Schoefer, B. and Seim, D. (2019), Payroll taxes, firm behavior, and rent sharing: Evidence from a young workers' tax cut in Sweden, American Economic Review 109(5), 1717–63.
* Saez, E., Schoefer, B. and Seim, D. (2021), Hysteresis from employer subsidies, Journal of Public Economics 200, 104459.

FIGURES

Figure 1: Maximum UI Tax Increase from 2009 to 2011
Source: Significant Measures of State UI Tax Systems. Maximum tax calculated as taxable wage base multiplied by maximum tax rate.

Figure 2: UI Tax Policy Variation
Source: Significant Measures of State UI Tax Systems. The 7 treatment states are shaded in blue corresponding to the tax change they experienced, and the 8 control states are shaded with vertical lines (and also include Delaware and District of Columbia). States in grey are excluded.

Figure 3: Effective UI Tax Rates (2008-2013)
Source: Quarterly Census of Employment and Wages. (N = 4,062) Sample limited to ten 4-digit construction industries and years 2008 to 2013 (cells too small to meet disclosure requirements are missing). Effective tax rates calculated by dividing quarterly UI contributions by quarterly payroll. Event study estimated with state-industry and year-quarter fixed effects, and weighted by industry employment. Error bars denote 95% CI for standard errors clustered at the state level.
Figure 4: Event Study of Log Quarterly Earnings (2008-2013)
A: Baseline Estimates. B: Controlling for State-by-Calendar Quarter.
N = 651,000. Outcome variable is the mean quarterly earnings of stable workers (employed in both t-1 and t+1). Also includes a control for the state minimum wage, and SEIN and Year-Quarter-N2 fixed effects. Panel B also includes state-by-calendar quarter fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level.

Figure 5: Event Study of Year-Over-Year Employment Growth (2008-2013)
A: Baseline Estimates. B: Controlling for State-by-Calendar Quarter.
N = 651,000. Outcome variables denoted in percentage points. Also includes a control for the state minimum wage, and SEIN and Year-Quarter-N2 fixed effects. Panel B also includes state-by-calendar quarter fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level.

Figure 6: Heterogeneity in New Hire Share by Firm Type (2008-2013)
N = 651,000. Also includes a control for the state minimum wage, and SEIN and Year-Quarter-N2 fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level. The share of employers with median earnings below $6000 equals 0.29. The share that are single-establishment firms equals 0.69. The share with firm age under 20 equals 0.22.

Figure 7: Year-Over-Year Employment Growth - Triple Difference (2008-2013)
N = 1,109,000. Figure plots the estimates from treatment dummies interacted with event time and high-tax status. Regression also includes treatment interacted with event time, a control for the state minimum wage, and SEIN and Year-Quarter-N2-High fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level.

Figure 8: Year-Over-Year Earnings Growth - Worker Level (2008-2012)
N = 4,954,000. Includes 549,000 unique workers across 13,000 unique employers. Figure plots the estimates from the taxchange measure interacted with event time. Regression also includes a control for the state minimum wage, lagged log earnings, and SEIN, Year-Quarter, and state-by-calendar quarter fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level.

TABLES

Table 1: Summary Statistics (2009:Q3)

| | Control | | Treatment | |
|---|---|---|---|---|
| | Mean | SD | Mean | SD |
| Employment | 66.74 | 72.28 | 67.29 | 70 |
| Cumulative Layoff Share 2006-07 | 0.5668 | 0.1689 | 0.5668 | 0.1679 |
| Cumulative Layoff Share 2008-09 | 0.5037 | 0.1473 | 0.5065 | 0.1493 |
| Average Annual Earnings | 36,900 | 27,990 | 32,350 | 23,340 |
| Share of Workers Earning $<\$5000$ | 0.1518 | 0.1647 | 0.1572 | 0.1619 |
| Construction Sector | 0.245 | | 0.2282 | |
| $N$ | 18000 | | 11500 | |

Employer-level observations from the 3rd quarter of 2009. Control states include AZ, CA, CO, DE, DC, FL, NE, and MO. Treatment states include AR, IN, IL, ME, MD, OK, SC, TN, and WV.
Table 2: Log Quarterly Earnings (2008–2013)

| | Stable Workers | | All Workers | | Share with Raise | |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
| Tax $\Delta$*Q1 ($100’s) | -0.00113 | -0.000855 | 0.000158 | -0.0000151 | -0.419∗∗∗ | -0.398∗∗∗ |
| | (0.00190) | (0.00123) | (0.00109) | (0.000735) | (0.113) | (0.0805) |
| Tax $\Delta$*Q2 ($100’s) | 0.00380∗∗∗ | 0.00331∗∗∗ | 0.00209∗∗ | 0.00214∗∗∗ | 0.111 | 0.137 |
| | (0.000938) | (0.000754) | (0.000869) | (0.000706) | (0.0929) | (0.157) |
| Tax $\Delta$*Q3 ($100’s) | 0.00353∗∗∗ | 0.00334∗∗∗ | 0.00309∗∗∗ | 0.00287∗∗∗ | -0.00947 | -0.0320 |
| | (0.00101) | (0.00101) | (0.000864) | (0.000729) | (0.0985) | (0.0922) |
| Tax $\Delta$*Q4 ($100’s) | 0.00494∗∗∗ | 0.00432∗∗∗ | 0.00484∗∗∗ | 0.00471∗∗∗ | 0.0281 | -0.0218 |
| | (0.00130) | (0.000902) | (0.00105) | (0.000824) | (0.139) | (0.132) |
| Minimum Wage | 0.0152∗∗∗ | 0.0121∗∗∗ | 0.0181∗∗∗ | 0.0126∗∗∗ | 0.170 | 0.260 |
| | (0.00279) | (0.00276) | (0.00363) | (0.00316) | (0.227) | (0.229) |
| $R^{2}$ | 0.869 | 0.905 | 0.884 | 0.921 | 0.191 | 0.193 |
| Mean of Dep Var | 9.232 | 9.240 | 9.067 | 9.057 | 31.0 | 31.7 |
| Weighting | | YES | | YES | | YES |
| $N$ | 651000 | 651000 | 651000 | 651000 | 651000 | 651000 |

Regressions include SEIN and year-quarter-N2 fixed effects. Share with Raise equals the percentage of continuing workers who received a raise relative to the previous two quarters. Even columns are weighted by employment. Robust standard errors clustered at state level in parentheses. ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

Table 3: Employment Outcomes (2008–2013)

| | Employment Growth | | New Hire Share | | Separation Share | |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
| Tax $\Delta$*Q1 ($100’s) | -0.142 | -0.469∗∗ | -0.170∗∗ | -0.163∗∗ | -0.0968 | -0.0350 |
| | (0.129) | (0.177) | (0.0723) | (0.0672) | (0.0832) | (0.0583) |
| Tax $\Delta$*Q2 ($100’s) | -0.287∗∗ | -0.571∗∗ | 0.240 | 0.224 | -0.0746 | -0.0494 |
| | (0.120) | (0.205) | (0.193) | (0.168) | (0.0897) | (0.0757) |
| Tax $\Delta$*Q3 ($100’s) | -0.318∗∗ | -0.674∗∗∗ | -0.104 | -0.0962 | 0.0514 | 0.102∗∗ |
| | (0.115) | (0.194) | (0.0752) | (0.0756) | (0.0652) | (0.0410) |
| Tax $\Delta$*Q4 ($100’s) | -0.431∗∗∗ | -0.759∗∗∗ | -0.170∗ | -0.178∗ | 0.0587 | 0.110 |
| | (0.127) | (0.171) | (0.0954) | (0.0879) | (0.169) | (0.147) |
| Minimum Wage | -0.912 | -0.792 | -0.330∗ | -0.196 | 0.107 | 0.190 |
| | (0.696) | (0.781) | (0.182) | (0.201) | (0.208) | (0.200) |
| $R^{2}$ | 0.193 | 0.254 | 0.427 | 0.529 | 0.405 | 0.521 |
| Mean of Dep Variable | -4.201 | 0.349 | 13.40 | 14.98 | 14.74 | 15.87 |
| Weighting | | YES | | YES | | YES |
| $N$ | 651000 | 651000 | 651000 | 651000 | 651000 | 651000 |

Regressions include SEIN and year-quarter-N2 fixed effects. Outcomes denoted in percentage points, and even columns are weighted by employment. Robust standard errors clustered at state level in parentheses.
∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

Table 4: Employment Growth of Subgroups (2008–2013)

| | Low-Earn Hire Share | | Employment Growth: Age Under 25 | | Employment Growth: HS or Less | |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
| Tax $\Delta$*Q1 ($100’s) | -0.102∗∗ | -0.0956∗∗ | -0.218 | -0.511∗∗ | -0.0956 | -0.450∗∗ |
| | (0.0460) | (0.0433) | (0.246) | (0.234) | (0.133) | (0.182) |
| Tax $\Delta$*Q2 ($100’s) | 0.0929 | 0.106 | -0.325 | -0.643∗∗ | -0.249∗ | -0.563∗∗ |
| | (0.0777) | (0.0694) | (0.254) | (0.226) | (0.122) | (0.209) |
| Tax $\Delta$*Q3 ($100’s) | -0.0701 | -0.0457 | -0.510∗ | -0.896∗∗∗ | -0.288∗∗ | -0.655∗∗∗ |
| | (0.0521) | (0.0434) | (0.287) | (0.250) | (0.108) | (0.200) |
| Tax $\Delta$*Q4 ($100’s) | -0.0822 | -0.0714 | -0.655∗ | -0.951∗∗∗ | -0.393∗∗∗ | -0.735∗∗∗ |
| | (0.0543) | (0.0454) | (0.328) | (0.264) | (0.132) | (0.172) |
| Minimum Wage | -0.237∗ | -0.151 | 0.0461 | -0.446 | -1.589∗ | -1.217 |
| | (0.120) | (0.118) | (1.383) | (1.497) | (0.778) | (0.836) |
| $R^{2}$ | 0.515 | 0.647 | 0.101 | 0.154 | 0.156 | 0.218 |
| Mean of Dep Var | 6.478 | 7.399 | -11.44 | -5.683 | -2.712 | 1.972 |
| Weighting | | YES | | YES | | YES |
| $N$ | 651000 | 651000 | 651000 | 651000 | 651000 | 651000 |

Regressions include SEIN and year-quarter-N2 fixed effects. Outcomes are denoted in percentage points, and even columns are weighted by employment. Robust standard errors clustered at state level in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

Table 5: Triple Difference (2008–2013)

| | Log Earnings | Share with Raise | New Hire Share | Separation Share |
|---|---|---|---|---|
| | (1) | (2) | (3) | (4) |
| Tax $\Delta$*Q1 ($100’s) | -0.000610 | -0.247∗∗ | -0.0286∗ | -0.0385 |
| | (0.00147) | (0.108) | (0.0143) | (0.0373) |
| Tax $\Delta$*Q2 ($100’s) | 0.00313 | 0.130 | 0.0739 | 0.0305 |
| | (0.00274) | (0.489) | (0.0660) | (0.0624) |
| Tax $\Delta$*Q3 ($100’s) | -0.000553 | 0.0416 | 0.0382 | -0.0480 |
| | (0.00283) | (0.219) | (0.0289) | (0.0485) |
| Tax $\Delta$*Q4 ($100’s) | 0.00291∗∗ | -0.000309 | 0.0380 | -0.0719∗∗∗ |
| | (0.00128) | (0.149) | (0.0231) | (0.0240) |
| Tax $\Delta$*Q1 $\times$ High | -0.000246 | -0.171 | -0.155∗∗ | -0.0544 |
| | (0.00172) | (0.114) | (0.0654) | (0.0508) |
| Tax $\Delta$*Q2 $\times$ High | 0.00112 | 0.0285 | 0.155 | -0.0995 |
| | (0.00261) | (0.506) | (0.156) | (0.0574) |
| Tax $\Delta$*Q3 $\times$ High | 0.00433 | -0.0328 | -0.151∗∗ | 0.110 |
| | (0.00326) | (0.278) | (0.0596) | (0.0797) |
| Tax $\Delta$*Q4 $\times$ High | 0.00223 | 0.0103 | -0.220∗∗ | 0.135 |
| | (0.00172) | (0.0873) | (0.101) | (0.164) |
| Minimum Wage | 0.0108∗∗∗ | 0.00649 | -0.170 | -0.00830 |
| | (0.00271) | (0.262) | (0.118) | (0.144) |
| $R^{2}$ | 0.901 | 0.178 | 0.476 | 0.482 |
| Mean of Dep Var | 9.323 | 31.7 | 10.61 | 11.28 |
| $N$ | 1109000 | 1109000 | 1109000 | 1109000 |

Regressions include SEIN, year-quarter-N2, and year-quarter-High fixed effects. The necessary DDD interactions are either absorbed by the SEIN fixed effects or the year-quarter-High fixed effects. Robust standard errors clustered at state level in parentheses.
∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

Table 6: Triple Difference - Employment Growth (2008–2013)

| | Overall Growth | Age under 25 | HS or Less |
|---|---|---|---|
| | (1) | (2) | (3) |
| Tax $\Delta$*Q1 ($100’s) | 0.330∗∗ | 0.379 | 0.371∗∗ |
| | (0.116) | (0.228) | (0.135) |
| Tax $\Delta$*Q2 ($100’s) | 0.303∗∗ | 0.295 | 0.309∗ |
| | (0.127) | (0.287) | (0.154) |
| Tax $\Delta$*Q3 ($100’s) | 0.315∗∗ | 0.208 | 0.357∗∗ |
| | (0.135) | (0.242) | (0.149) |
| Tax $\Delta$*Q4 ($100’s) | 0.284∗ | 0.222 | 0.299∗∗ |
| | (0.140) | (0.265) | (0.139) |
| Tax $\Delta$*Q1 $\times$ High | -0.503∗∗ | -0.618∗ | -0.525∗∗ |
| | (0.190) | (0.293) | (0.233) |
| Tax $\Delta$*Q2 $\times$ High | -0.622∗∗∗ | -0.659∗∗ | -0.617∗∗ |
| | (0.212) | (0.290) | (0.274) |
| Tax $\Delta$*Q3 $\times$ High | -0.664∗∗∗ | -0.750∗∗ | -0.705∗∗ |
| | (0.222) | (0.326) | (0.253) |
| Tax $\Delta$*Q4 $\times$ High | -0.743∗∗∗ | -0.904∗∗ | -0.748∗∗∗ |
| | (0.234) | (0.387) | (0.252) |
| Minimum Wage | -0.383 | 0.598 | -0.801∗∗ |
| | (0.336) | (0.872) | (0.369) |
| $R^{2}$ | 0.203 | 0.086 | 0.167 |
| Mean of Dep Variable | -1.373 | -7.296 | 0.419 |
| $N$ | 1109000 | 1109000 | 1109000 |

Growth measures are multiplied by 100. Regressions include SEIN, year-quarter-N2, and year-quarter-High fixed effects. The necessary DDD interactions are either absorbed by the SEIN fixed effects or the year-quarter-High fixed effects. Robust standard errors clustered at state level in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

## Appendix A APPENDIX TABLES AND FIGURES

Figure A.1: Empirical UI Tax Schedule for Florida (2008)
Source: US Dept of Labor ETA 204 Experience Rating Report. The Benefit Ratio is a measure of the employer's layoff experience in the last three years.

Figure A.2: Total UI Contributions, as % of total wages (1967-2018)
Source: US Dept of Labor Unemployment Insurance Chartbook. Shaded regions denote US recession years, as defined by NBER.

Figure A.3: State UI Tax Schedules - Control
Compares average UI taxes paid and maximum UI taxes. Average taxes for specialty trade contractors obtained from the Quarterly Census of Employment and Wages. Specialty trade contractors (NAICS 238) was chosen as an industry whose employers often face rates close to the maximum, to proxy for high-exposure firms.

Figure A.4: State UI Tax Schedules - Treatment
Compares average UI taxes paid and maximum UI taxes. Average taxes for specialty trade contractors obtained from the Quarterly Census of Employment and Wages. Specialty trade contractors (NAICS 238) was chosen as an industry whose employers often face rates close to the maximum, to proxy for high-exposure firms.

Figure A.5: State UI Tax Schedules - Excluded
Compares average UI taxes paid and maximum UI taxes. Average taxes for specialty trade contractors obtained from the Quarterly Census of Employment and Wages. Specialty trade contractors (NAICS 238) was chosen as an industry whose employers often face rates close to the maximum, to proxy for high-exposure firms.

Figure A.6: Heterogeneity in Separations Share by Firm Type (2008-2013)
N = 651,000. Also includes a control for the state minimum wage, and SEIN and Year-Quarter-N2 fixed effects. Error bars denote 95% CI for robust standard errors clustered at the state level. The share of employers with median earnings below $6000 equals 0.29. The share that are single-establishment firms equals 0.69. The share with firm age under 20 equals 0.22.
Table A.1: Average Industry Effective UI Tax Rate (2008–2013)

| | Unweighted | Employment-Weighted |
|---|---|---|
| | (1) | (2) |
| Tax $\Delta$*Q1 ($100’s) | 0.0996∗ | 0.161∗∗∗ |
| | (0.0498) | (0.0646) |
| Tax $\Delta$*Q2 ($100’s) | 0.138∗∗∗ | 0.136∗∗∗ |
| | (0.0242) | (0.0270) |
| Tax $\Delta$*Q3 ($100’s) | 0.0752∗∗∗ | 0.0490∗ |
| | (0.0209) | (0.0254) |
| Tax $\Delta$*Q4 ($100’s) | 0.0536∗ | 0.0207 |
| | (0.0280) | (0.0335) |
| Minimum Wage | 0.0865 | 0.206 |
| | (0.104) | (0.0987) |
| $R^{2}$ | 0.881 | 0.914 |
| Mean of Dependent Variable | 1.382 | 1.217 |
| $N$ | 4062 | 4062 |

Uses average industry data from the public-use QCEW. Sample limited to ten 4-digit construction industries, and observations are at the industry by state by quarter level (cells too small to meet disclosure requirements are missing). Effective tax rates calculated by dividing quarterly UI contributions by quarterly payroll. Regressions include state-industry and year-quarter fixed effects. Robust standard errors clustered at state level in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$

Table A.2: Triple Difference Controlling for State-Time FEs (2008–2013)

| | Log Earn | Growth | New Hires | Separations | Growth: Under 25 | Growth: HS or Less |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
| Tax $\Delta$*Q1 $\times$ High | -0.00292 | -0.528∗∗ | -0.154∗ | -0.0456 | -0.665∗∗ | -0.551∗∗ |
| | (0.00320) | (0.190) | (0.0739) | (0.0601) | (0.284) | (0.231) |
| Tax $\Delta$*Q2 $\times$ High | 0.00184 | -0.642∗∗∗ | 0.196 | -0.0838 | -0.717∗∗ | -0.640∗∗ |
| | (0.00322) | (0.215) | (0.185) | (0.0528) | (0.277) | (0.275) |
| Tax $\Delta$*Q3 $\times$ High | 0.00511∗∗ | -0.672∗∗∗ | -0.131∗∗ | 0.130∗∗ | -0.776∗∗ | -0.723∗∗ |
| | (0.00239) | (0.218) | (0.0504) | (0.0607) | (0.301) | (0.249) |
| Tax $\Delta$*Q4 $\times$ High | 0.00256 | -0.737∗∗∗ | 0.199∗ | 0.195 | -0.939∗∗ | -0.754∗∗∗ |
| | (0.00219) | (0.227) | (0.102) | (0.181) | (0.370) | (0.244) |
| $R^{2}$ | 0.914 | 0.205 | 0.479 | 0.485 | 0.087 | 0.168 |
| Mean of Dep Var | 9.096 | -1.373 | 10.61 | 11.28 | -7.296 | 0.419 |
| $N$ | 1109000 | 1109000 | 1109000 | 1109000 | 1109000 | 1109000 |

Outcomes in percentage points. Regressions include SEIN, year-quarter-state, year-quarter-N2, and year-quarter-High fixed effects. The necessary DDD interactions are either absorbed by the SEIN, year-quarter-state, or year-quarter-High fixed effects. Robust standard errors clustered at state level in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$
# High-fidelity quantum control via Autler-Townes splitting

Michele Delvecchio, Dipartimento di Matematica, Fisica e Informatica, Università di Parma, Parco Area delle Scienze 7/A, 43124 Parma, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione Milano Bicocca, Gruppo di Parma, Parco Area delle Scienze 7/A, 43124 Parma, Italy

Teodora Kirova, Institute of Atomic Physics and Spectroscopy, University of Latvia, Jelgavas street 4, Riga, LV-1004, Latvia

Ennio Arimondo, Dipartimento di Fisica, Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy; Istituto Nazionale di Ottica - Consiglio Nazionale delle Ricerche, Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy

Donatella Ciampini, Dipartimento di Fisica, Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy

Sandro Wimberger, Dipartimento di Matematica, Fisica e Informatica, Università di Parma, Parco Area delle Scienze 7/A, 43124 Parma, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione Milano Bicocca, Gruppo di Parma, Parco Area delle Scienze 7/A, 43124 Parma, Italy

###### Abstract

We propose quantum control protocols for the high-fidelity preparation of target states in systems with Autler-Townes splitting. We investigate an approximated three-level system obtained from a four-level one by adiabatically eliminating a state that does not participate in the evolution. In our work we use linear, arctan and Roland-Cerf functions for transferring population between two eigenstates of the system, obtaining a high fidelity for long evolution times. Additionally, in order to overcome the restriction given by the lifetimes of the experimental setup, we propose an accelerated adiabatic evolution with a shortcut-to-adiabaticity protocol, which allows us to reach fidelities close to one but much faster.

Keywords: Quantum control, Autler-Townes splitting, Shortcut to Adiabaticity, Counterdiabatic Driving

## I Introduction

Quantum control is based on the application of unitary transformations to simple or complex quantum systems to drive their evolution into a target quantum state [1, 2, 3, 4]. Quantum control schemes can be classified on the basis of two elements, i.e., the experimental handle producing the quantum evolution, and the specific protocol describing the temporal modification of the system Hamiltonian. The detuning of a laser exciting the quantum system, for instance a two-level system, is a convenient handle for preparing the initial quantum state and generating a Rabi evolution into the final state. The protocol determines the time dependence of the handle, optimized to enhance the fidelity, robustness and speed of the unitary transformation.

In a multilevel system with an intrinsic interaction between the internal states, the laser excitation may play a different, indirect role. The non-resonant control photons do not produce the quantum transfer directly: they alter the energy levels of the controlled system. Following the initial quantum preparation, the final state transfer is driven by the internal Hamiltonian. Within this non-resonant approach, quantum control based on laser-induced potentials, i.e., on the AC Stark shift, in some contexts known as Autler-Townes splitting [5], has received widespread interest, from atomic-molecular physics [6, 7, 8, 9] to solid-state systems [10, 11] and chemistry [12, 13].
Within the atomic-molecular physics area such an approach has been applied to processes such as molecular alignment and photodissociation reactions, as reviewed in [9]. Our attention is focused on the quantum control of the two-electron singlet-triplet transitions in molecular systems, see e.g. Refs. [14, 15, 16, 17, 18, 19, 20, 21]. The perturbed singlet-triplet states, arising from the level mixing due to the spin-orbit coupling, represent a gateway to gain access to the otherwise dark triplet states in these systems. In most cases, for instance in ion strings [19, 20], the Autler-Townes manipulation of the singlet-triplet transition is produced by properly shaped short laser pulses. These pulses decouple the singlet and triplet electronic states, as well as minimize absorption to excited singlets and ionization of the absorbing species. Refs. [17, 18] introduce an alternative approach based on longer time scales, but leading to a more efficient singlet-triplet transfer. Controlling the spin character of a spin-orbit coupled pair of levels, the Autler-Townes effect acts as an all-optical spin switch between singlet and triplet manifolds. In experiments on molecular states in lithium dimers this tuning produces an optical control of the singlet-triplet probability distribution. For levels experiencing a weak singlet-triplet coupling, the Autler-Townes effect leads to a well-controlled transfer to triplet states. In an equivalent quantum control experiment on perovskite nanocrystals exhibiting a spin-orbit induced multi-band structure, the optical absorption is finely tuned using the Autler-Townes effect [22]. In addition, Ref. [23] applies an electrical control over the detuning energy of the quantum states of the two-electron singlet-triplet qubit across a chain of three quantum dots.

Next to the experimental handle, the second control element is the specific time-dependent driving protocol. Detuning protocols such as Landau-Zener-Majorana-Stückelberg (LZMS) tunneling [24, 25, 26] have acquired a significant role in different research areas. Relying on a linear time dependence of a quantum handle, the LZMS scheme represents an easy-to-implement protocol for the preparation of the target quantum state. Both the asymptotic transition probability and its time dependence have been examined extensively, theoretically in Refs. [27, 28] and experimentally in cold atoms and solid-state qubits [29, 30, 31, 32, 33, 34]. Ref. [35] applied this protocol to a spin-orbit-coupled Bose-Einstein condensate. LZMS tunneling is equivalent to the adiabatic passage of light-induced potentials introduced in [6] for controlling molecular reactions. Improvements in LZMS fidelity and speed are obtained by applying a nonlinear temporal dependence of the control parameter [36, 37, 38, 39]. In order to speed up the operation, alternative superadiabatic protocols, referred to as "shortcuts to adiabaticity", have more recently been introduced theoretically and experimentally, as reviewed in [40]. In order to avoid additional demands on the parameter resources, an effective instantaneous following of the adiabatic states is achieved by introducing fast oscillations in the Hamiltonian parameters already present [41, 42].

While the experiments of [16, 17, 18] with lithium molecules are basically in a continuous-wave regime, the present target is to modify the Autler-Townes quantum control in order to realize a time-dependent quantum transfer satisfying high-level requirements for fidelity, robustness and speed.
This scheme realizes a triggered and fast all-optical spin switch. The time dependence of the transfer protocol is based on the control of the Autler-Townes effect parameters. A similar time-dependent control of the Hamiltonian parameters can be applied to other qubit systems experiencing a spin-orbit coupling. Our scheme, based on the control of a probe laser frequency detuning, contains some features of the three-level Stimulated Raman Adiabatic Passage operation [43]. While the probe laser produces the excitation from the initial state to an intermediate excited one, the second step for the transfer to the target state is produced by the spin-orbit Hamiltonian. This second step, not directly driven by a coupling laser, is controlled by the Autler-Townes effect of a second laser connecting the target state to a fourth level. Therefore an original feature of our theoretical analysis is the application of an LZMS-like protocol to such an effective four-level system. The conditions for the implementation of linear, non-linear and superadiabatic protocols are imposed on a reduced three-level system derived through an adiabatic elimination. The requirements for such an elimination and the superadiabatic realization in the full four-level system are carefully investigated. Even if our analysis is based on the experimental parameters of Refs. [16, 17], the four-level LZMS protocol can be applied to a large variety of experimental schemes.

The paper is organized as follows: Section II introduces the level scheme and presents in detail the role of the spin-orbit coupling. In the following, the adiabatic protocol, the reduction of the four-level system to a three-level one, and the time dependence of the eigenvalues and eigenvectors of the full Hamiltonian are presented. Section III examines the application of different adiabatic protocols to the four-level system and introduces the counterdiabatic approach as a tool to enhance the driving speed. A short summary concludes our work.

Figure 1: Four-level system with pump and coupling lasers, driving the transitions $|1\rangle\to|S\rangle$, with Rabi frequency $\Omega_{p}$ and detuning $\delta_{p}$, and $|T\rangle\to|2\rangle$, with Rabi frequency $\Omega_{c}$ and detuning $\delta_{c}$, respectively. A spontaneous decay with rate $\Gamma_{T}$ from the $|T\rangle$ state, represented by the dashed arrow, prepares the final state $|T_{g}\rangle$ of the all-optical switch. While in the experiment of Ref. [17] the coupling laser connects the $\left|S\right>$ and $\left|2\right>$ states, our equivalent choice allows an adiabatic elimination of the $\left|2\right>$ state. The dressed states $|\pm\rangle$, arising due to the Autler-Townes mixing discussed below in Eq. (6), are shown for a blue-detuned transition with $\delta_{c}>0$ as sketched in the figure.

## II Hamiltonian and driving schemes

In this section we provide the basis of the system setups used in our study. We first introduce the theoretical model of the four-level system we considered, then we show how and under what conditions it can be reduced to a three-level one, and finally we discuss the Autler-Townes effect in our context.

### II.1 Singlet-triplet Hamiltonian

The singlet-triplet transfer experiments, such as those in Refs. [16, 17, 18], are described by the four levels introduced in [44] and presented in Fig. 1. Their original structure for a lithium dimer is shown in Fig. 2 of Ref. [17].
In particular, our initial ground state $|1\rangle$ would correspond in Li2 to the rotational-vibrational state $X^{1}\Sigma_{g}^{+}(\nu=2,J=22)$, and the spin-orbit coupled states $\left|T\right>$ and $\left|S\right>$ to $1^{3}\Sigma_{g}^{-}(\nu=1,J=21,f)$ and $G^{1}\Pi_{g}(\nu=12,J=21,f)$, respectively. In the experiment reported in [17] the first transition from the ground to the excited state is a two-photon one. Our target is to transfer the population from the singlet initial state $|1\rangle$ to the triplet excited state $|T\rangle$, and from the latter finally to the state $|T_{g}\rangle$, either automatically by spontaneous decay or by the application of an additional $\pi$-pulse.

The gateway key is the $(|S\rangle,|T\rangle)$ manifold of states with mixed singlet-triplet symmetries owing to a spin-orbit coupling. They originate from the $(|S_{0}\rangle,|T_{0}\rangle)$ states corresponding to unperturbed singlet and triplet states with original energy separation $\Delta_{0}$. They experience a spin-orbit perturbation mixing with amplitude $V$ described by the Hamiltonian $H_{so}=V|S_{0}\rangle\langle T_{0}|+\mathrm{H.c.}$ (assuming $\hbar=1$). The mixed states $|S\rangle$ and $|T\rangle$ are given by

$|S\rangle=\alpha|S_{0}\rangle-\beta|T_{0}\rangle,\qquad|T\rangle=\beta|S_{0}\rangle+\alpha|T_{0}\rangle,$ (1)

where the $(\alpha,\beta)$ coefficients are normalized to one, i.e., $|\alpha|^{2}+|\beta|^{2}=1$; in the following these parameters are assumed to be real without loss of generality. The unperturbed energy separation $\Delta_{0}$ and the perturbation $V$ are linked to the effective energy splitting $\Delta_{so}$ of the $(|S\rangle,|T\rangle)$ levels and to the mixing coefficients by (see [44]):

$\Delta_{0}=(\alpha^{2}-\beta^{2})\Delta_{so},\qquad V=\alpha\beta\Delta_{so}.$ (2)

The lithium molecular states of Ref. [16] have mixing coefficients $\alpha^{2}=0.87$ and $\beta^{2}=0.13$, and spin-orbit splitting $\Delta_{so}=2\pi\cdot 0.75$ GHz, equivalent to $4.71$ ns$^{-1}$. In the following all parameters are given in the latter units, while the conversion into GHz is obtained by dividing them by $2\pi$.

The access to the singlet-triplet manifold is provided by the pump ($p$) laser connecting the $\left|1\right>$ singlet to the $|S\rangle$ state with detuning $\delta_{p}$. The singlet component of both $|S\rangle,|T\rangle$ eigenstates of Eq. (1) determines their excitation by the $p$ laser, characterized by Rabi frequencies $\alpha\Omega_{p}$ and $\beta\Omega_{p}$, respectively. The effective laser excitation is controlled by the pump detuning from those states. Due to the mixed singlet-triplet character, a fraction of molecules in the $|T\rangle$ excited state decays to the pure lower-energy $|T_{g}\rangle$ triplet state. This process, representing the basis of optical pumping into triplet states for ultracold molecules, is not very efficient for the rather small $\beta^{2}$ value reported above.

The quantum control of the singlet-triplet transfer introduced by [16, 17] is based on the modification of the energy separation of the spin-orbit $|S\rangle,|T\rangle$ manifold. Such a modification is produced by a control ($c$) off-resonant laser linking the $|T\rangle$ state to an additional $|2\rangle$ triplet one with detuning $\delta_{c}$. The Rabi frequencies for the mixed manifold components are $-\beta\Omega_{c}$ and $\alpha\Omega_{c}$, respectively.
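Collecting the numerical values quoted above, a minimal sketch evaluating Eq. (2):

```python
import numpy as np

# Mixing coefficients and spin-orbit splitting from Ref. [16]
alpha2, beta2 = 0.87, 0.13            # |alpha|^2 + |beta|^2 = 1
alpha, beta = np.sqrt(alpha2), np.sqrt(beta2)
Delta_so = 2 * np.pi * 0.75           # 0.75 GHz -> 4.71 ns^-1

# Eq. (2): unperturbed splitting and spin-orbit coupling amplitude
Delta_0 = (alpha2 - beta2) * Delta_so
V = alpha * beta * Delta_so
print(f"Delta_so = {Delta_so:.2f}, Delta_0 = {Delta_0:.2f}, V = {V:.2f} (ns^-1)")
# -> Delta_so = 4.71, Delta_0 = 3.49, V = 1.58 (ns^-1)
```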
The temporal evolution of the optically driven levels $(|1\rangle,|S\rangle,|T\rangle,|2\rangle)$ is described by a four-level Hamiltonian. In the $(|S\rangle,|T\rangle)$ basis we obtain

$H^{(4)}=\begin{pmatrix}\delta_{p}+\Delta_{so}&\alpha\Omega_{p}/2&-\beta\Omega_{p}/2&0\\ \alpha\Omega_{p}/2&\Delta_{so}&0&\beta\Omega_{c}/2\\ -\beta\Omega_{p}/2&0&0&\alpha\Omega_{c}/2\\ 0&\beta\Omega_{c}/2&\alpha\Omega_{c}/2&-\delta_{c}\end{pmatrix},$ (3)

where we set the zero of energy at the position of the $|T\rangle$ state, and define $\delta_{p}=\omega_{p}-(E_{S}-E_{1})$, $\delta_{c}=\omega_{c}-(E_{2}-E_{T})$, with $E_{i}$ the energy of the states $(i=1,2,S,T)$. In the experiment of [16], the Rabi frequencies were $\Omega_{p}=0.24$ and $\Omega_{c}=3.8$, all in units of ns$^{-1}$. While the lower $|1\rangle$ level is stable and the $|2\rangle$ state is off-resonantly excited, the $|S\rangle,|T\rangle$ mixed levels suffer from spontaneous emission decay to levels outside the manifold of the above Hamiltonian, with decay rates of $\Gamma_{S}=0.06$ and $\Gamma_{T}=0.10$, both in ns$^{-1}$, for the experiments of Ref. [16]. That experiment monitored the spontaneous decay to the $|T_{g}\rangle$ triplet ground state representing the final product of the all-optical switch, as denoted by the dashed line in Fig. 1.

In our analysis we consider two regimes for $\delta_{c}$, which is supposed to be fixed during the protocol. The first is $\delta_{c}\approx 0$, in which the Autler-Townes effect must be considered. In the second case, $|\delta_{c}|\gg\Omega_{c}$, the problem can be reduced to a three-level model by adiabatically eliminating the state $\left|2\right>$ from the beginning.

### II.2 Reduced three-level system

For $|\delta_{c}|\gg\Omega_{c}$, corresponding to a virtual excitation of the $|2\rangle$ state, the $\left|2\right>$ state can be adiabatically eliminated. The corresponding reduced Hamiltonian reads

$H^{(3)}=\begin{pmatrix}\delta_{p}+\Delta_{so}&\frac{\alpha\Omega_{p}}{2}&-\frac{\beta\Omega_{p}}{2}\\ \frac{\alpha\Omega_{p}}{2}&\Delta_{so}+\frac{\beta^{2}\Omega_{c}^{2}}{4\delta_{c}}&\frac{\alpha\beta\Omega_{c}^{2}}{4\delta_{c}}\\ -\frac{\beta\Omega_{p}}{2}&\frac{\alpha\beta\Omega_{c}^{2}}{4\delta_{c}}&\frac{\alpha^{2}\Omega_{c}^{2}}{4\delta_{c}}\end{pmatrix}\,,$ (4)

now in the basis $\{\left|1\right>,\left|S\right>,\left|T\right>\}$. Notice that the effective energy of the state $|T\rangle$ contains the light shift (AC shift) $\delta_{ls}$,

$\delta_{ls}=\frac{\alpha^{2}\Omega_{c}^{2}}{4\delta_{c}},$ (5)

associated with the Autler-Townes process. That term may produce a degeneracy between the $|S\rangle$ and $|T\rangle$ states for a blue detuning of the control laser and a proper choice of the control laser parameters. If we neglect the "second order" Hamiltonian matrix elements proportional to $\beta^{2}$, $\alpha^{2}$ and $\alpha\beta$, we obtain a matrix equivalent to Eq. (5) of Ref. [44].
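As a minimal numerical sketch (not part of the original analysis), the two Hamiltonians can be written down and compared in the large-detuning regime; the parameter values are those quoted above, while the detunings chosen for the check are illustrative.

```python
import numpy as np

alpha, beta = np.sqrt(0.87), np.sqrt(0.13)   # mixing coefficients, Ref. [16]
D_so, Om_p, Om_c = 4.71, 0.24, 3.8           # ns^-1

def H4(delta_p, delta_c):
    """Four-level Hamiltonian of Eq. (3), basis (|1>, |S>, |T>, |2>)."""
    return np.array([
        [delta_p + D_so, alpha * Om_p / 2, -beta * Om_p / 2, 0.0],
        [alpha * Om_p / 2, D_so, 0.0, beta * Om_c / 2],
        [-beta * Om_p / 2, 0.0, 0.0, alpha * Om_c / 2],
        [0.0, beta * Om_c / 2, alpha * Om_c / 2, -delta_c],
    ])

def H3(delta_p, delta_c):
    """Reduced Hamiltonian of Eq. (4), basis (|1>, |S>, |T>)."""
    ls = Om_c**2 / (4 * delta_c)
    return np.array([
        [delta_p + D_so, alpha * Om_p / 2, -beta * Om_p / 2],
        [alpha * Om_p / 2, D_so + beta**2 * ls, alpha * beta * ls],
        [-beta * Om_p / 2, alpha * beta * ls, alpha**2 * ls],
    ])

# For |delta_c| >> Omega_c the three upper eigenvalues of H4 approach those
# of H3, up to the adiabatic-elimination error of order (Om_c/delta_c)^2.
dp, dc = -5.0, 100.0
print(np.sort(np.linalg.eigvalsh(H4(dp, dc)))[1:])  # drop the |2>-dominated root
print(np.sort(np.linalg.eigvalsh(H3(dp, dc))))
```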
### II.3 Autler-Townes effect

The driving of a two-level system by an intense resonant laser produces the Autler-Townes effect, i.e., the splitting of an optical transition [5]. Within a dressed-atom analysis [45] the degeneracy between the ground and the excited state is broken by the strong laser. The resulting energy separation is given by the Rabi frequency of the driving laser. Translated to our level scheme of Fig. 1, the Autler-Townes configuration of Refs. [16, 17] is created by the coupling laser tuned at $\delta_{c}\approx 0$, modifying the energies of the states $|T\rangle$ and $|2\rangle$. That driving produces as eigenstates the following $|\pm\rangle$ superpositions:

$|+\rangle=a_{+}|T\rangle+b_{+}|2\rangle,\qquad|-\rangle=a_{-}|T\rangle+b_{-}|2\rangle,$ (6)

with eigenvalues $\lambda_{\pm}$ given by

$\lambda_{\pm}=-\frac{-\delta_{c}\pm\sqrt{\delta_{c}^{2}+\Omega_{c}^{2}}}{2},$ (7)

and eigenstate coefficients given by

$\frac{b_{\pm}}{a_{\pm}}=\frac{\delta_{c}\pm\sqrt{\delta_{c}^{2}+\alpha^{2}\Omega_{c}^{2}}}{\alpha\Omega_{c}}.$ (8)

Because the Autler-Townes control is realized at low $\delta_{c}$ values, in this regime we usually cannot apply the adiabatic elimination reducing the four-level Hamiltonian of Eq. (3) to the three-level one of Eq. (4). However, in the next section we show that, using the states in Eq. (6), we obtain a matrix description in which $\left|T\right>$ and $\left|2\right>$ are not coupled. This will be useful for reaching a high fidelity in the Autler-Townes regime. Because the energy separation between the states $|S\rangle$ and $|T\rangle$ may be modified by the Autler-Townes energy shift of the $|T\rangle$ state, their singlet-triplet coupling is also modified. In the experiment of Refs. [16, 17] an increased triplet excitation reached approximately twice the value $\beta^{2}=0.13$ for the molecular states of interest.

### II.4 Fidelity

To measure the performance of the triplet transfer, we define the fidelity as the probability of being in the target state $\left|T\right>$ at the final time of the evolution, $t=t_{f}$. In formula,

$\mathcal{F}(t_{f})=\left|\langle\psi\left(t=t_{f}\right)|T\rangle\right|^{2},$ (9)

where $\psi(t=t_{f})$ is the state of the system at the end of the protocol. As long as $t_{f}<\Gamma_{T}^{-1}$, our target state decays after the protocol into the experimentally relevant one, $\left|T_{g}\right>$, and the fidelities of both states will be practically identical. Alternatively, the population of the state $\left|T\right>$ can be rapidly transferred to the final target $\left|T_{g}\right>$ by a $\pi$ pulse. We will also use the infidelity $\mathcal{I}\equiv 1-\mathcal{F}$ where necessary for a better visualization.
They explicitly read:

* Linear:
$\delta_{p}(\tau)=a(2\tau-1)\,,$ (11)
* Arctan:
$\delta_{p}(\tau)=a\arctan(b\tau)-c,$ (12)
* Roland-Cerf:
$\delta_{p}(\tau)=\frac{\beta\Omega_{p}(1-2\tau)}{2\sqrt{4\tau(1-\tau)+\left(\frac{\beta\Omega_{p}}{2a}\right)^{2}}}-d\,.$ (13)

In our simulations we set $a=10$ ns$^{-1}$, producing the linear scan of Fig. 2(a). The dimensionless parameter $b$ of the arctan scan controls the shape of the $f(\tau)$ function and, consequently, the distance between the states $\left|1\right>$ and $\left|T\right>$ at the end of the protocol, as in Fig. 2(b). The Roland-Cerf function is based on the procedure described in [46] for the tangent protocol. The bias value $d$ is optimized to create the wanted avoided crossing, as in Fig. 2(c). An optimal choice of the parameters can improve the adiabatic evolution, but may degrade the evolution with counter-diabatic pulses, as seen later in Sec. IV.

Figure 2: Instantaneous eigenvalues vs. rescaled time for the three sweep functions: (a) linear; (b) arctan with $b=20$ and $c=19.2$ ns$^{-1}$; (c) Roland-Cerf with $d=4.68$ ns$^{-1}$. In these plots $\delta_{c}=100$ ns$^{-1}$, leading to $\delta_{ls}=0.03$ ns$^{-1}$ for the light shift defined in Eq. (5). On the left the positions of the three initial eigenstates $\left|1\right>,\left|S\right>,\left|T\right>$ are marked, and on the right their final energies. In (a) two avoided crossings are created, but we are only interested in the one between the states $\left|1\right>$ and $\left|T\right>$. In (b), thanks to the shape of the arctan function, only the avoided crossing of interest, observed in the inset, is generated. Also in (c), although with a different sweep function, a single avoided crossing is obtained.

For the three-level Hamiltonian in Eq. (4), we compare the performance of the above sweep functions. Fig. 2 reports the instantaneous eigenvalues of the Hamiltonian for the three functions when varying the detuning $\delta_{p}$ with time. Because the energy of the triplet state $\left|T\right>$ is lower than that of $\left|S\right>$, a scan of $\delta_{p}$ from a negative value with a positive slope reaches the $\left|1\right>\to\left|T\right>$ anticrossing before the $\left|1\right>\to\left|S\right>$ one, as for the linear sweep in (a). We are interested only in the first anticrossing where, adiabatically following the lowest eigenstate (blue/black solid line), a $\left|1\right>\rightarrow\left|T\right>$ population transfer is realized. Therefore the linear sweep is terminated before reaching the $\left|1\right>\to\left|S\right>$ anticrossing. In Fig. 2(b), the arctan case, the coefficients $b$ and $c$ of Eq. (12) governing the non-constant rate of $\delta_{p}$ are chosen in order to avoid the second anticrossing. This generates temporal oscillations in the fidelity, as shown later in the corresponding section. Also here an adiabatic following of the lowest eigenstate (blue/black solid line) produces the required population transfer. In Fig. 2(c), such a transfer occurs in the instantaneous-energy scheme for the optimized Roland-Cerf sweep, too.

### III.2 Accelerated evolution

The final time of the transfer protocols must be compared to the spontaneous-emission decay of the molecular states of interest. Therefore we accelerate the evolution in order to reach the required fidelity in a shorter time. However, an accelerated evolution typically generates unwanted non-adiabatic transitions between the eigenstates of the system, reducing the final fidelity.
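Before introducing the counter-diabatic correction, we note that the sweep functions of Eqs. (11)-(13) are simple to implement; the minimal sketch below (ours, with the parameter values quoted in the caption of Fig. 2 and the constant continuation of the linear scan after $t/t_{f}=0.3$ discussed in Sec. V.1) illustrates them.

```python
# Sketch of the sweep functions delta_p(tau) of Eqs. (11)-(13);
# tau = t/t_f is the rescaled time, all detunings in ns^-1.
import numpy as np

a = 10.0                               # scaling parameter, ns^-1
beta_Omega_p = np.sqrt(0.13) * 0.24    # beta * Omega_p, ns^-1 (beta^2 = 0.13)

def linear(tau):
    # Eq. (11); held constant after t/t_f = 0.3 to avoid the |1>-|S> anticrossing
    return a * (2.0 * np.minimum(tau, 0.3) - 1.0)

def arctan(tau, b=20.0, c=19.2):
    # Eq. (12) with the Fig. 2(b) parameters
    return a * np.arctan(b * tau) - c

def roland_cerf(tau, d=4.68):
    # Eq. (13): locally adiabatic sweep with optimized bias d [Fig. 2(c)]
    g = beta_Omega_p
    return g * (1 - 2*tau) / (2*np.sqrt(4*tau*(1-tau) + (g/(2*a))**2)) - d

tau = np.linspace(0.0, 1.0, 501)
scans = {f.__name__: f(tau) for f in (linear, arctan, roland_cerf)}
```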
The approach introduced in [48, 49, 50] drives the system in a perfectly adiabatic manner in an arbitrarily short time. In the next sections, we show the results for the reduced three-level model, but the same procedure can be applied to the complete system as well. Given a generic time-dependent system Hamiltonian

$\hat{H}_{0}(t)=\sum_{k}E_{k}(t)\left|k(t)\right>\left<k(t)\right|\,,$ (14)

where $E_{k}(t)$ and $\left|k(t)\right>$ are, respectively, the instantaneous eigenvalues and eigenvectors of the system, the counter-diabatic (CD) Hamiltonian $H_{CD}(t)$ is determined by

$H_{CD}(t)=i\hbar\sum_{n\neq k}\sum_{k}\frac{\left|n(t)\right>\left<n(t)\right|\partial_{t}H_{0}(t)\left|k(t)\right>\left<k(t)\right|}{E_{k}(t)-E_{n}(t)}\,.$ (15)

The system evolution is thus driven by the new Hamiltonian $\hat{H}(t)=\hat{H}_{0}(t)+H_{CD}(t)$. In this way, if the system starts in an eigenstate $\left|k(t)\right>$, it remains in that instantaneous eigenstate for the entire evolution. While for a two-level system the expression of $H_{CD}$ can be found analytically, for more levels it is generally more convenient to resort to numerical tools. Identifying $\hat{H}_{0}$ with the matrices in Eqs. (3) and (4), the Schrödinger equation with the total Hamiltonian $\hat{H}(t)$ is then numerically evolved using the function sesolve of the Python library QuTiP [51], ensuring the convergence of the results.

Figure 3: Instantaneous eigenvalues of the four-level system vs. the rescaled time $\tau=t/t_{f}$ at $\delta_{c}=1$ ns$^{-1}$ for the sweep functions: (a) linear; (b) arctan with $b=10$ and $c=18$ ns$^{-1}$; (c) Roland-Cerf with $d=3.41$ ns$^{-1}$. The energies of the four states $\left|1\right>,\left|S\right>,\left|T\right>,\left|2\right>$ evolve with time from the initial to the final values. The impact of the final $\left|1\right>$-$\left|S\right>$ avoided crossing is handled as described in the caption of Fig. 2. Note the presence of the additional $\left|1\right>$-$\left|2\right>$ avoided crossing with a small coupling strength.

## IV Autler-Townes regime

For the four-level system described by Eq. (3), we examine the population transfer between the states $\left|T\right>$ and $\left|1\right>$ at low $\delta_{c}$ values, where the Autler-Townes effect plays a key role. The instantaneous eigenvalues are depicted in Fig. 3 for the protocols of Sec. III.1 with the parameters reported in the figure caption. The transfer of interest is associated with the avoided crossing between the states $\left|T\right>$ and $\left|1\right>$. However, in all the protocols of Fig. 3, before reaching the level $\left|T\right>$ the scan encounters an additional $\left|1\right>$-$\left|2\right>$ avoided crossing. Even with its reduced strength, this crossing would block the realization of a very high fidelity in the final $\left|T\right>$ occupation. In order to bypass this loss, for both the linear and arctan sweeps we split the $\delta_{p}$ evolution into two pieces. At first, using a short evolution time, we scan diabatically through the first avoided crossing. Next we impose on $\delta_{p}$ the counter-diabatic evolution of Sec. III.2. The transition between the two regimes occurs at $\tau=t/t_{f}=0.2$. The temporal evolution of the Roland-Cerf sweep is automatically optimized for such fast-slow scanning. As an example of the fidelity results, Fig. 4(a) shows the populations of the four levels of the original basis in the case of the arctan protocol. The $\left|T\right>$ state fidelity reaches a final value of around 65%.
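As an aside, Eq. (15) can be evaluated numerically along the lines of the hedged sketch below (our own illustration, not the QuTiP implementation); the derivative of $H_{0}$ is taken by finite differences, the total Hamiltonian $\hat{H}(t)=\hat{H}_{0}(t)+H_{CD}(t)$ can then be propagated, e.g., with qutip.sesolve as mentioned above, and the fidelity of Eq. (9) is read off as $|\langle T|\psi(t_{f})\rangle|^{2}$.

```python
# Hedged numerical sketch of the counter-diabatic term of Eq. (15);
# H0 is a callable returning the matrix of Eq. (3) or Eq. (4) at time t.
# hbar = 1 (energies in ns^-1); assumes a non-degenerate instantaneous spectrum.
import numpy as np

def H_CD(H0, t, dt=1e-4):
    E, V = np.linalg.eigh(H0(t))                  # instantaneous eigenpairs
    dH = (H0(t + dt) - H0(t - dt)) / (2.0 * dt)   # finite-difference dH0/dt
    Hcd = np.zeros(dH.shape, dtype=complex)
    for n in range(len(E)):
        for k in range(len(E)):
            if n == k:
                continue
            Pn = np.outer(V[:, n], V[:, n].conj())   # projector |n><n|
            Pk = np.outer(V[:, k], V[:, k].conj())   # projector |k><k|
            Hcd += Pn @ dH @ Pk / (E[k] - E[n])
    return 1j * Hcd    # total Hamiltonian: H(t) = H0(t) + H_CD(t)
```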
However, even with a perfectly adiabatic evolution, a large part of the final population, $\approx 35\%$ in this case, occupies the state $\left|2\right>$. This result appears because the counter-diabatic protocol imposes an evolution linking initial and final superpositions of states: for the present case, the Autler-Townes effect produces a close degeneracy between the states $\left|1\right>$ and $\left|+\right>$. Therefore, we have applied the counter-diabatic evolution to the new four-level system where the $\left|T\right>,\left|2\right>$ states are replaced by the dressed basis $\left|\pm\right>$ of Eq. (6). Figure 4(b) shows the populations of the four states $\left|+\right>,\left|S\right>,\left|1\right>,\left|-\right>$ obtained from the evolution of the CD Hamiltonian in this dressed representation. Figure 4(c) reports the populations of the four states $\left|T\right>,\left|S\right>,\left|1\right>,\left|2\right>$ derived from that evolution by performing a projective measurement of the occupations of the states $\left|T\right>$ and $\left|2\right>$. The states $\left|\pm\right>$ are superpositions of $\left|T\right>,\left|2\right>$; thus, the measured $\left|T\right>$ population contains contributions from both dressed states, leading to interference effects. With the fidelity maximized, the dressed CD Hamiltonian produces a final 97% occupation of $\left|T\right>$, higher than that of case (a). This population can be transferred by a $\pi$ pulse on a short time scale to the final experimental target $\left|T_{g}\right>$. The spontaneous emission $\left|T\right>\to\left|T_{g}\right>$ with $1/\Gamma_{T}\gg t_{f}$ represents an alternative for the accumulation into the target state.

Figure 4: Populations of the four states obtained from the different evolutions: in (a) with the four-level Hamiltonian of Eq. (3) in the original/bare basis, and in (b) and (c) in the dressed representation introduced in Sec. IV. The populations are measured in (a) and (c) in the bare basis $\{\left|T\right>,\left|2\right>\}$ and in (b) in the dressed basis $\{\left|\pm\right>\}$. The population of $\left|T\right>$ can then be transferred to $\left|T_{g}\right>$ either by spontaneous decay or by applying an additional $\pi$ pulse. All evolutions are performed using the arctan sweep together with the shortcut-to-adiabaticity protocol presented in Sec. III.2 for $\tau=t/t_{f}>0.2$. We choose a very fast evolution with $t_{f}=1$ ns in order to pass diabatically through the first avoided crossing. At $t_{f}=1$ ns, the measured $\left|T\right>$ population is 97% in (c), much higher than for the different evolution in (a). (d) shows a zoom of Fig. 3(b) into the avoided crossing of interest. Notice that the population transfer in (a), (b) and (c) occurs at the position of the avoided crossing in (d).

## V Three-level performances

In this section, we return to the three-level system introduced above by Eq. (4). We first analyze the fidelities of the driving protocols when no CD correction is applied; then we accelerate the evolution by introducing the CD term presented in Sec. III.2.

Figure 5: Comparison between the fidelities, at $t=t_{f}$, as defined in Eq. (9), of the three adiabatic protocols denoted as LZ (linear), AT (arctan) and RC (Roland-Cerf), at $\delta_{c}=30$ ns$^{-1}$. The dashed lines represent the reduced three-level system, while the solid lines describe the complete four-level system.
The two systems are identified by the numbers $3$ and $4$ in the legends. For such a large $\delta_{c}$, the four- and three-level systems produce very similar fidelities.

### V.1 Linear, arctan and Roland-Cerf protocols

Figure 5 reports a comparison of the fidelities of the three protocols presented in Sec. III, without the counter-diabatic term and in the regime of large values of $\delta_{c}$, such that the $|2\rangle$ state can be eliminated and the system is effectively reduced to a three-level one. The well-designed Roland-Cerf driving reaches the best performance, with $\mathcal{F}=93\%$ at $t_{f}=50$ ns for $\delta_{c}=30$ ns$^{-1}$, as shown in Fig. 5 by the dashed line with square symbols. Good fidelities are reached at longer final times with the arctan protocol. However, in this case the population of the level $\left|T\right>$ experiences temporal oscillations, as shown in Fig. 6(b). Similar oscillations are observed also in Fig. 5. The amplitude of these oscillations is reduced using larger $t_{f}$ values. Lower fidelities are obtained with the linear protocol, with large oscillations in the populations. These are generated by changing the linear protocol to a constant after $t/t_{f}=0.3$. As above, this change is necessary if one wants to avoid creating an anticrossing with $\left|S\right>$ and transferring, in this way, part of the population to it.

Figure 6: Populations of the three levels at $t_{f}=1000$ ns and $\delta_{c}=30$ ns$^{-1}$, for the (a) linear, (b) arctan and (c) Roland-Cerf sweep functions. In all three cases the populations are represented by the orange dashed line for the state $\left|1\right>$, the blue solid line for $\left|T\right>$ and the green dash-dot line for the state $\left|S\right>$. The legend in (b) is valid also for (a) and (c). For the same evolution time, the latter two driving protocols produce better results. The population of the state $\left|T\right>$ represents the fidelity as a function of the rescaled time. The oscillations in (a) are caused by the fact that the linear protocol is terminated before reaching the level $\left|S\right>$ (at $t/t_{f}=0.3$ in this case) and therefore assumes a constant shape for the rest of the evolution time.

### V.2 Counter-diabatic evolution

Figure 7: In (a), imaginary parts of the $H_{CD}^{(i,j)}$ elements with $i,j=1,2,3$ for the arctan protocol, with $H_{CD}^{(1,3)}$ the largest required pulse. Parameters: $\delta_{c}=30$ ns$^{-1}$, $t_{f}=1$ ns, $c=19.2$ ns$^{-1}$, $b=20$. In (b), infidelities, at the $10^{-3}$ level, for different CD corrections. Blue dots: full $H_{CD}$ elements; orange squares: eliminating the $H_{CD}^{(2,3)}$ correction only; green triangles: eliminating both the $H_{CD}^{(2,3)}$ and $H_{CD}^{(1,2)}$ corrections.

We report here the performance of the arctan $\left|1\right>\to\left|T\right>$ protocol as corrected by the Berry counter-diabatic Hamiltonian of Eq. (15). For the three-level system, three additional pulses are required in order to eliminate all the non-adiabatic transitions. The imaginary parts of those $H_{CD}^{(i,j)}$ matrix elements are shown in Fig. 7(a). Applying these correction pulses, we obtain infidelities of the order of $10^{-3}$, as shown by the blue dots in Fig. 7(b), for an arbitrary evolution time. The application of the Berry correction pulses is costly in terms of energy and of new matrix elements in the Hamiltonian. Towards a simplified experimental realization, the number of $H_{CD}$ corrections may be reduced to the most relevant ones, as analyzed in [46].
Fig. 7(a) shows that the $H_{CD}^{(1,3)}$ correction peak is larger than the remaining ones. Fig. 7(b) compares the infidelities obtained by applying the full $H_{CD}$ and by eliminating either one or both of the small-amplitude corrections. The elimination of the two terms in the CD Hamiltonian produces a change of at most $5\cdot 10^{-4}$, but greatly simplifies the experimental realization. Therefore the single $H_{CD}^{(1,3)}$ implementation produces very high fidelities on timescales much shorter than without the acceleration, bringing experimental times down to the coherence time of the system. A similar reduction of the effective number of matrix elements in $H_{CD}$ is possible in the protocol shown in Sec. IV, but will not be shown here in full detail.

## VI Conclusions

We have explored the Autler-Townes control of spin-orbit coupling in an original four-level system with the target of a large transfer efficiency to a final triplet state. We extended to population transfer the flexibility explored by [21] for the nonlinear response in such a system. Our work confirms the interesting features of a probe-coupling excitation laser scheme assisted by the intermediate spin-orbit coupling. However, the present system is not equivalent to a STIRAP configuration [43], which realizes a population transfer in, e.g., a three-level system without populating the intermediate state. Our target, instead, is to populate the final triplet state by modifying the spin-orbit coupling of the intermediate and final states. The quantum control is based on the Autler-Townes effect, which produces a modified energy separation between the singlet and triplet states. The detuning and the intensity of the control laser represent the key parameters of the control operation. We have applied the counter-diabatic control in order to speed up the transfer process with a high transfer fidelity. Our population transfers are all larger than those reached in the experiments of [16]. The use of accelerated adiabatic driving produces the required transfer in a time short compared to the decay of the molecular states of interest. These features make our protocols very relevant for future experiments. For the present population transfer, the coupling between the initial and final states is based on an intermediate singlet-triplet coupling Hamiltonian. However, the present analysis may be applied as well, e.g., to configurations where that coupling is produced by an applied magnetic field, static or oscillating, or by a two-photon laser interaction. Thus, our quantum transfer may be useful for a large class of quantum simulations. In heavy alkali dimer molecules such as Rb$_2$ and Cs$_2$, the spin-orbit interaction is much stronger than for the lighter ones. Such perturbed levels are used as intermediate levels in the transfer of cold molecules, formed at long range in the triplet state, to deeply bound levels of the singlet ground state. Our control approaches could be applied also to those systems.

###### Acknowledgements.
TK acknowledges support from the Visiting Fellow Program of the University of Pisa. TK and EA thank A. H. Ahmed and A. M. Lyyra for extended stimulating discussions. We are grateful to Francesco Petiziol for comments on the manuscript.

## References

* Nielsen and Chuang [2010] M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, 2010).
* D’Alessandro [2007] D.
D’Alessandro, _Introduction to Quantum Control and Dynamics_, Chapman & Hall/CRC Applied Mathematics & Nonlinear Science (CRC Press, 2007). * Wiseman and Milburn [2009] H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, 2009). * Glaser _et al._ [2015] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, and F. K. Wilhelm, Training Schrödinger´s cat: quantum optimal control - strategic report on current status, visions and goals for research in europe, Eur. Phys. J. D 69, 279 (2015). * Autler and Townes [1955] S. H. Autler and C. H. Townes, Stark effect in rapidly varying fields, Phys. Rev. 100, 703 (1955). * Garraway and Suominen [1998] B. M. Garraway and K.-A. Suominen, Adiabatic passage by light-induced potentials in molecules, Phys. Rev. Lett. 80, 932 (1998). * Brumer and Shapiro [2003] P. W. Brumer and M. Shapiro, _Principles of the Quantum Control of Molecular Processes_ (Wiley Hoboken, 2003). * Sussman _et al._ [2006] B. J. Sussman, D. Townsend, M. Y. Ivanov, and A. Stolow, Dynamic Stark control of photochemical processes, Science 314, 278 (2006), https://www.science.org/doi/pdf/10.1126/science.1132289 . * Sola _et al._ [2018] I. R. Sola, B. Y. Chang, S. A. Malinovskaya, and V. S. Malinovsky, Quantum control in multilevel systems, in _Advances Atomic Molecular Optical Physics_ , Vol. 67 (Elsevier, 2018) p. 151. * Boyle _et al._ [2009] S. J. Boyle, A. J. Ramsay, A. M. Fox, M. S. Skolnick, A. P. Heberle, and M. Hopkinson, Beating of exciton-dressed states in a single semiconductor $\mathrm{InGaAs}/\mathrm{GaAs}$ quantum dot, Phys. Rev. Lett. 102, 207401 (2009). * Astapenko _et al._ [2015] V. Astapenko, P. Golovinski, and A. Yakovets, Control of excitation transfer in coupled quantum dots by a nonresonant laser pulse, Optics and Laser Technology 71, 103 (2015). * Stranius _et al._ [2018] K. Stranius, M. Hertzog, and B. Karl, Selective manipulation of electronically excited states through strong light-matter interactions, Nature Comm. 9, 043418 (2018). * Eizner _et al._ [2019] E. Eizner, L. A. Martìnez-Martìnez, J. Yuen-Zhou, and S. Kèna-Cohen, Inverting singlet and triplet excited states using strong light-matter coupling, Science Advances 5, eaax4482 (2019). * Sola _et al._ [2006] I. R. Sola, J. González-Vázquez, and V. S. Malinovsky, Optical control of a spin switch in the weak spin-orbit coupling limit, Phys. Rev. A 74, 043418 (2006). * González-Vázquez _et al._ [2006] J. González-Vázquez, I. R. Sola, J. Santamaria, and V. S. Malinovsky, Optical control of the singlet-triplet transition in Rb2, The Journal of Chemical Physics 125, 124315 (2006). * Ahmed _et al._ [2011] E. H. Ahmed, S. Ingram, T. Kirova, O. Salihoglu, J. Huennekens, J. Qi, Y. Guan, and A. M. Lyyra, Quantum Control of the Spin-Orbit Interaction Using the Autler-Townes Effect, Phys. Rev. Lett. 107, 163601 (2011). * Ahmed and Lyyra [2014] E. H. Ahmed and A. M. Lyyra, Frequency domain control of quantum state singlet/triplet character and prospects for an all-optical spin switch, Journal of Modern Optics 61, 7 (2014). * Ahmed _et al._ [2014] E. H. Ahmed, X. Pan, J. Huennekens, and A. M. Lyyra, Optical control of collisional population flow between molecular electronic states of different spin multiplicity, Phys. Rev. A 89, 061401 (2014). * Vindel-Zandbergen _et al._ [2013] P. Vindel-Zandbergen, M. Falge, B. Y. Chang, V. Engel, and I. R. 
Sola, Manipulating the singlet–triplet transition in ion strings by nonresonant dynamic Stark effect, Theoretical Chemistry Accounts 132, 1359 (2013). * Falge _et al._ [2014] M. Falge, P. Vindel-Zandbergen, V. Engel, M. Lein, B. Y. Chang, and I. R. Sola, The time-scale of nonlinear events driven by strong fields: can one control the spin coupling before ionization runs over?, Journal of Physics B: Atomic, Molecular and Optical Physics 47, 124027 (2014). * Jamshidi-Ghaleh _et al._ [2017] K. Jamshidi-Ghaleh, Z. Ebrahimi-hamed, and M. Sahrai, Controlling nonlinear optical response in an open four-level molecular system using quantum control of spin-orbit interaction, The European Physical Journal Plus 132, 424 (2017). * Yumoto _et al._ [2021] G. Yumoto, H. Hirori, F. Sekiguchi, R. Sato, M. Saruyama, T. Teranishi, and Y. Kanemitsu, Strong spin-orbit coupling inducing Autler-Townes effect in lead halide perovskite nanocrystals, Nature Commun 12, 3026 (2021). * Feng _et al._ [2018] M. Feng, C. J. Kwong, T. S. Koh, and L. C. Kwek, Coherent transfer of singlet-triplet qubit states in an architecture of triple quantum dots, Phys. Rev. B 97, 245428 (2018). * Landau [1932] L. D. Landau, Zur Theorie der Energieübertragung II, Phys. Z. Sowjetunion 2, 46 (1932). * Zener [1932] C. Zener, Non-adiabatic crossing of energy levels, Proc. R. Soc. A 137, 696 (1932). * E.C.G. Stückelberg [1932] E.C.G. Stückelberg, Theory of inelastic collisions between atoms, Helv. Phys. Acta 5, 369 (1932). * Vitanov and Garraway [1996] N. V. Vitanov and B. M. Garraway, Landau-Zener model: Effects of finite coupling duration, Phys. Rev. A 53, 4288 (1996). * Vitanov [1999] N. V. Vitanov, Transition times in the Landau-Zener model, Phys. Rev. A 59, 988 (1999). * Wilkinson _et al._ [1997] S. R. Wilkinson, C. F. Bharucha, M. C. Fischer, K. W. Madison, P. R. Morrow, Q. Niu, B. Sundaram, and M. G. Raizen, Experimental evidence for non-exponential decay in quantum tunnelling, Nature(London) 387, 575 (1997). * Sillanpää _et al._ [2006] M. Sillanpää, T. Lehtinen, A. Paila, Y. Makhlin, and P. Hakonen, Continuous-time monitoring of landau-zener interference in a cooper-pair box, Phys. Rev. Lett. 96, 187002 (2006). * Zenesini _et al._ [2009] A. Zenesini, H. Lignier, G. Tayebirad, J. Radogostowicz, D. Ciampini, R. Mannella, S. Wimberger, O. Morsch, and E. Arimondo, Time-Resolved Measurement of Landau-Zener Tunneling in Periodic Potentials, Phys. Rev. Lett. 103, 090403 (2009). * Kling _et al._ [2010] S. Kling, T. Salger, C. Grossert, and M. Weitz, Atomic Bloch-Zener Oscillations and Stückelberg Interferometry in Optical Lattices, Phys. Rev. Lett. 105, 215301 (2010). * Zhou _et al._ [2014] J. Zhou, P. Huang, Q. Zhang, Z. Wang, T. Tan, X. Xu, F. Shi, X. Rong, S. Ashhab, and J. Du, Observation of Time-Domain Rabi Oscillations in the Landau-Zener Regime with a Single Electronic Spin, Phys. Rev. Lett. 112, 010503 (2014). * Sun _et al._ [2015] G. Sun, X. Wen, M. Gong, D.-W. Zhang, Y. Yu, S.-L. Zhu, J. Chen, P. Wu, and S. Han, Observation of coherent oscillation in single-passage Landau-Zener transitions, Scientific Reports 5, 8463 (2015). * Olson _et al._ [2014] A. J. Olson, S.-J. Wang, R. J. Niffenegger, C.-H. Li, C. H. Greene, and Y. P. Chen, Tunable Landau-Zener transitions in a spin-orbit-coupled Bose-Einstein condensate, Phys. Rev. A 90, 013616 (2014). * Roland and Cerf [2002] J. Roland and N. J. Cerf, Quantum search by local adiabatic evolution, Phys. Rev. A 65, 042308 (2002). * Garanin and Schilling [2002] D. A. Garanin and R. 
Schilling, Effects of nonlinear sweep in the Landau-Zener-Stueckelberg effect, Phys. Rev. B 66, 174438 (2002). * Malossi _et al._ [2013] N. Malossi, M. G. Bason, M. Viteau, E. Arimondo, R. Mannella, O. Morsch, and D. Ciampini, Quantum driving protocols for a two-level system: From generalized Landau-Zener sweeps to transitionless control, Phys. Rev. A 87, 012116 (2013). * Stefanatos and Paspalakis [2020] D. Stefanatos and E. Paspalakis, Speeding up adiabatic passage with an optimal modified roland–cerf protocol, Journal of Physics A: Mathematical and Theoretical 53, 115304 (2020). * Guéry-Odelin _et al._ [2019] D. Guéry-Odelin, A. Ruschhaupt, A. Kiely, E. Torrontegui, S. Martínez-Garaot, and J. G. Muga, Shortcuts to adiabaticity: Concepts, methods, and applications, Rev. Mod. Phys. 91, 045001 (2019). * Suqing _et al._ [2005] D. Suqing, L.-B. Fu, J. Liu, and X.-G. Zhao, Effects of periodic modulation on the Landau–Zener transition, Physics Letters A 346, 315 (2005). * Petiziol _et al._ [2018] F. Petiziol, B. Dive, F. Mintert, and S. Wimberger, Fast adiabatic evolution by oscillating initial Hamiltonians, Phys. Rev. A 98, 043436 (2018). * Vitanov _et al._ [2017] N. V. Vitanov, A. A. Rangelov, B. W. Shore, and K. Bergmann, Stimulated Raman adiabatic passage in physics, chemistry, and beyond, Rev. Mod. Phys. 89, 015006 (2017). * Kirova and Spano [2005] T. Kirova and F. C. Spano, Designing molecular eigenstates in a four-level $\Lambda$ system, Phys. Rev. A 71, 063816 (2005). * Cohen-Tannoudji [1996] C. N. Cohen-Tannoudji, The Autler-Townes Effect Revisited, in _Amazing Light: A Volume Dedicated To Charles Hard Townes On His 80th Birthday_, edited by R. Y. Chiao (Springer New York, New York, NY, 1996) pp. 109–123. * Petiziol _et al._ [2019] F. Petiziol, B. Dive, S. Carretta, R. Mannella, F. Mintert, and S. Wimberger, Accelerating adiabatic protocols for entangling two qubits in circuit QED, Phys. Rev. A 99, 042315 (2019). * Petiziol and Wimberger [2019] F. Petiziol and S. Wimberger, Effect of phase errors on a quantum control protocol using fast oscillations, Condens. Matter 4, 34 (2019). * Berry [2009] M. V. Berry, Transitionless quantum driving, J. Phys. A 42, 365303 (2009). * Demirplak and Rice [2003] M. Demirplak and S. A. Rice, Adiabatic population transfer with control fields, J. Phys. Chem. A 107, 9937 (2003). * Fleischhauer _et al._ [1999] M. Fleischhauer, R. Unanyan, B. Shore, and K. Bergmann, Coherent population transfer beyond the adiabatic limit: Generalized matched pulses and higher-order trapping states, Phys. Rev. A 59, 3751 (1999). * Johansson _et al._ [2013] J. Johansson, P. Nation, and F. Nori, Qutip 2: A python framework for the dynamics of open quantum systems, Computer Physics Communications 184, 1234 (2013).
# Novel Slot Detection: A Benchmark for Discovering Unknown Slot Types in the Task-Oriented Dialogue System

Yanan Wu¹*, Zhiyuan Zeng¹*, Keqing He²*, Hong Xu¹, Yuanmeng Yan¹, Huixing Jiang², Weiran Xu¹
¹Pattern Recognition & Intelligent System Laboratory, Beijing University of Posts and Telecommunications, Beijing, China
²Meituan Group, Beijing, China
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
*The first three authors contributed equally. Weiran Xu is the corresponding author.

###### Abstract

Existing slot filling models can only recognize pre-defined in-domain slot types from a limited slot set. In practical applications, a reliable dialogue system should know what it does not know. In this paper, we introduce a new task, Novel Slot Detection (NSD), in the task-oriented dialogue system. NSD aims to discover unknown or out-of-domain slot types to strengthen the capability of a dialogue system based on in-domain training data. Besides, we construct two public NSD datasets, propose several strong NSD baselines, and establish a benchmark for future work. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide new guidance for future directions (code available at https://github.com/ChestnutWYN/ACL2021-Novel-Slot-Detection).

## 1 Introduction

Slot filling plays a vital role in understanding user queries in personal assistants such as Amazon Alexa, Apple Siri, Google Assistant, etc. It aims at identifying a sequence of tokens and extracting semantic constituents from user queries. Given a large-scale pre-collected training corpus, existing neural models Mesnil et al. (2015); Liu and Lane (2015, 2016); Goo et al. (2018); Haihong et al. (2019); Chen et al. (2019); He et al. (2020b, d); Yan et al. (2020); Louvan and Magnini (2020); He et al. (2020a) have been actively applied to slot filling and achieved promising results.

Existing slot filling models can only recognize pre-defined entity types from a limited slot set, which is insufficient in practical application scenarios. A reliable slot filling model should not only predict the pre-defined slots but also detect potential unknown slot types to know what it does not know, which we call Novel Slot Detection (NSD) in this paper. NSD is particularly crucial in deployed systems, both to avoid performing the wrong action and to discover potential new entity types for future development and improvement. We display an example in Fig. 1.

Figure 1: An example of Novel Slot Detection in the task-oriented dialogue system. Without NSD, the dialogue system gives the wrong response since it misunderstands the unknown slot "is this my world" as the in-domain _playlist_ type. In contrast, NSD recognizes "is this my world" as _NS_ and the system gives a fallback response. Meanwhile, with human-in-the-loop annotation, the system can increase its functions or skills.

| Utterance | play | is | this | my | world | by | leo | arnaud |
|---|---|---|---|---|---|---|---|---|
| Slot Filling Labels | O | B-album | I-album | I-album | I-album | O | B-artist | I-artist |
| Novel Slot Detection Labels | O | NS | NS | NS | NS | O | B-artist | I-artist |

Table 1: Comparison between slot filling and novel slot detection. In the novel slot detection labels, we consider "album" as an unknown slot type that is out of the scope of the pre-defined slot set. Meanwhile, "artist", belonging to in-domain slot types, still needs to be recognized as in the original slot filling task.
In this paper, we define _Novel Slot_ (NS) as new slot types that are not included in the pre-defined slot set. NSD aims to discover potential new or out-of-domain entity types to strengthen the capability of a dialogue system based on in-domain pre-collected training data.

There are two aspects of previous work related to NSD: out-of-vocabulary (OOV) recognition Liang et al. (2017a); Zhao and Feng (2018); Hu et al. (2019); He et al. (2020c, d); Yan et al. (2020); He et al. (2020e) and out-of-domain (OOD) intent detection Lin and Xu (2019); Larson et al. (2019); Xu et al. (2020a); Zeng et al. (2021b, a). OOV means many slot types can have a large number of new slot values while the training set contains only a small part of them. OOV recognition aims to recognize slot values unseen in the training set for pre-defined slot types, using character embeddings Liang et al. (2017a), copy mechanisms Zhao and Feng (2018), few/zero-shot learning Hu et al. (2019); He et al. (2020e); Shah et al. (2019), transfer learning Chen and Moschitti (2019); He et al. (2020c, b), background knowledge Yang and Mitchell (2017); He et al. (2020d), etc. Compared to OOV recognition, our proposed novel slot detection task focuses on detecting unknown slot types, not just unseen values. NSD faces the challenges of both OOV tokens and insufficient context semantics (see the analysis in Section 6.2), greatly increasing the complexity of the task. Another line of related work is OOD intent detection Hendrycks and Gimpel (2017); Lee et al. (2018); Lin and Xu (2019); Ren et al. (2019); Zheng et al. (2020); Xu et al. (2020a), which aims to know when a query falls outside the range of predefined supported intents. The main difference is that NSD detects unknown slot types at the token level while OOD intent detection identifies out-of-domain intents at the query level. NSD requires a deep understanding of the query context and is prone to the label bias of _O_ (see the analysis in Section 5.3.1), making it challenging to identify unknown slot types in the task-oriented dialogue system.

In this paper, we first introduce a new and important task, Novel Slot Detection (NSD), in the task-oriented dialogue system (Section 2). NSD plays a vital role in avoiding performing the wrong action and discovering potential new entity types for the future development of dialogue systems. Then, we construct two public NSD datasets, Snips-NSD and ATIS-NSD, based on the original slot filling datasets, Snips Coucke et al. (2018) and ATIS Hemphill et al. (1990) (Section 3). From the perspective of practical application, we consider three dataset construction strategies: Replace, Mask and Remove. Replace denotes that we label the novel slot values with all _O_ in the training set. Mask labels them with all _O_ and masks the novel slot values. Remove is the strictest strategy, where all the queries containing novel slots are removed. We dive into the details of the three construction strategies in Section 3.2 and perform a qualitative analysis in Section 5.3.1. Besides, we propose two kinds of evaluation metrics, span-level F1 and token-level F1, in Section 3.4, following the slot filling task. Span F1 considers the exact matching of a novel slot span while Token F1 focuses on prediction accuracy on each word of a novel slot span. We discuss the performance comparison between the two metrics and propose a new metric, restriction-oriented span evaluation (ROSE), to combine the advantages of both in Section 5.3.3.
Then, we establish a fair benchmark and propose extensive strong baselines for NSD in Section 4. Finally, we perform exhaustive experiments and qualitative analysis to shed light on the challenges that current approaches face with NSD in Sections 5.3 and 6.

Our contributions are three-fold: (1) We introduce a Novel Slot Detection (NSD) task in the task-oriented dialogue system. NSD helps avoid performing wrong actions and discover potential new entity types for increasing the functionality of dialogue systems. (2) We construct two public NSD datasets and establish a benchmark for future work. (3) We conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide new guidance for future NSD work.

## 2 Problem Formulation

### 2.1 Slot Filling

Given a sentence $X=\{x_{1},...,x_{n}\}$ with $n$ tokens, the slot filling task is to predict a corresponding tag sequence $Y=\{y_{1},...,y_{n}\}$ in BIO format, where each $y_{i}$ can take three types of values: B-slot_type, I-slot_type and _O_, where "B" and "I" stand for the beginning and intermediate word of a slot and "_O_" means the word does not belong to any slot. Here, slot filling assumes $y_{i}\in\mathbf{y}$, where $\mathbf{y}$ denotes a pre-defined slot set of size $\mathcal{M}$. Current approaches typically model slot filling as a sequence labeling problem using RNNs Liu and Lane (2015, 2016); Goo et al. (2018) or pre-trained language models Chen et al. (2019).

### 2.2 Novel Slot Detection

| Original Utterance | play | is | this | my | world | by | leo | arnaud |
|---|---|---|---|---|---|---|---|---|
| Original Slot Filling Labels | O | B-album | I-album | I-album | I-album | O | B-artist | I-artist |

| Strategy | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Replace | play | is | this | my | world | by | leo | arnaud |
| | O | O | O | O | O | O | B-artist | I-artist |
| Mask | play | MASK | MASK | MASK | MASK | by | leo | arnaud |
| | O | O | O | O | O | O | B-artist | I-artist |
| Remove | - | - | - | - | - | - | - | - |
| | - | - | - | - | - | - | - | - |

Table 2: Comparison between the three processing strategies in the training set. We consider "album" as an unknown slot type and "-" denotes that the sentence is removed from the training data.

We refer to the above training data $D$ as in-domain (IND) data. Novel slot detection aims to identify unknown or out-of-domain (OOD) slot types via IND data while correctly labeling in-domain data. We denote the unknown slot type as _NS_ and in-domain slot types as IND in the following sections. Note that we don't distinguish between B-NS and I-NS and unify them as _NS_ because we empirically find existing models hardly discriminate B and I for an unknown slot type. We provide a detailed analysis in Section 5.3.3. We show an example of NSD in Table 1. The challenges of NSD come from two aspects: _O_ tags and in-domain slots. On the one hand, models need to learn entity information to distinguish _NS_ from _O_ tags. On the other hand, they must discriminate _NS_ from the other slot types in the pre-defined slot set. We provide a detailed error analysis in Section 6.1.

## 3 Dataset

Since there are no existing NSD datasets, we construct two new datasets based on the two widely used slot filling datasets, Snips Coucke et al. (2018) and ATIS Hemphill et al. (1990). We first briefly introduce Snips and ATIS, then elaborate on the data construction and processing in detail, and display the statistics of our NSD datasets, Snips-NSD and ATIS-NSD. Finally, we define two evaluation metrics for the NSD task, Span F1 and Token F1.
### 3.1 Original Slot Filling Datasets

Snips (https://github.com/sonos/nlu-benchmark/tree/master/2017-06-custom-intent-engines) is a custom intent engine dataset. It originally has 13,084 train utterances, 700 dev and 700 test utterances. ATIS (https://github.com/yvchen/JointSLU/tree/master/data) contains audio recordings of people making flight reservations. It originally has 4,478 train utterances, 500 dev and 893 test utterances. The full statistics are shown in Table 3. Note that the vocabulary only contains words in the training set, and test-set words that do not exist in the vocabulary are referred to as OOV words. The percentage of OOV words represents the portion of OOV words in the test set.

### 3.2 Data Construction and Processing

For the Snips and ATIS datasets, we keep some slot classes in training as unknown and integrate them back during testing, following Fei and Liu (2016); Shu et al. (2017); Lin and Xu (2019). We randomly select a proportion of the slot types in Snips and ATIS as unknown slots (5%, 15%, and 30% in this paper). Note that the original train/val/test split is fixed. Considering class imbalance, we perform weighted sampling where the selection probability depends on the number of class examples, similar to Lin and Xu (2019). To avoid randomness in the experiment results, we report the average result over 10 runs.

| | Snips | ATIS |
|---|---|---|
| Vocabulary Size | 11,241 | 722 |
| Percentage of OOV words | 5.95% | 0.77% |
| Number of Slots | 39 | 79 |
| Training Set Size | 13,084 | 4,478 |
| Development Set Size | 700 | 500 |
| Testing Set Size | 700 | 893 |

Table 3: Statistics of the ATIS and Snips datasets.

After we choose the unknown slot types, a critical problem is how to handle sentences including these unknown slot types in the training set. For OOD intent detection, we just need to remove these sentences from the training and validation sets. However, for Novel Slot Detection, a sentence may contain both in-domain slots and unknown slots, which makes tackling unknown slots at the token level nontrivial. We need to balance the performance of recognizing unknown slots and in-domain slots. Therefore, we propose three different processing strategies, as follows (a toy sketch of the three strategies is given below):

(1) Replace: We label the unknown slot values with all _O_ in the training set while the original token values remain unchanged.
(2) Mask: We label the unknown slot values with all _O_ and mask these slot values with a special token MASK.
(3) Remove: All the sentences containing unknown slots are directly removed.

We display examples of the above three strategies in Table 2. For the val and test sets, we label the unknown slot values with all _NS_ while keeping the in-domain labeling fixed. Note that _NS_ tags only exist in the val and test sets, not in the training set. Besides, we keep the original in-domain slots fixed to evaluate the performance on both _NS_ and in-domain slots. We aim to simulate the practical scenario where we can hardly know what the unknown slots are. All three strategies have practical significance. Compared with the others, Remove is the most suitable strategy for real-world scenarios: in practice, dialogue systems are first trained on a dataset labeled by human annotators and then deployed in the actual application. In the process of interacting with real users, novel slot types appear gradually. Therefore, we consider that the training set should not contain sentences with potential novel slots. In other words, Remove is the most suitable strategy for NSD in real applications.
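To make the three strategies concrete, the following toy sketch (a hypothetical helper of ours, not the released preprocessing code) applies them to one (tokens, BIO-labels) pair given a set of unknown slot types, mirroring Table 2.

```python
# Toy sketch of the Replace / Mask / Remove strategies of Table 2.
def apply_strategy(tokens, labels, unknown_types, strategy="Remove"):
    # Which positions carry an unknown slot type (e.g. "B-album" -> "album")?
    unknown = [lab != "O" and lab.split("-", 1)[1] in unknown_types
               for lab in labels]
    if strategy == "Remove":
        return None if any(unknown) else (tokens, labels)  # drop whole sentence
    new_tokens, new_labels = [], []
    for tok, lab, unk in zip(tokens, labels, unknown):
        if unk:
            new_tokens.append("MASK" if strategy == "Mask" else tok)
            new_labels.append("O")              # Replace/Mask: relabel as O
        else:
            new_tokens.append(tok)
            new_labels.append(lab)
    return new_tokens, new_labels

tokens = "play is this my world by leo arnaud".split()
labels = ["O", "B-album", "I-album", "I-album", "I-album",
          "O", "B-artist", "I-artist"]
print(apply_strategy(tokens, labels, {"album"}))             # -> None (Remove)
print(apply_strategy(tokens, labels, {"album"}, "Replace"))  # O-relabeled copy
```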
Moreover, Section 5.3.1 demonstrates that Remove performs best, while the others suffer from severe model bias caused by _O_ tags. Therefore, we adopt Remove as the main strategy in this paper.

### 3.3 Statistics of the New NSD Datasets

Table 4 shows the detailed statistics of Snips-NSD-15% constructed by the Remove strategy, where we choose 15% of the classes in the training data as unknown slots. (Since different proportions of unknown slots yield different statistics, we only display the results of Snips-NSD-15% here for brevity.) Combining Table 3 and Table 4, we can find that the Remove strategy removes 28.70% of the queries in the original Snips training set and hence increases the percentage of OOV words from 5.95% to 8.51%. Unknown slot values account for 12.29% of the total slot values in the test set.

| Snips-NSD-15% | Train | Val | Test |
|---|---|---|---|
| number of in-domain slots | 33 | 33 | 33 |
| number of unknown slots | 6 | 6 | 6 |
| percentage of OOV words | - | - | 8.51% |
| number of queries | 9,329 | 700 | 700 |
| number of queries including unknown slots | 0 | 192 | 202 |
| number of slot values | 23,176 | 1,794 | 1,790 |
| number of unknown slot values | 0 | 210 | 220 |

Table 4: The detailed statistics of Snips-NSD-15%.

### 3.4 Metrics

The traditional slot filling task uses Span F1 (computed with the CoNLL conlleval script, https://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt) for evaluation. Span F1 considers the exact span matching of an unknown slot span. However, we find in Section 5.3.3 that this metric is too strict for NSD models. In practical applications, we only need to coarsely mine part of the words of unknown slots and then send the queries containing potential unknown slot tokens to human annotators, which effectively reduces labor and improves efficiency. Therefore, we define a more reasonable metric, Token F1, which focuses on word-level matching of a novel slot span. We also propose a new metric, Restriction-Oriented Span Evaluation (ROSE), for a fair comparison in Section 5.3.3.

Figure 2: The overall architecture of our approach.

## 4 Methodology

In this section, we introduce the NSD models proposed in this paper and illustrate the differences between the various parallel approaches during the training and test stages.

### 4.1 Overall Framework

The overall structure of the model is shown in Fig. 2. In the training stage, we train either a multiple-class classifier or a binary classifier using different training objectives. We use the public BERT-large Devlin et al. (2019) embedding layer and a BiLSTM-CRF Huang et al. (2015) for token-level feature extraction. Then, in the test stage, we use the typical neural multiple classifier to predict the in-domain slot labels. Meanwhile, we use a detection algorithm, MSP or GDA, to identify novel slot tokens. Finally, we override the labels of the slot tokens detected as NS. In terms of training objectives, detection algorithms, and distance strategies, we compare different variants as follows.
| Detection method | Objective | Distance strategy | 5%: IND Span F1 | 5%: NSD Span F1 | 5%: NSD Token F1 | 15%: IND Span F1 | 15%: NSD Span F1 | 15%: NSD Token F1 | 30%: IND Span F1 | 30%: NSD Span F1 | 30%: NSD Token F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSP | binary | - | 87.21 | 12.34 | 25.16 | 71.44 | 12.31 | 39.50 | 58.88 | 8.73 | 40.38 |
| MSP | multiple | - | 88.05 | 14.04 | 30.50 | 79.71 | 20.97 | 40.02 | 78.52 | 25.26 | 46.91 |
| MSP | binary+multiple | - | 89.59 | 23.58 | 37.55 | 83.72 | 24.70 | 45.32 | 79.08 | 30.66 | 52.10 |
| GDA | binary | difference | 87.95 | 23.83 | 35.83 | 83.65 | 22.06 | 43.99 | 78.72 | 32.50 | 44.13 |
| GDA | binary | minimum | 61.29 | 10.36 | 17.08 | 49.11 | 16.91 | 31.10 | 48.07 | 15.56 | 33.78 |
| GDA | multiple | difference | 93.14 | 29.73 | 45.99 | 90.07 | 31.96 | 53.02 | 85.56 | 36.16 | 54.55 |
| GDA | multiple | minimum | 93.10 | 31.67* | 46.97* | 90.18 | 32.19 | 53.75* | 86.26* | 38.64* | 55.24* |

Table 5: IND and NSD results with different proportions (5%, 15% and 30%) of classes treated as unknown slots on Snips-NSD. * indicates a significant improvement over all baselines (p < 0.05).

| Detection method | Objective | Distance strategy | 5%: IND Span F1 | 5%: NSD Span F1 | 5%: NSD Token F1 | 15%: IND Span F1 | 15%: NSD Span F1 | 15%: NSD Token F1 | 30%: IND Span F1 | 30%: NSD Span F1 | 30%: NSD Token F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSP | binary | - | 92.04 | 19.73 | 29.63 | 91.74 | 23.40 | 33.89 | 80.49 | 21.88 | 39.17 |
| MSP | multiple | - | 94.33 | 27.15 | 31.16 | 92.54 | 39.88 | 42.29 | 87.63 | 40.42 | 47.64 |
| MSP | binary+multiple | - | 94.41 | 32.49 | 43.48 | 93.29 | 41.23 | 43.13 | 90.14 | 41.76 | 51.87 |
| GDA | binary | difference | 93.69 | 27.02 | 34.21 | 92.13 | 30.51 | 36.30 | 88.73 | 30.91 | 45.64 |
| GDA | binary | minimum | 93.57 | 15.90 | 20.96 | 90.98 | 24.53 | 27.26 | 88.21 | 26.40 | 39.83 |
| GDA | multiple | difference | 95.20 | 47.78* | 51.54* | 93.92 | 50.92* | 52.24* | 92.02 | 51.26* | 56.59* |
| GDA | multiple | minimum | 95.31* | 41.74 | 45.91 | 93.88 | 43.78 | 46.18 | 91.67 | 45.44 | 52.37 |

Table 6: IND and NSD results with different proportions (5%, 15% and 30%) of classes treated as unknown slots on ATIS-NSD. * indicates a significant improvement over all baselines (p < 0.05).

Training objective. For in-domain slots, we propose two training objectives. The Multiple classifier refers to the traditional slot filling objective, which performs token-level multi-class classification over the BIO tags Ratinov and Roth (2009) combined with the different slot types. The Binary classifier unifies all non-O tags into one class, and the model makes a token-level binary classification of O vs. non-O on the sequence. Note that in the test stage, for in-domain prediction, we always use the multiple classifier, while for novel slot detection we use the multiple classifier, the binary classifier, or both. In Table 5 and Table 6, binary+multiple means the token will be labeled as NS only if both classifiers predict it as NS.

Detection algorithm. MSP and GDA are detection algorithms used in the test stage. MSP (Maximum Softmax Probability) Hendrycks and Gimpel (2017) applies a threshold to the maximum softmax probability; if the maximum falls below the threshold, the token is predicted to be a novel slot token. GDA (Gaussian Discriminant Analysis) Xu et al. (2020a) is a generative distance-based classifier for out-of-domain detection in Euclidean space. We treat tokens not belonging to any in-domain slots (including O) as novel slot tokens for both methods.
For example, with a binary classifier, if the softmax probabilities of both O and non-O are lower than the MSP threshold, then the token is labeled as NS.

Distance strategy. GDA detection is based on the distances between a target token and each slot representation cluster. In the original GDA, when the minimum distance is greater than a certain threshold, the token is predicted as a novel slot. We propose a novel strategy named Difference, which uses the maximum distance minus the minimum distance; when this difference is less than a threshold, the token is predicted as a novel slot. Both thresholds are obtained by optimizing the NSD metrics on the validation set.

## 5 Experiment and Analysis

### 5.1 Implementation Details

We use the public pre-trained BERT-large-uncased model, which has 24 layers, 1024 hidden units, 16 heads and 336M parameters, to embed tokens. The hidden size of the BiLSTM layer is set to 128. Adam is used for optimization with an initial learning rate of 2e-5. The dropout value is fixed at 0.5, and the batch size is 64. We train the model only on in-domain labeled data. The training stage uses early stopping with patience equal to 10. We use the best F1 scores on the validation set to calculate the MSP and GDA thresholds adaptively. Each experiment is run 10 times under the same setting and we report the average value. The training stage of our model lasts about 28 minutes on a single Tesla T4 GPU (16 GB of memory).

### 5.2 Main Results

Tables 5 and 6 show the experimental results of seven different models on the two benchmark datasets, Snips-NSD and ATIS-NSD, constructed with the Remove strategy. We report both NSD and IND results using Span F1 and Token F1. We compare these models from three perspectives, detection method, objective and distance strategy, in the following. The effect of the proportion of unknown slot types is analyzed in Section 5.3.2.

Detection Method: MSP vs GDA. Under the same objective setting, GDA performs better than MSP on both IND and NSD, especially NSD. We argue that GDA models the posterior distribution in the representation space of the feature extractor and avoids the issue of overconfident predictions Guo et al. (2017); Liang et al. (2017b, 2018). Besides, comparing Snips-NSD and ATIS-NSD, the NSD Token F1 scores on ATIS-NSD are much higher than on Snips-NSD, but no significant difference exists for the NSD Span F1 scores. The reason is that Snips-NSD has a higher average entity length (1.83) than ATIS-NSD (1.29), making it harder to detect the exact _NS_ span.

Objective: Binary vs Multiple. Under all settings, Multiple outperforms Binary by a large margin on the two datasets in both IND and NSD metrics. For MSP, combining Multiple and Binary gets higher F1 scores. Specifically, the Binary classifier is used to calculate the confidence of a token belonging to a non-_O_ type, which can judge whether the token belongs to an entity and distinguish _NS_ from type _O_. On the other hand, we use the Multiple classifier to calculate the confidence for tokens that are of type _NS_, to distinguish _NS_ from all predefined non-_O_ slot types. For GDA, we do not combine Multiple and Binary because of poor performance. Multiple achieves the best results for all the IND and NSD F1 scores. We suppose multi-class classification can better capture semantic features than binary classification.
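A hedged sketch of the two test-time detection rules and the two GDA distance strategies described above follows (our own illustration; the array shapes are assumptions, not the released code).

```python
# Token-level novelty detection rules; thresholds are tuned on the dev set.
import numpy as np

def msp_detect(probs, threshold):
    """MSP: a token is NS when its maximum softmax probability is too low.
    probs: array of shape (n_tokens, n_classes)."""
    return np.max(probs, axis=-1) < threshold

def gda_detect(distances, threshold, strategy="minimum"):
    """GDA: distances of shape (n_tokens, n_slots) to the class Gaussians.
    'minimum'   : NS when even the closest class is farther than the threshold.
    'difference': NS when (max - min) distance is below the threshold, i.e.
                  the token is not clearly closer to any single class."""
    if strategy == "minimum":
        return np.min(distances, axis=-1) > threshold
    return np.max(distances, axis=-1) - np.min(distances, axis=-1) < threshold
```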
| Strategy | 5%: IND Span | 5%: NSD Span | 5%: NSD Token | 15%: IND Span | 15%: NSD Span | 15%: NSD Token | 30%: IND Span | 30%: NSD Span | 30%: NSD Token |
|---|---|---|---|---|---|---|---|---|---|
| Replace | 94.52 | 1.93 | 5.27 | 94.33 | 0.66 | 2.29 | 94.02 | 0.27 | 0.82 |
| Mask | 90.08 | 23.10 | 37.91 | 86.52 | 25.07 | 45.92 | 83.37 | 32.14 | 50.68 |
| Remove | 93.10 | 31.67 | 46.97 | 90.18 | 32.19 | 53.75 | 86.26 | 38.64 | 55.24 |

Table 7: Comparison between different data processing strategies on Snips-NSD using GDA+Multiple+Minimum.

Distance Strategy: Minimum vs Difference. We find that, under the Binary objective, the Difference strategy outperforms Minimum on both datasets for the NSD metrics. But under the Multiple objective, there is no consistent superiority between the two distance strategies. For example, Difference outperforms Minimum for the NSD metrics on ATIS-NSD, opposite to the results on Snips-NSD. We argue that the behavior of the distance strategies is closely related to the objective settings and dataset complexity. We leave the theoretical analysis to future work.

### 5.3 Qualitative Analysis

#### 5.3.1 Effect of Different Data Processing Strategies

Table 7 displays the IND and NSD metrics of the three different dataset processing strategies on Snips-NSD using the same model, GDA+Multiple+Minimum. In this section, we dive into the analysis of the effects of the different data processing strategies. The results show the Replace strategy gets poor performance on NSD, which proves that labeling unknown slots as _O_ tags severely misleads the model. The Mask and Remove strategies are more reasonable since they remove unknown slots from the training data. Their main difference is that Mask only deletes token-level information, while Remove also eliminates the contextual information. For NSD on all datasets, Remove gains significantly better performance than Mask: by 9.06% (5%), 7.83% (15%) and 4.56% (30%) on Token F1, and by 8.57% (5%), 7.12% (15%) and 6.50% (30%) on Span F1. We argue that the remaining context is still misleading even if the novel slot tokens are not directly trained on in the Mask strategy. Besides, Mask does not conform to the real NSD scenario. Overall, Remove is the most suitable strategy for NSD in real applications and achieves the best performance.

#### 5.3.2 Effect of the Proportion of Unknown Slot Types

Fig. 3 displays the effect of the proportion of unknown slot types using the Remove strategy with GDA+Multiple+Minimum. The results show that as the proportion of unknown slot types increases, the NSD F1 scores improve while the IND F1 scores decrease. We suppose fewer in-domain slot types help the model distinguish unknown slots from IND slots, thus the NSD F1 scores improve. However, for in-domain slot detection, since Remove deletes all the sentences containing unknown slots from the training data, our models suffer from the lack of sufficient context to recognize IND slots, so the IND F1 scores decrease.

Figure 4: Effect of varying degrees of restrictions.

| | GDA+mul.+min. | MSP+bin.+mul. |
|---|---|---|
| ROSE-mean | 40.73 | 34.71 |
| ROSE-100% | 40.39 | 33.74 |
| ROSE-50% | 41.00 | 35.46 |

Table 8: ROSE metrics on Snips-NSD using GDA+Multiple+Minimum and MSP+Binary+Multiple.

#### 5.3.3 New Metric: ROSE

The previous results have shown that Span F1 is much lower than Token F1. The reason is that Span F1 is a strict metric, where the model needs to correctly predict all _NS_ tokens with the correct boundaries. This is difficult for NSD models due to the lack of supervised information.
In fact, NSD models only need to mark some tokens in the span of a novel slot and send the whole sequence containing the _NS_ tokens back to human annotators; a small number of token omissions or misjudgments is acceptable. Therefore, to match a reasonable NSD scenario, we propose a new metric, restriction-oriented span evaluation (ROSE), to evaluate span prediction performance under different restrictions. First, we do not penalize predictions that extend beyond the span. Then, we consider a span correct when the number of correctly predicted tokens is greater than a settable proportion $p$ of the span length. We take the average of the ROSE score and the original Span F1 to prevent the model from obtaining an inflated result through over-long predictions. The results using Snips with 15% novel slots are shown in Figure 4. As the degree of restriction increases, the metrics tend to decline. This indicates that the model can mostly identify more than half of the tokens in the spans. To make a comprehensive evaluation, we define ROSE-mean, namely the mean of ROSE-25%, ROSE-50%, ROSE-75%, and ROSE-100%. We present results for part of the proposed models in Table 8.

#### 5.3.4 Analysis of Single Unknown Slots

To analyze the relationship between NSD performance and a single specific slot, we calculate the token and span metrics treating each single slot type as an unknown slot, and show the results of the top five and bottom five slot types by Token F1 score in Table 9. We find that the slots with better performance often account for a larger percentage of the dataset, such as Object_name or Entity_name. They also tend to have a larger value space, such as TimeRange, Music_item, or Artist. These characteristics allow the semantic representations of these slots to be distributed over a large area rather than clustered tightly together. We consider this distribution more reasonable because, in a real application scenario, novel slots are diverse and their distribution tends to be diffuse. The performance on these types also shows that the NSD models we propose can generalize to a realistic data setting.

| | Type | Proportion (%) | Span Length | Token F1 | Span F1 |
|---|---|---|---|---|---|
| top 5 | Object_name | 21.42 | 3.71 | 55.64 | 20.82 |
| | TimeRange | 15.29 | 2.35 | 53.65 | 30.15 |
| | Entity_name | 23.14 | 3.09 | 48.56 | 22.83 |
| | Music_item | 14.86 | 1.05 | 46.23 | 34.59 |
| | Artist | 15.29 | 2.05 | 45.26 | 26.36 |
| bottom 5 | City | 8.57 | 1.32 | 18.72 | 15.85 |
| | Country | 6.29 | 1.57 | 14.19 | 11.11 |
| | State | 5.54 | 1.10 | 13.55 | 10.83 |
| | Best_rating | 6.14 | 1.00 | 11.04 | 11.04 |
| | Year | 3.43 | 1.00 | 10.24 | 10.24 |

Table 9: Results of single unknown slots.

| Type 1 | Type 2 | Token F1 | Span F1 |
|---|---|---|---|
| Object_name | - | 55.64 | 20.82 |
| TimeRange | - | 53.65 | 30.15 |
| Party_size_number | - | 33.44 | 28.57 |
| City | - | 18.72 | 15.85 |
| State | - | 13.55 | 10.83 |
| Object_name | TimeRange | 53.88 | 23.37 |
| Object_name | Party_size_number | 52.81 | 22.35 |
| Object_name | City | 57.92 | 21.42 |
| Object_name | State | 56.32 | 19.27 |
| TimeRange | Party_size_number | 71.27* | 51.03* |
| City | State | 29.33* | 27.14* |

Table 10: Results of combining multiple unknown slots. * denotes that the NSD performance of the combination of two unknown slots is significantly better than that of each single slot.

#### 5.3.5 Analysis of the Relationship between Multiple Unknown Slots

In order to explore the effect of inter-slot relationships on NSD, we conducted experiments in which two types are mixed as novel slots. Some of the results are shown in Table 10.
#### 5.3.4 Analysis of Single Unknown Slot

To analyze the relationship between NSD performance and a single specific slot, we calculate the token and span metrics treating each single slot type in turn as the unknown slot, and show the top five and bottom five results by Token F1 score in Table 9. We find that the slots with better performance often account for a larger percentage of the dataset, such as Object_name or Entity_name. They also tend to have a larger value space, such as TimeRange, Music_item, or Artist. These characteristics allow the semantic representations of these slots to be distributed over a large area rather than clustered tightly together. We consider this distribution more realistic because, in a real application scenario, novel slots are diverse and their distributions tend to be diffuse. Performance on these types also shows that the NSD models we propose generalize well to such realistic data settings.

| | Type | Proportion (%) | Span Length | Token F1 | Span F1 |
|---|---|---|---|---|---|
| top 5 | Object_name | 21.42 | 3.71 | 55.64 | 20.82 |
| | TimeRange | 15.29 | 2.35 | 53.65 | 30.15 |
| | Entity_name | 23.14 | 3.09 | 48.56 | 22.83 |
| | Music_item | 14.86 | 1.05 | 46.23 | 34.59 |
| | Artist | 15.29 | 2.05 | 45.26 | 26.36 |
| bottom 5 | City | 8.57 | 1.32 | 18.72 | 15.85 |
| | Country | 6.29 | 1.57 | 14.19 | 11.11 |
| | State | 5.54 | 1.10 | 13.55 | 10.83 |
| | Best_rating | 6.14 | 1.00 | 11.04 | 11.04 |
| | Year | 3.43 | 1.00 | 10.24 | 10.24 |

Table 9: Results of treating each single slot type as the unknown slot.

| Type 1 | Type 2 | Token F1 | Span F1 |
|---|---|---|---|
| Object_name | - | 55.64 | 20.82 |
| TimeRange | - | 53.65 | 30.15 |
| Party_size_number | - | 33.44 | 28.57 |
| City | - | 18.72 | 15.85 |
| State | - | 13.55 | 10.83 |
| Object_name | TimeRange | 53.88 | 23.37 |
| Object_name | Party_size_number | 52.81 | 22.35 |
| Object_name | City | 57.92 | 21.42 |
| Object_name | State | 56.32 | 19.27 |
| TimeRange | Party_size_number | 71.27* | 51.03* |
| City | State | 29.33* | 27.14* |

Table 10: Results of combining multiple unknown slots. * denotes that the NSD performance of the combination of two unknown slots is significantly better than that of each single slot.

#### 5.3.5 Analysis of Relationships between Multiple Unknown Slots

To explore the effect of inter-slot relationships on NSD, we conducted experiments in which two slot types are mixed as novel slots. Some of the results are shown in Table 10. Among the five types in the table, Object_name is an open vocabulary slot with a wide range of values that contains many OOV tokens; TimeRange and Party_size_number often contain numbers; City and State are usually similar in semantics and context. We find that when the other types are combined with Object_name, NSD performance stays close to that of treating Object_name as a novel slot alone. One reason is that the proportion of the other types in the dataset is relatively small, so their overall impact on the metrics is limited. Another is that, due to the large semantic range of the open vocabulary slot, there is a latent inclusion relationship with the other types, so mixing in a single additional type tends to have only a slight impact on NSD performance. We also find that an appropriate combination can significantly improve NSD, such as TimeRange with Party_size_number, or City with State. This indicates that when a novel slot is similar to an in-domain slot, the model tends to predict the novel slot as the similar in-domain slot, which leads to errors. When both are treated as novel slots, these errors are mitigated.

| NSD error proportion (%) | O | Open vocabulary slots | Other slots | Sum |
|---|---|---|---|---|
| Prediction is NS | 17.79 | 18.84 | 9.07 | 45.70 |
| Target is NS | 18.47 | 7.54 | 28.29 | 54.30 |
| Sum | 36.26 | 26.38 | 37.36 | 100.00 |

Table 11: Relative proportions of several types of errors.

Table 12: Error cases from NSD predictions.

- NS to O (NS: movie_name, abbreviated m_name) — text: when will paris by night aired; true: O O B-m_name I-m_name I-m_name O; predict: O O NS O NS O
- NS to open slot (NS: album) — text: play the insoc ep; true: O B-album I-album I-album; predict: O B-object_name I-object_name NS
- NS to other slot (NS: artist) — text: play kurt cobain ballad tunes; true: O B-artist I-artist B-music_item O; predict: O B-genre I-genre B-music_item O
- O to NS (NS: artist) — text: the workout playlist needs more chris cross; true: O B-playlist O O O B-artist I-artist; predict: O B-playlist O O NS NS NS
- Open slots to NS (NS: object_type) — text: tell me the actors of the saga awards; true: O O O B-object_name O O B-object_type O; predict: O O O NS O O NS O
- Other slots to NS (NS: city) — text: what is the weather of east portal ks; true: O O O O O B-city I-city B-state; predict: O O O O O NS NS NS

## 6 Discussion

In this section, we empirically divide all the error samples into three categories. Each category covers two aspects, corresponding to NSD precision and recall, respectively. We present the relative proportions of the error types in Table 11, computed on the Snips dataset with 5% novel slots using the GDA+Multiple+Minimum model. For each error type, we present an example in Table 12 to illustrate its characteristics and analyze its causes. We then identify the key challenges and finally propose possible solutions for future work.

### 6.1 Error Analysis

Tag _O_. Tag _O_ is the largest and most widely distributed type in the dataset, and it generally covers independent function tokens. It is therefore easily confused with other types, and the confusion is more severe for novel slots, which lack supervised learning. We observe that tokens with the _O_ label that are detected as novel slots usually occur near spans, and that function words inside a span labeled as a novel slot have some probability of being predicted as _O_.
We consider this kind of problem to be related to the context. Although the Remove strategy effectively reduces how much the _O_ tag misleads the detection of novel slots, the _O_ tag is still affected by the context information of other in-domain slots.

Open Vocabulary Slots. We observe that a large number of novel slot tokens are mispredicted as open vocabulary slots, while the reverse situation is much rarer. This indicates that in Snips, open vocabulary slots tend to overlap with or semantically contain most other slots. Even in traditional slot filling tasks, open vocabulary slots are often confused with other slots. We support this hypothesis in the analysis: Section 5.3.5 shows that NSD performs better when open vocabulary slots are treated as novel slots, and Section 5.3.4 shows that there is no significant performance change when open vocabulary slots are mixed with semantically concentrated slots. The root of this problem is that the dataset definition is not entirely reasonable: slots with a very large value range can hardly help a personal assistant give an appropriate reply, and the supervised information for these slots is usually incomplete.

Similar Slots. Apart from the two cases above, predicting novel slots as other in-domain slots is the most common type of error, and similar slots account for a large part of it. Due to overlapping vocabularies or shared contexts, the model tends to be overconfident and predicts the label of the similar in-domain slot. We analyzed this phenomenon in Table 10: when similar types are treated as novel slots at the same time, NSD performance rises significantly. We employ the generative classification method GDA, compared with the traditional MSP method, to make fuller use of the data features and alleviate this problem.

### 6.2 Challenges

Based on the above analysis, we summarize the current challenges faced by the NSD task:

Function tokens. Articles, prepositions, and similar words act as connectives in a sequence. They are usually labeled with type _O_, but also occur inside some long-span slots, such as Movie_name. This leads to confusion between _O_ and novel slots when such a slot is the target of NSD.

Insufficient context. Correct slot detection often depends on the context, and this supervised information is missing for novel slots. Models can only perform NSD on tokens using the original embeddings or representations trained in other contexts, which can bias the semantic modeling of the novel slot.

Dependencies between slots. There are semantic overlaps or inclusion relationships in the slot definitions of current benchmark slot filling datasets. As a result, the semantic features are not sufficiently discriminative, and outlier tokens of in-domain slots are easily confused with novel slots.

Open vocabulary slots. Open vocabulary slots are a special kind of slot: their definition is usually broad, they can often be subdivided further, and their value range is wide. Their representation distribution tends to be diffuse and uneven, which can mislead NSD.

### 6.3 Future Directions

For tag _O_, a possible solution is to use a binary model to assist in distinguishing _O_ from non-_O_ function tokens; we provide a simple method in this paper (sketched below) and leave further optimization to future work.
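As a rough illustration of such an auxiliary binary model (our own sketch: the paper does not spell out a combination rule, so the veto logic and the threshold below are assumptions), one can train a token-level O/non-O classifier and use it to veto _NS_ predictions on tokens it confidently marks as function tokens:

```python
from typing import List

def combine_with_o_filter(ns_pred: List[str],
                          p_non_o: List[float],
                          veto_threshold: float = 0.2) -> List[str]:
    """Veto NS predictions on tokens the auxiliary binary model
    confidently labels as O (function tokens).

    ns_pred        : labels from the NSD model, e.g. ["O", "NS", "NS", "O"]
    p_non_o        : auxiliary model's probability that each token is non-O
    veto_threshold : below this non-O probability, an NS prediction is reset
    """
    out = []
    for label, p in zip(ns_pred, p_non_o):
        if label == "NS" and p < veto_threshold:
            out.append("O")  # auxiliary model overrules a dubious NS token
        else:
            out.append(label)
    return out

# Toy usage: the second NS token looks like a function word to the filter.
print(combine_with_o_filter(["O", "NS", "NS", "O"], [0.1, 0.9, 0.05, 0.2]))
# -> ['O', 'NS', 'O', 'O']
```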
Next, to decouple the dependencies between slots, it is critical to learn more discriminative features for the in-domain data; contrastive learning or prototypical networks are expected to help. Besides, in the traditional slot filling task the open vocabulary slot problem has been researched for a long time, and many results have accumulated. Adapting and combining the relevant methods with the NSD task is also an important direction of our future research.

## 7 Related Work

OOV Recognition. OOV recognition aims to recognize unseen slot values in the training set for pre-defined slot types, using character embeddings Liang et al. (2017a), copy mechanisms Zhao and Feng (2018), few/zero-shot learning Hu et al. (2019); Shah et al. (2019), transfer learning Chen and Moschitti (2019); He et al. (2020c), and background knowledge Yang and Mitchell (2017); He et al. (2020d), among others. Our proposed NSD task focuses on detecting unknown slot types, not just unseen values.

OOD Intent Detection. Lee et al. (2018); Lin and Xu (2019); Xu et al. (2020a) aim to know when a query falls outside the range of predefined supported intents. Generally, these methods first learn discriminative intent representations from in-domain (IND) data, and then employ detection algorithms, such as Maximum Softmax Probability (MSP) Hendrycks and Gimpel (2017), Local Outlier Factor (LOF) Lin and Xu (2019), or Gaussian Discriminant Analysis (GDA) Xu et al. (2020b), to compute the similarity of features between OOD samples and IND samples. Compared to our proposed NSD, the main difference is that NSD detects unknown slot types at the token level, while OOD intent detection identifies sentence-level OOD intent queries.

## 8 Conclusion

In this paper, we define a new task, Novel Slot Detection (NSD), provide two public datasets, and establish a benchmark for it. Further, we analyze the problems of NSD through multi-angle experiments and distill the key challenges of the task. We provide several strong models for these problems and offer possible solutions for future work.

## Acknowledgements

This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, and the MoE-CMCC “Artifical Intelligence” Project No. MCM20190701.

## Broader Impact

Dialog systems have demonstrated remarkable performance across a wide range of applications, with the promise of a significant positive impact on how people work and live. The first step of a dialog system is to identify the user's key points. In practical industrial scenarios, users may make queries that fall outside the scope of the system-supported slot types. Previous dialog systems ignore this problem, which can lead to wrong operations and limit the system's development. In this paper, we propose to detect not only pre-defined slot types but also potentially unknown or out-of-domain slot types, using MSP- and GDA-based methods. Through exhaustive experiments and qualitative analysis, we also discuss several major challenges in Novel Slot Detection for future work. The effectiveness and robustness of the model are significantly improved by adding Novel Slot Detection, which takes a step towards the ultimate goal of enabling the safe real-world deployment of dialog systems in safety-critical domains. The experimental results are reported on standard benchmark datasets for reproducibility.
## References * Chen and Moschitti (2019) Lingzhen Chen and Alessandro Moschitti. 2019. Transfer learning for sequence labeling using source model and target data. _ArXiv_ , abs/1902.05309. * Chen et al. (2019) Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. _arXiv preprint arXiv:1902.10909_. * Coucke et al. (2018) A. Coucke, A. Saade, Adrien Ball, Théodore Bluche, A. Caulier, D. Leroy, Clément Doumouro, Thibault Gisselbrecht, F. Caltagirone, Thibaut Lavril, Maël Primet, and J. Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. _ArXiv_ , abs/1805.10190. * Devlin et al. (2019) J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _NAACL-HLT_. * Fei and Liu (2016) Geli Fei and B. Liu. 2016. Breaking the closed world assumption in text classification. In _HLT-NAACL_. * Goo et al. (2018) Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 753–757. * Guo et al. (2017) Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In _ICML_. * Haihong et al. (2019) E Haihong, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5467–5471. * He et al. (2020a) Keqing He, Shuyu Lei, Yushu Yang, Huixing Jiang, and Zhongyuan Wang. 2020a. Syntactic graph convolutional network for spoken language understanding. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 2728–2738, Barcelona, Spain (Online). International Committee on Computational Linguistics. * He et al. (2020b) Keqing He, Weiran Xu, and Yuanmeng Yan. 2020b. Multi-level cross-lingual transfer learning with language shared and specific knowledge for spoken language understanding. _IEEE Access_ , 8:29407–29416. * He et al. (2020c) Keqing He, Yuanmeng Yan, Si hong Liu, Z. Liu, and Weiran Xu. 2020c. Learning label-relational output structure for adaptive sequence labeling. _2020 International Joint Conference on Neural Networks (IJCNN)_ , pages 1–8. * He et al. (2020d) Keqing He, Yuanmeng Yan, and Weiran Xu. 2020d. Learning to tag OOV tokens by integrating contextual representation and background knowledge. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 619–624, Online. Association for Computational Linguistics. * He et al. (2020e) Keqing He, Jinchao Zhang, Yuanmeng Yan, Weiran Xu, Cheng Niu, and Jie Zhou. 2020e. Contrastive zero-shot learning for cross-domain slot filling with adversarial attack. In _COLING_. * Hemphill et al. (1990) C. T. Hemphill, J. J. Godfrey, and G. Doddington. 1990. The atis spoken language systems pilot corpus. In _HLT_. * Hendrycks and Gimpel (2017) Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. _ArXiv_ , abs/1610.02136. * Hu et al. (2019) Ziniu Hu, Ting Chen, Kai-Wei Chang, and Yizhou Sun. 2019. 
Few-shot representation learning for out-of-vocabulary words. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4102–4112, Florence, Italy. Association for Computational Linguistics. * Huang et al. (2015) Zhiheng Huang, W. Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. _ArXiv_ , abs/1508.01991. * Larson et al. (2019) Stefan Larson, Anish Mahendran, Joseph Peper, Christopher Clarke, Andrew Lee, P. Hill, Jonathan K. Kummerfeld, Kevin Leach, M. Laurenzano, L. Tang, and J. Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. _ArXiv_ , abs/1909.02027. * Lee et al. (2018) Kimin Lee, Kibok Lee, H. Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In _NeurIPS_. * Liang et al. (2017a) Dongyun Liang, Weiran Xu, and Yinge Zhao. 2017a. Combining word-level and character-level representations for relation classification of informal text. In _Proceedings of the 2nd Workshop on Representation Learning for NLP_ , pages 43–47, Vancouver, Canada. Association for Computational Linguistics. * Liang et al. (2017b) Shiyu Liang, Yixuan Li, and R. Srikant. 2017b. Principled detection of out-of-distribution examples in neural networks. _ArXiv_ , abs/1706.02690. * Liang et al. (2018) Shiyu Liang, Yixuan Li, and R. Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. _arXiv: Learning_. * Lin and Xu (2019) Ting-En Lin and H. Xu. 2019. Deep unknown intent detection with margin loss. _ArXiv_ , abs/1906.00434. * Liu and Lane (2015) Bing Liu and Ian Lane. 2015. Recurrent neural network structured output prediction for spoken language understanding. In _Proc. NIPS Workshop on Machine Learning for Spoken Language Understanding and Interactions_. * Liu and Lane (2016) Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. _arXiv preprint arXiv:1609.01454_. * Louvan and Magnini (2020) Samuel Louvan and B. Magnini. 2020. Recent neural methods on slot filling and intent classification for task-oriented dialogue systems: A survey. In _COLING_. * Mesnil et al. (2015) Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Z. Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language understanding. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 23:530–539. * Ratinov and Roth (2009) Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In _Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)_ , pages 147–155. * Ren et al. (2019) J. Ren, Peter J. Liu, E. Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and Balaji Lakshminarayanan. 2019. Likelihood ratios for out-of-distribution detection. In _NeurIPS_. * Shah et al. (2019) Darsh J. Shah, Raghav Gupta, A. Fayazi, and Dilek Z. Hakkani-Tür. 2019. Robust zero-shot cross-domain slot filling with example values. _ArXiv_ , abs/1906.06870. * Shu et al. (2017) Lei Shu, Hu Xu, and Bing Liu. 2017. Doc: Deep open classification of text documents. _ArXiv_ , abs/1709.08716. * Xu et al. (2020a) H. Xu, Keqing He, Yuanmeng Yan, Si hong Liu, Z. Liu, and Weiran Xu. 2020a. A deep generative distance-based classifier for out-of-domain detection with mahalanobis space. 
In _COLING_. * Xu et al. (2020b) Hong Xu, Keqing He, Yuanmeng Yan, Sihong Liu, Zijun Liu, and Weiran Xu. 2020b. A deep generative distance-based classifier for out-of-domain detection with mahalanobis space. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1452–1460, Barcelona, Spain (Online). International Committee on Computational Linguistics. * Yan et al. (2020) Yuanmeng Yan, Keqing He, Hong Xu, Sihong Liu, Fanyu Meng, Min Hu, and Weiran Xu. 2020. Adversarial semantic decoupling for recognizing open-vocabulary slots. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6070–6075, Online. Association for Computational Linguistics. * Yang and Mitchell (2017) B. Yang and Tom Michael Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In _ACL_. * Zeng et al. (2021a) Zhiyuan Zeng, Keqing He, Yuanmeng Yan, Hong Xu, and Weiran Xu. 2021a. Adversarial self-supervised learning for out-of-domain detection. In _NAACL_. * Zeng et al. (2021b) Zhiyuan Zeng, Hong Xu, Keqing He, Yuanmeng Yan, Sihong Liu, Zijun Liu, and Weiran Xu. 2021b. Adversarial generative distance-based classifier for robust out-of-domain detection. In _ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 7658–7662. * Zhao and Feng (2018) Lin Zhao and Zhe Feng. 2018. Improving slot filling in spoken language understanding with joint pointer and attention. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 426–431, Melbourne, Australia. Association for Computational Linguistics. * Zheng et al. (2020) Yinhe Zheng, Guanyi Chen, and Minlie Huang. 2020. Out-of-domain detection for natural language understanding in dialog systems. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 28:1198–1209.
# Electromagnetic transition form factors and Dalitz decays of hyperons

Nora Salone (1,2) and Stefan Leupold (1)

(1) Institutionen för fysik och astronomi, Uppsala universitet, Box 516, S-75120 Uppsala, Sweden
(2) National Centre for Nuclear Research, 02-093 Warsaw, Poland

(August 29, 2024)

###### Abstract

Dalitz decays of a hyperon resonance to a ground-state hyperon and an electron-positron pair can give access to some information about the composite structure of hyperons. We present expressions for the multi-differential decay rates in terms of general transition form factors for spin-parity combinations $J^{P}=\frac{1}{2}^{\pm},\frac{3}{2}^{\pm}$ of the hyperon resonance. Even if the spin of the initial hyperon resonance is not measured, the self-analyzing weak decay of the “final” ground-state hyperon contains information about the relative phase between combinations of transition form factors. This relative phase is non-vanishing because of the unstable nature of the hyperon resonance. If all form factor combinations in the differential decay formulae are replaced by their respective values at the photon point, one obtains a QED type approximation, which might be interpreted as characterizing hypothetical hyperons with point-like structure. We compare the QED type approximation to a more realistic form factor scenario for the lowest-lying singly-strange hyperon resonances. In this way we explore which accuracy in the measurements of the differential Dalitz decay rates is required in order to distinguish the composite-structure case from the point-like case. Based on the QED type approximation we obtain as a by-product a rough prediction for the ratio between the Dalitz decay width and the corresponding photon decay width.

## 1 Motivation

Electromagnetic form factors have become an important tool to study the structure of strongly interacting objects, see e.g. Granados:2017cib ; Alarcon:2017asr ; Leupold:2017ngs ; Junker:2019vvy ; Husek:2019wmt ; Landsberg:1986fd ; Czerwinski:2012ry ; Miller:2007uy ; Pacetti:2015iqa ; Punjabi:2015bba ; Devenish:1975jd ; Korner:1976hv ; Carlson:1985mm ; Pascalutsa:2006up ; Aznauryan:2011qj ; Tiator:2011pw ; Eichmann:2018ytt ; Kaxiras:1985zv ; Kubis:2000aa ; Sanchis-Alepuz:2017mir ; Ablikim:2019vaj ; Ramalho:2019koj ; Ramalho:2020tnn and references therein. In the near future, photon and Dalitz decays of hyperons will be measured at GSI/FAIR by HADES+PANDA Ramstein:2019kaz , $Y^{*}\to Y\gamma$ and $Y^{*}\to Y\,e^{+}e^{-}$, respectively. Here $Y^{*}$ denotes a singly-strange hyperon resonance and $Y$ a ground-state hyperon ($\Lambda$ or $\Sigma$). In the present work we want to explore what it takes to extract more information from the Dalitz decays than from the photon decays. In other words: how accurately does one need to measure the Dalitz decay distribution to determine an energy dependence of the form factors? In turn, the shape of a form factor is related to information about the intrinsic structure Alarcon:2017asr ; Miller:2007uy ; Tiator:2011pw . To this end, we introduce the most general transition form factors for spin-parity combinations $J^{P}=\frac{1}{2}^{\pm},\frac{3}{2}^{\pm}$ of $Y^{*}$. This is similar in spirit to the developments of Korner:1976hv ; Perotti:2018wxm for $e^{+}e^{-}\to Y^{*}\bar{Y}$, but for a different kinematical regime. For general considerations about transition form factors see also Devenish:1975jd ; Carlson:1985mm .
In practice, we focus on the low-lying hyperon resonances $\Lambda(1405)$ with $J^{P}=\frac{1}{2}^{-}$ and $\Lambda(1520)$ with $J^{P}=\frac{3}{2}^{-}$ pdg . For completeness we cover also the cases of $J^{P}=\frac{3}{2}^{+}$ and $J^{P}=\frac{1}{2}^{+}$. Examples for the latter are the states $\Sigma(1385)$ and $\Sigma^{0}$, respectively. Transitions of those states have been studied by the group of one of the authors in Granados:2017cib ; Junker:2019vvy ; Husek:2019wmt ; Nair:2018mwa ; Holmberg:2018dtv . The $\Lambda(1405)$ is a very interesting state; see e.g. Mai:2020ltx ; Dalitz:1967fp ; Siegel:1988rq ; Jido:2003cb ; GarciaRecio:2003ks ; Geng:2007hz ; Sekihara:2008qk ; Hall:2014uca and references therein. It is the lowest-lying baryon state with negative parity pdg . Naively, one would think that the lowest-lying baryon with negative parity should contain only up and down quarks, since those are significantly lighter than the strange quark. Yet the $\Lambda(1405)$, as the lightest baryon with negative parity, has a strangeness of $-1$, i.e. it must contain (at least) one strange quark. This has triggered many discussions about the nature of the $\Lambda(1405)$ as a state that might not fit into the quark model, which describes baryons as three-quark states. As an alternative, a bound state of nucleon and antikaon (a “hadronic molecule”) has been proposed. It has also been suggested that there might actually be two coupled-channel hadronic-molecule states Jido:2003cb , one coupling more strongly to the nucleon-antikaon structure and one more strongly to $\Sigma$-pion, which constitutes the main decay mode of the $\Lambda(1405)$. From a more quantitative point of view, the proper question is how much overlap a physical $\Lambda(1405)$ state has with, e.g., a three-quark or a proton-antikaon field configuration. In any case, it is conceivable that different pictures of the nature of the $\Lambda(1405)$ lead to somewhat different size predictions. More generally, different ideas about the intrinsic structure can lead to different predictions for the differential Dalitz decay rates.

The second concrete example for which we will provide quantitative results is the $\Lambda(1520)$. It is the next strange baryon in mass after the $\Sigma$ and $\Sigma(1385)$, covered previously in Granados:2017cib ; Junker:2019vvy ; Husek:2019wmt ; Nair:2018mwa ; Holmberg:2018dtv , and the $\Lambda(1405)$, covered in the present work. Above the $\Lambda(1520)$ our low-energy techniques might fail to work. To provide a self-contained work, it makes sense to cover the $\Lambda(1520)$ in the present paper as well. Though the $\Lambda(1520)$ is often regarded as a typical quark-model state, see e.g. the mini-review in pdg , there are also ideas that suggest the $\Lambda(1520)$ as essentially a hadronic-molecule partner of the $\Lambda(1405)$, obtained by interchanging ground-state baryons from the nucleon octet with spin-3/2 states from the $\Delta$ decuplet Kolomeitsev:2003kt . As for the $\Lambda(1405)$, it can be expected that different pictures of the structure of the $\Lambda(1520)$ Roca:2006pu ; Roca:2006sz lead to different predictions for the differential Dalitz decay rates.

The purpose of the present paper is not to develop particular models for the structure of the $\Lambda(1405)$ or the $\Lambda(1520)$. Yet we take the large interest in these states as a motivation to perform a model-independent analysis of the capability of Dalitz decays to access their respective intrinsic structure. The paper is structured in the following way.
In the next section we present the framework to introduce electromagnetic transition form factors in the most general way for initial states with spin 3/2 or 1/2 and final states with spin 1/2. We will calculate the decay rates for radiative (photon) decays and Dalitz decays and also include the possibility that the “final” hyperon emerging from the Dalitz decay performs a further weak decay into nucleon and pion. In sect. 3 we introduce our parametrization for the transition form factors, which features transition radii. We also specify a “structureless” case that we can contrast with the case of an extended structure. In sect. 4 we apply our framework concretely to the two initial states $\Lambda(1405)$ and $\Lambda(1520)$ (and the final state of a ground-state $\Lambda$). Further discussion and a summary are provided in sect. 5. Appendices are added for technical purposes, but also to explain some conceptual issues that do not fit into the main body of the text.

## 2 Constraint-free form factors, helicity amplitudes and differential decay widths

From a formal point of view we study electromagnetic transitions from a baryon resonance with spin 1/2 or 3/2 to a baryon with spin 1/2. For this transition, we disregard parity-violating processes, i.e. we focus on transitions mediated by the strong and electromagnetic interactions. Prominent examples that can be described by this framework are the decays $\Lambda(1520)\to\Lambda\gamma^{(*)},\Sigma^{0}\gamma^{(*)}$; $\Lambda(1405)\to\Lambda\gamma^{(*)},\Sigma^{0}\gamma^{(*)}$; $\Sigma(1385)\to\Lambda\gamma^{(*)},\Sigma\gamma^{(*)}$; and $\Sigma^{0}\to\Lambda\gamma^{(*)}$. Electroweak processes like $\Xi^{0}\to\Lambda\gamma^{(*)}$ would need a (straightforward) extension of the formalism and are not covered in the present work. We will present formulae for all parity combinations. When it comes to concrete applications we will focus on two processes, namely $\Lambda(1520)\to\Lambda\gamma^{(*)}$ and $\Lambda(1405)\to\Lambda\gamma^{(*)}$. Generically we study the process $Y^{*}\to Y\gamma^{(*)}$, where the star on the hyperon $Y$ denotes an excited hyperon, i.e. a resonance, while $\gamma^{*}$ refers to a virtual photon.

### 2.1 Transition $\frac{3}{2}^{\mp}\to\frac{1}{2}^{\pm}$

If the initial baryon $Y^{*}$ has spin 3/2 and opposite parity to the final baryon $Y$, the most general decomposition of the transition respecting Lorentz invariance, current conservation and parity symmetry can be written as (cf. also Korner:1976hv )

$\langle p_{Y},\lambda_{Y}|j^{\mu}(0)|p_{Y^{*}},\lambda_{Y^{*}}\rangle=e\bar{u}(p_{Y},\lambda_{Y})\Gamma^{\mu\nu}_{-}u_{\nu}(p_{Y^{*}},\lambda_{Y^{*}})$ (1)

with

$\begin{split}\Gamma^{\mu\nu}_{-}=&-iH_{1}(q^{2})\,m_{Y^{*}}\,\left(\gamma^{\mu}q^{\nu}-\not{q}\,g^{\mu\nu}\right)\\ &+iH_{2}(q^{2})\left(q^{\nu}p_{Y^{*}}^{\mu}-(q\cdot p_{Y^{*}})\,g^{\mu\nu}\right)\\ &+iH_{3}(q^{2})\left(q^{\mu}q^{\nu}-q^{2}g^{\mu\nu}\right)\end{split}$ (2)

and $q:=p_{Y^{*}}-p_{Y}$. Here $j_{\mu}$ denotes the electromagnetic current and $e$ the charge of the proton. The helicity of the initial (final) baryon is denoted by $\lambda_{Y^{*}}$ ($\lambda_{Y}$). Our conventions for the spin-3/2 vector-spinor $u_{\nu}$ have been spelled out in Junker:2019vvy . Note that no $\gamma_{5}$ appears here since either none or both of the involved baryons have natural parity Devenish:1975jd (and the electromagnetic current is a vector current and has natural parity).
The three quantities $H_{i}$, $i=1,2,3$, constitute constraint-free transition form factors in the sense of a Bardeen-Tung-Tarrach (BTT) construction Bardeen:1969aw ; Tarrach:1975tu . We have introduced the three transition form factors $H_{i}$ such that they all have the same dimensionality (two inverse mass dimensions). In general, these transition form factors are complex quantities. Thus the appearance of the explicit $i$'s in the defining eq. (2) is a pure convention. However, there is some meaning to this choice. Suppose one calculates contributions to the transition form factors from an effective Lagrangian that satisfies charge conjugation symmetry. Then a tree-level calculation will yield purely real results for the $H_{i}$. In other words, conventions have been chosen such that only loops create imaginary parts for the $H_{i}$. We substantiate this further in A.

Next, we introduce dimensionless helicity amplitudes:

$\begin{split}H_{-}(q^{2})&:=-\left(m_{Y^{*}}-m_{Y}\right)m_{Y^{*}}H_{1}(q^{2})+\frac{1}{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)H_{2}(q^{2})+q^{2}H_{3}(q^{2})\,,\\ H_{0}(q^{2})&:=-(m_{Y^{*}}-m_{Y})\,m_{Y^{*}}H_{1}(q^{2})+(m_{Y^{*}}-m_{Y})\,m_{Y^{*}}H_{2}(q^{2})+\frac{m_{Y^{*}}-m_{Y}}{2m_{Y^{*}}}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)H_{3}(q^{2})\,,\\ H_{+}(q^{2})&:=-\left(m_{Y^{*}}m_{Y}-m_{Y}^{2}+q^{2}\right)H_{1}(q^{2})+\frac{1}{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)H_{2}(q^{2})+q^{2}H_{3}(q^{2})\,.\end{split}$ (3)

In a frame where the baryon momenta are aligned, the helicity-flip amplitude $H_{-}$ is related to the combinations $(\lambda_{Y^{*}},\lambda_{Y})=(3/2,1/2),(-3/2,-1/2)$. The other helicity-flip amplitude $H_{+}$ is related to the combinations $(\lambda_{Y^{*}},\lambda_{Y})=(-1/2,1/2),(1/2,-1/2)$. Finally, the non-flip amplitude $H_{0}$ relates to $\lambda_{Y^{*}}=\lambda_{Y}=\pm 1/2$. Note that the spin of the real or virtual photon along the axis defined by the flight direction of the hyperons is given by $\lambda_{\gamma}=\lambda_{Y^{*}}-\lambda_{Y}$. (We avoid the phrase “helicity” here since the virtual photon might be at rest.)

The advantage of the helicity amplitudes over the three transition form factors $H_{i}$, $i=1,2,3$, is that there are no interference terms when calculating the decay widths for the reactions $Y^{*}\to Y\gamma$ and $Y^{*}\to Y\,e^{+}e^{-}$. We will see this explicitly below. A second advantage is that one can use quark counting rules to determine the high-energy behavior Carlson:1985mm of the helicity amplitudes for large values of space-like $q^{2}<0$. The main topic of this work is Dalitz decays. Here the photon virtuality $q^{2}$ is time-like and has an upper limit given by $(m_{Y^{*}}-m_{Y})^{2}$. Therefore high-energy constraints are not so relevant for the physics discussed here. Nonetheless, for completeness, we collect the high-energy behavior of all transition form factors and helicity amplitudes in B.
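For orientation, here is a small numerical sketch (Python; the form-factor values are illustrative placeholders of our own choosing, not fit results) of the mapping (3) from the constraint-free form factors $H_{1,2,3}$ to the helicity amplitudes. As a by-product it can be used to check numerically the kinematical constraint discussed in the next paragraph: all three amplitudes coincide at $q^{2}=(m_{Y^{*}}-m_{Y})^{2}$.

```python
def helicity_amplitudes(q2, mYs, mY, H1, H2, H3):
    """Map the constraint-free form factors (H1, H2, H3) at photon
    virtuality q2 to the dimensionless helicity amplitudes (H-, H0, H+)
    of eq. (3). Masses in GeV, H_i in GeV^-2; the H_i may be complex."""
    Hm = -(mYs - mY) * mYs * H1 + 0.5 * (mYs**2 - mY**2 + q2) * H2 + q2 * H3
    H0 = (-(mYs - mY) * mYs * H1 + (mYs - mY) * mYs * H2
          + (mYs - mY) / (2 * mYs) * (mYs**2 - mY**2 + q2) * H3)
    Hp = (-(mYs * mY - mY**2 + q2) * H1
          + 0.5 * (mYs**2 - mY**2 + q2) * H2 + q2 * H3)
    return Hm, H0, Hp

# Illustrative check at the Dalitz endpoint q2 = (mYs - mY)^2:
mYs, mY = 1.405, 1.116          # Lambda(1405) and Lambda masses in GeV
H1, H2, H3 = 0.7, 0.4, 0.2      # placeholder values in GeV^-2
q2_max = (mYs - mY) ** 2
print(helicity_amplitudes(q2_max, mYs, mY, H1, H2, H3))
# -> all three amplitudes coincide at the Dalitz-decay endpoint
```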
While the transition form factors $H_{1}$, $H_{2}$ and $H_{3}$ are free from kinematical constraints, the helicity amplitudes satisfy

$\begin{split}H_{+}((m_{Y^{*}}-m_{Y})^{2})&=H_{0}((m_{Y^{*}}-m_{Y})^{2})\\ &=H_{-}((m_{Y^{*}}-m_{Y})^{2})\end{split}$ (4)

and

$\begin{split}&\frac{2(m_{Y^{*}}+m_{Y})}{m_{Y^{*}}-m_{Y}}H_{0}((m_{Y^{*}}+m_{Y})^{2})\\ &=H_{+}((m_{Y^{*}}+m_{Y})^{2})+H_{-}((m_{Y^{*}}+m_{Y})^{2})\,.\end{split}$ (5)

From a technical point of view, these relations are easy to deduce from the definitions (3) of the helicity amplitudes. Later, the constraint (4) will be very important for our model-independent low-energy parametrization of the transitions. Therefore we feel obliged to offer also a physical instead of a purely technical motivation for the kinematical constraints. This physical explanation is provided in C.

The width for the two-body radiative decay $Y^{*}\to Y\gamma$ is given by

$\Gamma_{2}=e^{2}\frac{\left(m_{Y^{*}}+m_{Y}\right)^{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}\right)}{96\pi m_{Y^{*}}^{3}}\left[3|H_{-}(0)|^{2}+|H_{+}(0)|^{2}\right]\,.$ (6)

As already announced, there are no interference terms between the helicity amplitudes. The absence of $H_{0}$ signals nothing but the non-existence of a longitudinally polarized real photon. To describe in the one-photon approximation the Dalitz decay $Y^{*}\to Y\gamma^{*}\to Y\,e^{+}e^{-}$ we choose a frame where the virtual photon (and therefore the electron-positron pair) is at rest. In this frame, $\theta$ denotes the angle between the hyperon $Y$ and the electron. The double-differential three-body (Dalitz) decay width is given by

$\begin{split}&\frac{\text{d}\Gamma_{3}}{\text{d}q^{2}\,\text{d}(\cos{\theta})}=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{3}\,192m_{Y^{*}}^{3}}\frac{(m_{Y^{*}}+m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)\big[3|H_{-}(q^{2})|^{2}+|H_{+}(q^{2})|^{2}\big]\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{4q^{2}}{(m_{Y^{*}}-m_{Y})^{2}}|H_{0}(q^{2})|^{2}\bigg\}\,.\end{split}$ (7)

Here we have used the velocity of the electron in the rest frame of the electron-positron pair, given by $\beta_{e}=\sqrt{1-4m_{e}^{2}/q^{2}}$. The momentum of $Y^{*}$ and $Y$ in the rest frame of the virtual photon is given by

$p_{z}:=\frac{\lambda^{1/2}(m_{Y^{*}}^{2},m_{Y}^{2},q^{2})}{2\sqrt{q^{2}}}$ (8)

with the Källén function

$\lambda(a,b,c):=a^{2}+b^{2}+c^{2}-2(ab+bc+ac)\,.$ (9)

To obtain the integrated Dalitz decay width, we note that the lower (upper) integration limit of the $\cos\theta$ integration is $-1$ ($+1$), i.e. the $\theta$ integration would go from $\pi$ to 0 and not the other way. This convention makes the right-hand side of (7) positive. It is nice to see how all physical constraints are visible in (7). For $q^{2}=(2m_{e})^{2}$ the leptons are produced at rest. Thus one cannot define an angle between electron and hyperon momentum, and the right-hand side of (7) should show no angular dependence. This is indeed the case. At the other end of the phase space, i.e. for $q^{2}=(m_{Y^{*}}-m_{Y})^{2}$, the hyperons are at rest. Again one cannot define an angle between electron and hyperon momentum. Using the kinematical constraint (4), one can see that also here the angular dependence disappears.
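To illustrate how (7) can be evaluated and integrated in practice, here is a minimal numerical sketch (Python with numpy/scipy; constant placeholder helicity amplitudes are assumed, which is of course not realistic but suffices to exercise the formula):

```python
import numpy as np
from scipy import integrate

ALPHA = 1 / 137.036
E2 = 4 * np.pi * ALPHA            # e^2 in Heaviside-Lorentz units
ME = 0.000511                     # electron mass in GeV

def kallen(a, b, c):
    return a**2 + b**2 + c**2 - 2 * (a * b + b * c + a * c)

def d2gamma(q2, ctheta, mYs, mY, Hm, H0, Hp):
    """Double-differential Dalitz rate (7) for the 3/2- -> 1/2+ case,
    per unit q2 and cos(theta)."""
    beta_e = np.sqrt(1 - 4 * ME**2 / q2)
    pz = np.sqrt(kallen(mYs**2, mY**2, q2)) / (2 * np.sqrt(q2))
    pref = (E2**2 * pz * np.sqrt(q2) * beta_e / ((2 * np.pi)**3 * 192 * mYs**3)
            * ((mYs + mY)**2 - q2) / q2)
    s2, c2 = 1 - ctheta**2, ctheta**2
    flip = (1 + c2 + 4 * ME**2 / q2 * s2) * (3 * abs(Hm)**2 + abs(Hp)**2)
    longit = (s2 + 4 * ME**2 / q2 * c2) * 4 * q2 / (mYs - mY)**2 * abs(H0)**2
    return pref * (flip + longit)

# Integrate over the Dalitz region for Lambda(1405) -> Lambda e+ e-
mYs, mY = 1.405, 1.116
Hm = H0 = Hp = -0.105             # constant placeholders (see sketch above)
width, _ = integrate.dblquad(
    lambda c, q2: d2gamma(q2, c, mYs, mY, Hm, H0, Hp),
    (2 * ME)**2, (mYs - mY)**2,   # q2 range
    lambda q2: -1.0, lambda q2: 1.0)  # cos(theta) range
print(f"Dalitz width ~ {width:.3e} GeV")
```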
Even though the electromagnetic transition that we consider respects parity symmetry, we can allow for a further decay of the “final” hyperon. Focusing now on $Y=\Lambda$ (or $\Sigma^{\pm}$), this last decay is mediated by the weak interaction and does violate parity symmetry. As a consequence, this decay populates different partial waves and gives rise to an interference pattern. In total, we study now the decay sequence $Y^{*}\to Y\,e^{+}e^{-}$ and $Y\to\pi N$. Thus we have a four-body final state. In principle, this gives rise to 5 independent kinematical variables. However, the intermediate $Y$ state is so long-lived pdg that its mass is fixed (and in the experimental analyses one triggers on a displaced vertex Thome:2012bdy ; IkegamiAndersson:2020rau ). In addition, one can show that the squared matrix element of the four-body decay is independent of specific combinations of four-momenta. This feature is discussed in D. As a consequence, the specific four-body decay depends on three independent variables, one variable more than the already considered Dalitz decay. Besides the invariant mass $\sqrt{q^{2}}$ of the dilepton pair (the virtuality of the photon $\gamma^{*}$) and the angle $\theta$ between the electron and the hyperons in the rest frame of $\gamma^{*}$, one could use a second relative angle. It is convenient to define this angle in the frame where the decaying $Y$ hyperon is at rest. In this frame the electron-positron pair defines a plane. One could use the angle between the plane's normal and the direction of the nucleon. Since it is a relative angle, its definition does not depend on the choice of a coordinate system, only on the choice of a proper frame of reference. Yet, to connect to the formalism developed in Perotti:2018wxm ; Faldt:2013gka ; Faldt:2016qee ; Faldt:2017kgy ; Faldt:2017yqt ; Faldt:2019zdl , we introduce a fixed coordinate system and find for the four-body decay $Y^{*}\to Y\gamma^{*}\to\pi N\,e^{+}e^{-}$ the differential decay width

$\begin{split}&\frac{\text{d}\Gamma_{4}}{\text{d}q^{2}\,\text{d}(\cos{\theta})\,\text{d}\Omega_{N}}\\ &=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{4}\,384m_{Y^{*}}^{3}}\,\text{Br}_{Y\to\pi N}\,\frac{(m_{Y^{*}}+m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)\big[|H_{+}(q^{2})|^{2}+3|H_{-}(q^{2})|^{2}\big]\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{4q^{2}}{\left(m_{Y^{*}}-m_{Y}\right)^{2}}|H_{0}(q^{2})|^{2}\\ &\qquad-\frac{4\sqrt{q^{2}}\beta_{e}^{2}}{m_{Y^{*}}-m_{Y}}\,\alpha_{Y}\,\text{Im}[H_{0}(q^{2})H_{+}^{*}(q^{2})]\sin\theta\cos\theta\sin\theta_{N}\sin\phi_{N}\bigg\}\,.\end{split}$ (10)

We recall that $q^{2}$ is the square of the dilepton mass or photon virtuality and $\theta$ denotes the angle between one of the hyperons and the electron in the rest frame of the dilepton (rest frame of the virtual photon). In addition, $\theta_{N}$ and $\phi_{N}$ are the angles of the nucleon three-momentum measured in the rest frame of $Y$. The coordinate system in this frame is defined by $\vec{q}$ pointing in the negative $z$-direction (i.e. in the rest frame of the virtual photon the $Y$ direction defines the positive $z$-axis) and the electron moving in the $x$-$z$ plane with positive momentum projection on the $x$-axis.
In this frame, $\theta_{N}$ is the angle of the nucleon momentum relative to the $z$-axis and $\phi_{N}$ is the angle between the $x$-axis and the projection of the nucleon momentum on the $x$-$y$ plane, i.e.

$\begin{split}\textbf{p}_{N}&=p_{\rm f}\,(\sin\theta_{N}\cos\phi_{N},\,\sin\theta_{N}\sin\phi_{N},\,\cos\theta_{N})\,,\\ \textbf{q}&=|\textbf{q}|\,(0,0,-1)\,,\\ \textbf{p}_{e^{-}}\cdot\textbf{e}_{y}&=0\,,\quad\textbf{p}_{e^{-}}\cdot\textbf{e}_{x}>0\,,\quad\textbf{e}_{y}=\frac{\textbf{p}_{e^{-}}\times\textbf{q}}{|\textbf{p}_{e^{-}}\times\textbf{q}\,|}\,,\end{split}$ (11)

with the momentum $p_{\rm f}$ of the nucleon in the rest frame of the decaying $Y$ hyperon. We provide the momentum and also the corresponding energy:

$p_{\rm f}=\frac{\lambda^{1/2}(m_{Y}^{2},m_{N}^{2},m_{\pi}^{2})}{2m_{Y}}$ (12)

and

$E_{N}=\frac{m_{Y}^{2}+m_{N}^{2}-m_{\pi}^{2}}{2m_{Y}}\,,$ (13)

with the Källén function defined in (9). Note the subtlety that $\theta$ is measured in the rest frame of the virtual photon while $\Omega_{N}$ denotes angles in the rest frame of the $Y$ hyperon. In terms of Lorentz-invariant quantities the angles are given by

$\begin{split}p_{Y}\cdot k_{e}&=-\frac{1}{2}\,\lambda^{1/2}(m_{Y^{*}}^{2},m_{Y}^{2},q^{2})\,\beta_{e}\,\cos\theta\,,\\ \epsilon_{\mu\nu\alpha\beta}\,k_{e}^{\mu}\,p_{Y}^{\nu}\,p_{N}^{\alpha}\,q^{\beta}&=-\frac{1}{2}\,\sqrt{q^{2}}\,\lambda^{1/2}(m_{Y^{*}}^{2},m_{Y}^{2},q^{2})\,p_{\rm f}\,\beta_{e}\sin\theta\sin\theta_{N}\sin\phi_{N}\end{split}$ (14)

with $k_{e}:=p_{e^{-}}-p_{e^{+}}$, $q=p_{e^{-}}+p_{e^{+}}=p_{Y^{*}}-p_{Y}$ and the convention pesschr for the Levi-Civita symbol: $\epsilon_{0123}=-1$. Other aspects of the four-body decay are discussed in D.

The final weak decay of a spin-1/2 hyperon to a nucleon and a pion is driven by the matrix element pdg

${\cal M}_{\rm weak}=G_{F}\,m_{\pi}^{2}\,\bar{u}_{N}(p_{N})\left(A-B\gamma_{5}\right)u_{Y}(p_{Y})\,.$ (15)

It is useful to introduce the asymmetry parameter

$\alpha_{Y}:=\frac{2{\rm Re}(T^{*}_{s}T_{p})}{|T_{s}|^{2}+|T_{p}|^{2}}$ (16)

with the s-wave amplitude $T_{s}:=A$, the p-wave amplitude $T_{p}:=p_{\rm f}\,B/(E_{N}+m_{N})$, and the mass $m_{N}$, energy $E_{N}$ and momentum $p_{\rm f}$ of the nucleon in the rest frame of the decaying hyperon $Y$; see also appendix A of Faldt:2013gka for further details that are useful for practical calculations.

For stable particles, e.g. nucleons, the electromagnetic form factors are complex for positive transferred momentum $q^{2}\geq 4m_{N}^{2}$, i.e. for the reaction $e^{+}e^{-}\to N\bar{N}$ in the time-like region of $q^{2}$. On the other hand, the form factors are real in the space-like region $q^{2}<0$, i.e. for the scattering process $e^{-}N\to e^{-}N$. However, for resonances such as $Y^{*}$, the transition form factors (TFFs) are complex for all values of $q^{2}$ Junker:2019vvy . Therefore the interference terms in (10) can in principle be measured. They contain information that is complementary to the moduli that are accessible via the Dalitz decay parametrized by (7). A calculation of such interference terms is beyond the scope of this work. Yet, the results of Junker:2019vvy suggest that such interference terms are relatively small. Note that the spin-parity combination considered in Junker:2019vvy refers strictly speaking to subsection 2.3 below, but semi-quantitatively we expect a similar pattern.
High experimental accuracy will be required to access these interference terms in the Dalitz decay region. Yet, it might be worth extracting this additional structure information. It would also be interesting to see how different models for the structure of hyperon resonances differ in their predictions for such interference terms. In this context we stress once more that in practice these interference terms are driven by the fact that resonances are unstable with respect to the strong interaction. Quasi-stable states (which decay only because of the electromagnetic or weak interaction) would have tiny imaginary parts of their form factors. For the same reason, models that treat resonances as stable are not capable of providing predictions for such interference terms. For measurements of the latter in the production region of hyperon-antihyperon pairs see Ablikim:2019vaj . Note also that one needs an additional (weak) decay (more generally a decay that populates more than one partial wave) to make the interference term visible. If the asymmetry parameter $\alpha_{Y}$ vanished, one would not see an interference effect in (10).

### 2.2 Transition $\frac{1}{2}^{\mp}\to\frac{1}{2}^{\pm}$

The structure of this subsection closely follows the previous one. If the initial baryon $Y^{*}$ has spin 1/2 and opposite parity to the final baryon $Y$, the most general decomposition of the transition respecting Lorentz invariance, current conservation and parity symmetry can be written as (cf. also Korner:1976hv )

$\langle p_{Y},\lambda_{Y}|j^{\mu}(0)|p_{Y^{*}},\lambda_{Y^{*}}\rangle=e\bar{u}(p_{Y},\lambda_{Y})\,\Gamma^{\mu}_{-}\,u(p_{Y^{*}},\lambda_{Y^{*}})$ (17)

with

$\Gamma^{\mu}_{-}=\tilde{F}_{2}(q^{2})\,m_{Y^{*}}\,\sigma^{\mu\beta}q_{\beta}\gamma_{5}+i\tilde{F}_{3}(q^{2})\left(q^{2}\gamma^{\mu}-\not{q}q^{\mu}\right)\gamma_{5}\,.$ (18)

Note the appearance of a $\gamma_{5}$ since one of the baryons has unnatural parity. The two quantities $\tilde{F}_{2}$ and $\tilde{F}_{3}$ constitute constraint-free transition form factors in the sense of a BTT construction Bardeen:1969aw ; Tarrach:1975tu . The labeling is motivated in A. We introduce dimensionless helicity amplitudes:

$\begin{split}\tilde{F}_{0}(q^{2})&:=(m_{Y^{*}}-m_{Y})^{2}\tilde{F}_{3}(q^{2})-(m_{Y^{*}}-m_{Y})\,m_{Y^{*}}\,\tilde{F}_{2}(q^{2})\,,\\ \tilde{F}_{+}(q^{2})&:=q^{2}\tilde{F}_{3}(q^{2})-(m_{Y^{*}}-m_{Y})\,m_{Y^{*}}\,\tilde{F}_{2}(q^{2})\,.\end{split}$ (19)

In a frame where the baryon momenta are aligned, the helicity-flip amplitude $\tilde{F}_{+}$ is related to the combinations $(\lambda_{Y^{*}},\lambda_{Y})=(-1/2,1/2),(1/2,-1/2)$. The non-flip amplitude $\tilde{F}_{0}$ relates to $\lambda_{Y^{*}}=\lambda_{Y}=\pm 1/2$. While the transition form factors $\tilde{F}_{2}$ and $\tilde{F}_{3}$ are free from kinematical constraints, the helicity amplitudes satisfy

$\tilde{F}_{+}((m_{Y^{*}}-m_{Y})^{2})=\tilde{F}_{0}((m_{Y^{*}}-m_{Y})^{2})\,.$ (20)

We will use this later for a model-independent low-energy parametrization of the transitions. The constraint (20) can be easily deduced from the definitions (19). Physically it follows from the fact that one partial wave is dominant over the other at the end of the phase space of the Dalitz decay $Y^{*}\to Y\,e^{+}e^{-}$; see the related discussion in C.
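The constraint (20) can also be verified mechanically from the definitions (19); a small symbolic check (Python with sympy; since $\tilde{F}_{2}$ and $\tilde{F}_{3}$ enter only through their values at the endpoint, plain symbols suffice):

```python
import sympy as sp

mYs, mY, q2, F2t, F3t = sp.symbols('m_Ys m_Y q2 F2t F3t')

# Helicity amplitudes (19) for the 1/2- -> 1/2+ transition
F0 = (mYs - mY)**2 * F3t - (mYs - mY) * mYs * F2t
Fp = q2 * F3t - (mYs - mY) * mYs * F2t

# At the Dalitz endpoint q2 = (m_Ys - m_Y)^2 the two amplitudes coincide
diff = sp.simplify((Fp - F0).subs(q2, (mYs - mY)**2))
print(diff)  # -> 0, i.e. the kinematical constraint (20) holds identically
```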
The respective decay widths for the two-body radiative decay $Y^{*}\to Y\gamma$, the three-body Dalitz decay $Y^{*}\to Y\gamma^{*}\to Y\,e^{+}e^{-}$, and the four-body decay $Y^{*}\to Y\gamma^{*}\to\pi N\,e^{+}e^{-}$ are given by

$\Gamma_{2}=\frac{e^{2}|\tilde{F}_{+}(0)|^{2}\left(m_{Y^{*}}+m_{Y}\right)^{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}\right)}{8\pi m_{Y^{*}}^{3}}\,,$ (21)

$\begin{split}&\frac{\text{d}\Gamma_{3}}{\text{d}q^{2}\,\text{d}(\cos{\theta})}=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{3}\,16m_{Y^{*}}^{3}}\frac{(m_{Y^{*}}+m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}{\theta}+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}{\theta}\bigg)\,|\tilde{F}_{+}(q^{2})|^{2}\\ &\qquad+\bigg(\sin^{2}{\theta}+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}{\theta}\bigg)\frac{q^{2}}{(m_{Y^{*}}-m_{Y})^{2}}|\tilde{F}_{0}(q^{2})|^{2}\bigg\}\,,\end{split}$ (22)

and

$\begin{split}&\frac{\text{d}\Gamma_{4}}{\text{d}q^{2}\,\text{d}(\cos{\theta})\,\text{d}\Omega_{N}}\\ &=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{4}\,32m_{Y^{*}}^{3}}\,\text{Br}_{Y\to\pi N}\,\frac{(m_{Y^{*}}+m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)|\tilde{F}_{+}(q^{2})|^{2}\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{q^{2}}{(m_{Y^{*}}-m_{Y})^{2}}|\tilde{F}_{0}(q^{2})|^{2}\\ &\qquad+\frac{2\sqrt{q^{2}}\beta_{e}^{2}}{m_{Y^{*}}-m_{Y}}\,\alpha_{Y}\,\text{Im}[\tilde{F}_{0}(q^{2})\tilde{F}_{+}^{*}(q^{2})]\sin\theta\cos\theta\sin\theta_{N}\sin\phi_{N}\bigg\}\,.\end{split}$ (23)

Again, it is nice to see how all angular dependence disappears for the cases where specific three-vectors vanish and therefore do not allow to define relative angles. For $q^{2}=(2m_{e})^{2}$ one can see directly how all angular dependence vanishes in the curly brackets of (22) and (23). For $q^{2}=(m_{Y^{*}}-m_{Y})^{2}$ this is achieved by the kinematical constraint (20).

### 2.3 Transition $\frac{3}{2}^{\pm}\to\frac{1}{2}^{\pm}$

The previous two subsections were devoted to the cases that we will study in more detail later. For completeness, we also add two more combinations in the present and the succeeding subsection. If the initial baryon $Y^{*}$ has spin 3/2 and the same parity as the final baryon $Y$, the corresponding decomposition is given by

$\langle p_{Y},\lambda_{Y}|j^{\mu}(0)|p_{Y^{*}},\lambda_{Y^{*}}\rangle=e\bar{u}(p_{Y},\lambda_{Y})\Gamma^{\mu\nu}_{+}u_{\nu}(p_{Y^{*}},\lambda_{Y^{*}})$ (24)

with

$\begin{split}\Gamma^{\mu\nu}_{+}={}&\tilde{H}_{1}(q^{2})\,m_{Y^{*}}\left(\gamma^{\mu}q^{\nu}-\not{q}g^{\mu\nu}\right)\gamma_{5}\\ &+\tilde{H}_{2}(q^{2})\left(q^{\nu}p_{Y^{*}}^{\mu}-(q\cdot p_{Y^{*}})g^{\mu\nu}\right)\gamma_{5}\\ &+\tilde{H}_{3}(q^{2})\left(q^{\mu}q^{\nu}-q^{2}g^{\mu\nu}\right)\gamma_{5}\,.\end{split}$ (25)

We see the appearance of a $\gamma_{5}$ since one of the baryons has unnatural parity. The three quantities $\tilde{H}_{i}$, $i=1,2,3$, constitute constraint-free transition form factors in the sense of a BTT construction Bardeen:1969aw ; Tarrach:1975tu . We note that formally $\Gamma^{\mu\nu}_{+}$ is obtained from (2) by multiplying with $-i\gamma_{5}$ from the left Korner:1976hv and changing $H\to\tilde{H}$. One can rephrase this by stating that one obtains (24), (25) from (1), (2) by replacing $\bar{u}$ by $-i\bar{u}\gamma_{5}$ for the spinor of the $Y$ hyperon.
The interesting aspect is that $-i\bar{u}\gamma_{5}$ satisfies the same equation of motion as $\bar{u}$ but with the mass replaced by its negative. As a consequence, most of the relations that we present now for the transition $3/2^{\pm}\to 1/2^{\pm}$ can be obtained from the corresponding relations for the transition $3/2^{\mp}\to 1/2^{\pm}$ from subsection 2.1 by just replacing $m_{Y}\to-m_{Y}$ (and changing $H\to\tilde{H}$). Again we introduce dimensionless helicity amplitudes:

$\begin{split}\tilde{H}_{-}(q^{2})&:=-\left(m_{Y^{*}}+m_{Y}\right)m_{Y^{*}}\,\tilde{H}_{1}(q^{2})+\frac{1}{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)\tilde{H}_{2}(q^{2})+q^{2}\tilde{H}_{3}(q^{2})\,,\\ \tilde{H}_{0}(q^{2})&:=-\left(m_{Y^{*}}+m_{Y}\right)m_{Y^{*}}\,\tilde{H}_{1}(q^{2})+\left(m_{Y^{*}}+m_{Y}\right)m_{Y^{*}}\,\tilde{H}_{2}(q^{2})+\frac{m_{Y^{*}}+m_{Y}}{2m_{Y^{*}}}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)\tilde{H}_{3}(q^{2})\,,\\ \tilde{H}_{+}(q^{2})&:=-\left(q^{2}-m_{Y^{*}}m_{Y}-m_{Y}^{2}\right)\tilde{H}_{1}(q^{2})+\frac{1}{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}+q^{2}\right)\tilde{H}_{2}(q^{2})+q^{2}\tilde{H}_{3}(q^{2})\,.\end{split}$ (26)

Note that the conventions for the helicity amplitudes are in line with Carlson:1985mm , but opposite to Junker:2019vvy . The kinematical constraints take the form

$\begin{split}\tilde{H}_{+}((m_{Y^{*}}+m_{Y})^{2})&=\tilde{H}_{0}((m_{Y^{*}}+m_{Y})^{2})\\ &=\tilde{H}_{-}((m_{Y^{*}}+m_{Y})^{2})\end{split}$ (27)

and

$\begin{split}&\frac{2(m_{Y^{*}}-m_{Y})}{m_{Y^{*}}+m_{Y}}\tilde{H}_{0}((m_{Y^{*}}-m_{Y})^{2})\\ &=\tilde{H}_{+}((m_{Y^{*}}-m_{Y})^{2})+\tilde{H}_{-}((m_{Y^{*}}-m_{Y})^{2})\,.\end{split}$ (28)

The width for the two-body radiative decay $Y^{*}\to Y\gamma$ is given by

$\Gamma_{2}=e^{2}\left[3|\tilde{H}_{-}(0)|^{2}+|\tilde{H}_{+}(0)|^{2}\right]\frac{\left(m_{Y^{*}}-m_{Y}\right)^{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}\right)}{96\pi m_{Y^{*}}^{3}}\,.$ (29)

The differential decay width for the Dalitz decay $Y^{*}\to Y\gamma^{*}\to Y\,e^{+}e^{-}$ can be expressed as

$\begin{split}&\frac{\text{d}\Gamma_{3}}{\text{d}q^{2}\,\text{d}(\cos{\theta})}=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{3}\,192m_{Y^{*}}^{3}}\frac{(m_{Y^{*}}-m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)\big[3|\tilde{H}_{-}(q^{2})|^{2}+|\tilde{H}_{+}(q^{2})|^{2}\big]\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{4q^{2}}{(m_{Y^{*}}+m_{Y})^{2}}|\tilde{H}_{0}(q^{2})|^{2}\bigg\}\,.\end{split}$ (30)

The four-body decay $Y^{*}\to Y\gamma^{*}\to\pi N\,e^{+}e^{-}$ has the following differential decay width:

$\begin{split}&\frac{\text{d}\Gamma_{4}}{\text{d}q^{2}\,\text{d}(\cos{\theta})\,\text{d}\Omega_{N}}\\ &=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{4}\,384m_{Y^{*}}^{3}}\,\text{Br}_{Y\to\pi N}\,\frac{(m_{Y^{*}}-m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)\big[|\tilde{H}_{+}(q^{2})|^{2}+3|\tilde{H}_{-}(q^{2})|^{2}\big]\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{4q^{2}}{\left(m_{Y^{*}}+m_{Y}\right)^{2}}|\tilde{H}_{0}(q^{2})|^{2}\\ &\qquad+\frac{4\sqrt{q^{2}}\beta_{e}^{2}}{m_{Y^{*}}+m_{Y}}\,\alpha_{Y}\,\text{Im}[\tilde{H}_{0}(q^{2})\tilde{H}_{+}^{*}(q^{2})]\sin\theta\cos\theta\sin\theta_{N}\sin\phi_{N}\bigg\}\,.\end{split}$ (31)

In the last equation we have a notable exception to the “rule” that the formulae of the present subsection are obtained from the corresponding ones in subsection 2.1 by the replacements $m_{Y}\to-m_{Y}$ and $H\to\tilde{H}$: the interference term has the opposite sign to the one in (10).

### 2.4 Transition $\frac{1}{2}^{\pm}\to\frac{1}{2}^{\pm}$

Finally we study the case of a transition between two hyperons with the same spin and parity assignments. One such case, the transition from $\Sigma^{0}$ to $\Lambda$, has been studied in detail in Granados:2017cib ; Husek:2019wmt . The decomposition is given by

$\langle p_{Y},\lambda_{Y}|j^{\mu}(0)|p_{Y^{*}},\lambda_{Y^{*}}\rangle=e\bar{u}(p_{Y},\lambda_{Y})\,\Gamma^{\mu}_{+}\,u(p_{Y^{*}},\lambda_{Y^{*}})$ (32)

with

$\Gamma^{\mu}_{+}=iF_{2}(q^{2})\,m_{Y^{*}}\,\sigma^{\mu\beta}q_{\beta}+F_{3}(q^{2})\left(q^{2}\gamma^{\mu}-\not{q}q^{\mu}\right)\,.$ (33)

We introduce the dimensionless helicity amplitudes by

$\begin{split}F_{0}(q^{2})&:=(m_{Y^{*}}+m_{Y})^{2}F_{3}(q^{2})-(m_{Y^{*}}+m_{Y})\,m_{Y^{*}}\,F_{2}(q^{2})\,,\\ F_{+}(q^{2})&:=q^{2}F_{3}(q^{2})-(m_{Y^{*}}+m_{Y})\,m_{Y^{*}}\,F_{2}(q^{2})\,.\end{split}$ (34)

These helicity amplitudes satisfy the kinematical constraint

$F_{+}((m_{Y^{*}}+m_{Y})^{2})=F_{0}((m_{Y^{*}}+m_{Y})^{2})\,.$ (35)

The respective decay widths for the two-body radiative decay $Y^{*}\to Y\gamma$, the three-body Dalitz decay $Y^{*}\to Y\gamma^{*}\to Y\,e^{+}e^{-}$, and the four-body decay $Y^{*}\to Y\gamma^{*}\to\pi N\,e^{+}e^{-}$ are given by

$\Gamma_{2}=\frac{e^{2}|F_{+}(0)|^{2}\left(m_{Y^{*}}-m_{Y}\right)^{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}\right)}{8\pi m_{Y^{*}}^{3}}\,,$ (36)

$\begin{split}&\frac{\text{d}\Gamma_{3}}{\text{d}q^{2}\,\text{d}(\cos{\theta})}=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{3}\,16m_{Y^{*}}^{3}}\frac{(m_{Y^{*}}-m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}{\theta}+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}{\theta}\bigg)\,|F_{+}(q^{2})|^{2}\\ &\qquad+\bigg(\sin^{2}{\theta}+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}{\theta}\bigg)\frac{q^{2}}{(m_{Y^{*}}+m_{Y})^{2}}|F_{0}(q^{2})|^{2}\bigg\}\,,\end{split}$ (37)

and

$\begin{split}&\frac{\text{d}\Gamma_{4}}{\text{d}q^{2}\,\text{d}(\cos{\theta})\,\text{d}\Omega_{N}}\\ &=\frac{e^{4}p_{z}\sqrt{q^{2}}\beta_{e}}{(2\pi)^{4}\,32m_{Y^{*}}^{3}}\,\text{Br}_{Y\to\pi N}\,\frac{(m_{Y^{*}}-m_{Y})^{2}-q^{2}}{q^{2}}\\ &\quad\times\bigg\{\bigg(1+\cos^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}\theta\bigg)|F_{+}(q^{2})|^{2}\\ &\qquad+\bigg(\sin^{2}\theta+\frac{4m_{e}^{2}}{q^{2}}\cos^{2}\theta\bigg)\frac{q^{2}}{(m_{Y^{*}}+m_{Y})^{2}}|F_{0}(q^{2})|^{2}\\ &\qquad-\frac{2\sqrt{q^{2}}\beta_{e}^{2}}{m_{Y^{*}}+m_{Y}}\,\alpha_{Y}\,\text{Im}[F_{0}(q^{2})F_{+}^{*}(q^{2})]\sin\theta\cos\theta\sin\theta_{N}\sin\phi_{N}\bigg\}\,.\end{split}$ (38)

## 3 QED type case and form factor parametrization

In this section we will give a quantitative meaning to the phrases “structureless” and “extended structure”. We will call the former “QED type”. For the latter we introduce a radius. A real photon does not resolve the intrinsic structure of the composite hyperons that take part in the electromagnetic transitions. It is the photon virtuality that relates to the resolution; see e.g.
the discussion in Alarcon:2017asr; Miller:2007uy. In this spirit, we define a QED type case Junker:2019vvy by modifying the Dalitz decay formula (7) such that it matches the radiative decay formula (6). To this end we replace, e.g., in (7) $\begin{split}\big[3|H_{-}(q^{2})|^{2}+|H_{+}(q^{2})|^{2}\big]&\to\big[3|H_{-}(0)|^{2}+|H_{+}(0)|^{2}\big]\,,\\ \frac{4q^{2}}{(m_{Y^{*}}-m_{Y})^{2}}\,|H_{0}(q^{2})|^{2}&\to 0\,,\end{split}$ (39) i.e. we replace all virtualities $q^{2}$ by $0$ for these building blocks. We can do this for all spin-parity combinations. In this way we obtain a Dalitz decay formula for “structureless” fermions. If we divide out the corresponding radiative decay width we get $\begin{split}\frac{1}{\Gamma_{2}}\frac{\text{d}\Gamma_{\text{QED type}}}{\text{d}q^{2}\text{d}(\cos{\theta})}&=\frac{(m_{Y^{*}}\pm m_{Y})^{2}-q^{2}}{q^{2}\left(m_{Y^{*}}\pm m_{Y}\right)^{2}\left(m_{Y^{*}}^{2}-m_{Y}^{2}\right)}\\ &\quad\times\frac{e^{2}p_{z}\sqrt{q^{2}}\beta_{e}}{(4\pi)^{2}}\bigg(1+\cos^{2}{\theta}+\frac{4m_{e}^{2}}{q^{2}}\sin^{2}{\theta}\bigg)\,.\end{split}$ (40) Here the upper (lower) sign refers to the cases where the parities of the initial and final fermion differ (are the same). Obviously, this formula (40) is independent of any transition form factors or helicity amplitudes. If an experiment cannot resolve the difference between nature and (40), it cannot reveal if hyperons have an intrinsic structure or not. This does not mean that the determination of radiative decay widths does not contain any interesting information Kaxiras:1985zv, but no visible deviation from (40) means that the Dalitz decays measured in this way contain no more information than radiative decays, i.e. decays with real photons. The QED type case (40) defines our baseline to which we want to compare the case where hyperons do have an intrinsic structure. From now on we focus on decays of the two negative-parity resonances $\Lambda(1520)$ and $\Lambda(1405)$. The mass difference between the considered resonances and the ground-state hyperons is not very large. Consequently the energy range $\sqrt{q^{2}}$ (dilepton invariant mass) that is explored by the Dalitz decays is rather limited: it ranges from twice the electron mass up to the mass difference $m_{Y^{*}}-m_{Y}$. For a rough estimate of the importance of the $q^{2}$ dependence, we approximate any helicity amplitude $G(q^{2})$ in the following way: $G(q^{2})\approx G(0)\left(1+\frac{1}{6}q^{2}\langle r^{2}\rangle\right)$ (41) where we have introduced the radius via $\langle r^{2}\rangle:=\frac{6}{G(0)}\left.\frac{\text{d}G(q^{2})}{\text{d}q^{2}}\right|_{q^{2}=0}\,.$ (42) In view of the typical size of hadrons of about $1\,$fm, we assume $0\leq\langle r^{2}\rangle\leq 1\,\text{fm}^{2}\approx 25\,\text{GeV}^{-2}\,.$ (43) In practice, this will provide us with a rough upper limit of the deviation of a differential decay rate from the corresponding QED type case. We would like to stress that (43) is not entirely accurate. As already pointed out, the transition form factors and helicity amplitudes for resonances are not real-valued in the Dalitz decay region. (They are not even real-valued in the space-like region of electron-hadron scattering.) Strictly speaking, this carries over to the squared radii.
The physical reason for being complex is the inelastic two-step process of strong decay, $Y^{*}\to\pi Y^{\prime}$, and rescattering, $\pi Y^{\prime}\to\gamma^{*}Y$. Here $Y^{\prime}$ denotes another hyperon. However, the imaginary part of a transition form factor or helicity amplitude will not be very large if the total decay width of the resonance is sufficiently small. This is the case for the resonances that we consider. For the electromagnetic transitions $\Sigma(1385)\to\Lambda$ the imaginary parts of the squared radii have been explicitly calculated in Junker:2019vvy. There, the real parts were in the ballpark of (43) and the imaginary parts were much smaller. Therefore we assume that the real part of any $\langle r^{2}\rangle$ dominates over the imaginary part. What remains to be shown to justify (43) as a reasonable approximation? We still have to argue why the real part should have a positive sign. Indeed, there is a well-known case where the squared radius of a hadron seems to be negative: the electric charge radius of the neutron pdg. Yet, this is a somewhat misleading case. For the electric form factor of the neutron, the radius cannot be defined via (42), since the charge of the neutron, $G(0)$, vanishes. As a remedy one drops $G(0)$ in the definition of the charge radius of the neutron. However, one can take a somewhat different route to the same physical information. Instead of electric charge, one can look at the isospin. The isoscalar and isovector form factors of the nucleon all have non-vanishing “charges”. One can define an isoscalar and an isovector radius based on (42); see, e.g., Leupold:2017ngs; Kubis:2000aa. Those two radii are positive and on the order of 1 fm, in full agreement with (43). The results of Junker:2019vvy also support this estimate. Finally we note that even the radius of the pion transition form factor Hoferichter:2014vra fits into this picture. For all these cases, the radii are less than 1 fm. Thus we believe that (43) defines a reasonable, conservative range, and we expect that for all hadrons reality is closer to 1 fm than to 0. ## 4 Concrete results for negative-parity resonances The parametrization (41) seems to indicate that we have two free parameters per helicity amplitude. However, this is not the case because we have to obey the kinematical constraints (4) or (20), respectively. This will help us to express all relevant quantities solely in terms of radii. Some clarification is in order here. Of course, the ansatz (41) is an approximation that holds only for sufficiently low values of $q^{2}$. We want to use this approximation for the whole Dalitz decay region for cases where the mass difference between the decaying and the final hyperon is sufficiently small. The kinematical constraints (4) or (20) lie in this low-energy region because they lie at the end of the kinematically allowed Dalitz decay region. Therefore we can use the kinematical constraints to reduce the number of free parameters and focus on the impact of the radii on the results. Note that this line of reasoning would not work for positive-parity resonances. In particular, the constraint (35) does not lie in the low-energy region where the ansatz (41) would make sense. While not of completely general use, the kinematical constraints are thus perfectly suited for the cases that we consider further, namely for the hyperon resonances $\Lambda(1520)$ and $\Lambda(1405)$ with spin-parity assignments $3/2^{-}$ and $1/2^{-}$, respectively pdg.
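Before specializing to the two transitions, it is instructive to gauge numerically how large the radius effect (41)-(43) can become over the Dalitz region. The following minimal Python sketch (ours, not part of the original analysis) evaluates the enhancement of $|G(q^{2})/G(0)|^{2}$ for $\Lambda(1405)\to\Lambda$, using standard PDG masses:

```python
# Minimal sketch: size of the radius correction G(q^2) = G(0) (1 + q^2 <r^2>/6)
# over the Dalitz decay region of Lambda(1405) -> Lambda e+ e-.
# Masses in GeV; the squared radius <r^2> in GeV^-2, cf. eq. (43).
import numpy as np

m_Ystar, m_Y, m_e = 1.405, 1.116, 0.000511   # Lambda(1405), Lambda, electron
r2 = 25.0                                    # ~ (1 fm)^2, the maximal value (43)

q2 = np.linspace((2 * m_e)**2, (m_Ystar - m_Y)**2, 200)  # allowed q^2 range
enhancement = (1.0 + q2 * r2 / 6.0)**2       # |G(q^2) / G(0)|^2

print(f"Dalitz region: {q2[0]:.2e} GeV^2 <= q^2 <= {q2[-1]:.4f} GeV^2")
print(f"|G/G(0)|^2 at the endpoint: {enhancement[-1]:.2f}")
```

At the kinematic endpoint the squared amplitude is enhanced by roughly 80% relative to the structureless case, which sets the scale of the deviations discussed in the following subsections.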
### 4.1 Transition radii and the decay $\Lambda(1405)\to\Lambda\,e^{+}e^{-}$ Using (41) for the helicity amplitudes $\tilde{F}_{+}$ and $\tilde{F}_{0}$ allows us to rewrite the kinematical constraint (20) into $\tilde{F}_{0}(0)\approx\tilde{F}_{+}(0)\,\frac{1+(m_{Y^{*}}-m_{Y})^{2}\langle r^{2}\rangle_{+}/6}{1+(m_{Y^{*}}-m_{Y})^{2}\langle r^{2}\rangle_{0}/6}\,.$ (44) The remaining dependence on $\tilde{F}_{+}(0)$ cancels out in the ratio of the Dalitz decay width (22) and the radiative decay width (21). Thus the normalized decay width $\text{d}\Gamma_{3}/(\text{d}q^{2}\text{d}(\cos\theta))/\Gamma_{2}$ depends only on the (squared) transition radii $\langle r^{2}\rangle_{+}$ and $\langle r^{2}\rangle_{0}$. Figure 1: Comparison between radius structure and QED-type approximation for the $\frac{1}{2}^{-}\to\frac{1}{2}^{+}$ transition. Top panel: both radii at the maximal value (43); middle panels: one radius at max, one zero; bottom panel: both radii put to zero. In the following, we will focus on the once-integrated normalized decay widths $\frac{1}{\Gamma_{2}}\frac{\text{d}\Gamma_{3}}{\text{d}q^{2}}\quad\mbox{and}\quad\frac{1}{\Gamma_{2}}\frac{\text{d}\Gamma_{3}}{\text{d}(\cos\theta)}\,,$ (45) to study the dependence on the transition radii and compare to the QED type case. The $q^{2}$ dependence is depicted in fig. 1. We observe that the radius related to the helicity flip amplitude $\tilde{F}_{+}$ causes a significant deviation from the QED type case. The effect from the non-flip amplitude is minor. This is related to the additional $q^{2}$ factor that multiplies $\tilde{F}_{0}$ in (22) and to the larger weight of $1+\cos^{2}\theta$ relative to $\sin^{2}\theta=1-\cos^{2}\theta$ when integrating over $\cos\theta$. Note that the QED type case is not defined by putting all radii to zero. Therefore, there is a small difference between the two curves in the bottom panel of fig. 1. We could have defined the QED type case in a different way, but instead we use the occasion to point out that there is in principle an ambiguity in defining the structureless QED case. In practice, however, this ambiguity is small. Depending on the composition of the $\Lambda(1405)$ as a dominantly three-quark or a dominantly hadron-molecule state, its helicity flip transition radius can be expected to be somewhat different. Still we think that 1 fm is a reasonable estimate in any case. We regard the plots of fig. 1 as interesting for experimentalists who aim to reveal the intrinsic structure of the $\Lambda(1405)$ using Dalitz decays. For this endeavor one needs to achieve an experimental accuracy that can discriminate at least between the two lines in the top panel of fig. 1. Studying differences between different scenarios for the structure of the $\Lambda(1405)$ requires an even better accuracy. Next we turn to the angular distribution. If one integrates (22) or (40) over $q^{2}$ (in the range $(2m_{e})^{2}\leq q^{2}\leq(m_{Y^{*}}-m_{Y})^{2}$), one obtains a constant term and a term linear in $\cos^{2}\theta$. Thus the general structure is $\frac{1}{\Gamma_{2}}\frac{\text{d}\Gamma_{3}}{\text{d}(\cos\theta)}=A\,(\cos^{2}\theta+C)\,.$ (46) If one finally integrates over the angle, one obtains the total decay width for the process $Y^{*}\to Y\,e^{+}e^{-}$, normalized to the width for the process $Y^{*}\to Y\gamma$, i.e.
$\frac{\Gamma_{3}}{\Gamma_{2}}=\frac{2}{3}\,A+2AC\,.$ (47) In table 1 we provide $A$, $C$ and the width ratio of (47) for the QED type case and different radius combinations.

$\big(\langle r^{2}\rangle_{0}\,;\langle r^{2}\rangle_{+}\big)$ [GeV$^{-2}$] | $A$ | $C$ | $\frac{2}{3}A+2AC$
---|---|---|---
(0 ; 25) | $0.00252$ | 1.43 | 0.00889
(25 ; 25) | $0.00262$ | 1.34 | 0.00876
(0 ; 0) | $0.00253$ | 1.30 | 0.00830
(25 ; 0) | $0.00259$ | 1.26 | 0.00823
QED type | $0.00273$ | 1.14 | 0.00804

Table 1: Parameters $A$ and $C$, and the decay width ratio for the $J^{P}=\frac{1}{2}^{-}$ initial state in the QED approximation and with a radius structure.

Concerning the overall scaling factor $A$, we observe that different radii lead to modifications on the 10% level. However, for this parameter the ambiguity in how to define a structureless case is of the same order. For the parameter $C$, larger helicity flip transition radii lead to larger values. In total, we observe variations up to about 20%. Interestingly, a radius $\langle r^{2}\rangle_{0}$ in the non-flip amplitude counteracts the effect of a radius $\langle r^{2}\rangle_{+}$ in the helicity flip amplitude. We predict that the Dalitz decay width, normalized to the radiative decay width, is about 0.8 to 0.9%. This fits the rule of thumb that an additional QED vertex provides a suppression factor of about $\alpha=e^{2}/(4\pi)\approx 10^{-2}$. ### 4.2 Transition radii and the decay $\Lambda(1520)\to\Lambda\,e^{+}e^{-}$ We perform the same procedure for the decays of the spin-$3/2^{-}$ hyperon resonance $Y^{*}=\Lambda(1520)$. For the numerical results we focus on $Y=\Lambda$, the lightest strange baryon. The approximation (41) is utilized for the helicity amplitudes $H_{+}$, $H_{0}$, $H_{-}$. It is supposed to be valid in the whole Dalitz decay region, $(2m_{e})^{2}\leq q^{2}\leq(m_{Y^{*}}-m_{Y})^{2}$. The kinematical constraints (4) are used to eliminate the dependence on $H_{...}(0)$ for the ratio of Dalitz decay width and radiative width. What remains is the dependence on the three squared radii, $\langle r^{2}\rangle_{+}$, $\langle r^{2}\rangle_{0}$, $\langle r^{2}\rangle_{-}$. Figs. 2 and 3 illustrate this dependence as a function of $q^{2}$ for the normalized singly-differential Dalitz decay width. Figure 2: Same as fig. 1 but for the $\frac{3}{2}^{-}\to\frac{1}{2}^{+}$ transition. Top panel: all three radii at the maximal value (43); other panels: two radii at max, the third at zero. Figure 3: Same as fig. 2 but with fewer radii put at the maximum value (43). Bottom panel: all radii put to zero; other panels: one radius at max, the rest at zero. What the plots show, first of all, is that $\langle r^{2}\rangle_{-}$ matters most. Whenever it is large, there is a significant deviation from the QED type case. Whenever it is small, the results with radii are close to the structureless QED case. Essentially this can be traced back to the explicit factor of 3 in (7) that boosts the importance of $H_{-}$ relative to the other two helicity amplitudes. In addition, the importance of $|H_{0}|^{2}$ is demoted by the explicit $q^{2}$ factor in front of it and the different weight in the angular average, an effect already observed for the spin-1/2 case of the previous subsection. Again, we believe that the plots of figs. 2 and 3 are important for experimentalists to judge how accurate their results must be to disentangle extended from structureless objects. We turn to the angular dependence; a quick numerical cross-check of the relation (47) against the tabulated values is sketched below.
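As a simple consistency check (ours, not from the original text), the last column of table 1 (and of table 2 below) can be reconstructed from the rounded values of $A$ and $C$ via (47):

```python
# Minimal sketch: reconstruct the width ratio Gamma_3/Gamma_2 = (2/3)A + 2AC,
# eq. (47), from the rounded (A, C) entries of tables 1 and 2.
rows = [
    ("1/2^-, (0;25)",    0.00252, 1.43, 0.00889),  # table 1
    ("1/2^-, QED type",  0.00273, 1.14, 0.00804),  # table 1
    ("3/2^-, (0;25;25)", 0.00267, 1.55, 0.0101),   # table 2
    ("3/2^-, QED type",  0.00292, 1.13, 0.00855),  # table 2
]
for label, A, C, tabulated in rows:
    ratio = 2.0 * A / 3.0 + 2.0 * A * C
    print(f"{label:18s} (2/3)A + 2AC = {ratio:.5f}   tabulated: {tabulated}")
# The reconstructed ratios agree with the tabulated ones up to the rounding
# of A and C in the tables.
```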
Formulae (46) and (47) can again be used, and the dependence of the parameters $A$ and $C$ on the radii is collected in table 2.

$\big(\langle r^{2}\rangle_{0}\,;\langle r^{2}\rangle_{+}\,;\langle r^{2}\rangle_{-}\big)$ [GeV$^{-2}$] | $A$ | $C$ | $\frac{2}{3}A+2AC$
---|---|---|---
(0 ; 25 ; 25) | $0.00267$ | 1.55 | 0.0101
(0 ; 0 ; 25) | $0.00270$ | 1.42 | 0.00945
(25 ; 25 ; 25) | $0.00290$ | 1.35 | 0.00976
(0 ; 25 ; 0) | $0.00272$ | 1.31 | 0.00894
(25 ; 0 ; 25) | $0.00285$ | 1.29 | 0.00924
(0 ; 0 ; 0) | $0.00273$ | 1.28 | 0.00881
(25 ; 25 ; 0) | $0.00282$ | 1.23 | 0.00881
(25 ; 0 ; 0) | $0.00281$ | 1.22 | 0.00870
QED type | $0.00292$ | 1.13 | 0.00855

Table 2: Same as table 1 but for the $J^{P}=\frac{3}{2}^{-}$ initial state.

Similar to the spin-1/2 case of the previous subsection we observe a variation in $A$ of about 10%, but no clear tendency in the impact of finite radii. Concerning the parameter $C$ we observe variations of up to 30%. The value of $C$ is increased for larger values of $\langle r^{2}\rangle_{-}$ or $\langle r^{2}\rangle_{+}$ and decreased for a larger value of $\langle r^{2}\rangle_{0}$. The impact of $\langle r^{2}\rangle_{+}$ is less pronounced than the impact of $\langle r^{2}\rangle_{-}$, which we explain again by the explicit factor of 3 in (7). For the integrated Dalitz decay width, normalized to the photon decay width, we predict 0.9 to 1%, slightly larger than our prediction for the $\Lambda(1405)$ of the previous subsection. We attribute this to the larger phase space available for the heavier $\Lambda(1520)$. ## 5 Further discussion, summary and outlook We have provided a comprehensive framework that first considers the most general electromagnetic transition form factors, free of kinematical constraints. Those form factors are perfectly suited for a dispersive representation Junker:2019vvy, a perspective that we might explore in the future. We have related these form factors to helicity amplitudes and shown how they make their appearance in the decay rates of the processes $Y^{*}\to Y\gamma$ and $Y^{*}\to Y\,e^{-}e^{+}$. We covered the cases where the initial hyperon $Y^{*}$ can have spin 1/2 or 3/2. The final hyperon $Y$ was assumed to have spin 1/2. All parity combinations have been covered. Of course, these relations are in principle not new and are easy to deduce, e.g., from Korner:1976hv. Yet, we have decided to spell out all definitions and conventions in detail to keep the work self-contained and to facilitate comparisons between different works. We have further extended the framework to include also a final weak decay of the ground-state hyperon $Y$, very much in the spirit of Perotti:2018wxm, though with different kinematics. We have stressed that the emerging interference term between a helicity flip and the non-flip amplitude exists also in the Dalitz decay region. In a second step, we have focused on the specific transitions of $\Lambda(1405)$ and $\Lambda(1520)$ to the ground-state $\Lambda$. In view of the not too large kinematical range that is covered by the Dalitz decays, we have parametrized the helicity amplitudes in terms of transition radii. Such radii are commonly used for the study of form factors at low energies, e.g. for the electric or magnetic form factors of proton and neutron, i.e. for their helicity non-flip or flip amplitudes. As a function of such transition radii, we have investigated how much accuracy is needed for an experiment to discriminate between hypothetical structureless hyperons and a more realistic case.
Indeed, we regard the scenario where all radii are non-vanishing as the most realistic one. Concerning the dependence on the invariant mass of the electron-positron pair, our setup shows a significant deviation from the structureless QED type case, in particular in the $q^{2}$ range above $0.01\,$GeV$^{2}$ for the transition from $\Lambda(1405)$ to $\Lambda$ and above $0.02\,$GeV$^{2}$ for the transition from $\Lambda(1520)$ to $\Lambda$. We also see significant differences for the parameter $C$ that parametrizes the deviation of the angular dependence from a pure $\cos^{2}$ distribution. Of course, it is completely straightforward to extend this radius analysis also to the electromagnetic transitions $\Lambda(1405)\to\Sigma^{0}$, $\Lambda(1405)\to\Sigma^{0}(1385)$, $\Lambda(1520)\to\Sigma^{0}$, and with a little generalization also to $\Lambda(1520)\to\Sigma^{0}(1385)$. But the smaller the mass difference between initial and final hyperon, the less additional information is contained in the Dalitz decays relative to the real-photon case. Therefore we have focused on the cases where the final hyperon is as light as possible. More generally, the framework provided here can also be applied to double- or triple-strange hyperon resonances and to baryon resonances where strange quarks are replaced by heavier quarks. ###### Acknowledgements. SL thanks T. Galatyuk, A. Kupść, B. Ramstein, P. Salabura and K. Schönning for many valuable discussions that inspired this investigation. This work has been supported by the Swedish Research Council (Vetenskapsrådet) (grant number 2019-04303). Partial support has been provided by the Polish National Science Centre through the grant 2019/35/O/ST2/02907. ## Appendix A Toy-model Lagrangians The following Lagrangians are hermitian and invariant with respect to charge conjugation symmetry. They are also parity symmetric, but it is the first two properties that make the coupling constants real-valued. Note that we do not make use of these Lagrangians in our actual calculations. In this sense, these are toy-model Lagrangians. But we use them to motivate the appearance or absence of $i$’s in our definitions of the transition form factors $H_{j}$, $\tilde{H}_{j}$, $F_{j}$, and $\tilde{F}_{j}$. A real-valued contribution to $H_{1}$ as introduced in (2) comes from a tree-level calculation based on ${\cal L}_{H1}=a_{1}\left(\bar{\Psi}\gamma^{\mu}F_{\mu\nu}\Psi^{\nu}+\bar{\Psi}^{\nu}\gamma^{\mu}F_{\mu\nu}\Psi\right)\,,$ (48) where $a_{1}\in\mathds{R}$. In the same way one finds a contribution from ${\cal L}_{H2}=ia_{2}\left(\bar{\Psi}F_{\mu\nu}\partial^{\mu}\Psi^{\nu}-\partial^{\mu}\bar{\Psi}^{\nu}F_{\mu\nu}\Psi\right)$ (49) for $H_{2}$ and from ${\cal L}_{H3}=ia_{3}\left(\bar{\Psi}\partial^{\mu}F_{\mu\nu}\Psi^{\nu}-\bar{\Psi}^{\nu}\partial^{\mu}F_{\mu\nu}\Psi\right)$ (50) for $H_{3}$. Also here the coupling constants must be real, $a_{2},a_{3}\in\mathds{R}$.
Similarly, the Lagrangians $\tilde{\cal L}_{H1}=i\tilde{a}_{1}\left(\bar{\Psi}\gamma^{\mu}\gamma_{5}F_{\mu\nu}\Psi^{\nu}-\bar{\Psi}^{\nu}\gamma^{\mu}\gamma_{5}F_{\mu\nu}\Psi\right)\,,$ (51) $\tilde{\cal L}_{H2}=\tilde{a}_{2}\left(\bar{\Psi}\gamma_{5}F_{\mu\nu}\partial^{\mu}\Psi^{\nu}-\partial^{\mu}\bar{\Psi}^{\nu}\gamma_{5}F_{\mu\nu}\Psi\right)\,,$ (52) $\tilde{\cal L}_{H3}=\tilde{a}_{3}\left(\bar{\Psi}\gamma_{5}\partial^{\mu}F_{\mu\nu}\Psi^{\nu}-\bar{\Psi}^{\nu}\gamma_{5}\partial^{\mu}F_{\mu\nu}\Psi\right)$ (53) contribute to $\tilde{H}_{1}$, $\tilde{H}_{2}$, $\tilde{H}_{3}$, respectively. We recall that the latter transition form factors have been introduced in (25). The Lagrangians are only hermitian and invariant with respect to charge conjugation if the coupling constants are real, i.e. $\tilde{a}_{1},\tilde{a}_{2},\tilde{a}_{3}\in\mathds{R}$. Turning to spin-1/2, we have the Lagrangians ${\cal L}_{F2}=b_{2}\left(\bar{\Psi}_{Y}\sigma_{\mu\nu}F^{\mu\nu}\Psi_{Y^{*}}+\bar{\Psi}_{Y^{*}}\sigma_{\mu\nu}F^{\mu\nu}\Psi_{Y}\right)$ (54) and ${\cal L}_{F3}=b_{3}\left(\bar{\Psi}_{Y}\gamma_{\mu}\partial_{\nu}F^{\mu\nu}\Psi_{Y^{*}}+\bar{\Psi}_{Y^{*}}\gamma_{\mu}\partial_{\nu}F^{\mu\nu}\Psi_{Y}\right)$ (55) contributing with real-valued results to $F_{2}$ and $F_{3}$, respectively. Finally $\tilde{\cal L}_{F2}=i\tilde{b}_{2}\left(\bar{\Psi}_{Y}\sigma_{\mu\nu}\gamma_{5}F^{\mu\nu}\Psi_{Y^{*}}+\bar{\Psi}_{Y^{*}}\sigma_{\mu\nu}\gamma_{5}F^{\mu\nu}\Psi_{Y}\right)$ (56) and $\tilde{\cal L}_{F3}=i\tilde{b}_{3}\left(\bar{\Psi}_{Y}\gamma_{\mu}\gamma_{5}\partial_{\nu}F^{\mu\nu}\Psi_{Y^{*}}-\bar{\Psi}_{Y^{*}}\gamma_{\mu}\gamma_{5}\partial_{\nu}F^{\mu\nu}\Psi_{Y}\right)$ (57) contribute to $\tilde{F}_{2}$ and $\tilde{F}_{3}$, respectively. Hermiticity and charge conjugation symmetry demand $b_{2},b_{3},\tilde{b}_{2},\tilde{b}_{3}\in\mathds{R}$. An explanation is in order concerning the labels 2 and 3 for the transition form factors $F_{i}$ and $\tilde{F}_{i}$. In a (here formal) low-energy counting scheme where the hyperons’ three-momenta and the electromagnetic potentials and their four-momenta are counted as soft and of the same size, the Lagrangians (54) and (56) are of second order, while (55) and (57) are of third order. In addition, the construction in (54) resembles the Pauli term that describes the anomalous magnetic moment of a spin-1/2 state, here generalized to transitions. For the nucleon, the Pauli form factor is often called $F_{2}$; see e.g. Kubis:2000aa and references therein. For the transitions involving spin-3/2 initial states this formal power counting leads to a mismatch between the two parity sectors, cf. e.g. Holmberg:2019ltw; Junker:2019vvy. Therefore we have simply enumerated the transition form factors $H_{i}$ and $\tilde{H}_{i}$ from 1 to 3. ## Appendix B High-energy behavior In this appendix we use the methods of Carlson:1985mm to present the high-energy behavior of all transition form factors. This procedure uses the implicit assumption that the considered hyperons have a minimal quark content of three quarks. For the validity of the framework, it does not matter how large the overlap of the physical state with a three-quark configuration is Aznauryan:2011qj, as long as it is non-zero. The quark counting rules are derived for large space-like photon virtuality. Therefore it is convenient to introduce the large positive quantity $Q^{2}:=-q^{2}$ and $Q:=\sqrt{Q^{2}}$.
One obtains $\begin{split}H_{+}(-Q^{2})\sim\frac{1}{Q^{4}}\,,&\;H_{0}(-Q^{2})\sim\frac{1}{Q^{6}}\,,\;H_{-}(-Q^{2})\sim\frac{1}{Q^{6}}\,;\\[10pt] \tilde{H}_{+}(-Q^{2})\sim\frac{1}{Q^{4}}\,,&\;\tilde{H}_{0}(-Q^{2})\sim\frac{1}{Q^{6}}\,,\;\tilde{H}_{-}(-Q^{2})\sim\frac{1}{Q^{6}}\,;\\[10pt] F_{+}(-Q^{2})\sim\frac{1}{Q^{4}}\,,&\;F_{0}(-Q^{2})\sim\frac{1}{Q^{6}}\,;\\[10pt] \tilde{F}_{+}(-Q^{2})\sim\frac{1}{Q^{4}}\,,&\;\tilde{F}_{0}(-Q^{2})\sim\frac{1}{Q^{6}}\,,\end{split}$ (58) which leads to $\begin{split}H_{1}(-Q^{2})\sim\frac{1}{Q^{6}}\,,&\;H_{2}(-Q^{2})\sim\frac{1}{Q^{8}}\,,\;H_{3}(-Q^{2})\sim\frac{1}{Q^{8}}\,;\\[10pt] \tilde{H}_{1}(-Q^{2})\sim\frac{1}{Q^{6}}\,,&\;\tilde{H}_{2}(-Q^{2})\sim\frac{1}{Q^{8}}\,,\;\tilde{H}_{3}(-Q^{2})\sim\frac{1}{Q^{8}}\,;\\[10pt] F_{2}(-Q^{2})\sim\frac{1}{Q^{6}}\,,&\;F_{3}(-Q^{2})\sim\frac{1}{Q^{6}}\,;\\[10pt] \tilde{F}_{2}(-Q^{2})\sim\frac{1}{Q^{6}}\,,&\;\tilde{F}_{3}(-Q^{2})\sim\frac{1}{Q^{6}}\,.\end{split}$ (59) ## Appendix C Kinematical constraints and partial waves In the main part of this paper we use helicity amplitudes, i.e. amplitudes characterized by the helicities of the initial and final states and by the total angular momentum Jacob:1959at. Instead of helicities, one could also use spin and orbital angular momentum. This characterization scheme is commonly used in non-relativistic physics (where the spin-orbit coupling is suppressed). While we find the helicity amplitudes in general more practical for our purpose, the classification using orbital angular momentum $l$ is actually helpful where the considered system becomes non-relativistic. For the Dalitz decays $Y^{*}\to Y\,e^{+}e^{-}$, this happens at the end of the phase space, i.e. for $q^{2}\approx(m_{Y^{*}}-m_{Y})^{2}$. Here initial and final hyperon are at rest and their mass difference goes entirely into the back-to-back motion of the electron-positron pair. Note that we consider the frame where the virtual photon is at rest and that we denote the hyperons’ momentum by $p_{z}$, cf. (8). In subsection 2.1 we discuss the case of opposite parities of the initial spin-3/2 and final spin-1/2 hyperon. In this case, the orbital angular momentum $l$ must be even. In total we find three partial waves: $l=0$ and $s=3/2$; $l=2$ and $s=1/2$; $l=2$ and $s=3/2$. Here $s$ denotes the total spin built from the spin 1 of the photon and the spin 1/2 of the final hyperon. Since the amplitude for orbital angular momentum $l$ scales with $p_{z}^{l}$, one amplitude (the s-wave) becomes dominant at the end of the phase space. This must find its expression in a relation between the helicity amplitudes. Indeed, this is what the kinematical constraint (4) means physically. To complete this discussion, we note that due to crossing symmetry the same amplitude that describes $Y^{*}\to Y\,e^{+}e^{-}$ can also be used to describe $e^{+}e^{-}\to Y\bar{Y}^{*}$. A non-relativistic situation emerges at the production threshold $q^{2}\approx(m_{Y^{*}}+m_{Y})^{2}$. Since the antifermion $\bar{Y}^{*}$ has opposite parity to $Y^{*}$, the allowed orbital angular momentum $l$ must now be odd. The three possible partial waves are $l=1$ and $s=1$; $l=1$ and $s=2$; $l=3$ and $s=2$, where now $s$ denotes the total spin built from the spin 3/2 of the antihyperon and the spin 1/2 of the hyperon. At threshold, the two p-waves dominate over the f-wave. Thus one of the three helicity amplitudes must be related to the other two. This is the physics content of the kinematical constraint (5).
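For bookkeeping, the partial-wave counting just described can be condensed into a short display (our summary; the angular-momentum coupling itself is standard): $\begin{split}&\text{decay endpoint, } q^{2}\to(m_{Y^{*}}-m_{Y})^{2},\ J=\tfrac{3}{2},\ l\ \text{even:}\quad (l,s)\in\big\{(0,\tfrac{3}{2}),\,(2,\tfrac{1}{2}),\,(2,\tfrac{3}{2})\big\}\,,\\ &\text{production threshold, } q^{2}\to(m_{Y^{*}}+m_{Y})^{2},\ J=1,\ l\ \text{odd:}\quad (l,s)\in\big\{(1,1),\,(1,2),\,(3,2)\big\}\,.\end{split}$ Since ${\cal M}_{l}\sim p_{z}^{l}$, only the s-wave survives at the decay endpoint, which is the physics behind (4), while at threshold the f-wave is suppressed relative to the two p-waves, leaving the single relation (5).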
Analogous considerations can be used for the other spin-parity combinations that we discuss in this paper. For every case, one can understand where and how many kinematical constraints emerge for the helicity amplitudes. ## Appendix D General structure of the squared matrix element for the four-body decay We consider the decay $Y^{*}\to Y\,e^{+}e^{-}\to N\pi\,e^{+}e^{-}$. The spin summed/averaged squared matrix element $\langle|{\cal M}_{4}|^{2}\rangle$ can be expressed in terms of the four four-vectors $p_{Y}$, $p_{N}$, $q=p_{e^{-}}+p_{e^{+}}$ and $k_{e}=p_{e^{-}}-p_{e^{+}}$. The Lorentz invariant combinations that cannot be expressed solely by masses are $q^{2}$, $k_{e}\cdot p_{Y}$, $q\cdot p_{N}$, $k_{e}\cdot p_{N}$ and $\epsilon_{\mu\nu\alpha\beta}\,k_{e}^{\mu}\,p_{Y}^{\nu}\,p_{N}^{\alpha}\,q^{\beta}$. In the following we will show that the general structure is $\begin{split}\langle|{\cal M}_{4}|^{2}\rangle={}&J_{1}(q^{2})+J_{2}(q^{2})\,(k_{e}\cdot p_{Y})^{2}\\ &+J_{3}(q^{2})\,k_{e}\cdot p_{Y}\;\epsilon_{\mu\nu\alpha\beta}\,k_{e}^{\mu}\,p_{Y}^{\nu}\,p_{N}^{\alpha}\,q^{\beta}\,.\end{split}$ (60) In particular, we will show that $\langle|{\cal M}_{4}|^{2}\rangle$ does not depend on $q\cdot p_{N}$ or $k_{e}\cdot p_{N}$. An evaluation of the lepton trace shows that $k_{e}$ can only appear pairwise. It is also easy to see that $p_{N}$ can appear at most linearly. This already defines the structure with the Levi-Civita tensor. No terms $\sim q\cdot p_{N},k_{e}\cdot p_{N}$ could show up there, because there is already one $p_{N}$ contracted with the Levi-Civita tensor. It remains to be shown that any term without a Levi-Civita tensor does not depend on $q\cdot p_{N}$ or $k_{e}\cdot p_{N}$. The Levi-Civita structure emerges from an odd number of $\gamma_{5}$ matrices, while any other structure stems from terms with an even number of $\gamma_{5}$ matrices. Consequently, the weak coupling constants $A$ and $B$ from (15) each appear linearly in $J_{3}$, while the contributions to any other structure are $\sim|A|^{2}$ or $\sim|B|^{2}$. Thus one has to deal there either with the $A$-type interaction or with the $B$-type interaction, but not with both at the same time. Both interactions are formally parity invariant if one considers the nucleon as having opposite or the same parity as the $Y$. Thus one can use parity arguments to show that any term that does not contain the Levi-Civita tensor will be independent of $q\cdot p_{N}$ and $k_{e}\cdot p_{N}$. Now we consider the rest frame of the $Y$ and a spin quantization axis along the motion of the nucleon. In this frame, the spin of the $Y$ must be identical to the spin of the nucleon. Let ${\cal M}_{w}(s_{N},s_{Y})$ be the decay matrix element caused by the $A$- or $B$-interaction (exclusive “or”). Here $s_{N/Y}$ is the spin of the nucleon or $Y$, respectively. We find $\begin{split}{\cal M}_{w}(s_{N},s_{Y})\sim{}&\delta_{s_{N},s_{Y}}\,,\\ |{\cal M}_{w}(+1/2,+1/2)|={}&|{\cal M}_{w}(-1/2,-1/2)|\,.\end{split}$ (61) Let ${\cal M}_{s}(\ldots,s_{Y})$ denote the decay matrix element for the decay $Y^{*}\to Y\,e^{+}e^{-}$. The dots denote the spins of the other states. We do not need to specify them further.
In total we find $\begin{split}\langle|{\cal M}_{4}|^{2}\rangle_{A\;{\rm or}\;B}\sim{}&\sum\limits_{\ldots}\sum\limits_{s_{N},s_{Y},s^{\prime}_{Y}}{\cal M}_{w}(s_{N},s_{Y})\,{\cal M}_{s}(\ldots,s_{Y})\,{\cal M}_{s}^{*}(\ldots,s^{\prime}_{Y})\,{\cal M}^{*}_{w}(s_{N},s^{\prime}_{Y})\\ ={}&\sum\limits_{\ldots}\sum\limits_{s_{N}}{\cal M}_{w}(s_{N},s_{N})\,{\cal M}^{*}_{w}(s_{N},s_{N})\,{\cal M}_{s}(\ldots,s_{N})\,{\cal M}_{s}^{*}(\ldots,s_{N})\\ ={}&\sum\limits_{\ldots}\sum\limits_{s_{N}}|{\cal M}_{w}(s_{N},s_{N})|^{2}\,|{\cal M}_{s}(\ldots,s_{N})|^{2}\\ ={}&\sum\limits_{\ldots}\left(|{\cal M}_{w}(+1/2,+1/2)|^{2}\,|{\cal M}_{s}(\ldots,+1/2)|^{2}+|{\cal M}_{w}(-1/2,-1/2)|^{2}\,|{\cal M}_{s}(\ldots,-1/2)|^{2}\right)\\ ={}&|{\cal M}_{w}(+1/2,+1/2)|^{2}\sum\limits_{\ldots}\left(|{\cal M}_{s}(\ldots,+1/2)|^{2}+|{\cal M}_{s}(\ldots,-1/2)|^{2}\right)\\ ={}&\frac{1}{2}\left(\sum\limits_{s_{N},s_{Y}}|{\cal M}_{w}(s_{N},s_{Y})|^{2}\right)\left(\sum\limits_{s^{\prime}_{Y},\ldots}|{\cal M}_{s}(\ldots,s^{\prime}_{Y})|^{2}\right)\,.\end{split}$ Now we have reached a product of two spin-summed squares of matrix elements. Both are Lorentz invariant and can be evaluated in any frame. The first one does not depend on $q$ or $k_{e}$. The second does not depend on $p_{N}$. Therefore, the products $q\cdot p_{N}$ and $k_{e}\cdot p_{N}$ do not appear. We note again that this line of reasoning only works if one considers solely the $A$-type interaction or solely the $B$-type interaction. It does not work for interference terms $\sim AB^{*},A^{*}B$. But such interference terms are accompanied by an odd number of $\gamma_{5}$ matrices and therefore give rise to a Levi-Civita tensor. ## References * (1) C. Granados, S. Leupold, and E. Perotti, Eur. Phys. J. A53, 117 (2017) [arXiv:1701.09130 [hep-ph]]. * (2) J. M. Alarcón, A. N. Hiller Blin, M. J. Vicente Vacas, and C. Weiss, Nucl. Phys. A964, 18 (2017) [arXiv:1703.04534 [hep-ph]]. * (3) S. Leupold, Eur. Phys. J. A54, 1 (2018) [arXiv:1707.09210 [hep-ph]]. * (4) O. Junker, S. Leupold, E. Perotti, and T. Vitos, Phys. Rev. C 101, 015206 (2020) [arXiv:1910.07396 [hep-ph]]. * (5) T. Husek and S. Leupold, Eur. Phys. J. C 80, 218 (2020) [arXiv:1911.02571 [hep-ph]]. * (6) L. G. Landsberg, Phys. Rept. 128, 301 (1985). * (7) E. Czerwiński, S. Eidelman, C. Hanhart, B. Kubis, A. Kupść, S. Leupold, P. Moskal, and S. Schadmand, eds., MesonNet Workshop on Meson Transition Form Factors, 2012. arXiv:1207.6556 [hep-ph]. * (8) G. A. Miller, Phys. Rev. Lett. 99, 112001 (2007) [arXiv:0705.2409 [nucl-th]]. * (9) S. Pacetti, R. Baldini Ferroli, and E. Tomasi-Gustafsson, Phys. Rept. 550-551, 1 (2015). * (10) V. Punjabi, C. F. Perdrisat, M. K. Jones, E. J. Brash, and C. E. Carlson, Eur. Phys. J. A51, 79 (2015) [arXiv:1503.01452 [nucl-ex]]. * (11) R. Devenish, T. Eisenschitz, and J. Körner, Phys. Rev. D 14, 3063 (1976). * (12) J. G. Körner and M. Kuroda, Phys. Rev. D16, 2165 (1977). * (13) C. E. Carlson, Phys. Rev. D34, 2704 (1986). * (14) V. Pascalutsa, M. Vanderhaeghen, and S. N. Yang, Phys. Rept. 437, 125 (2007) [arXiv:hep-ph/0609004]. * (15) I. Aznauryan and V. Burkert, Prog. Part. Nucl. Phys. 67, 1 (2012) [arXiv:1109.1720 [hep-ph]]. * (16) L. Tiator, D. Drechsel, S. Kamalov, and M. Vanderhaeghen, Eur. Phys. J.
ST 198, 141 (2011) [arXiv:1109.6745 [nucl-th]]. * (17) G. Eichmann and G. Ramalho, Phys. Rev. D 98, 093007 (2018) [arXiv:1806.04579 [hep-ph]]. * (18) E. Kaxiras, E. J. Moniz, and M. Soyeur, Phys. Rev. D32, 695 (1985). * (19) B. Kubis and U.-G. Meißner, Eur. Phys. J. C18, 747 (2001) [arXiv:hep-ph/0010283]. * (20) H. Sanchis-Alepuz, R. Alkofer, and C. S. Fischer, Eur. Phys. J. A 54, 41 (2018) [arXiv:1707.08463 [hep-ph]]. * (21) M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 123, 122003 (2019) [arXiv:1903.09421 [hep-ex]]. * (22) G. Ramalho, M. Peña, and K. Tsushima, Phys. Rev. D 101, 014014 (2020) [arXiv:1908.04864 [hep-ph]]. * (23) G. Ramalho, Phys. Rev. D 102, 054016 (2020) [arXiv:2002.07280 [hep-ph]]. * (24) B. Ramstein et al. [HADES Collaboration], EPJ Web Conf. 199, 01008 (2019). * (25) E. Perotti, G. Fäldt, A. Kupść, S. Leupold, and J. J. Song, Phys. Rev. D99, 056008 (2019) [arXiv:1809.04038 [hep-ph]]. * (26) P. Zyla et al. [Particle Data Group], PTEP 2020, 083C01 (2020). * (27) S. S. Nair, E. Perotti, and S. Leupold, Phys. Lett. B 788, 535 (2019) [arXiv:1802.02801 [hep-ph]]. * (28) M. Holmberg and S. Leupold, Eur. Phys. J. A54, 103 (2018) [arXiv:1802.05168 [hep-ph]]. * (29) M. Mai, “Review of the $\mathbf{\Lambda}$(1405): A curious case of a strangeness resonance,” September 2020. arXiv:2010.00056 [nucl-th]. Invited contribution to the EPJST. * (30) R. Dalitz, T. Wong, and G. Rajasekaran, Phys. Rev. 153, 1617 (1967). * (31) P. Siegel and W. Weise, Phys. Rev. C 38, 2221 (1988). * (32) D. Jido, J. Oller, E. Oset, A. Ramos, and U.-G. Meißner, Nucl. Phys. A 725, 181 (2003) [arXiv:nucl-th/0303062]. * (33) C. Garcia-Recio, M. Lutz, and J. Nieves, Phys. Lett. B 582, 49 (2004) [arXiv:nucl-th/0305100]. * (34) L. Geng, E. Oset, and M. Döring, Eur. Phys. J. A 32, 201 (2007) [arXiv:hep-ph/0702093]. * (35) T. Sekihara, T. Hyodo, and D. Jido, Phys. Lett. B 669, 133 (2008) [arXiv:0803.4068 [nucl-th]]. * (36) J. M. M. Hall, W. Kamleh, D. B. Leinweber, B. J. Menadue, B. J. Owen, A. W. Thomas, and R. D. Young, Phys. Rev. Lett. 114, 132002 (2015) [arXiv:1411.3402 [hep-lat]]. * (37) E. Kolomeitsev and M. Lutz, Phys. Lett. B 585, 243 (2004) [arXiv:nucl-th/0305101]. * (38) L. Roca, C. Hanhart, E. Oset, and U.-G. Meißner, Eur. Phys. J. A 27, 373 (2006) [arXiv:nucl-th/0602016]. * (39) L. Roca, S. Sarkar, V. Magas, and E. Oset, Phys. Rev. C 73, 045208 (2006) [arXiv:hep-ph/0603222]. * (40) W. A. Bardeen and W. K. Tung, Phys. Rev. 173, 1423 (1968), [Erratum: Phys. Rev. D4, 3229 (1971)]. * (41) R. Tarrach, Nuovo Cim. A28, 409 (1975). * (42) E. Thomé, PhD thesis, Uppsala U. (2012). * (43) W. Ikegami Andersson, PhD thesis, Uppsala U. (2020). * (44) G. Fäldt, Eur. Phys. J. A51, 74 (2015) [arXiv:1306.0525 [nucl-th]]. * (45) G. Fäldt, Eur. Phys. J. A52, 141 (2016) [arXiv:1602.02532 [nucl-th]]. * (46) G. Fäldt and A. Kupść, Phys. Lett. B 772, 16 (2017) [arXiv:1702.07288 [hep-ph]]. * (47) G. Fäldt, Phys. Rev. D 97, 053002 (2018) [arXiv:1709.01803 [hep-ph]]. * (48) G. Fäldt and K. Schönning, Phys. Rev. D 101, 033002 (2020) [arXiv:1908.04157 [hep-ph]]. * (49) M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory, Westview Press, 1995. * (50) M. Hoferichter, B. Kubis, S. Leupold, F. Niecknig, and S. P. Schneider, Eur. Phys. J. C74, 3180 (2014) [arXiv:1410.4691 [hep-ph]]. * (51) M. Holmberg and S. Leupold, Phys. Rev. D 100, 114001 (2019) [arXiv:1909.13562 [hep-ph]]. * (52) M. Jacob and G. Wick, Annals Phys. 7, 404 (1959).
Federated Learning (FL) is an emerging distributed machine learning approach that preserves client privacy by storing data on edge devices. However, data heterogeneity among clients presents challenges in training models that perform well on all local distributions. Recent studies have proposed clustering as a solution to tackle client heterogeneity in FL by grouping clients with distribution shifts into different clusters. However, the diverse learning frameworks used in current clustered FL methods make it challenging to integrate various clustered FL methods, gather their benefits, and make further improvements. To this end, this paper presents a comprehensive investigation into current clustered FL methods and proposes a four-tier framework to encompass and extend existing approaches. Based on this framework, we identify the remaining challenges associated with current clustering methods in each tier and propose an enhanced clustering method to address these challenges. Through extensive numerical evaluations, we showcase the effectiveness of our clustering framework and the improved components. Our code will be publicly available. § INTRODUCTION Federated Learning (FL) is a privacy-focused distributed machine learning approach. In FL, the server shares the model with clients for local training, and the clients send parameter updates back to the server. The clients do not share their raw data with the server, ensuring privacy. However, the non-iid client data distribution leads to significant performance drops for FL algorithms [McMahan et al., 2016, Li et al., 2018, Karimireddy et al., 2020, Karimireddy et al., 2019]. To address data heterogeneity, traditional FL focuses on training a single global model that performs well across all local distributions [Li et al., 2021, Li et al., 2018, Tang et al., 2022, Guo et al., 2023]. However, relying solely on a global model may not adequately handle the heterogeneous client distributions. As a remedy, clustered FL methods have been proposed to group clients into different clusters based on their local distributions. Numerous studies have demonstrated the superiority of clustered FL methods over single-model FL approaches [Long et al., 2023, Sattler et al., 2020, Ghosh et al., 2020, Marfoq et al., 2021, Guo et al., 2023]. Overview of the proposed framework. The framework encompasses the existing clustered FL algorithms through the design of four tiers: cluster formulations, which maximize the conditional distribution, the joint distribution, or variable relationships; cluster weights calculation, covering soft clustering and hard clustering; the adaptive clustering procedure, which uses a predefined number of clusters, automatically adds new clusters, or merges and removes existing clusters; and client distance metrics, based on distances between clients' local gradients, local model parameters, or local feature norms. The four tiers collaborate to form a comprehensive clustered FL learning process, as shown in the left part of the figure. For instance, CFL can be described by the components A, D, G, and J, while A, E, and F cover FedEM. Diverse learning frameworks pose challenges for enhancing clustered FL. Despite the success of current clustered FL methods, the use of diverse learning frameworks poses challenges in integrating different algorithms, gathering their advantages, and achieving further improvements.
For instance, FedEM [Marfoq et al., 2021] excels in addressing complex mixture distribution scenarios and performs admirably on challenging tasks. However, it necessitates a predefined number of clusters, constraining its practicality. In contrast, adaptive clustering techniques such as CFL [Sattler et al., 2020] can autonomously determine the number of clusters. Nonetheless, CFL cannot be seamlessly integrated with soft clustering methods like FedEM, thereby limiting its effectiveness in handling complex mixture distribution tasks. Consolidating existing methods as a solution. To tackle these challenges, our aim is to establish a holistic learning framework for clustered FL methods, allowing us to seamlessly combine their advantages. Once accomplished, we can easily incorporate existing methods such as FedEM and CFL to develop an improved approach (see Table <ref>). Therefore, we first revisit and summarize existing clustered FL methods. We then introduce a holistic clustered FL algorithm framework with four tiers (as shown in Figure <ref>). These tiers address the primary tasks in clustering methods: (1) Cluster Learning and Assignment (tiers 1 and 2), which assigns clients to optimal clusters and learns cluster-specific parameters; (2) Cluster Number Determination (tiers 3 and 4), which decides the number of clusters. The four tiers constitute a comprehensive clustered FL learning process that incorporates existing methods and allows for flexible modifications in each tier (see Algorithm <ref>). It is evident that the enhanced algorithms exhibit significant performance improvements compared to the original methods, as demonstrated in Table <ref>. In light of this framework, we have identified the remaining challenges within each tier that were previously overlooked by existing clustered FL methods, as illustrated in Figure <ref> in Section <ref>. We then introduce an enhanced algorithm designed to tackle these remaining challenges. Numerical results confirm that it effectively extends existing methods, achieving a superior balance between personalization and generalization while delivering strong performance. We summarize the contributions of this paper as follows: * We introduce a holistic framework for clustered FL that encompasses the existing methods and enables the integration of their benefits by adjusting the techniques in each tier. * We identify four remaining challenges in clustered FL algorithms within each tier of the framework, and introduce an improved algorithm to address these challenges. * Extensive experiments on different datasets (CIFAR10, CIFAR100, and Tiny-Imagenet) and various architectures (MobileNet-V2 and ResNet18) demonstrate the effectiveness of our framework and its improved components. § RELATED WORKS In the field of Federated Learning, FedAvg serves as the de-facto algorithm, employing local Stochastic Gradient Descent (local SGD) techniques [McMahan et al., 2016, Lin et al., 2020] to reduce communication costs and protect client privacy. However, FL faces significant challenges due to distribution shifts among clients, which can hinder the performance of FL algorithms [Li et al., 2018, Wang et al., 2020, Karimireddy et al., 2020, Jiang & Lin, 2023, Guo et al., 2021]. Traditional FL methods primarily focus on improving the convergence speed of global models and incorporate bias reduction techniques [Tang et al., 2022, Guo et al., 2023, Li et al., 2021, Li et al., 2018].
However, these single-model approaches are often inadequate for handling heterogeneous data distributions, especially in cases involving concept shifts [Ke et al., 2022, Guo et al., 2023, Jothimurugesan et al., 2023]. To address these challenges, researchers have introduced clustered FL algorithms to enhance performance. Clustered FL groups clients based on their local data distribution, addressing the distribution shift problem. Most methods employ hard clustering with a fixed number of clusters, grouping clients by measuring their similarities [Ghosh et al., 2020, Long et al., 2023, Wang et al., 2022, Stallmann & Wilbik, 2022]. However, hard clustering may not adequately capture complex relationships between local distributions, and soft clustering paradigms have been proposed to address this issue [Marfoq et al., 2021, Wu et al., 2023, Ruan & Joe-Wong, 2022, Guo et al., 2023]. In this paper, we propose a generalized formulation for clustered FL that encompasses current methods and improves them by addressing issues related to intra-client inconsistency and efficiency. Another line of research focuses on automatically determining the number of clusters. Current methods utilize hierarchical clustering [Sattler et al., 2020, Sattler et al., 2020, Zhao et al., 2020, Briggs et al., 2020, Zeng et al., 2023, Duan et al., 2021, Duan et al., 2021], which measures client dissimilarity using model parameters or local gradient distances. Some papers enhance these distance metrics by employing various techniques, such as eigenvectors [Yan et al., 2023] and local feature norms [Wei & Huang, 2023]. FEDCOLLAB [Bao et al., 2023] quantifies client similarity through client discriminators. However, the requirement for discriminators between every client pair in FEDCOLLAB hinders scalability for cross-device scenarios with numerous clients. In this paper, we concentrate on cross-device settings, introducing a holistic adaptive clustering framework that enables cluster splitting and merging. We also present enhanced weight updating for soft clustering and finer distance metrics for various clustering principles. For further discussions on related works, please refer to Appendix <ref>. § REVISITING AND EXTENDING CLUSTERED FL METHODS Current clustered FL methods typically employ diverse learning frameworks. As a result, existing methods often face challenges in gathering the advantages of different algorithms for potential enhancements. To address this issue, as shown in Figure <ref>, we introduce our framework, consisting of four tiers designed to tackle the primary tasks of clustering methods: (1) Cluster Learning and Assignment (tiers 1 and 2): identify which clients should belong to the same clusters and learn parameters for each cluster. (2) Cluster Number Determination (tiers 3 and 4): decide the number of clusters. As a result, the four tiers form a comprehensive learning process (Algorithm <ref>), enabling flexible improvements and the integration of advantages from different algorithms (Table <ref>). §.§ Tiers 1 & 2: Cluster Formulations and Cluster Weights Calculation We introduce the first two tiers: the cluster formulations and the cluster weights calculation. The former defines the objective functions of the clustering methods, aiming to learn the underlying distributions of each cluster. The latter orthogonally helps find the suitable clusters for each client: hard clustering assigns each client to one cluster, while soft clustering allows clients to contribute to multiple clusters. We propose the following optimization framework to encompass these two tiers.
Optimization framework of clustered FL methods. The clustered FL methods can be expressed as a dual-variable optimization problem that maximizes $\cL(\mTheta, \mOmega)$, with $K$ clusters and $M$ data sources represented as $\cD_1, \cdots, \cD_M$: \begin{align} \label{equ:general-objective} \textstyle \cL(\mTheta, \mOmega) & = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \log \left( \sum_{k=1}^{K} \omega_{i;k} \cL_k(\xx_{i,j}, y_{ij}; \mtheta_k) \right) \,, \quad \text{s.t.} \quad \sum_{k=1}^{K} \omega_{i;k} = 1, \forall i \,, \end{align} where $N = \sum_{i=1}^{M} N_i$ and $N_i := |\cD_i|$. The parameters to be optimized are the clustering weights $\mOmega = [\omega_{1;1}, \cdots, \omega_{M;K}]$ and the model parameters $\mTheta = [\mtheta_1, \cdots, \mtheta_K]$. Tier 1: Incorporating existing cluster formulations. The existing methods employ clustering to address diverse tasks, which results in the proposal of various formulations. Our Algorithm <ref> encompasses the existing clustering formulations by selecting $\cL_k(\xx_{i,j}, y_{ij}; \mtheta_k)$ as follows: * $\cP_{\mtheta_k}(y_{i,j} | \xx_{i,j})$. Most existing methods [Marfoq et al., 2021, Ghosh et al., 2020, Long et al., 2023] can be recovered using this conditional distribution (likelihood functions). * $\cP_{\mtheta_k} (\xx_{i,j}, y_{i,j})$. FedGMM [Wu et al., 2022] uses this joint probability. [In FedGMM [Wu et al., 2023], $\mtheta_k$ is split into $[\mtheta_{k_1}, \mnu_{k_2}]$, and it uses $\cL_k(\xx_{i,j}, y_{i,j}; \mtheta_k) = \cP_{\mtheta_{k_1}}(y_{i,j}|\xx_{i,j})\cP_{\mnu_{k_2}}(\xx_{i,j})$ to model the joint probability.] * $\frac{\cP_{\mtheta_k}(\xx_{i,j},y_{i,j})}{\cP_{\mtheta_k}(\xx_{i,j})\cP_{\mtheta_k}(y_{i,j})}$. FedRC [Guo et al., 2023] relies on correlations between the variables $\xx$ and $y$. Tier 2: Incorporating existing cluster weight calculations. Various methods employ distinct mechanisms for calculating the clustering weights $\omega_{i;k}$. The choice of $\omega_{i;k}$, with either binary values $\omega_{i;k} \in \{0, 1\}$ or continuous values $\omega_{i;k} \in [0, 1]$, characterizes the dynamic clustering procedure. * Hard clustering methods employ binary values $\omega_{i;k} \in \{0, 1\}$. In these methods, $\omega_{i;k}$ is determined using heuristic techniques, such as parameter distance [Long et al., 2023, Zeng et al., 2023, Sattler et al., 2020] or local loss function values [Ghosh et al., 2020]. * Soft clustering approaches permit $\omega_{i;k} \in [0, 1]$, determined by maximizing $\cL(\mTheta, \mOmega)$ [Marfoq et al., 2021, Guo et al., 2023, Wu et al., 2023], or by normalizing local loss values [Ruan & Joe-Wong, 2022]. Soft clustering methods do not assume separated clients' local distributions and can thus handle complex scenarios, such as mixture distributions [Marfoq et al., 2021, Wu et al., 2023]. §.§ Tiers 3 & 4: The Adaptive Clustering Procedure and Distance Metrics

Algorithm 1: Holistic algorithm framework of clustered FL.
Input: number of communication rounds $T$, initial number of clusters $K^{0}$, initial parameters $\mphi^{0}$ and $\mTheta^{0}$.
Output: number of clusters $K^{T}$, trained parameters $\mphi^{T}$ and $\mTheta^{T}$.
for $t = 0, \cdots, T-1$ do
    Sample a subset of clients $\cS^{t}$, and send $\mTheta^{t}$ to the clients.
    for each client $i$ in $\cS^{t}$ do
        Perform local updates by solving (<ref>).    (Tiers 1 and 2)
        Upload the local gradients $\gg_{i;k}^{t+1}$ to the server.
    end for
    $\mtheta_{k}^{t+1} = \mtheta_{k}^{t} - \eta_g \sum_{i \in \cS^{t}} \gg_{i;k}^{t+1}$, $\forall k$.
    Calculate the distance matrix $\mD^{t}$, and $\mD_k^{t}$ for each cluster $k$.    (Tier 4)
    if a cluster $k_{s}$ needs to be split then    (Tier 3)
        Split the clients in cluster $k_{s}$ into two sub-clusters $\cS_{s, 1}$ and $\cS_{s, 2}$ based on $\mD_{k_s}^{t}$ or $\mD^{t}$.
        $\mtheta_{k_s}^{t+1} = \mtheta_{k}^{t} - \eta_g \sum_{i \in \cS_{s, 1}} \gg_{i;k}^{t+1}$ and $\mtheta_{K^{t} + 1}^{t+1} = \mtheta_{k}^{t} - \eta_g \sum_{i \in \cS_{s, 2}} \gg_{i;k}^{t+1}$.
        Update $\omega_{i;k}$ for the corresponding clients.
    end if
    if a cluster $k_{d}$ needs to be deleted then    (Tier 3)
        Delete cluster $k_{d}$, and update $\omega_{i;k}$ for the corresponding clients.
    end if
    Update $K^{t+1}$ to the current number of clusters.
end for

Tiers 3 and 4 illustrate the techniques for Cluster Number Determination. In detail, the adaptive clustering procedures automatically adjust the number of clusters, while the distance metrics control the adaptive clustering procedures, determining whether clusters should split or merge. The framework allows for different techniques at each tier, enhancing flexibility in choosing the optimal adaptive clustering methods or converting methods that rely on fixed cluster numbers into adaptive ones. Tier 3: Adaptive clustering procedures demonstrate how to modify cluster numbers. To automatically determine the number of clusters, current approaches can be categorized into two orthogonal methods: (1) splitting clusters to increase the number of clusters [Sattler et al., 2020, Sattler et al., 2020]; (2) merging clusters to reduce the number of clusters [Zeng et al., 2023]. We unify these approaches at tier 3. Tier 4: Client distance metrics dictate when cluster numbers should be adjusted. Client distances are utilized to determine whether the current number of clusters should be adjusted. For instance, when the distance within a cluster is large, the cluster will divide into sub-clusters. Conversely, if the distances between two clusters are small, these two clusters should be merged. Existing clustering methods use various metrics such as the cosine similarity of local gradients [Sattler et al., 2020], gradients from a globally shared network [Zeng et al., 2023], and local feature norms [Wei & Huang, 2023]. Remaining challenges in clustered FL methods. We identify four key issues in clustered FL algorithms: (1) inconsistent intra-client clustering weights, (2) efficiency concerns, (3) the absence of adaptive clustering for soft clustering methods, and (4) the lack of fine-grained distance metrics for various clustering principles. The two clustering principles we consider differ as follows: one assigns clients with any shifts into different clusters, while the other assigns only clients with concept shifts to different clusters. § TACKLING REMAINING CHALLENGES IN CLUSTERED FL Section <ref> introduces a holistic clustering framework with four tiers to encompass existing methods. However, each tier still presents challenges that current methods cannot address. In this section, we outline four key remaining challenges in Figure <ref> and introduce an improved method to tackle them. Due to space constraints, we summarize the improved algorithm in Algorithm <ref>. §.§ Remaining Challenges of Clustering in FL In this subsection, we identify four remaining challenges of clustered FL. We categorize these challenges by tiers in the framework, as shown in Figure <ref>. The details are provided below. Challenges on tier 1: Inconsistent intra-client clustering weights and efficiency concerns. These challenges can be addressed by improving the clustering formulations. * Inconsistent intra-client clustering weights.
Existing approaches use the same clustering weights $\omega_{i;k}$ for all the samples belonging to client $i$ [Sattler et al., 2020, Ghosh et al., 2020, Marfoq et al., 2021, Guo et al., 2023]. However, they overlook cases where the optimal clustering weights of different samples within the same client can be inconsistent, implying that $\omega_{i,j_1;k} \neq \omega_{i,j_2;k}$ for certain samples $(\xx_{i,j_1}, y_{i, j_1})$ and $(\xx_{i,j_2}, y_{i, j_2})$. See our example here. [When client $i$'s local distribution is a mixture of two distributions, namely $(\xx_{i,j_1}, y_{i, j_1})$ sampled from the first distribution and $(\xx_{i,j_2}, y_{i, j_2})$ from the second distribution, the optimal clustering weights for $(\xx_{i,j_1}, y_{i, j_1})$ and $(\xx_{i,j_2}, y_{i, j_2})$ should be distinct.] * Efficiency. The current clustered FL methods [Marfoq et al., 2021, Long et al., 2023, Guo et al., 2023] require $K$-fold higher communication or computation costs, hindering overall algorithm efficiency during deployment. Challenges on tiers 2 & 3: The absence of adaptive clustering for soft clustering methods. Current adaptive clustering methods primarily address hard clustering [Sattler et al., 2020, Sattler et al., 2020, Zeng et al., 2023]. Hence, there exists a gap between research and practice, as there is a need to automatically determine the number of clusters for soft clustering methods [Marfoq et al., 2021, Guo et al., 2023]. Challenges on tier 4: Lack of fine-grained distance metrics for various clustering principles. The clustering principles determine which clients should be assigned to the same clusters. Existing clustering methods may use different clustering principles, as described by the two principles below: * (Any Shift Type Clustering Principle): clients with any distribution shifts are placed into separate clusters [Marfoq et al., 2021, Wu et al., 2023]. * (Concept Shift Only Clustering Principle): only clients with concept shifts are assigned to separate clusters [Guo et al., 2023]. As discussed in Section <ref>, client distances determine whether the current number of clusters should be changed, aligning with the role of clustering principles: if the current number of clusters cannot meet the requirements of the clustering principles, the cluster number should be adjusted. Consequently, we advocate for distance metrics to be closely tied to distribution shifts, ultimately aligning with clustering principles. Unfortunately, existing distance metrics, such as those based on local gradients or local model parameters [Sattler et al., 2020, Zeng et al., 2023, Long et al., 2023, Yan et al., 2023], cannot establish a clear link to distribution shifts. As a result, current methods struggle to satisfy diverse and detailed clustering principles. §.§ Improve Tier 1: Inconsistency- and Efficiency-Aware Objective Functions To address the tier 1 challenges, specifically (i) inconsistent intra-client clustering weights and (ii) efficiency, we propose an extension of the objective function (Eq. (<ref>)), which is defined as $\mathcal{L}(\boldsymbol{\phi}, \boldsymbol{\Theta}, \boldsymbol{\Omega}, \tilde{\boldsymbol{\Omega}})$ and includes the parameters $\boldsymbol{\phi}$, $\boldsymbol{\Theta}$, $\boldsymbol{\Omega}$, and $\tilde{\boldsymbol{\Omega}}$. * Globally shared feature extractor $\mphi$, and cluster-specific predictors $\mTheta = [\mtheta_1, \cdots, \mtheta_K]$.
Decoupling the feature extractor $\mphi$ from the predictors $\{ \mtheta_k \}$ reduces communication and computation costs, since the predictors are lightweight architectures such as linear classifier layers.

* Sample-wise clustering weights $\mOmega = [\omega_{1,1;1}, \cdots, \omega_{M, N_M; K}]$ for an enhanced training stage, and client-wise clustering weights $\tilde{\mOmega} = [\tilde{\omega}_{1;1}, \cdots, \tilde{\omega}_{M;K}]$ for the testing stage (experiments on the effectiveness of sample-wise clustering weights appear in Figures <ref> and <ref>). We employ sample-specific clustering weights ($\omega_{i,j;k}$) during training to ensure that data samples from the same client can contribute to different cluster models, resolving the issue of inconsistent intra-client clustering weights. During testing, when test-time label information is unavailable, we instead use client-specific weights ($\tilde{\omega}_{i;k}$) for each client and cluster.

* Enhanced optimization of $\omega_{i,j;k}$ by regularizing the distance between $\tilde{\omega}_{i;k}$ and $\omega_{i,j;k}$. Motivated by the intuition that "if data from the same client have similar distributions, the corresponding clustering weights should be similar", we encourage $\tilde{\omega}_{i;k}$ and $\omega_{i,j;k}$ to be close to each other.

The following objective function is designed to meet these requirements:
\begin{align}
\textstyle
& \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) = \underbrace{\frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \log \left( \sum_{k=1}^{K} \omega_{i,j;k} \cL_k(\xx_{i,j}, y_{ij}; \mphi, \mtheta_k) \right)}_{\cA_1} - \underbrace{\mu \sum_{i=1}^{M} \sum_{j=1}^{N_i} \left( \sum_{k=1}^{K} \tilde{\omega}_{i;k} \log \frac{\tilde{\omega}_{i;k}}{\omega_{i,j;k}} \right)}_{\cA_2} \label{equ:fedias-objective} \\
& \text{s.t.} \sum_{k=1}^{K} \omega_{i,j;k} = 1, \, \forall i, j \,, \sum_{k=1}^{K} \tilde{\omega}_{i;k} = 1, \, \forall i \,, \quad \tilde{\mOmega} = \argmin_{\tilde{\mOmega}} \left| \max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) - \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \right| \,, \label{equ:obtain-cluster-weights}
\end{align}
where the $\cA_1$ term extends (<ref>) by using the globally shared feature extractor $\mphi$ and the sample-wise weights $\omega_{i,j;k}$, and the $\cA_2$ term regularizes the difference between the sample-wise clustering weights $\omega_{i,j;k}$ and the client-wise clustering weights $\tilde{\omega}_{i;k}$. The hyper-parameter $\mu$ controls the strength of this regularization. We obtain $\tilde{\omega}_{i;k}$ by solving (<ref>), where we aim to minimize the impact of replacing $\omega_{i,j;k}$ with $\tilde{\omega}_{i;k}$.

Optimization of the proposed objective function. Different from the heuristic methods used in most studies to optimize (<ref>) (for example, IFCA [Ghosh et al., 2020] sets $\omega_{i,j;k_{i, \text{min}}} = 1, \forall j$, where $k_{i, \text{min}} = \argmin_k \E_{D_i} \left[f_{i;k}(\xx_{i,j}, y_{i,j}, \mphi, \mtheta_k)\right]$ and $f_{i;k}$ is the local loss function; FeSEM [Long et al., 2023] sets $k_{i, \text{min}} = \argmin_k \norm{\mtheta_k - \mtheta_i}_2$, where $\mtheta_k, \mtheta_i$ denote the model parameters of cluster $k$ and client $i$, respectively), we aim to introduce a more interpretable approach. We maximize the objective function (Eq. (<ref>)) to obtain the optimization steps: specifically, we update $\tilde{\omega}_{i;k}$, $\omega_{i,j;k}$, $\mtheta_k$, and $\mphi$ by (<ref>)–(<ref>).
\begin{align}
\textstyle
\gamma_{i,j;k}^{t+1} & = \frac{ \omega_{i,j;k}^{t} \cL_k(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_k^{t})}{\sum_{n=1}^{K} \omega_{i,j;n}^{t} \cL_n(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_n^{t})} \,, \quad \label{equ:uptate-gamma}
\tilde{\gamma}_{i,j;k}^{t+1} = \frac{ \tilde{\omega}_{i;k}^{t} \cL_k(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_k^{t})}{\sum_{n=1}^{K} \tilde{\omega}_{i;n}^{t} \cL_n(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_n^{t})} \,, \\
\tilde{\omega}_{i;k}^{t+1} & = \frac{1}{N_i} \sum_{j=1}^{N_i} \tilde{\gamma}_{i,j;k}^{t+1} \,, \quad \omega_{i,j;k}^{t+1} = \frac{\gamma_{i,j;k}^{t+1}}{1 + \mu N} + \frac{\mu N}{1 + \mu N} \tilde{\omega}_{i;k}^{t+1} = \tilde{\mu} \gamma_{i,j;k}^{t+1} + (1 - \tilde{\mu}) \tilde{\omega}_{i;k}^{t+1} \, , \label{equ:update-omega} \\
\mtheta_k^{t+1} & = \mtheta_k^{t} - \eta \sum_{i=1}^{M} \sum_{j=1}^{N_i} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t})} \nabla_{\mtheta_k} \cL_k (\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t}) \, , \label{equ:update-theta} \\
\mphi^{t+1} & = \mphi^{t} - \eta \sum_{i=1}^{M} \sum_{j=1}^{N_i} \sum_{k=1}^{K} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t})} \nabla_{\mphi} \cL_k (\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t+1}) \label{equ:update-mphi}\, ,
\end{align}
where $\gamma_{i,j;k}$ and $\tilde{\gamma}_{i,j;k}$ are intermediate results for calculating $\omega_{i,j;k}$ and $\tilde{\omega}_{i;k}$. More detailed proofs can be found in Appendix <ref>. $\tilde{\mu} = \frac{1}{1 + \mu N}$ serves as a hyperparameter to control the strength of the penalty term in Equation (<ref>).

Theoretical results for the linear representation learning case. We examine the convergence of a linear representation learning problem, extended from the settings of [Collins et al., 2021, Tziotis et al., 2022]. We assume that the clustering weights, denoted $\omega_{i,j;k}$, are obtained in each communication round. We assume local data $\xx_{i,j} \in \R^{d}$, and the globally shared feature extractor is parameterized by $\mB \in \R^{d \times c}$. For each underlying cluster $k$, we define $\mtheta_k \in \R^{c}$, and the labels for data $\xx_{i,j}$ belonging to cluster $k$ are given by $y_{i,j} = (\mtheta_k^{*})^{T}(\mB^{*})^{T} \xx_{i,j} + z_{k}$, where $z_{k} \sim \cN(0, \sigma^2)$ captures the heterogeneity across the $K$ underlying clusters. The global empirical risk is defined as the mean squared error
\begin{align}
\textstyle
\min_{\mB, \mTheta} \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \left( y_{i,j} - \sum_{k=1}^{K} \omega_{i,j;k} \mtheta_{k}^{T} \mB^{T} \xx_{i,j} \right)^{2} \,, \label{equ: linear objective function}
\end{align}
where $\mTheta = [\mtheta_1, \cdots, \mtheta_K]$. We can then derive the convergence of our method via the following theorem; detailed definitions, assumptions, proofs, and discussions are deferred to Appendix <ref>.

Under Assumptions <ref>–<ref>, when $N \ge \frac{K^2}{d+c}$ and $\min_{k} \hat{N}_k \ge \cC \frac{c^3(1 + \sigma^2)^4 \log(M)}{E_0^2} \min \left \{ \frac{1}{\kappa^2}, \bar{\sigma}_{\min}^2 \right \}$ for some constant $\cC$, we have
\begin{align}
\textstyle
\text{dist}(\hat{\mB}^{t+1} \!, \hat{\mB}^{*}) & \le \text{dist}(\hat{\mB}^{t} \!, \hat{\mB}^{*}) (1 \!-\! c_{\text{min}} \!+\! \frac{57}{200} c_{\text{max}} ) ( 1 \!-\! \frac{1}{2} c_{\text{max}} )^{-\nicefrac{1}{2}} \!+\! ( \frac{7}{100} c_{\text{max}} ) ( 1 \!-\! \frac{1}{2} c_{\text{max}} )^{- \nicefrac{1}{2}} \,,
\end{align}
with probability at least $1 \!-\! \exp(-90(d+c)) \!-\!
\exp(-90c^2\log(M))$. Here $\hat{N}_k = \sum_{i\!=\!1}^{M} \sum_{j\!=\!1}^{N_i} \omega_{i,j;k}$, $E_0 = 1 - \text{dist}^2(\hat{\mB}^{0}, \hat{\mB}^{*})$, $c_{\text{min}} = \eta K \frac{\min_k \hat{N}_k}{N} \bar{\sigma}_{\min, *}^2 E_0$, and $c_{\text{max}} = \eta K \frac{\max_k \hat{N}_k}{N} \bar{\sigma}_{\min, *}^2 E_0$.

§.§ Improve Tiers 2 & 3: Adaptive Clustering for Soft Clustering Paradigms

Given the limitations of existing adaptive clustering methods, we extend the clustering weight update mechanisms to incorporate soft clustering and verify their effectiveness in Figures <ref> and <ref>. The overall process is summarized in Algorithms <ref> and <ref>. In Algorithm <ref>, the clustering weights are adjusted after splitting cluster $k$ into two sub-clusters, denoted $k_1$ and $k_2$: we set $\omega_{i,j;k_1} = \omega_{i,j;k_2} = \omega_{i,j;k} / 2$ for all $i$ and $j$. In Algorithm <ref>, the clustering weights are updated when removing cluster $k$: for all $k^{'} \neq k$, we modify $\omega_{i,j;k^{'}}$ as $\omega_{i,j;k^{'}} = \frac{\omega_{i,j;k^{'}}}{\sum_{n \neq k} \omega_{i,j;n}}$.

We use the hyperparameter $\rho$ to control cluster splitting. As evidenced in Table <ref>, a higher $\rho$ results in fewer clusters, signifying enhanced generalization but reduced personalization. In detail, cluster $k$ splits if the following condition is met:
\begin{align}
\textstyle
\max (\mD_k) - \operatorname{mean} (\mD_k) \ge \rho \,,
\end{align}
where $\mD_k$ is the distance matrix of cluster $k$. We identify the need for cluster removal when a cluster no longer receives the highest clustering weight from any client. Additional details about the enhanced adaptive process can be found in Algorithm <ref>.

§.§ Improve Tier 4: Fine-Grained Distance Metric Design

Due to page limitations, most details of the method design and practical implementation are included in Appendix <ref>. As discussed in Section <ref>, various algorithms may group clients into different clusters based on different clustering principles. Therefore, in this section, we design the following fine-grained distance metrics for these different clustering principles:
\begin{align}
\textstyle
\mD_{i,j}^{k}
\begin{split}
= \left \{
\begin{array}{ll}
\max \left \{ d_c, d_{lf} \right \} \E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \label{equ:theory-distance} \, , & \text{Principle A} \, , \\
\E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \, , & \text{Principle B} \, ,
\end{array}
\right.
\end{split}
\end{align}
where $\text{dist}$ denotes cosine similarity, $d_c \!=\! \max_{y} \left\{ \text{dist} \left( \E_{D_i} \left[\cP(\zz | \xx, y;\mphi) \right], \E_{D_j} \left[\cP(\zz| \xx, y; \mphi) \right] \right) \right\}$, and $d_{lf} \!=\! \text{dist} \left( \E_{D_i} \left[\cP(\zz | \xx;\mphi) \right], \E_{D_j} \left[\cP(\zz| \xx; \mphi) \right] \right)$. The distances above become large only when the following conditions occur together: (1) large values of $d_c$ indicate concept shifts between clients $i$ and $j$; (2) large values of $d_{lf}$ indicate significant feature and label distribution differences; and (3) large values of $\E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right]$ indicate incorrect clustering weights with high confidence. The effectiveness of the above distance metric design is evidenced in Table <ref>.
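To make the update and adaptation rules of the two subsections above concrete, the following is a minimal NumPy sketch of the per-client clustering-weight updates (Eqs. (<ref>)–(<ref>)) and the split/removal logic. It is an illustration under simplifying assumptions, not the full implementation: the per-sample likelihood values $\cL_k$ are taken as given, the function names are ours, and server–client communication is omitted.

import numpy as np

def update_clustering_weights(lik, omega, omega_client, mu_tilde):
    """One round of the clustering-weight updates for a single client.

    lik:          (N_i, K) per-sample likelihood values L_k(x_ij, y_ij; phi, theta_k)
    omega:        (N_i, K) current sample-wise weights omega_{i,j;k}
    omega_client: (K,)     current client-wise weights tilde{omega}_{i;k}
    mu_tilde:     scalar in [0, 1]; 1 keeps purely sample-wise responsibilities,
                  0 forces every sample onto the shared client-wise weights.
    """
    # Sample-wise responsibilities gamma_{i,j;k} (Eq. update-gamma).
    gamma = omega * lik
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Client-wise responsibilities tilde{gamma}, computed from tilde{omega}.
    gamma_c = omega_client[None, :] * lik
    gamma_c /= gamma_c.sum(axis=1, keepdims=True)
    # New client-wise weights: average of tilde{gamma} over local samples.
    omega_client_new = gamma_c.mean(axis=0)
    # New sample-wise weights: convex combination (Eq. update-omega).
    omega_new = mu_tilde * gamma + (1.0 - mu_tilde) * omega_client_new[None, :]
    return omega_new, omega_client_new

def should_split(D_k, rho):
    """Tier-3/4 split rule: split cluster k iff max(D_k) - mean(D_k) >= rho,
    where D_k is the pairwise client distance matrix of cluster k."""
    return D_k.max() - D_k.mean() >= rho

def split_cluster(omega, k):
    """Soft-clustering weight redistribution after splitting cluster k:
    both sub-clusters inherit half of the old weight."""
    omega[:, k] /= 2.0
    return np.concatenate([omega, omega[:, [k]]], axis=1)

def remove_cluster(omega, k):
    """Weight redistribution after removing cluster k: drop the column
    and renormalize over the surviving clusters."""
    omega = np.delete(omega, k, axis=1)
    return omega / omega.sum(axis=1, keepdims=True)

In this sketch, $\tilde{\mu}$ appears directly as the convex-combination coefficient of Eq. (<ref>): setting it to 1 recovers purely sample-wise responsibilities, while setting it to 0 collapses all of a client's samples onto one shared weight vector.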
§ NUMERICAL RESULTS

In this section, we evaluate the performance of our method and other clustered FL methods. Additional experiment results, including hyper-parameter ablation studies, different model architectures, and additional scenarios, can be found in Appendix <ref>.

Table: Performance of the adaptive clustering methods on the CIFAR10, CIFAR100, and Tiny-ImageNet datasets. For each algorithm, we present the best validation (Val) and test accuracies. For clustering methods that require a fixed number of clusters, we set $K = 3$. The hyperparameters $\text{tol}_1$, $\text{tol}_2$, $\alpha^{*}(0)$, $\tau$, and $\rho$ of the adaptive clustering methods govern the balance between personalization and generalization, as well as the cluster number; for instance, a lower $\tau$ in StoCFL or a lower $\rho$ in our method indicates improved personalization and reduced generalization. $K^{T}$ denotes the cluster number in the final training round, where a larger $K^{T}$ suggests enhanced personalization and reduced generalization. We emphasize the best results in bold and the worst results in blue. Each cell reports Val / Test / $K^{T}$.

Algorithm | CIFAR10, $\beta = 0.2$ | CIFAR100, $\beta = 0.2$ | Tiny-ImageNet, $\beta = 0.2$
FedAvg | $49.19 \pm 2.15$ / $45.42 \pm 2.42$ / 1 | $\textcolor{blue}{26.01} \pm 1.15$ / $27.87 \pm 2.12$ / 1 | $38.83 \pm 0.20$ / $39.07 \pm 0.44$ / 1
FeSEM | $45.30 \pm 0.40$ / $29.01 \pm 0.79$ / 3 | $26.37 \pm 0.64$ / $24.50 \pm 0.28$ / 3 | $37.10 \pm 0.80$ / $30.00 \pm 2.02$ / 3
IFCA | $\textcolor{blue}{34.46} \pm 2.06$ / $\textcolor{blue}{23.18} \pm 2.55$ / 3 | $26.99 \pm 3.89$ / $26.20 \pm 1.56$ / 3 | $38.52 \pm 0.30$ / $29.92 \pm 0.30$ / 3
FedEM | $66.49 \pm 0.69$ / $53.64 \pm 1.61$ / 3 | $29.75 \pm 0.47$ / $24.18 \pm 0.03$ / 3 | $42.00 \pm 0.74$ / $39.25 \pm 0.31$ / 3
FedRC | $63.65 \pm 2.95$ / $59.41 \pm 0.19$ / 3 | $34.56 \pm 0.79$ / $\mathbf{37.62} \pm 0.16$ / 3 | $38.93 \pm 0.18$ / $39.73 \pm 0.04$ / 3
CFL, $\text{tol}_1 =0.4, \text{tol}_2 =1.6$ | $61.55 \pm 1.74$ / $46.88 \pm 0.35$ / 6 | $35.05 \pm 0.35$ / $24.84 \pm 2.50$ / 4 | $37.41 \pm 1.87$ / $30.25 \pm 0.55$ / 3
CFL, $\text{tol}_1 =0.4, \text{tol}_2 =0.8$ | $65.06 \pm 3.34$ / $45.74 \pm 4.01$ / 9 | $36.98 \pm 3.37$ / $22.00 \pm 1.88$ / 5 | $40.36 \pm 3.55$ / $28.82 \pm 0.71$ / 4
CFL, $\text{tol}_1 =0.2, \text{tol}_2 =0.8$ | $58.92 \pm 2.09$ / $55.02 \pm 0.97$ / 4 | $37.73 \pm 7.68$ / $31.47 \pm 0.09$ / 3 | $35.74 \pm 0.57$ / $34.41 \pm 1.92$ / 1
ICFL, $\alpha^{*} (0) =0.85$ | $77.59 \pm 0.04$ / $57.38 \pm 1.91$ / 98 | $52.73 \pm 1.03$ / $32.77 \pm 0.28$ / 100 | $64.72 \pm 0.30$ / $34.73 \pm 0.39$ / 87
ICFL, $\alpha^{*} (0) =0.98$ | $60.58 \pm 1.07$ / $61.18 \pm 0.78$ / 14 | $41.49 \pm 4.11$ / $33.57 \pm 1.56$ / 40 | $53.05 \pm 2.57$ / $35.09 \pm 0.25$ / 42
StoCFL, $\tau=0.05$ | $59.79 \pm 1.34$ / $57.35 \pm 0.92$ / 15 | $29.97 \pm 0.47$ / $31.40 \pm 2.16$ / 4 | $31.85 \pm 0.08$ / $31.39 \pm 0.87$ / 1
StoCFL, $\tau=0.10$ | $70.84 \pm 1.58$ / $51.72 \pm 0.07$ / 54 | $\mathbf{69.76} \pm 2.57$ / $\textcolor{blue}{9.42} \pm 0.07$ / 89 | $\mathbf{67.48} \pm 1.53$ / $\textcolor{blue}{13.03} \pm 0.67$ / 91
Ours (FeSEM), $\rho=0.05$ | $\mathbf{87.77} \pm 1.11$ / $41.85 \pm 4.11$ / 58 | $\mathbf{69.25} \pm 0.69$ / $14.24 \pm 1.93$ / 67 | $60.44 \pm 0.86$ / $23.14 \pm 1.46$ / 32
Ours (FeSEM), $\rho=0.1$ | $85.08 \pm 0.11$ / $43.34 \pm 0.94$ / 44 | $62.32 \pm 0.23$ / $16.67 \pm 2.97$ / 38 | $52.18 \pm 2.90$ / $32.97 \pm 1.27$ / 14
Ours (FeSEM), $\rho=0.3$ | $79.31 \pm 3.95$ / $47.62 \pm 2.90$ / 17 | $44.49 \pm 1.57$ / $28.03 \pm 0.85$ / 8 | $45.76 \pm 0.09$ / $36.08 \pm 1.25$ / 4
Ours (FedEM), $\rho=0.05$ | $82.45 \pm 0.13$ / $57.73 \pm 1.70$ / 22 | $60.36 \pm 1.47$ / $22.95 \pm 1.44$ / 40 | $63.41 \pm 0.05$ / $34.24 \pm 0.33$ / 33
Ours (FedEM), $\rho=0.1$ | $84.64 \pm 1.47$ / $60.90 \pm 0.61$ / 16 | $62.98 \pm 0.42$ / $26.17 \pm 1.22$ / 34 | $59.88 \pm 0.11$ / $37.17 \pm 0.37$ / 20
Ours (FedEM), $\rho=0.3$ | $83.67 \pm 0.72$ / $62.43 \pm 0.71$ / 10 | $50.72 \pm 2.97$ / $32.13 \pm 0.18$ / 9 | $45.53 \pm 0.53$ / $38.64 \pm 0.23$ / 3
Ours (FedRC), $\rho=0.05$ | $69.16 \pm 0.65$ / $67.37 \pm 0.42$ / 8 | $39.20 \pm 0.31$ / $34.38 \pm 0.64$ / 11 | $43.78 \pm 0.31$ / $38.75 \pm 0.54$ / 10
Ours (FedRC), $\rho=0.1$ | $71.67 \pm 0.83$ / $68.64 \pm 0.76$ / 8 | $39.56 \pm 0.14$ / $34.62 \pm 0.78$ / 8 | $44.26 \pm 0.10$ / $38.82 \pm 0.77$ / 6
Ours (FedRC), $\rho=0.3$ | $69.33 \pm 0.24$ / $\mathbf{69.67} \pm 1.27$ / 3 | $39.97 \pm 0.21$ / $\mathbf{36.50} \pm 0.28$ / 4 | $42.60 \pm 0.21$ / $\mathbf{40.65} \pm 0.36$ / 3

§.§ Datasets and Experiment Settings

Diverse distribution shift scenarios. We establish clients with three types of distribution shifts. For label distribution shifts, we employ LDA with $\alpha = 1.0$, as introduced by Yoshida et al., 2019, Hsu et al., 2019, Reddi et al., 2021. For feature distribution shifts, we adopt the methodology of the CIFAR10-C and CIFAR100-C constructions [Hendrycks & Dietterich, 2019]. Regarding concept shifts, we draw inspiration from Guo et al., 2023, Jothimurugesan et al., 2023, and selectively swap labels based on the parameter $\beta$; for example, with $\beta = 0.1$ for CIFAR10, two labels per concept are swapped, while the remaining eight labels remain unchanged. By default, we create three concepts in the experiments. More details about the construction of the scenarios are included in Appendix <ref>.

We use FedAvg [McMahan et al., 2016] as a single-model FL example. We consider the most recently published clustered FL methods as our baselines. For clustered FL with a fixed cluster number, we select IFCA [Ghosh et al., 2020], FedEM [Marfoq et al., 2021], FeSEM [Long et al., 2023], and FedRC [Guo et al., 2023]. For adaptive clustering FL methods, we choose CFL [Sattler et al., 2020a], ICFL [Yan et al., 2023], and StoCFL [Zeng et al., 2023].

Experiment settings. Unless specifically mentioned, we divide the datasets into 100 clients and execute all algorithms for 200 communication rounds. Additional settings are provided in Appendix <ref>. We conducted all experiments using MobileNet-V2 [Sandler et al., 2018]; results on ResNet18 are deferred to Table <ref> of Appendix <ref>.

Evaluation metrics. We present the following metrics to evaluate the personalization and generalization abilities of the algorithms: (1) validation accuracy for evaluating personalization: the average accuracy on local validation datasets that match the distribution of the local training sets; (2) test accuracy for evaluating generalization: the average accuracy on globally shared test datasets.

§.§ Results on Diverse Distribution Shift Scenarios

In this section, we compare the performance of our method with other clustered FL methods.
We also perform ablation studies to confirm the effectiveness of the proposed components.

Our method achieves better personalization–generalization trade-offs and comparable performance. We highlight some key observations from Table <ref>. A. Our method consistently achieves superior test accuracy, with validation accuracy surpassing that of baseline methods using a similar number of clusters, demonstrating improved efficiency and a better balance between personalization and generalization. B. Soft clustering methods such as FedEM and FedRC outperform hard clustering methods in test accuracy, showcasing their superior generalization capabilities. C. While baseline methods may achieve higher validation accuracy by separating every client into a different cluster (namely, when the value of $K^{T}$ is close to 100), the resulting clusters tend to overfit local distributions, leading to significantly lower test accuracy. D. The extended algorithms, namely Ours (FeSEM), Ours (FedEM), and Ours (FedRC), significantly outperform the original methods that rely on fixed cluster numbers. Additionally, these extended algorithms can automatically adjust the number of clusters, making them more practical, as illustrated in Figure <ref>.

Ablation studies on Sec <ref>. We perform ablation studies on $\tilde{\mu}$, which controls the distance between the sample-wise weights $\omega_{i,j;k}$ and the client-wise weights $\tilde{\omega}_{i;k}$, in Figures <ref> and <ref>. A larger $\tilde{\mu}$ signifies a greater difference between $\omega_{i,j;k}$ and $\tilde{\omega}_{i;k}$. Our results show that Ours (FedEM) prefers a smaller distance between $\omega_{i,j;k}$ and $\tilde{\omega}_{i;k}$, whereas Ours (FedRC) prefers larger $\tilde{\mu}$ values, highlighting the necessity of allowing different clustering weights among samples within the same client.

Number of clusters over communication rounds. We illustrate the changes in cluster numbers across communication rounds for various $\rho$ values using the CIFAR-10 dataset.

Ablation studies on Sec <ref>. In Figures <ref> and <ref>, we perform ablation studies on the soft clustering weight updating mechanism (w/ SCWU) introduced in Section <ref>. The term w/o SCWU refers to using the traditional clustering weight updating mechanism described in Sattler et al., 2020a, Zeng et al., 2023. The results demonstrate that our proposed SCWU consistently achieves better performance in terms of both validation and test accuracies.

Figure: Ablation studies on Sections <ref> and <ref> (panels: Validation Acc, Test Acc). For Sec <ref>, we evaluated the test accuracies of our method using different backbones (FedEM and FedRC) and varying values of $\tilde{\mu}$, as shown in Figures <ref> and <ref>. For Sec <ref>, we present the best validation and test accuracy achieved by our method with either FedEM or FedRC as the backbone. "w/ SCWU" indicates the use of the soft clustering weight updating mechanism introduced in Section <ref>. More detailed results can be found in Tables <ref> and <ref> in Appendix <ref>.

Ablation studies on techniques in Sec <ref>. We perform ablation studies to demonstrate the effectiveness of the designed distance metrics in Sec <ref>. The ablation studies include: (1) using gradient similarity, as in previous works [Sattler et al., 2020a, Yan et al., 2023], instead of distances on $\cP(\zz|x;\mphi)$ and $\cP(\zz|x, y;\mphi)$ as we proposed in Equation <ref>; (2) removing $\E_{D_i}[\tilde{\cL}_k(\zz,y;\mtheta_k)]$ and $\E_{D_j}[\tilde{\cL}_k(\zz,y;\mtheta_k)]$ in (<ref>); and (3) using mean distances instead of maximum distances in (<ref>).
The results show that our method consistently achieves the highest test accuracy and produces a number of clusters closer to the ideal number than the ablated variants.

Ablation studies on Sec <ref>. We conducted experiments on the CIFAR10 and CIFAR100 datasets, showcasing the highest test accuracies, the maximum number of clusters during training ($\max_{t} K^{t}$), and the final number of clusters ($K^{T}$) for each algorithm, while maintaining a fixed value of $\rho = 0.3$. We used FedRC as the backbone, with 3 clusters identified as the ideal number. Each cell reports Test Acc / $\max_{t} K^{t}$ / $K^{T}$.

Algorithm | CIFAR10, $\beta = 0.2$ | CIFAR10, $\beta = 0.4$ | CIFAR100, $\beta = 0.2$ | CIFAR100, $\beta = 0.4$
Ours | $\mathbf{69.67} \pm 1.27$ / $\mathbf{4.5}$ / $\mathbf{3.0}$ | $\mathbf{70.13} \pm 0.42$ / $\mathbf{7.0}$ / $\mathbf{6.0}$ | $\mathbf{36.50} \pm 0.28$ / $\mathbf{3.5}$ / $\mathbf{3.5}$ | $\mathbf{32.22} \pm 0.20$ / $5.0$ / $\mathbf{4.0}$
+ (1) | $67.83 \pm 1.70$ / $9.5$ / $7.0$ | $64.53 \pm 0.23$ / $10.5$ / $10.0$ | $\mathbf{36.77} \pm 0.67$ / $9.5$ / $8.5$ | $31.33 \pm 2.12$ / $11.0$ / $7.5$
+ (1) + (2) | $56.14 \pm 8.11$ / $10.5$ / $5.5$ | $50.87 \pm 2.26$ / $12.5$ / $8.5$ | $34.11 \pm 1.58$ / $10.0$ / $6.0$ | $\mathbf{32.75} \pm 0.67$ / $7.5$ / $6.5$
+ (2) | $68.52 \pm 0.64$ / $8.0$ / $6.0$ | $69.47 \pm 0.15$ / $11.0$ / $8.5$ | $34.65 \pm 1.16$ / $8.5$ / $5.5$ | $31.61 \pm 0.54$ / $11.0$ / $7.5$
+ (3) | $68.82 \pm 0.59$ / $5.5$ / $3.5$ | $65.74 \pm 0.09$ / $\mathbf{7.0}$ / $7.0$ | $35.97 \pm 0.80$ / $4.0$ / $\mathbf{3.5}$ | $31.72 \pm 0.59$ / $\mathbf{4.5}$ / $\mathbf{4.0}$

§ CONCLUSION

In this paper, we introduce a comprehensive clustered FL framework that unifies existing methods while enabling the integration of diverse algorithms, gathering the advantages of various clustered FL approaches. Additionally, we identify persistent challenges unaddressed by current algorithms and propose our method as a solution. The framework is flexible and can generate numerous clustered FL methods by altering the techniques at each tier. Though we have chosen some typical components and demonstrated their effectiveness, conducting further performance verification with more choices in each tier would be beneficial.

[Bao et al., 2023] Bao, W., Wang, H., Wu, J., and He, J. Optimizing the collaboration structure in cross-silo federated learning. In International Conference on Machine Learning. PMLR, 2023.
[Briggs et al., 2020] Briggs, C., Fan, Z., and Andras, P. Federated learning with hierarchical clustering of local updates to improve training on non-iid data. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. IEEE, 2020.
[Collins et al., 2021] Collins, L., Hassani, H., Mokhtari, A., and Shakkottai, S. Exploiting shared representations for personalized federated learning. In International Conference on Machine Learning, pp. 2089–2099. PMLR, 2021.
[Duan et al., 2021a] Duan, M., Liu, D., Ji, X., Liu, R., Liang, L., Chen, X., and Tan, Y. FedGroup: Efficient federated learning via decomposed similarity-based clustering. In 2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), pp. 228–237. IEEE, 2021a.
[Duan et al., 2021b] Duan, M., Liu, D., Ji, X., Wu, Y., Liang, L., Chen, X., Tan, Y., and Ren, A. Flexible clustered federated learning for client-level data distribution shift.
IEEE Transactions on Parallel and Distributed Systems, 33(11):2661–2674, 2021b.
[Fang & Ye, 2022] Fang, X. and Ye, M. Robust federated learning with noisy and heterogeneous clients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10072–10081, 2022.
[Gan et al., 2021] Gan, S., Mathur, A., Isopoussu, A., Kawsar, F., Berthouze, N., and Lane, N. FRuDA: Framework for distributed adversarial domain adaptation. IEEE Transactions on Parallel and Distributed Systems, 2021.
[Ghosh et al., 2020] Ghosh, A., Chung, J., Yin, D., and Ramchandran, K. An efficient framework for clustered federated learning. Advances in Neural Information Processing Systems, 33:19586–19597, 2020.
[Guo et al., 2021] Guo, Y., Lin, T., and Tang, X. Towards federated learning on time-evolving heterogeneous data. arXiv preprint arXiv:2112.13246, 2021.
[Guo et al., 2023a] Guo, Y., Tang, X., and Lin, T. FedBR: Improving federated learning on heterogeneous data via local learning bias reduction. In International Conference on Machine Learning. PMLR, 2023a.
[Guo et al., 2023b] Guo, Y., Tang, X., and Lin, T. FedRC: Tackling diverse distribution shifts challenge in federated learning by robust clustering. arXiv preprint arXiv:2301.12379, 2023b.
[Hendrycks & Dietterich, 2019] Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.
[Hsu et al., 2019] Hsu, T.-M. H., Qi, H., and Brown, M. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.
[Jain et al., 2013] Jain, P., Netrapalli, P., and Sanghavi, S. Low-rank matrix completion using alternating minimization. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pp. 665–674, 2013.
[Jiang & Lin, 2023] Jiang, L. and Lin, T. Test-time robust personalization for federated learning. In International Conference on Learning Representations, 2023.
[Jothimurugesan et al., 2023] Jothimurugesan, E., Hsieh, K., Wang, J., Joshi, G., and Gibbons, P. B. Federated learning under distributed concept drift. In International Conference on Artificial Intelligence and Statistics, pp. 5834–5853. PMLR, 2023.
[Karimireddy et al., 2019] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S. J., Stich, S. U., and Suresh, A. T. SCAFFOLD: Stochastic controlled averaging for federated learning, 2019. URL <https://arxiv.org/abs/1910.06378>.
[Karimireddy et al., 2020] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A. T. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132–5143. PMLR, 2020.
[Ke et al., 2022] Ke, S., Huang, C., and Liu, X. Quantifying the impact of label noise on federated learning. arXiv preprint arXiv:2211.07816, 2022.
[Li et al., 2021] Li, Q., He, B., and Song, D. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
[Li et al., 2018] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127, 2018.
[Lin et al., 2020] Lin, T., Stich, S. U., Patel, K. K., and Jaggi, M. Don't use large mini-batches, use local SGD. In International Conference on Learning Representations, 2020. URL <https://openreview.net/forum?id=B1eyO1BFPr>.
[Long et al., 2023] Long, G., Xie, M., Shen, T., Zhou, T., Wang, X., and Jiang, J. Multi-center federated learning: clients clustering for better personalization. World Wide Web, 26(1):481–500, 2023.
[Marfoq et al., 2021] Marfoq, O., Neglia, G., Bellet, A., Kameni, L., and Vidal, R. Federated multi-task learning under a mixture of distributions. Advances in Neural Information Processing Systems, 34:15434–15447, 2021.
[McMahan et al., 2016] McMahan, H. B., Moore, E., Ramage, D., Hampson, S., and Arcas, B. A. y. Communication-efficient learning of deep networks from decentralized data, 2016. URL <https://arxiv.org/abs/1602.05629>.
[Peng et al., 2019] Peng, X., Huang, Z., Zhu, Y., and Saenko, K. Federated adversarial domain adaptation, 2019. URL <https://arxiv.org/abs/1911.02054>.
[Reddi et al., 2021] Reddi, S. J., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S., and McMahan, H. B. Adaptive federated optimization. In International Conference on Learning Representations, 2021. URL <https://openreview.net/forum?id=LkFG3lB13U5>.
[Ruan & Joe-Wong, 2022] Ruan, Y. and Joe-Wong, C. FedSoft: Soft clustered federated learning with proximal local updating. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8124–8131, 2022.
[Sandler et al., 2018] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4510–4520, 2018.
[Sattler et al., 2020a] Sattler, F., Müller, K.-R., and Samek, W. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. IEEE Transactions on Neural Networks and Learning Systems, 32(8):3710–3722, 2020a.
[Sattler et al., 2020b] Sattler, F., Müller, K.-R., Wiegand, T., and Samek, W. On the byzantine robustness of clustered federated learning. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8861–8865. IEEE, 2020b.
[Shen et al., 2021] Shen, Y., Du, J., Zhao, H., Zhang, B., Ji, Z., and Gao, M. FedMM: Saddle point optimization for federated adversarial domain adaptation. arXiv preprint arXiv:2110.08477, 2021.
[Stallmann & Wilbik, 2022] Stallmann, M. and Wilbik, A. Towards federated clustering: A federated fuzzy $c$-means algorithm (FFCM). arXiv preprint arXiv:2201.07316, 2022.
[Sun et al., 2022] Sun, Y., Chong, N., and Hideya, O. Multi-source domain adaptation based on federated knowledge alignment. arXiv preprint arXiv:2203.11635, 2022.
[Tang et al., 2022] Tang, Z., Zhang, Y., Shi, S., He, X., Han, B., and Chu, X. Virtual homogeneity learning: Defending against data heterogeneity in federated learning. In International Conference on Machine Learning, pp. 21111–21132, 2022.
[Tziotis et al., 2022] Tziotis, I., Shen, Z., Pedarsani, R., Hassani, H., and Mokhtari, A. Straggler-resilient personalized federated learning. arXiv preprint arXiv:2206.02078, 2022.
[Vahidian et al., 2023] Vahidian, S., Morafah, M., Wang, W., Kungurtsev, V., Chen, C., Shah, M., and Lin, B. Efficient distribution similarity identification in clustered federated learning via principal angles between client data subspaces. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 10043–10052, 2023.
[Vershynin, 2018] Vershynin, R. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018.
[Wang et al., 2022a] Wang, B., Li, G., Wu, C., Zhang, W., Zhou, J., and Wei, Y. A framework for self-supervised federated domain adaptation. EURASIP Journal on Wireless Communications and Networking, 2022(1):1–17, 2022a.
[Wang et al., 2020] Wang, H., Yurochkin, M., Sun, Y., Papailiopoulos, D., and Khazaeni, Y. Federated learning with matched averaging. arXiv preprint arXiv:2002.06440, 2020.
[Wang et al., 2022b] Wang, Z., Xu, H., Liu, J., Xu, Y., Huang, H., and Zhao, Y. Accelerating federated learning with cluster construction and hierarchical aggregation. IEEE Transactions on Mobile Computing, 2022b.
[Wei & Huang, 2023] Wei, X.-X. and Huang, H. Edge devices clustering for federated visual classification: A feature norm based framework. IEEE Transactions on Image Processing, 32:995–1010, 2023.
[Wu et al., 2022] Wu, S., Li, T., Charles, Z., Xiao, Y., Liu, Z., Xu, Z., and Smith, V. Motley: Benchmarking heterogeneity and personalization in federated learning. arXiv preprint arXiv:2206.09262, 2022.
[Wu et al., 2023] Wu, Y., Zhang, S., Yu, W., Liu, Y., Gu, Q., Zhou, D., Chen, H., and Cheng, W. Personalized federated learning under a mixture of distributions. In International Conference on Machine Learning. PMLR, 2023.
[Yan et al., 2023] Yan, Y., Tong, X., and Wang, S. Clustered federated learning in heterogeneous environment. IEEE Transactions on Neural Networks and Learning Systems, 2023.
[Yoshida et al., 2019] Yoshida, N., Nishio, T., Morikura, M., Yamamoto, K., and Yonetani, R. Hybrid-FL: Cooperative learning mechanism using non-iid data in wireless networks. arXiv preprint arXiv:1905.07210, 2019.
[Zeng et al., 2023] Zeng, D., Hu, X., Liu, S., Yu, Y., Wang, Q., and Xu, Z. Stochastic clustered federated learning. arXiv preprint arXiv:2303.00897, 2023.
[Zhao et al., 2020] Zhao, F., Huang, Y., Sai, A. M. V. V., and Wu, Y. A cluster-based solution to achieve fairness in federated learning. In 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), pp. 875–882. IEEE, 2020.

§ PROOF OF OPTIMIZATION STEPS

Given the objective function $\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega})$,
\begin{align}
\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) & = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \log \left( \sum_{k=1}^{K} \omega_{i,j;k} \cL_k(\xx_{i,j}, y_{ij}; \mphi, \mtheta_k) \right) \nonumber \\
& + \sum_{i=1}^{M} \sum_{j=1}^{N_i} \lambda_{i,j} \left( \sum_{k=1}^{K} \omega_{i,j;k} - 1 \right) \nonumber \\
& - \mu \sum_{i=1}^{M} \sum_{j=1}^{N_i} \left( \sum_{k=1}^{K} \tilde{\omega}_{i;k} \log \frac{\tilde{\omega}_{i;k}}{\omega_{i,j;k}} \right) \, ,
\end{align}
and defining $\tilde{\mOmega} = \{ \tilde{\omega}_{i;k} \mid \forall i, k \}$, $\tilde{\mOmega}$ is obtained by
\begin{align}
\tilde{\mOmega} & = \argmin_{\tilde{\mOmega}} \left| \max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) - \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \right| \, .
\end{align}
The E-M steps are then obtained by maximizing $\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega})$.
\begin{align}
\gamma_{i,j;k}^{t+1} & = \frac{ \omega_{i,j;k}^{t} \cL_k(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_k^{t})}{\sum_{n=1}^{K} \omega_{i,j;n}^{t} \cL_n(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_n^{t})} \, , \\
\tilde{\gamma}_{i,j;k}^{t+1} & = \frac{ \tilde{\omega}_{i;k}^{t} \cL_k(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_k^{t})}{\sum_{n=1}^{K} \tilde{\omega}_{i;n}^{t} \cL_n(\xx_{i,j}, y_{ij}; \mphi^{t}, \mtheta_n^{t})} \, , \\
\tilde{\omega}_{i;k}^{t+1} & = \frac{1}{N_i} \sum_{j=1}^{N_i} \tilde{\gamma}_{i,j;k}^{t+1} \, , \\
\omega_{i,j;k}^{t+1} & = \frac{\gamma_{i,j;k}^{t+1}}{1 + \mu N} + \frac{\mu N}{1 + \mu N} \tilde{\omega}_{i;k}^{t+1} \, , \\
\mtheta_k^{t+1} & = \mtheta_k^{t} - \eta \sum_{i=1}^{M} \sum_{j=1}^{N_i} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t})} \nabla_{\mtheta_k} \cL_k (\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t}) \, , \\
\mphi^{t+1} & = \mphi^{t} - \eta \sum_{i=1}^{M} \sum_{j=1}^{N_i} \sum_{k=1}^{K} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t})} \nabla_{\mphi} \cL_k (\xx_{ij}, y_{ij}, \mphi^{t}, \mtheta_k^{t+1})
\end{align}
Let us begin by assuming $\tilde{\omega}_{i;k}^{t+1}$ is given for each round $t$; we then discuss how to compute $\omega_{i,j;k}^{t+1}$. Considering the objective function $\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega})$, we have
\begin{align}
\derive{\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega})}{\omega_{i,j;k}} = \frac{1}{N} \frac{\cL_k(\xx_{i,j}, y_{ij}; \mphi, \mtheta_k)}{\sum_{n=1}^{K} \omega_{i,j;n} \cL_n(\xx_{i,j}, y_{ij}; \mphi, \mtheta_n)} + \lambda_{i,j} + \mu \frac{\tilde{\omega}_{i;k}}{\omega_{i,j;k}} \, .
\end{align}
Defining
\begin{align}
\gamma_{i,j;k} = \frac{\omega_{i,j;k} \cL_k(\xx_{i,j}, y_{ij}; \mphi, \mtheta_k)}{\sum_{n=1}^{K} \omega_{i,j;n} \cL_n(\xx_{i,j}, y_{ij}; \mphi, \mtheta_n)} \,
\end{align}
and setting $\derive{\cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega})}{\omega_{i,j;k}} = 0$, we have
\begin{align}
\frac{\gamma_{i,j;k}}{N} + \mu \tilde{\omega}_{i;k} = - \lambda_{i,j} \omega_{i,j;k} \, ,
\end{align}
and hence
\begin{align}
\omega_{i,j;k} = - \frac{1}{\lambda_{i,j}} \left( \frac{\gamma_{i,j;k}}{N} + \mu \tilde{\omega}_{i;k} \right) \, .
\end{align}
Because $\sum_{k=1}^{K} \omega_{i,j;k} = 1$, we have
\begin{align}
1 & = - \frac{1}{\lambda_{i,j}} \left( \frac{1}{N} + \mu \right) \, , \\
\lambda_{i,j} & = - \frac{1 + \mu N}{N} \, .
\end{align}
Then we have
\begin{align}
\omega_{i,j;k} = \frac{\gamma_{i,j;k}}{1 + \mu N} + \frac{\mu N}{1 + \mu N} \tilde{\omega}_{i;k} \, .
\end{align}
Next, considering the optimization of $\mtheta_k$, we have
\begin{align}
& \derive{\cL (\mphi, \mathbf{\Theta}, \mOmega, \tilde{\mOmega}) }{\mtheta_k} \nonumber \\
& = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \frac{\omega_{i,j;k} }{\sum_{n=1}^{K} \omega_{i,j;n} \cL_n (\xx_{ij}, y_{ij}; \mphi, \mtheta_n)} \cdot \derive{\cL_k (\xx_{ij}, y_{ij}; \mphi, \mtheta_k)}{\mtheta_k} \, , \\
& = - \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}; \mphi, \mtheta_k)} \nabla_{\mtheta_k} \cL_k (\xx_{ij}, y_{ij}; \mphi, \mtheta_k) \, .
\end{align}
Finally, considering the optimization of $\mphi$, we have
\begin{align}
& \derive{\cL (\mphi, \mathbf{\Theta}, \mOmega, \tilde{\mOmega}) }{\mphi} \nonumber \\
& = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \sum_{k=1}^{K} \frac{\omega_{i,j;k} }{\sum_{n=1}^{K} \omega_{i,j;n} \cL_n (\xx_{ij}, y_{ij}; \mphi, \mtheta_n)} \cdot \derive{\cL_k (\xx_{ij}, y_{ij}; \mphi, \mtheta_k)}{\mphi} \, , \\
& = - \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \sum_{k=1}^{K} \frac{\gamma_{i,j;k}^{t+1}}{\cL_k(\xx_{ij}, y_{ij}; \mphi, \mtheta_k)} \nabla_{\mphi} \cL_k (\xx_{ij}, y_{ij}; \mphi, \mtheta_k) \, .
\end{align}
Because it is hard to find a closed-form solution to $\derive{\cL (\mphi, \mathbf{\Theta}, \mOmega, \tilde{\mOmega}) }{\mtheta_k} = 0$ when $\mtheta_k$ parameterizes a deep neural network, we use gradient ascent to optimize $\mtheta_k$; the same method is used for the feature extractor $\mphi$.

It remains to decide $\tilde{\omega}_{i;k}$. From the formulation of the objective function, $\tilde{\omega}_{i;k}$ is determined by
\begin{align}
\tilde{\mOmega} & = \argmin_{\tilde{\mOmega}} \left| \max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) - \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \right| \, .
\end{align}
To solve this problem, we first rewrite the definition of $\cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega})$ as
\begin{align}
\cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) & = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N_i} \log \left( \sum_{k=1}^{K} \omega_{i,j;k} \cL_k(\xx_{i,j}, y_{ij}; \mphi, \mtheta_k) \right) \nonumber \\
& + \sum_{i=1}^{M} \sum_{j=1}^{N_i} \lambda_{i,j} \left( \sum_{k=1}^{K} \omega_{i,j;k} - 1 \right) \nonumber \\
& - \mu \sum_{i=1}^{M} \sum_{j=1}^{N_i} \left( \sum_{k=1}^{K} \tilde{\omega}_{i;k} \log \frac{\tilde{\omega}_{i;k}}{\omega_{i,j;k}} \right) \, , \\
\text{s.t.} \; & \; \omega_{i,j;k} = \tilde{\omega}_{i;k}, \; \forall i,j,k \, .
\end{align}
Then, by removing the constraints, we always have
\begin{align}
\max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) \ge \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \, .
\end{align}
Therefore,
\begin{align}
\tilde{\mOmega} & = \argmin_{\tilde{\mOmega}} \left| \max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) - \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \right| \\
& = \argmin_{\tilde{\mOmega}} \left( \max_{\mOmega} \cL(\mphi, \mTheta, \mOmega, \tilde{\mOmega}) - \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \right) \\
& = \argmax_{\tilde{\mOmega}} \cL(\mphi, \mTheta, \tilde{\mOmega}, \tilde{\mOmega}) \, .
\end{align}
The result is then obtained by directly applying the proofs in [Guo et al., 2023] and [Marfoq et al., 2021].

§ RELATED WORKS

Federated Learning. As the de-facto algorithm in FL, FedAvg employs local SGD [McMahan et al., 2016, Lin et al., 2020] to reduce communication costs and protect client privacy. However, distribution shifts among clients pose a significant challenge in FL and hinder the performance of FL algorithms [Li et al., 2018, Wang et al., 2020, Karimireddy et al., 2020, Jiang & Lin, 2023, Guo et al., 2021]. Traditional FL methods primarily aim to improve the convergence speed of global models and incorporate bias reduction techniques [Tang et al., 2022, Guo et al., 2023, Li et al., 2021, Li et al., 2018]. At the same time, some studies investigate feature distribution shifts using domain generalization techniques [Peng et al., 2019, Wang et al., 2022, Shen et al., 2021, Sun et al., 2022, Gan et al., 2021].
However, single-model approaches are inadequate for handling heterogeneous data distributions, especially when dealing with concept shifts [Ke et al., 2022, Guo et al., 2023, Jothimurugesan et al., 2023]. To tackle these challenges, clustered FL algorithms have been introduced to enhance FL algorithm performance.

Clustered FL with fixed cluster numbers. Clustered FL groups clients based on their local data distributions, tackling the distribution shift problem. Most methods employ hard clustering with a fixed number of clusters, grouping clients by various similarity metrics, such as local loss values [Ghosh et al., 2020], local model parameter differences [Long et al., 2023], communication time/local computation time [Wang et al., 2022], and fuzzy $c$-means [Stallmann & Wilbik, 2022]. However, hard clustering may not adequately capture complex relationships between local distributions, and soft clustering paradigms have been proposed to address this issue. For instance, FedEM [Marfoq et al., 2021] employs Expectation-Maximization techniques to maximize likelihood functions. FedGMM [Wu et al., 2023] suggests using joint distributions instead of conditional distributions. FedRC [Guo et al., 2023] introduces robust clustering, assigning clients with concept shifts to different clusters to enhance model generalization. FedSoft [Ruan & Joe-Wong, 2022] calculates weights based on the distances between clients' local model parameters and cluster model parameters, with smaller distances indicating larger weights for that cluster. In this paper, we propose a generalized formulation for clustered FL that encompasses the current methods and improves on them by addressing issues related to intra-client inconsistency and efficiency.

Clustered FL with adaptive cluster numbers. Another line of research focuses on automatically determining the number of clusters. Current methods utilize hierarchical clustering, which measures client dissimilarity using model parameters or local gradient distances. Most current methods modify cluster numbers by splitting clusters when the client distances within them are large [Sattler et al., 2020a, Sattler et al., 2020b, Zhao et al., 2020, Briggs et al., 2020, Duan et al., 2021a, Duan et al., 2021b]. Recently, StoCFL [Zeng et al., 2023] suggested initially setting the cluster number equal to the client count and merging clusters with small distances. In addition to model parameter distances, some papers employ alternative distance metrics for improved performance. For instance, Yan et al., 2023 employ principal eigenvectors of model parameters. Vahidian et al., 2023 use truncated singular value decomposition (SVD) to obtain a reduced set of principal vectors for distance measurement. Meanwhile, Wei & Huang, 2023 focus on the distance of normalized local features. FEDCOLLAB [Bao et al., 2023] focuses on cross-silo scenarios with a limited number of clients and quantifies client similarity by training client discriminators; however, the need for discriminators between every pair of clients makes it challenging to extend FEDCOLLAB to cross-device scenarios with numerous clients. In this paper, we concentrate on cross-device settings, introducing a holistic adaptive clustering framework that enables cluster splitting and merging. We also present enhanced weight updating for soft clustering and finer distance metrics for various clustering principles.

§ ALGORITHMS

Details of the overall algorithm.
In Algorithm <ref>, we present a concise summary of the comprehensive algorithm that integrates all the enhanced components introduced in Section <ref>. Specifically, during each communication round, the algorithm: (1) randomly selects a subset of clients; (2) calculates prototypes using Equations (<ref>) and (<ref>); (3) performs local updates using Algorithm <ref>; (4) on the server, aggregates local updates, updates cluster model parameters, and computes client distance metrics using Equation (<ref>) for each cluster $k$; (5) identifies $k_{max}$ as the cluster with the highest average distance; (6) checks whether the maximum distance within $k_{max}$ significantly exceeds the average distance in this cluster; (7) if the following condition is met, splits the cluster using Algorithm <ref>:
\begin{align}
\max (D_{k_{max}}^{t}) - \operatorname{mean} (D_{k_{max}}^{t}) \ge \rho \, ;
\end{align}
and (8) marks and removes empty clusters, i.e., clusters to which no client assigns large clustering weights, using Algorithm <ref>.

Intuitions on the distance metric design. From the objective function (Eq. (<ref>)), we should assign higher clustering weights $\omega_{i,j;k}$ to clusters with greater $\cL_k(\xx_{i,j}, y_{i,j}, \mphi, \mtheta_k)$ in order to maximize the objective. Because the ultimate goal of the clustering algorithm is to solve the objective function, we analyze $\cL_k(\xx_{i,j}, y_{i,j}, \mphi, \mtheta_k)$ to identify the key factors influencing its value and the relationships between these factors and the clustering principles. We use the following algorithms as examples: for FedEM [Marfoq et al., 2021] and IFCA [Ghosh et al., 2020], $\cL_k(\xx, y, \mphi, \mtheta_k) = \cP_{\mphi, \mtheta_k}(y | \xx)$; for FedRC [Guo et al., 2023], $\cL_k(\xx, y, \mphi, \mtheta_k) = \frac{\cP_{\mphi, \mtheta_k}(\xx, y)}{\cP_{\mphi, \mtheta_k}(\xx)\cP_{\mphi, \mtheta_k}(y)}$. Defining $\zz = g(\xx;\mphi)$ as the local features extracted by $\mphi$ and assuming an $\xx \to \zz \to y$ probabilistic graphical model (with $\xx$ and $y$ independent given $\zz$), we obtain:
\begin{align*}
\textstyle
\cL_k(\xx, y, \mphi, \mtheta_k) =
\begin{split}
\left \{
\begin{array}{ll}
\cP(y|\xx; \mphi, \mtheta_k) = \frac{\cP(y | \zz; \mtheta_k) \cP(\zz |\xx; \mphi)}{ \cP(\zz | \xx, y; \mphi) } \, & \text{\tiny (FedEM, IFCA)} \\
\frac{\cP(\xx, y; \mphi, \mtheta_k)}{\cP(y; \mphi, \mtheta_k) \cP(\xx; \mphi, \mtheta_k)} = \frac{\cP(y | \zz; \mtheta_k) \cP(\zz |\xx; \mphi)}{\cP(y; \mphi, \mtheta_k) \cP(\zz | \xx, y; \mphi) } \, & \text{\tiny (FedRC)}
\end{array}
\right \}
= \frac{\tilde{\cL}_k(\zz, y;\mtheta_k) \cP(\zz | \xx; \mphi)}{ \cP(\zz | \xx, y; \mphi) } \,.
\end{split}
\end{align*}
We then explain the three terms $\cP(\zz | \xx; \mphi)$, $\cP(\zz | \xx, y; \mphi)$, and $\tilde{\cL}_k(\zz, y;\mtheta_k)$, which align with the terms considered in Sec <ref>.

* $\cP(\zz | \xx; \mphi)$ for feature and label shifts. Feature shifts introduce significant distances in $\xx$. Additionally, samples $\xx$ with different $y$ values generally exhibit substantial distances in the feature space; without this, classifiers could not distinguish samples with different labels. Hence, we employ $\cP(\zz | \xx; \mphi)$ to assess both feature and label shifts.
* $\cP(\zz | \xx, y; \mphi)$ for concept shifts. Concept shifts signify altered $\xx$–$y$ correlations.
Hence, samples that undergo concept shifts but share the same $y$ should exhibit a significant difference in $\cP(\zz | \xx, y; \mphi)$.

* $\tilde{\cL}_k(\zz, y;\mtheta_k)$ for the quality of clustering. The term $\tilde{\cL}_k(\zz, y;\mtheta_k)$ is defined using the features $\zz = g(\xx;\mphi)$ instead of the data $\xx$ in $\cL_k(\xx, y, \mphi, \mtheta_k)$. It evaluates whether features $\zz$ can be correctly assigned to clusters given the current $\mTheta$; otherwise, the objectives in (<ref>) cannot be achieved.

Finally, we propose the following distance metric:
\begin{align}
\textstyle
\mD_{i,j}^{k}
\begin{split}
= \left \{
\begin{array}{ll}
\max \left \{ d_c, d_{lf} \right \} \E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \, , & \text{Principle A} \\
\E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \, , & \text{Principle B}
\end{array}
\right.
\end{split}
\label{equ:theory-distance-appendix}
\end{align}
where $\text{dist}$ is cosine similarity in this paper, $d_c \!=\! \max_{y} \left\{ \text{dist} \left( \E_{D_i} \left[\cP(\zz | \xx, y;\mphi) \right], \E_{D_j} \left[\cP(\zz| \xx, y; \mphi) \right] \right) \right\}$, and $d_{lf} \!=\! \text{dist} \left( \E_{D_i} \left[\cP(\zz | \xx;\mphi) \right], \E_{D_j} \left[\cP(\zz| \xx; \mphi) \right] \right)$. The distances above become large only when the following conditions occur together: (1) large values of $d_c$ indicate concept shifts between clients $i$ and $j$; (2) large values of $d_{lf}$ indicate significant feature and label distribution differences; and (3) large values of $\E_{D_i} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right] \E_{D_j} \left[\tilde{\cL}_k(\zz, y;\mtheta_k) \right]$ indicate incorrect clustering weights with high confidence.

Approximation of the distance metrics in practice. When calculating the distance metrics (Equation (<ref>)) in practice, to avoid training extra generative networks and transmitting more data between the server and clients, we substitute $\tilde{\omega}_{i;k}$ for $\tilde{\cL}_k(\zz, y;\mtheta_k)$, since $\tilde{\omega}_{i;k}$ is positively correlated with $\tilde{\cL}_k(\zz, y;\mtheta_k)$ [Marfoq et al., 2021, Guo et al., 2023]. Additionally, we approximate $\E_{D_i} \left[\cP(\zz | \xx, y;\mphi) \right]$ and $\E_{D_i} \left[\cP(\zz | \xx;\mphi) \right]$ using feature prototypes. The approximated distances and prototypes are defined by:
\begin{align}
\tilde{d}_c = Dist(\mP_{c, i}, \mP_{c, j}) \, , \quad \tilde{d}_{lf} = Dist(\mP_{lf, i}, \mP_{lf, j}) \, ,
\end{align}
\begin{align}
\mP_{c, i} \in \R^{d \times C} & = [\frac{1}{N_{i, 1}} \sum_{j=1}^{N_i} \mathbf{1}_{y_{i,j}=1} g(\xx_{i,j}, \mphi), \cdots, \frac{1}{N_{i, C}} \sum_{j=1}^{N_i} \mathbf{1}_{y_{i,j}=C} g(\xx_{i,j}, \mphi)] \, , \label{equ:get-the-prototypes} \\
\mP_{lf, i} \in \R^{d} & = \frac{1}{N_{i}} \sum_{j=1}^{N_i} g(\xx_{i,j}, \mphi) \, , \label{equ:get-the-mean-prototypes}
\end{align}
where $N_{i,c} = \sum_{j=1}^{N_i} \mathbf{1}_{y_{i,j}=c}$, $g(\xx_{i,j}, \mphi)$ is the feature function parameterized by $\mphi$, and $Dist$ measures the distance between prototypes; we use cosine similarity as an example in this paper.

Input: Local datasets $D_1, \dots, D_M$, number of local iterations $\mathcal{T}$, number of communication rounds $T$, number of clients chosen in each round $S$, initial number of clusters $K^{0}$, number of classes $C$, and hyper-parameter $\rho$.
Output: Trained global feature extractor $\mphi^{T}$, final number of clusters $K^{T}$, and cluster-specific predictors $\mTheta^{T} = [\mtheta_1^{T}, \cdots, \mtheta_{K^{T}}^{T}]$.
Initialize $\mphi^{0}, \mTheta^{0} = [\mtheta_1^{0}, \cdots, \mtheta_{K^{0}}^{0}]$.
For $t = 0, \dots, T-1$:
  Choose a subset of clients $\mathcal{S}^{t}$, where $|\mathcal{S}^{t}| = S$.
  For each chosen client $i \in \mathcal{S}^{t}$:
    Calculate client prototypes $\mP_{i}^{t}$ by Equations (<ref>)–(<ref>).
    $\mathcal{F}_{i}^{t+1}, \tilde{\omega}_{i;k}^{t+1}, \mphi_{i}^{\mathcal{T}}, \mtheta_{k, i}^{\mathcal{T}} \gets$ local updates by Algorithm <ref>.
    Send $\mP_{i}^{t}$, $\mathcal{F}_{i}^{t+1}$, and $\tilde{\omega}_{i;k}^{t+1}, \mphi_{i}^{\mathcal{T}}, \mtheta_{k, i}^{\mathcal{T}}$, $\forall k \le K^{t}$, to the server.
    $\mathcal{F}_{i}^{latest} \gets \mathcal{F}_{i}^{t+1}$.
  $\mphi^{t+1} = \frac{1}{\sum_{i \in \cS^{t}} N_i} \sum_{i \in \cS^{t}} N_i \mphi_i^{\cT}$.
  $\mtheta_k^{t+1} = \frac{1}{\sum_{i \in \cS^{t}} N_i} \sum_{i \in \cS^{t}} N_i \mtheta_{k,i}^{\cT}$, $\forall k \le K^{t}$.
  $\cF_{g}^{t+1} \gets [ \cF_{g, 1}^{t+1}, \cdots, \cF_{g, K^{t}}^{t+1} ]$, where $\cF_{g, k}^{t+1} \gets [\sum_{i} \cF_{i, k, 1}^{latest}, \cdots, \sum_{i} \cF_{i, k, C}^{latest}]$.
  Initialize $\cC_{k}^{t} = \emptyset$, $\forall k \le K^t$.
  For every client $i$: $c_i \gets \argmax_{k} \tilde{\omega}_{i;k}$; $\cC_{c_{i}}^{t} \gets \cC_{c_i}^{t} \cup \{i\}$.
  $\cR^{t} \gets \emptyset$.
  For $k \le K^{t}$: if $\cC_{k}^{t}$ is empty, $\cR^{t} \gets \cR^{t} \cup \{k\}$.
  Compute the cluster-specific distance matrix $\mD_k^{t} \in \R^{|\tilde{\cS}_{k}^{t}| \times |\tilde{\cS}_{k}^{t}|}$, $\forall k \le K^{t}$, by Equation (<ref>).
  $k_{max} \gets \argmax_{k} \max(\mD_{k}^{t})$.
  If $\max(\mD_{k_{max}}^{t}) - \operatorname{mean}(\mD_{k_{max}}^{t}) \ge \rho$:
    Split $\cC_{k_{max}}^{t}$ into two clusters $\cC_{k_{max}, 1}^{t}$ and $\cC_{k_{max}, 2}^{t}$.
    $\mtheta_{k_{max}}^{t+1} = \frac{1}{\sum_{i \in \cC_{k_{max}, 1}^{t}} N_i} \sum_{i \in \cC_{k_{max}, 1}^{t} } N_i \mtheta_{k_{max}, i}^{\cT}$.
    Add a new cluster and update $\cF_g$ by the server side of Algorithm <ref>.
    $K^{t+1} \gets K^{t} + 1$.
  Else: $K^{t+1} \gets K^{t}$.
  For each cluster $k_r \in \cR^{t}$:
    Remove cluster $k_r$ and update $\cF_g$ by the server side of Algorithm <ref>.
    $K^{t+1} \gets K^{t+1} - 1$.
  Send $\mphi^{t+1}$, $\mTheta^{t+1} = [\mtheta_1^{t+1}, \cdots, \mtheta_{K^{t+1}}^{t+1}]$, and information about added/removed clusters to the clients.

Algorithm: Algorithm framework of our method.

Input: Number of local iterations $\mathcal{T}$, current number of clusters $K^{t}$, number of classes $C$, local dataset $D_i$, global feature extractor $\mphi^{t}$, cluster-specific predictors $\mTheta^{t} = [\mtheta_1^{t}, \cdots, \mtheta_{K^{t}}^{t}]$.
Output: Trained feature extractor $\mphi_{i}^{\mathcal{T}}$, predictors $\mTheta_{i}^{\mathcal{T}} = [\mtheta_{i,1}^{\mathcal{T}}, \cdots, \mtheta_{i,K^{t}}^{\mathcal{T}}]$, $\tilde{\mOmega}_{i}^{t+1} = [\tilde{\omega}_{i;1}, \cdots, \tilde{\omega}_{i;K^{t}}]$, and $\mathcal{F}_{i}^{t+1} = [\mathcal{F}_{i, 1}^{t+1}, \cdots, \mathcal{F}_{i, K^{t}}^{t+1}]$, where $\mathcal{F}_{i, k}^{t+1} = [\mathcal{F}_{i, k, 1}^{t+1}, \cdots, \mathcal{F}_{i, k, C}^{t+1}]$.
Update $\gamma_{i,j;k}^{t+1}, \tilde{\gamma}_{i,j;k}^{t+1}, \omega_{i,j;k}^{t+1}, \tilde{\omega}_{i;k}^{t+1}$ by Equations (<ref>)–(<ref>), $\forall j \le N_i, k \le K^{t}$. (Tier 2)
For $\tau = 1, \dots, \mathcal{T}$: (Tier 1)
  Update $\mtheta_{k, i}^{\tau}$ by Equation (<ref>), $\forall k \le K^{t}$.
  Update $\mphi_{i}^{\tau}$ by Equation (<ref>).
$\mathcal{F}_{i, k, c}^{t+1} \gets \sum_{j=1}^{N_i} \mathbf{1}_{y_{i,j}=c} \gamma_{i,j;k}^{t+1}$.
Algorithm: Local updates of our method.

Input: $k_{max}$, the set of clients $\cC_{k_{max}, 2}^{t}$, the corresponding $\mtheta_{k_{max}, i}^{\cT}$ for each client $i \in \cC_{k_{max}, 2}^{t}$, and $\cF_g$.
Output: New $\cF_g^{t+1}$ and the predictor of the new cluster $\mtheta_{K^{t} + 1}^{t+1}$.
Server side:
  $\mtheta_{K^{t} + 1}^{t+1} = \frac{1}{\sum_{i \in \cC_{k_{max}, 2}^{t}} N_i} \sum_{i \in \cC_{k_{max}, 2}^{t} } N_i \mtheta_{k_{max}, i}^{\cT}$.
  Add $\cF_{g, K^{t} + 1} \gets \cF_{g, k_{max}}$ to $\cF_g^{t+1}$.
  Add $\cF_{i, K^{t} + 1}^{latest} \gets \cF_{i, k_{max}}^{latest}$ to $\cF_{i}^{latest}$, $\forall i$.
Client side:
  $\omega_{i,j;K^{t} + 1} \gets \omega_{i,j;k_{max}} / 2$, $\forall j \le N_i$.
  $\omega_{i,j;k_{max}} \gets \omega_{i,j;k_{max}} / 2$, $\forall j \le N_i$.
  $\tilde{\omega}_{i;K^{t} + 1} \gets \tilde{\omega}_{i;k_{max}} / 2$.
  $\tilde{\omega}_{i;k_{max}} \gets \tilde{\omega}_{i;k_{max}} / 2$.

Algorithm: Cluster adding of our method.

Input: The cluster $k_r$ to be removed, and $\cF_g^{t+1}$.
Server side:
  Remove $\cF_{g, k_r}^{t+1}$ from $\cF_g^{t+1}$.
  Remove $\cF_{i, k_r}^{latest}$ from $\cF_{i}^{latest}$, $\forall i$.
Client side:
  $\omega_{i,j;k} \gets \frac{\omega_{i,j;k}}{\sum_{n \neq k_r} \omega_{i,j;n}}$, $\forall j \le N_i, k \neq k_r$.
  $\tilde{\omega}_{i;k} \gets \frac{\tilde{\omega}_{i;k}}{\sum_{n \neq k_r} \tilde{\omega}_{i;n}}$, $\forall k \neq k_r$.
  Remove $\gamma_{i,j;k_r}, \tilde{\gamma}_{i,j;k_r}, \omega_{i,j;k_r}, \tilde{\omega}_{i;k_r}$, $\forall j \le N_i$.

Algorithm: Cluster removing of our method.

§ EXPERIMENT RESULTS

§.§ Datasets and Models

Diverse distribution shift scenarios. Similar to previous work [Guo et al., 2023], the diverse distribution shift scenario constructs clients exhibiting three types of distribution shifts with respect to each other:

* Label Distribution Shifts: We use the idea introduced by Yoshida et al., 2019, Hsu et al., 2019, Reddi et al., 2021, leveraging Latent Dirichlet Allocation (LDA) with $\alpha = 1.0$. We split the datasets into 100 clients by default.
* Feature Distribution Shifts: We utilize the constructions of CIFAR10-C, CIFAR100-C, and ImageNet-C [Hendrycks & Dietterich, 2019]. In detail, we apply random augmentations to client samples, selected from 20 types, including 'Original', 'Gaussian Noise', 'Shot Noise', 'Impulse Noise', 'Defocus Blur', 'Glass Blur', 'Motion Blur', 'Zoom Blur', 'Snow', 'Frost', 'Fog', 'Brightness', 'Contrast', 'Elastic', 'Pixelate', 'JPEG', 'Speckle Noise', 'Gaussian Blur', 'Spatter', and 'Saturate'. Augmentation types remain consistent within each client.
* Concept Shifts: For labels $y \le C_{\beta}$, the label becomes $y$, $(1 + y) \% C_{\beta}$, and $(2 + y) \% C_{\beta}$ across the three concepts, where $C_{\beta} = \lfloor C * \beta \rfloor$ and $C$ is the number of classes.

Noisy label scenarios. We follow the methodology of previous works [Fang & Ye, 2022, Ke et al., 2022] to construct noisy label scenarios. Our approach involves two types of noisy labels: symmetric flip and pair flip. Symmetric flip entails randomly flipping the original class label to any wrong class label with equal probability. Pair flip involves flipping the original class label only to a very similar wrong category. We use the parameter $\chi$ to control the noise rate, where $\chi = 0.1$ indicates that $10\%$ of the data have wrong labels. A minimal sketch of these label constructions follows.
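The sketch below illustrates, in NumPy, how the concept-shift relabeling and the two noisy-label schemes above can be implemented. It is an illustration under stated assumptions rather than the exact experiment code: the helper names are ours, and the pair-flip target class $y \to (y + 1) \% C$ is an assumption, since the text only specifies "a very similar wrong category".

import numpy as np

rng = np.random.default_rng(0)

def concept_shift(labels, concept_id, beta, num_classes):
    """Relabel one client's data for one of the three concepts: labels below
    C_beta = floor(C * beta) are cyclically shifted by concept_id; the
    remaining labels stay unchanged."""
    c_beta = int(num_classes * beta)
    out = labels.copy()
    mask = labels < c_beta
    out[mask] = (labels[mask] + concept_id) % c_beta
    return out

def symmetric_flip(labels, chi, num_classes):
    """Flip a fraction chi of labels to a uniformly random wrong class."""
    out = labels.copy()
    noisy = rng.random(labels.shape[0]) < chi
    # Adding a random offset in [1, C-1] guarantees a *different* class.
    out[noisy] = (out[noisy] + rng.integers(1, num_classes, size=noisy.sum())) % num_classes
    return out

def pair_flip(labels, chi, num_classes):
    """Flip a fraction chi of labels to one fixed 'similar' wrong class
    (assumed mapping: y -> (y + 1) % C)."""
    out = labels.copy()
    noisy = rng.random(labels.shape[0]) < chi
    out[noisy] = (out[noisy] + 1) % num_classes
    return out

# Example: with beta = 0.2 on CIFAR10 (C_beta = 2), concept 1 swaps classes 0 and 1.
labels = rng.integers(0, 10, size=8)
print(concept_shift(labels, concept_id=1, beta=0.2, num_classes=10))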
§.§ Baselines and Hyper-Parameter Settings

Detailed implementations and hyper-parameter settings for all the algorithms. Unless otherwise specified, we split each dataset into 100 clients with 3 concepts. The learning rates are chosen from $\{0.03, 0.06, 1.0\}$, and we report the best results for each algorithm. We run the algorithms for 200 communication rounds and set the number of local epochs to 1.

Detailed implementations and hyper-parameter settings of baseline algorithms. The settings and hyper-parameters used for the baseline methods are summarized below. We exclude the algorithms that do not require additional hyper-parameters.

* CFL [Sattler et al., 2020]. We use the public code provided by [Marfoq et al., 2021] for the CFL algorithm. The hyper-parameters $\text{tol}_1$ and $\text{tol}_2$ are tuned, and we report how they affect the results of the algorithm in Table <ref>.
* ICFL [Yan et al., 2023]. Following the same setting as the original paper, we set the hyper-parameter $\alpha^{*}(0)$ to $\{0.85, 0.98\}$ and $\epsilon_1 = 4.0$.
* stoCFL [Zeng et al., 2023]. We choose $\tau \in \{0, 0.05, 0.1, 0.15\}$ to control the trade-off between personalization and generalization, as suggested by the original paper. In addition, we choose $\lambda = 0.5$, which consistently achieves the best performance as reported in the original paper.
* (FedRC) [Guo et al., 2023]. We set $\tilde{\mu} = 0.4$ and choose $\rho \in \{0.05, 0.1, 0.3\}$. The distance between clients is calculated by Equation (<ref>).
* (FedEM) [Marfoq et al., 2021]. We set $\tilde{\mu} = 0.4$ and choose $\rho \in \{0.05, 0.1, 0.3\}$. The distance between clients is calculated by Equation (<ref>).
* (FeSEM) [Long et al., 2023]. We choose $\rho \in \{0.05, 0.1, 0.3\}$. Following the original paper, we use a hard clustering paradigm that does not require the hyper-parameter $\tilde{\mu}$. The model splitting process is the same as in Sattler et al., 2020, which was designed for hard clustering paradigms. The distance between clients is calculated by Equation (<ref>).

§.§ Additional Experiment Results

Results on noisy data scenarios. In Table <ref>, we show the performance of clustered FL methods in noisy data scenarios. The results show that the proposed method consistently outperforms the other methods by a large margin.

Performance of algorithms on noisy data scenarios. We evaluated the performance of the algorithms using the CIFAR10 dataset (MobileNetV2) split into 100 clients. For each algorithm, we report the best test accuracy over 200 communication rounds.

| Algorithm | Pairflip, $\chi=0.1$ | Pairflip, $\chi=0.2$ | Symflip, $\chi=0.2$ | Symflip, $\chi=0.4$ |
|---|---|---|---|---|
| FedAvg | $54.75 \pm 1.45$ | $52.35 \pm 1.65$ | $52.60 \pm 0.50$ | $41.80 \pm 0.50$ |
| FeSEM | $32.60 \pm 1.30$ | $35.25 \pm 2.95$ | $32.40 \pm 2.80$ | $29.70 \pm 0.01$ |
| IFCA | $24.95 \pm 7.05$ | $20.55 \pm 4.65$ | $30.35 \pm 2.05$ | $36.05 \pm 4.45$ |
| FedEM | $64.40 \pm 1.10$ | $57.55 \pm 2.95$ | $53.00 \pm 1.90$ | $43.10 \pm 0.20$ |
| FedRC | $67.90 \pm 1.00$ | $59.95 \pm 1.05$ | $55.25 \pm 2.05$ | $42.00 \pm 0.40$ |
| Proposed | $\mathbf{66.70} \pm 0.40$ | $\mathbf{62.70} \pm 0.30$ | $\mathbf{59.95} \pm 1.15$ | $\mathbf{47.20} \pm 0.20$ |

Additional results on diverse distribution shift scenarios. In Table <ref>, we show the performance of the algorithms with $\beta = 0.4$. The results show that the proposed method always achieves the best test accuracy and a good local-global balance.

Performance of the adaptive clustering methods. We evaluated algorithm performance on the CIFAR10 and CIFAR100 datasets, employing 100 clients.
For each algorithm, we present the highest validation and test accuracies across 200 communication rounds, and the final number of clusters at the end of training, denoted $K^{T}$. All experiments utilized MobileNet-V2 [Sandler et al., 2018].

| Algorithm | CIFAR10, $\beta=0.4$: Val | Test | $K^{T}$ | CIFAR100, $\beta=0.4$: Val | Test | $K^{T}$ |
|---|---|---|---|---|---|---|
| FedAvg | $48.16 \pm 1.64$ | $49.93 \pm 0.80$ | 3.0 | $22.77 \pm 0.01$ | $24.62 \pm 0.55$ | 3.0 |
| FeSEM | $46.08 \pm 4.54$ | $35.99 \pm 4.59$ | 3.0 | $23.56 \pm 1.52$ | $22.31 \pm 1.08$ | 3.0 |
| IFCA | $36.15 \pm 3.45$ | $24.79 \pm 1.18$ | 3.0 | $27.72 \pm 0.82$ | $21.37 \pm 1.33$ | 3.0 |
| FedEM | $60.26 \pm 1.10$ | $54.44 \pm 0.04$ | 3.0 | $25.80 \pm 0.20$ | $22.88 \pm 0.19$ | 3.0 |
| FedRC | $57.99 \pm 0.29$ | $56.75 \pm 0.38$ | 3.0 | $30.94 \pm 0.88$ | $31.63 \pm 0.20$ | 3.0 |
| CFL, $\text{tol}_1 =0.4, \text{tol}_2 =1.6$ | $61.86 \pm 5.29$ | $51.15 \pm 0.82$ | 6.0 | $34.11 \pm 6.35$ | $21.04 \pm 2.21$ | 5.0 |
| CFL, $\text{tol}_1 =0.4, \text{tol}_2 =0.8$ | $60.42 \pm 0.31$ | $41.59 \pm 2.14$ | 8.0 | $36.23 \pm 3.58$ | $16.03 \pm 2.69$ | 6.0 |
| CFL, $\text{tol}_1 =0.2, \text{tol}_2 =0.8$ | $49.14 \pm 6.11$ | $49.88 \pm 4.21$ | 3.0 | $34.20 \pm 7.13$ | $26.42 \pm 0.73$ | 2.5 |
| ICFL, $\alpha^{*}(0) =0.85$ | $77.73 \pm 0.47$ | $52.03 \pm 0.10$ | 100.0 | $49.71 \pm 0.55$ | $28.55 \pm 0.03$ | 100.0 |
| ICFL, $\alpha^{*}(0) =0.98$ | $63.69 \pm 3.58$ | $54.02 \pm 1.11$ | 81.5 | $45.72 \pm 1.10$ | $28.45 \pm 0.82$ | 70.0 |
| stoCFL, $\tau=0.00$ | $48.55 \pm 0.95$ | $51.25 \pm 1.16$ | 1.5 | $24.50 \pm 0.03$ | $25.70 \pm 1.51$ | 1.0 |
| stoCFL, $\tau=0.05$ | $57.84 \pm 2.26$ | $50.42 \pm 0.97$ | 20.5 | $26.24 \pm 1.46$ | $26.60 \pm 1.17$ | 4.0 |
| stoCFL, $\tau=0.10$ | $72.91 \pm 2.25$ | $47.84 \pm 2.60$ | 59.0 | $67.67 \pm 1.68$ | $9.89 \pm 0.45$ | 86.0 |
| stoCFL, $\tau=0.15$ | $77.19 \pm 2.31$ | $41.49 \pm 0.97$ | 92.0 | $70.13 \pm 0.27$ | $7.77 \pm 0.23$ | 94.0 |

| $\rho=0.1$ | $85.30 \pm 1.05$ | $45.20 \pm 0.28$ | 47.0 | $58.61 \pm 4.14$ | $18.29 \pm 2.38$ | 35.5 |
| $\rho=0.3$ | $80.34 \pm 1.33$ | $48.25 \pm 2.72$ | 20.5 | $44.65 \pm 0.35$ | $21.73 \pm 1.27$ | 12.0 |

| $\rho=0.05$ | $80.31 \pm 1.60$ | $53.62 \pm 4.36$ | 18.5 | $62.19 \pm 1.54$ | $21.15 \pm 0.88$ | 44.5 |
| $\rho=0.1$ | $82.89 \pm 0.92$ | $56.27 \pm 1.08$ | 26.5 | $59.08 \pm 0.06$ | $21.29 \pm 0.87$ | 31.5 |
| $\rho=0.3$ | $80.72 \pm 1.90$ | $55.77 \pm 1.93$ | 10.0 | $49.84 \pm 6.85$ | $28.62 \pm 0.78$ | 11.0 |

| $\rho=0.05$ | $68.48 \pm 0.25$ | $66.77 \pm 0.28$ | 9.5 | $38.75 \pm 0.98$ | $30.45 \pm 0.07$ | 10.0 |
| $\rho=0.1$ | $68.56 \pm 3.56$ | $65.75 \pm 5.40$ | 6.0 | $40.30 \pm 1.19$ | $30.23 \pm 0.85$ | 11.0 |
| $\rho=0.3$ | $70.86 \pm 0.31$ | $70.13 \pm 0.42$ | 5.5 | $39.62 \pm 0.34$ | $32.22 \pm 0.20$ | 5.0 |

Ablation studies on techniques in Sec <ref>. We evaluated algorithm performance on the CIFAR10 and CIFAR100 datasets, showcasing the top validation and test accuracies for each. We kept $\rho = 0.3$ consistent across all algorithms and varied $\tilde{\mu}$ to adjust the strength of the penalty term in the objective function. The best results in each block are highlighted.
| Algorithm | CIFAR10, $\beta=0.2$: Val | Test | CIFAR10, $\beta=0.4$: Val | Test | CIFAR100, $\beta=0.2$: Val | Test | CIFAR100, $\beta=0.4$: Val | Test |
|---|---|---|---|---|---|---|---|---|
| $\tilde{\mu} = 0.0$ | $\mathbf{83.67} \pm 0.72$ | $\mathbf{62.43} \pm 0.71$ | $\mathbf{80.72} \pm 1.90$ | $\mathbf{55.77} \pm 1.93$ | $\mathbf{50.72} \pm 2.97$ | $\mathbf{32.13} \pm 0.18$ | $\mathbf{49.84} \pm 6.85$ | $\mathbf{28.62} \pm 0.78$ |
| $\tilde{\mu} = 0.1$ | $81.60 \pm 0.59$ | $60.48 \pm 0.50$ | $80.36 \pm 2.40$ | $55.10 \pm 1.75$ | $48.78 \pm 0.62$ | $30.50 \pm 0.33$ | $48.56 \pm 1.10$ | $25.80 \pm 1.17$ |
| $\tilde{\mu} = 0.4$ | $79.52 \pm 0.11$ | $53.33 \pm 2.97$ | $76.50 \pm 0.34$ | $49.97 \pm 2.26$ | $44.85 \pm 0.48$ | $28.39 \pm 0.12$ | $41.52 \pm 0.08$ | $22.83 \pm 0.42$ |

| $\tilde{\mu} = 0.0$ | $\mathbf{70.82} \pm 0.25$ | $69.15 \pm 0.35$ | $69.95 \pm 1.99$ | $67.09 \pm 1.01$ | $39.55 \pm 1.29$ | $35.49 \pm 0.16$ | $38.77 \pm 1.20$ | $31.87 \pm 1.13$ |
| $\tilde{\mu} = 0.1$ | $69.91 \pm 0.16$ | $68.77 \pm 1.56$ | $69.53 \pm 0.21$ | $68.54 \pm 1.08$ | $39.38 \pm 0.40$ | $35.95 \pm 0.59$ | $\mathbf{39.77} \pm 2.33$ | $31.52 \pm 0.45$ |
| $\tilde{\mu} = 0.4$ | $69.33 \pm 0.24$ | $\mathbf{69.67} \pm 1.27$ | $\mathbf{70.86} \pm 0.31$ | $\mathbf{70.13} \pm 0.42$ | $\mathbf{39.97} \pm 0.21$ | $\mathbf{36.50} \pm 0.28$ | $39.62 \pm 0.34$ | $\mathbf{32.22} \pm 0.20$ |

Ablation studies on techniques in Sec <ref>. We evaluated algorithm performance on the CIFAR10 and CIFAR100 datasets, displaying their highest validation and test accuracies. We kept $\rho$ consistent at 0.3 for all algorithms. "w/ SCWU" denotes the use of the soft clustering weight updating mechanism designed in Section <ref>.

| Algorithm | CIFAR10, $\beta=0.2$: Val | Test | CIFAR10, $\beta=0.4$: Val | Test | CIFAR100, $\beta=0.2$: Val | Test | CIFAR100, $\beta=0.4$: Val | Test |
|---|---|---|---|---|---|---|---|---|
| w/ SCWU | $\mathbf{83.67} \pm 0.72$ | $62.43 \pm 0.71$ | $\mathbf{80.72} \pm 1.90$ | $55.77 \pm 1.93$ | $\mathbf{50.72} \pm 2.97$ | $32.13 \pm 0.18$ | $\mathbf{49.84} \pm 6.85$ | $28.62 \pm 0.78$ |
| w/o SCWU | $82.11 \pm 2.39$ | $63.84 \pm 0.19$ | $80.08 \pm 0.99$ | $58.83 \pm 2.12$ | $49.77 \pm 1.93$ | $32.90 \pm 1.11$ | $47.91 \pm 2.67$ | $27.40 \pm 1.17$ |

| w/ SCWU | $69.33 \pm 0.24$ | $\mathbf{69.67} \pm 1.27$ | $70.86 \pm 0.31$ | $\mathbf{70.13} \pm 0.42$ | $39.97 \pm 0.21$ | $\mathbf{36.50} \pm 0.28$ | $39.62 \pm 0.34$ | $\mathbf{32.22} \pm 0.20$ |
| w/o SCWU | $69.88 \pm 0.30$ | $68.83 \pm 0.71$ | $70.77 \pm 0.47$ | $68.87 \pm 0.23$ | $40.96 \pm 1.24$ | $35.72 \pm 1.01$ | $39.18 \pm 0.13$ | $32.08 \pm 0.78$ |

Performance of algorithms with ResNet18. We evaluated algorithm performance on the CIFAR10 dataset with $\beta = 0.2$, displaying the highest validation and test accuracies. All algorithms utilize ResNet18 and run for 200 communication rounds.
| Algorithm | Val | Test |
|---|---|---|
| CFL, $\text{tol}_1 = 0.4, \text{tol}_2 = 0.6$ | $63.07 \pm 7.42$ | $53.65 \pm 2.33$ |
| CFL, $\text{tol}_1 = 0.4, \text{tol}_2 = 0.8$ | $61.14 \pm 1.87$ | $54.87 \pm 1.32$ |
| ICFL, $\alpha^{*}(0) = 0.85$ | $80.46 \pm 0.99$ | $45.28 \pm 6.56$ |
| ICFL, $\alpha^{*}(0) = 0.98$ | $82.34 \pm 0.28$ | $44.08 \pm 0.40$ |
| stoCFL, $\tau = 0.1$ | $57.41 \pm 6.69$ | $48.95 \pm 1.95$ |
| stoCFL, $\tau = 0.15$ | $66.54 \pm 1.05$ | $47.77 \pm 0.14$ |

| $\rho = 0.05$ | $86.90 \pm 0.20$ | $50.34 \pm 5.99$ |
| $\rho = 0.1$ | $85.55 \pm 0.24$ | $49.38 \pm 6.15$ |

| $\rho = 0.05$ | $83.88 \pm 0.25$ | $58.92 \pm 1.11$ |
| $\rho = 0.1$ | $83.83 \pm 0.01$ | $60.27 \pm 3.11$ |

| $\rho = 0.05$ | $67.72 \pm 1.30$ | $64.13 \pm 0.37$ |
| $\rho = 0.1$ | $67.51 \pm 0.24$ | $63.15 \pm 0.78$ |
# Fast FullSubNet: Accelerate Full-band and Sub-band Fusion Model for Single-Channel Speech Enhancement

###### Abstract

FullSubNet is our recently proposed real-time single-channel speech enhancement network that achieves outstanding performance on the Deep Noise Suppression (DNS) Challenge dataset. A number of variants of FullSubNet have been proposed recently, but they all focus on structure design towards better performance and are rarely concerned with computational efficiency. This work proposes a new architecture named Fast FullSubNet dedicated to accelerating the computation of FullSubNet. Specifically, Fast FullSubNet processes sub-band speech spectra in the mel-frequency domain by using cascaded linear-to-mel full-band, sub-band, and mel-to-linear full-band models, such that the number of frequencies involved in the sub-band computation is vastly reduced. After that, a down-sampling operation is proposed for the sub-band input sequence to further reduce the computational complexity along the time axis. Experimental results show that, compared to FullSubNet, Fast FullSubNet has only 13% of the computational complexity and 16% of the processing time, while achieving comparable or even better performance.

Index Terms— Fast FullSubNet, FullSubNet, computational cost, sub-band, speech enhancement

## 1 Introduction

Speech enhancement aims to improve speech intelligibility and perceptual quality in noisy environments [1]. The recent Deep Noise Suppression (DNS) Challenges [2, 3, 4] have significantly contributed to advances in the speech enhancement field and fostered many state-of-the-art (SOTA) methods [5, 6, 7]. Among these methods, FullSubNet [6] attracted broad attention because of the excellent perceptual quality of its enhanced speech, especially in the reverberant speech case. Unlike the mainstream methods, which only process the full-band spectra, FullSubNet integrates a full-band model and a sub-band model and performs joint optimization. In FullSubNet, the full-band model extracts global spectral information and long-distance cross-band dependencies. Meanwhile, the sub-band model processes the frequency bands independently and focuses on local spectral patterns and signal stationarity. Experiments show that these two kinds of models are complementary and can be efficiently integrated into one framework. The fusion scheme proposed by FullSubNet is compatible with other advanced techniques employed in SOTA speech enhancement methods.

In the past year, a number of FullSubNet variants [8, 9, 10, 11, 12] have been proposed. DCCRN-SUBNET [11] combines a deep complex convolution recurrent network (DCCRN) and attention gates as an improved full-band model and keeps the sub-band model unchanged. DPT-FSNET [9] proposes a dual-path transformer-based full-band and sub-band fusion network. FullSubNet+ [8] replaces the LSTM layers in the original full-band model with stacked temporal convolutional network blocks. STSubNet [12] uses a novel sub-band network that incorporates an efficient spectro-temporal receptive field extractor to achieve simultaneous denoising and dereverberation. Through sophisticated structure design, these variants achieve remarkable noise suppression performance. Nevertheless, the computational cost of the full-band and sub-band fusion models remains a blind spot, because the sub-band model must be run hundreds of times to process one signal clip. This work aims at accelerating the computation of FullSubNet.
Instead of straightforwardly seeking to prune or quantize neural networks, we introduce a new architecture named Fast FullSubNet to reduce the complexity caused by the sub-band model while maintaining the speech enhancement performance. Unlike FullSubNet, which works in the linear-frequency domain, this work proposes to work in the mel-frequency domain, which largely reduces the number of frequency bands. The mel scale represents speech spectra more compactly without losing spectral information in the sense of human auditory perception. Specifically, the linear-frequency spectra are first transformed into the mel-frequency domain and then processed with cascaded full-band and sub-band models following the spirit of FullSubNet. Afterward, an extra mel-to-linear full-band model is added to transform back to the linear-frequency domain; this is similar to the neural vocoders used in recent text-to-speech (TTS) systems [13, 14], which perform a mel-to-linear transformation. Besides, to further reduce the complexity of the sub-band model, we propose to down-sample the feature sequence processed by the sub-band model and then leverage the mel-to-linear full-band model to interpolate the output of the sub-band model. These models and strategies are integrated efficiently. Experimental results show that Fast FullSubNet reduces the computational complexity and processing time to about 13% and 16% of those of FullSubNet, respectively. Moreover, Fast FullSubNet achieves comparable or even better speech enhancement performance than FullSubNet, due to the use of the mel-frequency representation and the post mel-to-linear model. We believe that the design scheme of Fast FullSubNet is also suitable for other full-band and sub-band fusion models [8, 9, 10, 11, 12]. Code and audio samples are available at https://github.com/haoxiangsnr/FullSubNet.

## 2 Method

Fig. 1: Diagram of Fast FullSubNet. The right parts of the rectangle boxes show feature dimensions, e.g., "$1~{}(F)$" represents an $F$-dimensional vector, and "$F_{\text{mel}}(2N+2)$" denotes $F_{\text{mel}}$ independent ($2N+2$)-dimensional vectors.

This work processes speech signals in the short-time Fourier transform (STFT) domain. The observed noisy speech signals are given by

$x(t,f)=s(t,f)+n(t,f)$ (1)

where $x(t,f)$, $s(t,f)$ and $n(t,f)$ represent the complex-valued time-frequency (T-F) bins of the noisy speech, the noise-free speech (which can be the reverberant image signal received at the microphone) and the interference noise, respectively, with $t\in[1,\cdots,T]$ and $f\in[0,\cdots,F-1]$, where $T$ and $F$ are the number of time frames and the number of discrete frequencies, respectively. Note that this work only focuses on the denoising task, i.e., the purpose is to suppress the noise $n(t,f)$ and recover the reverberant speech signal $s(t,f)$.

The proposed Fast FullSubNet reduces the computational complexity of FullSubNet by decreasing the number of frequencies and time frames involved in the sub-band model. Figure 1 shows the workflow of Fast FullSubNet, which keeps the motivation and logic of the original FullSubNet unchanged. To decrease the number of frequency bands, we first perform speech enhancement in the mel-frequency domain with a linear-to-mel full-band model $\mathcal{F}_{\text{l2m}}$ and a sub-band model $\mathcal{S}$, since the mel-frequency representation of speech is more compact and still informative in terms of human auditory perception.
Then, the output of $\mathcal{S}$ is transformed back to the linear-frequency domain with a mel-to-linear full-band model $\mathcal{F}_{\text{m2l}}$. Each of the three models consists of a two-layer LSTM network.

### 2.1 Full-band model $\mathcal{F}_{\text{l2m}}$

The mel-scale spectral magnitude is first processed by the full-band model $\mathcal{F}_{\text{l2m}}$, which extracts global spectral information and long-distance cross-band dependencies. Formally, the linear-frequency signal $x(t,f)$ is transformed to the mel-frequency domain as $x_{\text{mel}}(t,f),f\in[0,\cdots,F_{\text{mel}}-1]$, where $F_{\text{mel}}$ is the number of mel frequencies. The input vector of $\mathcal{F}_{\text{l2m}}$ at time $t$ is

$\mathbf{x}(t)=[x_{\text{mel}}(t,0),\cdots,x_{\text{mel}}(t,F_{\text{mel}}-1)]^{T}\in\mathbb{R}^{F_{\text{mel}}}.$ (2)

The sequence of this feature vector is processed with two layers of LSTM. This full-band model outputs a spectral embedding with the same size as $\mathbf{x}(t)$, namely one hidden unit for each mel frequency. This spectral embedding provides complementary information to the following sub-band model.

### 2.2 Sub-band model $\mathcal{S}$

The sub-band model processes the mel frequencies independently, and all frequencies share the same network. The sub-band model predicts clean speech by leveraging the signal stationarity and the local spectral pattern of speech. Specifically, the input of the sub-band model contains two sources. For one frequency $f$, the first source is the noisy mel-spectra of this frequency and of the $N$ adjacent frequencies on each side. The second source is the output of the full-band model at frequency $f$, denoted as $\mathcal{F}_{\text{l2m}}(\mathbf{x}(t))(f)$. We concatenate these two sources as the input of the sub-band model

$\mathbf{x}_{\text{sub}}(t,f)=[x_{\text{mel}}(t,f-N),\cdots,x_{\text{mel}}(t,f),\cdots,x_{\text{mel}}(t,f+N),\mathcal{F}_{\text{l2m}}(\mathbf{x}(t))(f)]^{T}\in\mathbb{R}^{2N+2}.$ (3)

The sequence of this feature vector is processed with the same two layers of LSTM for all frequencies. For each mel frequency, the sub-band model outputs a one-dimensional hidden unit.

### 2.3 Full-band model $\mathcal{F}_{\text{m2l}}$

The full-band model $\mathcal{F}_{\text{m2l}}$ transforms the mel-frequency representation back to the linear-frequency domain. The outputs of the full-band model $\mathcal{F}_{\text{l2m}}$ and of the sub-band model $\mathcal{S}$ are concatenated as the input of $\mathcal{F}_{\text{m2l}}$:

$\mathbf{x}_{\text{m2l}}(t)=[\mathcal{F}_{\text{l2m}}(\mathbf{x}(t))^{T},\mathcal{S}(\mathbf{x}_{\text{sub}}(t,0)),\cdots,\mathcal{S}(\mathbf{x}_{\text{sub}}(t,F_{\text{mel}}-1))]^{T}\in\mathbb{R}^{2F_{\text{mel}}}.$ (4)

Two layers of LSTM followed by one linear layer predict the final linear-frequency output. The complex-valued Ideal Ratio Mask (cIRM) [15] is taken as the learning target. Denote the cIRM as $y(t,f)\in\mathbb{C}$ for one T-F bin. $\mathcal{F}_{\text{m2l}}$ predicts the real-valued cIRM vector at time $t$ as

$\mathbf{y}(t)=[\text{R}\{y(t,0)\},\text{I}\{y(t,0)\},\cdots,\text{R}\{y(t,F-1)\},\text{I}\{y(t,F-1)\}]^{T}\in\mathbb{R}^{2F},$ (5)

where $\text{R}\{\cdot\}$ and $\text{I}\{\cdot\}$ denote the real and imaginary parts of a complex number, respectively. This mel-to-linear model performs a similar function as the neural vocoders used in TTS [14, 16], as both perform a mel-frequency to linear-frequency transformation, except that speech enhancement can use the noisy signal phase. As shown in Figure 1, to employ a look-ahead of $\tau$ frames, the target sequence is set to be delayed by $\tau$ frames relative to the input sequence.
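For concreteness, the assembly of the sub-band input of Equation (3) can be sketched in a few lines of PyTorch. This is a minimal sketch rather than the official implementation: the reflect padding at the spectrum edges is our assumption, since the paper does not state its boundary handling.

```python
import torch
import torch.nn.functional as F

def subband_inputs(x_mel, fb_emb, n):
    """x_mel, fb_emb: (T, F_mel) noisy mel magnitudes and full-band embedding.
    Returns (F_mel, T, 2n+2): one input sequence per mel frequency, Eq. (3)."""
    T, F_mel = x_mel.shape
    # pad the frequency axis so every bin has n neighbors on each side
    padded = F.pad(x_mel.unsqueeze(0), (n, n), mode="reflect").squeeze(0)
    # (T, F_mel, 2n+1): the local spectral patch around each frequency
    patches = padded.unfold(dimension=1, size=2 * n + 1, step=1)
    feats = torch.cat([patches, fb_emb.unsqueeze(-1)], dim=-1)  # (T, F_mel, 2n+2)
    return feats.permute(1, 0, 2)  # frequencies act as the batch dimension

x_mel, fb_emb = torch.randn(192, 64), torch.randn(192, 64)
print(subband_inputs(x_mel, fb_emb, n=5).shape)  # torch.Size([64, 192, 12])
```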
### 2.4 Sub-band down-sampling

One key characteristic of speech signals is that the samples are ordered in time and successive samples are dependent/redundant [17], which means we may process only a part of the samples without performance degradation. To further reduce the computational complexity of the sub-band model, we down-sample the feature sequence of the sub-band model by a factor of $m$. Down-sampling is conducted by averaging the input, i.e., $\mathbf{x}_{\text{sub}}(t,f)$, over non-overlapping windows of $m$ frames. For frequency $f$, the down-sampled input is denoted as $\tilde{\mathbf{x}}_{\text{sub}}(n,f)$, where $n\in[1,\cdots,\lceil\frac{T}{m}\rceil]$ is the index of down-sampled time frames. When $m=1$, there is no down-sampling. The down-sampled sequence may lose information exploited by the sub-band model, such as the temporal dynamics of local spectral patterns, and thus degrade the quality of the sub-band output.

The output of the sub-band model is down-sampled accordingly. The down-sampled sub-band output is first copied $m$ times and then fed to the following full-band model $\mathcal{F}_{\text{m2l}}$. In this case, $\mathcal{F}_{\text{m2l}}$ is used not only for the mel-to-linear transformation but also for interpolating the sub-band output by leveraging the dependence between adjacent frames. Note that, also being an input of $\mathcal{F}_{\text{m2l}}$, the output of $\mathcal{F}_{\text{l2m}}$ is not down-sampled, which may alleviate the difficulty of interpolation. As shown in Figure 1, to conduct down-sampling while guaranteeing online processing, at any time step the averaging of input frames only uses the previous $m-1$ time steps, while the output is copied to the following $m-1$ time steps.
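The down-sampling and the copy-based up-sampling can be sketched as follows. This is a simplified offline view (the padding of the first window is our assumption); a real-time implementation would instead keep a running buffer of the previous $m-1$ frames.

```python
import torch

def causal_downsample(x, m):
    """Average non-overlapping windows of m frames; x: (F_mel, T, D)."""
    F_mel, T, D = x.shape
    pad = (-T) % m
    if pad:  # left-pad by repeating the first frame so windows only see the past
        x = torch.cat([x[:, :1].expand(F_mel, pad, D), x], dim=1)
    return x.reshape(F_mel, -1, m, D).mean(dim=2)   # (F_mel, ceil(T/m), D)

def copy_upsample(y, m, T):
    """Copy each down-sampled output to the following m-1 steps; y: (F_mel, T', D)."""
    return y.repeat_interleave(m, dim=1)[:, :T]
```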
Table 1: Performance on the DNS Challenge (INTERSPEECH 2020) dataset. For the comparison methods, scores are quoted directly from their original papers; scores missing in the original papers are left blank (–).

| Method | Down-sampling factor $m$ | With Reverb: WB-PESQ | NB-PESQ | STOI | SI-SDR | No Reverb: WB-PESQ | NB-PESQ | STOI | SI-SDR | # Param (M) | MACs (G/s) | RTF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Noisy | – | 1.822 | 2.753 | 86.62 | 9.033 | 1.582 | 2.454 | 91.52 | 9.071 | – | – | – |
| DTLN [18] | – | – | 2.70 | 84.68 | 10.53 | – | 3.04 | 74.76 | 16.34 | 0.99 | 0.11 | 0.043 |
| DCCRN-E [5] | – | – | 3.077 | – | – | – | 3.266 | – | – | 3.74 | 6.56 | 0.128 |
| Conv-TasNet [19] | – | 2.75 | – | – | – | 2.73 | – | – | – | 8.68 | 5.97 | 0.659 |
| Sub-band Model [20] ($384\times 2$) | – | 2.650 | 3.274 | 90.53 | 14.67 | 2.369 | 3.052 | 94.24 | 16.15 | 1.30 | 21.68 | 0.401 |
| FullSubNet [6] | – | 2.969 | 3.473 | 92.62 | 15.75 | 2.888 | 3.305 | 96.11 | 17.29 | 5.64 | 30.73 | 0.511 |
| Full-band Model | – | 2.726 | 3.388 | 91.15 | 14.75 | 2.831 | 3.354 | 96.10 | 16.58 | 8.15 | 0.53 | 0.026 |
| Fast FullSubNet | 1 | 3.031 | 3.511 | 93.14 | 15.68 | 2.865 | 3.375 | 96.29 | 17.11 | 6.84 | 7.79 | 0.147 |
| Fast FullSubNet | 2 | 3.016 | 3.497 | 92.96 | 15.85 | 2.808 | 3.353 | 96.11 | 16.98 | 6.84 | 4.12 | 0.082 |
| Fast FullSubNet | 4 | 2.896 | 3.438 | 92.35 | 15.51 | 2.707 | 3.294 | 95.85 | 16.35 | 6.84 | 2.29 | 0.053 |
| Fast FullSubNet | 8 | 2.862 | 3.414 | 92.11 | 15.31 | 2.692 | 3.380 | 95.77 | 16.38 | 6.84 | 1.39 | 0.042 |
| Fast FullSubNet | $+\infty$ | 2.865 | 3.419 | 92.18 | 15.12 | 2.763 | 3.325 | 95.96 | 16.60 | 4.91 | 0.32 | 0.016 |

## 3 Experiments

### 3.1 Experimental setup

For a fair comparison with FullSubNet, all experiments are conducted on the DNS Challenge (INTERSPEECH 2020) dataset. This dataset consists of a clean speech set with about 500 hours of clips from 2150 speakers and a noise set with over 180 hours of clips from 150 classes. The synthesized noisy-clean speech pairs follow the dynamic mixing strategy of FullSubNet. Before the start of each training epoch, 75% of the clean speech clips are convolved with room impulse responses (RIRs) randomly selected from (1) the Multichannel Impulse Response Database [21], with three reverberation times of 0.16 s, 0.36 s, and 0.61 s, and (2) the REVERB Challenge dataset [22], with three reverberation times of 0.3 s, 0.6 s, and 0.7 s. Then, based on a randomly selected SNR between -5 and 20 dB, the reverberant or non-reverberant speech is mixed with a randomly selected noise. The total data "seen" by the model is over 5000 hours after ten epochs of training. We use the test dataset of the DNS Challenge for evaluation, which includes two categories of synthetic clips, i.e., without and with reverberation. Each category has 150 noisy clips with SNR levels distributed between 0 dB and 20 dB. The sampling rate of the audio signals is 16000 Hz.

The STFT uses a 32 ms (512 samples) Hanning window and a 16 ms hop size. The number of mel-frequency bins is set to 64. For training, we adopt the Adam optimizer with a learning rate of 0.001. For a fair comparison, using the same parameters as in [20, 6], we set the output delay $\tau$ to two frames so that the model exploits $16\times 2=32$ ms of future information. The sequence length for training is set to $T=192$ frames (about 3 s). According to preliminary experiments, the number of neighbor frequencies $N$ in Equation (3) is set to 5. The three models all consist of two stacked unidirectional LSTM layers and a linear layer. The two full-band models and the sub-band model have 384/257, 512/512, and 384/384 hidden units in their respective two LSTM layers.
### 3.2 Results

Table 1 shows the experimental results in terms of perceptual quality (WB-/NB-PESQ [23]), speech intelligibility (STOI [24]), and SI-SDR [25]. The "# Param" column shows the number of parameters. We also report the Mult-Add calculations (MACs), computed using the torchinfo tool (https://github.com/TylerYep/torchinfo), to show the computational complexity. In addition, the real-time factor (RTF), a metric measuring inference speed, is measured on a platform with an Intel Core i7-9700 CPU @ 3.00 GHz and PyTorch 1.12.

Effectiveness of reducing frequencies. When the down-sampling factor $m$ of Fast FullSubNet is set to one, there is no time down-sampling. Compared with the original FullSubNet, decreasing the number of frequencies for sub-band processing (from 257 to 64) reduces the MACs and RTF to about 25% and 29% of those of FullSubNet. Moreover, almost all performance measures are comparable or even better, possibly due to the use of the mel-frequency representation and the post mel-to-linear full-band model. The advantage of using the mel frequency will become clearer below.

Effectiveness of sub-band down-sampling. Another set of experiments is conducted with increasing down-sampling factors. Compared with $m=1$, $m=2$ achieves comparable enhancement performance, while the MACs and RTF are further reduced to about 13% and 16% of those of FullSubNet, respectively. This result fits our expectation that successive time frames are relatively dependent/redundant, and that the mel-to-linear full-band model can interpolate the down-sampled sub-band output well. Increasing $m$ to 4 and 8 further reduces the MACs and RTF, but at the cost of enhancement performance degradation. Finally, when $m=+\infty$, the sub-band model is removed entirely, which achieves speech enhancement performance similar to $m=8$; in other words, beyond $m=8$ the sub-band model is no longer useful for speech enhancement. We would like to note that our preliminary experiments show that the performance degradation caused by sub-band down-sampling is related to the number of look-ahead frames, as the look-ahead frames provide more recent information for interpolating the output of the sub-band model. This means a smaller/larger number of look-ahead frames allows using a smaller/larger down-sampling factor without suffering from performance degradation.

When the sub-band model is removed with $m=+\infty$, the network consists of two layers of mel-frequency full-band LSTMs and two layers of mel-to-linear full-band LSTMs. To test the benefit of the mel-frequency representation, we train a Full-band Model (as shown in Table 1) composed of four layers of 512-dimensional LSTMs with linear-frequency input and output. It can be seen that the mel-frequency network performs better than the Full-band Model in terms of both speech enhancement performance and computational complexity. The possible reason is that the mel frequency represents speech spectra in a more compact way, which eases the learning of the mapping between noisy and clean spectra.

Comparison with SOTA methods. We compare Fast FullSubNet with several recent SOTA methods that provide results on the DNS Challenge dataset and open-source their network implementations, which allows comparing MACs and RTF fairly. It can be seen that FullSubNet already outperforms these methods in terms of speech enhancement performance. The proposed Fast FullSubNet with $m=2$ has smaller MACs and RTF than all of these methods except DTLN [18].
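As an aside, the RTF values in Table 1 are simply the processing time divided by the audio duration; a minimal measurement sketch is shown below. The tiny stand-in model, feature size, and frame count are placeholders, not the paper's exact benchmarking harness.

```python
import time
import torch

def real_time_factor(module, frames, feat_dim=257, hop_s=0.016, runs=10):
    """RTF = processing time / audio duration; values below 1 are real-time."""
    x = torch.randn(1, frames, feat_dim)
    module.eval()
    with torch.no_grad():
        module(x)                                  # warm-up run
        start = time.perf_counter()
        for _ in range(runs):
            module(x)
        elapsed = (time.perf_counter() - start) / runs
    return elapsed / (frames * hop_s)

lstm = torch.nn.LSTM(257, 512, num_layers=2, batch_first=True)  # stand-in model
print(f"RTF: {real_time_factor(lstm, frames=188):.3f}")         # ~3 s of audio
```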
Notably, the flexible strategies for reducing computational complexity provided by Fast FullSubNet are also suitable for other FullSubNet variants. ## 4 Conclusion This paper proposes a new architecture named Fast FullSubNet to accelerate the computation of FullSubNet by reducing the number of frequencies and time frames involved in the computation of the sub-band model. Experimental results show that compared with the original FullSubNet, Fast FullSubNet achieves comparable or better performance with significantly smaller complexity. Importantly, the flexible strategies for reducing computational complexity provided by Fast FullSubNet are also suitable for other FullSubNet variants. ## References * [1] Philipos C. Loizou, Speech Enhancement: Theory and Practice, CRC Press, Inc., USA, 2nd edition, 2013. * [2] Chandan K.A. Reddy, Vishak Gopal, Ross Cutler, Ebrahim Beyrami, Roger Cheng, Harishchandra Dubey, Sergiy Matusevych, Robert Aichner, Ashkan Aazami, Sebastian Braun, Puneet Rana, Sriram Srinivasan, and Johannes Gehrke, “The INTERSPEECH 2020 Deep Noise Suppression Challenge: Datasets, Subjective Testing Framework, and Challenge Results,” in Interspeech 2020. Oct. 2020, pp. 2492–2496, ISCA. * [3] Chandan K. A. Reddy, Harishchandra Dubey, Vishak Gopal, Ross Cutler, Sebastian Braun, Hannes Gamper, Robert Aichner, and Sriram Srinivasan, “ICASSP 2021 Deep Noise Suppression Challenge,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 2021, pp. 6623–6627, ISSN: 2379-190X. * [4] Harishchandra Dubey, Vishak Gopal, Ross Cutler, Ashkan Aazami, Sergiy Matusevych, Sebastian Braun, Sefik Emre Eskimez, Manthan Thakker, Takuya Yoshioka, Hannes Gamper, and Robert Aichner, “Icassp 2022 Deep Noise Suppression Challenge,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 9271–9275, ISSN: 2379-190X. * [5] Yanxin Hu, Yun Liu, Shubo Lv, Mengtao Xing, Shimin Zhang, Yihui Fu, Jian Wu, Bihong Zhang, and Lei Xie, “DCCRN: Deep Complex Convolution Recurrent Network for Phase-Aware Speech Enhancement,” in Interspeech 2020. Oct. 2020, pp. 2472–2476, ISCA. * [6] Xiang Hao, Xiangdong Su, Radu Horaud, and Xiaofei Li, “Fullsubnet: A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 2021, pp. 6633–6637, ISSN: 2379-190X. * [7] Andong Li, Wenzhe Liu, Xiaoxue Luo, Guochen Yu, Chengshi Zheng, and Xiaodong Li, “A Simultaneous Denoising and Dereverberation Framework with Target Decoupling,” in Interspeech 2021. Aug. 2021, pp. 2801–2805, ISCA. * [8] Jun Chen, Zilin Wang, Deyi Tuo, Zhiyong Wu, Shiyin Kang, and Helen Meng, “FullSubNet+: Channel Attention Fullsubnet with Complex Spectrograms for Speech Enhancement,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 7857–7861, ISSN: 2379-190X. * [9] Feng Dang, Hangting Chen, and Pengyuan Zhang, “DPT-FSNet: Dual-Path Transformer Based Full-Band and Sub-Band Fusion Network for Speech Enhancement,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 6857–6861, ISSN: 2379-190X. * [10] Zhuangqi Chen and Pingjian Zhang, “Lightweight Full-band and Sub-band Fusion Network for Real Time Speech Enhancement,” in Interspeech 2022. Sept. 2022, pp. 921–925, ISCA. 
* [11] Xin Yuan, Qun Yang, and Shaohan Liu, “DCCRN-SUBNET: A DCCRN and SUBNET Fusion Model for Speech Enhancement,” in 2021 7th International Conference on Computer and Communications (ICCC), 2021, pp. 525–529. * [12] Feifei Xiong, Weiguang Chen, Pengyu Wang, Xiaofei Li, and Jinwei Feng, “Spectro-Temporal SubNet for Real-Time Monaural Speech Denoising and Dereverberation,” in Interspeech 2022. Sept. 2022, pp. 931–935, ISCA. * [13] Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu, “Natural TTS Synthesis by Conditioning Wavenet on MEL Spectrogram Predictions,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2018, pp. 4779–4783, ISSN: 2379-190X. * [14] Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, and John Miller, “Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning,” Feb. 2022. * [15] Donald S. Williamson, Yuxuan Wang, and DeLiang Wang, “Complex Ratio Masking for Monaural Speech Separation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 3, pp. 483–492, Mar. 2016. * [16] Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, and Nobukatsu Hojo, “Maskcyclegan-VC: Learning Non-Parallel Voice Conversion with Filling in Frames,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 2021, pp. 5919–5923, ISSN: 2379-190X. * [17] Dimitris G. Manolakis, Vinay K. Ingle, and Stephen M. Kogon, Statistical and adaptive signal processing: spectral estimation, signal modeling, adaptive filtering, and array processing, Artech House signal processing library. Artech House, Boston, 2005. * [18] Nils L. Westhausen and Bernd T. Meyer, “Dual-Signal Transformation LSTM Network for Real-Time Noise Suppression,” in Interspeech 2020. Oct. 2020, pp. 2477–2481, ISCA. * [19] Yuichiro Koyama, Tyler Vuong, Stefan Uhlich, and Bhiksha Raj, “Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks,” Aug. 2020, arXiv:2005.11611 [cs, eess]. * [20] Xiaofei Li and Radu Horaud, “Online Monaural Speech Enhancement Using Delayed Subband LSTM,” in Interspeech 2020. Oct. 2020, pp. 2462–2466, ISCA. * [21] Elior Hadad, Florian Heese, Peter Vary, and Sharon Gannot, “Multichannel audio database in various acoustic environments,” in 2014 14th International Workshop on Acoustic Signal Enhancement (IWAENC), Sept. 2014, pp. 313–317. * [22] Keisuke Kinoshita, Marc Delcroix, Sharon Gannot, Emanuël A. P. Habets, Reinhold Haeb-Umbach, Walter Kellermann, Volker Leutnant, Roland Maas, Tomohiro Nakatani, Bhiksha Raj, Armin Sehr, and Takuya Yoshioka, “A summary of the REVERB challenge: state-of-the-art and remaining challenges in reverberant speech processing research,” EURASIP Journal on Advances in Signal Processing, vol. 2016, no. 1, pp. 7, Jan. 2016. * [23] A.W. Rix, J.G. Beerends, M.P. Hollier, and A.P. Hekstra, “Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs,” in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), 2001, vol. 2, pp. 749–752 vol.2, ISSN: 1520-6149. * [24] Cees H. Taal, Richard C.
Hendriks, Richard Heusdens, and Jesper Jensen, “An Algorithm for Intelligibility Prediction of Time–Frequency Weighted Noisy Speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2125–2136, Sept. 2011, Conference Name: IEEE Transactions on Audio, Speech, and Language Processing. * [25] Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, and John R. Hershey, “SDR – Half-baked or Well Done?,” in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 626–630, ISSN: 2379-190X.
University of Massachusetts Amherst. Contact Email: <EMAIL_ADDRESS>

# SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and Visual Cues

Jaewook Lee Andrew Lan

###### Abstract

In second language vocabulary learning, existing works have primarily focused on either the learning interface or scheduling personalized retrieval practices to maximize memory retention. However, the learning content, i.e., the information presented on flashcards, has mostly remained constant. Keyword mnemonic is a notable learning strategy that relates new vocabulary to existing knowledge by building an acoustic and imagery link using a keyword that sounds alike. Beyond that, producing the verbal and visual cues associated with the keyword to facilitate building these links requires a manual process and is not scalable. In this paper, we explore an opportunity to use large language models to automatically generate verbal and visual cues for keyword mnemonics. Our approach, an end-to-end pipeline for auto-generating verbal and visual cues, can automatically generate highly memorable cues. We investigate the effectiveness of our approach via a human participant experiment, comparing it with manually generated cues.

###### Keywords: Keyword Mnemonic · Vocabulary Learning · Large Language Models

## 1 Introduction

Learning vocabulary is key to learning second (mostly foreign) languages, but it is also a difficult task. One of the most well-known and effective methods is flashcards, i.e., writing the L2 word (a second-language word) on the front and the corresponding L1 word (a first- or native-language word) on the back, together with content such as a mnemonic or context. Moreover, one may manage flashcards by putting the cards in boxes following the Leitner system [13], recalling each word regularly according to the forgetting curve [8]. However, both writing down every word and managing a large deck of cards demand significant time and effort from learners.

Technology advances have enabled vocabulary learning to shift from manually writing down words to using software systems such as Anki [10] and Quizlet [21], which make language learning more efficient and engaging. Some systems use ideas behind intelligent tutoring systems to model the learner's knowledge state and intervene in the retrieval practice [18, 22, 23]. Many studies have shown that managing retrieval practice and designing personalized schedules using cognitive models can significantly improve learning efficiency [7, 12]. Many systems also use gamified interfaces and enable learners to share decks with others, making the learning process more interactive and socially relevant [1, 10, 21]. However, despite these advances, the learning _content_, i.e., what is written on the flashcard, has mostly stayed the same throughout the years.

Regarding content for second language learning, keyword mnemonic [3] is a notable memory encoding strategy that uses interactive visual imagery together with a keyword that sounds like part of the foreign word. Forming the keyword-based interactive image takes a two-step approach: first creating an acoustic link and then an imagery link. Imagine a native English speaker learning the Spanish word pato, which means duck. The keyword that sounds like the word is pot. Using the keyword, the learner first creates an acoustic link between the keyword and the Spanish word.
Then, the learner builds an imagery link that connects the sound and its meaning by using a verbal cue, such as "A duck wearing a pot on its head." By relating new information to existing knowledge, learners have an easier time memorizing the word and can retain it in memory for a longer time. Previous studies on keyword mnemonics have shown their effectiveness compared with other learning strategies. A study comparing keyword mnemonic with rote rehearsal, and with a combination of both strategies, showed that the keyword group outperformed the other two groups [5]. Another study comparing a keyword mnemonic group given verbal and visual cues against mixed methods of contextual clues, word structure analysis, and opposite word pairs showed that the keyword group performed better in both short-term and long-term retention [20]. However, since the cues in these studies were manually generated by experts, it is difficult to employ this approach at a large scale in the systems mentioned above.

In 2014, Savva et al. introduced an automatic keyword generation approach based on a cross-lingual system, TransPhoner [19]. For a given input word, it evaluates candidate keywords in the second language using the following measures: imageability, phonetic similarity, orthographic similarity, and semantic similarity. The authors tested the effectiveness of TransPhoner using an evaluation set of 36 German words [9] against three other conditions: no keywords, randomly sampled keywords, and manually generated keywords. The results show that the TransPhoner-generated condition achieved the highest score, and that the manually generated keyword condition showed no significant difference from randomly generated keywords.

Despite TransPhoner's success in automatically generating keywords as cues, other forms of richer verbal or visual cues that could further help learners build an imagery link cannot be automatically generated. The learner (or teacher) still needs to manually develop them to connect the keyword and the L1 word, which requires a lot of effort. Moreover, it takes an expert to come up with an image as the visual cue that corresponds to the verbal cue. Using image APIs such as the Google Image API, one can juxtapose images of the keyword and the L1 word, but doing so is not as effective as depicting both words together in a single image. To make keyword mnemonic scalable, we need an end-to-end solution that takes words as input and generates keyword, verbal, and visual cues.

Contributions. In this paper, we detail a pipeline for automatically generating verbal and visual cues in one shot via a text generator and a text-to-image generator. Our contributions are as follows:

* We propose a large language model (LLM)-based pipeline that automatically generates highly memorable verbal and visual cues for an L1 word in language learning. We believe that our automated approach will significantly reduce content development costs by enhancing time efficiency and reducing manual generation effort. To the best of our knowledge, we are the first to apply LLMs in the context of keyword mnemonic.
* We implement a web application for human participant studies and use it to compare our approach with existing ones. We analyze the effectiveness of four approaches: an automatically generated keyword only, an automatically generated keyword with a verbal cue, an automatically generated keyword with both verbal and visual cues, and a manually generated keyword with verbal cues. We also outline avenues for future work that could stem from our approach.
## 2 Methodology

In this section, we detail our pipeline for automatically generating cues. Our work is driven by the following two research questions:

* Can we automatically generate human-level verbal cues for the keyword?
* Can we generate a visual cue that facilitates building the imagery link described in a verbal cue?

In this preliminary effort, we narrow the scope of automatically generating verbal and visual cues to the experiments conducted in previous studies [9, 19]. We use the evaluation set of 36 German words and the keywords from previous studies, for both manually and automatically generated cues, as baselines. Since verbal cues only exist for the manually generated keywords, our task boils down to automatically generating verbal cues using the TransPhoner-generated keywords and then generating visual cues from those verbal cues.

### 2.1 Pipeline for Auto-generating Verbal and Visual Cues

Figure 1: Our end-to-end pipeline for automatically generating verbal and visual cues for an L2 word.

We propose a pipeline consisting of two LLMs that generate verbal and visual cues in two steps: first, we use a text generator to automatically generate a sentence containing the TransPhoner keyword as the verbal cue; second, we use a text-to-image generator to generate an image as the visual cue. LLMs, pre-trained on massive datasets, have shown human-level performance on the tasks described above through prompting. This is because LLMs are good at controllable text generation [17] and at following instructions [16]. With proper prompts, these models can solve the tasks in zero-shot or few-shot setups. We use a zero-shot setup for generating both verbal and visual cues.

We detail the pipeline through an example in Fig. 1, where we need to generate cues for the German word flasche, which means bottle. The keyword that sounds like the word is flashy. Using the keyword and the meaning of the word, we create the prompt: "Write a short, catchy sentence that connects flashy and bottle." Additionally, we constrain verbal cues to start with "Imagine", for two reasons. First, the verbal cues in the previous study [9] are in that format; since we are trying to answer whether we can achieve human-level verbal cues, we match the format. Second, we exploit the grammatical pattern that follows the word "Imagine": it is usually followed by a noun or gerund, and we found that the generated verbal cues then contain fewer ambiguous pronouns, which makes the cues more descriptive. This feature is key to linking the text generator and the text-to-image generator within the same pipeline. Using the prompt, our text generator, GPT-3 [6] (text-davinci-003, temp=0.5), generates the verbal cue. Then, we reuse the verbal cue, with the word "Imagine" removed, as the prompt for our text-to-image generator, DALL-E 2 [15]. One can freely choose any LLM to automatically generate these verbal and visual cues. We present the gray region in Fig. 1 to the participant as learning content.
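A minimal sketch of the two-step generation, using the pre-1.0 openai Python package that matches the era of text-davinci-003, is shown below. The prompt follows the flasche/flashy example above; the max_tokens value and image size are our assumptions, and error handling is omitted.

```python
import openai  # pip install openai==0.28 (pre-1.0 API)

openai.api_key = "sk-..."  # set your API key

def generate_cues(keyword, meaning):
    """Step 1: verbal cue via GPT-3; step 2: visual cue via DALL-E 2."""
    prompt = (f"Write a short, catchy sentence that connects {keyword} "
              f"and {meaning}. Start the sentence with 'Imagine'.")
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, temperature=0.5, max_tokens=60)
    verbal_cue = completion.choices[0].text.strip()

    # reuse the verbal cue, minus the word "Imagine", as the image prompt
    image_prompt = verbal_cue.removeprefix("Imagine").strip()
    image = openai.Image.create(prompt=image_prompt, n=1, size="512x512")
    return verbal_cue, image["data"][0]["url"]

print(generate_cues("flashy", "bottle"))
```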
## 3 Experimental Evaluation

In this section, we detail our experiments on presenting different content to different participants, exploring whether automatically generated verbal and visual cues are effective in vocabulary learning.

### 3.1 Experimental Design

In the experiment, participants learn 36 German words and are tested on recalling both the German word (generation) and its English meaning (recognition). The words are split into three sets, which means that each participant goes through the learning, recognition, and generation cycle three times. Words in each set are randomly shuffled for each participant. At the end of the experiment, we also ask participants to rate the helpfulness of the cues.

#### 3.1.1 Learning and Testing

We provide each participant with both instructions on the study and the content that helps them learn the words; see Section 3.2 for details. Each word has a 30-second limit for the participant to memorize it, and the participant can choose to move on to the next word after 15 seconds. After 30 seconds, we automatically move on to the next word. Each German word is pronounced twice, 2 seconds and 7 seconds after being displayed. We show a timer to participants to make them aware of the time remaining for each word.

Participants have 15 seconds for both recognition and generation during testing. To avoid confusion between the two tests, we provide instructions such as "What is this in English?" and "What is this in German?". For generation, we also ask participants to type a, o, u, s instead of ä, ö, ü, ß. We show a timer to participants as well. Words in both tasks are presented in randomized order.

#### 3.1.2 Participants

We recruit participants from Amazon Mechanical Turk [2]. We require participants to be native English speakers with no German language experience. Considering that the experiment takes about 40 minutes, we paid each participant $7.25 and added a bonus of $2.75 for those who scored over 70% on the final test. The bonus encourages participants to do their best. However, we acknowledge that some participants may cheat on the tests to achieve a high score by using an external dictionary, over which we have no control.

#### 3.1.3 Web Interface

We implement a React web application as our participant interface, designed based on the previous study [19]. We place an IRB-approved consent form on the front page, and only participants who agree can participate in the experiment; the form explains in detail how the experiment is structured. We also show an example with a German word not in our evaluation set to clarify the procedure to participants. We collect metadata on the time spent during both learning and testing, along with the responses, to further investigate participant behavior.

### 3.2 Experimental Conditions

We first divide participants into two groups based on how the keyword was generated: automatically (auto-cue) and manually (manual-cue). Among the many combinations of verbal and visual cues that can be presented to participants, we choose conditions that enable both intra- and inter-group comparisons. We recruit a total of 80 participants for our study, with 20 in each condition. Fig. 2 shows an example of how the content is displayed under the different conditions in our web interface.

For intra-group comparisons, we further divide the auto-cue group into three conditions: Condition I is provided only with the TransPhoner-generated keyword, Condition II is provided with the keyword and the verbal cue generated by our pipeline, and Condition III is provided with the keyword and both the verbal and visual cues generated by our pipeline. For the inter-group comparison, we provide both the auto-cue group and the manual-cue group with the information in Condition II.
We note that the previous study [19] compared the groups under Condition I, i.e., without the verbal cues that were originally presented with the manually generated keywords [9]. The manually generated verbal cue and keyword should be considered as a whole, since the keyword might have been chosen to provide the verbal cue with the best imageability among many keyword candidates.

Figure 2: A snapshot of our web interface shown to experiment participants.

We refer to these four conditions as Auto-I, Auto-II, Auto-III, and Manual-II. The instructions for each condition are shown in Table 1. We use the same instructions for Condition I as Savva et al. Our instructions for Condition II tell participants to form a mental image of the scene specified in the verbal cue. Our instructions for Condition III tell participants to remember the image, which is based on the verbal cue describing a specific scene.

Table 1: Cues and instructions we used for different experimental conditions.

| Cond. | Keyword | Verbal | Visual | Instruction |
|---|---|---|---|---|
| I | yes | no | no | Imagine a visual scene connecting the given keyword with the English meaning, and the sound of the German word. |
| II | yes | yes | no | Imagine a specific scene described in the verbal cue that connects the given keyword with the English meaning, and the sound of the German word. |
| III | yes | yes | yes | Remember the image by following the verbal cue that connects the given keyword with the English meaning, and the sound of the German word. |

### 3.3 Evaluation Metrics

We use different metrics to score recognition and generation. For recognition, we use the cosine similarity between the word embeddings [4] of the correct answer and of the response. We also consider responses that miss "to" for to-infinitives to be correct. Unlike recognition, generation by a novice German learner is bound to the orthographic features of the vocabulary. Therefore, following previous studies [19], we score generation with a standardized Levenshtein distance (normalized by length and subtracted from 1). We also ask participants to evaluate the helpfulness of the cues using a 5-point Likert scale, which is administered along with the entire set of 36 words and their cues.
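Both scores can be computed in a few lines of code; the sketch below shows one plausible reading of the metrics (the embeddings are assumed to come from a pretrained model such as fastText [4], and the Levenshtein score is taken as 1 minus the length-normalized edit distance).

```python
import numpy as np

def recognition_score(answer_vec, response_vec):
    """Cosine similarity between the embeddings of answer and response."""
    a, b = np.asarray(answer_vec, float), np.asarray(response_vec, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def generation_score(answer, response):
    """1 - normalized Levenshtein distance; 1.0 is a perfect reproduction."""
    m, n = len(answer), len(response)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(m + 1), np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = int(answer[i - 1] != response[j - 1])
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + cost)   # substitution
    return 1.0 - d[m, n] / max(m, n)

print(generation_score("flasche", "flashe"))  # one deletion -> ~0.857
```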
### 3.4 Results and Discussion

After excluding participants who did not understand the experiment properly, such as those who wrote down the keyword when recalling the English meaning, we have a total of 72 participants: Auto-I (20) with an average age of 25.4 years (SD = 2.3), Auto-II (17) with an average age of 24.2 years (SD = 1.7), Auto-III (18) with an average age of 24.8 years (SD = 1.6), and Manual-II (17) with an average age of 25.3 years (SD = 1.1). Fig. 3 shows per-participant experimental data in box plots, averaged over the 36 German words. Learning time is the time spent memorizing a word, while testing time is the average time spent on recognition and generation of the word. Similarly, the combined score is the average of the recognition and generation scores. Learning time, testing time, and the Likert scale are normalized by their maximum values. The median time spent on learning was 19.8, 18.9, 18.6, and 19.2 seconds for the four conditions, out of the 30-second limit, which may suggest that the cognitive load across the conditions is similar. The median time spent on testing, i.e., the average time spent on recognition and generation, was 8.85, 9.75, 8.7, and 7.95 seconds out of the 15-second limit. The median of the 5-point Likert scale was 4.2, 3.95, 4.25, and 4.4.

Now, we analyze the combined score based on the per-word combined score, as shown in Fig. 4. We perform one-tailed Welch's t-tests, assuming unequal variances, on the hypotheses that one condition is better than another. We set our level of significance to 5%. We detail each hypothesis below. Cases A, B, and C in Fig. 4 mark words whose pipeline-generated content we examine for qualitative analysis.

Figure 3: Box plots of per-participant data for each experimental condition.

#### 3.4.1 Auto-I vs. Auto-II: Does a verbal cue help learning?

We hypothesize that Auto-II, with additional verbal cues, will result in better recognition and generation scores than Auto-I, which uses only keywords. We define our null hypothesis ($H_{0}$) and alternate hypothesis ($H_{a}$) as follows:

* $H_{0}$: $\mu_{Auto-II}\leq\mu_{Auto-I}$
* $H_{a}$: $\mu_{Auto-II}>\mu_{Auto-I}$

A right-tailed test shows no significant effect of verbal cues, $t(33)=-1.79,p=0.96$; we cannot reject $H_{0}$. On the contrary, a left-tailed test shows statistical significance in favor of the keyword-only condition, $t(33)=1.79,p=0.04$. This result can be explained by several factors: the participants might have done rote rehearsal instead of building links as instructed in Table 1. Moreover, participants may come up with their own verbal cues that are more memorable than the automatically generated ones. Being personalized by construction, participants' own verbal cues may fit each individual's experience better.

#### 3.4.2 Auto-II vs. Manual-II: Are automated verbal cues effective?

We expect Manual-II to be an upper bound for Auto-II, since the former cues are generated by experts in psycholinguistics. Therefore, we define our null hypothesis and alternate hypothesis as follows:

* $H_{0}$: $\mu_{Manual-II}\leq\mu_{Auto-II}$
* $H_{a}$: $\mu_{Manual-II}>\mu_{Auto-II}$

A right-tailed test shows that there is no significant difference between the two conditions, $t(24)=-0.32,p=0.62$; we cannot reject $H_{0}$. In Fig. 4, we show three words where participants perform better in the Auto-II condition than in Manual-II (case A) and three where the opposite holds (case B). Case A in Table 2 shows that auto-generated cues can be more memorable than manual cues even when they contain a grammatical error (risen should be raised) or are not realistic (a Reuben sandwich calling your name). Case B, on the other hand, contains keywords that are infrequently used (Triton, frizzy) or cues that are hard to imagine (a wagon with stories). This result implies that although we can automatically generate high-quality verbal cues, choosing appropriate keywords remains crucial. Therefore, we need to add keyword generation to the pipeline and evaluate the quality of both the generated keywords and the verbal cues.

Figure 4: Per-word combined score for all four experimental conditions, with three cases highlighting some words that work especially well with certain cues.

Table 2: Examples of automatically and manually generated verbal cues. Keywords are represented in italics and meanings in bold in the original rendering.

| Case | Word | Auto | Manual |
|---|---|---|---|
| A | Treten | Imagine stepping into treason, a treacherous path that can never be undone. | Imagine you step on a stair tread. |
| A | Rasen | Imagine a risen lawn that is lush and green! | Imagine your lawn covered in raisins. |
| A | Rufen | Imagine Reuben calling out your name! | Imagine you call a friend to put a new roof on a cottage. |
| B | Streiten | Imagine Triton and his trident quarreling with the waves. | Imagine you quarrel about the Menai straits. |
Figure 4: Per-word combined score for all four experimental conditions, with three cases highlighting words that work especially well with certain cues.

Table 2: Examples of automatically and manually generated verbal cues. A keyword is represented in italic, while a meaning is in bold.

Case | Word | Auto | Manual
---|---|---|---
A | Treten | Imagine stepping into treason, a treacherous path that can never be undone. | Imagine you step on a stair tread.
A | Rasen | Imagine a risen lawn that is lush and green! | Imagine your lawn covered in raisins.
A | Rufen | Imagine Reuben calling out your name! | Imagine you call a friend to put a new roof on a cottage.
B | Streiten | Imagine Triton and his trident quarreling with the waves. | Imagine you quarrel about the Menai straits.
B | Sagen | Imagine a wagon full of stories just waiting to be told! | Imagine you tell someone sago is good for them.
B | Friseur | Imagine a hairdresser who can tame even the most frizzy hair! | Imagine your hairdresser inside a freezer.
C | Nehmen | Imagine Newman taking the initiative to take action! | Imagine you take a name in your address book.
C | Brauchen | Imagine needing to fix a broken heart. | Imagine brokers need much experience.

#### 3.4.3 Auto-II vs. Auto-III: Does a visual cue help learning?

We hypothesize better performance from Auto-III, which uses additional visual cues, than from Auto-II. Therefore, we define our null and alternate hypotheses as follows:

- $H_{0}$: $\mu_{Auto-III}\leq\mu_{Auto-II}$
- $H_{a}$: $\mu_{Auto-III}>\mu_{Auto-II}$

A right-tailed test shows that there is no significant difference between the two conditions, $t(32)=0.39, p=0.35$; we cannot reject $H_{0}$. In Fig. 4, we show three words where participants perform better in the Auto-III condition than in Auto-II (case B) and two where they do not (case C). Case B shows that Auto-III, which adds visual cues on top of Auto-II, performs similarly to Manual-II. Considering the previous comparison, in which Auto-II scored lower than Manual-II, we see that Auto-III does somewhat outperform Auto-II. Therefore, we can conclude that visual cues help participants build the imagery link to some degree.

For a more qualitative analysis, Fig. 5 shows visual cues generated by our pipeline. Fig. 5 (a-c) shows that visual cues may be helpful in cases where keywords lack imageability or are not frequently used (Triton, frizzy), or where the auto-generated verbal cue is hard to imagine (a wagon with stories). However, as shown in case C, visual cues for abstract words (to take, to need) do not help much. Fig. 5 (d-e) shows that in these cases the generated image is not descriptive enough to facilitate the imagery link. Interestingly, the Likert score was higher for Auto-III than for Auto-II on every word except one. This implies that participants find additional visual cues helpful. However, we cannot create effective visual cues for every word; generating descriptive visual cues, especially for abstract words, remains a challenging task.

Figure 5: Examples of visual cues generated by our pipeline in cases where they are helpful to participants (case B: (a) Streiten, (b) Sagen, (c) Friseur) and cases where they are not (case C: (d) Nehmen, (e) Brauchen).

## 4 Conclusions and Future Work

In this paper, we explored using large language models to generate verbal and visual cues for keyword mnemonics. A preliminary human experiment suggested that, despite showing some promise, this approach has limitations and cannot yet match the performance of manually generated cues.

There are many avenues for future work. First, we need a larger-scale experiment in a real lab study, which would provide a controlled environment to test both short-term and long-term retention. Since we only tested short-term retention, it is possible that no approach can significantly outperform the others on this measure. We also need more input from psycholinguistics on how to constrain the time spent on learning and testing. By conducting the research in a more controlled environment, we can use additional information (e.g., demographics, language level) to conduct a deeper analysis of the results.
We do clarify that using Amazon's Mechanical Turk to conduct experiments is standard in prior work, which is part of the reason why we chose this experimental setting. To track long-term retention, we would likely have to resort to knowledge tracing models that handle either memory decay [11] or open-ended responses [14]. Second, we can extend our pipeline by also generating the keyword automatically instead of using TransPhoner-generated keywords, which may make our approach even more scalable. One important aspect that must be studied is how to evaluate the imageability of keywords and of verbal cues that contain both keyword and vocabulary, which remains challenging. Third, we can generate personalized content for each participant: we could supply the text generator with topics a learner is interested in and use them when generating a verbal cue. Moreover, we could generate a story that takes all words into account. It is also possible to generate verbal cues in the L2, which may help learners by providing even more context. Fourth, instead of the pronunciation of the word, we can use other features of the language to generate verbal cues. For example, when learning Mandarin, memorizing Chinese characters is as important as learning how to pronounce the word. The Chinese character 休 means rest, which is xiū in Mandarin. The character is a compound ideograph, a combination of a person (人) and a tree (木), representing a person resting against a tree. Combined with a keyword, shoe, for example, we could accomplish two goals with one verbal cue: "A person is resting by a tree, tying up their shoe." In this way, we can also make visual cues more descriptive for abstract words.

## 5 Acknowledgements

The authors thank the NSF (under grants 1917713, 2118706, 2202506, 2215193) for partially supporting this work.

## References

* [1] Ahn, L.v.: Duolingo, https://www.duolingo.com
* [2] Amazon: Amazon Mechanical Turk, https://www.mturk.com
* [3] Atkinson, R.C., Raugh, M.R.: An application of the mnemonic keyword method to the acquisition of a Russian vocabulary. Journal of Experimental Psychology: Human Learning and Memory 1(2), 126 (1975)
* [4] Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, 135–146 (2017)
* [5] Brahler, C.J., Walker, D.: Learning scientific and medical terminology with a mnemonic strategy using an illogical association technique. Advances in Physiology Education 32(3), 219–224 (2008)
* [6] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877–1901 (2020)
* [7] Carrier, M., Pashler, H.: The influence of retrieval on retention. Memory & Cognition 20, 633–642 (1992)
* [8] Ebbinghaus, H.: Memory: A contribution to experimental psychology. Annals of Neurosciences 20(4), 155 (2013)
* [9] Ellis, N.C., Beaton, A.: Psycholinguistic determinants of foreign language vocabulary learning. Language Learning 43(4), 559–617 (1993)
* [10] Elmes, D.: Anki, http://ankisrs.net
* [11] Ghosh, A., Heffernan, N., Lan, A.S.: Context-aware attentive knowledge tracing. In: Proc. ACM SIGKDD. pp. 2330–2339 (2020)
* [12] Larsen, D.P., Butler, A.C., Roediger III, H.L.: Repeated testing improves long-term retention relative to repeated study: a randomised controlled trial.
Medical Education 43(12), 1174–1181 (2009)
* [13] Leitner, S.: So lernt man lernen. Herder (1974), https://books.google.com/books?id=opWFRAAACAAJ
* [14] Liu, N., Wang, Z., Baraniuk, R., Lan, A.: Open-ended knowledge tracing for computer science education. In: Conference on Empirical Methods in Natural Language Processing. pp. 3849–3862 (2022)
* [15] OpenAI: DALL-E 2, https://openai.com/dall-e-2
* [16] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
* [17] Prabhumoye, S., Black, A.W., Salakhutdinov, R.: Exploring controllable text generation techniques. arXiv preprint arXiv:2005.01822 (2020)
* [18] Reddy, S., Labutov, I., Banerjee, S., Joachims, T.: Unbounded human learning: Optimal scheduling for spaced repetition. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1815–1824 (2016)
* [19] Savva, M., Chang, A.X., Manning, C.D., Hanrahan, P.: TransPhoner: Automated mnemonic keyword generation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 3725–3734 (2014)
* [20] Siriganjanavong, V.: The mnemonic keyword method: Effects on the vocabulary acquisition and retention. English Language Teaching 6(10), 1–10 (2013)
* [21] Sutherland, A.: Quizlet, http://quizlet.com
* [22] Ye, J., Su, J., Cao, Y.: A stochastic shortest path algorithm for optimizing spaced repetition scheduling. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. pp. 4381–4390 (2022)
* [23] Zylich, B., Lan, A.: Linguistic skill modeling for second language acquisition. In: LAK21: 11th International Learning Analytics and Knowledge Conference. pp. 141–150 (2021)
# Task and Motion Planning in Hierarchical 3D Scene Graphs

Aaron Ray∗,1, Christopher Bradley∗,2, Luca Carlone1, Nicholas Roy2 * denotes equal contribution 1 LIDS, MIT, Cambridge, MA 02142, USA ({aaronray<EMAIL_ADDRESS> 2 CSAIL, MIT, Cambridge, MA 02142, USA ({cbrad<EMAIL_ADDRESS>

###### Abstract

Recent work in the construction of 3D scene graphs has enabled mobile robots to build large-scale hybrid metric-semantic hierarchical representations of the world. These detailed models contain information that is useful for planning; however, how to derive from a 3D scene graph a planning domain that enables efficient computation of executable plans is an open question. In this work, we present a novel approach for defining and solving Task and Motion Planning problems in large-scale environments using hierarchical 3D scene graphs. We identify a method for building sparse problem domains that scale to large scenes, and propose a technique for incrementally adding objects to that domain during planning to avoid wasting computation on irrelevant elements of the scene graph. We test our approach in two hand-crafted domains as well as two scene graphs built from perception, including one constructed from the KITTI dataset. A video supplement is available at https://youtu.be/63xuCCaN0I4.

## I INTRODUCTION

We aim to enable an autonomous agent to solve large-scale Task and Motion Planning (TAMP) problems in real-world environments. To solve a TAMP problem, an abstract planning domain is needed that efficiently and accurately represents the robot's environment and the available actions. Over the past several years, significant progress has been made in generating hybrid metric-semantic representations of the world using 3D scene graphs. One such example is the Hydra scene graph [12], which jointly represents low-level information, such as scene geometry and robot trajectories, and higher-level information, such as semantically annotated object instances, places, rooms, and buildings. This environmental abstraction lends itself well to large-scale TAMP problems, as it stores both the higher-level abstractions, like objects and the connectivity of regions, needed for task planning, and the low-level metric information required to check the kinematic feasibility of different actions [12]. However, how best to construct a planning domain from the 3D scene graph representation remains an open question.

The trend in scene graph development has been toward encoding an increasing amount of information in the hierarchical representations. Such detailed information may be crucial to the success of a planner: the more faithfully any planning domain (regardless of the planning system or formalism) represents the real world, the more likely a found solution will actually be executable by the robot. Moreover, increased representational capacity of the planning domain widens the range of robot goals that can be specified, increasing flexibility and generality. However, as a planning problem instance grows in the number of objects, so too does the complexity of finding a plan. TAMP is PSPACE-hard [22], so problems can become computationally intractable very quickly as the sizes of the state and action spaces grow [7]. To avoid creating intractable planning problems when converting a 3D scene graph into a planning problem, it is critical to determine which elements of the environment (i.e., which elements of the scene graph) are relevant.
Consider, for example, a robot responding to a Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) scenario, receiving instructions to inspect and potentially neutralize dangerous objects scattered across a large area, represented as a pre-built scene graph. The robot is able to pass near an object only after it has inspected and neutralized it, and, depending on the task, may be instructed to avoid particular regions entirely. Depending on the geometry of the scene and the specified goal, only a subset of these potentially dangerous obstacles and regions may ultimately be relevant to finding a plan. But, for a robot building a scene graph representation from perception, it is not at all obvious which elements of the scene graph should be added to a planning domain to ensure a valid plan can be found and executed. If we were to add _every_ potentially relevant element to our planning domain, we would be guaranteed to eventually find a plan which takes our robot to its goal, but at the expense of substantial computational complexity. On the other hand, if we are too aggressive in pruning away seemingly irrelevant elements, we may restrict ourselves to sub-optimal solutions, or fail to find a plan at all.

Previous approaches to the problem of inferring a task-relevant planning domain have relied on representations of connectivity in the scene graph to prune superfluous elements [1]. However, these efforts have been limited to very specific task planning problems, as the pruning approaches employed often remove information necessary for checking the geometric feasibility of plans, or implicitly limit the types of goals that can be specified. Alternative approaches for reducing the planning domain size attempt to learn the relevance of planning objects, then incrementally add objects to the domain according to the learned relevance score until the problem is solvable [18]. Unfortunately, this approach requires training on numerous similar planning problems, and is difficult to generalize to tasks at large scales.

In this work, we propose a novel approach to both enable and accelerate TAMP in large environments (Fig. 1). The first contribution of this work is the definition of a set of properties that a TAMP domain must possess to ensure that any plans returned by solving a problem actually meet the user's specifications. We then define a method for translating a Hydra scene graph [12] into a planning domain that meets these criteria. Our second contribution is the formulation of a sufficient condition for removing symbols from a planning problem while maintaining feasibility. This condition shows that, for example, many of the places in a complete scene graph can be ignored when formulating certain kinds of planning problems, greatly reducing planning times. This improvement relies on the definition of a hierarchical planner, which ignores certain elements of the scene at the highest level to accelerate discrete planning, while planning over the entire scene at lower levels to ensure constraints on the plan are not violated. The third contribution is a method to further accelerate planning by incrementally identifying relevant objects in the scene during search, according to how they affect the feasibility of certain abstract plans. In contrast to the places component of the domain, finding a small set of objects ahead of time to include in the planning instance, while guaranteeing that the problem remains feasible, is quite difficult.
Instead, each time we fail to solve a sub-problem in an abstract plan, we use the scene graph to identify which objects were responsible. We then augment our domain with those objects, and continue planning. We show the effectiveness of our approach across two hand-crafted domains, in addition to two scene graphs built from real perception.

## II Task and Motion Planning in 3D Scene Graphs

Figure 1: An illustration of how we derive our planning representation from a 3D scene graph generated from the KITTI dataset. In (A), we present three levels of the scene graph hierarchy, with the low-level Mesh layer on the bottom, the Places layer above that, and the Objects layer on top. In (B), we see an isometric view of this graph, giving a sense of the scale, while also plotting the higher-level Regions layer. In (C), we show a greatly simplified version of this scene, where the agent is given a task specified in logic. The agent is tasked with either visiting Place 6 while avoiding Place 1, or visiting Place 5. Given the geometry of the scene, the first clause in this disjunction is unsatisfiable. We also see that Place 5 is partially obstructed by a suspicious object, so the agent must consider either avoiding it (green trajectory) or inspecting and neutralizing the object (blue trajectory) before continuing to its goal. To solve problems like this one at large scales, we take advantage of the structure of the scene graph to accelerate planning.

TAMP jointly considers elements of high-level task planning [8, 13] and low-level motion planning [15] in an attempt to solve hybrid discrete/continuous, multi-modal planning problems [7].

### II-A Task and Motion Planning Preliminaries

A common formalism for encoding planning problems is the Planning Domain Definition Language (PDDL). In a PDDL problem, a _state_ $\mathcal{I}$ is a set of facts: instances of boolean functions called predicates $p(\bar{x})\in\mathcal{P}$, which are parameterized by tuples of objects $\bar{x}=[x_{1},...,x_{k}]$, $x\in\mathcal{O}$. Transitions between states are defined by actions $a(\bar{x})\in\mathcal{A}$ (also parameterized by objects), which are expressed as two sets of predicates: preconditions $\text{Pre}(a_{i})$ and effects $\text{Eff}(a_{i})$. An action's preconditions determine whether an instance can be applied from a particular state, and its effects define the sets of facts that are added ($\text{Eff}^{+}(a_{i})$) or removed ($\text{Eff}^{-}(a_{i})$) from the state. A planning _domain_ is composed of lifted sets of predicates $\mathcal{P}$ and actions $\mathcal{A}$, and a problem _instance_ $P=(\mathcal{P},\mathcal{A},\mathcal{O},\mathcal{I}_{0},\mathcal{G})$ combines a domain with an initial state $\mathcal{I}_{0}$ and a set of goal states $\mathcal{G}$, parameterized by objects $\mathcal{O}$. We will refer to objects in a PDDL problem as _symbols_ when they may be confused with physical objects from a scene graph.

One approach for representing a TAMP problem, which we use in this work, is an extension of PDDL called PDDLStream [6]. A PDDLStream problem instance $(\mathcal{P},\mathcal{A},\mathcal{S},\mathcal{O},\mathcal{I}_{0},\mathcal{G})$ represents the discrete search portion of a TAMP problem in PDDL, with the same representation of predicates, actions, objects, initial state, and set of goal states. In a TAMP problem, however, certain actions may have symbol parameterizations that are either cumbersome or intractable to add to a problem instance.
For example, consider a move action, which defines robot motion from one configuration to another, and which may have a precondition requiring the existence of a collision-free trajectory between those configurations. Adding trajectory symbols between all possible points in the configuration space to the planning domain is impossible, so instead we define sub-problems called _streams_ $s\in\mathcal{S}$ which can be solved to return symbols that (if a solution exists) certify the preconditions of the associated action. In the case of a trajectory, the relevant sub-problem/stream might be a motion planner which, if solved, returns a symbol representing a collision-free trajectory between two configurations that the robot can follow. We refer the reader to [6] for a detailed description of PDDLStream.

Solutions to PDDLStream problems take the form of a sequence of parameterized action instances $\pi=[a_{1}(\bar{x}_{1}),a_{2}(\bar{x}_{2}),...,a_{N}(\bar{x}_{N})]$ that define a plan, where the assigned parameters $\bar{x}$ satisfy each action's constraints [7]. From the parameters of an action, we can derive a _motion sequence_, which specifies how a robot executes a particular action. For example, from an action plan composed of a sequence of move actions, the corresponding motion sequence would be composed of the trajectories that were solved for by the motion-planning sub-problem and which are parameters of those move actions. Executing that sequence would involve multiple calls to a trajectory controller. We say that two motion sequences are equivalent if they result in the agent acting identically.

For any action sequence, there is a corresponding sequence of states $\mathcal{I}_{\pi}=[\mathcal{I}_{0},\mathcal{I}_{1},\mathcal{I}_{2},...,\mathcal{I}_{N}]$, leading from the initial state to a goal state, that can be constructed from each action's effects. For an action plan $\pi$, its corresponding state plan $\mathcal{I}_{\pi}$ is _valid_ if $\mathcal{I}_{i}\models\text{Pre}(a_{i+1})$ for $i=0,...,N-1$, and $\mathcal{I}_{N}\in\mathcal{G}$. Tasks specified in PDDL can be solved by a range of solvers [9, 11], and any state plan found by such a solver is valid by construction. A feasible planning problem is one for which there is a valid solution.
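To make the set-based semantics concrete, the following minimal sketch checks the validity of a state plan under these definitions, treating preconditions and the goal as conjunctions of positive ground facts for simplicity; the Action dataclass and the tuple fact encoding are our own illustration, not part of PDDLStream's API.

```python
from dataclasses import dataclass

# A ground fact is encoded as a tuple, e.g. ("AtPose", "c1").
Fact = tuple

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset      # Pre(a): facts that must hold to apply a
    eff_add: frozenset  # Eff+(a): facts added by a
    eff_del: frozenset  # Eff-(a): facts removed by a

def is_valid(plan: list, initial: frozenset, goal: frozenset) -> bool:
    """Check Pre(a_{i+1}) holds in I_i at every step, and the goal at the end."""
    state = initial
    for action in plan:
        if not action.pre <= state:      # precondition not satisfied in I_i
            return False
        state = (state - action.eff_del) | action.eff_add
    return goal <= state                 # goal facts hold in the final state
```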
### II-B Building 3D Scene Graphs from Perception

In most TAMP settings, the planning domain and problem instance are assumed to be given ahead of time. However, a robot operating in the real world must be able to build its own world model from perception, especially the symbols and predicates that describe world states. Recently, 3D scene graphs such as those implemented by Hydra [12] or S-Graphs [3] have emerged as promising hierarchical representations that can be built in real time onboard robots, providing abstractions that are useful for encoding planning domains.

Our definition of a 3D scene graph, directly based on Hydra [12], consists of several layers of increasing abstraction (see Fig. 1). Each layer consists of a collection of _nodes_ representing the location and other attributes of an element, with _edges_ connecting nodes within the same layer representing relative spatial constraints, and edges between different layers representing an inclusion relationship. The lowest layer of the hierarchy is a semantically annotated mesh of the scene geometry. The next layer contains objects identified by semantic image segmentation. The _places_ layer represents navigable regions of the environment based on semantic and geometric properties of the mesh. Places are clustered into groups based on geometric and semantic information, and these groups become nodes in the higher-level _regions_ layer (e.g., rooms in an indoor environment).

Previous work on 3D scene graphs has mainly focused on indoor use. These representations rely on the Generalized Voronoi Diagram (GVD) [16] to generate places as an abstraction of 3D spatial connectivity, which is not well suited for ground robot navigation. We instead use an alternate formulation of 2D places in our _navigable scene graph_. Each of these places represents a 2D polygon with consistent terrain classification, and is connected to adjacent places in the scene graph. As the resolution of the places is much coarser than the mesh resolution, planning over sequences of places can be much faster than planning over the mesh itself, while still retaining important geometric information. The learned metric-semantic method for extracting regions from places presented in Strader et al. [21] is more appropriate for these 2D places than the purely geometric, room-based region extraction originally presented in Hughes et al. [12]. Hydra and its region-based clustering extension [21] can construct this map representation in real time from RGB-D sensor data while accounting for odometry drift, enabling large-scale, consistent, and information-rich maps.

### II-C Inferring the Planning Domain from Scene Graphs

We introduce a method for deriving a planning problem instance from a Hydra scene graph to demonstrate the salient aspects of solving planning problems based on large-scale environments. In general, we generate six classes of predicates that may be relevant for scene graph planning:

1. Type information derived from the nodes of the graph, where each node type yields a unary predicate: (Configuration ?c), (Place ?p), (Object ?o), etc.
2. Agent/object predicates that define the state of the robot and objects: (AtConfig ?c), (AtPlace ?p), (AtRoom ?r), etc.
3. Connection predicates defined by edges within the same level of the graph: (Connected ?n1 ?n2).
4. Hierarchical predicates indicating edges connecting nodes of different levels of the graph: (PoseInPlace ?c ?p), (PlaceInRoom ?p ?r), etc.
5. Preconditions of actions that are certified by solving a stream's associated sub-problem. For the example move action, the predicate certified by the motion planner would contain the two configurations and the trajectory between them: (Trajectory ?c1 ?t ?c2).
6. Additional, problem-specific predicates defined by the user that can be used to specify goal states and problem constraints, such as which places to visit or which objects to collect.

As a running example for the remainder of this paper, we define an example problem using these predicates, motivated by CBRNE scenarios. In this "Inspection Domain", an agent can be commanded to visit or avoid certain places, and to inspect and neutralize objects that have been marked as suspicious. These suspicious objects cannot be passed until they have been inspected and neutralized. We therefore define the problem-specific predicates (VisitedPlace ?p), which indicates the current and past places the robot has been to, and (Safe ?o) or (Suspicious ?o), which describe an object. We define sub-problems for planning motion between two poses, sampling poses for inspecting and neutralizing objects, and sampling poses in a specific place.
Goal specifications in this domain can include positive or negated facts based on these predicates. The agent's available actions are to move between poses in connected places, and to inspect objects from appropriate poses (for simplicity we do not separate the inspect and neutralize actions). Any valid plan is composed of these actions, which at execution time are converted into a motion sequence of FollowPath($t_{i}$) and InspectObject($o_{i}$) primitives for paths $t_{i}$ and objects $o_{i}$. Places and objects are the most numerous nodes in the scene graph, so we need to ensure that solving tasks in our domain remains tractable as the number of places and objects grows. The complexity of dealing with places and objects has the largest implications for the move action, which we present here:

```
(:action move
  :parameters (?p1 ?p2 ?c1 ?c2 ?t)
  :precondition (and (Trajectory ?c1 ?t ?c2)
                     (PoseInPlace ?c1 ?p1)
                     (PoseInPlace ?c2 ?p2)
                     (Connected ?p1 ?p2)
                     (AtPose ?c1))
  :effect (and (AtPose ?c2)
               (not (AtPose ?c1))
               (VisitedPose ?c2)))

(:derived (AtPlace ?p)
  (exists (?pose) (and (AtPose ?pose) (PoseInPlace ?pose ?p))))
```

This action allows the agent to move between its current place $p_{1}$ and any place $p_{2}$ that is connected to $p_{1}$ in the scene graph, and updates the state to note the agent's current pose and that $p_{2}$ has been visited. We refer to this problem representation as the _direct encoding_ of the Inspection domain.

The direct encoding of the problem has one major drawback: moving between places that are not physically near each other necessitates chaining together potentially many move actions, which requires motion planning in many short segments between neighboring places. For simple navigation, planning in short segments can actually aid the efficiency of long-distance motion planning, as we will discuss in Sec. III-C. However, relying on a general task planner to construct long-horizon action sequences, any segment of which may be invalidated by the low-level geometry, leads to very inefficient planning in practice. In the Inspection domain, for example, solving a motion-planning sub-problem for a trajectory requires accounting for Suspicious objects that may block the way. However, there is no a priori logical connection in the PDDL problem between a trajectory and the objects that might interfere with it. Therefore, choosing the right object to inspect in order to unblock a path is difficult for the task planner. The longer the sequence of move actions needed to reach a goal state, the more potential plans the solver must consider. This scales poorly both in the size of the environment and as we consider increasingly complex goal specifications (Sec. IV).
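To illustrate the role of a stream in this encoding, the following sketch shows the general shape of a sampler that could back the (Trajectory ?c1 ?t ?c2) fact; the straight-line placeholder planner and the exact interface are illustrative assumptions rather than the PDDLStream API.

```python
import itertools

def plan_path(c1, c2, scene_graph, seed=0):
    """Placeholder motion planner: a straight-line path between two 2D
    configurations. A real implementation would plan around the scene
    geometry (e.g., with RRT) and return None when no path exists."""
    steps = 10
    return [tuple(a + (b - a) * k / steps for a, b in zip(c1, c2))
            for k in range(steps + 1)]

def trajectory_stream(c1, c2, scene_graph):
    """Stream backing (Trajectory ?c1 ?t ?c2): lazily yields trajectory
    symbols between two configurations until the planner is exhausted."""
    for seed in itertools.count():
        path = plan_path(c1, c2, scene_graph, seed=seed)
        if path is None:
            return        # stream exhausted; no (more) certifying symbols
        # Each yielded tuple binds the stream output ?t; the solver then
        # adds the certified fact ("Trajectory", c1, path, c2) to the problem.
        yield (path,)
```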
## III Scalable Scene Graph Planning

Our objective is both to enable, and to reduce the computational burden of, solving TAMP problems in large environments. We emphasize three main criteria in our development of a DSG planning framework:

1. Generating plans that we know can be executed.
2. Preserving the set of goals that can be specified and solved.
3. Planning at large scale.

Solving problem instances with the direct encoding described above is computationally intractable for large scenes. To address this potential intractability, we first define the characteristics of a planning domain that, if met, enable sparsification while still meeting the criteria listed above. Next, we propose a method for constructing a simplified planning domain that has these characteristics and enables planning in large scenes. Critically, our method depends on factoring the task and motion planning problem into separate task, navigation, and motion planning problems, aligned with different layers of the scene graph hierarchy. Finally, we describe our approach for planning, using feedback from failed attempts to solve sub-problems to add facts to our domain.

### III-A Simplifying Problem Instances

One method for reducing the complexity of a planning problem is to shorten the depth and reduce the branching factor of the search that produces a plan. In the case of the direct encoding of our Inspection Domain, one way to do this would be to prune Place symbols that are not relevant to the problem instance from the planning domain, and to redefine the move action to allow travel over longer distances. This pruning would remove possible regions through which the robot might travel and potentially reduce the depth of search required to find a plan, thus simplifying the search problem. However, identifying symbols that do not impact the solution is in general as hard as solving the planning problem itself, and naively removing places from a problem instance might render the problem as encoded in Section II-C infeasible. In this section, we show that slight modifications to the domain and the motion planning sub-problem lead to straightforward identification of symbols that can safely be removed from the planning problem. We begin by defining a set of symbols that are redundant for a particular goal specification.

###### Definition 1 (Redundant Symbol).

For a set of domain actions $\mathcal{A}$ and specific goal $\mathcal{G}$, a symbol x is redundant if both of the following hold:

1. For every valid plan $\pi$ where x parameterizes an action, there is another valid plan $\pi^{\prime}$ with an equivalent motion sequence, where x is not an action parameter;
2. No action precondition or goal, expressed in negative normal form, contains a universal quantifier that can be parameterized by x.

The intuition behind this notion of redundancy is that 1) if any plan involving the symbol yields a motion sequence that can be rewritten without the symbol, the symbol is redundant, and 2) if we solve a planning instance from which a redundant symbol has been removed, we would like to know that the plan is still valid in the original problem. The requirements for a symbol to be redundant are quite strong (_every_ plan that uses a symbol must have an alternate plan that does not use the symbol and still results in the same motion sequence), but we will later show that a slight modification to the movement action lets many places have this property. We now prove that we do not lose anything by removing a redundant symbol.

###### Proposition 1 (Removing a Redundant Symbol Preserves Feasibility).

Consider a feasible planning instance $R=(\mathcal{P},\mathcal{A},\mathcal{S},\mathcal{O},\mathcal{I}_{0},\mathcal{G})$. For a redundant symbol $x\in\mathcal{O}$, we define a related instance $R^{\prime}=(\mathcal{P},\mathcal{A},\mathcal{S},\mathcal{O}^{\prime},\mathcal{I}_{0}^{\prime},\mathcal{G}^{\prime})$ where $x$ has been removed, i.e., $\mathcal{O}^{\prime}=\mathcal{O}\setminus x$ and $\mathcal{I}_{0}^{\prime}$ contains all facts in $\mathcal{I}_{0}$ except those parameterized by $x$, and similarly for $\mathcal{G}^{\prime}$. Let $\Pi_{R}$ denote the set of valid plans for $R$. Then, $\Pi_{R^{\prime}}\subseteq\Pi_{R}$ and $\Pi_{R^{\prime}}\neq\emptyset$.

###### Proof.

Consider $\pi\in\Pi_{R}$.
If $\pi$ does not contain any actions parameterized by $x$, then the same plan $\pi$ is also a valid solution for $R^{\prime}$. Consider the alternative case, where $\pi$ does contain an action parameterized by $x$. By Definition 1, there is another plan $\pi^{\prime}$ with an equivalent motion sequence not parameterized by $x$, which is a valid solution for $R^{\prime}$. Now that we have shown that $\Pi_{R^{\prime}}$ is not empty, we need to show that any valid plan for $R^{\prime}$ is valid for $R$. Consider plan $\pi=[a_{1},...,a_{N}]\in\Pi_{R^{\prime}}$ with corresponding state plan $\mathcal{I}_{\pi}=[\mathcal{I}_{0},...,\mathcal{I}_{N}]$. If the addition of facts $\mathcal{F}$ parameterized by $x$ makes $\pi$ invalid, then there must exist a state $\mathcal{I}_{k}$ such that $\mathcal{I}_{k}\cup\mathcal{F}\not\models\text{Pre}(a_{k+1})$, which means that $a_{k+1}$ is parameterized by a symbol that did not exist in $R^{\prime}$. Only a universal or existential quantifier in $\text{Pre}(a_{k+1})$ can cause $a_{k+1}$ to be parameterized by an additional symbol. Adding additional facts cannot turn an existentially quantified formula from true to false, and by Definition 1, $a_{k+1}$ does not have any universal quantifiers in its precondition that can be parameterized by $x$. Thus $\pi$ must be valid for $R$. ∎

In the original direct domain encoding presented in Sec. II-C, there are no redundant places, since removing any place implies that no other plan can result in a trajectory going through that place, which may be necessary depending on the geometry of the scene. However, let us consider a domain with a more general move action: moveRelaxed. This action takes $N$ places as parameters and has as a precondition that a sub-problem has been solved for a trajectory that goes through each of these $N$ places (and possibly others not included in the set of $N$). Trajectory represents this trajectory.

```
(:action moveRelaxed
  :parameters (?p1 ... ?pN ?c1 ?t ?c2)
  :precondition (and (Trajectory ?p1 ... ?pN ?c1 ?c2 ?t)
                     (AtPose ?c1))
  :effect (and (AtPose ?c2)
               (not (AtPose ?c1))
               (VisitedPlace ?p1)
               ...
               (VisitedPlace ?pN)
               (VisitedPose ?c2)))
```

With these modifications, potentially many places become redundant. For any place that appears in a plan incidentally (i.e., not as part of the goal), there now exists an equivalent plan where these intermediate places do not appear. For example, imagine our robot begins in Place A and is tasked with (among other things) inspecting an object in Place C. A motion sequence corresponding to a plan to move from Place A to Place B, then from Place B to Place C, can be equivalent to a sequence generated from a plan to move from Place A to C directly, so it seems that we can remove Place B from the problem. Extrapolating this pruning approach to larger scenes and more complex goals has the potential to vastly reduce planning horizons. However, now consider that the robot was also instructed to avoid Place B. The ability to include a constraint on the goal states of the form (not (VisitedPlace B)) complicates the pruning process. Executing moveRelaxed from Place A to Place C may involve following a trajectory that takes the robot through Place B, even if the goal specifies that Place B should not be visited, whether or not Place B is in the problem domain.
Technically, this is still a valid solution to the planning problem, since Place B never appears as a parameter to the moveRelaxed action (and therefore (VisitedPlace B) is not an effect), but clearly the domain with a relaxed movement action does not fully capture how the user wants the robot to interact with the environment.

### III-B Execution-Consistency

To formalize the discrepancy between what happens when the robot executes a motion sequence and the constraints that we expect a planning problem to impose, we introduce the concept of a _verifier function_. A verifier function maps motion sub-sequences to sets of PDDL domain facts, and "verifies" which additional domain facts must implicitly be true as a result of the agent executing a motion sequence, even if actually adding these facts to the problem instance during solving is computationally undesirable. For example, if we want to consider whether an agent's trajectory avoids certain places, we can imagine a verifier that returns (VisitedPlace $p_{i}$) for each place $p_{i}$ that the continuous trajectory overlaps. Given a verifier $V$, the facts that hold at each step when executing a motion sequence may differ from those expected in the original plan. We denote the facts that would be added by such a verifier applied to the motion sequence associated with $a_{i}$ as $V(a_{i})$, and term this sequence of expanded states the $V$-extended state plan.

###### Definition 2 (V-Extended State Plan).

For an action plan $\pi=[a_{1},...,a_{n}]$, its corresponding state plan $\mathcal{I}_{\pi}=[\mathcal{I}_{0},...,\mathcal{I}_{n}]$, and hypothetical verifier function $V$, the V-extended state plan is defined by:

$\mathcal{I}^{\prime}_{1}=\mathcal{I}_{1}\cup V(a_{1})$

$\mathcal{I}^{\prime}_{k}=\mathcal{I}_{k}\cup\left(\mathcal{I}^{\prime}_{k-1}\setminus\text{Eff}^{-}(a_{k})\right)\cup V(a_{k})$

An extended state $\mathcal{I}^{\prime}_{k}$ is composed of the facts $\mathcal{I}_{k}$ in the initial plan, plus any extra facts that were present in the previous extended state $\mathcal{I}^{\prime}_{k-1}$ other than those removed by action $a_{k}$, plus any facts that would be returned by a verifier applied to action $a_{k}$. Informally, $\mathcal{I}^{\prime}_{k}$ is the state at step $k$ as experienced by the verifier. As discussed in Sec. II-A, any state plan found by a search algorithm is valid by construction. However, a state plan that is augmented with the extra facts produced by a verifier might not be, for example, if the goal specification requires that the robot avoid certain places in the domain. To address this problem, let us imagine a verifier $V_{place}$ that takes a motion sub-sequence $\mu$ and returns a VisitedPlace fact for each place that intersects the agent's position while executing $\mu$. For a place $p$ and a trajectory $t$ to be followed by the motion primitive $\texttt{FollowPath}(t)$, we denote by $p\cap t$ the section of $t$ that intersects $p$. We can then define the verifier as

$V_{place}(\mu)=\{\texttt{(VisitedPlace p)}\mid p\cap t_{i}\neq\emptyset\text{ for some }\texttt{FollowPath}(t_{i})\in\mu\}. \quad (1)$

If the motion sequence associated with the action plan would result in the agent visiting a place that we do not expect, then the $V_{place}$-extended state plan would include a VisitedPlace fact that may conflict with the goal.
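The following sketch implements $V_{place}$ from Eq. (1) and the extension of Definition 2, assuming places are 2D polygons (via shapely) and reusing the Action fields from the earlier sketch; the data layout is an illustrative assumption.

```python
from shapely.geometry import LineString, Polygon

def v_place(motion_subsequence, places):
    """V_place from Eq. (1): VisitedPlace facts for every place whose
    polygon intersects a FollowPath trajectory in the sub-sequence."""
    facts = set()
    for primitive, arg in motion_subsequence:      # e.g. ("FollowPath", t_i)
        if primitive != "FollowPath":
            continue
        path = LineString(arg)                     # t_i as a list of (x, y)
        for name, polygon in places.items():
            if path.intersects(polygon):
                facts.add(("VisitedPlace", name))
    return facts

def extend_state_plan(states, actions, verifier):
    """V-extended state plan of Definition 2:
    I'_k = I_k ∪ (I'_{k-1} \ Eff-(a_k)) ∪ V(a_k)."""
    extended = [set(states[0])]
    for I_k, a_k in zip(states[1:], actions):
        carried = extended[-1] - a_k.eff_del       # extra facts surviving a_k
        extended.append(set(I_k) | carried | verifier(a_k))
    return extended
```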
From this idea, we define the concept of execution-consistency, which requires that solutions to the planning problem remain valid after considering the facts from a verifier.

###### Definition 3 (Execution-Consistent).

A domain is execution-consistent with respect to verifier $V$ if, for every valid plan $\pi$, the V-extended state plan is valid.

A domain is trivially execution-consistent for the empty verifier $V(\cdot)=\emptyset$, as the extended state plan is equal to the original plan. A domain is also execution-consistent if the range of $V$ applied to each motion sub-sequence $\mu_{i}$ corresponding to action $a_{i}$ is limited to facts in $\text{Eff}(a_{i})$. In other cases, a domain can still be execution-consistent for a verifier that would introduce new facts, if the domain is carefully crafted. In defining a planning domain for any task, we seek to make it execution-consistent with respect to any defined verifiers.

In our example, we want to prevent the agent from entering places that it should not, and so would like to show that the Inspection domain is execution-consistent with respect to the verifier $V_{place}$. Recall that $V_{place}$ can only introduce new VisitedPlace facts. As VisitedPlace does not appear in any action preconditions, the only way for a VisitedPlace fact to render a valid state plan invalid is to conflict with the goal specification. Consider the set of places that must be avoided to satisfy some goal state: $\mathcal{P}_{avoid}=\{p\mid\texttt{(not (VisitedPlace p))}\in\mathcal{G}\}$. If a place in $\mathcal{P}_{avoid}$ can only be visited by an action that explicitly lists it in the action effects, then the domain will be execution-consistent with respect to $V_{place}$. This can easily be guaranteed by preventing the motion planner from generating plans that enter places $\mathcal{P}_{avoid}\setminus\mathcal{P}_{param}$, where $\mathcal{P}_{param}$ is the set of places that appear as parameters to the action. In other words, we define a motion planner that cannot enter any Place we might want to avoid unless that Place is given as a parameter to the moveRelaxed action. As a result, we can be sure never to visit a potentially illegal place without (VisitedPlace p) being an effect of the action.

This notion of avoiding potentially illegal places implies something important with respect to the motion planning problem. If we identify a place as redundant and remove it from the planning domain, the definition of redundancy requires that any plan that would have passed through this place before its removal can still be replicated. As a result, while the discrete PDDLStream planner does not need to be aware of pruned places, the motion planner we implement to solve for trajectories must still be able to plan paths through them. Our motion planner must be aware of the full scene graph, regardless of what is pruned from the PDDLStream problem instance. We address this further in Sec. III-C. With that in mind, we can now consider which specific places we can remove from the PDDLStream problem instance.

###### Proposition 2 (Redundant Places).

Consider a problem instance in the Inspection Domain with no quantifiers in the goal that can be parameterized by a place. A place $p$ is redundant if no facts parameterized by $p$ appear in the initial or goal states, or if (not (VisitedPlace $p$)) appears as a clause in the conjunctive normal form (CNF) of the goal specification.

###### Proof.
First, note that no actions in this domain have universal quantifiers, so we only need to check Definition 1.1 to show that a symbol is redundant. Consider a place $p$ such that (not (VisitedPlace $p$)) appears in the CNF of the goal. If $p$ parameterizes moveRelaxed, then (VisitedPlace $p$) is in the effects, violating the goal. Since moveRelaxed is the only action that can be parameterized by a place, no plan can be parameterized by $p$, and Definition 1.1 is trivially satisfied. Next, consider a place $p$ that does not parameterize any initial or goal facts. For any plan $\pi$ with an action parameterized by $p$, let $a_{k}^{\mathcal{P}}$ denote an action parameterized by places $\mathcal{P}$, including $p$. The plan $\pi^{\prime}$ in which $a_{k}^{\mathcal{P}}$ is replaced by $a_{k}^{\mathcal{P}\setminus p}$ is also valid, since the state plans $\mathcal{I}_{\pi}$ and $\mathcal{I}_{\pi^{\prime}}$ differ only by a (VisitedPlace $p$) fact, and no action preconditions or goals involve this fact. Since $p$ was not in $\mathcal{P}_{avoid}$, the command sub-sequence corresponding to $a_{k}^{\mathcal{P}}$ is also valid for $a_{k}^{\mathcal{P}\setminus p}$. Thus, for any $\pi$ parameterized by $p$, we can construct a $\pi^{\prime}$ that has an equivalent motion sequence but does not parameterize $p$, showing that $p$ is redundant. ∎

We have now identified a potentially large (depending on the sparsity and geometry of the given scene graph) set of Place symbols that are redundant for our instantiation of the Inspection domain. Removing these places from our task planner's domain, assuming the motion planner is still aware of them, enables our solver to find valid plans that are guaranteed to have also been valid in the un-pruned problem. Moreover, the ability to prune these elements does not restrict the types of goals we are able to specify to our agent, preserving expressivity while enabling planning at a larger scale. Our explicit method for defining our problem's initial state is as follows:

###### Remark 1 (Problem Initialization).

In light of Proposition 2, we only include the following places when instantiating a problem: 1) the initial place that the robot is in, and 2) any place that appears in the goal. A place $p$ that parameterizes a negated fact $(\texttt{not }(\texttt{VisitedPlace }p))$ appearing as a standalone clause in the CNF of the goal specification can also be removed. Note that if $(\texttt{not }(\texttt{VisitedPlace }p))$ appears in the goal, but not as a standalone clause in the CNF, we do include that place.

Any problem in the Inspection domain with this instantiation is guaranteed to return only plans that are execution-consistent with respect to the $V_{place}$ verifier discussed earlier, and as such, no plans need to be verified at runtime. A sketch of this initialization is given below. In implementing the resulting domain, there is a tradeoff between branching factor and required plan depth. Letting each move action be parameterized by $N$ places results in a search problem with a large branching factor and expensive task-planner preprocessing when running the search. In exchange, a single action can move through several places that appear in $\mathcal{P}_{avoid}$. Alternatively, we can restrict each move action to include only two places, those at the beginning and end points of the action, and preserve the set of solution trajectories. This comes at the expense of more numerous sub-problems and longer action plans, though we find this a worthwhile tradeoff in the domains we consider. We can mitigate this computational cost by using an intermediate abstract planner to accelerate motion planning, which we describe below.
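A minimal sketch of the initialization in Remark 1, assuming the goal has already been converted to CNF as a list of clauses, each a list of literals; the fact encoding is the same illustrative one used above.

```python
def initial_places(robot_place, goal_cnf):
    """Select the Place symbols to instantiate per Remark 1: the robot's
    place plus goal places, excluding places that appear only in a
    standalone (not (VisitedPlace p)) clause of the goal CNF."""
    places = {robot_place}
    for clause in goal_cnf:                  # clause: list of literals
        standalone = len(clause) == 1
        for negated, fact in clause:         # literal: (negated?, fact)
            if fact[0] != "VisitedPlace":
                continue
            if negated and standalone:
                continue                     # prunable per Proposition 2
            places.add(fact[1])
    return places

# Example goal: (VisitedPlace p2) AND (not (VisitedPlace p9))
goal = [[(False, ("VisitedPlace", "p2"))],
        [(True, ("VisitedPlace", "p9"))]]
print(initial_places("p0", goal))  # {'p0', 'p2'}: p9 is pruned
```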
### III-C Motion Planning Through Places

Critical to the success of our approach is the implementation of a motion planner that is capable of planning over large distances while avoiding specific regions, as specified by the task planner. Solving for a valid plan for a TAMP problem in a large environment requires solving for trajectories between potentially many different start and end poses, so the efficiency of the motion planner has a large effect on the overall runtime of the planner. The connectivity information between places in the scene graph is a useful tool for focusing the search for feasible trajectories over large distances, since each place represents a region of navigable space. However, the connectivity information in the scene graph does not account for robot kinematic constraints or necessary standoff distances from obstacles. Thus, though we leverage information from the scene graph where possible, we still need to run a kinematic motion planner.

To find a motion plan between two configurations $c_{1}$ and $c_{2}$, we first plan through the places layer of the complete, unpruned scene graph, using A* to find a sequence of places $[p_{1},...,p_{N}]$ with corresponding centers $[x_{1},...,x_{N}]$ such that $c_{1}$ is in place $p_{1}$, $c_{2}$ is in $p_{N}$, and each edge $(p_{i},p_{i+1})$ is present in the scene graph. Though we have pruned the PDDL planning domain to accelerate the discrete component of search, we retain the full information in the scene graph in order to generate trajectories. This is a much simpler search problem than using the PDDL domain of the full scene graph. Second, a kinematic planner plans a path through these positions. This path ignores obstacles that may lie along it and does not necessarily stay within the places on the reference path, so it is very fast to generate. For a robot with a kinematic bicycle model, this initial path might be a series of Dubins curves calculated between sequential place centers. Next, this path is checked for collisions against the geometry of all objects in the scene. Any segments of the trajectory that are rendered infeasible by collision with objects, or that overlap non-traversable places, are re-solved by a planner that considers the obstacles, such as RRT [14]. This two-phase approach allows fast solving of "easy" portions of the trajectory, while spending time searching for feasible solutions to the more difficult parts of the path. Better alignment between the edges present in the scene graph and kinematic feasibility for the robot leads to better performance of this heuristic.

This coupling of the motion planner with the higher-level discrete planner via the places layer of the scene graph gives us, in actuality, a three-level hierarchical planner: 1) an abstract discrete task planner for interacting with the environment using the PDDL domain based on the pruned scene graph described in Sec. III-A, 2) a navigation planner for moving from place to place in the places layer of the full scene graph, and 3) a low-level motion planner using the abstract plan and the geometry of the scene graph to produce kinematically feasible trajectories. This planner architecture aligns naturally with the scene graph hierarchy, and with this factorization, each planning problem we care about becomes considerably simpler. The top-level planner only needs the relevant subset of the places layer, so we get a speedup by eliminating the parts of the scene graph that are redundant and reducing the planning horizon. The abstract navigation planner only plans through the places layer to one sub-goal at a time, and so can take advantage of Euclidean distance heuristics. Finally, the fine-grained motion planner is guided by the solution of the plan through the places layer, and so does not suffer from the usual challenges of finding motion plans over large distances. Crucially, we are able to do this without restricting the types of goals we can specify to the planner.
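The two-phase structure might look like the following sketch, where the place-graph search uses networkx A* and the local repair planner is passed in as a stub; the helper names (segment_in_collision, rrt_repair) are illustrative assumptions.

```python
import networkx as nx

def dist(x, y):
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

def nearest_place(c, centers):
    return min(centers, key=lambda p: dist(centers[p], c))

def plan_motion(c1, c2, place_graph, centers,
                segment_in_collision, rrt_repair):
    """Two-phase motion planner: A* over the full places layer, a cheap
    reference path through place centers, then local repair of any
    segments invalidated by the scene geometry."""
    # Phase 1: discrete search over the complete (unpruned) places layer.
    p1, p2 = nearest_place(c1, centers), nearest_place(c2, centers)
    place_seq = nx.astar_path(
        place_graph, p1, p2,
        heuristic=lambda a, b: dist(centers[a], centers[b]))

    # Phase 2: cheap initial path through place centers, ignoring obstacles.
    waypoints = [c1] + [centers[p] for p in place_seq] + [c2]

    # Re-solve only the segments that collide with the full scene geometry.
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        segment = [a, b]
        if segment_in_collision(segment):
            segment = rrt_repair(a, b)   # e.g., RRT around the obstacles
            if segment is None:
                return None              # no feasible trajectory found
        path.extend(segment)
    return path
```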
### III-D Weakly Redundant Symbols and Object Pruning

We have demonstrated the ability to identify elements of a planning instance that are provably redundant when searching for a plan. However, there may be cases where planning could be made vastly more efficient by pruning elements of a problem that do not meet the strong condition of Definition 1. Consider, for example, the case where a robot is tasked with travelling across a large scene graph in the Inspection domain containing many suspicious objects. The robot can inspect and neutralize an obstacle to pass it safely but, depending on the geometry of the scene, there may exist a path to the goal that is completely free of any obstacles. Alternatively, some subset of the objects may need to be cleared to find safe passage. Regardless, none of the objects in the scene meet our definition of redundancy, because motion sequences that involve inspecting an object cannot be generated once the object is removed from the domain. The conditions for a symbol being redundant are quite strong: every trajectory must be preserved when removing the symbol. We now introduce a weaker notion of redundancy.

###### Definition 4 (Weakly Redundant Symbols).

A symbol x is weakly redundant if both of the following hold:

1. For every valid plan $\pi$ where x parameterizes an action, there is another valid plan $\pi^{\prime}$ where x is not an action parameter;
2. No action precondition or goal, expressed in negative normal form, contains a universal quantifier that can be parameterized by x.

In contrast to strongly redundant symbols, removing a weakly redundant symbol from a planning instance may limit the set of motion sequences that can be found. An object is weakly redundant if a valid plan can be found without that object being necessary to the plan (e.g., if the robot can find a path around the object in question). Because the applicability of this definition may depend on the geometry of the scene, it is more difficult to identify symbols with weak redundancy, though it potentially applies to a wider set of objects. Consider two sets of symbols $\mathcal{O}_{1}\subseteq\mathcal{O}_{2}\subseteq\mathcal{O}$. If there is a valid plan $\pi$ that is parameterized only by objects in $\mathcal{O}_{1}$, then the objects in $\mathcal{O}_{2}\setminus\mathcal{O}_{1}$ are weakly redundant and do not need to appear in the problem instance in order to find a valid plan. Unfortunately, identifying weakly redundant objects in the Inspection domain can be as hard as solving the original planning problem. However, we can still take advantage of the sparsity that exists in many scene graphs by using an incremental approach that identifies a subset of objects excluding (some) weakly redundant objects. We begin planning by including some subset of all objects, and attempt to solve the planning problem.
If this limited problem has a valid solution which is also a valid solution to the original problem, then we have found a plan (and have shown that the excluded objects were in fact weakly redundant). Otherwise, we incrementally add objects to the planning problem, and repeat (Algorithm 1). Identifying the initial set of objects $\mathcal{O}_{S}$ is not trivial. These are the objects that we are certain are neither redundant nor weakly redundant. In the Inspection domain, we have proven that we can identify Places from the scene graph which are not redundant, as described in Remark 1, so those are added to the problem. Any Object in the scene graph is potentially weakly redundant, depending on scene geometry, unless it is explicitly mentioned in the goal, so we only add those goal Objects to the initial problem instance. For general problems, there is a body of literature dedicated to identifying object relevance, either by reachability analysis [4, 5] or by learning to predict importance [19], which could also be used to augment this initial set.

We then initialize a problem instance with these objects, $\mathcal{O}_{I}=\mathcal{O}_{S}$. Let $\mathcal{O}_{R}$ denote the rest of the objects in the scene graph. Next, in accordance with the _adaptive_ algorithm in Garrett et al. [6], we optimistically assume that certain sub-problems are solvable, and identify candidate action plans which, if this assumption were true, would be valid plans. We refer to these candidates as plan skeletons. The set of plan skeletons for the problem instance involving objects $\mathcal{O}_{I}$ is generated by GetSkeletons (Line 7). SolveSubProblems (Line 8) attempts to solve the relevant sub-problems in a given candidate plan skeleton. If a sub-problem fails to be solved, we query the solver for feedback on which objects in the full set $\mathcal{O}_{R}$ are a potential cause of the failure. If all sub-problems were solved successfully, we return the full valid plan. Otherwise, we add one or more new objects to the problem with GetNewObjects, which considers the feedback about the failure modes that CheckSolution (Line 9) returned. This approach continues to incrementally add objects until a plan is found, and so is complete (assuming SolveSubProblems is complete and GetSkeletons returns arbitrarily long skeletons once all objects have been added).

Algorithm 1: Incremental Object Solver

```
 1: procedure IncrementalObjectSolver(A, S, I, G)
 2:   O_S <- GetRelevantObjects(O)     // objects relevant for non-collision reasons
 3:   O_I <- O_S
 4:   O_R <- O \ O_S
 5:   while |O_I| < |O| do
 6:     SkeletonInfo <- []
 7:     for k in GetSkeletons(A, S, O_I, G) do
 8:       T <- SolveSubProblems(k)
 9:       Feedback <- CheckSolution(T, O_R)
10:       SkeletonInfo.append(Feedback)
11:     end for
12:     if some plan pi in SkeletonInfo is valid then
13:       return pi
14:     else
15:       O_new <- GetNewObjects(SkeletonInfo)
16:       O_I <- O_I ∪ O_new
17:       O_R <- O_R \ O_new
18:     end if
19:   end while
20:   return INFEASIBLE
21: end procedure
```

Existing literature related to identifying irrelevant objects in a planning problem often focuses on ignoring or more efficiently encoding symbols based on careful analysis of the PDDL problem's _logical_ factorization [10, 5].
In contrast, our algorithm is well suited to address the geometric dependence of movement actions on objects that may render a trajectory infeasible. A movement action must avoid collision with _all_ objects in the environment, and it is difficult to show that an object cannot interfere with any plan and can be ignored without explicitly inspecting the geometry. Algorithm 1 can be seen as a way of lazily uncovering which of these objects do interfere with plans that would otherwise be valid. We take advantage of the hierarchy in the scene graph to implement GetNewObjects, which is tasked with identifying which new objects to add to the planning problem based on feedback from failed sub-problems. Each time we attempt to solve a motion plan, we produce a plan that avoids the objects that have been added to our planning domain (if one exists). Then, we check that plan against the full set of objects. If one of those objects is intersected, and we cannot find a plan that avoids intersecting any of the full set of objects, the intersected object is potentially relevant to the planning problem, and we add it to the planning domain. By adding this object to the planning problem, the planner can now consider inspecting and neutralizing it, thus introducing a new potential path to a goal state. We envision many potential extensions to this approach, such as learning parameters for these subroutines, which we do not explore in this work. In the next section we evaluate our planning approach in the Inspection domain.

## IV Evaluation

Figure 2: Three of the maps used for evaluation (not including the KITTI environment). Top Left: narrow alley map. Top Right: A simple 10x10 grid world map. Bottom: Scene graph built from real data collected by a robot in an office environment.

There are three primary axes of complexity for planning problems in large scene graphs that we are interested in investigating. One is the complexity of the goal specification, another is correlated to the scale of the environment, and the third is related to the geometry of the scene (and potential obstructions therein). In our experiments, we aim to identify which types of planning problems our planning approach is well suited for, considering these sources of complexity. To that end, we compare our encoding of the Inspection domain to the dense, direct encoding in a variety of different settings. We test on four map archetypes (Fig. 2): a synthetic small constrained alleyway, a synthetic 10x10 gridworld, a scene graph built from real data in an office environment comprising 557 Places and 28 Objects, and a much larger scene graph built from the KITTI dataset composed of 17861 Places and 1315 Objects (Figs. 1 and 2). For each environment, we test several different goal clauses across different variations in robot and object initial conditions. To randomize tasks across trials, we define a mechanism for sampling goal specifications according to an increasing number of clauses in Disjunctive Normal Form (DNF). More explicitly, we generate goals in DNF with N clauses, where each clause has K conjuncts. For example, if N = 2 and K = 3, the goal looks like this: (Or ($C_{1}$, $C_{2}$)), where we sample 3 atoms for each $C_{i}$, e.g., (And ((Visited P1), (Safe O4), (Not (Visited P9)))).
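As an illustration of this sampling scheme, here is a minimal Python sketch that emits goals of this form; the mix of Visited, Safe, and negated atoms, and the ratios between them, are our own assumptions chosen to match the examples above rather than the paper's generator.

```python
import random

def sample_goal(places, objects, n_clauses, k_atoms, rng=random):
    """Sample a DNF goal: an Or over n_clauses And-clauses of k_atoms literals each."""
    def atom():
        # Mix object-safety atoms with (possibly negated) place-visit atoms;
        # the 30%/50% ratios are illustrative, not taken from the paper.
        if objects and rng.random() < 0.3:
            return f"(Safe {rng.choice(objects)})"
        lit = f"(Visited {rng.choice(places)})"
        return lit if rng.random() < 0.5 else f"(Not {lit})"

    clauses = ["(And (" + ", ".join(atom() for _ in range(k_atoms)) + "))"
               for _ in range(n_clauses)]
    return clauses[0] if n_clauses == 1 else "(Or (" + ", ".join(clauses) + "))"

# Example: a goal with N = 2 clauses of K = 3 atoms each.
print(sample_goal([f"P{i}" for i in range(10)], ["O4"], n_clauses=2, k_atoms=3))
```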
### IV-A Scene Graph Size

Figure 3: This log-log plot compares the time (in seconds) to solve tasks of comparable complexity across different environmental scales for the dense formulation and the proposed sparse formulation. In the smallest environment, the dense baseline planner performs just as well as our approach. However, as we increase the scale, first to a 10x10 grid, and then to a real scene graph constructed on the floor of an academic building, we see the benefits of our approach.

First, we investigate the effect of scene graph size on our ability to plan to a set of goals of consistent complexity. For this set of trials, our goal complexity is (N, K) = (3, 3), and we compare planning time for the direct encoding to our planner, highlighted in Fig. 3. Each point on the scatter plot corresponds to a single trial and, as highlighted in the legend, different colors correspond to different environment types. We begin in the Alley environment which, as shown in Fig. 2, is a 2x6 grid world containing only connected places. In this environment, for this level of complexity, our planner performs about as well as the dense encoding, as there is not much advantage to sparsification in such a small environment. As we scale up, however, this changes. The next environment we consider is a 10x10 grid, where we see modest improvement as shown in Fig. 3. In the figure, any samples above the black line indicate that our planner outperforms the dense baseline. As we scale up even further, into the indoor DSG, we see the baseline planner taking on the order of hundreds of seconds to plan on average, while our planner averages in the tens of seconds. We note that we only considered goals that involved visiting or not visiting certain Places in the graph. When we attempted to introduce object inspection, the dense planner timed out before finding a plan in all instances. Similarly, when testing the baseline in the KITTI scene graphs, it was also unable to find solutions for goals of any complexity. Planning with these relatively simple goals, our planner experienced only a modest increase in planning time as the size of the map scaled.

### IV-B Goal Complexity

Figure 4: Here we highlight how our approach scales with the complexity of the goal specification in the simple 10x10 grid world. As we increase the number of unique PDDL objects in the goal specification, the problem is no longer sparse, and so no longer benefits from our approach. Eventually, the dense, direct encoding begins to outperform our method.

Next, we consider the effect of increasing goal complexity on planning time. To do this, we investigate a series of different goal constructions in the Grid World environment. Specifically, we run experiments for K = 5 and K = 10, incrementing the N value from 1 to 5 for each K value. Stated differently, we have conjunctive clauses that are either 5 or 10 atoms long, and “Or” together 1, 2, 3, 4, or 5 of these conjunctions. Each of these atoms is once again either (Visited $P_{i}$) or (Not (Visited $P_{i}$)). Figure 4 presents a plot comparing the complexity of the goal, in terms of total unique symbols referenced, vs. planning time. For less complex goals in this environment, our planner outperforms the dense planner, up to a crossover point at around 20 unique objects. At this point, the benefits of sparsity disappear for a scene of this size. Notably, however, if we were to introduce obstructing objects, the dense direct encoding baseline would be unable to solve goals of any meaningful complexity. Once again, not included in this plot are the results on the KITTI dataset, as the direct encoding never successfully completes a trial in this setting due to timing out.
### IV-C Object Obstruction in KITTI

Figure 5: An example plan from the KITTI environment. The robot begins in the bottom left corner and is tasked with inspecting one object (denoted by the red triangle at the end of the black trajectory). Along the way, there are numerous objects potentially blocking the agent’s path, so it must add at least one to its planning domain. After inspecting and neutralizing this object, the robot is free to reach its goal.

Finally, we investigate the performance of the incremental object solver algorithm in task instances where objects not directly listed in the goal must be inspected in order to solve the task. In this experiment, we give a robot one of two goal types in a scene graph built from the KITTI dataset (Figs. 1 and 5): either (Visited $P_{i}$) or (Safe $O_{j}$). Given the size of the map, satisfying these goals may require the agent to traverse a large distance, but more importantly, if there are obstructing objects in the way, it may be forced to inspect and neutralize them to find a safe path to the goal. As a baseline, we sample 20 goals in the map shown in Fig. 5 using our planner without any of the objects being labeled as suspicious. In this case, they do not obstruct the agent’s path, and we find plans in 19 of 20 trials. Next, we “activate” 13 objects in the scene by labeling them as suspicious. A suspicious object has an inflated radius that is only safe for the robot to enter after it has been inspected and neutralized (in the KITTI scene, this radius is large enough to block an entire road, as shown by the blue objects in Fig. 5). For the agent to inspect the object, it has to sample a pose that is traversable and within line-of-sight and range of the object. Then, by taking the inspect action, the object becomes safe and can be passed. To demonstrate the difficulty of this task, and to highlight the importance of object pruning, we attempt to solve these same tasks without using our incremental feedback approach for pruning potentially weakly redundant objects (Sec. III-D). Instead, we add all suspicious objects to the scene directly. Using this encoding, the planner only succeeds in finding a plan in 4 out of 20 trials. Inspecting these solutions further reveals that in all 4 of these successful cases, there was a direct path to the goal without inspecting any objects. This result makes sense, as the odds of sampling the correct object to inspect are low without the benefit of geometric information. Finally, we test our proposed approach with pruning objects from the planning domain, and incrementally adding them back if they were implicated in the failure of a sub-problem during search (Sec. III-D). Our planner is able to solve 12 of the 20 trials, including 9 cases wherein the agent inspected one or more obstructing objects on the way to its goal. These experiments further demonstrate the importance of our proposed approach to sparsifying otherwise dense, long-horizon planning problems. An example plan, where the agent investigates two objects on the way to its goal, is shown in Fig. 5.

## V Related Work

There has been substantial recent work enabling the construction of information-rich scene graphs. Wu et al. [23] use a graph neural network to build higher-level abstractions from sensor data. Hughes et al. [12] and Bavle et al. [3] build scene graphs while localizing a robot’s motion, allowing for co-optimization of pose and scene estimation. Other recent directions include the registration of scene graph pairs [17].
There has been recent work in the area of deriving PDDL representations for task planning from scene graphs [1], with a particular focus on using the hierarchical nature of scene graphs to sparsify the representation in order to make planning tractable. The approach used by Agia et al. [1] is only guaranteed to produce valid solutions for very specific planning domains, where only constraints between symbols with a clear “ancestor” relationship are expressible, and it is unclear how to extend this to a more general set of planning tasks. Unfortunately, the scene graph abstraction may not satisfy the property of downward refinement [2], breaking many of the assumptions in task planning. The existence of low-level geometric constraints requires going beyond task planning approaches and into the realm of TAMP. There has been additional work in pruning superfluous elements of scenes to accelerate TAMP. Silver et al. [19] learn to predict which symbols are relevant to a particular TAMP problem. Similarly, Srivastava et al. [20] attempt to guide TAMP from failed motion plans, much as we do in Sec. III-D; however, they do so by adding additional goal conditions, which can complicate planning.

## VI CONCLUSIONS

In this work we proposed an approach for enabling and accelerating TAMP in large scene graphs. To do this, we defined characteristics of planning domains that permit the pruning of certain symbols. Then, we proposed a method for deriving a domain from a Hydra scene graph which has these characteristics, and demonstrated how we prune Places and Objects from the domain. We also proved that the plans we produce from this pruned scene graph are valid and conform to the constraints of the full planning domain. Finally, we demonstrated experimentally how our approach scales with scene graph size, goal complexity, and geometric constraints in several environments, including a scene graph built from the KITTI dataset. In future work, we hope to demonstrate under what conditions we can extend our pruning method to other domains derived from large-scale scene graphs. Furthermore, augmenting our approach with learned methods for object pruning is a natural extension. The metric-semantic information in the scene graph is potentially a strong signal for a learner to identify further irrelevant symbols in a planning domain.

## Acknowledgements

This work was partially funded by the ARL DCIST program, by Lincoln Laboratory’s Autonomy Al Fresco program, and by the National Defense Science and Engineering Graduate Fellowship program.

## References

* Agia et al. [2022] Christopher Agia, Krishna Murthy Jatavallabhula, Mohamed Khodeir, Ondrej Miksik, Vibhav Vineet, Mustafa Mukadam, Liam Paull, and Florian Shkurti. Taskography: Evaluating robot task planning over large 3D scene graphs. pages 46–58. PMLR, January 2022. * Bacchus and Yang [1991] Fahiem Bacchus and Qiang Yang. The downward refinement property. In _IJCAI_, pages 286–293, 1991. * Bavle et al. [2022] Hriday Bavle, Jose Luis Sanchez-Lopez, Muhammad Shaheer, Javier Civera, and Holger Voos. S-Graphs+: Real-time localization and mapping leveraging hierarchical representations. _arXiv preprint arXiv:2212.11770_, 2022. * Blum and Furst [1997] Avrim L Blum and Merrick L Furst. Fast planning through planning graph analysis. _Artificial Intelligence_, 90(1-2):281–300, 1997. * Fishman et al. [2023] Michael Fishman, Nishanth Kumar, Cameron Allen, Natasha Danas, Michael Littman, Stefanie Tellex, and George Konidaris.
Task scoping: Generating task-specific simplifications of open-scope planning problems. In _PRL Workshop Series – Bridging the Gap Between AI Planning and Reinforcement Learning_, 2023. * Garrett et al. [2020] Caelan Reed Garrett, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In _Proceedings of the International Conference on Automated Planning and Scheduling_, 2020. * Garrett et al. [2021] Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Integrated task and motion planning. _Annual Review of Control, Robotics, and Autonomous Systems_, 2021. * Ghallab et al. [2016] Malik Ghallab, Dana Nau, and Paolo Traverso. _Automated Planning and Acting_. Cambridge University Press, 2016. * Helmert [2006a] Malte Helmert. The Fast Downward planning system. _Journal of Artificial Intelligence Research_, 26:191–246, 2006a. URL https://www.jair.org/index.php/jair/article/view/10457. * Helmert [2006b] Malte Helmert. The Fast Downward planning system. _Journal of Artificial Intelligence Research_, 26:191–246, 2006b. * Hoffmann [2001] Jörg Hoffmann. FF: The fast-forward planning system. _AI Magazine_, 22(3):57–57, 2001. URL https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1572. * Hughes et al. [2024] Nathan Hughes, Yun Chang, Siyi Hu, Rajat Talak, Rumaisa Abdulhai, Jared Strader, and Luca Carlone. Foundations of spatial perception for robotics: Hierarchical representations and real-time systems. _Intl. J. of Robotics Research_, 2024. URL https://journals.sagepub.com/doi/10.1177/02783649241229725. * Karpas and Magazzeni [2020] Erez Karpas and Daniele Magazzeni. Automated planning for robotics. _Annual Review of Control, Robotics, and Autonomous Systems_, 2020. * LaValle [1998] Steven LaValle. Rapidly-exploring random trees: A new tool for path planning. _Research Report 9811_, 1998. * LaValle [2006] Steven M LaValle. _Planning Algorithms_. Cambridge University Press, 2006. * Oleynikova et al. [2018] Helen Oleynikova, Zachary Taylor, Roland Siegwart, and Juan Nieto. Sparse 3D topological graphs for micro-aerial vehicle planning. In _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_, 2018. * Sarkar et al. [2023] Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Daniel Barath, and Iro Armeni. SGAligner: 3D scene alignment with scene graphs. _arXiv preprint arXiv:2304.14880_, 2023. * [18] Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, and Leslie Pack Kaelbling. Planning with learned object importance in large problem instances using graph neural networks. * Silver et al. [2021] Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua B Tenenbaum, Tomas Lozano-Perez, and Leslie Pack Kaelbling. Planning with learned object importance in large problem instances using graph neural networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_, 2021. * Srivastava et al. [2014] Siddharth Srivastava, Eugene Fang, Lorenzo Riano, Rohan Chitnis, Stuart Russell, and Pieter Abbeel. Combined task and motion planning through an extensible planner-independent interface layer. In _2014 IEEE International Conference on Robotics and Automation (ICRA)_, pages 639–646. IEEE, 2014. URL https://aair-lab.github.io/Publications/icra14.pdf. * Strader et al. [2023] Jared Strader, Nathan Hughes, William Chen, Alberto Speranzon, and Luca Carlone.
Indoor and outdoor 3D scene graph generation via language-enabled spatial ontologies. _arXiv preprint arXiv:2312.11713_, 2023. URL https://arxiv.org/pdf/2312.11713.pdf. * Vega-Brown and Roy [2020] William Vega-Brown and Nicholas Roy. Task and motion planning is PSPACE-complete. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 10385–10392, 2020. * Wu et al. [2021] Shun-Cheng Wu, Johanna Wald, Keisuke Tateno, Nassir Navab, and Federico Tombari. SceneGraphFusion: Incremental 3D scene graph prediction from RGB-D sequences. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 7515–7525, 2021.
# Three-body molecules $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ – understanding the nature of $T_{cc}$, $P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$

Ya-Wen Pan School of Physics, Beihang University, Beijing 102206, China Tian-Wei Wu School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou, 310024, China Ming-Zhu Liu <EMAIL_ADDRESS> School of Space and Environment, Beihang University, Beijing 102206, China School of Physics, Beihang University, Beijing 102206, China Li-Sheng Geng <EMAIL_ADDRESS> School of Physics, Beihang University, Beijing 102206, China Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing 102206, China School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China

###### Abstract

The nature of the three pentaquark states, $P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$, discovered by the LHCb Collaboration in 2019, is still under debate, although the $\bar{D}^{(\ast)}\Sigma_{c}$ molecular interpretation seems to be the most popular. In this work, by adding a $\bar{D}$ meson to the $\bar{D}^{\ast}\Sigma_{c}$ pair, we investigate the mass and decay width of the three-body molecules $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ and explore the correlation between the existence of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules and the existence of $\bar{D}^{(\ast)}\Sigma_{c}$ and $\bar{D}^{\ast}\bar{D}$ two-body molecules. The latter can be identified with the doubly charmed tetraquark state $T_{cc}$ recently discovered by the LHCb Collaboration. Based on the molecular nature of $P_{c}(4312)$, $P_{c}(4440)$, $P_{c}(4457)$, and $T_{cc}$, our results indicate that there exist two three-body bound states of $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ with $I(J^{P})=1(1/2^{+})$ and $I(J^{P})=1(3/2^{+})$, with binding energies of $37.24$ MeV and $29.63$ MeV below the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ mass threshold. In addition, we find that the mass splitting of these two three-body molecules is correlated with the mass splitting of $P_{c}(4440)$ and $P_{c}(4457)$, which offers a non-trivial way to reveal the molecular nature of these states. The partial widths of the two $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules decaying into $J/\psi p\bar{D}$ and $J/\psi p\bar{D}^{\ast}$ are found to be several MeV. We recommend experimental searches for the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules in the $J/\psi p\bar{D}$ and $J/\psi p\bar{D}^{\ast}$ invariant mass distributions.

## I Introduction

In terms of the constituent quark model proposed by Gell-Mann Gell-Mann (1964) and Zweig Zweig (1964a, b), hadrons can be classified either as mesons made of a quark–antiquark pair or baryons made of three quarks, the properties of which can be well described in the conventional quark model Godfrey and Isgur (1985); Capstick and Isgur (1985). However, more and more so-called exotic states beyond the traditional quark model have been discovered experimentally, starting from $X(3872)$ in 2003 Choi et al. (2003). To clarify the nature of these exotic states, many theoretical interpretations have been proposed, such as hadronic molecules, compact multiquark states, kinematic effects, and so on (for recent reviews, see Refs. Chen et al. (2016); Hosaka et al. (2017); Lebed et al. (2017); Oset et al. (2016); Guo et al. (2018); Olsen et al. (2018); Ali et al. (2017); Brambilla et al. (2020); Liu et al. (2019a); Guo et al. (2020)).
Among them, the hadronic molecular picture is rather popular because many (if not all) of these states are located near the mass threshold of a pair of conventional hadrons. Nevertheless, how to confirm the molecular nature of these exotic states remains a big challenge for both experiment and theory. To confirm an exotic state as a hadronic molecule, one needs to be able to describe its production rate, decay width, mass, spin-parity, and other relevant properties consistently in the molecular picture. However, most approaches can only describe part of the relevant properties. This motivates us to find alternative methods to help achieve this goal. In nuclear physics, the existence of light nuclei, such as the triton ${}^{3}\mathrm{H}$, serves as a non-trivial check on the two-body bound-state nature of the deuteron. Along this line, assuming $D_{s0}^{*}(2317)$ to be a $DK$ bound state, we have studied the few-body systems $DDK$ and $DDDK$, the existence of which indeed supports the molecular nature of $D_{s0}^{*}(2317)$ Wu et al. (2019). In this work, assuming $T_{cc}$ and the three pentaquark states [$P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$] to be $DD^{\ast}$ and $\bar{D}^{(\ast)}\Sigma_{c}$ bound states, respectively, we investigate the related three-body system $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ and explore the correlation of the three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ with the related two-body bound states. The hidden-charm pentaquark states $P_{c}(4380)$ and $P_{c}(4450)$ were first discovered by the LHCb Collaboration in 2015 Aaij et al. (2015). With ten times more statistics, the $P_{c}(4450)$ state split into $P_{c}(4440)$ and $P_{c}(4457)$, and in addition a new state, $P_{c}(4312)$, appeared; all of these states lie close to the mass thresholds of $\bar{D}^{(\ast)}\Sigma_{c}$ Aaij et al. (2019). In our previous work Liu et al. (2019b), we employed a contact-range effective field theory (EFT) to assign $P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$ as $\bar{D}^{(\ast)}\Sigma_{c}$ hadronic molecules dictated by heavy quark spin symmetry (HQSS), which was confirmed by many other groups Xiao et al. (2019a, b); Sakai et al. (2019); Yamaguchi et al. (2020); Liu et al. (2021); Pavon Valderrama (2019); Meng et al. (2019); Du et al. (2020); Burns and Swanson (2019); Wu and Chen (2019); Azizi et al. (2021); Phumphan et al. (2021). Even so, there still exist other explanations, such as hadro-charmonium Eides et al. (2020), compact pentaquark states Ali and Parkhomenko (2019); Mutuk (2019); Wang (2020); Cheng and Liu (2019); Weng et al. (2019); Zhu et al. (2019); Pimikov et al. (2020); Ruangyoo et al. (2021), virtual states Fernández-Ramírez et al. (2019) and double triangle singularities Nakamura (2021). The existence of three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ would further support the molecular nature of the pentaquark states, since the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system can be viewed as a cluster of a $\bar{D}$ meson and the $\bar{D}^{\ast}\Sigma_{c}$ pair, or of a $\bar{D}^{\ast}$ meson and the $\bar{D}\Sigma_{c}$ pair. In addition, the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system can be regarded as a cluster of a $\Sigma_{c}$ baryon and the $\bar{D}^{\ast}\bar{D}$ pair, which is related to the doubly charmed tetraquark state ${T}^{+}_{cc}$ discovered by the LHCb Collaboration Aaij et al. (2021a).
The mass of $T^{+}_{cc}$ is below the mass threshold of $D^{0}D^{\ast+}$ by only several hundred keV, and its decay width from the unitary analysis is rather small, only a few tens of keV Aaij et al. (2021b). Assuming that the $T_{cc}$ state is a $DD^{\ast}$ bound state, its mass and decay width can be described in the hadronic molecular model Meng et al. (2021); Dong et al. (2021); Chen et al. (2021); Ling et al. (2022); Ren et al. (2022); Feijoo et al. (2021); Yan and Valderrama (2022); Albaladejo (2021); Du et al. (2022); Ke et al. (2022). Although the molecular interpretation seems to be the most popular, the interpretation as a compact tetraquark state cannot be ruled out. The study of the three-body system $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ could also be helpful to verify its molecular nature. The three-body system $\bar{D}\bar{D}^{(\ast)}\Sigma_{c}$ is particularly interesting for a number of reasons. First of all, there is no annihilation of a light quark–antiquark pair. This indicates that such a state, if it exists, has a minimum quark content $\bar{c}\bar{c}cqqqq$, which is explicitly exotic. Second, the interactions of the sub-systems $\bar{D}\Sigma_{c}$, $\bar{D}^{\ast}\Sigma_{c}$, and $\bar{D}^{\ast}\bar{D}$ can be precisely determined by reproducing the masses of their corresponding molecular candidates using the one-boson-exchange (OBE) potential, which largely reduces the uncertainty of the so-obtained binding energy of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ state. Such exotic states, if discovered experimentally, would help verify the molecular nature of $T_{cc}$ as well as $P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$. In this work, we employ the Gaussian Expansion Method (GEM) to study the three-body system $\bar{D}\bar{D}^{\ast}\Sigma_{c}$, and then use an effective Lagrangian approach to evaluate its main strong decay modes. This paper is organized as follows. In Sec. II, we briefly explain how to solve the three-body Schrödinger equation with the GEM. Next, in Sec. III, we present the binding energies of the three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ and calculate the partial widths of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules decaying into $J/\psi p\bar{D}^{(\ast)}$ and $\bar{T}_{cc}\Lambda_{c}\pi$. Finally, the paper ends with a short summary in Sec. IV.

## II FORMALISM

To obtain the binding energy of the $\bar{D}\bar{D}^{(\ast)}\Sigma_{c}$ system, we need to solve the three-body Schrödinger equation with the GEM, which has been widely applied to investigate few-body systems in nuclear physics Hiyama et al. (2003) and hadron physics Hiyama et al. (2006); Yoshida et al. (2015). The three-body Schrödinger equation reads

$[T+V^{1}(r_{1})+V^{2}(r_{2})+V^{3}(r_{3})-E]\Psi_{JM}^{Total}=0,$ (1)

where $T$ is the kinetic-energy operator, $V^{i}(r_{i})$ is the potential between the $j_{\rm{th}}$ and $k_{\rm{th}}$ particles $(i,j,k=1-3)$, and the $1_{\rm{st}}$, $2_{\rm{nd}}$ and $3_{\rm{rd}}$ particles refer to the $\bar{D}$ meson, $\bar{D}^{*}$ meson and $\Sigma_{c}$ baryon, respectively. The total wave function $\Psi_{JM}^{Total}$ is expressed as a sum of three component functions:

$\Psi_{JM}^{Total}=\sum_{i=1}^{3}C_{i,\alpha}\Phi_{JM,\alpha}^{c=i}(\textbf{r}_{i},\textbf{R}_{i}),$ (2)

where $C_{i,\alpha}$ are the expansion coefficients of the relevant basis, $i=1,2,3$ denotes the three channels of Fig. 1, and $\alpha\equiv\{nl,NL,\lambda,\Sigma,s,T,t\}$.
Here $l$ and $L$ are the orbital angular momenta of the coordinates $r$ and $R$, $t$ and $s$ are the isospin and spin of the two-body subsystem in each channel, and $\lambda$, $\Sigma$ and $T$ are the total orbital angular momentum, spin and isospin, respectively. The wave function of each channel is expressed as

$\Phi_{JM,\alpha}^{c}(\textbf{r}_{i},\textbf{R}_{i})=\left[\Phi_{lL,\lambda}^{c}\Omega_{\Sigma,s}^{c}\right]_{JM}H_{Tt}^{c},$ (3)

where $\Phi_{lL,\lambda}^{c}$ is the spatial wave function, and $\Omega_{\Sigma,s}^{c}$ is the spin wave function. The total isospin wave functions $H_{Tt}^{c}$ in each channel are written as

$H_{Tt_{1}}^{c=1}=[[\eta_{\frac{1}{2}}(\bar{D}^{\ast})\eta_{1}(\Sigma_{c})]_{t_{1}}\eta_{\frac{1}{2}}(\bar{D})]_{T},$ (4)
$H_{Tt_{2}}^{c=2}=[[\eta_{\frac{1}{2}}(\bar{D})\eta_{1}(\Sigma_{c})]_{t_{2}}\eta_{\frac{1}{2}}(\bar{D}^{\ast})]_{T},$
$H_{Tt_{3}}^{c=3}=[[\eta_{\frac{1}{2}}(\bar{D}^{\ast})\eta_{\frac{1}{2}}(\bar{D})]_{t_{3}}\eta_{1}(\Sigma_{c})]_{T},$

where $\eta$ is the isospin wave function of each particle. The spatial wave function $\Phi_{lL,\lambda}^{c}$ can be expanded as

$\Phi_{lL,\lambda}^{c}=[\phi_{n_{c}l_{c}}^{G}(\textbf{r}_{c})\psi_{N_{c}L_{c}}^{G}(\textbf{R}_{c})]_{\lambda},$ (5)
$\phi_{nlm}^{G}(\textbf{r})=N_{nl}r^{l}e^{-\nu_{n}r^{2}}Y_{lm}(\hat{\textbf{r}}),$
$\psi_{NLM}^{G}(\textbf{R})=N_{NL}R^{L}e^{-\lambda_{N}R^{2}}Y_{LM}(\hat{\textbf{R}}),$

where $N_{nl}$ ($N_{NL}$) is the normalization constant, and the relevant parameters $\nu_{n}$ and $\lambda_{N}$ are given by

$\nu_{n}=1/r^{2}_{n},\quad r_{n}=r_{1}a^{n-1},\quad(n=1-n_{max}),$ (6)
$\lambda_{N}=1/R^{2}_{N},\quad R_{N}=R_{1}A^{N-1},\quad(N=1-N_{max}),$

where $\{n_{max},r_{min},a~\mathrm{or}~r_{max}\}$ and $\{N_{max},R_{min},A~\mathrm{or}~R_{max}\}$ are the Gaussian basis parameters given in Table 1.

Figure 1: Three Jacobi coordinates of the $\bar{D}\bar{D}^{*}\Sigma_{c}$ system.

Table 1: Three-body angular-momentum space and the Gaussian range parameters for the $I(J^{P})=1(\frac{1}{2}^{+})$ and $1(\frac{3}{2}^{+})$ configurations of the $\bar{D}\bar{D}^{*}\Sigma_{c}$ system. Lengths are in units of fm.

| $I(J^{P})$ | $c$ | $l$ | $L$ | $\lambda$ | $s$ | $\Sigma$ | $t$ | $n_{max}$ $(N_{max})$ | $r_{min}$ $(R_{min})$ | $r_{max}$ $(R_{max})$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $1(\frac{1}{2}^{+})$ | 1 | 0 | 0 | 0 | 1/2 | 1/2 | 1/2(3/2) | 10 | 0.1 | 20.0 |
| | 2 | 0 | 0 | 0 | 1/2 | 1/2 | 1/2(3/2) | 10 | 0.1 | 20.0 |
| | 3 | 0 | 0 | 0 | 1 | 1/2 | 0(1) | 10 | 0.1 | 20.0 |
| $1(\frac{3}{2}^{+})$ | 1 | 0 | 0 | 0 | 3/2 | 3/2 | 1/2(3/2) | 10 | 0.1 | 20.0 |
| | 2 | 0 | 0 | 0 | 1/2 | 3/2 | 1/2(3/2) | 10 | 0.1 | 20.0 |
| | 3 | 0 | 0 | 0 | 1 | 3/2 | 0(1) | 10 | 0.1 | 20.0 |
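As a concrete illustration of Eq. (6), the following short Python sketch generates the geometric progression of Gaussian ranges and the corresponding width parameters $\nu_{n}$, using the $n_{max}=10$, $r_{min}=0.1$ fm, $r_{max}=20$ fm values of Table 1; the function name is ours, not taken from a GEM code.

```python
import numpy as np

def gaussian_ranges(n_max, r_min, r_max):
    """Geometric progression of Gaussian ranges, Eq. (6):
    r_n = r_1 * a**(n-1) with r_1 = r_min and r_{n_max} = r_max,
    and width parameters nu_n = 1 / r_n**2 (the same construction
    applies to R_N and lambda_N)."""
    a = (r_max / r_min) ** (1.0 / (n_max - 1))  # common ratio fixed by the endpoints
    r_n = r_min * a ** np.arange(n_max)         # r_1 ... r_{n_max} in fm
    return r_n, 1.0 / r_n**2                    # ranges and nu_n in fm^-2

r_n, nu_n = gaussian_ranges(10, 0.1, 20.0)      # Table 1 parameters
```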
## III Results and Discussions

First we discuss the quantum numbers of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system. Considering only $S$-wave interactions, the total angular momentum of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system is either $J=1/2$ or $J=3/2$. The isospin of $\bar{D}\bar{D}^{\ast}$ is either 0 or 1. In the OBE model, the interaction in isospin 0 is much stronger than that in isospin 1, to such an extent that $T_{cc}$ can be understood as an isospin 0 $\bar{D}\bar{D}^{\ast}$ bound state. As a result, the total isospin of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system is taken to be 1. Therefore, in this work we investigate the two $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ configurations with $I(J^{P})=1(\frac{1}{2}^{+})$ and $I(J^{P})=1(\frac{3}{2}^{+})$. The relevant quantum numbers of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system are given in Table 1. In this work, we employ the OBE model to construct the potentials of $\bar{D}\Sigma_{c}$, $\bar{D}^{\ast}\Sigma_{c}$, and $\bar{D}^{\ast}\bar{D}$ through the effective Lagrangians describing the interactions between charmed hadrons and the light mesons $\pi$, $\rho$, $\sigma$ and $\omega$. For details, we refer to Refs. Liu et al. (2019c, 2021). Since the $S$-wave interaction plays a dominant role in forming hadronic molecules, we only consider the $S$-wave interaction in this work. To estimate the impact of the finite size of hadrons on the OBE potentials, we adopt a monopole form factor $\frac{\Lambda^{2}-m^{2}}{\Lambda^{2}-q^{2}}$ for the relevant meson-baryon vertices, which introduces an unknown parameter $\Lambda$. To decrease the uncertainty of the OBE potential induced by the cutoff, we determine it by reproducing the masses of some well-known molecular candidates. Assuming $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$ to be $\bar{D}^{(\ast)}\Sigma_{c}$ bound states, the corresponding cutoff (denoted by $\Lambda_{P}$) is fixed to be 1.16 GeV, while the cutoff of the $\bar{D}\bar{D}^{\ast}$ system (denoted by $\Lambda_{T}$) is fixed to be $0.998$ GeV if $T_{cc}$ is regarded as a $\bar{D}\bar{D}^{\ast}$ bound state. Therefore, we take three sets of cutoff values to search for three-body bound states in the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system: Case I: $\Lambda_{T}$=$\Lambda_{P}$=0.998 GeV; Case II: $\Lambda_{T}$=$0.998$ GeV, $\Lambda_{P}$=1.16 GeV; and Case III: $\Lambda_{T}$=$\Lambda_{P}$=1.16 GeV. As the OBE interaction increases with the cutoff, we anticipate that Case III will yield the largest binding energies and Case I the smallest.

Table 2: Binding energies (in units of MeV), expectation values of the Hamiltonian (potential and kinetic energies) (in units of MeV) and root-mean-square radii (in units of fm) of the three-body system $\bar{D}\bar{D}^{*}\Sigma_{c}$ obtained in the three cases detailed in the main text.

| $I(J^{P})$ | $B$ | $T$ | $V_{\bar{D}^{*}\Sigma_{c}}$ | $V_{\bar{D}\Sigma_{c}}$ | $V_{\bar{D}\bar{D}^{*}}$ | $r_{\bar{D}^{*}\Sigma_{c}}$ | $r_{\bar{D}\Sigma_{c}}$ | $r_{\bar{D}\bar{D}^{*}}$ |
|---|---|---|---|---|---|---|---|---|
| Case I: $\Lambda_{P}=\Lambda_{T}=0.998$ GeV | | | | | | | | |
| $1(\frac{1}{2}^{+})$ | 10.86 | 65.41 | -19.64 | -21.69 | -34.94 | 1.42 | 1.41 | 1.36 |
| $1(\frac{3}{2}^{+})$ | 7.06 | 52.18 | -19.66 | -10.46 | -29.12 | 1.62 | 1.81 | 1.64 |
| Case II: $\Lambda_{T}=0.998$ GeV, $\Lambda_{P}=1.16$ GeV | | | | | | | | |
| $1(\frac{1}{2}^{+})$ | 37.24 | 116.16 | -41.53 | -72.44 | -39.43 | 1.00 | 0.88 | 1.03 |
| $1(\frac{3}{2}^{+})$ | 29.63 | 92.50 | -81.32 | -21.67 | -19.15 | 0.91 | 1.36 | 1.40 |
| Case III: $\Lambda_{P}=\Lambda_{T}=1.16$ GeV | | | | | | | | |
| $1(\frac{1}{2}^{+})$ | 63.07 | 169.01 | -52.14 | -66.03 | -113.91 | 0.83 | 0.82 | 0.75 |
| $1(\frac{3}{2}^{+})$ | 46.94 | 141.01 | -61.84 | -25.27 | -100.84 | 0.91 | 1.02 | 0.86 |

In case I, the cutoff of the $\bar{D}^{(\ast)}\Sigma_{c}$ potential is taken to be the same as that of the $\bar{D}\bar{D}^{\ast}$ potential. For such potentials, there exist two three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ with $I(J^{P})=1(\frac{1}{2}^{+})$ and $I(J^{P})=1(\frac{3}{2}^{+})$, with binding energies of 10.9 MeV and 7.1 MeV, respectively.
For a cutoff of $\Lambda=0.998$ GeV, the OBE $\bar{D}^{\ast}\Sigma_{c}$ potential does not support the existence of $\bar{D}^{\ast}\Sigma_{c}$ bound states corresponding to $P_{c}(4440)$ and $P_{c}(4457)$. As a result, case I indicates that there exist two three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states even if the $\bar{D}^{\ast}\Sigma_{c}$ system does not bind, as long as $T_{cc}$ is a $\bar{D}\bar{D}^{\ast}$ bound state. In case II, we change the cutoff of the $\bar{D}^{(\ast)}\Sigma_{c}$ potential from 0.998 GeV to 1.16 GeV, while keeping the cutoff of the $\bar{D}\bar{D}^{\ast}$ potential unchanged. In this case, the $\bar{D}^{(\ast)}\Sigma_{c}$ potential becomes stronger, resulting in two three-body bound states with larger binding energies of $37.2$ MeV and $29.6$ MeV. One can see that assuming $T_{cc}$ to be a $\bar{D}\bar{D}^{\ast}$ bound state and $P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$ to be $\bar{D}^{(\ast)}\Sigma_{c}$ bound states, we obtain two three-body bound states below the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ mass threshold. In case III, we change the cutoff of the $\bar{D}^{\ast}\bar{D}$ potential from 0.998 GeV to 1.16 GeV, which naturally results in two bound states with even larger binding energies, as shown in Table 2. In Fig. 2, we present the binding energies of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system as a function of $\Lambda$. One can see that the three-body system $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ remains bound even when both $\bar{D}\bar{D}^{\ast}$ and $\bar{D}^{(\ast)}\Sigma_{c}$ are unbound, with binding energies of the order of several MeV. If the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states are observed experimentally in the future, this will help verify the molecular nature of $P_{c}(4312)$, $P_{c}(4440)$, $P_{c}(4457)$ and $T_{cc}$ in terms of the results shown in Fig. 2.

Figure 2: Binding energies of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules as a function of $\Lambda$. The red and blue solid lines represent the $J=1/2$ and $J=3/2$ states, respectively. The green dotted lines mark the cutoffs that reproduce the binding energy of $T_{cc}$ ($\Lambda_{T}$) and of the pentaquark states ($\Lambda_{P}$).

Figure 3: Mass splitting of the three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ doublet as a function of the cutoff of the $\bar{D}\bar{D}^{\ast}$ potential for fixed $\Lambda_{P}=1.16$ GeV.

It is interesting to note that the mass splitting of the three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ doublet in case III is larger than that of case II, which implies that the strength of the $\bar{D}\bar{D}^{\ast}$ potential affects the splitting. In Fig. 3, we present the mass splitting as a function of the cutoff of the $\bar{D}\bar{D}^{\ast}$ potential. It is obvious that the mass splitting increases with the strength of the $\bar{D}\bar{D}^{\ast}$ potential. Interestingly, the mass splitting is positive, which means that the mass of the spin-3/2 $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound state is larger than that of its spin-1/2 counterpart. We note that in this case the $J^{P}=\frac{1}{2}^{-}$ $\bar{D}^{\ast}\Sigma_{c}$ system is more bound than the $J^{P}=\frac{3}{2}^{-}$ $\bar{D}^{\ast}\Sigma_{c}$ system. This indicates that the mass splitting of the three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ doublet is oppositely correlated to the mass splitting of the two-body $\bar{D}^{\ast}\Sigma_{c}$ bound states, which offers a non-trivial way to check the molecular nature of the involved states.
We note in passing that in Ref. Pan et al. (2020) we found that the mass splitting of $P_{c}(4440)$ and $P_{c}(4457)$ is correlated to the mass splitting of the $\Xi_{cc}^{(\ast)}\Sigma_{c}^{(\ast)}$ doublet via heavy antiquark diquark symmetry.

Figure 4: Tree-level diagrams for the three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states decaying into $J/\psi p\bar{D}$ (a), $J/\psi p\bar{D}^{\ast}$ (b) and $\pi\Lambda_{c}\bar{T}_{cc}$ (c).

From our above study we conclude that there exist two three-body molecules $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ with $I(J^{P})=1(\frac{1}{2}^{+})$ and $I(J^{P})=1(\frac{3}{2}^{+})$. In the following, we denote $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$ as $P_{c1}$, $P_{c2}$, and $P_{c3}$, and the $I(J^{P})=1(\frac{1}{2}^{+})$ and $I(J^{P})=1(\frac{3}{2}^{+})$ $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states as $Hq_{1}$ and $Hq_{2}$, respectively. We will discuss their possible decay modes and calculate the decay widths via the effective Lagrangian approach. Such $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states can be regarded as three kinds of quasi-two-body bound states: $P_{c2}(P_{c3})\bar{D}$, $P_{c1}\bar{D}^{\ast}$ and $\bar{T}_{cc}\Sigma_{c}$. One should note that $P_{c2/c3}$, $P_{c1}$, and $\Sigma_{c}$ should be viewed as unstable particles, in contrast to $\bar{D}$, $\bar{D}^{\ast}$ and $\bar{T}_{cc}$ Zyla et al. (2020); Aaij et al. (2021b), and these unstable particles can further decay into two other particles. As a result, the three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ can decay into $J/\psi p\bar{D}$, $J/\psi p\bar{D}^{\ast}$, and $\Lambda_{c}\pi\bar{T}_{cc}$, as shown in Fig. 4. Since the minimum number of valence quarks of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ states is 7, they will not couple to a pair of traditional hadrons, which indicates that they can only decay into at least three traditional hadrons. In other words, the decay mechanisms shown in Fig. 4 should be the dominant ones. In the following, we show how to calculate the partial decay widths of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states in the effective Lagrangian approach. It should be noted that in the following study we focus on Case II, among the three cases studied, because it can reproduce the pentaquark states and $T_{cc}$.
The effective Lagrangians describing the interactions between the three-body bound states and their constituents have the following form

$\mathcal{L}_{Hq_{1}P_{c3}\bar{D}}=g_{Hq_{1}P_{c3}\bar{D}}Hq_{1}(x)\int dy\bar{D}(x+\omega_{P_{c3}}y)P_{c3}(x+\omega_{\bar{D}}y)\Phi(y^{2}),$ (7)
$\mathcal{L}_{Hq_{2}P_{c2}\bar{D}}=g_{Hq_{2}P_{c2}\bar{D}}Hq_{2}^{\mu}(x)\int dy\bar{D}(x+\omega_{P_{c2}}y)P_{c2\mu}(x+\omega_{\bar{D}}y)\Phi(y^{2}),$
$\mathcal{L}_{Hq_{1}P_{c1}\bar{D}^{\ast}}=g_{Hq_{1}P_{c1}\bar{D}^{\ast}}Hq_{1}(x)\int dy\bar{D}^{\ast\mu}(x+\omega_{P_{c1}}y)\gamma_{\mu}\gamma_{5}P_{c1}(x+\omega_{\bar{D}^{\ast}}y)\Phi(y^{2}),$
$\mathcal{L}_{Hq_{2}P_{c1}\bar{D}^{\ast}}=g_{Hq_{2}P_{c1}\bar{D}^{\ast}}Hq_{2\mu}(x)\int dy\bar{D}^{\ast\mu}(x+\omega_{P_{c1}}y)P_{c1}(x+\omega_{\bar{D}^{\ast}}y)\Phi(y^{2}),$
$\mathcal{L}_{Hq_{1}T_{cc}\Sigma_{c}}=g_{Hq_{1}T_{cc}\Sigma_{c}}Hq_{1}(x)\int dyT_{cc}^{\mu}(x+\omega_{\Sigma_{c}}y)\gamma_{\mu}\gamma_{5}\Sigma_{c}(x+\omega_{\bar{T}_{cc}}y)\Phi(y^{2}),$
$\mathcal{L}_{Hq_{2}T_{cc}\Sigma_{c}}=g_{Hq_{2}T_{cc}\Sigma_{c}}Hq_{2\mu}(x)\int dy\bar{T}_{cc}^{\mu}(x+\omega_{\Sigma_{c}}y)\Sigma_{c}(x+\omega_{\bar{T}_{cc}}y)\Phi(y^{2}),$

where $\Phi(y^{2})$ denotes the Gaussian form factor, the $g$'s with different subscripts represent the relevant coupling constants, and $\omega_{i}=\frac{m_{i}}{m_{i}+m_{j}}$ is a kinematical parameter, with $m_{i}$ and $m_{j}$ being the masses of the involved hadrons. We rely on the compositeness condition to estimate the above couplings, which is an effective approach to estimate the couplings between bound states and their constituents Weinberg (1963). The condition implies that the coupling constants can be determined from the fact that the renormalization constant of the wave function of a composite particle should be zero. Following our previous works Ling et al. (2021, 2022), with the cutoff $\Lambda=1$ GeV and the masses of the three-body bound states $\bar{D}\bar{D}^{\ast}\Sigma_{c}$, the couplings are determined as shown in Table 3. The details of deriving the couplings can be found in Ref. Ling et al. (2021). The Lagrangians describing the secondary decay processes are expressed as

$\mathcal{L}_{P_{c1}J/\psi p}=g_{P_{c1}J/\psi p}P_{c1}\gamma_{\mu}\gamma_{5}J/\psi^{\mu}p,$ (8)
$\mathcal{L}_{P_{c2}J/\psi p}=g_{P_{c2}J/\psi p}P_{c2\mu}J/\psi^{\mu}p,$
$\mathcal{L}_{P_{c3}J/\psi p}=g_{P_{c3}J/\psi p}P_{c3}\gamma_{\mu}\gamma_{5}J/\psi^{\mu}p,$
$\mathcal{L}_{\pi\Lambda_{c}{\Sigma}_{c}}=\frac{g_{\pi\Lambda_{c}{\Sigma}_{c}}}{f_{\pi}}~\bar{\Lambda}_{c}\gamma^{\mu}\gamma_{5}\partial_{\mu}\vec{\phi}_{\pi}\cdot\vec{\tau}{\Sigma}_{c},$

where the pion decay constant is $f_{\pi}=132$ MeV and the coupling $g_{\Sigma_{c}\Lambda_{c}\pi}$ is determined to be 0.55 by reproducing the decay width of $\Sigma_{c}\to\Lambda_{c}\pi$, 1.89 MeV Zyla et al. (2020). Since the partial decay widths of the pentaquark states into $J/\psi p$ are unknown, we cannot determine the corresponding couplings from experimental data.
In this work, we determine the couplings of the pentaquark states to $J/\psi p$ in the contact-range EFT approach, where the $\eta_{c}p$, $J/\psi p$, and $\bar{D}^{(\ast)}\Sigma_{c}$ channels dictated by HQSS are taken into account. By reproducing the masses and widths of the pentaquark states, the relevant couplings are determined to be $g_{P_{c1}J/\psi p}=0.22$, $g_{P_{c2}J/\psi p}=0.44$, and $g_{P_{c3}J/\psi p}=0.33$ Xie et al. (2022), consistent with the chiral unitary approach Xiao et al. (2020).

Table 3: Couplings of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules to their components obtained with a cutoff of $\Lambda=$ 1 GeV.

| Couplings | $g_{H_{q1}P_{c2}\bar{D}}$ | $g_{H_{q1}P_{c1}\bar{D}^{\ast}}$ | $g_{H_{q1}T_{cc}\Sigma_{c}}$ |
|---|---|---|---|
| Value | 3.04 | 1.70 | 2.58 |
| Couplings | $g_{H_{q2}P_{c3}\bar{D}}$ | $g_{H_{q2}P_{c1}\bar{D}^{\ast}}$ | $g_{H_{q2}T_{cc}\Sigma_{c}}$ |
| Value | 1.94 | 2.64 | 4.15 |

With the above Lagrangians, the corresponding amplitudes of the strong decays of Fig. 4 are

$\mathcal{M}_{a(J=1/2)}=ig_{Hq_{1}\bar{D}P_{c3}}g_{P_{c3}J/\psi p}\bar{u}_{Hq_{1}}\frac{1}{{/\!\!\!k}-m_{P_{c3}}}\gamma_{\mu}\gamma_{5}\varepsilon^{\mu}(p_{2})u_{p},$ (9)
$\mathcal{M}_{a(J=3/2)}=ig_{Hq_{2}\bar{D}P_{c2}}g_{P_{c2}J/\psi p}\bar{u}_{Hq_{2}}^{\mu}\frac{S_{\mu\nu}(k)}{{/\!\!\!k}-m_{P_{c2}}}\varepsilon^{\nu}(p_{2})u_{p},$
$\mathcal{M}_{b(J=1/2)}=ig_{Hq_{1}\bar{D}^{\ast}P_{c1}}g_{P_{c1}J/\psi p}\bar{u}_{Hq_{1}}\varepsilon_{\nu}(p_{3})\gamma^{\nu}\gamma^{5}\frac{1}{{/\!\!\!k}-m_{P_{c1}}}\gamma_{\mu}\gamma_{5}\varepsilon^{\mu}(p_{2})u_{p},$
$\mathcal{M}_{b(J=3/2)}=ig_{Hq_{2}\bar{D}^{\ast}P_{c1}}g_{P_{c1}J/\psi p}\bar{u}_{Hq_{2}}^{\nu}\varepsilon_{\nu}(p_{3})\frac{1}{{/\!\!\!k}-m_{P_{c1}}}\gamma_{\mu}\gamma_{5}\varepsilon^{\mu}(p_{2})u_{p},$
$\mathcal{M}_{c(J=1/2)}=ig_{Hq_{1}T_{cc}\Sigma_{c}}\frac{g_{\pi\Lambda_{c}{\Sigma}_{c}}}{f_{\pi}}\bar{u}_{Hq_{1}}\varepsilon_{\nu}(p_{3})\gamma^{\nu}\gamma^{5}\frac{1}{{/\!\!\!k}-m_{\Sigma_{c}}}\gamma_{\mu}\gamma_{5}p_{2}^{\mu}u_{\Lambda_{c}},$
$\mathcal{M}_{c(J=3/2)}=ig_{Hq_{2}T_{cc}\Sigma_{c}}\frac{g_{\pi\Lambda_{c}{\Sigma}_{c}}}{f_{\pi}}\bar{u}_{Hq_{2}}^{\nu}\varepsilon_{\nu}(p_{3})\frac{1}{{/\!\!\!k}-m_{\Sigma_{c}}}\gamma_{\mu}\gamma_{5}p_{2}^{\mu}u_{\Lambda_{c}},$

where $u$ and $\bar{u}$ represent the corresponding spinor functions denoted by the subscripts, $\varepsilon$ is the polarization vector, and $S_{\mu\nu}=g^{\mu\nu}-\frac{1}{3}\gamma^{\mu}\gamma^{\nu}-\frac{\gamma^{\mu}p^{\nu}-\gamma^{\nu}p^{\mu}}{3m}-\frac{2p^{\mu}p^{\nu}}{3m^{2}}$, with $m$ being the corresponding mass of a spin-3/2 particle. With the so-obtained amplitudes, one can easily calculate the partial decay width

$d\Gamma=\frac{1}{(2\pi)^{3}}\frac{1}{2J+1}\frac{\overline{|\mathcal{M}|^{2}}}{32m_{Hq_{1(2)}}^{3}}dm_{12}^{2}dm_{23}^{2}.$ (10)

In Table 4, we present the partial decay widths of the two three-body bound states, $Hq_{1}$ and $Hq_{2}$. We find that the bound state with $J=1/2$ dominantly decays into $J/\psi p\bar{D}$, with a partial width much larger than that of the $J=3/2$ state. Therefore, the $J/\psi p\bar{D}$ mode is a golden channel to discriminate between the spins of the $Hq_{1}$ and $Hq_{2}$ molecules. $Hq_{1}$ and $Hq_{2}$ decay almost equally into $J/\psi p\bar{D}^{\ast}$, while the decay into $\bar{T}_{cc}\Lambda_{c}\pi$ is rather small in contrast to the other two decay modes.
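For concreteness, the two-dimensional phase-space integral of Eq. (10) can be evaluated numerically over the Dalitz region using standard three-body kinematics. The Python sketch below is illustrative rather than the authors' procedure, with `amp2` standing in for the spin-averaged squared amplitudes of Eq. (9); it must accept an array of $m_{23}^{2}$ values.

```python
import numpy as np

def three_body_width(M, m1, m2, m3, amp2, J, n=400):
    """Integrate Eq. (10): dGamma = 1/(2 pi)^3 * 1/(2J+1) * |M|^2 / (32 M^3)
    over dm12^2 dm23^2, with the m23^2 boundaries from standard (PDG-style)
    Dalitz-plot kinematics. `amp2(m12sq, m23sq)` is a placeholder."""
    s12 _grid = np.linspace((m1 + m2)**2, (M - m3)**2, n + 1)[1:-1]  # interior m12^2 points
    ds12 = s12_grid[1] - s12_grid[0]
    width = 0.0
    for s12 in s12_grid:
        m12 = np.sqrt(s12)
        e2 = (s12 - m1**2 + m2**2) / (2 * m12)   # E2* in the (12) rest frame
        e3 = (M**2 - s12 - m3**2) / (2 * m12)    # E3* in the (12) rest frame
        p2 = np.sqrt(max(e2**2 - m2**2, 0.0))
        p3 = np.sqrt(max(e3**2 - m3**2, 0.0))
        lo = (e2 + e3)**2 - (p2 + p3)**2         # lower m23^2 boundary
        hi = (e2 + e3)**2 - (p2 - p3)**2         # upper m23^2 boundary
        s23 = np.linspace(lo, hi, n)
        width += np.trapz(amp2(s12, s23), s23) * ds12
    return width / ((2 * np.pi)**3 * (2 * J + 1) * 32 * M**3)
```

Note the typo-free variable is `s12_grid`; with a constant `amp2` the routine reduces to the standard three-body phase-space volume, which is a convenient check of the implementation.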
One should note that the partial decay widths of the three pentaquark states decaying into $J/\psi p$ are not very precisely known Xiao et al. (2020), leading to some uncertainties in the partial decay widths given in Table 4. Nonetheless, we suggest searching for these states in the $J/\psi p\bar{D}^{\ast}$ or $J/\psi p\bar{D}$ mass distributions.

Table 4: Partial decay widths of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules.

| Modes of Fig. 4(a) | $Hq_{1}\to J/\psi p\bar{D}$ | $Hq_{2}\to J/\psi p\bar{D}$ |
|---|---|---|
| Value (MeV) | 12.3 | 0.9 |
| Modes of Fig. 4(b) | $Hq_{1}\to J/\psi p\bar{D}^{\ast}$ | $Hq_{2}\to J/\psi p\bar{D}^{\ast}$ |
| Value (MeV) | 3.7 | 3.9 |
| Modes of Fig. 4(c) | $Hq_{1}\to\bar{T}_{cc}\Lambda_{c}\pi$ | $Hq_{2}\to\bar{T}_{cc}\Lambda_{c}\pi$ |
| Value (keV) | 0.2 | 3.3 |

## IV Summary and conclusion

The exotic states $T_{cc}$, $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$, discovered by the LHCb Collaboration recently, have been suggested to be $DD^{\ast}$ and $\bar{D}^{(\ast)}\Sigma_{c}$ hadronic molecules. However, their molecular nature is difficult to confirm either experimentally or theoretically. In this work, we have investigated these exotic states in the three-body $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ system, which is equivalent to adding a $\bar{D}$ meson to the $\bar{D}^{\ast}\Sigma_{c}$ system. The OBE interactions of the sub-systems $\bar{D}^{(\ast)}\Sigma_{c}$ and $\bar{D}\bar{D}^{\ast}$ are determined by reproducing the masses of the molecular candidates, i.e., the three pentaquark states [$P_{c}(4312)$, $P_{c}(4440)$ and $P_{c}(4457)$] and $\bar{T}_{cc}$. After solving the three-body Schrödinger equation, we obtained two three-body bound states, $I(J^{P})=1(\frac{1}{2}^{+})$ $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ and $I(J^{P})=1(\frac{3}{2}^{+})$ $\bar{D}\bar{D}^{\ast}\Sigma_{c}$, with binding energies of 37.2 MeV and 29.6 MeV, respectively. In particular, we explored the correlation between the existence of $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ molecules and the existence of $\bar{D}^{(\ast)}\Sigma_{c}$ and $\bar{D}^{\ast}\bar{D}$ molecules. If the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states can be observed experimentally in the future, this correlation can help test the molecular nature of $P_{c}(4312)$, $P_{c}(4440)$, $P_{c}(4457)$ and $\bar{T}_{cc}$. The mass splitting of the three-body doublet is found to be correlated with that of the $\bar{D}^{\ast}\Sigma_{c}$ doublet. Assuming $P_{c}(4457)$ and $P_{c}(4440)$ to be $J=1/2$ and $J=3/2$ $\bar{D}^{\ast}\Sigma_{c}$ bound states, respectively, we find that the mass splitting between the $I(J^{P})=1(\frac{1}{2}^{+})$ and $I(J^{P})=1(\frac{3}{2}^{+})$ $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states is positive. Finally, we employed the effective Lagrangian approach to calculate the partial decay widths of the $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states. We find that the $J=1/2$ and $J=3/2$ $\bar{D}\bar{D}^{\ast}\Sigma_{c}$ bound states mainly decay into $J/\psi p\bar{D}$ and $J/\psi p\bar{D}^{\ast}$, respectively, while the decay into $\bar{T}_{cc}\Lambda_{c}\pi$ is small. We strongly recommend experimental searches for such three-body bound states in the $J/\psi p\bar{D}$ and $J/\psi p\bar{D}^{\ast}$ mass distributions, which can help verify the molecular nature of $T_{cc}$, $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$.

## V Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under Grants No.11975041, No.11735003, and No.11961141004.
Ming-Zhu Liu acknowledges support from the National Natural Science Foundation of China under Grant No. 1210050997. Tian-Wei Wu acknowledges support from the National Natural Science Foundation of China under Grant No. 12147152.
# Rapid expansion of red giant stars during core helium flash by wave propagation to the envelope and implications for exoplanets

Ealeal Bear Department of Physics, Technion – Israel Institute of Technology, Haifa 3200003, Israel<EMAIL_ADDRESS><EMAIL_ADDRESS>Ariel Merlov Department of Physics, Technion – Israel Institute of Technology, Haifa 3200003, Israel<EMAIL_ADDRESS><EMAIL_ADDRESS>Yarden Arad Department of Physics, Technion – Israel Institute of Technology, Haifa 3200003, Israel<EMAIL_ADDRESS><EMAIL_ADDRESS>Noam Soker Department of Physics, Technion – Israel Institute of Technology, Haifa 3200003, Israel<EMAIL_ADDRESS><EMAIL_ADDRESS>Guangdong Technion Israel Institute of Technology, Guangdong Province, Shantou 515069, China

###### Abstract

We assume that the strong convection during the core helium flash of low mass red giant branch (RGB) stars excites waves that propagate to the envelope, and find that the energy that these waves deposit in the envelope causes envelope expansion and brightening. We base our assumption and the estimate of the waves' energy on studies that explored such a process due to the vigorous core convection of massive stars just before they experience a core collapse supernova explosion. Using the stellar evolutionary code mesa we find that the waves' energy causes an expansion within a few years by tens to hundreds of solar radii. We expect the increase in radius and luminosity to substantially enhance the mass loss rate and dust formation. The dust shifts the star to become much redder (to the infrared), and despite the large brightening, the star might actually become fainter in the visible. The overall appearance is of a faint red transient event that lasts for months to a few years. We suggest that in some cases envelope expansion might lead stars that are about to leave the RGB to engulf exoplanets. The extended envelope has a smaller binding energy, to a degree that allows planets of several Jupiter masses or more and brown dwarfs to survive the common envelope evolution. We suggest this scenario to account for the planet orbiting the white dwarf (WD) WD 1856+534 (TIC 267574918) and for the WD - brown dwarf binary system ZTFJ003855.0+203025.5.

planet-star interactions – binaries: close – white dwarfs – planets and satellites: individual: WD 1856+534 b

## 1 Introduction

The presence of exoplanets in close orbits around evolved stars, i.e., horizontal branch stars or low mass white dwarfs (WDs) that are descendants of red giant branch (RGB) stars, and WDs that are descendants of asymptotic giant branch (AGB) stars, is not easy to explain. By close orbits we refer to orbits with semi-major axes of $a\ll R_{\rm G}$, where $R_{\rm G}$ is the maximum radius that the giant progenitor has attained. Such close orbits imply that either the planet survived a common envelope evolution (CEE) or that it acquired its close orbit by dynamical interaction with a third body in the system. An example is the recently reported system WD 1856+534 (TIC 267574918), where a planet orbits a WD with $a\simeq 0.02{~{}\rm AU}$ and an orbital period of $P_{\rm orb}=1.4{~{}\rm days}$ (Vanderburg et al., 2020). Vanderburg et al. (2020) claim that the large orbital separation makes the dynamical origin more likely than the CEE origin for this system. There are other claims for exoplanets orbiting WDs (e.g., Gänsicke et al. 2019; Manser et al. 2019). Jones & Jenkins (2014) refuted the claim of Setiawan et al.
(2010) for an exoplanet orbiting a low metallicity horizontal branch star with an orbital period of $P_{\rm orb}=16.2{~{}\rm days}$ (for some other similar refuted claims see, e.g., Krzesinski et al. 2020). Nonetheless, the refuted claim of Setiawan et al. (2010) for an exoplanet orbiting a metal poor horizontal branch star that maintained a relatively massive envelope of $\simeq 0.3M_{\odot}$, with the planet at a relatively large orbital separation of $a\simeq 25R_{\odot}$, prompted Bear et al. (2011) to speculate on a scenario where a metal-poor RGB star suffered a large expansion following its core helium flash and engulfed an exoplanet (section 2). Systematic studies earlier than 2011 had already established the notion that planets can influence the evolution on the RGB and beyond (e.g., Soker 1998; Nelemans & Tauris 1998; Siess & Livio 1999a; Carney et al. 2003; Denissenkov & Herwig 2004; Nordhaus & Blackman 2006; Massarotti 2008; Schröder & Smith 2008; Carlberg et al. 2009; Villaver & Livio 2009; Nordhaus et al. 2010; for later studies on RGB stars engulfing planets see, e.g., Kunitomo et al. 2011; Mustill & Villaver 2012; Nordhaus & Spiegel 2013; Villaver et al. 2014; Aguilera-Gómez et al. 2016; Carlberg et al. 2016; Geier et al. 2016; Guo et al. 2016; Privitera et al. 2016; Rao et al. 2018; Schaffenroth et al. 2019; Hegazi et al. 2020; Jimenez et al. 2020; Kramer et al. 2020). However, without lowering the envelope binding energy just at the termination of the RGB, the planet that the RGB star engulfs spirals in to very small orbits, and either the RGB core tidally destroys the planet or the final orbit is very small and the star maintains only a very light envelope. Regarding the planet-WD system WD 1856+534, one group of models for its formation considers the scattering of the planet to a close orbit around the WD by other planet(s) (Maldonado et al., 2021), or by another star, i.e., the Lidov-Kozai effect (e.g., Muñoz & Petrovich 2020; O’Connor et al. 2021; Stephan et al. 2020; Vanderburg et al. 2020). Another group of models considers the CEE. Lagos et al. (2021) study a CEE that takes place during the AGB phase of the WD progenitor. Chamandy et al. (2021) take the extra energy source that allows a planet to survive to be the orbital energy that an inner planet releases as it enters the giant envelope first and causes its expansion (e.g., Siess & Livio 1999a, b; Staff et al. 2016). Inner planets can indeed help outer planets survive the evolution of their parent star (e.g., Bear et al. 2011; Lagos et al. 2021). We describe the basic scenario and its assumptions in section 2. We then present our numerical scheme (section 3) and the possible results of the waves that the core helium flash excites in the core and that propagate to the envelope (section 4). We summarize our results and discuss implications for exoplanets orbiting horizontal branch stars and WDs in section 5.

## 2 The proposed scenario

Bear et al. (2011) considered the energy source that causes envelope expansion during the core helium flash to be the ignition of hydrogen at the base of the hydrogen-rich envelope in metal poor stars. They based their speculative scenario on the results of Mocák et al. (2010), who calculated hydrogen ignition by the core helium flash. In the calculations of Mocák et al. (2010) the hydrogen burning provides $\approx 1\times 10^{48}{~{}\rm erg}$ during the first year, i.e., an average luminosity of $L_{\rm H}\approx 10^{7}L_{\odot}$ (their Figure 1).
After a year this luminosity decreases to $L_{\rm H}\approx 10^{6}L_{\odot}$, still much larger than the RGB luminosity. The huge energy production by the core convection and by the hydrogen burning decays on a time scale of $\approx 10-100{~{}\rm yr}$ (e.g., Mocák et al. 2010). Bear et al. (2011) justified the mixing of hot core material with the hydrogen-rich envelope by the metal-poor star they studied and/or by rapid core rotation due to an inner planet that spun up the core. Bear et al. (2011) manually added an energy of $E_{\rm in}=8.5\times 10^{46}{~{}\rm erg}$ just above the hydrogen-burning shell of their stellar model. This amounts to $7\%$ of the energy that the hydrogen burning releases in the model of Mocák et al. (2010). Bear et al. (2011) injected the energy over a time period of 7 years at an average power of $L_{\rm in}=10^{5}L_{\odot}$. Bear et al. (2011) found in their calculation that the outer radius of the convective zone in the envelope increases by a factor of about 4. After about $100{~{}\rm yr}$ the star shrinks back to its original radius. We raise here the possibility of another energy source. Based on the calculations of Quataert & Shiode (2012) and Shiode & Quataert (2014) for pre-supernova massive stars, we consider the possibility that the vigorous convection during the core helium flash excites waves that propagate into the envelope. Quataert & Shiode (2012) and Shiode & Quataert (2014) proposed a process by which a fraction of the energy in the gravity waves that the core convection excites in pre-supernova massive stars converts into sound waves as the gravity waves propagate into the envelope. The sound waves dissipate in the envelope. This energy deposition leads to envelope expansion (e.g., Mcley & Soker 2014; Fuller 2017). Mcley & Soker (2014) study one case and find the pre-supernova model they use, of an initial mass of $15M_{\odot}$, to expand by a factor of two, from about $1000R_{\odot}$ to about $2000R_{\odot}$. In that respect we note that Mocák et al. (2010) find that the convection during the core helium flash does excite gravity waves in the core. They also find that convection carries most of the energy that the nuclear reactions liberate. There are earlier studies that consider other roles that the internal gravity waves that the core helium flash excites play in stellar evolution. Schwab (2020), for example, considers the extra mixing that these waves induce near the core-envelope boundary during the helium flash. Miller Bertolami et al. (2020), as another recent example, study how the waves can cause periodic photometric variabilities in hot subdwarf stars. In this study we consider a different role that these propagating waves might play. We do not calculate the propagation of waves from the core to the envelope, as this requires a separate study. We simply take the same wave luminosity (power) as Shiode & Quataert (2014) take. This wave power is (Lecoanet & Quataert, 2013)

$L_{\rm wave,0}\approx\mathcal{M}^{5/8}L_{\rm conv}=2.7\times 10^{6}\left(\frac{\mathcal{M}}{0.001}\right)^{5/8}\left(\frac{L_{\rm conv}}{2\times 10^{8}L_{\odot}}\right)L_{\odot},$ (1)

where

$L_{\rm conv}(r)=4\pi r^{2}v^{3}_{\rm conv}(r)\rho(r)$ (2)

is the luminosity that the convection carries at radius $r$, and $\mathcal{M}=v_{\rm conv}/c_{\rm s}$ is the Mach number of the convective motion of velocity $v_{\rm conv}$. In what follows we take the maximum value of $L_{\rm wave,0}$ at each time.
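As an illustration of equations (1) and (2), the following minimal sketch (in Python; the profile arrays are hypothetical stand-ins for mesa output in cgs units) evaluates the maximum wave power over a core profile at a single time:

```python
import numpy as np

def peak_wave_luminosity(r, v_conv, rho, c_s):
    """Maximum wave power over a core profile, following eqs. (1)-(2).
    r, v_conv, rho, c_s: radius, convective velocity, density, and sound
    speed arrays in cgs units (hypothetical mesa profile columns)."""
    L_conv = 4.0 * np.pi * r**2 * v_conv**3 * rho  # eq. (2): convective luminosity
    mach = v_conv / c_s                            # convective Mach number
    return np.max(mach**(5.0 / 8.0) * L_conv)      # eq. (1), maximized over radius
```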
For scaling we use typical values from Mocák et al. (2009) and Mocák et al. (2010), in addition to our simulations that we describe later. In what follows we will consider much lower wave powers than what equation (1) gives, $L_{\rm W}\ll L_{\rm wave,0}$.

## 3 Numerical setting

We use mesa-single version 10398 (Paxton et al., 2011, 2013, 2015, 2018, 2019). We follow the example `1M_pre_ms_to_wd` and only change the mass and the stopping condition. We change the mass to either $M=1.6M_{\odot}$, on which we focus our study, or to $M=1.2M_{\odot}$ or $M=2M_{\odot}$. We set the termination condition of this phase to the time when carbon nucleosynthesis starts (`star_C_mass_max_limit = 0.02`), which signifies that the He flash has already occurred. After determining the maximum amount of energy and the duration of the waves that we assume the convection excites (using the profile files produced by mesa, we apply equation 1), we manually insert a fraction of this maximum energy through the energy routine subroutine in the `run_star_extras.f` file in the `src` folder, after setting the `other_energy` pointer to true. We insert the wave luminosity $L_{\rm W}$ during four years in the outer zone of the envelope: the outer $0.2M_{\rm env}$, $0.5M_{\rm env}$, or $0.8M_{\rm env}$. For each of these three cases we find the appropriate mass coordinate, $m(0.2M_{\rm env})=0.8619M_{\odot}$, $m(0.5M_{\rm env})=0.6546M_{\odot}$, and $m(0.8M_{\rm env})=0.4473M_{\odot}$, and insert the energy above that mass coordinate at a constant power per unit mass. We examine four cases of wave power as we describe in section 4.

## 4 Core helium flash

We follow the evolution of stellar models with initial masses of $M_{\rm ZAMS}=1.2M_{\odot}$, $1.6M_{\odot}$, and $2M_{\odot}$, but focus on the $M_{\rm ZAMS}=1.6M_{\odot}$ model. We follow each stellar model evolution in more detail in the time period around its core helium flash. We define the time scale $t_{\rm W}$ that we set to zero at the peak of the wave luminosity (see below). At the beginning of the core helium flash (just before the flash) of the $M_{\rm ZAMS}=1.6M_{\odot}$ model (now an RGB star), its luminosity, radius, core mass, and envelope mass are $L_{\rm b}=2027.74L_{\odot}$, $R_{\rm b}=128.4R_{\odot}$, $M_{\rm core,b}=0.45M_{\odot}$, and $M_{\rm env,b}=1.01M_{\odot}$, respectively. We use the subscript ‘b’ to indicate values just before the core helium flash.

### 4.1 The energy in the waves

The relevant properties of the core helium flash in our proposed scenario are the convective luminosity and the Mach number of the convective velocity (equation 1). In Fig. 1 we present the convective luminosity $L_{\rm conv}(r)$ according to equation (2) (upper panel) and the Mach number of the convection ${\mathcal{M}}(r)=v_{\rm conv}/c_{\rm s}$ (lower panel) as functions of radius in the core, at several times as indicated, for a stellar model with an initial mass of $M_{\rm ZAMS}=1.6M_{\odot}$.

Figure 1: The convective luminosity (upper panel) and convective Mach number (lower panel) in the core of a stellar model with an initial mass of $M_{\rm ZAMS}=1.6M_{\odot}$ versus radius, at several times around the peak of the core helium flash. The times in the insets are $t_{\rm W}$, measured relative to the peak of the wave energy. Solid lines are for pre-peak and dotted lines for post-peak values. The lines get thinner away from the peak, both before and after it.
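Before turning to the time evolution (Fig. 2), we note how the wave-power history can be assembled from a sequence of such profiles. The following is a minimal sketch, reusing the hypothetical `peak_wave_luminosity` helper given after equation (2); the thresholding and integration correspond to equations (3) and (4) below:

```python
import numpy as np

L_SUN = 3.828e33   # erg/s
YEAR = 3.156e7     # s

def wave_power_history(times_s, profiles):
    """L_wave,0(t) from a sequence of profile snapshots (dicts of cgs arrays),
    integrated over the window where L_wave,0 > 1e4 L_sun."""
    L_wave = np.array([peak_wave_luminosity(p["r"], p["v_conv"], p["rho"], p["c_s"])
                       for p in profiles])
    mask = L_wave > 1e4 * L_SUN
    E_wave = np.trapz(L_wave[mask], times_s[mask])  # total wave energy [erg]
    return L_wave, E_wave, E_wave / (4 * YEAR)      # history, E_wave,0, L_W,0 [erg/s]
```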
In Fig. 1 we see the well-known properties of the core helium flash: it has a distinct peak in time that lasts for several months, and the nuclear burning occurs off-center due to neutrino cooling in the center. In Fig. 2 we plot the variation of $L_{\rm wave,0}(t)$ as a function of time for the time period when $L_{\rm wave,0}>10^{4}L_{\odot}$. For comparison we also plot $L_{\rm wave,0}(t)$ for the two other stellar models that we simulate here.

Figure 2: The wave luminosity according to equation (1) as a function of time $t_{\rm W}$, measured from the peak of the wave energy, for the three models of $M_{\rm ZAMS}=1.2M_{\odot}$, $M_{\rm ZAMS}=1.6M_{\odot}$, and $M_{\rm ZAMS}=2M_{\odot}$. We show only the time period when $L_{\rm wave,0}>10^{4}L_{\odot}$. The times of the core helium flash for the three models are $t_{\rm He}=6.25\times 10^{9}{~{}\rm yr}$, $t_{\rm He}=2.39\times 10^{9}{~{}\rm yr}$, and $t_{\rm He}=1.08\times 10^{9}{~{}\rm yr}$ for $M_{\rm ZAMS}=1.2M_{\odot}$, $M_{\rm ZAMS}=1.6M_{\odot}$, and $M_{\rm ZAMS}=2M_{\odot}$, respectively. The masses of the cores at these times are $M_{\rm core,b}=0.415M_{\odot}$, $M_{\rm core,b}=0.45M_{\odot}$, and $M_{\rm core,b}=0.43M_{\odot}$, respectively.

Integrating over the wave power during the time period $\Delta t_{4}$ when $L_{\rm wave,0}>10^{4}L_{\odot}$, we find the total wave energy for the $M_{\rm ZAMS}=1.6M_{\odot}$ stellar model to be

$E_{\rm wave,0}=\int_{\Delta t_{4}}L_{\rm wave,0}dt=2.1\times 10^{47}{~{}\rm erg}=1.7\times 10^{6}L_{\odot}{~{}\rm yr}.$ (3)

There is a large uncertainty in the variation of the power that the waves deposit in the envelope, because the propagation time of the waves through the envelope is about the dynamical time, which is several months. For the present model $\Delta t_{4}\simeq 4{~{}\rm yr}$. We therefore consider the average luminosity over four years,

$L_{\rm W,0}\equiv\frac{E_{\rm wave,0}}{4{~{}\rm yr}}=4.3\times 10^{5}L_{\odot}.$ (4)

For the cases of $M_{\rm ZAMS}=1.2M_{\odot}$ and $M_{\rm ZAMS}=2M_{\odot}$ the total energy in the waves is about 20% larger and 20% smaller, respectively, than the value we give in equation (3). Since the envelope mass of the $M_{\rm ZAMS}=1.2M_{\odot}$ model is lower, it will suffer a much larger envelope expansion when the waves dissipate their energy in the envelope, relative to the case we study here of $M_{\rm ZAMS}=1.6M_{\odot}$.

### 4.2 Wave energy dissipation in the envelope

We do not know where exactly in the envelope the waves will deposit their energy. Quataert & Shiode (2012) and Shiode & Quataert (2014) proposed that the location of wave-energy deposition is in the outer regions where the maximum convective luminosity that the envelope can carry, $L_{\rm max,conv}$, falls below the wave power,

$L_{\rm max,conv}=4\pi r^{2}c^{3}_{\rm s}\rho<L_{\rm wave}.$ (5)

In Fig. 3 we plot the variation of $L_{\rm max,conv}$ in the envelope and mark the values of $L_{\rm W,0}$ and of $L_{\rm W}=2\times 10^{4}L_{\odot}$ by horizontal lines. We plot the latter value as we assume that the actual power of the waves is much lower than $L_{\rm W,0}$. According to equation (5), and as we see in Fig. 3, wave dissipation occurs in the outer parts of the envelope. By mass coordinate, wave dissipation occurs in the zone $M>M_{\rm d}(L_{\rm W,0})=1.378M_{\odot}$ for a wave power of $L_{\rm W,0}=4.3\times 10^{5}L_{\odot}$, and in the zone $M>M_{\rm d}(L_{\rm W})=1.445M_{\odot}$ for a wave power of $L_{\rm W}=2\times 10^{4}L_{\odot}$. The two zones correspond to envelope mass fractions of $7.5\%$ and $0.9\%$, respectively.
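A minimal sketch of how the dissipation zone of equation (5) can be located on an envelope profile (hypothetical cgs arrays; the returned value corresponds to the mass coordinate $M_{\rm d}$):

```python
import numpy as np

def dissipation_mass_coordinate(r, rho, c_s, m, L_wave):
    """Innermost mass coordinate above which the envelope convection cannot
    carry the wave power, i.e. where 4*pi*r^2*c_s^3*rho < L_wave (eq. 5).
    m: mass coordinate array [g] matching the other profile arrays."""
    L_max_conv = 4.0 * np.pi * r**2 * c_s**3 * rho  # eq. (5), left-hand side
    dissipating = L_max_conv < L_wave               # zones that cannot carry L_wave
    return m[dissipating].min() if dissipating.any() else None
```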
We will deposit the wave energy into a much larger envelope zone, the outer $20\%-80\%$ of the envelope mass, to take into account the pressure that the waves exert as they propagate through the envelope before reaching the radius of dissipation (Mcley & Soker, 2014).

Figure 3: The maximum wave luminosity $L_{\rm max,conv}$ that convection can carry in the envelope according to equation (5), as a function of radius, for the $M_{\rm ZAMS}=1.6M_{\odot}$ stellar model (red line). The blue line represents a wave power of $L_{\rm W,0}$ according to equation (4) and the cyan line represents a wave power of $L_{\rm W}=2\times 10^{4}L_{\odot}$. Strong wave energy dissipation takes place in the outer zone where the wave power is larger than $L_{\rm max,conv}$. We indicate the mass coordinates where the wave power equals $L_{\rm max,conv}$, and the total stellar mass.

Because of the above uncertainty in the distribution of energy deposition, in section 4.3 we present results for three simple prescriptions, where in each we spread the wave luminosity over the outer region of the envelope with a constant power per unit mass. We spread the energy in either the outer $\xi=80\%$, $\xi=50\%$, or $\xi=20\%$ of the envelope mass.

### 4.3 Outcomes of wave-energy deposition

In section 4.1 we describe the possible properties of the waves that the convection excites during the core helium flash, and their possible propagation and energy deposition into the envelope. Following the discussion there, we here present the results of energy deposition with a power of $L_{\rm W}\ll L_{\rm W,0}$ during a time period of $\Delta t_{\rm dep}=4{~{}\rm yr}$ for the model of $M_{\rm ZAMS}=1.6M_{\odot}$. We deposit the energy in the outer $\xi M_{\rm env,b}$ of the envelope mass, with $\xi=80\%$, $\xi=50\%$, or $\xi=20\%$, and with a constant power per unit mass. We examine the response of the envelope for four different values of $L_{\rm W}$. Overall we have 12 energy deposition prescriptions. In Fig. 4 we present the evolution of the luminosity and the stellar radius during and after the energy deposition for the 12 energy deposition prescriptions. In Fig. 5 we present the evolution of the effective temperature for the cases with $L_{\rm W}=2\times 10^{4}L_{\odot}$. We see that the star relaxes towards its previous state on a time scale of several years. We will discuss the implications of this behavior below and in section 5.

Figure 4: The luminosity (upper two rows) and radius (lower two rows) as functions of time $t_{\rm W}$ (set to zero at maximum wave luminosity), during ($t_{\rm W}=0-4{~{}\rm yr}$) and after energy deposition. Each panel presents results for three different values of the outer envelope mass to which we deposit the energy. We indicate above each panel the power of the waves that we use. Note that the vertical scales and minimum values change from one panel to another.

Figure 5: Similar to the left panels in the second and fourth rows of Fig. 4 with $L_{\rm W}=2\times 10^{4}L_{\odot}$, but for the effective temperature.

In Fig. 6 we present the density profiles of the star just before the flash, at the end of the energy deposition time $t_{\rm W}=4{~{}\rm yr}$, and at $t_{\rm W}=4{~{}\rm yr}$ but without energy deposition.
The upper panel emphasizes the changes in the core, which are mainly due to the core helium flash and do not depend on wave energy deposition, while the lower panel emphasizes the changes in the envelope that are due to wave energy deposition.

Figure 6: The density (in units of ${\rm g}{~{}\rm cm}^{-3}$) profiles of the case with $M_{\rm ZAMS}=1.6M_{\odot}$, $L_{\rm W}=2\times 10^{4}L_{\odot}$, and energy deposition in the outer $\xi=50\%$ envelope mass. We present the density profile before wave energy deposition (red line), at the end of wave energy deposition at $t_{\rm W}=4{~{}\rm yr}$, and at $t_{\rm W}=4{~{}\rm yr}$ but with no wave energy deposition. The upper panel with a logarithmic radius scale emphasizes the changes in the core due to the core helium flash, and the lower panel with a linear radius scale emphasizes the changes in the envelope due to wave energy deposition.

The wave energy dissipation in the envelope leads to a brightening of the star. The luminosity increase depends on the wave power $L_{\rm W}$ and the mass of the envelope into which we deposit the wave energy. Here we deposit the energy into the outer zone, of mass fraction $\xi$, of an envelope of mass $M_{\rm env,b}=1.01M_{\odot}$, as we indicate in the different panels of Fig. 4. The luminosity increases from about $L_{\rm b}=2\times 10^{3}L_{\odot}$ to $L(t_{\rm W}=4{~{}\rm yr})=2.2\times 10^{3}L_{\odot}$ in the case of $(L_{\rm W},\xi)=(5\times 10^{3}L_{\odot},80\%)$ and to $L(t_{\rm W}=4{~{}\rm yr})=5\times 10^{4}L_{\odot}$ in the case of $(L_{\rm W},\xi)=(5\times 10^{4}L_{\odot},20\%)$. Other values are in between. In the latter case the envelope has reached a steady state, and in the last three years of energy deposition ($t_{\rm W}\simeq 1{~{}\rm yr}$ to $t_{\rm W}=4{~{}\rm yr}$) it emits all the wave energy. We see this also from the evolution of the radius of the star, which reaches a constant value in the last three years of wave energy deposition. Consider the case of $(L_{\rm W},\xi)=(2\times 10^{4}L_{\odot},20\%)$ as an example. The luminosity increases to $L=1.7\times 10^{4}L_{\odot}$ over four years. However, we do not expect to observe such an increase in the visible band. After one year the luminosity increases from its initial value of $L_{\rm b}=2\times 10^{3}L_{\odot}$ to $L(1)\simeq 4\times 10^{3}L_{\odot}$ and the radius increases from $R_{\rm b}=128R_{\odot}$ to about $R(1)=180R_{\odot}$. This must lead to a substantial increase in the mass loss rate and a copious amount of dust formation. The large amounts of dust and the possible increase of the molecular opacity in the cooler upper atmosphere (as Kravchenko et al. 2021 found for Betelgeuse) can even lead to a decrease of the stellar brightness in the visible. The RGB star turns into a luminous red transient. This event might be classified as a weak and red ‘gap event’, i.e., a transient event with a peak luminosity between those of classical novae and typical supernovae. In this case it would be on the lower boundary of gap transients. The onset of a common envelope evolution of a low mass companion ($M_{2}\simeq 0.3-0.5M_{\odot}$) that enters the envelope of an RGB star or an AGB star might lead to a similar transient event, with an increase of luminosity by $\approx 10^{4}L_{\odot}$ at peak, a duration of several years, and enhanced mass loss rate and dust formation.
We expect present and future sky surveys to detect such weak-red gap transients, a small fraction of which might be RGB stars experiencing a core helium flash.

## 5 Discussion and Summary

The basic assumption of our study is that the gravity waves that the strong convection excites during the core helium flash drive waves that propagate all the way to the envelope. We based this assumption on a similar process that Quataert & Shiode (2012) and Shiode & Quataert (2014) proposed and studied for massive stars just before they experience a core collapse supernova explosion (section 2). Using their relation of the wave power to the core convection (equation 1), we find the average wave power over $\Delta t_{4}\simeq 4{~{}\rm yr}$ to be $L_{\rm W,0}=4.3\times 10^{5}L_{\odot}$ (equation 4 for the $M_{\rm ZAMS}=1.6M_{\odot}$ model, and Fig. 2 for the three models). Under their assumption, the waves deposit their energy in the outer envelope where the envelope convection cannot transport the wave power (Fig. 3). We did not study the wave propagation, but only the waves’ power and the effect of energy deposition into the envelope of the $M_{\rm ZAMS}=1.6M_{\odot}$ stellar model over four years during its core helium flash. We implemented a more conservative approach and deposited much lower energies than $E_{\rm wave,0}$ according to equation (3), and in a more extended envelope zone than what Fig. 3 suggests. We present the response of the $M_{\rm ZAMS}=1.6M_{\odot}$ envelope to energy deposition in Fig. 4 (luminosity and radius), in Fig. 5 (effective temperature), and in Fig. 6 (density profile). We quantified the degree of envelope expansion and luminosity increase, and found that the star relaxes back on a time scale of several years. We discuss two consequences of wave energy deposition that lead to envelope expansion and luminosity increase. These are the appearance of a gap transient event (with peak luminosity between classical novae and typical supernovae) and the possible engulfment of orbiting planets. In section 4.3 we discussed our expectation that a small fraction of weak and red gap transients will be RGB stars experiencing a core helium flash. Because of the large amount of dust that we expect the expanding star to form at the beginning of the event, the object might not brighten in the visible, and might actually become fainter in the visible. Such events are better observed in the infrared. We turn to discuss the possible engulfment of planets, which we further elaborate on in an accompanying paper (Merlov et al., 2021). As we show in Merlov et al. (2021), the expanding envelope that results from wave energy deposition at the core helium flash might engulf planets at the right orbital separation, planets that without the envelope expansion would survive the entire RGB phase of their parent star. The resulting common envelope evolution causes the planet to spiral in towards the core of the RGB star. Because of the inflated envelope, its binding energy is lower, and the planet might unbind most of the envelope and therefore might survive the common envelope evolution. Consider the case of $(L_{\rm W},\xi)=(2\times 10^{4}L_{\odot},50\%)$ for the $M_{\rm ZAMS}=1.6M_{\odot}$ stellar model. The binding energy of the envelope residing above radius $r=1R_{\odot}$ at the end of energy deposition at $t_{\rm W}=4{~{}\rm yr}$ is $E_{\rm e,bind}(1R_{\odot})=7\times 10^{45}{~{}\rm erg}$.
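For reference, the orbital-energy scale used in the comparison that follows can be checked with a rough estimate. This is a sketch under the assumption that the released orbital energy is $E_{\rm orbit}\approx GM_{\rm core}M_{\rm p}/2a$ with $M_{\rm core}\simeq 0.45M_{\odot}$; the planet mass and final separation are the scaling values of the text:

```python
# Order-of-magnitude check (cgs): E_orbit ~ G * M_core * M_p / (2 a),
# assuming M_core = 0.45 M_sun, M_p = 10 M_J, a = 1 R_sun.
G, M_SUN, M_J, R_SUN = 6.674e-8, 1.989e33, 1.898e30, 6.957e10
E_orbit = G * (0.45 * M_SUN) * (10 * M_J) / (2.0 * R_SUN)
print(f"E_orbit ~ {E_orbit:.1e} erg")  # ~8e45 erg, comparable to E_e,bind = 7e45 erg
```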
We can compare this energy to the orbital energy that a planet of mass $M_{\rm p}$ releases as it spirals in to an orbital separation of $a=1R_{\odot}$, $E_{\rm orbit}=8\times 10^{45}(M_{\rm p}/10M_{\rm J}){~{}\rm erg}$, where $M_{\rm J}$ is the Jupiter mass. Because $E_{\rm orbit}\gtrsim E_{\rm e,bind}$, it is possible for a planet to expel most of the envelope and survive such an evolution. Only more accurate numerical simulations of common envelope evolution with planets (e.g., Kramer et al. 2020) can determine whether the planet survives or not. In Merlov et al. (2021) we suggest that the general scenario that we discussed in this paper might explain the system WD 1856+534 (TIC 267574918), where a planet orbits a WD of mass $M_{\rm WD}\simeq 0.52M_{\odot}$ with $a\simeq 0.02{~{}\rm AU}$ and an orbital period of $P_{\rm orb}=1.4{~{}\rm days}$ (Vanderburg et al., 2020). We discussed some other scenarios for the formation of this system in section 1, and present the evolution towards a common envelope in Merlov et al. (2021). Here we note that in our models of $(L_{\rm W},\xi)=(2\times 10^{4}L_{\odot},50\%)$ and $(L_{\rm W},\xi)=(2\times 10^{4}L_{\odot},20\%)$ for the $M_{\rm ZAMS}=1.6M_{\odot}$ stellar model, the binding energies of the envelope that resides above mass coordinate $m=0.52M_{\odot}$ at the end of wave energy deposition are $E_{\rm e,bind}(0.52M_{\odot})=3.9\times 10^{45}{~{}\rm erg}$ and $E_{\rm e,bind}(0.52M_{\odot})=6.4\times 10^{45}{~{}\rm erg}$, respectively. Depositing the wave energy in a lower envelope mass fraction (in our study 20% of the envelope mass) leads to a larger envelope expansion that allows the envelope to engulf more exoplanets. However, because of the larger emission, the decrease in the envelope binding energy is less pronounced. The binding energy without wave energy deposition is much larger, $E_{\rm e,bind,b}(0.52M_{\odot})=1.2\times 10^{46}{~{}\rm erg}$. The orbital energy that the planet releases as it spirals in to an orbital separation of $a=4R_{\odot}$ around that WD is $E_{\rm orbit}=2.5\times 10^{45}(M_{\rm p}/10M_{\rm J}){~{}\rm erg}$. For the above scaling, the planet might directly unbind a large fraction of the envelope mass. A lower mass progenitor, e.g., $M_{\rm ZAMS}\simeq 1.2-1.4M_{\odot}$, makes the scenario more favorable. If the RGB star engulfs planets interior to the planet that eventually survives, the inner planet(s) can increase the likelihood that this planet survives. Siess & Livio (1999a, b) show that the accretion of a planet onto the core is accompanied by a substantial expansion of the star that can lead to high mass ejection. Namely, an inner planet or two can substantially reduce the mass of the RGB envelope when the surviving planet enters the envelope during the core helium flash. Finally, we also suggest that the scenario we studied here, where convection-excited waves cause large envelope expansion during the core helium flash, might explain the formation of the system ZTFJ003855.0+203025.5 that van Roestel et al. (2021) discovered, where a brown dwarf of mass $\simeq 0.059M_{\odot}$ and a WD of mass $\simeq 0.5M_{\odot}$ orbit each other with a semi-major axis of $2.0R_{\odot}$.

This research was supported by a grant from the Israel Science Foundation (769/20).

Data availability: The data underlying this article will be shared on reasonable request to the corresponding author.

## References

* Aguilera-Gómez et al. (2016) Aguilera-Gómez, C., Chanamé, J., Pinsonneault, M. H., & Carlberg, J. K. 2016, ApJ, 833, L24.
doi:10.3847/2041-8213/833/2/L24 * Bear et al. (2011) Bear, E., Soker, N., & Harpaz, A. 2011, ApJ, 733, L44. doi:10.1088/2041-8205/733/2/L44 * Carlberg et al. (2009) Carlberg, J. K., Majewski, S. R., & Arras, P. 2009, ApJ, 700, 832. doi:10.1088/0004-637X/700/1/832 * Carlberg et al. (2016) Carlberg, J. K., Smith, V. V., Cunha, K., & Carpenter, K. G., 2016, ApJ, 818, 25. doi:10.3847/0004-637X/818/1/25 * Carney et al. (2003) Carney, B. W., Latham, D. W., Stefanik, R. P.,Laird, J. B.,& Morse, J. A., 2003, AJ, 125, 293. doi:10.1086/345386 * Chamandy et al. (2021) Chamandy, L., Blackman, E. G., Nordhaus, J., & Wilson, E. 2021, MNRAS, 502, L110. doi:10.1093/mnrasl/slab017 * Denissenkov & Herwig (2004) Denissenkov, P. A. & Herwig, F. 2004, ApJ, 612, 1081. doi:10.1086/422575 * Fuller (2017) Fuller, J. 2017, MNRAS, 470, 1642. doi:10.1093/mnras/stx1314 * Gänsicke et al. (2019) Gänsicke, B. T., Schreiber, M. R., Toloza, O., Gentile Fusillo, N. P., Koester, D., & Manser, C. J., 2019, Nature, 576, 61. doi:10.1038/s41586-019-1789-8 * Geier et al. (2016) Geier, S., Kupfer, T., Schaffenroth, V., & Heber, U., 2016, The General Assembly of Galaxy Halos: Structure, Origin and Evolution, 317, 302. doi:10.1017/S174392131500681 * Guo et al. (2016) Guo, J., Lin, L., Bai, C., & Liu, J., 2016, Ap&SS, 361, 122. doi:10.1007/s10509-016-2684-5 * Hegazi et al. (2020) Hegazi, A., Bear, E., & Soker, N. 2020, MNRAS, 496, 612. doi:10.1093/mnras/staa1551 * Jimenez et al. (2020) Jimenez, R., Gråe JØrgensen, U., & Verde, L. 2020, J. Cosmology Astropart. Phys, 2020, 027. doi:10.1088/1475-7516/2020/10/027 * Jones & Jenkins (2014) Jones, M. I. & Jenkins, J. S. 2014, A&A, 562, A129. doi:10.1051/0004-6361/201322132 * Kramer et al. (2020) Kramer, M., Schneider, F. R. N., Ohlmann, S. T., Geier, S., Schaffenroth, V., Pakmor, R., & Röpke, F. K., 2020, A&A, 642, A97. doi:10.1051/0004-6361/202038702 * Kravchenko et al. (2021) Kravchenko K., Jorissen A., Van Eck S., Merle T., Chiavassa A., Paladini C., Freytag B., et al., 2021, arXiv, arXiv:2104.08105 * Krzesinski et al. (2020) Krzesinski, J., Blokesz, A., Siwak, M., et al. 2020, A&A, 642, A105. doi:10.1051/0004-6361/202038121 * Kunitomo et al. (2011) Kunitomo, M., Ikoma, M., Sato, B., Katsuta, Y., Ida S., 2011, ApJ, 737, 66. doi:10.1088/0004-637X/737/2/66 * Lagos et al. (2021) Lagos, F., Schreiber, M. R., Zorotovic, M., et al. 2021, MNRAS, 501, 676. doi:10.1093/mnras/staa3703 * Lecoanet & Quataert (2013) Lecoanet, D. & Quataert, E. 2013, MNRAS, 430, 2363. doi:10.1093/mnras/stt055 * Maldonado et al. (2021) Maldonado, R. F., Villaver, E., Mustill, A. J., et al. 2021, MNRAS, 501, L43. doi:10.1093/mnrasl/slaa193 * Manser et al. (2019) Manser, C. J., Gänsicke, B. T., Eggl, S., et al. 2019, Science, 364, 66. doi:10.1126/science.aat5330 * Massarotti (2008) Massarotti, A. 2008, AJ, 135, 2287. doi:10.1088/0004-6256/135/6/2287 * Mcley & Soker (2014) Mcley, L. & Soker, N. 2014, MNRAS, 445, 2492. doi:10.1093/mnras/stu1952 * Merlov et al. (2021) Merlov, A., Bear, E., & Soker, N. 2021, in preparation * Miller Bertolami et al. (2020) Miller Bertolami, M. M., Battich, T., Córsico, A. H., Christensen-Dalsgaard, J., & Althaus, L. G., 2020, Nature Astronomy, 4, 67. doi:10.1038/s41550-019-0890-0 * Mocák et al. (2010) Mocák, M., Campbell, S. W., Müller, E., & Kifonidis, K. 2010, A&A, 520, A114. doi:10.1051/0004-6361/201014461 * Mocák et al. (2009) Mocák, M., Müller, E., Weiss, A., et al. 2009, A&A, 501, 659. doi:10.1051/0004-6361/200811414 * Muñoz & Petrovich (2020) Muñoz, D. J. & Petrovich, C. 
2020, ApJ, 904, L3. doi:10.3847/2041-8213/abc564 * Mustill & Villaver (2012) Mustill, A. J. & Villaver, E. 2012, ApJ, 761, 121. doi:10.1088/0004-637X/761/2/121 * Nelemans & Tauris (1998) Nelemans, G. & Tauris, T. M. 1998, A&A, 335, L85 * Nordhaus & Blackman (2006) Nordhaus, J. & Blackman, E. G. 2006, MNRAS, 370, 2004. doi:10.1111/j.1365-2966.2006.10625.x * Nordhaus & Spiegel (2013) Nordhaus, J. & Spiegel, D. S. 2013, MNRAS, 432, 500. doi:10.1093/mnras/stt569 * Nordhaus et al. (2010) Nordhaus, J., Spiegel, D. S., Ibgui, L., Goodman, J., & Burrows A., 2010, MNRAS, 408, 631. doi:10.1111/j.1365-2966.2010.17155.x * O’Connor et al. (2021) O’Connor, C. E., Liu, B., & Lai, D. 2021, MNRAS, 501, 507. doi:10.1093/mnras/staa3723 * Paxton et al. (2011) Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3 * Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4 * Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15 * Paxton et al. (2018) Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34 * Paxton et al. (2019) Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10, arXiv:1903.01426 * Privitera et al. (2016) Privitera, G., Meynet, G., Eggenberger, P., Vidotto, A. A., Villaver, E., & Bianda M., 2016, A&A, 591, A45. doi:10.1051/0004-6361/201528044 * Quataert & Shiode (2012) Quataert, E. & Shiode, J. 2012, MNRAS, 423, L92. doi:10.1111/j.1745-3933.2012.01264.x * Rao et al. (2018) Rao, S., Meynet, G., Eggenberger, P., Haemmerlé, L., Privitera, G., Georgy, C., Ekström, S., et al., 2018, A&A, 618, A18. doi:10.1051/0004-6361/201833107 * Schaffenroth et al. (2019) Schaffenroth, V., Barlow, B. N., Geier, S., et al. 2019, A&A, 630, A80. doi:10.1051/0004-6361/201936019 * Schröder & Smith (2008) Schröder, K.-P. & Smith, R. C. 2008, MNRAS, 386, 155. doi:10.1111/j.1365-2966.2008.13022.x * Schwab (2020) Schwab, J. 2020, ApJ, 901, L18. doi:10.3847/2041-8213/abb45f * Setiawan et al. (2010) Setiawan, J., Klement, R. J., Henning, T., et al. 2010, Science, 330, 1642. doi:10.1126/science.1193342 * Shiode & Quataert (2014) Shiode, J. H. & Quataert, E. 2014, ApJ, 780, 96. doi:10.1088/0004-637X/780/1/96 * Siess & Livio (1999a) Siess, L. & Livio, M. 1999a, MNRAS, 304, 925. doi:10.1046/j.1365-8711.1999.02376.x * Siess & Livio (1999b) Siess, L. & Livio, M. 1999b, MNRAS, 308, 1133. doi:10.1046/j.1365-8711.1999.02784.x * Soker (1998) Soker, N. 1998, AJ, 116, 1308. doi:10.1086/300503 * Staff et al. (2016) Staff, J. E., De Marco, O., Wood, P., Galaviz, P., & Passy, J.-C 2016, MNRAS, 458, 832. doi:10.1093/mnras/stw331 * Stephan et al. (2020) Stephan, A. P., Naoz, S., & Gaudi, B. S. 2020, arXiv:2010.10534 * Vanderburg et al. (2020) Vanderburg, A., Rappaport, S. A., Xu, S., et al. 2020, Nature, 585, 363. doi:10.1038/s41586-020-2713-y * van Roestel et al. (2021) van Roestel, J., Kupfer, T., Bell, K. J., Burdge, K., Mróz, P., Prince, T. A., Bellm, E. C., et al., 2021, arXiv, arXiv:2105.08687 * Villaver & Livio (2009) Villaver, E. & Livio, M. 2009, ApJ, 705, L81. doi:10.1088/0004-637X/705/1/L81 * Villaver et al. (2014) Villaver, E., Livio, M., Mustill, A. J., & Siess, L., 2014, ApJ, 794, 3. doi:10.1088/0004-637X/794/1/3
# Decentralized Constrained Optimization: Double Averaging and Gradient Projection

Firooz Shahriari-Mehr, David Bosch and Ashkan Panahi Department of Computer Science and Engineering Chalmers University of Technology Gothenburg, Sweden Firooz, Davidbos<EMAIL_ADDRESS>

###### Abstract

In this paper, we consider the convex, finite-sum minimization problem with explicit convex constraints over strongly connected directed graphs. The constraint is an intersection of several convex sets, each known to only one node. To solve this problem, we propose a novel decentralized projected gradient scheme based on local averaging and prove its convergence using only the smoothness of the local functions. Experimental studies demonstrate the effectiveness of the proposed method in both constrained and unconstrained problems.

## 1 Introduction

In the past decade, decentralized optimization techniques have attracted significant interest [18, 36]. In this setting, multiple computing nodes are involved, and there is no coordinator (central) node with which all nodes communicate. A fairly general framework for decentralized optimization problems is given by

$\min\limits_{\mathbf{x}\in\mathbb{R}^{m}}\quad f(\mathbf{x})\triangleq\sum\limits_{v=1}^{M}f_{v}(\mathbf{x}),$ (1)

where $M$ is the total number of nodes in the network, $\mathbf{x}\in\mathbb{R}^{m}$ is called the global optimization variable, and $f(.)$ is the global objective function, which has a finite-sum structure. Here, each node $v$ has access to its local function $f_{v}:\mathbb{R}^{m}\rightarrow\mathbb{R}$ and communicates with its neighbors $\mathcal{N}_{v}$ to achieve an _optimal consensus solution_. A natural extension to this setup is when $\mathbf{x}$ in problem (1) is required to lie in an intersection of several convex sets, i.e. $\mathbf{x}\in\bigcap_{v=1}^{M}S_{v}$, and each constraint $S_{v}$ is known only to one node. Applications of this setup, which we refer to as the Decentralized Constrained Optimization Problem (DCOP), are ubiquitous, e.g. smart grid control [1, 7], optimal energy management [23], sensor networks [2], and support vector machines [4]. However, practical approaches to solving it have not been extensively discussed in the literature. This paper responds to this shortcoming by providing a numerical method to solve DCOP with guaranteed convergence properties.

Node communication is a crucial factor in the design of decentralized optimization techniques, and is represented by either a directed or undirected communication graph. Earlier studies on decentralized techniques considered static undirected graphs, where each communication link between two nodes is time-invariant and bi-directional, meaning that both nodes can send and receive information. This assumption is not compatible with many practical applications, such as broadcast channels with no return link, or communication failures leading to uni-directional links [34]. These problems have motivated researchers to propose decentralized methods that consider directed graphs as the underlying communication network, where each communication link in the network is uni-directional. For simplicity, decentralized methods whose node communication is represented by a directed graph are called _decentralized directed methods_ throughout this paper; in the same way, we refer to _decentralized undirected methods_. We address the more general case of decentralized directed scenarios, while the undirected case follows as a special case.
For undirected communication graphs, efficient optimization techniques with provable convergence properties exist based on suitable iterative averaging over neighbor nodes [5, 21, 28]. The averaging procedure is mathematically represented by the so-called gossip matrices, which are compatible with the network structure, doubly stochastic, and symmetric [28, 20]. The required gossip matrices can be constructed using Laplacian or Metropolis matrices for undirected graphs. Such gossip matrices are not compatible with directed graphs, which require asymmetry; moreover, finding doubly stochastic matrices for directed graphs is not straightforward, often requiring distributed and iterative numerical procedures such as iterative weight balancing [6]. For this reason, practical schemes utilize row stochastic or column stochastic matrices instead of doubly stochastic matrices. In this case, convergence bounds comparable to the undirected scenarios, even in the absence of constraints, are lacking to the best of our knowledge. We further address this issue by proposing a novel double-averaging scheme, similar to the so-called push-pull approach [24], which takes both row and column stochastic matrices into account, and at the same time enjoys superior convergence guarantees.

### 1.1 Contributions

The main contributions of the paper are summarized as follows:

* • We propose a novel algorithm, called DAGP, to solve the problem of decentralized constrained optimization. Our scheme employs double averaging and projection onto convex sets. It extends the tracking approach, first proposed in [28, 20], to constrained problems, and benefits from a fixed step size and fast convergence.

* • In contrast to the previously proposed methods in the literature, our method simultaneously considers a directed communication graph and individual constraints at each node.

* • We show that our technique is applicable to generic constrained convex problems, lacking strong convexity, while maintaining a convergence rate of order $\mathcal{O}(1/\sqrt{n})$, under mild conditions. We are not aware of any decentralized unconstrained method over directed communication graphs with similar established convergence properties.

* • We present experiments for constrained decentralized optimization problems on directed graphs, where DAGP outperforms the existing algorithms. We also conduct experiments on unconstrained problems, where DAGP performs similarly to state-of-the-art decentralized optimization algorithms.

### 1.2 Literature Review

In this section, we review the decentralized methods in the existing literature. We organize our review into three parts: techniques on undirected graphs, techniques on directed graphs, and decentralized constrained methods. Several classes of methods exist in the literature that are not within the scope of this paper, e.g., methods considering time-varying graphs [19, 20], local functions with a finite-sum structure [17, 8, 9, 33], or compressed communication [12, 3].

#### 1.2.1 Decentralized optimization over undirected graphs

The algorithms for undirected communication graphs can be divided into several categories. First, the decentralized gradient descent methods, including [21, 15], use a diminishing step size for convergence to the exact solution of the problem.
The diminishing step size leads to practical difficulties with step tuning, but establishes convergence rates of $\mathcal{O}(\log n/\sqrt{n})$ in a convex and smooth setting and $\mathcal{O}(\log n/n)$ in a strongly convex and smooth setting. The second category refers to the methods that use the gradient tracking technique and leverage the gradient information at all nodes to estimate the gradient of the global function [28, 20, 25]. These methods use fixed step sizes and achieve a linear convergence rate, i.e. $\mathcal{O}(\mu^{n})$ with $\mu<1$, in a strongly convex and smooth setting. [25] has also shown a sublinear rate of convergence, i.e. $\mathcal{O}(1/n)$, when the functions are not strongly convex. The dual-based methods [27, 30, 10] form the third group. Although these methods are optimal and have linear convergence rates, they need to compute some computationally costly oracles, e.g. the gradient of a conjugate function, which is not practical in some applications.

#### 1.2.2 Decentralized optimization over directed graphs

Earlier methods for directed problems apply the so-called push-sum protocol [11] to decentralized gradient descent methods to tackle the problem of computing a doubly stochastic gossip matrix for directed graphs [29, 19]. These methods utilize a column stochastic matrix, but a diminishing step size is still vital for their convergence. The methods based on the push-sum protocol converge at a rate of order $\mathcal{O}(\log n/n)$ for smooth and strongly convex functions. To achieve a fixed step size, [32, 20] have combined the push-sum protocol with the gradient tracking technique and have respectively proposed the DEXTRA and Push-DIGing algorithms. These algorithms achieve a linear rate of convergence in a smooth and strongly convex setting. DEXTRA suffers from theoretical limitations on the step size, namely a feasible step size might not exist in some cases. [32] has proposed the ADD-OPT algorithm to solve this problem. This algorithm also enjoys linear convergence for strongly convex functions. Recently, methods based on two gossip matrices, one column stochastic and the other row stochastic, have been proposed in the literature [24, 35]; these are called Push-Pull methods. These methods also have linear convergence in a smooth and strongly convex setting. Our algorithm is similar to push-pull methods, as it can be applied with similar underlying matrices.

#### 1.2.3 Decentralized constrained optimization

Despite extensive studies on decentralized optimization, there exist few papers that consider constraints explicitly. It is worth mentioning that a straightforward approach to solving constrained problems is to add the indicator functions of the constraint sets to the objective, and then apply the methods proposed for the unconstrained problem. This approach requires methods that are applicable to non-smooth and non-strongly convex functions with unbounded (sub)gradients, due to the characteristics of indicator functions. For this reason, we note that utilizing the previously mentioned methods does not guarantee convergence. [26] is among the first papers incorporating the projection and averaging approaches, but it assumes that the constraint set is identical at all nodes. This leads to a problem when the projection onto the constraint set is not computationally efficient.
In response, the projected subgradient algorithm has been proposed, which assumes that the constraint sets are different and distributed among all nodes [22]. This paper is similar in setup to ours, but does not provide a precise convergence rate. Moreover, convergence of the local variables to a consensus stopping point is proven only in two special cases: when the constraints are identical, or when the graph is fully connected. Since the constraint at each node might be an intersection of several constraints, and in some applications the nodes do not have access to all of their local constraints at each iteration, [14] has proposed a randomized projection scheme. This algorithm suffers from the same limitations as [22], i.e. the proof is only reliable for fully connected networks or a setting with identical constraints at each node. All the above-mentioned methods use a diminishing step size, as they do not leverage any gradient tracking technique. Moreover, they assume that the underlying communication graph is undirected. [31] has proposed the DDPS algorithm, which is applicable when the communication graph is directed. However, this algorithm uses a diminishing step size as well, and its convergence rate is of order $\mathcal{O}(\log n/\sqrt{n})$. Moreover, it is subject to the restrictive assumption that the constraints are identical. There are also methods with a different problem description, such as composite constrained optimization [16]. These methods consider undirected communication graphs and differ from our problem in nature.

### 1.3 Paper Outline

The rest of the paper is organized as follows. In the following, some preliminary definitions and notations are introduced. The DAGP algorithm is proposed in section 2, along with a theoretical convergence analysis. The proofs of all Lemmas and Theorems are provided in the Appendix. Finally, section 3 is devoted to the numerical studies.

###### Definition 1 (Normal cone and Projection Operator).

For a closed convex set $S\subset\mathbb{R}^{n}$, the normal cone of $S$ is given by

$\partial I_{S}(\mathbf{x})=\begin{cases}\emptyset&\mathbf{x}\notin S\\ \left\{\mathbf{g}\in\mathbb{R}^{n}\mid\forall\mathbf{z}\in S,\;\mathbf{g}^{T}(\mathbf{z}-\mathbf{x})\leq 0\right\}&\mathbf{x}\in S\end{cases}.$

Moreover, the projection of a vector $\mathbf{x}\in\mathbb{R}^{n}$ onto $S$ is computed by

$P_{S}(\mathbf{x})=\operatorname*{arg\,min}\limits_{\mathbf{y}\in S}\|\mathbf{y}-\mathbf{x}\|_{2}^{2}.$
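For many constraint sets of practical interest, the projection of Definition 1 has a closed form. The following is a minimal illustrative sketch for two common examples, a box and a Euclidean ball; these specific sets are our own illustration and are not assumed anywhere in the analysis:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """Projection onto the Euclidean ball {y : ||y - center||_2 <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist
```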
###### Definition 2 (Graph Theory).

A directed graph is denoted by $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{1,\dots,M\}$ is the set of all nodes, and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is a set of ordered pairs of distinct nodes, called edges. A directed path between two distinct nodes $u,v\in\mathcal{V}$ is a sequence of nodes $(u=v_{0},v_{1},\dots,v_{k}=v)$ such that each pair $(v_{i},v_{i+1})$ is an edge in $\mathcal{E}$. A graph $\mathcal{G}$ is strongly connected if for any two distinct nodes $u,v\in\mathcal{V}$, there exists a directed path between $u$ and $v$. The adjacency matrix, denoted by $\mathbf{A}=[a_{ij}]$, is an asymmetric matrix, where $a_{ij}$ is $+1$ if $(i,j)\in\mathcal{E}$, and $0$ otherwise. In this paper, each pair represents a communication link between two distinct nodes, and $(i,j)$ is a pair in $\mathcal{E}$ if there is a link from node $j$ to node $i$. With this intuition, the $i$th row of the adjacency matrix shows from which nodes node $i$ can receive information; these nodes constitute the incoming neighbors of node $i$, denoted $\mathcal{N}_{i}^{\text{in}}=\left\{j\mid(i,j)\in\mathcal{E}\right\}$. On the other hand, the $i$th column of the adjacency matrix shows to which nodes node $i$ can send information; these nodes constitute the outgoing neighbors of node $i$, denoted $\mathcal{N}_{i}^{\text{out}}=\left\{j\mid(j,i)\in\mathcal{E}\right\}$. The in-degree and out-degree of node $i$ are defined as the cardinalities of $\mathcal{N}_{i}^{\text{in}}$ and $\mathcal{N}_{i}^{\text{out}}$, respectively. Consequently, two Laplacian matrices can be defined as

$\mathbf{L}^{\text{in}}=\mathbf{D}^{\text{in}}-\mathbf{A},\qquad\mathbf{L}^{\text{out}}=\mathbf{D}^{\text{out}}-\mathbf{A},$

where $\mathbf{D}^{\text{in}}$ is the in-degree diagonal matrix, that is, $d_{ii}^{\text{in}}=\left|\mathcal{N}_{i}^{\text{in}}\right|$, and $\mathbf{D}^{\text{out}}$ is defined in a similar way. $\mathbf{L}^{\text{in}}$ and $\mathbf{L}^{\text{out}}$ have the zero row-sum and zero column-sum properties, respectively, and their scaled versions are used in this paper.

### 1.4 Mathematical Notation

In this paper, bold lowercase and uppercase letters represent vectors and matrices, respectively. $w_{vu}$ denotes the element at the $v^{\text{th}}$ row and the $u^{\text{th}}$ column of the matrix $\mathbf{W}$. $\mathbf{W}^{T}$ denotes the transpose of $\mathbf{W}$, and $\ker(\mathbf{W})$ is its right null space, meaning that $\mathbf{x}\in\ker(\mathbf{W})$ if and only if $\mathbf{W}\mathbf{x}=\mathbf{0}$. $\mathbf{1}_{n}$ and $\mathbf{0}_{n}$ respectively denote the $n$-dimensional vectors of all ones and all zeros. The index $n$ may be dropped if there is no risk of confusion. Furthermore, $\mathbf{O}$ denotes a matrix with all zero elements. The Euclidean inner product of vectors is denoted by $\langle.,.\rangle$. The matrix inner product is denoted by $\langle\mathbf{A},\mathbf{C}\rangle=\text{Tr}(\mathbf{A}\mathbf{C}^{T})$. In this paper, the subscript generally denotes the iteration number, and the superscript denotes the node number; e.g., $\nabla f_{v}(\mathbf{x}^{v}_{n})$ indicates the gradient of node $v$’s local function at its local variable at iteration $n$. Finally, $\delta_{n,0}$ is the Kronecker delta function.

## 2 Problem Setting and Proposed Algorithm

In this section, we propose a new algorithm to solve DCOP, considering directed graphs as the communication network between the nodes. The proposed algorithm is called _DAGP_ due to the Double Averaging and Gradient Projection approaches used in its iterative equations, which are introduced in subsection 2.2.

### 2.1 Problem Formulation

The Decentralized Constrained Optimization Problem (DCOP) is formulated as

$\min\limits_{\mathbf{x}\in\mathbb{R}^{m}}\;\;f(\mathbf{x})\triangleq\sum\limits_{v=1}^{M}f_{v}(\mathbf{x})\quad\mbox{{s.t.}}\quad\mathbf{x}\in\bigcap\limits_{v=1}^{M}S_{v},$ (2)

where $S_{v}$ is a closed convex set, and the intersection of all these constraint sets is called the feasible set. Note that, without loss of generality, the number of constraints is equal to the number of functions. This allows us to assume $M$ nodes in our setting, each having access to one function and one constraint. Note that a different number of constraints can still be considered in our setup, as each node may further access a composite constraint (i.e. an intersection of simpler constraints), or multiple nodes may share identical constraints (i.e. $S_{v}=S_{u}$) or have trivial constraints $S_{v}=\mathbb{R}^{m}$.
Nevertheless, we merely require the projection operator onto the constraint set $S_{v}$ of each node to be available, and neglect further possible structures in them. In decentralized optimization, each node stores and updates a local variable $\mathbf{x}^{v}$ as its solution. The nodes should achieve consensus, i.e. the $\mathbf{x}^{v}$s must converge to a common stopping point $\mathbf{x}^{*}$, which is further required to be a feasible and optimal solution of (2).

### 2.2 DAGP Algorithm

The DAGP algorithm performs the following updates in each iteration, $\forall v\in\mathcal{V}$:

$\mathbf{z}^{v}=\mathbf{x}^{v}_{n}-\sum\limits_{u\in\mathcal{N}_{v}^{\text{in}}}w_{vu}\mathbf{x}^{u}_{n}-\mu\left(\nabla f_{v}(\mathbf{x}^{v}_{n})-\mathbf{g}^{v}_{n}\right)$ (3)

$\mathbf{x}_{n+1}^{v}=P_{S_{v}}\left(\mathbf{z}^{v}\right)$ (4)

$\mathbf{g}_{n+1}^{v}=\mathbf{g}_{n}^{v}+\rho\left[\nabla f_{v}(\mathbf{x}^{v}_{n})-\mathbf{g}^{v}_{n}+\frac{1}{\mu}\left(\mathbf{z}^{v}-\mathbf{x}^{v}_{n+1}\right)\right]+\alpha\left(\mathbf{h}_{n}^{v}-\mathbf{g}_{n}^{v}\right)$ (5)

$\mathbf{h}_{n+1}^{v}=\mathbf{h}_{n}^{v}-\sum\limits_{u\in\mathcal{N}_{v}^{\text{in}}}q_{vu}(\mathbf{h}_{n}^{u}-\mathbf{g}_{n}^{u})$ (6)

To interpret the algorithm, consider the above update equations, and take into account that node $v$ has access to $\left(\mathbf{x}^{u}_{n},\mathbf{h}^{u}_{n}-\mathbf{g}^{u}_{n}\right)$, $\forall u\in\mathcal{N}_{v}^{\text{in}}$, as each node $u$ broadcasts this pair of messages to its out-neighbors. The first weighted averaging happens in (3), where each node computes a weighted average of its local variable and its in-neighbors’ local variables. This averaging is a basis for achieving consensus, and $\mathbf{W}=\left[w_{vu}\right]$ must have the zero row-sum structure to achieve this objective. Then, the resulting averaged vector is moved along the negative of the augmented local descent direction $\left(\nabla f_{v}(\mathbf{x}^{v}_{n})-\mathbf{g}^{v}_{n}\right)$, scaled by a fixed step size $\mu$. The resulting solution $\mathbf{z}^{v}$ is projected onto the local constraint set in (4). Therefore, from the second iteration on, the local variables lie in their own local constraint sets, but not necessarily in the feasible set of the problem in (2). One of the novelties of this paper is the definition of the additional variables $\mathbf{h}^{v}_{n}$ together with $\mathbf{g}^{v}_{n}$ to push the algorithm towards an optimal consensus solution. The vectors $\mathbf{g}^{v}$ act as the memory of the algorithm, which preserves and tracks the previous information of the local functions’ gradients and _feasible directions_, i.e. $\nabla f_{v}$ and $\mathbf{z}^{v}-\mathbf{x}^{v}$, respectively. To reach an optimal solution, the gradients and feasible directions of all nodes must be aggregated. This is achieved by adding the term $\alpha(\mathbf{h}^{v}-\mathbf{g}^{v})$ to (5), where $\mathbf{h}^{v}$ is updated using (6). $\mathbf{h}^{v}$ propagates the information of the gradients and feasible directions of other nodes through the second weighted averaging using $\mathbf{Q}=\left[q_{vu}\right]$. In (6), we further require $\mathbf{Q}$ to be a matrix with the zero column-sum structure. Then, $\sum_{v\in\mathcal{V}}\mathbf{h}^{v}$ will not change over time.
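A minimal sketch of one DAGP iteration follows, stacking the local variables of all $M$ nodes as the rows of matrices and writing the neighbor sums in (3) and (6) as matrix products (with the convention that $\mathbf{W}$ and $\mathbf{Q}$ carry their diagonal entries); the gradient and projection oracles are placeholders supplied by the user:

```python
import numpy as np

def dagp_step(X, G, H, W, Q, grads, projections, mu, rho, alpha):
    """One DAGP iteration, eqs. (3)-(6). Rows of X, G, H hold the local
    variables x^v, g^v, h^v of the M nodes. W has zero row sums and Q zero
    column sums; grads[v] and projections[v] are the oracles of node v."""
    M = X.shape[0]
    nabla = np.stack([grads[v](X[v]) for v in range(M)])        # local gradients
    Z = X - W @ X - mu * (nabla - G)                            # eq. (3)
    X_new = np.stack([projections[v](Z[v]) for v in range(M)])  # eq. (4)
    G_new = G + rho * (nabla - G + (Z - X_new) / mu) + alpha * (H - G)  # eq. (5)
    H_new = H - Q @ (H - G)                                     # eq. (6)
    return X_new, G_new, H_new
```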
We also require the $\mathbf{h}^{v}_{0}$s to be initialized such that $\sum_{v\in\mathcal{V}}\mathbf{h}^{v}_{0}=\mathbf{0}$; the easiest way is to initialize them with zero vectors. In this way, when the $\mathbf{g}^{v}$s converge to the $\mathbf{h}^{v}$s, their sum over all nodes equals zero. This, in turn, leads to satisfying the optimality condition of the problem, as the $\mathbf{g}^{v}$s contain the gradients and feasible directions of the problem. This is further elaborated in Theorem 1 and Appendix A.

### 2.3 Convergence Analysis

In this section, we discuss the convergence properties of DAGP. We present two results. First, in Theorem 1, we prove that if the iterates of DAGP converge, any stopping point is an optimal and consensus solution of the problem in (2). Then, in Theorem 2, we establish the convergence rate of our proposed algorithm in a smooth and convex setting.

#### 2.3.1 Assumptions

We proceed by formalizing the adopted assumptions as follows.

###### Assumption 1. The nodes communicate over a strongly connected directed graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$.

This assumption guarantees sufficient information flow between the nodes, as there exists a directed path between every pair of nodes in the graph. As a result, the nodes can achieve consensus.

###### Assumption 2. The optimization problem (2) is feasible and attains a finite optimal value $f^{*}=f(\mathbf{x}^{*})$ at an optimal feasible solution $\mathbf{x}^{*}$ satisfying the optimality condition: $\mathbf{0}\in\sum_{v=1}^{M}\left(\partial I_{S_{v}}(\mathbf{x}^{*})+\nabla f_{v}(\mathbf{x}^{*})\right).$

###### Assumption 3. There exist two weight matrices $\mathbf{W}$ and $\mathbf{Q}$ with the same sparsity pattern as the adjacency matrix $\mathbf{A}$ of $\mathcal{G}$. They further satisfy the zero row-sum and zero column-sum structure, respectively; the former is required for achieving consensus and the latter for attaining optimality of the solution. Moreover, we assume that $\ker(\mathbf{Q})=\ker(\mathbf{W}^{T})$ and $\ker(\mathbf{W})=\mathrm{span}\{\mathbf{1}\}$.

###### Assumption 4. The functions $f_{v}(\cdot)$ are convex, differentiable and $L$-smooth.

Now, we define the matrices $\mathbf{R}$ and $\mathbf{P}$ as $\mathbf{R}=\left[\begin{array}{cccc}\mathbf{O}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{I}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ -\frac{\rho}{\mu}\mathbf{I}&\frac{\rho}{\mu}(\mathbf{I}-\mathbf{W})&\mathbf{I}&\alpha\mathbf{I}\\ \frac{\rho}{\mu}\mathbf{I}&-\frac{\rho}{\mu}(\mathbf{I}-\mathbf{W})&\mathbf{O}&(1-\alpha)\mathbf{I}-\mathbf{Q}\end{array}\right],\qquad\mathbf{P}=\left[\begin{array}{c}\mathbf{I}\\ \mathbf{O}\\ \mathbf{O}\\ \mathbf{O}\end{array}\right].$ (7) Moreover, for an arbitrary positive value of $\eta$, the matrix $\mathbf{S}$ is computed as $\mathbf{S}=\left[\begin{array}{cccc}\left(1-\frac{L\mu}{2}\right)\mathbf{I}-M\eta\left(\mathbf{I}-\frac{1}{M}\mathbf{1}\mathbf{1}^{T}\right)&-\frac{1}{2}(\mathbf{I}-\mathbf{W})+\frac{L\mu}{2}\mathbf{I}&-\frac{\mu}{2}\mathbf{I}&\mathbf{O}\\ -\frac{1}{2}(\mathbf{I}-\mathbf{W}^{T})+\frac{L\mu}{2}\mathbf{I}&-\frac{L\mu}{2}\mathbf{I}&\mathbf{O}&\mathbf{O}\\ -\frac{\mu}{2}\mathbf{I}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\mathbf{O}&\mathbf{O}\end{array}\right].$ (8) Please see Appendix B for how these matrices arise.

###### Assumption 5.
There exists a strictly positive constant $C$ such that, for every value of $\beta>0$ and in a small neighborhood of $z=0$ on the complex plane, $1$ is not an eigenvalue of the matrix $\left[\begin{array}{ccc}\mathbf{I}&\mathbf{O}&\mathbf{O}\end{array}\right]\mathbf{F}^{-1}(z,\beta)\left[\begin{array}{c}-(C+\beta)\mathbf{I}\\ \mathbf{I}\\ \mathbf{O}\end{array}\right],$ (9) where $\mathbf{F}$ is defined as $\mathbf{F}(z,\beta)=\left[\begin{array}{ccc}\mathbf{S}&z^{-1}\mathbf{I}-\mathbf{R}^{T}&\mathbf{O}\\ z\mathbf{I}-\mathbf{R}&\mathbf{O}&-\mathbf{P}\\ \mathbf{O}&-\mathbf{P}^{T}&-\beta\mathbf{I}\end{array}\right].$ (10)

#### 2.3.2 Main Results

Here, we present the main theoretical results and postpone the details and proofs to the Appendix.

###### Theorem 1. Let Assumptions 3 and 4 hold. If the iterates of the DAGP algorithm converge, any stopping point is an optimal and consensus solution of the decentralized constrained optimization problem in (2), i.e. $\mathbf{x}^{v}=\mathbf{x}^{*},\;\forall v\in\mathcal{V}$, and $\mathbf{x}^{*}$ satisfies the optimality condition.

We also present guarantees for the rate of convergence.

###### Theorem 2. Let all the assumptions hold. Define $\bar{\mathbf{x}}^{v}_{N}=\frac{1}{N}\sum\limits_{n=0}^{N-1}\mathbf{x}^{v}_{n}$ and $\bar{\mathbf{x}}_{N}=\frac{1}{M}\sum\limits_{v}\bar{\mathbf{x}}^{v}_{N}$. Then, $\|\bar{\mathbf{x}}_{N}-\bar{\mathbf{x}}^{v}_{N}\|^{2}=O(\frac{1}{N})$, $\mathrm{dist}^{2}(\bar{\mathbf{x}}_{N},S_{v})=O(\frac{1}{N})$ and $\left|\sum\limits_{v}f_{v}(\bar{\mathbf{x}}^{v}_{N})-\sum\limits_{v}f_{v}(\mathbf{x}^{*})\right|=O\left(\frac{1}{\sqrt{N}}\right).$ (11)

## 3 Experimental Results

We evaluate and compare the performance of the DAGP algorithm in two scenarios: decentralized constrained and unconstrained problems. In the first experiment, which contains examples with synthetic data, we consider constraints and examine the convergence and feasibility gap of DAGP compared to the DDPS algorithm. In the second experiment, we solve the classical logistic regression problem, which is unconstrained; there, we compare our algorithm to state-of-the-art decentralized unconstrained optimization algorithms over directed graphs, namely ADD-OPT and Push-Pull. In the first experiment, the parameters of all algorithms are hand-tuned so that each achieves its best performance, leading to a fair comparison. In the real-world logistic regression problem, hand-tuning parameters is not computationally feasible due to the size of the experiment; therefore, an appropriate step size is selected for all algorithms. Different algorithms use different matrices for averaging. In this paper, we respectively use $\left.\mathbf{L}^{\text{in}}\middle/2d_{\text{max}}^{\text{in}}\right.$ and $\left.\mathbf{L}^{\text{out}}\middle/2d_{\text{max}}^{\text{out}}\right.$ as the zero row-sum and zero column-sum matrices, where $d_{\text{max}}^{\text{in}}$ and $d_{\text{max}}^{\text{out}}$ are the largest diagonal elements of $\mathbf{L}^{\text{in}}$ and $\mathbf{L}^{\text{out}}$, respectively. Subtracting these matrices from the identity matrix yields the row-stochastic and column-stochastic matrices used in this paper. Moreover, random directed, strongly connected graphs are used in our experiments, shown in Fig 1. The numerical experiments are described next.
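Before that, here is a minimal sketch of the averaging-matrix construction just described, assuming a binary adjacency matrix $\mathbf{A}$ with $a_{ij}=1$ iff $(i,j)\in\mathcal{E}$, so that row sums give in-degrees and column sums give out-degrees, as defined earlier.

```python
import numpy as np

def mixing_matrices(A):
    """Build the zero row-sum / zero column-sum matrices used in the experiments.

    Returns (W, Q) with W = L_in / (2 d_max_in), which has zero row-sum, and
    Q = L_out / (2 d_max_out), which has zero column-sum. I - W and I - Q are
    the row- and column-stochastic matrices, respectively.
    """
    D_in = np.diag(A.sum(axis=1))    # in-degrees: row sums of A
    D_out = np.diag(A.sum(axis=0))   # out-degrees: column sums of A
    L_in, L_out = D_in - A, D_out - A
    W = L_in / (2 * np.max(np.diag(L_in)))
    Q = L_out / (2 * np.max(np.diag(L_out)))
    return W, Q
```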
We repeated each experiment multiple times, but only one instance of each is presented, as the difference between individual runs was minimal.

Figure 1: Directed random graphs used in our experiments. (a) First setup; (b) second setup; (c) logistic regression.

### 3.1 Numerical Results

In this experiment, which contains two setups, we consider synthetic functions and constraints. In our setups, there are $M$ nodes, each having access to one function and one constraint. The nodes communicate over a randomly generated graph, and their local variables $\mathbf{x}^{v}$ are of size $m$, with initial vectors drawn from a zero-mean, unit-variance normal distribution. The functions are selected to be smooth, but not strongly convex, as follows: $f_{v}(\mathbf{x})=\log\big{(}\cosh(\mathbf{a}_{v}^{T}\mathbf{x}-b_{v})\big{)},$ (12) where the $\mathbf{a}_{v}$s and $b_{v}$s are randomly generated from a zero-mean, unit-variance normal distribution. Moreover, we choose randomly generated linear constraints $\mathbf{c}_{v}^{T}\mathbf{x}-d_{v}\leq 0$, since their orthogonal projection operator is simple to compute. (In all simulations, $\mathbf{c}_{v}$ and $d_{v}$ are selected such that the intersection of the constraints is non-empty; e.g., the $\mathbf{c}_{v}$s are generated randomly, and then the $d_{v}$s are selected such that $\mathbf{c}_{v}^{T}\mathbf{x}\leq d_{v}$ holds for one arbitrary vector $\mathbf{x}$.) In the first setup, $m=20$ and $M=10$, while in the second one, $m=10$ and $M=20$; these parameters are chosen because the feasible set in the second setup is significantly smaller than that in the first. In both setups, the objective value and the distance to the feasible set, called the feasibility gap, are reported, both computed at $\bar{\mathbf{x}}=\sum_{v\in\mathcal{V}}\mathbf{x}^{v}$. The results of these setups are shown in Figures 2 and 3, respectively. We observe that in our algorithm $\bar{\mathbf{x}}$ moves completely into the feasible set, unlike DDPS, in which $\bar{\mathbf{x}}$ only approaches the feasible set. Moreover, our algorithm converges faster to the optimal consensus solution than DDPS, since DDPS needs a diminishing step size. To show that all nodes achieve consensus, the squared norm of the error between $\mathbf{x}^{v}$ at five random nodes and $\mathbf{x}^{0}$ is plotted in Fig 2c, where all nodes converge to one stopping point. As described in Section 2.3, $\sum_{v\in\mathcal{V}}\mathbf{g}^{v}$ should become equal to zero for an optimal solution. For this reason, the norm of this quantity is shown in Fig 2d; it approaches zero as the algorithm proceeds.

Figure 2: First setup results with $m=20$ and $M=10$. (a) Objective value; (b) feasibility gap; (c) consensus solution; (d) optimal solution. Local variables move to a consensus and optimal stopping point in DAGP, while they move to a sub-optimal point in DDPS.

Figure 3: Second setup results with $m=10$ and $M=20$. (a) Objective value; (b) feasibility gap. As DDPS has not converged to a point in the feasible set, it can achieve a smaller objective value.

In the first setup, $\mathbf{C}=[\mathbf{c}_{i}^{T}]\in\mathbb{R}^{M\times m}$ has a non-trivial null space, and the feasible set of its corresponding optimization problem is larger than that of the second setup.
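The synthetic objectives and constraints above can be set up in a few lines; a sketch under the same random generation scheme (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, M = 20, 10                          # first setup dimensions
a = rng.normal(size=(M, m))
b = rng.normal(size=M)
c = rng.normal(size=(M, m))
x_feas = rng.normal(size=m)            # choose d_v so x_feas satisfies every constraint,
d = c @ x_feas + np.abs(rng.normal(size=M))  # guaranteeing a non-empty intersection

def grad_f(v, x):
    """Gradient of f_v(x) = log(cosh(a_v^T x - b_v))  (Eq. 12)."""
    return np.tanh(a[v] @ x - b[v]) * a[v]

def proj_halfspace(v, x):
    """Orthogonal projection onto the half-space {x : c_v^T x - d_v <= 0}."""
    slack = c[v] @ x - d[v]
    if slack <= 0:
        return x
    return x - slack * c[v] / (c[v] @ c[v])
```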
The linear convergence rate of DAGP in Fig 2a can be attributed to a situation where the constraints are not active at the solution, and the algorithm attains the optimal value of the unconstrained version of the problem; the overall unconstrained objective function may then be strongly convex, explaining the faster convergence. On the other hand, in the second setup, some constraints are active, and the algorithm slows down in converging to a consensus and an optimal solution. In Fig 3a, DDPS achieves a smaller objective value because its local variables remain infeasible.

### 3.2 Logistic Regression

For the sake of comparison with other algorithms, we examine an unconstrained logistic regression problem. We consider the MNIST [13] dataset restricted to two digits to form a binary classification problem. We once again consider a random directed graph with $M=20$ nodes, as shown in Fig 1c. A total of $N_{s}=10000$ images are used for training the model, i.e. for minimizing the logistic loss function defined as $\min\limits_{\mathbf{w}\in\mathbb{R}^{784}}\;\;\sum\limits_{i=1}^{N_{s}}\log\left(1+\exp{(-y_{i}\mathbf{x}_{i}^{T}\mathbf{w})}\right)+\frac{\lambda}{2}\|\mathbf{w}\|_{2}^{2},$ (13) where $\left\{\mathbf{x}_{i},y_{i}\right\}_{i=1}^{N_{s}}\subseteq\mathbb{R}^{784}\times\{+1,-1\}$ is the set of training samples, and $\lambda$ is the regularization parameter, chosen as $\left.1\middle/N_{s}\right.$. We assume that the training samples are distributed in a balanced way among the $20$ nodes; as a result, each node holds $500$ training samples. The loss function $f_{v}$ at each node is the collection of terms in (13) associated with the samples of node $v$. The regularization term ensures that this problem is strongly convex, leading to a linear rate of convergence for all algorithms. We compare DAGP with Push-Pull and ADD-OPT. For all algorithms, the same fixed, appropriately chosen step size is used. Centralized gradient descent is used to determine the optimal value $f^{*}$, against which the objective values of all algorithms are compared. The results for the optimality gap, defined as $\sum_{v\in\mathcal{V}}f_{v}(\bar{\mathbf{x}})-f^{*}$, are shown in Fig 4. As observed, the difference between the convergence rates of the three algorithms is minimal. As discussed earlier, optimal step sizes are not used in this experiment due to the size of the problem. Nevertheless, for smaller experiments, we observe that DAGP and Push-Pull can utilize larger step sizes, while ADD-OPT fails to converge with a similar step size. We conclude that with optimally chosen step sizes, DAGP and Push-Pull behave similarly, and both outperform ADD-OPT. The practical applicability of our algorithm is immediate, as it is not only competitive on constrained problems, but also provably capable of solving unconstrained optimization problems without any modification to the algorithmic structure.

Figure 4: Convergence rate comparison of decentralized unconstrained algorithms over directed graphs. A fixed step size is used in all algorithms.

## 4 Conclusion

We introduced the Double Averaging Gradient Projection (DAGP) algorithm for solving decentralized optimization problems over directed graphs, with and without constraints. In contrast to the existing literature, DAGP allows for different constraints at each node in a directed graph.
Previous algorithms, such as DDPS [31], require a decreasing step size to solve similar constrained problems, whereas DAGP uses a constant step size by employing a gradient-tracking technique. By taking advantage of projection operators onto convex sets and double averaging, we prove an $\mathcal{O}\left(\left.1\middle/\sqrt{n}\right.\right)$ rate of convergence on constrained problems in a convex and smooth setting. Compared to previously proposed algorithms, DAGP is competitive with decentralized unconstrained directed optimization algorithms, such as ADD-OPT [32] and Push-Pull [24], and substantially outperforms previous decentralized constrained optimization algorithms such as DDPS.

## 5 Acknowledgements

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

## References

* [1] Mahnoosh Alizadeh, Xiao Li, Zhifang Wang, Anna Scaglione, and Ronald Melton. Demand-side management in the smart grid: Information processing for the power switch. IEEE Signal Processing Magazine, 29(5):55–67, 2012.
* [2] Juan Andrés Bazerque and Georgios B Giannakis. Distributed spectrum sensing for cognitive radio networks by exploiting sparsity. IEEE Transactions on Signal Processing, 58(3):1847–1862, 2009.
* [3] Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, and Mher Safaryan. On biased compression for distributed learning. arXiv preprint arXiv:2002.12410, 2020.
* [4] Christopher M Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
* [5] Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508–2530, 2006.
* [6] Bahman Gharesifard and Jorge Cortés. Distributed strategies for generating weight-balanced and doubly stochastic digraphs. European Journal of Control, 18(6):539–557, 2012.
* [7] Xiaohong Guan, Zhanbo Xu, and Qing-Shan Jia. Energy-efficient buildings facilitated by microgrid. IEEE Transactions on Smart Grid, 1(3):243–252, 2010.
* [8] Hadrien Hendrikx, Francis Bach, and Laurent Massoulié. An accelerated decentralized stochastic proximal algorithm for finite sums. arXiv preprint arXiv:1905.11394, 2019.
* [9] Hadrien Hendrikx, Francis Bach, and Laurent Massoulié. Dual-free stochastic decentralized optimization with variance reduction. arXiv, (NeurIPS):1–12, 2020.
* [10] Hadrien Hendrikx, Francis Bach, and Laurent Massoulié. An optimal algorithm for decentralized finite sum optimization. arXiv preprint arXiv:2005.10675, 2020.
* [11] David Kempe, Alin Dobra, and Johannes Gehrke. Gossip-based computation of aggregate information. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 482–491. IEEE, 2003.
* [12] Anastasia Koloskova, Sebastian U. Stich, and Martin Jaggi. Decentralized stochastic optimization and gossip algorithms with compressed communication. 36th International Conference on Machine Learning, ICML 2019, 2019-June:6088–6111, 2019.
* [13] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
* [14] Soomin Lee and Angelia Nedic. Distributed random projection algorithm for convex optimization. IEEE Journal of Selected Topics in Signal Processing, 7(2):221–229, 2013.
* [15] Ilan Lobel and Asuman Ozdaglar. Distributed subgradient methods for convex optimization over random networks. IEEE Transactions on Automatic Control, 56(6):1291–1306, 2010.
* [16] Qingguo Lü, Xiaofeng Liao, Huaqing Li, and Tingwen Huang.
A computation-efficient decentralized algorithm for composite constrained optimization. IEEE Transactions on Signal and Information Processing over Networks, 6:774–789, 2020.
* [17] Aryan Mokhtari and Alejandro Ribeiro. DSA: Decentralized double stochastic averaging gradient algorithm. Journal of Machine Learning Research, 17:1–35, 2016.
* [18] Angelia Nedic. Distributed gradient methods for convex machine learning problems in networks: Distributed optimization. IEEE Signal Processing Magazine, 37(3):92–101, 2020.
* [19] Angelia Nedić and Alex Olshevsky. Distributed optimization over time-varying directed graphs. IEEE Transactions on Automatic Control, 60(3):601–615, 2014.
* [20] Angelia Nedic, Alex Olshevsky, and Wei Shi. Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM Journal on Optimization, 27(4):2597–2633, 2017.
* [21] Angelia Nedic and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48–61, 2009.
* [22] Angelia Nedic, Asuman Ozdaglar, and Pablo A Parrilo. Constrained consensus and optimization in multi-agent networks. IEEE Transactions on Automatic Control, 55(4):922–938, 2010.
* [23] Dinh Hoa Nguyen, Tatsuo Narikiyo, and Michihiro Kawanishi. A distributed optimization method for optimal energy management in smart grid. In Research Trends and Challenges in Smart Grids. IntechOpen, 2019.
* [24] Shi Pu, Wei Shi, Jinming Xu, and Angelia Nedic. Push-pull gradient methods for distributed optimization in networks. IEEE Transactions on Automatic Control, 2020.
* [25] Guannan Qu and Na Li. Harnessing smoothness to accelerate distributed optimization. IEEE Transactions on Control of Network Systems, 5(3):1245–1260, 2018.
* [26] S Sundhar Ram, Angelia Nedić, and Venugopal V Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516–545, 2010.
* [27] Kevin Seaman, Francis Bach, Sebastien Bubeck, Yin Tat Lee, and Laurent Massoulie. Optimal algorithms for smooth and strongly convex distributed optimization in networks. 34th International Conference on Machine Learning, ICML 2017, 6:4630–4642, 2017.
* [28] Wei Shi, Qing Ling, Gang Wu, and Wotao Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 25(2):944–966, 2015.
* [29] Konstantinos I Tsianos, Sean Lawlor, and Michael G Rabbat. Push-sum distributed dual averaging for convex optimization. In 2012 IEEE 51st Conference on Decision and Control (CDC), pages 5453–5458. IEEE, 2012.
* [30] César A Uribe, Soomin Lee, Alexander Gasnikov, and Angelia Nedić. A dual approach for optimal algorithms in distributed optimization over networks. Optimization Methods and Software, pages 1–40, 2020.
* [31] Chenguang Xi and Usman A Khan. Distributed subgradient projection algorithm over directed graphs. IEEE Transactions on Automatic Control, 62(8):3986–3992, 2016.
* [32] Chenguang Xi, Ran Xin, and Usman A. Khan. ADD-OPT: Accelerated distributed directed optimization. IEEE Transactions on Automatic Control, 63(5):1329–1339, 2018.
* [33] Ran Xin, Soummya Kar, and Usman A Khan. Decentralized stochastic optimization and machine learning: A unified variance-reduction framework for robust performance and fast convergence. IEEE Signal Processing Magazine, 37(3):102–113, 2020.
* [34] Ran Xin and Usman A. Khan.
A linear algorithm for optimization over directed graphs with geometric convergence. IEEE Control Systems Letters, 2(3):313–318, 2018.
* [35] Ran Xin and Usman A Khan. A linear algorithm for optimization over directed graphs with geometric convergence. IEEE Control Systems Letters, 2(3):315–320, 2018.
* [36] Ran Xin, Shi Pu, Angelia Nedić, and Usman A Khan. A general framework for decentralized optimization with first-order methods. Proceedings of the IEEE, 108(11):1869–1889, 2020.

The notation of the main text is used similarly in the appendices, except that the iteration number is added to the $\mathbf{z}^{v}$ variable.

## Appendix A: Proof of Theorem 1

We start by presenting the following lemma.

###### Lemma 1. Let $\ker(\mathbf{W}^{T})=\ker(\mathbf{Q})$. Then, $\mathbf{Q}\mathbf{W}\mathbf{x}=\mathbf{0}$ if and only if $\mathbf{x}\in\ker(\mathbf{W})$.

###### Proof. The forward direction is trivial. For the backward direction, we can write $\displaystyle\mathbf{Q}\mathbf{W}\mathbf{x}=\mathbf{0}$ $\displaystyle\Rightarrow\mathbf{W}\mathbf{x}\in\ker(\mathbf{Q})$ $\displaystyle\Rightarrow\mathbf{W}\mathbf{x}\in\ker(\mathbf{W}^{T})$ $\displaystyle\Rightarrow\mathbf{W}^{T}\mathbf{W}\mathbf{x}=\mathbf{0}$ $\displaystyle\Rightarrow\mathbf{x}^{T}\mathbf{W}^{T}\mathbf{W}\mathbf{x}=0$ $\displaystyle\Rightarrow\|\mathbf{W}\mathbf{x}\|_{2}^{2}=0$ $\displaystyle\Rightarrow\mathbf{x}\in\ker(\mathbf{W})$ ∎

Now, consider an arbitrary stopping point of the algorithm, that is, $\mathbf{x}_{n+1}^{v}=\mathbf{x}^{v}_{n}=\mathbf{x}^{v}$, $\nabla\mathbf{f}_{n+1}=\nabla\mathbf{f}_{n}=\nabla\mathbf{f}$, $\mathbf{h}^{v}_{n+1}=\mathbf{h}^{v}_{n}=\mathbf{h}^{v}$ and $\mathbf{g}^{v}_{n+1}=\mathbf{g}^{v}_{n}=\mathbf{g}^{v}$. We have $\displaystyle\mathbf{Z}=\mathbf{X}-\mathbf{W}\mathbf{X}-\mu\left(\nabla\mathbf{f}-\mathbf{G}\right)$ (14) $\displaystyle\mathbf{X}=\mathbfcal{P}_{S}\left(\mathbf{Z}\right)$ (15) $\displaystyle\rho\left[\nabla\mathbf{f}-\mathbf{G}+\frac{1}{\mu}\left(\mathbf{Z}-\mathbf{X}\right)\right]+\alpha\left(\mathbf{H}-\mathbf{G}\right)=\mathbf{O}$ (16) $\displaystyle\mathbf{Q}\left(\mathbf{H}-\mathbf{G}\right)=\mathbf{O},$ (17) where $\mathbf{Z},\mathbf{G},\mathbf{H},\nabla\mathbf{f},\mathbfcal{P}_{S}$ are matrices with $\mathbf{z}^{v},\mathbf{g}^{v},\mathbf{h}^{v},\nabla f_{v}(\mathbf{x}^{v}),P_{S_{v}}(\mathbf{z}^{v})$ as their rows. Left multiplying (16) by $\mathbf{Q}$ and considering (17), we have $\mathbf{Q}(\mathbf{G}-\nabla\mathbf{f})=\frac{1}{\mu}\mathbf{Q}(\mathbf{Z}-\mathbf{X}).$ (18) Left multiplying (14) by $\mathbf{Q}$ and applying (18) leads to $\mathbf{Q}\mathbf{W}\mathbf{X}=\mathbf{0}$. Therefore, $\mathbf{X}\in\ker(\mathbf{W})=\mathrm{span}\{\mathbf{1}\}$ by the result of Lemma 1 applied to each column of $\mathbf{X}$, which means that $\mathbf{x}^{v}=\mathbf{x}^{*},\;\forall v\in\mathcal{V}$. As $\mathbf{X}\in\ker(\mathbf{W})$, (14) reduces to $\mathbf{Z}-\mathbf{X}=\mu(\mathbf{G}-\nabla\mathbf{f})$, which leads to $\mathbf{H}=\mathbf{G}$ by incorporating it into (16).
Since (6) is designed to preserve the summation of the $\mathbf{h}^{v}$s, and each row of $\mathbf{H}$ is initialized with the zero vector, we have $\mathbf{1}^{T}\mathbf{G}=\mathbf{1}^{T}\mathbf{H}=\sum\limits_{v\in\mathcal{V}}({\mathbf{h}^{v}})^{T}=\mathbf{0}^{T}.$ (19) From (15), we have $\mathbf{Z}-\mathbf{X}\in{\mbox{\boldmath$\partial$}}\mathbf{I}_{S}$; consequently, $\mu(\mathbf{G}-\nabla\mathbf{f})\in{\mbox{\boldmath$\partial$}}\mathbf{I}_{S}$. As ${\mbox{\boldmath$\partial$}}\mathbf{I}_{S_{v}}$ is a cone, and therefore invariant to scaling, we can write $(\mathbf{G}-\nabla\mathbf{f})\in{\mbox{\boldmath$\partial$}}\mathbf{I}_{S}$. Left multiplying by $\mathbf{1}^{T}$, moving all the terms to one side, and considering (19), we have $\mathbf{0}\in\sum\limits_{v\in\mathcal{V}}\left(\partial I_{S_{v}}(\mathbf{x}^{*})+\nabla f_{v}(\mathbf{x}^{*})\right),$ (20) which shows that $\mathbf{x}^{*}$ satisfies the optimality condition. ∎

## Appendix B: Proof of Theorem 2

We start by defining $F^{v}(\mathbf{x})=f_{v}(\mathbf{x})-f_{v}(\mathbf{x}^{*})-\langle\nabla f_{v}(\mathbf{x}^{*}),\mathbf{x}-\mathbf{x}^{*}\rangle$ (21) and $F^{v}_{n}=F^{v}(\mathbf{x}^{v}_{n}).$ (22) Note that from the convexity of $f_{v}$, the values of $F^{v}(\mathbf{x})$, and in particular $F^{v}_{n}$, are non-negative. From convexity, we also conclude that $\displaystyle F^{v}_{n}+\left\langle\nabla f_{v}(\mathbf{x}^{*})-\nabla f_{v}(\mathbf{x}^{v}_{n}),\mathbf{x}^{v}_{n}-\mathbf{x}^{*}\right\rangle=f_{v}(\mathbf{x}^{v}_{n})-f_{v}(\mathbf{x}^{*})+\left\langle\nabla f_{v}(\mathbf{x}^{v}_{n}),\mathbf{x}^{*}-\mathbf{x}^{v}_{n}\right\rangle\leq 0.$ (23) From the $L$-smoothness of $f_{v}$, we also obtain $\displaystyle F_{n+1}^{v}-F_{n}^{v}-\langle\nabla f_{v}(\mathbf{x}_{n}^{v})-\nabla f_{v}(\mathbf{x}^{*}),\mathbf{x}_{n+1}^{v}-\mathbf{x}^{v}_{n}\rangle=f_{v}(\mathbf{x}^{v}_{n+1})-f_{v}(\mathbf{x}^{v}_{n})-\langle\nabla f_{v}(\mathbf{x}_{n}^{v}),\mathbf{x}_{n+1}^{v}-\mathbf{x}^{v}_{n}\rangle\leq\frac{L}{2}\left\|\mathbf{x}_{n+1}^{v}-\mathbf{x}_{n}^{v}\right\|^{2}.$ (24) Adding (23) to (24) yields $F^{v}_{n+1}+\left\langle\nabla f_{v}(\mathbf{x}^{*})-\nabla f_{v}(\mathbf{x}^{v}_{n}),\ \mathbf{x}^{v}_{n+1}-\mathbf{x}^{*}\right\rangle-\frac{L}{2}\left\|\mathbf{x}_{n+1}^{v}-\mathbf{x}_{n}^{v}\right\|^{2}\leq 0.$ (25) Now, we define $T^{v}(\mathbf{x})=-\langle\mathbf{n}^{v},\mathbf{x}-\mathbf{x}^{*}\rangle,$ (26) and $T_{n}^{v}=T^{v}(\mathbf{x}^{v}_{n}),$ where $\mathbf{n}^{v}\in\partial I_{S_{v}}(\mathbf{x}^{*})$. The fact that $\mathbf{x}_{n+1}^{v}\in S_{v}$ yields $T^{v}_{n+1}\geq 0$.
Note that from $\mathbf{x}^{v}_{n+1}=P_{S_{v}}(\mathbf{z}^{v}_{n})$ and the fact that $\mathbf{x}^{*}\in S_{v}$, we have $\langle\mathbf{x}^{*}-\mathbf{x}^{v}_{n+1},\mathbf{z}_{n}^{v}-\mathbf{x}^{v}_{n+1}\rangle\leq 0,$ (27) which can also be written as $\mu T^{v}_{n+1}+\langle\mathbf{x}^{*}-\mathbf{x}^{v}_{n+1},\mathbf{z}_{n}^{v}-\mathbf{x}^{v}_{n+1}-\mu\mathbf{n}^{v}\rangle\leq 0.$ (28) Multiplying (25) by $\mu$, adding it to (28), plugging in the definition of $\mathbf{z}_{n}^{v}$, and summing over $v\in\mathcal{V}$ and $n=0,1,\ldots,N-1$, we obtain $\displaystyle\mu\sum\limits_{n\in[N],v}\left(F_{n+1}^{v}+T_{n+1}^{v}\right)-\frac{L\mu}{2}\sum\limits_{n\in[N],v}\left\|\mathbf{x}_{n+1}^{v}-\mathbf{x}_{n}^{v}\right\|^{2}+\sum\limits_{n\in[N],v}\Big{\langle}\mathbf{x}^{*}-\mathbf{x}^{v}_{n+1},\mathbf{x}_{n}^{v}-\sum\limits_{u}w_{vu}\mathbf{x}^{u}_{n}-\mathbf{x}^{v}_{n+1}+\mu(\mathbf{g}^{v}_{n}-\nabla f_{v}(\mathbf{x}^{*})-\mathbf{n}^{v})\Big{\rangle}\leq 0.$ (29) We also replace the expression of $\mathbf{z}^{v}_{n}$ in the dynamics of $\mathbf{g}^{v}_{n}$, leading to $\mathbf{g}_{n+1}^{v}=\mathbf{g}_{n}^{v}+\frac{\rho}{\mu}\left(\mathbf{x}_{n}^{v}-\sum\limits_{u}w_{vu}\mathbf{x}^{u}_{n}-\mathbf{x}^{v}_{n+1}\right)+\alpha\bm{\delta}_{n}^{v},$ (30) where $\bm{\delta}_{n}^{v}=\mathbf{h}_{n}^{v}-\mathbf{g}_{n}^{v}$, which follows the dynamics $\bm{\delta}_{n+1}^{v}=(1-\alpha)\bm{\delta}_{n}^{v}-\sum\limits_{u}q_{vu}\bm{\delta}^{u}_{n}-\frac{\rho}{\mu}\left(\mathbf{x}_{n}^{v}-\sum\limits_{u}w_{vu}\mathbf{x}^{u}_{n}-\mathbf{x}^{v}_{n+1}\right).$ (31) For simplicity, we define $\tilde{\mathbf{x}}_{n}^{v}=\mathbf{x}^{v}_{n}-\mathbf{x}^{*}$ and $\tilde{\mathbf{g}}^{v}_{n}=\mathbf{g}^{v}_{n}-\nabla f_{v}(\mathbf{x}^{*})-\mathbf{n}^{v}$, and rewrite (29), (30) and (31) as $\displaystyle\sum\limits_{n=0}^{N-1}\left(\mu\sum\limits_{v}\left(F_{n+1}^{v}+T_{n+1}^{v}\right)+\frac{\eta}{2}\sum\limits_{u,v}\|\mathbf{x}_{n+1}^{u}-\mathbf{x}_{n+1}^{v}\|^{2}\right)-\sum\limits_{n=0}^{N-1}\sum\limits_{v}\left\langle\tilde{\mathbf{x}}^{v}_{n+1},\tilde{\mathbf{x}}_{n}^{v}-\sum\limits_{u}w_{vu}\tilde{\mathbf{x}}^{u}_{n}-\tilde{\mathbf{x}}^{v}_{n+1}+\mu\tilde{\mathbf{g}}^{v}_{n}\right\rangle-\frac{L\mu}{2}\sum\limits_{n=0}^{N-1}\sum\limits_{v}\left\|\tilde{\mathbf{x}}_{n+1}^{v}-\tilde{\mathbf{x}}_{n}^{v}\right\|^{2}-\frac{\eta}{2}\sum\limits_{n=0}^{N-1}\sum\limits_{u,v}\|\tilde{\mathbf{x}}_{n+1}^{u}-\tilde{\mathbf{x}}_{n+1}^{v}\|^{2}\leq 0,$ (32) $\tilde{\mathbf{g}}_{n+1}^{v}=\tilde{\mathbf{g}}_{n}^{v}+\frac{\rho}{\mu}\left(\tilde{\mathbf{x}}_{n}^{v}-\sum\limits_{u}w_{vu}\tilde{\mathbf{x}}^{u}_{n}-\tilde{\mathbf{x}}^{v}_{n+1}\right)+\alpha\bm{\delta}_{n}^{v},$ (33) $\bm{\delta}_{n+1}^{v}=(1-\alpha)\bm{\delta}_{n}^{v}-\sum\limits_{u}q_{vu}\bm{\delta}^{u}_{n}-\frac{\rho}{\mu}\left(\tilde{\mathbf{x}}_{n}^{v}-\sum\limits_{u}w_{vu}\tilde{\mathbf{x}}^{u}_{n}-\tilde{\mathbf{x}}^{v}_{n+1}\right),$ (34) where in the first inequality we also add and subtract the term $\frac{\eta}{2}\sum\limits_{u,v}\|\mathbf{x}_{n+1}^{u}-\mathbf{x}_{n+1}^{v}\|^{2}$. In the following, we denote the last three summations in (32) by $A_{N}$. We show an asymptotic lower bound for $A_{N}$, i.e. we show that there exists a constant $C$, depending only on the initial values, such that for sufficiently large $N$, $A_{N}\geq-C$. Note that since $A_{N}\leq 0$, we must have $C\geq 0$.
Then, we conclude from (32) that $\sum\limits_{n=0}^{N-1}\Bigg{(}\mu\sum\limits_{v}\left(F_{n+1}^{v}+T_{n+1}^{v}\right)+\frac{\eta}{2}\sum\limits_{u,v}\|\mathbf{x}_{n+1}^{u}-\mathbf{x}_{n+1}^{v}\|^{2}\Bigg{)}\leq C.$ (35) Defining $\bar{\mathbf{x}}^{v}_{N}=\frac{1}{N}\sum\limits_{n=0}^{N-1}\mathbf{x}^{v}_{n}$ and noting that each term in the summation over $n$ is a fixed convex function of $\{\mathbf{x}_{n+1}^{v}\}_{v}$, we may invoke Jensen's inequality to conclude $\mu\sum\limits_{v}F^{v}(\bar{\mathbf{x}}^{v}_{N})+T^{v}(\bar{\mathbf{x}}^{v}_{N})+\frac{\eta}{2}\sum\limits_{u,v}\|\bar{\mathbf{x}}_{N}^{u}-\bar{\mathbf{x}}_{N}^{v}\|^{2}\leq\frac{C}{N}.$ (36) Defining $\bar{\mathbf{x}}_{N}=\frac{1}{M}\sum\limits_{v}\bar{\mathbf{x}}^{v}_{N}$, we conclude that $\|\bar{\mathbf{x}}_{N}-\bar{\mathbf{x}}^{u}_{N}\|^{2}=O(\frac{1}{N})$. Since $\bar{\mathbf{x}}^{u}_{N}\in S_{u}$, we also conclude that $\mathrm{dist}^{2}(\bar{\mathbf{x}}_{N},S_{u})=O(\frac{1}{N})$. Finally, $\displaystyle\left|\sum\limits_{v}f_{v}(\bar{\mathbf{x}}^{v}_{N})-\sum\limits_{v}f_{v}(\mathbf{x}^{*})\right|\leq\frac{C}{\mu N}+\sum\limits_{v}\left|\langle\mathbf{n}^{v}+\nabla f_{v}(\mathbf{x}^{*}),\bar{\mathbf{x}}_{N}^{v}-\bar{\mathbf{x}}_{N}\rangle\right|\leq\frac{C}{\mu N}+\sqrt{\sum\limits_{v}\|\mathbf{n}^{v}+\nabla f_{v}(\mathbf{x}^{*})\|^{2}}\sqrt{\sum\limits_{v}\|\bar{\mathbf{x}}_{N}^{v}-\bar{\mathbf{x}}_{N}\|^{2}}=O(\frac{1}{\sqrt{N}}),$ which completes the proof. ∎

## Bound on $A_{N}$

To find the bound $C$, we start by simplifying the notation in (32), (33) and (34). Let us introduce $\bm{\Psi}_{n}=\left[\begin{array}{cccc}\tilde{\mathbf{X}}_{n+1}&\tilde{\mathbf{X}}_{n}&\tilde{\mathbf{G}}_{n}&\bm{\Delta}_{n}\end{array}\right]^{T},$ (37) where $\tilde{\mathbf{X}}_{n},\tilde{\mathbf{G}}_{n},\bm{\Delta}_{n}$ are matrices with $\tilde{\mathbf{x}}^{v}_{n},\tilde{\mathbf{g}}^{v}_{n},\bm{\delta}^{v}_{n}$ as their $v^{\text{th}}$ rows, respectively. We may write (33) and (34) as $\bm{\Psi}_{n+1}=\mathbf{R}\bm{\Psi}_{n}+\mathbf{P}\tilde{\mathbf{X}}_{n+2},\qquad n=0,\dots,N-2,$ (38) where $\mathbf{P}$ and $\mathbf{R}$ are defined as $\mathbf{R}=\left[\begin{array}{cccc}\mathbf{O}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{I}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ -\frac{\rho}{\mu}\mathbf{I}&\frac{\rho}{\mu}(\mathbf{I}-\mathbf{W})&\mathbf{I}&\alpha\mathbf{I}\\ \frac{\rho}{\mu}\mathbf{I}&-\frac{\rho}{\mu}(\mathbf{I}-\mathbf{W})&\mathbf{O}&(1-\alpha)\mathbf{I}-\mathbf{Q}\end{array}\right],\qquad\mathbf{P}=\left[\begin{array}{c}\mathbf{I}\\ \mathbf{O}\\ \mathbf{O}\\ \mathbf{O}\end{array}\right].$ (39) We also have $A_{N}=\sum\limits_{n=0}^{N-1}\left\langle\bm{\Psi}_{n},\mathbf{S}\bm{\Psi}_{n}\right\rangle,$ (40) where $\mathbf{S}$ is computed as $\mathbf{S}=\left[\begin{array}{cccc}\left(1-\frac{L\mu}{2}\right)\mathbf{I}-M\eta\left(\mathbf{I}-\frac{1}{M}\mathbf{1}\mathbf{1}^{T}\right)&-\frac{1}{2}(\mathbf{I}-\mathbf{W})+\frac{L\mu}{2}\mathbf{I}&-\frac{\mu}{2}\mathbf{I}&\mathbf{O}\\ -\frac{1}{2}(\mathbf{I}-\mathbf{W}^{T})+\frac{L\mu}{2}\mathbf{I}&-\frac{L\mu}{2}\mathbf{I}&\mathbf{O}&\mathbf{O}\\ -\frac{\mu}{2}\mathbf{I}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\mathbf{O}&\mathbf{O}\end{array}\right].$ (41) The following lemmas guarantee that $A_{N}$ is bounded.

###### Lemma 2. Consider the matrices $\mathbf{R},\mathbf{P}$ and $\mathbf{S}$ defined in (39) and (41).
Define a "dual" sequence $\{\bm{\Lambda}_{n}\}_{n=-1}^{N-1}$ such that $\bm{\Lambda}_{N-1}=\bm{\Lambda}_{-1}=\mathbf{O}$. Suppose that there exists a $C\geq 0$ such that, for every $\beta>0$, the system of equations in (38), together with $\displaystyle\bm{\Lambda}_{n-1}-\mathbf{R}^{T}\bm{\Lambda}_{n}+(\mathbf{S}+\left(C+\beta\right)\delta_{n,0}\mathbf{I})\bm{\Psi}_{n}=\mathbf{O},\qquad n=0,1,\ldots,N-1$ (42) $\displaystyle\mathbf{P}^{T}\bm{\Lambda}_{n}+\beta\tilde{\mathbf{X}}_{n+2}=\mathbf{O},\qquad n=0,1,\ldots,N-2$ (43) has no non-zero solution for $\{\bm{\Psi}_{n},\bm{\Lambda}_{n},\tilde{\mathbf{X}}_{n+2}\}$. Then, $A_{N}\geq-C\|\bm{\Psi}_{0}\|_{\mathrm{F}}^{2}$ always holds.

###### Proof. Note that the claim is equivalent to the statement that zero is the optimal value of the optimization problem $\displaystyle\min\limits_{\{\bm{\Psi}_{n}\}_{n=0}^{N-1},\{\tilde{\mathbf{X}}_{n+2}\}_{n=0}^{N-2}}\quad\frac{1}{2}\sum\limits_{n=0}^{N-1}\langle\bm{\Psi}_{n},\mathbf{S}\bm{\Psi}_{n}\rangle+\frac{C}{2}\|\bm{\Psi}_{0}\|^{2}_{\mathrm{F}}$ subject to $\displaystyle\quad\bm{\Psi}_{n+1}=\mathbf{R}\bm{\Psi}_{n}+\mathbf{P}\tilde{\mathbf{X}}_{n+2},\quad n=0,1,\ldots,N-2.$ (44) If the claim does not hold, this optimization is unbounded, and the following restricted optimization achieves a strictly negative optimal value at a non-zero solution: $\displaystyle\min\limits_{\{\bm{\Psi}_{n}\}_{n=0}^{N-1},\{\tilde{\mathbf{X}}_{n+2}\}_{n=0}^{N-2}}\quad\frac{1}{2}\sum\limits_{n=0}^{N-1}\langle\bm{\Psi}_{n},\mathbf{S}\bm{\Psi}_{n}\rangle+\frac{C}{2}\|\bm{\Psi}_{0}\|^{2}_{\mathrm{F}}$ subject to $\displaystyle\quad\bm{\Psi}_{n+1}=\mathbf{R}\bm{\Psi}_{n}+\mathbf{P}\tilde{\mathbf{X}}_{n+2},\quad n=0,1,\ldots,N-2$ $\displaystyle\quad\frac{1}{2}\|\bm{\Psi}_{0}\|_{\mathrm{F}}^{2}+\frac{1}{2}\sum\limits_{n=0}^{N-2}\|\tilde{\mathbf{X}}_{n+2}\|_{\mathrm{F}}^{2}\leq\frac{1}{2}$ (45) Such a solution satisfies the KKT conditions, which coincide with (42), (43), where $\{\bm{\Lambda}_{n}\}$ and $\beta\geq 0$ are the dual (Lagrangian) multipliers corresponding to the constraints. We also observe that the optimal value at this point is given by $-\beta\left(\|\bm{\Psi}_{0}\|_{\mathrm{F}}^{2}+\sum\limits_{n=0}^{N-2}\|\tilde{\mathbf{X}}_{n+2}\|_{\mathrm{F}}^{2}\right)$, which shows that $\beta>0$. This contradicts the assumption that no such non-zero point exists, and completes the proof. ∎

We may further simplify the conditions in Lemma 2 by the following result:

###### Lemma 3. For a complex value $z$ and a real value $\beta$, define $\mathbf{F}(z,\beta)=\left[\begin{array}{ccc}\mathbf{S}&z^{-1}\mathbf{I}-\mathbf{R}^{T}&\mathbf{O}\\ z\mathbf{I}-\mathbf{R}&\mathbf{O}&-\mathbf{P}\\ \mathbf{O}&-\mathbf{P}^{T}&-\beta\mathbf{I}\end{array}\right].$ (46) The condition of Lemma 2 is satisfied if the matrix $\lim\limits_{z\to 0}\left[\begin{array}{ccc}\mathbf{I}&\mathbf{O}&\mathbf{O}\end{array}\right]\mathbf{F}^{-1}\left[\begin{array}{c}-(C+\beta)\mathbf{I}\\ \mathbf{I}\\ \mathbf{O}\end{array}\right]$ (47) does not have an eigenvalue equal to $1$.

###### Proof.
With an abuse of notation, define the z-transforms $\bm{\Psi}(z)=\sum\limits_{n=0}^{N-1}\bm{\Psi}_{n}z^{-n},\quad\bm{\Lambda}(z)=\sum\limits_{n=0}^{N-2}\bm{\Lambda}_{n}z^{-n},\quad\mathbf{U}(z)=\sum\limits_{n=0}^{N-2}\tilde{\mathbf{X}}_{n+2}z^{-n}.$ Then, for the sequences defined in (42), (43) and (38), we have $(z^{-1}\mathbf{I}-\mathbf{R}^{T})\bm{\Lambda}(z)+\mathbf{S}\bm{\Psi}(z)+(C+\beta)\bm{\Psi}_{0}=\mathbf{O},$ (48) $(z\mathbf{I}-\mathbf{R})\bm{\Psi}(z)-\bm{\Psi}_{0}+\mathbf{R}\bm{\Psi}_{N-1}z^{N-1}-\mathbf{P}\mathbf{U}(z)=\mathbf{O},$ (49) $\mathbf{P}^{T}\bm{\Lambda}(z)+\beta\mathbf{U}(z)=\mathbf{O},$ (50) which can also be written as $\mathbf{F}(z,\beta)\left[\begin{array}{c}\bm{\Psi}(z)\\ \bm{\Lambda}(z)\\ \mathbf{U}(z)\end{array}\right]=\left[\begin{array}{c}-(C+\beta)\bm{\Psi}_{0}\\ \bm{\Psi}_{0}-\mathbf{R}\bm{\Psi}_{N-1}z^{N-1}\\ \mathbf{O}\end{array}\right].$ (51) Note that $\mathbf{F}(z,\beta)$ may be rank-deficient only at a finite number of points. Hence, there exists a sufficiently small simple loop $\mathcal{C}$ around $z=0$ such that $\mathbf{F}$ is invertible on and inside it, except possibly at $z=0$. In this region we have $\bm{\Psi}(z)=\left[\begin{array}{ccc}\mathbf{I}&\mathbf{O}&\mathbf{O}\end{array}\right]\mathbf{F}^{-1}\mathbf{A},$ (52) where $\mathbf{A}=\left[\begin{array}{c}-(C+\beta)\mathbf{I}\\ \mathbf{I}\\ \mathbf{O}\end{array}\right]\bm{\Psi}_{0}+\left[\begin{array}{c}\mathbf{O}\\ -\mathbf{R}\\ \mathbf{O}\end{array}\right]z^{N-1}\bm{\Psi}_{N-1}.$ On the other hand, $2\pi j\bm{\Psi}_{0}=\oint\limits_{\mathcal{C}}\frac{1}{z}\bm{\Psi}(z)\,{\text{d}}z.$ (53) Further, from the Cauchy integral formula, for sufficiently large $N$ we have $\oint\limits_{\mathcal{C}}\frac{1}{z}z^{N-1}\mathbf{F}^{-1}(z,\beta)\,{\text{d}}z=2\pi j\lim\limits_{z\to 0}z^{N-1}\mathbf{F}^{-1}(z,\beta)=\mathbf{O}.$ (54) Applying (52) to (53) and considering the relation in (54), we conclude that $\bm{\Psi}_{0}=\lim\limits_{z\to 0}\left[\begin{array}{ccc}\mathbf{I}&\mathbf{O}&\mathbf{O}\end{array}\right]\mathbf{F}^{-1}\left[\begin{array}{c}-(C+\beta)\mathbf{I}\\ \mathbf{I}\\ \mathbf{O}\end{array}\right]\bm{\Psi}_{0}.$ (55) By the assumption, the limiting matrix does not have an eigenvalue equal to $1$, so we conclude that $\bm{\Psi}_{0}=\mathbf{O}$, which completes the proof. ∎
# Observation of Berry curvature in non-Hermitian system from far-field radiation

Xuefan Yin1,2†, Ye Chen2†, Xiaoyu Zhang2, Zixuan Zhang2, Susumu Noda1, Chao Peng2,3∗

###### Abstract

Berry curvature, which describes the local geometrical properties of energy bands, can elucidate many fascinating phenomena in solid-state, photonic, and phononic systems, given its connection to global topological invariants such as the Chern number. Despite its significance, the observation of Berry curvature poses a substantial challenge, since the wavefunctions are deeply embedded within the system. Here, we theoretically propose a correspondence between the geometry of far-field radiation and the underlying band topology of non-Hermitian systems, thus providing a general method to fully capture the Berry curvature without strongly disturbing the eigenstates. We further experimentally observe the Berry curvature in a honeycomb photonic crystal slab from polarimetry measurements and quantitatively obtain the non-trivial valley Chern number. Our work reveals the feasibility of retrieving the bulk band topology from escaping photons and paves the way to exploring intriguing topological landscapes in non-Hermitian systems.

1. Department of Electronic Science and Engineering, Kyoto University, Kyoto-Daigaku-Katsura, Nishikyo-ku, Kyoto 615-8510, Japan
2. State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, & Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing, 100871, China
3. Peng Cheng Laboratory, Shenzhen 518055, China

†These authors contributed equally to this work
∗To whom correspondence should be addressed; E-mail<EMAIL_ADDRESS>

Topology, namely the mathematics of properties conserved under continuous deformations, is creating a range of new opportunities across condensed matter, photonics, phononics, and other wave systems [?,?,?,?,?,?]. To characterize topology in physics, the Berry curvature [?,?] is an essential concept that describes the gauge-invariant, local, geometric manifestation of the wavefunctions in parameter space, and it is closely related to global topological invariants such as the various Chern numbers [?,?,?,?,?]. However, since the Berry curvature is an intrinsic topological property of the wavefunctions, it is usually deeply buried inside the system and difficult to observe. Although tomography can reconstruct the wavefunctions in some particular scenarios [?,?], much effort has been devoted to retrieving the Berry curvature from its external consequences in physics. Examples include Hall drift in driven optical lattices [?,?,?] or synthetic gauge fields [?,?], Aharonov-Bohm interference of magnetically controlled ultracold atoms [?,?,?,?], and pseudospin [?,?,?,?,?,?] or dichroism [?,?] in exciton-polariton-correlated materials. Even though the specific physics varies, the above observations of Berry curvature generally rely on strong light-matter interaction to imprint the topological features of the bulk wavefunction onto external observables, and thus they fall into the class of "strong measurements" [?], in which the observation strongly interferes with the system. In comparison, a method of measuring the Berry curvature without significantly disturbing the eigenstates [?] has remained absent. Recall that non-Hermitian photonic systems [?,?,?,?,?] necessarily lose photons.
The escaping photons, namely the far-field radiation, naturally carry information about the wavefunctions, thus allowing direct access to the intrinsic bulk topology that would conventionally be thought inaccessible. The escaping photons simply act as "messengers" that weakly interact with the system, yet they can bridge the band topology and the radiation topology to enable direct observation of the Berry curvature from the far field. In recent years, radiation geometry [?,?,?,?,?,?,?], which concerns the non-trivial geometric structures of far-field polarization, has attracted much attention because it can give rise to interesting physical consequences such as polarization half-charges around paired exceptional points [?,?], vortex beams [?], chiral devices with circular dichroism [?,?], bound states in the continuum (BICs) [?,?,?,?,?,?,?,?,?], and unidirectional guided resonances (UGRs) [?,?,?]. However, whether the geometric features in the radiation originate from the band topology, and how to retrieve the Berry curvature from the far-field radiation, remain elusive questions. Here we theoretically establish a correspondence between band topology and radiation geometry, and experimentally observe the Berry curvature by characterizing the escaping photons from a non-Hermitian photonic crystal (PhC) slab. Specifically, we prove that a full tomography of the Berry curvature can be realized by measuring a number of radiation channels, while for a two-level system, a single radiation channel is sufficient. Accordingly, we experimentally observe the nontrivial Berry curvatures originating from diabolic points (DPs) in a honeycomb-lattice PhC slab by using a polarimetry measurement [?]. Berry phases [?,?,?] of $\gamma\sim\pm\pi$ are obtained in an individual valley by integrating the observed Berry curvature, yielding nontrivial valley Chern numbers [?,?] of $C_{v}^{\mathcal{K}}\sim\pm 1/2$ as expected, which quantitatively validates our method. The theory and measurement also clarify that the band topology manifests only in the "left-right" curvature [?] defined upon a bi-orthogonal basis of a non-Hermitian system, while a "right-right" curvature [?] represents the geometry of the radiation itself and connects to the Pancharatnam-Berry (PB) phase [?,?,?,?] of the far-field polarization.

The bulk-radiation correspondence of Berry curvature — We start from a PhC slab operating in the radiation continuum, as schematically illustrated in Fig. 1A, in which the $n$th photonic eigenstate $|\psi_{n}\rangle$ of a Hamiltonian $\hat{H}$ resides in the continuum. The eigenstate radiates towards specific directions owing to diffraction by the periodically modulated permittivity, each direction giving rise to a radiation vector $|\Psi_{n}\rangle$ in the far field. In a given diffraction direction, the radiation vector $|\Psi_{n}\rangle$ can be described by the polarization vector field $|\Psi_{n}\rangle=[c_{x;n},\,c_{y;n}]^{T}$, where $c_{x,y;n}$ are the complex-valued electric-field components in the $x$ and $y$ directions, respectively. Components in the $s$ and $p$ directions are discussed in the Supplementary Materials [?]. Consequently, the radiation process can be understood as a linear mapping $\mathcal{P}:|\psi_{n}\rangle\mapsto|\Psi_{n}\rangle=\hat{P}|\psi_{n}\rangle$, showing a direct connection between the bulk wavefunction and its radiation far field, governed by a projection matrix denoted as $\hat{P}$.
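If $\hat{P}$ is invertible, the mapped pair $|\Psi_{n}\rangle=\hat{P}|\psi_{n}\rangle$ and $|\Phi_{n}\rangle=(\hat{P}^{-1})^{\dagger}|\phi_{n}\rangle$, with $|\phi_{n}\rangle$ the left eigenvector introduced below, remains bi-orthogonal and diagonalizes the projected Hamiltonian $\hat{P}\hat{H}\hat{P}^{-1}$. A minimal numerical check in Python, with randomly chosen, purely illustrative $\hat{H}$ and $\hat{P}$:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # non-Hermitian Hamiltonian
P = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # invertible projection

w, R = np.linalg.eig(H)                  # columns: right eigenvectors psi_n
L = np.linalg.inv(R).conj().T            # columns: left eigenvectors phi_n
Psi = P @ R                              # mapped right vectors
Phi = np.linalg.inv(P).conj().T @ L      # mapped left vectors

Hr = P @ H @ np.linalg.inv(P)
print(np.allclose(Phi.conj().T @ Psi, np.eye(2)))   # bi-orthogonality preserved
print(np.allclose(Hr @ Psi, Psi @ np.diag(w)))      # Psi diagonalizes H^r
```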
It is well known that the Berry curvature of bulk bands can be calculated from the wavefunctions as $B_{n}=i\nabla\times\langle\phi_{n}|\nabla\psi_{n}\rangle$ (bottom panel, Fig. 1B), with $|\phi_{n}\rangle$ denoting the left vector of $|\psi_{n}\rangle$. Since our system is intrinsically non-Hermitian due to the existence of radiation losses, $|\psi_{n}\rangle$ and $|\phi_{n}\rangle$ are bi-orthogonal to each other. If $\hat{P}$ is an invertible matrix, we find that $|\Psi_{n}\rangle$ and the accompanying left vector $|\Phi_{n}\rangle=(\hat{P}^{-1})^{\dagger}|\phi_{n}\rangle$ also form a bi-orthogonal basis for the non-Hermitian Hamiltonian $\hat{H}^{r}=\hat{P}\hat{H}\hat{P}^{-1}$, leading to another Berry curvature, $B_{n}^{r}=i\nabla\times\langle\Phi_{n}|\nabla\Psi_{n}\rangle$, defined on the radiation field (top panel, Fig. 1B). We further prove [?] that, in the case where the matrix $\hat{P}$ is smooth ($\nabla\hat{P}\sim 0$) and does not give rise to extra vortices, such as BICs, that carry vanishing amplitudes $\langle\Psi_{n}|\Psi_{n}\rangle=0$, the system follows a simple correspondence between the band topology and the radiation topology, given by: $\displaystyle B_{n}^{r}\approx B_{n}$ (1) Intuitively, $\langle\Phi_{n}|\nabla\Psi_{n}\rangle=\langle\phi_{n}|\nabla\psi_{n}\rangle+\langle\phi_{n}|\hat{P}^{-1}(\nabla\hat{P})|\psi_{n}\rangle$, so the correction term vanishes when $\nabla\hat{P}\approx 0$. The above equation reveals that the escaping photons act as "messengers" that project the bulk Berry curvature onto the far field through the matrix $\hat{P}$ (middle panel, Fig. 1B). Although the wavefunctions $|\psi_{n}\rangle$ and $|\phi_{n}\rangle$ are difficult to access, belonging to the near-field features of the eigenstates, the radiation vectors $|\Psi_{n}\rangle$ and $|\Phi_{n}\rangle$ are directly observable and can be characterized by standard optical measurements. In theory, the radiation in a particular diffraction direction gives a perspective projection of the wavefunctions, so it is worth discussing whether the projection is complete. For a general system with $N$ internal degrees of freedom (DOFs), $\hat{P}$ is a $2\times N$ matrix. Since one diffraction direction can only characterize two DOFs ($[c_{x;n},c_{y;n}]^{T}$), we may need to measure multiple radiation channels simultaneously, i.e., observe the same object from different views, to fully capture the information about the bulk wavefunctions [?]. As a specific case, if the system is a simple two-level one with $N=2$, measuring only one radiation channel provides sufficient DOFs to make the projection of the wavefunctions complete. Namely, we can invert the projection to directly determine the bulk wavefunctions from the far-field radiation if $\hat{P}$ is a non-singular $2\times 2$ matrix. To elaborate on the correspondence, we consider a two-dimensional (2D) PhC slab of Si$_3$N$_4$ with circular air-hole patterns on a honeycomb lattice (Fig. 2A). The lattice constant and slab thickness are denoted $a$ and $h$, respectively, giving the reciprocal lattice shown in Fig. 2B. The grey shaded area denotes the first Brillouin zone (BZ). According to the Bloch theorem, the bulk wavefunctions can be depicted as a superposition of a series of quasi-plane-waves with discrete momenta, represented by the dots in the reciprocal lattice; we refer to them as "diffraction orders" [?]. Around the second $\mathcal{K}$ point, which resides in the continuum, several diffraction orders fall into the light cone and thus open radiation channels.
We take the $\mathcal{K}_{1}$ point at $(-4\pi/3a,0)$ as a specific example; there exist three radiation channels $C_{1-3}$ that cause the non-Hermiticity (red arrows, Fig. 2B). Their in-plane momenta at the $\mathcal{K}_{1}$ point are $\beta_{1}=\sqrt{3}\beta_{0}/3\,\hat{x}$, $\beta_{2}=-\sqrt{3}\beta_{0}/6\,\hat{x}-\beta_{0}/2\,\hat{y}$ and $\beta_{3}=-\sqrt{3}\beta_{0}/6\,\hat{x}+\beta_{0}/2\,\hat{y}$, respectively, with $\beta_{0}=4\pi/\sqrt{3}a$. Three transverse-electric (TE) polarized modes can be found around the $\mathcal{K}_{1}$ point, which we label TE$_{A,B,C}$. We assume the adjacent air holes have different radii, $r_{1}$ and $r_{2}$. For $\delta_{r}=r_{2}-r_{1}=0$, which preserves the $C_{6v}$ symmetry (left panel, Fig. 2C), a two-fold degeneracy of TE$_{A}$ and TE$_{B}$ arises right at the $\mathcal{K}_{1}$ point. On the contrary, if $r_{1}\neq r_{2}$ (right panel, Fig. 2D), the $C_{6v}$ symmetry degrades into $C_{3v}$ symmetry, which lifts the degeneracy. When the in-plane symmetry breaking is sufficiently weak that coupling to the TE$_{C}$ mode is negligible, the TE$_{A}$ and TE$_{B}$ modes form a two-level system near the $\mathcal{K}_{1}$ point, described by the Hamiltonian: $\displaystyle\hat{H}=\omega+\delta\sigma_{z}+\eta k_{x}\sigma_{x}+\eta k_{y}\sigma_{y}$ (2) in which $\omega$ is the degenerate frequency at the $\mathcal{K}_{1}$ point; $\delta$ is related to the in-plane asymmetry; $\eta$ is the group velocity; and $k_{x}$, $k_{y}$ are dimensionless numbers describing the momentum deviation from the $\mathcal{K}_{1}$ point, $\mathbf{k}=k_{x}\beta_{0}\hat{x}+k_{y}\beta_{0}\hat{y}$. Here we only consider the radiation loss and omit material dissipation or gain, so the non-Hermiticity of $\hat{H}$ is represented by the complex degenerate frequency $\omega=\omega_{r}+i\gamma_{0}$, where $\gamma_{0}$ is the radiation decay rate. When the $C_{6v}$ symmetry is preserved ($\delta=0$), the eigenvectors can be derived as $|\psi_{n}\rangle=[1,\pm|\eta|e^{i\theta}/\eta]^{T}$, where $e^{i\theta}=(k_{x}+ik_{y})/|\mathbf{k}|$, creating a diabolic point at the $\mathcal{K}_{1}$ point, which is exactly the non-Hermitian counterpart of the Dirac point in the Hermitian case; we still denote it as a DP for short, without confusion. At the DP, the far-field polarizations of the TE$_{A,B}$ bands are ill-defined, since they can be mixed with arbitrary weights. Correspondingly, polarization vortices can be found in momentum space, each carrying a half-integer topological charge (Fig. 2E), a geometric feature of the DP in the far-field radiation. Once the symmetry breaking lifts the degeneracy, the polarization vortices degrade to a meron and anti-meron configuration [?] with circular polarizations (CPs) of opposite helicities around the $\mathcal{K}_{1}$ point (Fig. 2F) for the TE$_{A,B}$ modes, respectively, which serves as an observable signature to validate the theory. Taking the degeneracy lifting into account, the closed form of the theoretical Berry curvature in such a two-level system follows: $\displaystyle B_{n;t}=\pm\frac{4\delta\eta^{2}}{(4\delta^{2}+4\eta^{2}|\mathbf{k}|^{2})^{3/2}}$ (3) where the subscript "$t$" distinguishes $B_{n;t}$ from $B_{n}$, with the latter taking TE$_{C}$ into account; the signs "$\pm$" denote the two bands $n=A,B$, respectively. Since the TE$_{C}$ mode can be neglected in our case, we have $B_{n}\approx B_{n;t}$.
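As a consistency check on Eq. 3, the Berry curvature of the two-level Hamiltonian in Eq. 2 can be evaluated numerically from a small plaquette of eigenvectors (the gauge-invariant Fukui-Hatsugai link method); the constant $\omega$ drops out since it does not affect the eigenvectors. The parameter values below are illustrative, and the overall sign of the plaquette estimate depends on band and orientation conventions, so only magnitudes are compared:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
delta, eta = 0.02, 1.0        # illustrative asymmetry and group velocity

def band_state(kx, ky, band=0):
    # Eigenvector of delta*sz + eta*kx*sx + eta*ky*sy (Eq. 2 without omega*I)
    w, v = np.linalg.eigh(delta * sz + eta * kx * sx + eta * ky * sy)
    return v[:, band]          # band=0: lower band, band=1: upper band

def berry_curvature(kx, ky, dk=1e-4, band=0):
    # Plaquette (Fukui-Hatsugai) estimate of B(kx, ky); gauge invariant
    corners = [(kx, ky), (kx + dk, ky), (kx + dk, ky + dk), (kx, ky + dk)]
    psi = [band_state(*c, band) for c in corners]
    link = np.prod([np.vdot(psi[i], psi[(i + 1) % 4]) for i in range(4)])
    return np.angle(link) / dk**2

k = 0.03
print(abs(berry_curvature(k, 0.0)))                                     # numerical
print(4 * delta * eta**2 / (4 * delta**2 + 4 * eta**2 * k**2) ** 1.5)   # |Eq. 3|
```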
For a $C_{6}$-symmetric system ($\delta=0$), the Berry curvature behaves as a $\delta$-function peaked at the $\mathcal{K}_{1}$ point, while for $\delta\neq 0$, the degeneracy is lifted and a topologically nontrivial bandgap opens, leading to a nontrivial Berry curvature in the vicinity of the $\mathcal{K}_{1}$ point that gives rise to a non-zero valley Chern number. As stated above, such a nontrivial Berry curvature can be directly observed from the far-field radiation by employing the bulk-radiation correspondence of Eq. 1. Specifically, the radiation vector $|\Psi_{n}\rangle$ corresponds exactly to the far-field polarization: $\vec{S}_{n}(k_{x},k_{y})=[s_{1},s_{2},s_{3}]^{T}/s_{0}=\langle\Psi_{n}|\hat{\mathbf{\sigma}}|\Psi_{n}\rangle/s_{0}$, where $\hat{\mathbf{\sigma}}=[\hat{\sigma}_{z},\hat{\sigma}_{x},\hat{\sigma}_{y}]^{T}$ are the Pauli matrices and $\vec{S}_{n}$ is the Stokes vector on the Poincaré sphere, which can be measured using standard polarimetry. Besides, the left radiation vector $|\Phi_{n}\rangle$ of $|\Psi_{n}\rangle$ can be determined from the bi-orthogonal normalization relation $\langle\Phi_{m}|\Psi_{n}\rangle=\delta_{mn}$. As a result, we can observe the intrinsic band Berry curvature $B_{n;t}$ directly by measuring $B_{n}^{r}$.

Experimental observation of Berry curvature — To experimentally observe the Berry curvature, we first fabricate the PhC sample using e-beam lithography (EBL) and inductively coupled plasma (ICP) etching on a Si$_3$N$_4$ slab of thickness $h=180$ nm on a silica substrate (see Methods for details). The air holes are arranged in a honeycomb lattice with $a=440$ nm and two slightly different hole radii, $r_{1}=50$ and $r_{2}=54$ nm, as shown in the scanning electron microscope (SEM) images in Fig. 2A. The angle-resolved measurement system is schematically illustrated in Fig. 3A, in which a supercontinuum white-light source is first sent through an acousto-optic tunable filter (AOTF) and then linearly polarized by POL1 to generate incoherent light in the wavelength range from $550$ nm to $580$ nm. After passing through a quarter-wave plate (QWP1), the light is focused by a lens (L1) onto the rear focal plane (RFP) of an infinity-corrected objective lens ($NA=0.95$) and then illuminates the sample to excite the optical modes. POL1 and QWP1 are used to adjust the incident polarization for better excitation. The radiation from the PhC sample is collected by the same objective lens and imaged by a charge-coupled device (CCD) camera that is co-focused with the RFP of the objective lens. By inserting another polarizer (POL2) and another quarter-wave plate (QWP2) before the CCD, we can fully characterize the Stokes vector of the radiation through a polarimetry method. As shown in Fig. 3B, we find three scattered beams in the aperture of the objective lens, which correspond to the radiation channels $C_{1-3}$ plotted in Fig. 2B. To achieve the best excitation, we fine-tune the incident angle by moving the L1 lens in the $x$-$y$ plane to illuminate the PhC sample through channel $C_{1}$. According to Fig. 2F, POL1 and QWP1 set the incident polarization to left-handed circularly polarized (LCP) light to excite the TE$_{A}$ mode and right-handed circularly polarized (RCP) light to excite the TE$_{B}$ mode, respectively.
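The Stokes vector itself follows from a standard six-projection polarimetry reduction of the CCD intensity images; a minimal pixel-wise sketch (array names are illustrative, and the component ordering matches the $[\langle\hat{\sigma}_{z}\rangle,\langle\hat{\sigma}_{x}\rangle,\langle\hat{\sigma}_{y}\rangle]$ convention used above):

```python
import numpy as np

def stokes_from_polarimetry(I_H, I_V, I_45, I_135, I_RCP, I_LCP):
    """Standard six-measurement Stokes reduction, applied pixel-wise to the
    momentum-space CCD images. Inputs are intensity arrays recorded behind
    the analyzer (POL2/QWP2) set to horizontal, vertical, +/-45 deg linear,
    and right/left circular projections."""
    s0 = I_H + I_V
    s1 = (I_H - I_V) / s0
    s2 = (I_45 - I_135) / s0
    s3 = (I_RCP - I_LCP) / s0
    return np.stack([s1, s2, s3])   # normalized Stokes vector per pixel
```

The normalized Stokes vector fixes the polarization state $[c_{x;n}, c_{y;n}]^{T}$ up to an overall phase, which is sufficient for the curvature analysis below.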
When the AOTF selects a specific wavelength, scattering off fabrication disorder makes the iso-frequency contour at that wavelength visible in every radiation channel, owing to the on-resonance pumping mechanism [?,?,?] (Fig. 3C). Accordingly, we observe the $C_{3}$ channel, which is far from the directly reflected light, and apply a cascaded $4f$ system to zoom in on the far-field pattern at a magnification of $\times 6$. Further, by recording CCD images at particular settings of POL2 and QWP2 as polarimetry measurements, we can decompose the radiation into its Stokes vector. The result at the wavelength $\lambda_{i}=570.328$ nm is presented in Fig. 3D, in which the dashed line is the iso-frequency contour calculated from numerical simulation for visual guidance. By overlapping all the iso-frequency contours in the wavelength ranges $559.383\sim 564.861$ nm for the TE$_{A}$ band and $565.203\sim 571.352$ nm for the TE$_{B}$ band, we obtain the polarization vector field in momentum space for both TE$_{A,B}$ bands (Fig. 3E). In the vicinity of the $\mathcal{K}_{1}$ point, an LCP and an RCP point can be found on the TE$_{A}$ and TE$_{B}$ bands, respectively (red circles, Fig. 3E), in good agreement with our theoretical prediction shown in Fig. 2F. The Berry curvature $B^{r}_{n}$ defined on the far-field radiation can be directly obtained from the full polarization vector field. Here we consider four samples with radius differences $\delta_{r}=0,\,4,\,7,\,10$ nm and measure $B_{A,B}^{r}$ of each sample, respectively (top panels, Fig. 4). Accordingly, we calculate the numerical Berry curvatures $B_{A,B}$ by employing the semi-analytical coupled-wave theory (CWT) framework [?,?,?] for comparison (bottom panels, Fig. 4). To better show the evolution of the Berry curvatures, we plot the unit-cell geometry of each sample and the corresponding TE$_{A,B}$ band structures as insets in the top and bottom panels of Fig. 4. Specifically, we start from a realistic sample with $\delta_{r}=0$ (Fig. 4A), where fabrication imperfections inevitably break the DP degeneracy at the $\mathcal{K}_{1}$ point and give rise to a very small bandgap; we estimate that this bandgap is equivalent to the case of $\delta_{r}=2$ nm. In this case, we find that $B_{A,B}^{r}$ appear as bright spots centered at the $\mathcal{K}_{1}$ point — quite like $\delta$-functions (top panels, Fig. 4A), agreeing well with the numerical results $B_{A,B}$ (bottom panels, Fig. 4A). Note that the signs of $B_{A}^{r}$ and $B_{B}^{r}$ are exactly opposite, in agreement with the theoretical prediction in Eq. 3. Further, we gradually open the bandgap by increasing $\delta_{r}$ from $4$ nm to $10$ nm (Fig. 4B). During this process, the Berry curvatures $B_{B}^{r}$ and $B_{B}$ gradually spread over a larger region in momentum space while their peak absolute values decrease. At $\delta_{r}=10$ nm, the bandgap becomes quite large, and both $B_{n}^{r}$ and $B_{n}$ become fully dispersed, no longer congregating around the $\mathcal{K}_{1}$ point. $B_{A}^{r}$ and $B_{A}$ also match each other well and show similar behavior (see Methods). The excellent agreement between the observed Berry curvatures $B_{A,B}^{r}$ and the numerical Berry curvatures $B_{A,B}$ validates the bulk-radiation correspondence of band topology we propose in Eq. 1. It is noteworthy that Berry curvatures are generally complex-valued in a non-Hermitian system.
For our PhC slab, in which only radiation contributes to the non-Hermiticity, the imaginary parts of the Berry curvature are quite small compared to the real parts [?]. To quantitatively validate the correspondence of Berry curvature, we calculate the geometric (Berry) phases $\gamma_{n}^{r}$ and $\gamma_{n}$ by applying 2D integrals over the measured and numerical Berry curvatures $B_{n}^{r}$ and $B_{n}$ in Fig. 4, respectively. As a reference, we also derive a closed form of the theoretical geometric phase $\gamma_{n;t}$ in the two-level model, without the contribution of TEC, according to Eq. 3 as $\displaystyle\gamma_{n;t}=\pm\left(\pi-\frac{2\delta}{\sqrt{4\delta^{2}+4\eta^{2}k_{s}^{2}}}\pi\right)$ (4) According to valleytronics [?], the integral over an individual valley region determines the valley Chern number (blue shading, Fig. 5A). Considering that the nontrivial Berry curvatures congregate around the $\mathcal{K}_{1}$ point, we choose a circular integration region with radius $k_{s}=0.03$ for simplicity and calculate $\gamma_{n}^{r}$ (circles), $\gamma_{n}$ (triangles), and $\gamma_{n;t}$ (solid lines), shown in Fig. 5B and C. According to Eq. 4, the closed-form geometric phase $\gamma_{n;t}$ is exactly $\pm\pi$ at $\delta_{r}=0$ for the TEA,B bands, respectively [?], corresponding to the nontrivial, quantized valley Chern number of $C_{v}^{\mathcal{K}}=\pm 1/2$ in an individual valley [?]. When $\delta_{r}\neq 0$, the open bandgap (nonzero $\delta$) makes the geometric phases deviate from $\pm\pi$, unless the integration region $k_{s}$ tends to infinity [?]. Such behavior is verified by our experimental observations. Specifically, the three geometric phases $\gamma_{n}^{r}$, $\gamma_{n}$, and $\gamma_{n;t}$ of both TEA,B bands agree well with each other, quantitatively confirming the validity of the bulk-radiation correspondence we propose in Eq. 2. We also find $\gamma_{A,B}^{r}\approx\gamma_{A,B}\approx\gamma_{A,B;t}\approx\pm 0.6\pi$ at $\delta_{r}=10$ nm, clearly showing the impact of a nonzero bandgap. Moreover, when $\delta_{r}$ is considerably large, we notice that the theoretical phase $\gamma_{n;t}$ from the two-level model slightly deviates from the numerical phase $\gamma_{n}$, because the influence of the TEC band is neglected in the derivation of $\gamma_{n;t}$. A discussion of measuring Berry curvature in a system with more degrees of freedom ($N>2$) is presented in the Supplementary materials [?]. In the experiment described above, we obtain the left vector $|\Phi_{n}\rangle$ by using the bi-orthogonal normalization relation $\langle\Phi_{n}|\Psi_{n}\rangle=1$. However, the left vector can also be measured directly. By reciprocity, the left vector at the $\mathbf{k}$ point corresponds to the right vector at the $-\mathbf{k}$ point, namely $|\Phi_{n}(\mathbf{k})\rangle=|\Psi^{*}_{n}(-\mathbf{k})\rangle$ [?]. Therefore, we can also observe one band at both the $\mathbf{k}$ and $-\mathbf{k}$ points to retrieve the system's topology, instead of measuring two bands simultaneously. We also emphasize that the radiation Berry curvature $B_{n}^{r}$ directly corresponds to the bulk topology of $B_{n}$ only when the projection matrix $\hat{P}$ does not introduce extra geometric phases, which is the case in our experiment ($\nabla\hat{P}\approx 0$ around the $\mathcal{K}$ point).
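The comparison of Fig. 5 can be mirrored numerically: integrate a curvature map over the disk of radius $k_{s}=0.03$ around $\mathcal{K}_{1}$ and compare with the closed form of Eq. 4. A minimal sketch, assuming the grid axes are centered on $\mathcal{K}_{1}$ and that $\delta$ and $\eta$ are the two-level model parameters:

```python
import numpy as np

def valley_berry_phase(B, kx, ky, k_s=0.03):
    """Integrate a Berry-curvature map over a disk of radius k_s about K1.

    B: curvature on a regular grid (shape (len(ky), len(kx)));
    kx, ky: 1D momentum axes measured relative to the K1 point.
    """
    KX, KY = np.meshgrid(kx, ky)
    mask = KX**2 + KY**2 <= k_s**2            # circular valley region
    dk = (kx[1] - kx[0]) * (ky[1] - ky[0])    # grid cell area
    return np.sum(B[mask]) * dk

def gamma_theory(delta, eta, k_s=0.03, sign=+1):
    """Closed-form two-level geometric phase of Eq. 4; +/-1 selects the band."""
    return sign * (np.pi - 2 * delta / np.sqrt(4 * delta**2 + 4 * eta**2 * k_s**2) * np.pi)
```

As a check, `gamma_theory(0.0, eta)` returns $\pm\pi$ for any $\eta$, reproducing the quantized valley Chern number of $\pm 1/2$ at $\delta_{r}=0$.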
We note that, other than the aforementioned “left-right” curvature $B_{n}^{r}$, which depicts the bulk band topology, we can also define a “right-right” curvature $B_{n;rr}^{r}=i\nabla\times\langle\Psi_{n}|\nabla\Psi_{n}\rangle$ to capture the geometric features of the polarization field itself. The integral over the “right-right” curvature directly gives the PB phase of the far-field polarization, showing the swirling structure of the Stokes vector $\vec{S}_{n}$. For instance, as shown in Fig. 3E, we can find meron and anti-meron features that originate from the DP. Such nontrivial polarization features do not directly correspond to the valley Chern number defined on the bulk Berry curvature (“left-right” curvature). Instead, they are related to the Skyrmion number [?,?,?] associated with the PB phase and the “right-right” curvature (see Methods section for details). Conclusion — In summary, our findings on the “bulk-radiation correspondence” of Berry curvature reveal the feasibility of retrieving band topology by characterizing escaping photons in the far-field radiation. We prove in theory and demonstrate in experiment that characterizing the polarizations of one radiation channel captures a complete map of the eigenstates of a two-level non-Hermitian system, giving direct access to the Berry curvature and Chern number without strongly disturbing the system. The proposed method can be extended to multi-level systems by measuring more radiation channels simultaneously [?], and can be used to extract other topological features such as the quantum geometric tensor [?,?,?,?,?,?]. Our work demonstrates a simple and effective way of directly observing the Berry curvature in non-Hermitian systems and could thus shed light on the exploration of intriguing phases in topological systems. ## References * 1. Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. _Rev. Mod. Phys._ 82, 3045 (2010). * 2. Xiao, D., Chang, M.-C. & Niu, Q. Berry phase effects on electronic properties. _Rev. Mod. Phys._ 82, 1959 (2010). * 3. Lu, L., Joannopoulos, J. D. & Soljačić, M. Topological photonics. _Nature Photon._ 8, 821–829 (2014). * 4. Khanikaev, A. B. & Shvets, G. Two-dimensional topological photonics. _Nature Photon._ 11, 763–773 (2017). * 5. Ozawa, T. _et al._ Topological photonics. _Rev. Mod. Phys._ 91, 015006 (2019). * 6. Bergholtz, E. J., Budich, J. C. & Kunst, F. K. Exceptional topology of non-Hermitian systems. _Rev. Mod. Phys._ 93, 015005 (2021). * 7. Berry, M. V. Quantal phase factors accompanying adiabatic changes. _Proc. R. Soc. A_ 392, 45–57 (1984). * 8. Haldane, F. D. M. Berry curvature on the Fermi surface: anomalous Hall effect as a topological Fermi-liquid property. _Phys. Rev. Lett._ 93, 206602 (2004). * 9. Thouless, D. J., Kohmoto, M., Nightingale, M. P. & den Nijs, M. Quantized Hall conductance in a two-dimensional periodic potential. _Phys. Rev. Lett._ 49, 405 (1982). * 10. Sheng, D., Weng, Z., Sheng, L. & Haldane, F. Quantum spin-Hall effect and topologically invariant Chern numbers. _Phys. Rev. Lett._ 97, 036808 (2006). * 11. Xiao, D., Yao, W. & Niu, Q. Valley-contrasting physics in graphene: magnetic moment and topological transport. _Phys. Rev. Lett._ 99, 236809 (2007). * 12. Zhang, F., Jung, J., Fiete, G. A., Niu, Q. & MacDonald, A. H. Spontaneous quantum Hall states in chirally stacked few-layer graphene systems. _Phys. Rev. Lett._ 106, 156801 (2011). * 13. Zhang, F., MacDonald, A. H. & Mele, E. J. Valley Chern numbers and boundary modes in gapped bilayer graphene. _Proc.
Natl. Acad. Sci. U.S.A._ 110, 10546–10551 (2013). * 14. Hauke, P., Lewenstein, M. & Eckardt, A. Tomography of band insulators from quench dynamics. _Phys. Rev. Lett._ 113, 045303 (2014). * 15. Fläschner, N. _et al._ Experimental reconstruction of the Berry curvature in a Floquet Bloch band. _Science_ 352, 1091–1094 (2016). * 16. Price, H. M. & Cooper, N. Mapping the Berry curvature from semiclassical dynamics in optical lattices. _Phys. Rev. A_ 85, 033620 (2012). * 17. Jotzu, G. _et al._ Experimental realization of the topological Haldane model with ultracold fermions. _Nature_ 515, 237–240 (2014). * 18. Aidelsburger, M. _et al._ Measuring the Chern number of Hofstadter bands with ultracold bosonic atoms. _Nat. Phys._ 11, 162–166 (2015). * 19. Ozawa, T. & Carusotto, I. Anomalous and quantum Hall effects in lossy photonic lattices. _Phys. Rev. Lett._ 112, 133902 (2014). * 20. Wimmer, M., Price, H. M., Carusotto, I. & Peschel, U. Experimental measurement of the Berry curvature from anomalous transport. _Nat. Phys._ 13, 545–550 (2017). * 21. Abanin, D. A., Kitagawa, T., Bloch, I. & Demler, E. Interferometric approach to measuring band topology in 2D optical lattices. _Phys. Rev. Lett._ 110, 165304 (2013). * 22. Atala, M. _et al._ Direct measurement of the Zak phase in topological Bloch bands. _Nat. Phys._ 9, 795–800 (2013). * 23. Duca, L. _et al._ An Aharonov-Bohm interferometer for determining Bloch band topology. _Science_ 347, 288–292 (2015). * 24. Li, T. _et al._ Bloch state tomography using Wilson lines. _Science_ 352, 1094–1097 (2016). * 25. Bleu, O., Solnyshkov, D. & Malpuech, G. Measuring the quantum geometric tensor in two-dimensional photonic and exciton-polariton systems. _Phys. Rev. B_ 97, 195422 (2018). * 26. Gianfrate, A. _et al._ Measurement of the quantum geometric tensor and of the anomalous Hall drift. _Nature_ 578, 381–385 (2020). * 27. Ren, J. _et al._ Nontrivial band geometry in an optically active system. _Nat. Commun._ 12, 689 (2021). * 28. Liao, Q. _et al._ Experimental measurement of the divergent quantum metric of an exceptional point. _Phys. Rev. Lett._ 127, 107402 (2021). * 29. Polimeno, L. _et al._ Tuning of the Berry curvature in 2D perovskite polaritons. _Nat. Nanotech._ 16, 1349–1354 (2021). * 30. Łempicka-Mirek, K. _et al._ Electrically tunable Berry curvature and strong light-matter coupling in liquid crystal microcavities with 2D perovskite. _Sci. Adv._ 8, eabq7533 (2022). * 31. Wu, S. _et al._ Electrical tuning of valley magnetic moment through symmetry control in bilayer MoS$_2$. _Nat. Phys._ 9, 149–153 (2013). * 32. Cho, S. _et al._ Experimental observation of hidden Berry curvature in inversion-symmetric bulk 2H-WSe$_2$. _Phys. Rev. Lett._ 121, 186401 (2018). * 33. Vallone, G. & Dequal, D. Strong measurements give a better direct measurement of the quantum wave function. _Phys. Rev. Lett._ 116, 040502 (2016). * 34. Dressel, J., Malik, M., Miatto, F. M., Jordan, A. N. & Boyd, R. W. Colloquium: Understanding quantum weak values: Basics and applications. _Rev. Mod. Phys._ 86, 307 (2014). * 35. Feng, L., El-Ganainy, R. & Ge, L. Non-Hermitian photonics based on parity–time symmetry. _Nat. Photonics_ 11, 752–762 (2017). * 36. Leykam, D., Bliokh, K. Y., Huang, C., Chong, Y. D. & Nori, F. Edge modes, degeneracies, and topological numbers in non-Hermitian systems. _Phys. Rev. Lett._ 118, 040401 (2017). * 37. El-Ganainy, R. _et al._ Non-Hermitian physics and PT symmetry. _Nat. Phys._ 14, 11–19 (2018). * 38. Shen, H., Zhen, B. & Fu, L.
Topological band theory for non-Hermitian Hamiltonians. _Phys. Rev. Lett._ 120, 146402 (2018). * 39. Zhen, B., Hsu, C. W., Lu, L., Stone, A. D. & Soljačić, M. Topological nature of optical bound states in the continuum. _Phys. Rev. Lett._ 113, 257401 (2014). * 40. Doeleman, H. M., Monticone, F., den Hollander, W., Alù, A. & Koenderink, A. F. Experimental observation of a polarization vortex at an optical bound state in the continuum. _Nat. Photonics_ 12, 397–401 (2018). * 41. Zhang, Y. _et al._ Observation of polarization vortices in momentum space. _Phys. Rev. Lett._ 120, 186103 (2018). * 42. Chen, W., Chen, Y. & Liu, W. Singularities and Poincaré indices of electromagnetic multipoles. _Phys. Rev. Lett._ 122, 153907 (2019). * 43. Yin, X. & Peng, C. Manipulating light radiation from a topological perspective. _Photonics Res._ 8, B25–B38 (2020). * 44. Liu, W., Liu, W., Shi, L. & Kivshar, Y. Topological polarization singularities in metaphotonics. _Nanophotonics_ 10, 1469–1486 (2021). * 45. Che, Z. _et al._ Polarization singularities of photonic quasicrystals in momentum space. _Phys. Rev. Lett._ 127, 043901 (2021). * 46. Zhou, H. _et al._ Observation of bulk Fermi arc and polarization half charge from paired exceptional points. _Science_ 359, 1009–1012 (2018). * 47. Chen, W., Yang, Q., Chen, Y. & Liu, W. Evolution and global charge conservation for polarization singularities emerging from non-Hermitian degeneracies. _Proc. Natl. Acad. Sci. U.S.A._ 118, e2019578118 (2021). * 48. Huang, C. _et al._ Ultrafast control of vortex microlasers. _Science_ 367, 1018–1021 (2020). * 49. Zhang, X., Liu, Y., Han, J., Kivshar, Y. & Song, Q. Chiral emission from resonant metasurfaces. _Science_ 377, 1215–1218 (2022). * 50. Chen, Y. _et al._ Observation of intrinsic chiral bound states in the continuum. _Nature_ 613, 474–478 (2023). * 51. von Neumann, J. & Wigner, E. Über merkwürdige diskrete Eigenwerte. Über das Verhalten von Eigenwerten bei adiabatischen Prozessen. _Physikalische Zeitschrift_ 30, 467–470 (1929). * 52. Friedrich, H. & Wintgen, D. Interfering resonances and bound states in the continuum. _Phys. Rev. A_ 32, 3231–3242 (1985). * 53. Hsu, C. W. _et al._ Observation of trapped light within the radiation continuum. _Nature_ 499, 188–191 (2013). * 54. Yang, Y., Peng, C., Liang, Y., Li, Z. & Noda, S. Analytical perspective for bound states in the continuum in photonic crystal slabs. _Phys. Rev. Lett._ 113, 037401 (2014). * 55. Hsu, C. W., Zhen, B., Stone, A. D., Joannopoulos, J. D. & Soljačić, M. Bound states in the continuum. _Nat. Rev. Mater._ 1, 16048 (2016). * 56. Jin, J. _et al._ Topologically enabled ultrahigh-Q guided resonances robust to out-of-plane scattering. _Nature_ 574, 501–504 (2019). * 57. Sadreev, A. F. Interference traps waves in an open system: bound states in the continuum. _Rep. Prog. Phys._ 84, 055901 (2021). * 58. Kang, M., Zhang, S., Xiao, M. & Xu, H. Merging bound states in the continuum at off-high-symmetry points. _Phys. Rev. Lett._ 126, 117402 (2021). * 59. Hu, P. _et al._ Global phase diagram of bound states in the continuum. _Optica_ 9, 1353–1361 (2022). * 60. Yin, X., Jin, J., Soljačić, M., Peng, C. & Zhen, B. Observation of topologically enabled unidirectional guided resonances. _Nature_ 580, 467–471 (2020). * 61. Zeng, Y., Hu, G., Liu, K., Tang, Z. & Qiu, C.-W. Dynamics of topological polarization singularity in momentum space. _Phys. Rev. Lett._ 127, 176101 (2021). * 62. Yin, X., Inoue, T., Peng, C. & Noda, S.
Topological unidirectional guided resonances emerged from interband coupling. _Phys. Rev. Lett._ 130, 056401 (2023). * 63. McMaster, W. H. Polarization and the Stokes parameters. _Am. J. Phys._ 22, 351–362 (1954). * 64. Simon, B. Holonomy, the quantum adiabatic theorem, and Berry's phase. _Phys. Rev. Lett._ 51, 2167 (1983). * 65. Pancharatnam, S. Generalized theory of interference, and its applications: Part I. Coherent pencils. In _Proc. Indian Acad. Sci. A_, vol. 44, 247–262 (1956). * 66. Berry, M. V. The adiabatic phase and Pancharatnam's phase for polarized light. _J. Mod. Opt._ 34, 1401–1407 (1987). * 67. Lee, Y.-H. _et al._ Recent progress in Pancharatnam-Berry phase optical elements and the applications for virtual/augmented realities. _Opt. Data Process. Storage_ 3, 79–88 (2017). * 68. Xie, X. _et al._ Generalized Pancharatnam-Berry phase in rotationally symmetric meta-atoms. _Phys. Rev. Lett._ 126, 183902 (2021). * 69. Supplementary materials on Science Online. * 70. Kogelnik, H. & Shank, C. V. Coupled-wave theory of distributed feedback lasers. _J. Appl. Phys._ 43, 2327–2335 (1972). * 71. Guo, C., Xiao, M., Guo, Y., Yuan, L. & Fan, S. Meron spin textures in momentum space. _Phys. Rev. Lett._ 124, 106103 (2020). * 72. Regan, E. C. _et al._ Direct imaging of isofrequency contours in photonic structures. _Sci. Adv._ 2, e1601591 (2016). * 73. Liang, Y., Peng, C., Sakai, K., Iwahashi, S. & Noda, S. Three-dimensional coupled-wave model for square-lattice photonic crystal lasers with transverse electric polarization: A general approach. _Phys. Rev. B_ 84, 195119 (2011). * 74. Peng, C., Liang, Y., Sakai, K., Iwahashi, S. & Noda, S. Three-dimensional coupled-wave theory analysis of a centered-rectangular lattice photonic crystal laser with a transverse-electric-like mode. _Phys. Rev. B_ 86, 035108 (2012). * 75. Zhang, Y., Tan, Y.-W., Stormer, H. L. & Kim, P. Experimental observation of the quantum Hall effect and Berry's phase in graphene. _Nature_ 438, 201–204 (2005). * 76. Skyrme, T. Particle states of a quantized meson field. _Proc. Math. Phys. Eng._ 262, 237–245 (1961). * 77. Skyrme, T. H. R. A unified field theory of mesons and baryons. _Nucl. Phys._ 31, 556–569 (1962). * 78. Fert, A., Reyren, N. & Cros, V. Magnetic skyrmions: advances in physics and potential applications. _Nat. Rev. Mater._ 2, 1–15 (2017). * 79. Provost, J. & Vallee, G. Riemannian structure on manifolds of quantum states. _Commun. Math. Phys._ 76, 289–301 (1980). * 80. Anandan, J. & Aharonov, Y. Geometry of quantum evolution. _Phys. Rev. Lett._ 65, 1697 (1990). Figure 1: Correspondence between bulk band topology and far-field radiation. (A) Schematic of the radiation process from the PhC slab to the far field in real space. The wavefunction of the optical eigenmodes $|\psi_{n}\rangle$ in the PhC slab is diffracted by the periodic lattice into several specific diffraction directions $C_{1-3}$, which act as the radiation channels. For one channel (e.g. $C_{3}$), the radiation vector $|\Psi_{n}\rangle$ can be defined from the polarization of the diffracted wave, marked by the spiral arrows. (B) The “bulk-radiation correspondence” of Berry curvature in momentum space. The radiation polarization field (middle panel) bridges the Berry curvature $B_{n}$ defined on the wavefunction $|\psi_{n}\rangle$ (bottom panel) with the Berry curvature $B_{n}^{r}$ defined on the far-field radiation vector $|\Psi_{n}\rangle$ (top panel). $c_{x,y}$ are the complex amplitudes of the radiated waves in the $x$-$y$ plane.
Figure 2: Demonstration of Berry curvature observation on a honeycomb-lattice PhC slab. (A) SEM image of the fabricated PhC sample, showing a honeycomb lattice patterned in a Si$_3$N$_4$ slab on a SiO$_2$ substrate with two different hole radii. Inset: side view of an air hole. The structural parameters are $a=440$ nm, $r_{1}=50$ nm, $r_{2}=54$ nm, and $h=180$ nm; $\delta_{r}$ is defined as $r_{2}-r_{1}$. (B) The reciprocal lattice of the PhC sample. Grey shaded area: the first BZ; purple dot: the $\Gamma$ point; blue dot: the $\mathcal{K}_{1}$ point; red vectors: the three diffraction orders acting as radiation channels $C_{1-3}$; green vectors: non-radiative basic diffraction orders. (C, D) The band structures of the PhC around the $\mathcal{K}_{1}$ point with ($\delta_{r}=0$) and without ($\delta_{r}=4$ nm) inversion symmetry. Owing to the $C_{6}$ symmetry at $\delta_{r}=0$, the TEA mode (purple sheet) and TEB mode (blue sheet) are degenerate at the $\mathcal{K}_{1}$ point, giving rise to a diabolic point. When $\delta_{r}\neq 0$, the $C_{6}$ symmetry degrades to $C_{3}$ symmetry, and the DP splits to open a non-trivial bandgap between the TEA,B modes. (E, F) The polarization fields in momentum space of the TEA (purple, top panels) and TEB (blue, bottom panels) modes around the $\mathcal{K}_{1}$ point with ($\delta_{r}=0$) or without ($\delta_{r}=4$ nm) inversion symmetry, respectively. For $\delta_{r}=0$, which maintains the $C_{6}$ symmetry, half charges emerge at the $\mathcal{K}_{1}$ point due to the DP for both TEA,B modes. When $\delta_{r}\neq 0$, two CPs with opposite handedness emerge around the $\mathcal{K}_{1}$ point instead. Black dot: DP; red marks: the quasi-CPs. All data are calculated by numerical simulation (COMSOL Multiphysics). Figure 3: Polarimetry measurement of far-field polarization fields. (A) Schematic of the measurement setup. AOTF: acousto-optic tunable filter; POL1 and POL2: polarizers; QWP1 and QWP2: quarter-wave plates; L1: convex lens with $20$ cm focal length; BS: beam splitter; RFP: rear focal plane; Obj: objective lens with NA of 0.95 and working distance of 150 $\mu$m; $4f$: lens system with magnification of $\times 6$. (B) The observed image of three scattered beams from radiation channels $C_{1-3}$ within the NA range. In the experiment, we excite the optical modes through channel $C_{1}$ by moving lens L1 to a proper position, and then collect the diffracted light in channel $C_{3}$ after it is magnified by the $4f$ system. (C) Schematic of the isofrequency contours of the two-level system near the $\mathcal{K}_{1}$ point. The yellow plane denotes the wavelength $\lambda_{i}=570.328$ nm. Purple sheet: TEA mode; blue sheet: TEB mode. (D) The measured isofrequency contour $S_{0}$ and Stokes parameters $S_{1-3}$ at the wavelength $\lambda_{i}=570.328$ nm. The Stokes parameters are determined through different configurations of POL2 and QWP2. Dashed line: the simulated isofrequency contour at $\lambda_{i}$. (E) The measured polarization distributions in momentum space around the $\mathcal{K}_{1}$ point, obtained by overlapping several isofrequency contours and evaluating the overall Stokes parameters. Red marks: the quasi-CPs. Figure 4: Experimental observation of Berry curvatures. (A) The measured Berry curvatures $B_{n}^{r}$ from far-field radiation for the PhC sample with $\delta_{r}\approx 0$ (top panels) and, for comparison, the numerically calculated (semi-analytical CWT) Berry curvatures $B_{n}$ with $\delta_{r}=2$ nm (bottom panels).
For a realistic sample with $\delta_{r}=0$, fabrication errors slightly lift the DP degeneracy at the $\mathcal{K}_{1}$ point to create a small bandgap; we estimate that this bandgap is equivalent to the case of $\delta_{r}=2$ nm. In this case, the Berry curvatures congregate around the $\mathcal{K}_{1}$ point since the bandgap is very small, showing opposite signs for the TEA and TEB modes. (B) The measured Berry curvatures $B_{B}^{r}$ from far-field radiation (top panels) and the numerically calculated bulk Berry curvatures $B_{B}$ (bottom panels) for the TEB mode with $\delta_{r}=4$ nm (left), $7$ nm (middle), and $10$ nm (right). As $\delta_{r}$ increases, the bandgap gradually opens and the Berry curvature gradually spreads over a larger region of momentum space. Insets in top panels: SEM images of the unit cell of each PhC sample; insets in bottom panels: the corresponding band structures of the two-level system. Figure 5: Geometric phases obtained from measured Berry curvatures. (A) Schematic of an individual valley (blue shading) around the $\mathcal{K}_{1}$ point (blue dot) in the reciprocal lattice. The integral of the Berry curvature over the individual valley gives the geometric (Berry) phase. Considering that the Berry curvatures congregate around the $\mathcal{K}_{1}$ point when $\delta_{r}$ is relatively small, we perform the integral over a circular region (black circle) to simplify the calculation. Grey shading: the first BZ; purple dot: the $\Gamma$ point. (B, C) The geometric phases for the TEA,B modes. Blue solid line: theoretical Berry phase $\gamma_{n;t}$ of the two-level model according to Eq. 4; red circles: numerical Berry phase $\gamma_{n}$ obtained from the integral of the Berry curvatures $B_{n}$ calculated by the CWT; yellow triangles: geometric phase $\gamma_{n}^{r}$ obtained from the measured Berry curvature $B_{n}^{r}$ shown in Fig. 4. When $\delta_{r}=0$, the Berry phases are exactly $\pm\pi$ for the TEA,B modes owing to the existence of the DP, corresponding to quantized valley Chern numbers of $\pm 1/2$. As $\delta_{r}$ gradually increases, the calculated and measured Berry phases both gradually deviate from the quantized $\pm\pi$. Moreover, when $\delta_{r}$ becomes relatively large, the theoretical $\gamma_{n;t}$ slightly deviates from the numerical $\gamma_{n}$, due to the impact of the TEC mode.
# A VERITAS/Breakthrough Listen Search for Optical Technosignatures A. Acharyya Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA C. B. Adams Physics Department, Columbia University, New York, NY 10027, USA A. Archer Department of Physics and Astronomy, DePauw University, Greencastle, IN 46135-0037, USA P. Bangale Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA P. Batista DESY, Platanenallee 6, 15738 Zeuthen, Germany W. Benbow Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA A. Brill N.A.S.A./Goddard Space-Flight Center, Code 661, Greenbelt, MD 20771, USA M. Capasso Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027, USA M. Errando Department of Physics, Washington University, St. Louis, MO 63130, USA A. Falcone Department of Astronomy and Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA Q. Feng Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA J. P. Finley Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA G. M. Foote Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA L. Fortson School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA A. Furniss Department of Physics, California State University - East Bay, Hayward, CA 94542, USA S. Griffin WIPAC and Department of Physics, University of Wisconsin-Madison, Madison, WI 53703, USA W. Hanlon Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA D. Hanna Physics Department, McGill University, Montreal, QC H3A 2T8, Canada O. Hervet Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA C. E. Hinrichs Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 USA J. Hoang Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA J. Holder Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA T. B. Humensky Department of Physics, University of Maryland, College Park, MD, USA NASA GSFC, Greenbelt, MD 20771, USA W. Jin Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA P. Kaaret Department of Physics and Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242, USA M. Kertzman Department of Physics and Astronomy, DePauw University, Greencastle, IN 46135-0037, USA M. Kherlakian DESY, Platanenallee 6, 15738 Zeuthen, Germany D. Kieda Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112, USA T. K. Kleiner DESY, Platanenallee 6, 15738 Zeuthen, Germany N. Korzoun Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA S. Kumar Department of Physics, University of Maryland, College Park, MD, USA M. J. Lang School of Natural Sciences, University of Galway, University Road, Galway, H91 TK33, Ireland M. Lundy Physics Department, McGill University, Montreal, QC H3A 2T8, Canada G. Maier DESY, Platanenallee 6, 15738 Zeuthen, Germany C. E. McGrath School of Physics, University College Dublin, Belfield, Dublin 4, Ireland M. J. 
Millard Department of Physics and Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242, USA H. R. Miller Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA J. Millis Department of Physics and Astronomy, Ball State University, Muncie, IN 47306, USA C. L. Mooney Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA P. Moriarty School of Natural Sciences, University of Galway, University Road, Galway, H91 TK33, Ireland R. Mukherjee Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027, USA S. O’Brien Physics Department, McGill University, Montreal, QC H3A 2T8, Canada Arthur B. McDonald Canadian Astroparticle Physics Research Institute, 64 Bader Lane, Queen’s University, Kingston, ON K7L 3N6, Canada R. A. Ong Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA M. Pohl Institute of Physics and Astronomy, University of Potsdam, 14476 Potsdam-Golm, Germany DESY, Platanenallee 6, 15738 Zeuthen, Germany E. Pueschel DESY, Platanenallee 6, 15738 Zeuthen, Germany J. Quinn School of Physics, University College Dublin, Belfield, Dublin 4, Ireland K. Ragan Physics Department, McGill University, Montreal, QC H3A 2T8, Canada P. T. Reynolds Department of Physical Sciences, Munster Technological University, Bishopstown, Cork, T12 P928, Ireland D. Ribeiro School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA E. Roache Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA J. L. Ryan Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA I. Sadeh DESY, Platanenallee 6, 15738 Zeuthen, Germany L. Saha Center for Astrophysics $|$ Harvard & Smithsonian, Cambridge, MA 02138, USA M. Santander Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA G. H. Sembroski Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA R. Shang Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027, USA D. Tak DESY, Platanenallee 6, 15738 Zeuthen, Germany A. K. Talluri School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA J. V. Tucci Department of Physics, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA N. Vazquez Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA D. A. Williams Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA S. L. Wong Physics Department, McGill University, Montreal, QC H3A 2T8, Canada J. Woo Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA D. DeBoer Breakthrough Listen, University of California Berkeley, Berkeley, CA 94720, USA H. Isaacson Breakthrough Listen, University of California Berkeley, Berkeley, CA 94720, USA University of Southern Queensland, Toowoomba, QLD 4350, Australia I. de Pater Department of Astronomy, University of California Berkeley, Berkeley, CA 94720, USA D. C. Price International Centre for Radio Astronomy Research, Curtin University, Kent St, Bentley WA 6102, Australia Radio Astronomy Laboratory, 501 Campbell Hall, University of California, Berkeley, CA 94720, USA A. 
Siemion Breakthrough Listen, University of California Berkeley, Berkeley, CA 94720, USA SETI Institute, Mountain View, CA 94043, USA Department of Physics and Astronomy, University of Manchester, UK University of Malta, Institute of Space Sciences and Astronomy, Msida, MSD2080, Malta ###### Abstract The Breakthrough Listen Initiative is conducting a program using multiple telescopes around the world to search for “technosignatures”: artificial transmitters of extraterrestrial origin from beyond our solar system. The VERITAS Collaboration joined this program in 2018, and provides the capability to search for one particular technosignature: optical pulses of a few nanoseconds duration detectable over interstellar distances. We report here on the analysis and results of dedicated VERITAS observations of Breakthrough Listen targets conducted in 2019 and 2020 and of archival VERITAS data collected since 2012. Thirty hours of dedicated observations of 136 targets and 249 archival observations of 140 targets were analyzed and did not reveal any signals consistent with a technosignature. The results are used to place limits on the fraction of stars hosting transmitting civilizations. We also discuss the minimum-pulse sensitivity of our observations and present VERITAS observations of CALIOP: a space-based pulsed laser onboard the CALIPSO satellite. The detection of these pulses with VERITAS, using the analysis techniques developed for our technosignature search, allows a test of our analysis efficiency and serves as an important proof-of-principle. Software: ROOT (Antcheva et al., 2009), Astropy (Astropy Collaboration et al., 2022), Astroquery (Ginsburg et al., 2019), BeautifulSoup, GeoPy, Matplotlib (Hunter, 2007), Mechanize, NetworkX (Hagberg et al., 2008), Numpy (Harris et al., 2020), Pandas (Wes McKinney, 2010), PyTeVCat, Seaborn (Waskom, 2021), Skyfield (Rhodes, 2020) ## 1 Introduction The search for extraterrestrial intelligence (SETI) can be defined as the “theory and practice of searching for extraterrestrial technology or technosignatures” (Wright et al., 2018b). Technosignatures are extraterrestrial signals whose only explanation is that they were produced artificially. Examples of potential technosignatures include interstellar radio-based communications (Cocconi & Morrison, 1959), interstellar laser-based communications (Schwartz & Townes, 1961; Tellis & Marcy, 2017; Zuckerman et al., 2023), radio and optical leakage from technological civilizations (Sullivan et al., 1978; Schneider et al., 2010), infrared emission from Dyson spheres (Dyson, 1960), spectral evidence for industrial pollutants in exoplanet atmospheres (Wright, 2018), and physical artifacts deposited within our solar system (Bracewell, 1960). Since the founding of the field in the 1950s, there have been numerous searches for these technosignatures using radio, optical, and infrared telescopes, but the fraction of the total parameter space which has been searched remains extremely low (Wright et al., 2018a). This paper presents the results of a partnership between the Very Energetic Radiation Imaging Telescope Array System (VERITAS) Collaboration and the Breakthrough Listen Initiative in a search for pulsed optical laser-based communications. The Breakthrough Listen Initiative (https://breakthroughinitiatives.org/) is currently the foremost technosignature search campaign (Worden et al., 2017; Isaacson et al., 2017a).
It began searching for radio technosignatures in 2016 through a partnership with the Green Bank Telescope and the Parkes Observatory, subsequently adding MeerKAT in 2018. Similarly, in the optical band, a partnership begun in 2016 with the Automated Planet Finder at the Lick Observatory and with the Keck Observatory enabled a spectral search for laser-based communication (Tellis & Marcy, 2017; Isaacson et al., 2019; Lipman et al., 2019). More recently, Breakthrough Listen has partnered with the exoplanet-hunting Transiting Exoplanet Survey Satellite (TESS) to search for anomalous stellar lightcurves, and to search targets of interest from the TESS catalog with radio telescopes (Traas et al., 2021; Franz et al., 2022). Taken together, these partnerships constitute the most comprehensive search for technosignatures thus far (Gajjar et al., 2019). Each search for a specific technosignature has benefits and drawbacks, justifying the approach of performing many such searches concurrently. For example, radio-leakage technosignatures emit continuously in every direction, but the inverse-square law and the expected low radio intensity lead to a requirement for radio telescopes which are still in the planning and construction phases, with the full Square Kilometre Array being a notable example (Siemion et al., 2015). For pulsed optical laser-based communication, the benefit lies in concentrating all of the emitting power into a small angular diameter over nanosecond timescales. These laser pulses could, in principle, be produced with today's technology, and could be easily distinguished from the emitter's host star, without significant dispersion losses. A $3\mathrm{\ ns}$, $3.7\mathrm{\ MJ}$ optical laser pulse, collimated at the source using a $10\mathrm{\ m}$ reflector and observed from a distance of 1000 light years, would appear approximately $10^{4}$ times as bright as its host star (Horowitz et al., 2001; Howard et al., 2004). Constructing an interstellar communication system based on this technology is not only theoretically possible, but currently feasible. While these pulses could be bright when observed from within the beam's solid angle, they would occur only over very short timescales. The optical receiver therefore requires a large-aperture mirror with fast photon detectors and associated instrumentation. These requirements are the same as those for atmospheric Cherenkov telescopes (ACTs), which are used to measure nanosecond-timescale Cherenkov emission from cosmic-ray- and gamma-ray-initiated particle showers in the Earth's atmosphere. These telescopes can therefore be used to search for optical laser pulse technosignatures (Covault, 2001; Eichler & Beskin, 2001; Holder et al., 2005; Armada et al., 2005). Nanosecond pulsed SETI searches in the blue/UV region of the electromagnetic spectrum are particularly well-motivated, provided the background due to cosmic-ray events can be removed. The study of cosmic rays has been ongoing for more than a century and is closely tied to the development of modern physics. Cosmic rays are an important and ubiquitous constituent of the Galaxy — their energy density is similar to that of starlight, Galactic magnetic fields or the cosmic microwave background radiation. Any developing technological civilization would almost certainly study cosmic rays and, if located on a planet with a transparent atmosphere, would very likely use the atmospheric Cherenkov effect to do so.
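Returning to the laser-pulse brightness estimate above: the quoted factor of $\sim 10^{4}$ can be checked with a rough comparison of effective isotropic radiated power. The sketch below assumes a Sun-like host ($L\approx 3.8\times 10^{26}$ W) and a visible wavelength of $\sim 500$ nm, neither of which is specified in the text, and it ignores bandpass and extinction effects, so it is an order-of-magnitude estimate only.

```python
import math

# Pulse parameters quoted in the text
E_pulse = 3.7e6        # J, pulse energy
t_pulse = 3e-9         # s, pulse duration
D = 10.0               # m, transmitting aperture

# Assumed values (not from the text)
wavelength = 500e-9    # m, visible-band assumption
L_star = 3.8e26        # W, Sun-like host luminosity

P_peak = E_pulse / t_pulse               # ~1.2e15 W peak power
gain = (math.pi * D / wavelength) ** 2   # diffraction-limited aperture gain
eirp = P_peak * gain                     # effective isotropic radiated power

print(f"EIRP / L_star = {eirp / L_star:.1e}")   # ~1e4, as quoted
```

The distance of 1000 light years drops out of the ratio, since both the pulse and the starlight dilute with the same inverse-square factor.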
The key technology required for this — photomultiplier tubes — has been widely available to our civilization since the late 1930s. Furthermore, Cherenkov telescopes are (by far) the largest optical telescopes in the world. The H.E.S.S. II telescope, currently operating, has a remarkable $28\mathrm{\ m}$ aperture, and the field as a whole has been operating 10-meter-class telescopes since the late 1960s. We argue, therefore, that nanosecond pulsed emission in the blue/UV (at the peak of the spectrum of Cherenkov light) represents a preferred search region for SETI, similar to the famous “water hole” in the radio band. Other wavelengths, such as the near infrared, might also be a natural choice since they experience less extinction due to dust. However, we also argue that any advanced civilization attempting to communicate with an emerging technological civilization would be aware that Cherenkov telescopes are among the earliest instruments capable of easily detecting signals over interstellar distances, and that these observations will occur naturally as a side-project of fundamental physical investigations. One of the first such searches was conducted using the Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE), which re-purposed a New Mexico solar energy research facility for nighttime operations as a wavefront-sampling ACT (Gingrich et al., 2005). STACEE consisted of a field of 64 steerable heliostats, each with $37\mathrm{\ m^{2}}$ mirror area. Light received at the heliostats was reflected onto two sets of secondary mirrors at the top of a tower before being focused onto a set of 64 photomultiplier tubes (PMTs). This system had a sensitivity of $10-15\mathrm{\ photons}\mathrm{\ m^{-2}}$ between 400 and 500$\mathrm{\ nm}$, peaking at 420$\mathrm{\ nm}$. The STACEE Collaboration conducted dedicated observations of 187 targets from the HabCat catalog (Turnbull & Tarter, 2003) for 10 minutes each, between January and May 2007, and did not find any evidence for technosignature signals during their observations (Hanna et al., 2009). Imaging atmospheric Cherenkov telescopes are also designed to detect atmospheric Cherenkov radiation, but differ from wavefront-samplers such as STACEE in that the telescopes are equipped with photomultiplier-tube cameras which allow recording of an image of the Cherenkov light flash. The potential use of such imaging ACTs (IACTs) for technosignature searches was first discussed in the early 2000s (Tarter, 2003), and an analysis methodology was developed and a single test observation performed using the Whipple $10\mathrm{\ m}$ IACT in 2005 (Holder et al., 2005). The importance of the imaging technique is that it provides efficient discrimination between point-like pulsed optical technosignatures and the enormous background of Cherenkov flashes generated by cosmic-ray particle showers in the Earth's atmosphere. The power of IACTs for optical technosignature searches is dramatically improved when multiple telescopes are combined together in an array. An array of physically separated telescopes provides an additional coincidence requirement for the pulses detected by each telescope, combined with the ability to measure parallax. This approach was first developed using the VERITAS array, in an archival search for pulsed optical technosignatures from KIC 8462852 (Abeysekara et al., 2016). The analysis allowed efficient identification of laser-like events over the background of cosmic-ray images.
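The discriminating power of telescope separation can be illustrated with a one-line parallax estimate; the sketch below assumes a representative Cherenkov emission distance of $\sim 10$ km (an assumed typical value) and the $\sim 100$ m telescope spacing of the VERITAS array described in the next section.

```python
import math

baseline = 100.0          # m, approximate VERITAS telescope separation
shower_distance = 10e3    # m, assumed typical distance to Cherenkov emission

# Parallax angle between images of the same shower seen by two telescopes
parallax_deg = math.degrees(baseline / shower_distance)  # small-angle approx.
print(f"shower parallax ~ {parallax_deg:.2f} deg")        # ~0.57 deg

# A source at astronomical distance shows no measurable parallax, so a
# centroid-separation cut well below ~0.5 deg rejects local air showers.
```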
The VERITAS Collaboration has since partnered with the Breakthrough Listen Initiative to continue the research started with the study of KIC 8462852. This partnership has led to 30 hours of dedicated observations of Breakthrough Listen targets with VERITAS and an analysis of 110 hours of observations from the VERITAS archive of sky regions containing Breakthrough Listen targets. The analysis and first results of this program are reported here. ## 2 VERITAS VERITAS is an IACT array, designed to detect Cherenkov radiation from particle showers in the Earth's atmosphere and to identify those initiated by gamma-ray photons over the background of those due to cosmic rays. A description of the VERITAS telescopes can be found in Holder et al. (2006), and the methods of ground-based gamma-ray astronomy are summarized in e.g. Holder (2021). Here we briefly describe those technical aspects of VERITAS most relevant to the search for optical technosignatures. Figure 1: An elevated view of the VERITAS array located at the base of Mount Hopkins near Tucson, AZ. Pictured are the four individual telescopes, which are roughly $100\mathrm{\ m}$ apart, the Fred Lawrence Whipple Observatory visitor center, and the VERITAS control building (with the white roof). Image from Abeysekara et al. (2016). VERITAS consists of four IACTs located at the Fred Lawrence Whipple Observatory in southern Arizona (Figure 1). Each telescope has a 12-m-diameter tessellated reflector mounted on a steerable alt-azimuth platform. The reflector dish comprises 345 hexagonal mirror segments (Roache et al., 2008) arranged in a Davies-Cotton design (Davies & Cotton, 1957), giving a total mirror area of $\sim 110\mathrm{\ m^{2}}$. Alignment of the individual mirror segments is performed and regularly verified using the method described by McCann et al. (2010), resulting in an on-axis optical point-spread function of less than $0.1\degree$ (68% containment radius). The focal length of the optical system is $12\mathrm{\ m}$, giving a focal ratio of 1.0. The focal plane is instrumented with a close-packed array of 499 Hamamatsu R10560 super-bialkali photomultiplier tubes (PMTs), covering an approximately circular $3.5\degree$-diameter field-of-view (FOV) with a pixel spacing of $0.15\degree$. CCD cameras installed on the telescope structure monitor the position of the PMT camera with respect to the sky, and provide pointing corrections with an absolute positional accuracy of $\sim 50\arcsec$. The camera PMT pixels are sensitive over a wide spectral range, with a peak detection efficiency around $400\mathrm{\ nm}$. Dead space between the circular entrance windows of the PMTs is reduced by the addition of truncated Winston cones to the PMT front faces. These cones are shaped such that the entrance is hexagonal and the exit is circular, allowing them to effectively tile the FOV (Nagai et al., 2008). All PMT signals are digitized using 2-ns-sampling, 8-bit flash analog-to-digital converters (FADCs). The FADC read-out is initiated by a three-level trigger system, which requires a signal at the individual pixel level, the telescope camera level, and over the full array. The individual pixel trigger condition is determined by a constant fraction discriminator, while the telescope camera trigger requires at least three adjacent PMT pixel triggers within a coincidence time window of $\sim 5\mathrm{\ ns}$.
The array trigger requires at least two telescope camera triggers within a $50\mathrm{\ ns}$ coincidence window, after the application of hardware timing delays to correct for path-length differences between telescopes. Since optical technosignature images are expected to resemble the telescope optical point-spread function (which can be smaller than the angular size of a single PMT pixel), the impact of the 3-pixel camera-level trigger requirement is particularly important for technosignature searches. We discuss this issue in more detail in section 4 of this paper. The recorded FADC pulses are calibrated, integrated and used to create a 499-pixel image for all four telescopes in the array. These images (or “events”) are recorded at a rate of typically $300\mathrm{\ Hz}$, the vast majority of which are due to Cherenkov emission from cosmic-ray-initiated particle cascades in the atmosphere (Kieda, 2013). Subsequent analysis of these images allows identification of the small fraction (typically $<10^{-4}$) that are due to gamma-ray-initiated showers or, as described in the following section, to search for images which resemble a distant optical laser pulse. ## 3 OSETI analysis with VERITAS Figure 2: A schematic illustration of the optical SETI (OSETI) technique with IACT arrays such as VERITAS. Particle air showers, initiated by cosmic-ray particles or gamma-ray photons, produce extended images with parallax shifts when viewed from separated telescopes (left). A distant laser pulse produces identical point-like images, located at the same position in the field of view of each telescope (right). The analysis applied in this work is similar to that used in the original search for optical technosignatures from KIC 8462852 with VERITAS (Abeysekara et al., 2016). The data are first reduced using the standard VERITAS analysis packages (Maier & Holder, 2017; Cogan, 2008), which calibrate and parameterize the recorded images using a moment analysis. Cuts on the resulting image parameters (the image width, length, etc. — usually referred to as Hillas parameters (Hillas, 1985)) are then used to remove almost all events due to Cherenkov emission from cosmic-ray air showers from the data. While more sophisticated machine-learning approaches are under investigation for this analysis, and are already in use for VERITAS gamma-ray analyses (Krause et al., 2017), simple image-parameter cut selections are computationally cheap and robust, and have proven to be extremely effective. The key characteristics of a potential optical technosignature are: (i) the emission is point-like (i.e. indistinguishable from the telescope optical point-spread function); (ii) it originates from infinity (i.e. shows no parallax shift and has uniform intensity when viewed from different locations on the ground); and (iii) it originates from the position of a target star. This is in contrast to the Cherenkov radiation images of particle cascades produced locally in the Earth's atmosphere, which can have large angular extent (up to a few degrees), are uniformly distributed over the FOV, and display significant parallax and non-uniform intensity when viewed by separated telescopes. Although not used in our work, pulse timing differences may also be used to identify technosignature candidates (Wright et al., 2018c). These features are illustrated schematically in Figure 2. The choice of image parameter cuts follows logically from these differences.
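A schematic implementation of this selection, using the cut values detailed in the next paragraph, is sketched below. The event data structure is hypothetical, and the treatment of an afterpulse-contaminated image is simplified relative to the actual analysis, which also removes that image from the remaining selection criteria.

```python
from itertools import combinations
import math

def passes_oseti_cuts(images, max_sep=0.15, max_len=0.09, max_wid=0.07):
    """Apply the point-source selection to one array event.

    `images` is a list of per-telescope dicts of Hillas parameters,
    e.g. {'x': .., 'y': .., 'length': .., 'width': .., 'loss': ..},
    with angles in degrees.
    """
    if len(images) < 3:
        return False                       # require >= 3 telescope images
    # Cut on the third-smallest length/width, so that a single
    # afterpulse-contaminated image does not veto the event.
    lengths = sorted(im['length'] for im in images)
    widths = sorted(im['width'] for im in images)
    if lengths[2] >= max_len or widths[2] >= max_wid:
        return False
    # Reject images truncated at the camera edge (loss parameter).
    if any(im['loss'] > 0 for im in images):
        return False
    # A point source at infinity: centroids coincide across telescopes.
    for a, b in combinations(images, 2):
        if math.hypot(a['x'] - b['x'], a['y'] - b['y']) >= max_sep:
            return False
    return True
```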
In the analysis of KIC 8462852, the cuts required that at least three of the four telescope images must contain light, the centroid coordinates of the images in every telescope must be separated from each other by less than $0.15\degree$, and the length and width of all images must be less than $0.1125\degree$. After the removal of a few examples of easily identified meteor and satellite tracks, only 28 of the initial 7,036,970 events passed these selection cuts. No events were retained in which the average position of the images was within $0.15\degree$ of the location of the target star for that study, KIC 8462852. In this work, we are analyzing a much larger dataset, testing a catalog of many targets, and have a less homogeneous set of observations. These factors motivate the application of stricter cuts to further improve the background rejection. The most important of these is a modification to the length and width cuts. First, we reduce the cut values to $length<0.09\degree$ and $width<0.07\degree$, which better matches the optical point-spread function of the telescopes. Second, we apply these cuts to the telescope with the third-smallest measured length or width in each event. The motivation is to reduce the impact of PMT afterpulses in the data. Afterpulses are a well-known phenomenon caused mainly by residual positive ions in the PMTs. They appear in the telescope cameras as a single, relatively bright pixel, randomly located in the field of view, which can distort the Hillas parameters of the image in the affected telescope. However, afterpulsing typically affects at most one telescope image in a given event. Applying the length and width cuts to the image with the third-smallest values of these parameters allows an event with one afterpulse-contaminated image to be retained. The telescope image that exceeds the length and width cuts is then also removed from consideration for the other selection criteria. An additional modification is to remove any events that include images potentially truncated by the edge of the camera. This is implemented using the loss parameter, defined as the fraction of the total light in the image contained in pixels that lie on the edge of the camera (we require $loss=0$). As a final check, we visually inspect any remaining candidate events (and their associated ancillary data) to ensure that the telescope cameras were functioning correctly, and that each telescope contributed to the event as expected. For example, if a bright pulse was recorded in only three of the four telescopes, this would exclude it as a candidate — except if the missing telescope had an inoperative PMT pixel at that location in its FOV. At any time, typically a few percent of the PMTs in the telescopes' cameras are malfunctioning, or are temporarily disabled to avoid damage due to bright stars. ## 4 Analysis verification using the CALIOP instrument on the CALIPSO satellite The probability of the VERITAS array triggering on and recording an optical pulse, as well as the efficiency of the subsequent analysis, is difficult to test under realistic conditions. Monte Carlo simulations provide one approach, and are commonly used to estimate the effective detection area and to define the analysis and event selection cuts for gamma-ray astronomy.
In this case, the simulated gamma-ray events can be compared with a known bright source of astrophysical gamma rays, such as the Crab Nebula, and the telescope model parameters tuned until a satisfactory match is achieved. For the optical SETI analysis, however, no natural standard signal exists with which to compare simulations, or to verify the analysis. Furthermore, the precise properties of the pulse to be simulated (risetime, pulse width, wavelength, etc.) are not known. An ideal test signal would be a distant laser which flashes the telescope cameras from a known location, as this matches the technosignature we are looking for. Pulsed light sources have been used for the calibration of IACTs for many years. From 2005, nightly calibrations of VERITAS were performed using a laser with a 337 nm wavelength and a pulse duration of 4 ns at a distance of roughly 5 meters from the camera (Hanna, 2008), before switching to a similar LED-based calibration system in 2010 (Hanna et al., 2010). However, these measurements are designed to illuminate the entire FOV uniformly and do not serve as a useful analog to a distant point source. Another calibration technique once used by VERITAS involves firing a laser pulse upwards from the ground and observing the Rayleigh-scattered laser light with the telescopes (Shepherd et al., 2005; Hanna, 2008). Again, the observed image is not point-like, but corresponds to an illuminated column in the atmosphere. The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument onboard the polar-orbiting Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite is a space-based backscattering lidar, designed to provide high-resolution vertical profiles of aerosols and clouds, which emits $110\mathrm{\ mJ}$, $20\mathrm{\ ns}$-duration laser pulses at a repetition rate of $20.16\mathrm{\ Hz}$, at both $532\mathrm{\ nm}$ and $1064\mathrm{\ nm}$ (Winker et al., 2009). This provides an excellent technosignature verification source for VERITAS. The camera PMTs are sensitive at $532\mathrm{\ nm}$, with a quantum efficiency of approximately 12%. At an orbital height of $700\mathrm{\ km}$, the laser is effectively a point source at infinity relative to the size of VERITAS. The lidar is directed 3° from geodetic nadir in the forward along-track direction of the satellite's orbit, and the laser footprint on the ground is predicted to be less than 100 m in diameter (Winker et al., 2009), making a coincidental overlap with the VERITAS telescopes extremely unlikely. However, observations of the CALIPSO laser by the TAIGA-HiScore collaboration (Porelli & Taiga Collaboration, 2022) demonstrated that the actual footprint extends far beyond the nominal distance, out to at least tens of kilometers, for reasons which are not entirely clear. This motivated both new observations with VERITAS, and a search of the VERITAS archive for serendipitous passages of the satellite through the field of view. Examples of these CALIPSO observations are shown in Figure 3. The top image illustrates a passage from a dedicated observation on May 17, 2021, which occurred at an elevation of 74°. The pulse intensity observed by VERITAS was approximately 2000 photo-electrons at each telescope, corresponding to 150$\mathrm{\ photons}$$\mathrm{\ m^{-2}}$ at ground level. During the transit, 69% of the pulses emitted by CALIPSO triggered VERITAS and 55% passed the optical SETI analysis cuts without accounting for the loss parameter.
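As a consistency check, these quoted figures are mutually compatible: with the $\sim 110\mathrm{\ m^{2}}$ mirror area from section 2 and the $\sim 12$% quantum efficiency at 532 nm quoted above, 2000 photo-electrons corresponds to roughly 150 photons m$^{-2}$. The simple conversion below neglects mirror reflectivity and light-cone losses, which the full calibration would include.

```python
mirror_area = 110.0      # m^2, per telescope (section 2)
qe_532 = 0.12            # PMT quantum efficiency at 532 nm (section 4)
photo_electrons = 2000   # measured per telescope, May 17, 2021 transit

photons_per_m2 = photo_electrons / (qe_532 * mirror_area)
print(f"~{photons_per_m2:.0f} photons/m^2")   # ~150, as quoted in the text
```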
The efficiency without the loss cut applied is most relevant for comparison with our analysis, since we select target locations which are not close to the camera edge (section 5). For these very high intensity pulses, the missing triggered events are largely explained by the deadtime of the telescope data acquisition (which was 9% for this observation) and by the existence of a large patch of inoperative PMTs in one telescope along the track of the satellite, as can be seen in the figure. A second transit is shown at the bottom of Figure 3. This was a serendipitous passage which occurred on November 11, 2013 at an elevation of 54°, when the center of the laser footprint was $\sim 400\mathrm{\ km}$ distant from the location of VERITAS. The measured pulse intensity is more than an order of magnitude lower in this case, corresponding to approximately 10$\mathrm{\ photons}$$\mathrm{\ m^{-2}}$. 21% of the pulses emitted by CALIPSO triggered VERITAS and 20% passed our analysis without accounting for loss. The relatively low trigger efficiency during this transit is likely the result of the extremely non-uniform sensitivity of the VERITAS trigger system to point-like pulses, as we discuss in more detail in section 6. Figure 3: The tracks of the CALIPSO transits (in camera coordinates) on May 17, 2021 (top) and November 11, 2013 (bottom), with the location of the average of the image centers in an event shown as blue points. The hexagons correspond to VERITAS pixels. Grayed-out pixels were non-functional during the transit for VERITAS telescope 1. ## 5 Observations and Results The VERITAS/Breakthrough Listen search we have conducted comprises two datasets, which we describe below. The first of these is a program of dedicated VERITAS observations of Breakthrough Listen targets, while the second is an analysis of serendipitous archival observations. ### 5.1 Dedicated VERITAS Observations Between March 2019 and March 2020, VERITAS spent 30 hours observing objects selected from the Breakthrough Listen target catalog (Isaacson et al., 2017b). This catalog lists targets originally identified for Breakthrough Listen observations with the Green Bank Telescope, Parkes Telescope, and the Automated Planet Finder. It includes: the 60 nearest stars; 1649 stars within a distance of $50\mathrm{\ pc}$ sampling a range of masses, ages and elemental abundances; 123 nearby galaxies; and several exotic objects, including white dwarfs, brown dwarfs, neutron stars and black holes. Not all of these targets are suitable for observations with VERITAS. We removed all galaxies, on the assumption that optical emission is unlikely to be detectable over such large (i.e. extragalactic) distances (Hippke, 2018). We also limited targets to the declination band between $\delta=-10\degree$ and $\delta=+70\degree$, to ensure that the object culminates above $\sim 40\degree$. High elevation observations are preferred, as they provide greater discrimination power between a pulsed point source at infinity and the background of Cherenkov events generated in the Earth's atmosphere. This is because Cherenkov flashes observed at low elevation occur at a larger distance from the telescopes, reducing their parallax angle, image intensity and angular size, and making them appear more point-like. We also removed all targets with a B-band magnitude of less than 7, and all targets within $0.15\degree$ of an object (excluding the target itself) with a B or V magnitude of less than 8.
Bright stars generate a large amount of background photon noise in the VERITAS PMTs, as well as high currents, which accelerate wear. If the current on any individual PMT exceeds a preset threshold, the high voltage supplied to that channel is automatically turned off. This has little impact on the observation of Cherenkov events with large angular extent, but reduces or completely removes the sensitivity to point-like optical pulses from the star’s location. Finally, we removed any targets which had been previously observed by VERITAS, either intentionally or serendipitously, in the FOV of other observations. Observations of these targets are included in the archival search described in section 5.2. This resulted in a list of 506 targets, which were then ranked according to the inverse square of their distance and their optical brightness, with nearby, optically faint targets being preferred. Targets lying close to the ecliptic (which could host civilizations that view the Earth as a transiting exoplanet (Heller & Pudritz, 2016; Sheikh et al., 2020)), or hosting known exoplanets, or located close to another target, were also favored, but with lower weight than the two main criteria of brightness and distance. Candidates from this target list were selected for observation based on their ranking and on observatory scheduling constraints. Observations were typically conducted in 15-minute exposures taken within 90 minutes of culmination, with the primary target offset from the center of the FOV by 1.25°. This offset was chosen to improve the probability of triggering on a faint pulse, as discussed in more detail in section 6.1. All data were taken under clear skies, at new or crescent Moon phases, and with all four telescopes in the array operating correctly. The final dataset comprises 127 observing runs of 108 non-overlapping target fields, with a total exposure of 30.16 hours. Some target fields contain multiple targets, allowing us to study a total of 136 targets with this dataset. Most targets were observed only once, while 25 were observed twice, and three were observed three times. The locations of the observed stellar targets are shown in Figure 4, with their spectral class and distance indicated. Figure 5 shows the Hertzsprung-Russell diagram for all of the stellar Breakthrough Listen targets, with those targets observed by VERITAS indicated. The VERITAS selection covers a broad range of spectral classes along the main sequence, from B to M, as well as a few giant branch stars.

Figure 4: The stellar target locations, in equatorial coordinates, for both the dedicated and archival VERITAS observations. Distance and spectral type are also indicated, as described in the figure legend.

Figure 5: The coverage across an H-R diagram for the stellar targets used in both dedicated observations and archival analysis. It is similar to the coverage found in the original Isaacson et al. (2017b) catalog.

### 5.2 Archival Search

VERITAS has been fully operational since 2007, and records typically $\sim 1000\mathrm{\ hours}$ of observations each year. Thanks to the large VERITAS FOV ($9.6\mathrm{\ deg^{2}}$), these observations provide coverage of over 20% of the sky, with exposures cumulatively ranging from a few minutes to hundreds of hours.
A complete search of this extensive archive for optical pulses is a worthwhile subject for future work, but will require further development of the analysis tools: in particular, to deal with observations recorded at low elevation and to overcome the increased background from examining the entire FOV as opposed to just the region around a set of pre-defined locations. For this work, we selected a reduced set of archival VERITAS observations to analyze. To create this set, we required that the observations were recorded at high elevations ($>40\degree$), with at least three telescopes operating, and with excellent weather conditions. Data taken prior to summer 2012 were not considered, as this was when the VERITAS photomultiplier tube cameras were upgraded, improving the photon detection efficiency by $\sim 30\%$ (Kieda, 2013). Unlike the dedicated observations, the radial distance of the target from the center of the field of view for archival observations could not be fixed at $1.25\degree$; we instead set the maximum radial distance to be $1.5\degree$. We also set the maximum exposure to be analyzed on any single target to be $1\mathrm{\ hour}$. If any target exceeded this threshold, we analyzed the first hour of good-quality data and left the remainder for the future full archival analysis. With these conditions, we selected 249 archival observations with an average length of 28 minutes, which altogether represent 110 hours of observations containing 140 individual Breakthrough Listen targets and 119 non-overlapping fields. The list of Breakthrough Listen targets in this archival dataset includes 25 galaxies, which were serendipitously inside the studied fields and are included here for completeness. This dataset constitutes all of the Breakthrough Listen targets for which we have good quality, high elevation data taken between September 2012 and March 2019. We note that the entire VERITAS archive comprises almost $20,000\mathrm{\ hours}$ of data, including additional observations of the Breakthrough targets analyzed here. The full analysis of this dataset will be presented in future work. Figure 4 shows the location, spectral type, distance, and originating dataset of all analyzed targets. Figure 5 shows the same targets, but instead within the H-R diagram, showing the color and magnitude of the targets. As the figures show, the analyzed targets occupy a significant portion of the parameter space (locations, distances, and spectral properties) of the Isaacson et al. (2017b) catalog.

### 5.3 Results

Table 1 shows the results of each stage of the analysis pipeline for both the dedicated observations and for the archival dataset. For the dedicated observations, only one event survived the pre-defined selection cuts (target HIP 83043). For the larger archival dataset, three events survived (targets HIP 51317, HIP 93871 and NGC 4551) and were subjected to visual inspection. For three of these four events, two of the four telescopes in the array triggered and three of the four telescopes registered an image. The fourth telescope did not, despite being operational and having no disabled PMTs at the pulse location. These events therefore fail our requirement for uniform intensity, and are rejected. The remaining event shows three OSETI-like images, thereby satisfying the third-smallest width and length criterion, but the fourth telescope image contains a bright, extended flash, with an angular (parallactic) displacement from the other images.
This clearly identifies the event as being due to a cosmic ray air shower in the atmosphere, and so it is also rejected. We therefore have zero candidate events remaining after the full analysis.

Table 1: The number of events remaining after each stage of the analysis for serendipitous archival observations and for dedicated observations of Breakthrough Listen targets

Cut description | Archival Data | Dedicated Observations
---|---|---
Before cuts | 127,346,295 | 34,917,340
At least 3 images | 80,910,174 | 23,088,334
Point-like images ($3^{\text{rd}}$ smallest $length<0.09\degree$ & $width<0.07\degree$) | 1,894,155 | 508,637
Image centers co-located (within $0.15\degree$) | 237 | 35
Near target (within $0.15\degree$) | 3 | 1
Images are not truncated ($loss=0$) | 3 | 1
Visual inspection | 0 | 0
Candidate events | 0 | 0

## 6 Discussion

We have presented the analysis of observations of 272 Breakthrough Listen targets with VERITAS (there are 4 targets in common between the target lists of the dedicated observations and the archival search) and have found no evidence for rapid optical pulses from any of these objects. Here we attempt to summarize the sensitivity of our search, both in terms of the minimum optical pulse intensity detectable by VERITAS and in the constraints our survey allows us to place on the frequency of emitting civilizations.

### 6.1 Optical pulse sensitivity

Abeysekara et al. (2016) estimated the minimum optical pulse intensity detectable by VERITAS to be $0.94\mathrm{\ photons\ m^{-2}}$ for a $12\mathrm{\ ns}$ integration window, while noting that such estimates are challenging due to the various unknown pulse properties (location, wavelength, duration, temporal profile, etc.). The CALIPSO observations demonstrate experimentally that pulses with an intensity of $10\mathrm{\ photons\ m^{-2}}$ can be detected. Furthermore, the CALIPSO pulses are relatively long duration, with a pulse width of $\sim 20\mathrm{\ ns}$. The single photo-electron pulse width for VERITAS is $4\mathrm{\ ns}$, and the camera trigger coincidence time is $\sim 5\mathrm{\ ns}$, implying that shorter pulses with a substantially lower integrated photon intensity must also be detectable. However, the CALIPSO results also highlight that the efficiency for pulse detection with VERITAS is not 100%. We discuss one of the reasons for this in more detail here. As mentioned, the VERITAS telescope cameras are each composed of 499 photomultiplier tubes on a hexagonal grid, with a pixel-to-pixel spacing of $0.15\degree$. For an individual telescope to trigger on an optical pulse, signals on three adjacent PMT pixels must exceed a discriminator threshold within a $\sim 5\mathrm{\ ns}$ coincidence window. A laser pulse generated at large distance is point-like, and so this 3-adjacent trigger condition would never be met if the optical system were perfect. In reality, an image of a point source has the same shape and structure as the telescope optical point-spread function (PSF), which may overlap multiple pixels. Because Cherenkov showers have a substantial angular extent, comparable to or larger than the PMT pixels, IACTs can use cheaper mirrors with a significantly coarser angular resolution (i.e., a larger PSF) than typical optical telescopes. This means that IACTs like VERITAS can be much larger and overall cheaper than their optical counterparts (Canestrari et al., 2010). The PSF will also vary across the field of view due to comatic aberration.
At the center of the camera, the PSF can be approximated by a bivariate Gaussian with a $\sim 0.08\degree$ 68% containment diameter, increasing to $\sim 0.15\degree$ at an offset of $1.2\degree$, and degrading further towards the edge of the camera at $1.75\degree$. The probability of satisfying the 3-adjacent trigger condition therefore depends very strongly upon the exact pulse location in the field of view. Specifically, it is determined by the amount of light received by the PMT which is third-most-distant from the image centroid, i.e., the one which measures the third-largest signal. In the most favorable case, the pulse centroid location lies equidistant between three pixels, each of which receives approximately one third of the light. In the least favorable case, the pulse lands exactly in the center of a pixel, and adjacent pixels receive only a small fraction of the light. The difference between these two cases is most extreme close to the camera center, where the optical PSF is small, and least extreme at the camera edge, where the optical PSF is more extended. Figure 6 shows the results of a Monte Carlo simulation which illustrates these effects and how the minimum pulse sensitivity varies with radial distance in the camera. The optimum sensitivity is taken to be the same as that estimated by Abeysekara et al. (2016). The green dotted line in the figure corresponds to the worst case, where the pulse is centered on a pixel. The blue solid line corresponds to the best case, where the pulse is equidistant between 3 pixels. For a random pulse location on the camera, the most likely distance between the pulse location and the pixel containing the third brightest signal is $0.13\degree$. The orange dashed line indicates this typical case. 75% of possible pulse locations in the camera provide a sensitivity equal to or better than this typical case; the cross-hatched region indicates this typical sensitivity, over the outer region of the camera corresponding to 75% of the total area. The black vertical line at a camera radius of $1.25\degree$ indicates the position of the Breakthrough Listen targets in the field of view for the dedicated VERITAS observations reported here. The typical sensitivity in this case is $3\mathrm{\ photons\ m^{-2}}$. Observations of the CALIPSO satellite laser over a wide range of elevations (and hence pulse intensity and pulse location in the cameras) are currently being taken by VERITAS and will allow these sensitivity estimates to be tested more rigorously.

Figure 6: The sensitivity (minimum detectable pulse intensity) as a function of radial distance from the center of the VERITAS telescope field of view. The three curves correspond to a pulse located at the center of a PMT (worst case), equidistant between 3 PMTs (best case) and at the most common location (typical case). The cross-hatched region indicates the outer 75% of the camera area, and the sensitivity of 75% of the possible pulse locations within this area. See text for more details.

As a final point, we stress that the issue of non-uniform sensitivity across the field of view is not intrinsic to the technique; rather, it is a result of the VERITAS trigger system design, which is optimized for gamma-ray astronomy. A dedicated trigger for point-like optical pulses, requiring the same single pixel to cross a trigger threshold on multiple separated telescopes, would completely remove this limitation.
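The qualitative behavior of Figure 6 can be reproduced with a minimal Monte Carlo along the following lines. This is an illustrative sketch only: the hexagonal grid is idealized, the PSF is a symmetric Gaussian with assumed widths, and the fraction of light landing in the third-brightest pixel is used as a proxy for the 3-adjacent trigger condition.

```python
# Illustrative Monte Carlo sketch: fraction of a point-source image that
# falls in the third-brightest pixel of an idealized hexagonal camera.
# The minimum detectable pulse intensity scales inversely with this
# fraction. Grid geometry and PSF widths are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(1)
pitch = 0.15  # pixel-to-pixel spacing [deg]

# Pixel centers of a small hexagonal lattice around the origin.
centers = np.array([(pitch * (q + 0.5 * r), pitch * np.sqrt(3) / 2 * r)
                    for q in range(-5, 6) for r in range(-5, 6)])

def third_pixel_fraction(sigma, n_photons=5000, n_trials=100):
    """Mean fraction of the light collected by the third-brightest pixel,
    averaged over random pulse positions within one pixel cell."""
    fracs = []
    for _ in range(n_trials):
        xy0 = rng.uniform(-pitch / 2, pitch / 2, size=2)  # pulse centroid
        photons = xy0 + rng.normal(0.0, sigma, size=(n_photons, 2))
        # Assign each photon to the nearest pixel center (hexagonal cells).
        d2 = ((photons[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        counts = np.bincount(d2.argmin(axis=1), minlength=len(centers))
        fracs.append(np.sort(counts)[-3] / n_photons)
    return float(np.mean(fracs))

# Assumed Gaussian PSF widths [deg]: small near the camera center,
# larger towards the edge. A narrow PSF concentrates the light in a
# single pixel, shrinking the third-pixel fraction.
for sigma in (0.03, 0.06, 0.10):
    print(f"sigma = {sigma:.2f} deg: third-pixel fraction ~ "
          f"{third_pixel_fraction(sigma):.3f}")
```

With a small PSF, most of the light lands in one pixel and the third-pixel fraction is tiny, driving up the minimum detectable intensity, as in the worst-case curve of Figure 6.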
Such a dedicated trigger could be implemented in a relatively straightforward manner on existing or future facilities and could operate in parallel with the existing Cherenkov trigger system.

### 6.2 Survey sensitivity

The sensitivity to optical pulses is an important instrumental metric. Complementary to this, however, is the sensitivity of the search as a survey: that is, how do our results constrain the parameter space of potential emitters? There are many different ways to estimate this, usually discussed in the context of searches for radio technosignatures (e.g. see Wright et al. (2018a) and references therein). The most applicable prior works for our purposes are those of Howard et al. (2004), Howard et al. (2007), and Mead (2013), which discuss the search for nanosecond-duration, pulsed optical emission using optical astronomical telescopes equipped with hybrid avalanche photodetectors or with photomultiplier tubes. From 1998 to 2003, 11,600 targeted observations of 4730 stellar objects were made under good conditions with the $1.5\mathrm{\ m}$-aperture Wyeth telescope at the Harvard/Smithsonian Oak Ridge Observatory, with a total exposure of $1721\mathrm{\ hr}$. Subsequently, the Harvard All-Sky Observatory utilized a custom optical setup consisting of a $1.8\mathrm{\ m}$ telescope which focused a $1.6\degree\times 0.2\degree$ patch of the sky onto a beam splitter with matched arrays of 8 photomultiplier tubes down each path. From 2007 to 2012, it made 7320 hours of observations, over which it searched the entire northern sky four times. Each of these campaigns used the same mathematical model, as explained in Howard et al. (2004), to place an upper bound on the fraction of nearby stars that host civilizations emitting optical pulses towards the Earth as a function of $P$, the typical pulse repetition period, under the assumption that any emitted pulse would exceed the minimum pulse sensitivity of the instrument. The results are replicated in Figure 7. We emphasize, however, that the minimum pulse sensitivity of the VERITAS observations ($\gtrsim 3\mathrm{\ photons\ m^{-2}}$) is much better than that of the Harvard experiment ($\gtrsim 100\mathrm{\ photons\ m^{-2}}$). We have applied a similar methodology to the sum of both VERITAS datasets described in this paper, with an observed sample of 247 unique stellar targets and a total observation time of $140\mathrm{\ hr}$. Figure 7 also demonstrates the potential survey sensitivity that can be achieved if we still obtain no candidate events after removing the constraint that pulses must be associated with a pre-defined location from the Breakthrough Listen target list. This requires some additional analysis development, to further reduce the remaining background, but is realistically achievable in the near future. For this calculation, we assume a typical stellar density of $0.1\mathrm{\ stars}\mathrm{\ pc^{-3}}$ and a maximum range of $1\mathrm{\ kpc}$, corresponding to $4\times 10^{8}$ stars over the whole sky, similar to the values used for calculating the Harvard All-Sky limits (Mead, 2013). Using only the $140\mathrm{\ hr}$ dataset considered in this paper, this search would lower the upper limit on the fraction of stars with transmitters by roughly five orders of magnitude, corresponding to the ratio of the number of stars searched between targeted and non-targeted techniques.
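To give a feel for how such limits arise, the following is a deliberately simplified Poisson treatment in the spirit of Howard et al. (2004); the published model differs in detail, and the uniform per-target exposure used here is an assumption.

```python
# Simplified sketch of a targeted-survey upper limit, in the spirit of
# Howard et al. (2004) (the published model differs in detail).
# Assume a fraction f of the observed stars host transmitters emitting
# detectable pulses towards Earth with mean period P. With zero
# detections, a ~95% CL limit follows from requiring the expected number
# of detections, f * sum_i (1 - exp(-t_i / P)), to stay below 3.
import numpy as np

n_stars = 247                               # unique stellar targets
t_exp = np.full(n_stars, 140.0 / n_stars)   # hours per star (uniform assumption)

def f_upper(period_hours, n_95=3.0):
    p_detect = 1.0 - np.exp(-t_exp / period_hours)
    return min(1.0, n_95 / p_detect.sum())

for P in (0.01, 0.1, 1.0, 10.0, 100.0):     # mean pulse period [hr]
    print(f"P = {P:6.2f} hr -> f < {f_upper(P):.3g}")
```

For frequent pulses the limit saturates near $3/247 \approx 10^{-2}$, while for periods much longer than the per-target exposure the data become unconstraining, reproducing the rising shape of the curves in Figure 7.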
Applying the same approach to the entire VERITAS archive of 18,176 hours would further reduce the minimum upper limit, and extend the search sensitivity to much longer pulse transmission periods.

Figure 7: Upper limits on the fraction of stars with transmitting civilizations as a function of the average period between pulses, using the model from Howard et al. (2007) (building on Howard et al. (2004)), which assumes no candidate pulses are found. From top to bottom the five lines correspond to: all data from this paper (247 targets; upside-down triangles), the Harvard targeted search (Howard et al., 2004) (4730 targets; circles), the Harvard All-Sky untargeted survey (Howard, 2006; Mead, 2013) (7320 hours; diamonds), a hypothetical non-targeted VERITAS survey using all of the data from this paper (140 hours; right-pointing triangles), and a hypothetical non-targeted survey using all data from the entire VERITAS archive (18,176 hours; left-pointing triangles). The minimum detectable pulse is $\sim 30$ times larger for Harvard than for VERITAS.

## 7 Conclusions and Prospects

VERITAS is not alone in searching for optical technosignatures. Table 1 in Schuetz et al. (2016) summarized the capabilities of optical technosignature searches at the time of the VERITAS analysis of KIC 8462852. Since then, there have been numerous developments in the field. The Near-InfraRed Optical SETI instrumentation on a 1-meter telescope at the Lick Observatory has been used to conduct a survey of 1280 celestial objects in the near-infrared (950–$1650\mathrm{\ nm}$), sensitive to pulses with durations of $<50\mathrm{\ ns}$ (Maire et al., 2019). Tellis & Marcy (2017) conducted a survey of 5600 FGKM stars using the Keck $10\mathrm{\ m}$ telescope, searching for a spectral (not temporal) signature of laser emission. An all-sky instrument called PANOSETI is under development (Liu et al., 2020; Maire et al., 2022), and will soon provide nightly all-sky coverage from two sites. Cherenkov telescopes also have an important role to play in future developments. The TAIGA-HiSCORE wide-aperture Cherenkov array, consisting of 100 telescopes of $0.5\mathrm{\ m^{2}}$ each, spread over $1\mathrm{\ km^{2}}$ with a field of view of $0.6\mathrm{\ sr}$, has searched for nanosecond optical transients and detected pulsed emission from the CALIPSO satellite (Panov et al., 2021). In the coming decade, the Cherenkov Telescope Array (CTA) will provide unprecedented telescope light collecting area, exceeding the mirror area of all of the world’s large optical telescopes combined (Cherenkov Telescope Array Consortium et al., 2019). It will have the capability to conduct nanosecond optical pulse searches similar to VERITAS, with much greater sensitivity and stricter background rejection. As we have shown here, verification and calibration of this capability with satellite-based lasers will be an important component of this program, as will considerations of the telescope optical performance and trigger system design.

This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy’s Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S.
Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We thank the Breakthrough Prize Foundation and the University of California, Berkeley, for their support. We also acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. Services: Celestrak, NASA Exoplanet Archive, TeVCat, the SIMBAD database, the ViZieR catalog access tool ## References * Abeysekara et al. (2016) Abeysekara, A. U., Archambault, S., Archer, A., et al. 2016, ApJ, 818, L33, doi: 10.3847/2041-8205/818/2/L33 * Antcheva et al. (2009) Antcheva, I., Ballintijn, M., Bellenot, B., et al. 2009, Computer Physics Communications, 180, 2499, doi: 10.1016/j.cpc.2009.08.005 * Armada et al. (2005) Armada, A., Cortina, J., & Martinez, M. 2005, in Neutrinos and Explosive Events in the Universe, ed. M. M. Shapiro, T. Stanev, & J. P. Wefel, Vol. 209, 307 * Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167, doi: 10.3847/1538-4357/ac7c74 * Bracewell (1960) Bracewell, R. N. 1960, Nature, 186, 670, doi: 10.1038/186670a0 * Canestrari et al. (2010) Canestrari, R., Motta, G., Pareschi, G., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7739, Modern Technologies in Space- and Ground-based Telescopes and Instrumentation, ed. E. Atad-Ettedgui & D. Lemke, 77390H, doi: 10.1117/12.857268 * Cherenkov Telescope Array Consortium et al. (2019) Cherenkov Telescope Array Consortium, Acharya, B. S., Agudo, I., et al. 2019, Science with the Cherenkov Telescope Array, doi: 10.1142/10986 * Cocconi & Morrison (1959) Cocconi, G., & Morrison, P. 1959, Nature, 184, 844, doi: 10.1038/184844a0 * Cogan (2008) Cogan, P. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1385–1388. https://arxiv.org/abs/0709.4233 * Covault (2001) Covault, C. E. 2001, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4273, The Search for Extraterrestrial Intelligence (SETI) in the Optical Spectrum III, ed. S. A. Kingsley & R. Bhathal, 161–172, doi: 10.1117/12.435374 * Davies & Cotton (1957) Davies, J. M., & Cotton, E. S. 1957, Solar Energy, 1, 16, doi: 10.1016/0038-092X(57)90116-0 * Dyson (1960) Dyson, F. J. 1960, Science, 131, 1667, doi: 10.1126/science.131.3414.1667 * Eichler & Beskin (2001) Eichler, D., & Beskin, G. 2001, Astrobiology, 1, 489, doi: 10.1089/153110701753593892 * Franz et al. (2022) Franz, N., Croft, S., Siemion, A. P. V., et al. 2022, AJ, 163, 104, doi: 10.3847/1538-3881/ac46c9 * Gajjar et al. (2019) Gajjar, V., Siemion, A., Croft, S., et al. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 223. https://arxiv.org/abs/1907.05519 * Gingrich et al. (2005) Gingrich, D. M., Boone, L. M., Bramel, D., et al. 2005, IEEE Transactions on Nuclear Science, 52, 2977, doi: 10.1109/TNS.2005.855705 * Ginsburg et al. (2019) Ginsburg, A., Sipőcz, B. M., Brasseur, C. E., et al. 2019, AJ, 157, 98, doi: 10.3847/1538-3881/aafc33 * Hagberg et al. (2008) Hagberg, A. A., Schult, D. A., & Swart, P. J. 2008, in Proceedings of the 7th Python in Science Conference, ed. G. Varoquaux, T. Vaught, & J. Millman, Pasadena, CA USA, 11 – 15 * Hanna (2008) Hanna, D. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1417–1420. https://arxiv.org/abs/0709.4479 * Hanna et al. 
(2010) Hanna, D., McCann, A., McCutcheon, M., & Nikkinen, L. 2010, NIMPA, 612, 278, doi: 10.1016/j.nima.2009.10.107 * Hanna et al. (2009) Hanna, D. S., et al. 2009, Astrobiology, 9, 345, doi: 10.1089/ast.2008.0256 * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 * Heller & Pudritz (2016) Heller, R., & Pudritz, R. E. 2016, AsBio, 16, 259, doi: 10.1089/ast.2015.1358 * Hillas (1985) Hillas, A. M. 1985, in International Cosmic Ray Conference, Vol. 3, 19th International Cosmic Ray Conference (ICRC19), Volume 3, 445 * Hippke (2018) Hippke, M. 2018, JApA, 39, 73, doi: 10.1007/s12036-018-9566-x * Holder (2021) Holder, J. 2021, Atmospheric Cherenkov Gamma-Ray Telescopes, 2nd edn., World Scientific Series in Astrophysics (World Scientific), 117–136, doi: 10.1142/9789811203817_0006 * Holder et al. (2005) Holder, J., Ashworth, P., LeBohec, S., Rose, H. J., & Weekes, T. C. 2005, in International Cosmic Ray Conference, Vol. 5, 29th International Cosmic Ray Conference (ICRC29), Volume 5, 387. https://arxiv.org/abs/astro-ph/0506758 * Holder et al. (2006) Holder, J., Atkins, R. W., Badran, H. M., et al. 2006, Astroparticle Physics, 25, 391, doi: 10.1016/j.astropartphys.2006.04.002 * Horowitz et al. (2001) Horowitz, P., Coldwell, C. M., Howard, A. B., et al. 2001, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4273, The Search for Extraterrestrial Intelligence (SETI) in the Optical Spectrum III, ed. S. A. Kingsley & R. Bhathal, 119–127, doi: 10.1117/12.435364 * Howard et al. (2007) Howard, A., Horowitz, P., Mead, C., et al. 2007, Acta Astronautica, 61, 78, doi: 10.1016/j.actaastro.2007.01.038 * Howard (2006) Howard, A. W. 2006, PhD thesis, Harvard University * Howard et al. (2004) Howard, A. W., Horowitz, P., Wilkinson, D. T., et al. 2004, ApJ, 613, 1270, doi: 10.1086/423300 * Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55 * Isaacson et al. (2019) Isaacson, H., Siemion, A. P. V., Marcy, G. W., et al. 2019, PASP, 131, 014201, doi: 10.1088/1538-3873/aaeae0 * Isaacson et al. (2017a) —. 2017a, PASP, 129, 054501, doi: 10.1088/1538-3873/aa5800 * Isaacson et al. (2017b) —. 2017b, PASP, 129, 054501, doi: 10.1088/1538-3873/aa5800 * Kieda (2013) Kieda, D. B. 2013, in International Cosmic Ray Conference, Vol. 33, International Cosmic Ray Conference, 1124. https://arxiv.org/abs/1308.4849 * Krause et al. (2017) Krause, M., Pueschel, E., & Maier, G. 2017, Astroparticle Physics, 89, 1, doi: 10.1016/j.astropartphys.2017.01.004 * Lipman et al. (2019) Lipman, D., Isaacson, H., Siemion, A. P. V., et al. 2019, PASP, 131, 034202, doi: 10.1088/1538-3873/aafe86 * Liu et al. (2020) Liu, W., Werthimer, D., Lee, R., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11447, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 114477G, doi: 10.1117/12.2561203 * Maier & Holder (2017) Maier, G., & Holder, J. 2017, in International Cosmic Ray Conference, Vol. 301, 35th International Cosmic Ray Conference (ICRC2017), 747. https://arxiv.org/abs/1708.04048 * Maire et al. (2019) Maire, J., Wright, S. A., Barrett, C. T., et al. 2019, AJ, 158, 203, doi: 10.3847/1538-3881/ab44d3 * Maire et al. (2022) Maire, J., Wright, S. A., Holder, J., et al. 2022, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 
12184, Ground-based and Airborne Instrumentation for Astronomy IX, ed. C. J. Evans, J. J. Bryant, & K. Motohara, 121848B, doi: 10.1117/12.2630772 * McCann et al. (2010) McCann, A., Hanna, D., Kildea, J., & McCutcheon, M. 2010, Astroparticle Physics, 32, 325, doi: 10.1016/j.astropartphys.2009.10.001 * Mead (2013) Mead, C. C. 2013, Doctoral dissertation, Harvard University. http://nrs.harvard.edu/urn-3:HUL.InstRepos:11158246 * Nagai et al. (2008) Nagai, T., McKay, R., Sleege, G., & Petry, D. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1437–1440 * Panov et al. (2021) Panov, A. D., Astapov, I. I., Awad, A. K., et al. 2021, arXiv e-prints, arXiv:2109.09637. https://arxiv.org/abs/2109.09637 * Porelli & Taiga Collaboration (2022) Porelli, A., & Taiga Collaboration. 2022, in 37th International Cosmic Ray Conference, 876, doi: 10.22323/1.395.0876 * Rhodes (2020) Rhodes, B. 2020, Skyfield: Generate high precision research-grade positions for stars, planets, moons, and Earth satellites, 1.17 * Roache et al. (2008) Roache, E., Irvin, R., Perkins, J. S., et al. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1397–1400 * Schneider et al. (2010) Schneider, J., Léger, A., Fridlund, M., et al. 2010, Astrobiology, 10, 121, doi: 10.1089/ast.2009.0371 * Schuetz et al. (2016) Schuetz, M., Vakoch, D. A., Shostak, S., & Richards, J. 2016, ApJ, 825, L5, doi: 10.3847/2041-8205/825/1/L5 * Schwartz & Townes (1961) Schwartz, R. N., & Townes, C. H. 1961, Nature, 190, 205, doi: 10.1038/190205a0 * Sheikh et al. (2020) Sheikh, S. Z., Siemion, A., Enriquez, J. E., et al. 2020, AJ, 160, 29, doi: 10.3847/1538-3881/ab9361 * Shepherd et al. (2005) Shepherd, N., Buckley, J. H., Celik, O., et al. 2005, in International Cosmic Ray Conference, Vol. 5, 29th International Cosmic Ray Conference (ICRC29), Volume 5, 427. https://arxiv.org/abs/astro-ph/0507083 * Siemion et al. (2015) Siemion, A., Benford, J., Cheng-Jin, J., et al. 2015, in Advancing Astrophysics with the Square Kilometre Array (AASKA14), 116, doi: 10.22323/1.215.0116 * Sullivan et al. (1978) Sullivan, W. T., I., Brown, S., & Wetherill, C. 1978, Science, 199, 377, doi: 10.1126/science.199.4327.377 * Tarter (2003) Tarter, J. 2003, in ESA Special Publication, Vol. 539, Earths: DARWIN/TPF and the Search for Extrasolar Terrestrial Planets, ed. M. Fridlund, T. Henning, & H. Lacoste, 31–38 * Tellis & Marcy (2017) Tellis, N. K., & Marcy, G. W. 2017, AJ, 153, 251, doi: 10.3847/1538-3881/aa6d12 * Traas et al. (2021) Traas, R., Croft, S., Gajjar, V., et al. 2021, AJ, 161, 286, doi: 10.3847/1538-3881/abf649 * Turnbull & Tarter (2003) Turnbull, M. C., & Tarter, J. C. 2003, ApJS, 145, 181, doi: 10.1086/345779 * Waskom (2021) Waskom, M. L. 2021, Journal of Open Source Software, 6, 3021, doi: 10.21105/joss.03021 * Wes McKinney (2010) Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56–61, doi: 10.25080/Majora-92bf1922-00a * Winker et al. (2009) Winker, D. M., Vaughan, M. A., Omar, A., et al. 2009, JAtOT, 26, 2310, doi: 10.1175/2009JTECHA1281.1 * Worden et al. (2017) Worden, S. P., Drew, J., Siemion, A., et al. 2017, Acta Astronautica, 139, 98, doi: 10.1016/j.actaastro.2017.06.008 * Wright (2018) Wright, J. T. 2018, Exoplanets and SETI, ed. H. J. Deeg & J. A. Belmonte (Cham: Springer International Publishing), 3405–3412, doi: 10.1007/978-3-319-55333-7_186 * Wright et al. (2018a) Wright, J. T., Kanodia, S., & Lubar, E.
2018a, AJ, 156, 260, doi: 10.3847/1538-3881/aae099 * Wright et al. (2018b) Wright, J. T., Sheikh, S., Almár, I., et al. 2018b, arXiv e-prints, arXiv:1809.06857. https://arxiv.org/abs/1809.06857 * Wright et al. (2018c) Wright, S. A., Horowitz, P., Maire, J., et al. 2018c, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, & H. Takami, 107025I, doi: 10.1117/12.2314268 * Zuckerman et al. (2023) Zuckerman, A., Ko, Z., Isaacson, H., et al. 2023, arXiv, arXiv:2301.06971. https://arxiv.org/abs/2301.06971
# Flexible heat pumps: must-have or nice to have in a power sector with renewables?

Alexander Roth (corresponding author: <EMAIL_ADDRESS>), Dana Kirchem, Carlos Gaete-Morales, Wolf-Peter Schill

German Institute for Economic Research (DIW Berlin), Mohrenstraße 58, 10117 Berlin, Germany

###### Abstract

Heat pumps are a key technology for reducing fossil fuel use in the heating sector. However, the transition to heat pumps implies an increase in electricity demand, especially in the cold winter months. Therefore, the flexible operation of heat pumps will be of high importance to the power sector. Using an open-source power sector model, we examine the power sector impacts of three different expansion scenarios of decentralized heat pumps in an interconnected Germany until 2030 and the role of buffer heat storage of different sizes. We quantify the required additional investments in renewable energy sources and the effects on firm capacity needs. If wind power expansion potentials are limited, the rollout of heat pumps can also be accompanied by solar PV with little additional cost. The expansion of heat pumps increases the need for firm capacities and battery storage, but even a small heat buffer storage with an energy-to-power ratio of two hours can reduce these additional capacities. We further show that increasing the number of heat pumps from 1.7 to 10 million saves around 180 TWh of natural gas and 35 million tonnes of CO2 equivalents per year.

## 1 Introduction

In light of the climate crisis, heat pumps are regarded as a central technology to reduce greenhouse gas emissions in the heating sector [1]. When powered with electricity from renewable energy sources (RES), heat pumps can displace traditional heating technologies such as oil- and gas-fired heating and thus mitigate greenhouse gas emissions. In addition, in the European context, the Russian invasion of Ukraine has led to further political efforts, especially in Germany, to reduce the dependence on Russian natural gas imports. In Germany, natural gas is currently still the dominant source of residential heating. Therefore, the electrification of heating can be considered a critical measure to reduce the use of natural gas. In Germany, policymakers are working to accelerate the implementation of decentralized heat pumps, with a declared target of six million heat pumps installed by 2030 [2]. Given the current stock (2024) of around 1.7 million heat pumps, such a transition implies an increase in the electricity demand. So far, it is not yet fully understood how a larger heat pump stock affects the power sector in detail, considering that the electricity needs for mobility, hydrogen production, and other energy services will also rise. One common concern is that heat pumps could add to existing load peaks due to electricity load profiles coinciding with heat demand profiles and thus increase the need for firm generation capacity or electricity storage. Therefore, the potential benefits of flexible heat pump operations are of central interest. Against this background, we explore the power sector effects of various heat pump rollout scenarios in Germany. In particular, we focus on different degrees of temporal flexibility in heat pump operations by varying the size of the heat storage assumed to be attached to heat pumps. To do so, we use the open-source capacity expansion model DIETER [3, 4, 5] to model the central European power sector for various scenarios of 2030.
Previous studies have highlighted the important role of heat pumps in the decarbonization of the heating sector. A recent study shows that deploying heat pumps is one of the fastest strategies to reduce natural gas consumption in the German heating sector [6]. Several studies investigate the potential of heat pumps to facilitate the integration of renewable energy sources in the power sector. For example, different analyses show that deploying additional heat pumps aligns well with additional investments into wind power plants [7, 8]. Regarding the flexibility of heat pumps and the optimal heat storage size, the picture is inconclusive. Investigating various heat storage sizes, one study finds that the optimal thermal energy capacity of heat pumps in Spain and the UK lies between 12 and 14 hours of maximum heat output [9]. A previous analysis of wind power deployment in Denmark finds that the flexible operation of heat pumps provides only moderate system benefits and that even inflexible heat pumps enable a higher share of wind power energy [10]. In another paper, heat pump flexibility is provided by the thermal inertia of buildings [11]. A more recent study for Germany points out that the power system cost savings from flexible electric heating with night storage in Germany are moderate because renewable availability patterns do not align well with heat demand profiles [12]. The seasonal demand pattern disadvantages flexible electric heating compared to other sector coupling options without this seasonality, such as electric vehicles. This finding is also supported by another study [13] that identifies a larger potential for load shifting in electric vehicles than in heat pumps. Another study focuses on the role of flexible, large-scale, centralized heat pumps in district heating grids [14], finding a correlation between RES expansion and the choice of heating technologies. With higher deployment of RES, large heat pumps become more competitive. Other studies focus on the competition of flexibility provided by heat pumps with electricity storage units. In power systems with a share of renewable electricity of 80% or higher, the flexible use of heat pumps reduces the investment needs for short-term electricity storage significantly [15]. The substitutability of pumped hydro storage and thermal storage is also highlighted in the literature [7]. Our paper adds to the existing body of literature by investigating the power sector effects of decentralized heat pumps in detail, specifically accounting for different amounts of temporal flexibility facilitated via heat storage. In our analysis, we use an open-source capacity expansion model that considers the hourly variability of renewable electricity generation and heat demand over an entire year and accounts for additional loads related to electric vehicles and the production of green hydrogen. To the best of our knowledge, such an analysis has not been done so far. We investigate how different rollout speeds of heat pumps in Germany, specifically in combination with different heat storage capacities, impact the optimal capacity investment and dispatch decisions in the power system. In addition, we also provide an ex-post calculation to measure the associated natural gas usage, cost, and emission savings.
To check the robustness of our results, we further carry out numerous sensitivity analyses with alternative assumptions on relevant input parameters, such as renewable availability (including an extended drought period), different natural gas prices, and a German coal phase-out.

## 2 Methods

In the following, we describe the methodological approach as well as the sectoral and geographical scope of the study. First, we introduce the power sector model used in this analysis. Second, we provide details about the modeling of the heating sector and, in particular, the assumptions regarding the operation of heat pumps in our model. Third, we outline how other sector coupling options are considered in the model, namely electric mobility and the production of green hydrogen. Finally, we describe the geographical scope of the model.

##### Power sector model

In this study, we use the power sector model DIETER (Dispatch and Investment Evaluation Tool with Endogenous Renewables), which has already been used in prior studies of energy storage and sector coupling [4, 12, 16, 17, 18]. (The model code can be accessed at https://gitlab.com/diw-evu/projects/heatpumps_2030.) It is an open-source linear program that determines the least-cost investment and dispatch decisions for a range of electricity generation and storage technologies. The model minimizes total system costs while considering all consecutive hours of a year to accurately capture renewable energy variability and storage use. A detailed description of the objective function and the most relevant constraints can be found in [3]. The model covers the electricity sector and includes a detailed space heating module, e-mobility, and flexible hydrogen production options. Input data include time series of electric load, heat demand, electric vehicle charging, hydrogen demand, and capacity factors of renewable energies. Cost assumptions and technology investment constraints are further inputs. We follow a brownfield approach, in which we consider exogenous bounds on investments to account for existing plants and path dependencies. These are aligned with the current renewable capacity expansion plans of the German government and the currently installed fossil-fuel capacities. More detail on the capacity bounds can be found in section 3.2 and Table SI.2.

##### Heating sector

The German space heating sector is characterized by twelve archetypes of residential buildings, categorized by two size classes (single-/two-family homes and multifamily buildings) and six age classes, corresponding to varying energy efficiency levels. While the building stock is described in detail in [12], we provide a brief overview here. We model twelve different building archetypes, which are distinguished by year of construction (six classes: before 1957, four periods between 1958 and 2019, and after 2019) and housing type (two classes: one- and two-family homes and multifamily homes). Depending on the year of construction, the building archetypes are characterized by different energy efficiency levels: younger buildings have a lower annual heating requirement, and buildings constructed after 2020 are characterized as passive houses. Table SI.1 depicts the building stock assumptions for 2030, which are based on [12].
For each of the twelve building archetypes, an hourly heating demand time series is generated using the open-source thermal building model TEASER (Tool for Energy Analysis and Simulation for Efficient Retrofit, [19]) and the publicly available AixLib Library [20]. The thermal building model considers the physical components of all major building elements and their thermal inertia to derive hourly heat flows inside the building and toward the ambient environment (conduction, convection, and radiation). We assume indoor temperature requirements of 22°C during the day and a nighttime reduction to 18°C between 10 p.m. and 5 a.m. Further, a test reference year approach is used to derive heating demand, which is representative of historical weather data in central Eastern Germany. Domestic hot water demand is modeled separately based on the Swiss SIA 2024 standard [21]. A graphic representation of the resulting hourly heat demand time series is provided in the Supplemental Information (Figure SI.2). We exogenously set the share of total space heating and hot water demand which has to be covered by two different types of heat pumps for each scenario; hence, we implicitly only consider the part of the building stock where heat pumps are installed. On an hourly basis, the heat pumps must satisfy the demand for space heating and hot water, as determined by these shares. We assume that heat pumps can be combined with a buffer heat storage, whose size varies between scenarios. Based on these inputs and assumptions, the model optimizes the hourly electricity use of heat pumps. Figure 1 depicts how heat pumps are modeled in DIETER. The electric energy needed depends on the coefficient of performance (COP), which in turn depends on the ambient temperature in the case of air-source heat pumps. Lower ambient temperatures decrease the COP, so more electric energy is required to provide the same amount of heating energy. For more information, see section SI.1.1. How much heating energy is provided to the building depends on the heat outflow from the buffer storage, which can neither exceed the total amount of heating energy stored plus the storage inflow in the same hour, nor the installed heat output capacity of the heat pump. We only consider decentralized heat pumps with decentralized thermal energy storage. Centralized large heat pumps supplying district heating grids and centralized seasonal heat storage are not part of the analysis.

Figure 1: Heat module in DIETER

##### Other sector coupling options

As the electrification of other energy sectors is a policy target in Germany, we also account for electric mobility and the production of green hydrogen. The additional system load of electric vehicles enters the model as an electricity demand time series. Cars are assumed to charge with a balanced, yet not wholesale market price-driven, time profile determined by the open-source tool “emobpy” [5] (for further details, see SI.1.2). The model also has to satisfy a given yearly demand for green hydrogen that has to be produced with electrolysis. The hourly hydrogen production profile is endogenously optimized, with given electrolysis capacity and assuming hydrogen storage at no cost. We provide the equations that describe the straightforward hydrogen model in Section SI.1.3 in the Supplemental Information.
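To fix ideas, the model structure described in this section can be condensed into the following stylized problem. This is a simplification of the full DIETER formulation in [3]: electricity storage and trade are omitted, storage losses are neglected, and space heating and hot water are merged into one heat demand.

$$\begin{aligned}
\min_{K,\,g,\,p,\,s}\;& \sum_{i} c_i^{\mathrm{inv}} K_i + \sum_{i,t} c_i^{\mathrm{var}} g_{i,t} \\
\text{s.t.}\;& \sum_{i} g_{i,t} = d_t^{\mathrm{el}} + p_t^{\mathrm{hp}} + p_t^{\mathrm{ev}} + p_t^{\mathrm{ely}} && \forall t \\
& g_{i,t} \le K_i && \forall i,t \\
& s_t = s_{t-1} + \mathrm{COP}_t\, p_t^{\mathrm{hp}} - d_t^{\mathrm{heat}}, \qquad 0 \le s_t \le r\,\bar{Q}^{\mathrm{hp}} && \forall t \\
& \mathrm{COP}_t\, p_t^{\mathrm{hp}} \le \bar{Q}^{\mathrm{hp}}, \qquad \sum_t \eta^{\mathrm{ely}} p_t^{\mathrm{ely}} = D^{\mathrm{H_2}} && \forall t
\end{aligned}$$

Here $g_{i,t}$ and $K_i$ are the generation and capacity of technology $i$; $p_t^{\mathrm{hp}}$, $p_t^{\mathrm{ev}}$, and $p_t^{\mathrm{ely}}$ are the electric loads of heat pumps, electric vehicles (exogenous), and electrolysis; $s_t$ is the buffer heat storage level; $r$ is the energy-to-power ratio of the heat storage; $\bar{Q}^{\mathrm{hp}}$ is the maximum thermal output of the heat pump; and $D^{\mathrm{H_2}}$ is the yearly hydrogen demand.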
##### Geographical scope

The study focuses on Germany, where an explicit heat pump rollout is modeled, but also includes Denmark, Poland, Czechia, Austria, Switzerland, France, Luxembourg, Belgium, the Netherlands, and Italy. To keep the model tractable while still considering the effects of European interconnection, we optimize investment decisions only in the German power sector while assuming (largely) fixed power plant fleets for other countries. We also do not explicitly model sector coupling for countries besides Germany. Section 3.2 discusses capacity bounds for different countries.

## 3 Data and scenario assumptions

### 3.1 Input data sources

Time series data for the electric load, capacity factors for renewables, and hydro inflow data for all countries are taken from the ENTSO-E Pan-European Climate Database (PECD 2021.3) [22]. (We use the target year 2030 and the weather year 2008.) Cost and technology parameters of electricity generation and storage technologies are depicted in Table 3(b) in the Supplemental Information. The relevant technical assumptions related to heating technologies and gas-based electricity generation technologies for the ex-post analysis of natural gas and emission savings are shown in Table 2 (more information in Section 4.6).

### 3.2 Scenario assumptions

We refer to our main set of scenario assumptions as baseline. In the following, we briefly sketch the most important features of this scenario. Whenever we deviate from the baseline, for instance, when we present sensitivity analyses, we make this explicit.

##### Heating sector

We distinguish between three policy scenarios of the overall heat pump stock in 2030, which we compare to a reference scenario. In this reference scenario, we assume 1.7 million decentralized heat pumps in 2030. This number reflects the stock of heat pumps installed in Germany at the time of writing, meaning no further pumps would be installed until 2030. In the slow rollout, the number of heat pumps would reach three million by 2030. Additional heat pumps would be exclusively installed in single- and two-family homes built between 1995 and 2009. This scenario largely corresponds to the current growth of heat pump deployment in Germany (see https://openenergytracker.org/en/docs/germany/heat/). In the government rollout, the stock of heat pumps would reach six million in 2030, reflecting the target of the German government [23]. In this scenario, most single- and two-family homes built after 1995 would be equipped with heat pumps. In the fast rollout, heat pumps would be installed in even more single- and two-family homes, including old ones built before 1979 with very low energy efficiency standards. In this scenario, the total number of heat pumps would increase to ten million by 2030, and the total annual heat provided by heat pumps increases substantially. Table 1 provides an overview of the different heat pump rollout scenarios. Table SI.1 provides additional information on how heat pumps are rolled out across different building types. In the most ambitious scenario, decentralized heat pumps provide around 40% of total space heating and domestic hot water needs (Table 1). The electricity demand of heat pumps and other sectors in the different scenarios is depicted in Figure 2.
Table 1: Heat pump data

Quantity | Unit | Reference | Slow | Government | Fast
---|---|---|---|---|---
Number of installed heat pumps | [million] | 1.7 | 3.0 | 6.0 | 10.0
Heat pump power rating | [GWe] | 8.7 | 14.5 | 27.5 | 52.6
Maximum thermal heat pump output | [GWth] | 19.6 | 32.7 | 61.9 | 118.5
Share of air-sourced heat pumps | | 0.8 | 0.8 | 0.8 | 0.8
Share of ground-sourced heat pumps | | 0.2 | 0.2 | 0.2 | 0.2
Yearly heat supplied by heat pumps | [TWhth] | 18.6 | 43.3 | 74.0 | 195.3

_Notes:_ Heat includes space heating and domestic hot water.

Across all building types, air-source heat pumps account for 80% of installed heat pumps and ground-source heat pumps account for the remaining 20%. While ground-source heat pumps are more energy-efficient, air-source heat pumps are cheaper to install. We assume that all heat pumps can, in principle, be combined with thermal energy storage. We conduct analyses with varying storage energy capacities. The assumed energy storage size is expressed in energy-to-power (E/P) ratios ranging from zero to 168 hours (zero, two, six, 24, and 168 hours). In this terminology, a heat storage with an E/P ratio of two hours has a total heat storage capacity that equals two hours of maximum heat output of the heat pump. Equipping heat pumps with a zero-hour storage means that heat pumps have no attached heat storage and thus have to exactly follow the heat demand profile in every hour; that is, they are operated inflexibly. With increasing heat storage, heat pumps can be operated with more flexibility, allowing electricity consumption to be decoupled from heat provision. Importantly, our modeling approach assumes that heat pumps are operated in a system-friendly, i.e., cost-minimizing, manner whenever possible. This could be interpreted as if heat pump operators faced hourly wholesale prices and operated their heat pumps to minimize overall system costs. As this is not the case today, we discuss the consequences of this assumption in the conclusion.

##### Capacity bounds

In Germany, we limit the capacities of coal- and oil-fired power plants to current levels. Capacities of gas-fired power plants, both open-cycle (OCGT) and combined-cycle (CCGT), can be expanded beyond current levels, in line with current policy discussions. In sensitivity analyses with a German coal phase-out, we assume the upper capacity limit for hard coal and lignite to be zero. Regarding wind energy, we align upper capacity bounds for on- and offshore wind energy with the current German government targets of 115 GW for onshore wind and 30 GW for offshore wind in the baseline scenarios. We use the government target for wind power as an upper limit, as wind capacity expansion in Germany is slow due to long assessment and permit processes, and limited by land (and sea) availability. In this context, the government targets can be considered ambitious. An even higher wind energy capacity expansion appears unrealistic to achieve by 2030 [23]. In a sensitivity analysis, we remove these upper bounds on wind power capacities. The capacities of solar PV are unbounded. The electrolysis capacity is fixed at 10 GWe. In other countries, renewable energy capacities are fixed to the values of ENTSO-E's Ten-Year Network Development Plan (TYNDP) [24], and TYNDP values are set as upper bounds for fossil generators. The reason for not fixing the fossil generator capacities in other countries is to avoid an unduly large power plant fleet that could support the German heat pump rollout.
Therefore, the model is free to choose the smallest capacity needed. In all countries, we fix the capacities of all hydropower technologies (run-of-river, reservoirs, and pumped hydro) according to the ERAA 2021 [22], as well as bioenergy capacities, under the assumption that their potential for further expansion is exhausted. Table SI.2 provides an overview of all capacity bounds in all countries.

##### Sector coupling demand

In Germany, we consider electric loads related to electric mobility and hydrogen production. To incorporate the impact of electric mobility, we include a fleet of 15 million electric cars, compatible with the government's goal for 2030 [23]. This fleet would require approximately 36 TWh of additional electricity annually. Additionally, we account for 28 TWh_H2 of hydrogen demand in Germany produced by domestic electrolysis, resulting in an additional electricity demand of around 39 TWh. This number is based on the target set in the updated German National Hydrogen Strategy of 2023 to build up an electrolysis capacity of 10 GW [25, 26], assuming 4000 full-load hours. It does not include hydrogen imports, which are expected to satisfy 50 to 70% of the German hydrogen demand (95-130 TWh_H2 in 2030) [25]. We further assume that hydrogen can be stored without additional investment costs, e.g., in existing cavern storage. This enables electrolyzers to operate with a substantial degree of flexibility to produce hydrogen over the course of the year. In countries besides Germany, additional loads related to sector coupling are included in the electric load time series data provided by ENTSO-E and thus assumed to be inflexible. Figure 2 provides an overview of the electricity demand of the different sector coupling options.

_Notes:_ The figure shows the electricity demand in the scenario where heat pumps do not have heat storage and are operated inflexibly.

Figure 2: Electricity demand in Germany

##### Renewable energy constraint

In all scenarios, 80% of the yearly electricity consumption in Germany (including the consumption of electric vehicles and electrolysis) has to be covered by renewable energy sources. This is in line with the goal of the current German government coalition. In addition, the electricity demand of heat pumps has to be entirely met by additional renewable energy sources over the course of a year (but not in every single hour). In other countries, we do not assume any renewable energy targets.

##### Fuel and carbon prices

Our fuel price assumptions are summarized in Table 3(b). In our baseline assumptions, we set the wholesale price of natural gas to 50 Euro per MWh. We further assume a carbon emission cost of 130 Euro per ton of CO2 in 2030 [27]. This cost is associated with the emission factor of fossil-based heating and electricity generation technologies and is considered a variable generation cost, along with fuel expenses.

## 4 Results

### 4.1 Heat pump rollout triggers investments in electricity generation and storage

In the following, we show the results of the baseline scenarios, in which we assume expansion limits of 115 GW for onshore wind power and 30 GW for offshore wind power, no regulated phase-out of coal-fired power plants, and a natural gas price of 50 Euro per MWh. Figure 3 shows the power plant capacities needed to cover the required electricity needs of heat pumps and the impact of heat storage. Expanding the stock of heat pumps requires additional investments into electricity generation infrastructure.
Looking first at inflexible heat pumps, we find that in the reference rollout, the cost-optimal capacity mix for reaching 80% renewable energy in Germany is shown in Figure 3, panel A. Further, 10 GW of hard coal and 47 GW of gas-fired power plant capacity are present. A rollout beyond the reference scenario requires higher generation capacity additions. The total additional generation capacities are around 5 GW and 20 GW (panel B) in the slow and government scenarios, respectively. For the slow rollout, 2 GW of offshore wind power and less than 1 GW of PV capacities are added, while around 3 GW combined of gas-fired power plants and lithium-ion batteries are added. For the government rollout, these numbers increase to 7 GW of additional gas-fired power plants and 8 GW of additional solar PV, driven partly by the fact that offshore wind capacities have reached their upper bounds and can only be expanded by an additional 4 GW. In the fast scenario, with the highest rollout of ten million heat pumps, 57 GW of solar PV capacity is added. In parallel to this large expansion of solar PV capacities, firm capacities in the form of gas-fired power plants increase by 18 GW to ensure the coverage of peak loads, while 9 GW of lithium-ion battery storage are also added. The optimal storage energy capacity of batteries increases by 3, 8, and 49 GWh in the three respective rollout scenarios (Figure SI.3).

_Notes:_ The figure shows optimal capacity investments in the reference scenario (panel A) and changes induced by the rollout of heat pumps (panel B) in the case of inflexible heat pumps. Panels C and D show capacity changes relative to the respective references for different heat storage sizes in the government rollout (panel C) and fast rollout (panel D). Reference results (panel A) are almost identical across different heat pump storage sizes; hence, for better visibility, only one reference is shown. Please note the different y-axis ranges of the different panels. The complete set of results, including those for storage energy, is shown in Figure SI.3.

Figure 3: Capacity investments under baseline assumptions with different heat pump rollouts and heat storage sizes

### 4.2 Heat storage reduces capacity needs for electricity generation and storage

Equipping heat pumps with heat storage reduces the need for additional electricity generation and storage capacities (Figure 3, panels C & D). Introducing heat storage with an E/P ratio of two hours reduces the need for additional solar PV capacities compared to the reference (e.g., 6 GW instead of 8 GW in the government rollout). In addition, the need for battery storage is reduced by around 7 GW compared to the case without heat storage, and even by 2 GW compared to the reference. This effect can be explained by the fact that lithium-ion batteries and the heat storage of heat pumps are both short-duration storage technologies and therefore serve as substitutes, especially when taking up daily PV surplus generation peaks. When the heat storage is expanded beyond an E/P ratio of two hours, the need for additional solar PV capacity decreases further, and, in particular, additional capacities of gas-fired power plants are reduced.
If combined with heat storage larger than six hours in the government rollout (panel C) or larger than 24 hours in the fast rollout (panel D), the introduction of heat pumps even reduces the overall need for gas-fired power plants compared to the reference. Qualitative results are largely similar between the government and fast rollouts.

We find a substitution between lithium-ion batteries and heat storage not only for storage power but also for storage energy capacities (Figure SI.3). While the deployment of heat pumps leads to additional lithium-ion energy capacities in all three rollouts, the introduction of a two-hour heat storage not only reduces the additional need for storage energy capacities but even turns it negative: the introduction of heat pumps in combination with two-hour heat storage reduces the overall need for lithium-ion energy capacities. For larger heat storage capacities of 24 and 168 hours, this absolute reversal cannot be detected, yet additional energy capacities still remain well below the case of inflexible heat pumps.

Due to the interconnection with its neighboring countries, the heat pump expansion in Germany could be partly supported by non-German generation and storage capacities. To avoid unintended support of German heat pumps by foreign power plants, we also co-optimize the power plant portfolios of neighboring countries, assuming an upper limit for fossil-fuel power plants outside Germany. Thereby, we ensure that German heat pumps in the model do not unduly benefit from an oversized exogenous power plant fleet outside Germany. Figure SI.5 (in the Supplemental Information) confirms that aggregated generation capacities in all countries except Germany barely change after the introduction of heat pumps in Germany.

### 4.3 Heat storage helps to integrate renewable electricity

The rollout of heat pumps affects optimal generation and storage discharge (Figure 4) similarly to optimal capacities. The additional electricity needed for heat pumps is primarily generated by offshore wind power and solar PV. The latter plays a more significant role in the fast rollout due to the upper capacity bound of offshore wind power. As the profiles of solar PV generation and heat pump load only align to some extent, the expansion of heat pumps triggers additional generation by gas-fired power plants. Battery storage is also used more in the case of inflexible heat pumps, but less if heat storage is available. If no heat storage is available, the rollout of heat pumps is also accompanied by additional generation from bioenergy, which is another flexible generation technology but comes with relatively high variable costs. Accordingly, bioenergy use decreases (or, in the government rollout, even falls below the reference level) if heat storage is available. Net imports of electricity slightly decrease with the rollout of heat pumps, especially when they do not come with heat storage, i.e., are operated inflexibly. This is due to more exports of renewable energy surpluses, triggered by the additional renewable energy capacities needed for the additional heat pumps.

_Notes:_ The figure shows optimal dispatch in the reference scenario (panel A) and changes induced by the rollout of heat pumps (panel B) in the case of inflexible heat pumps. Panels C and D show changes relative to the respective reference scenarios for different heat storage sizes in the government rollout (panel C) and fast rollout (panel D).
The changes shown in panels C & D are relative to their respective reference scenarios with different heat storage sizes. The results for the different reference scenarios (panel A) are almost identical for different heat pump storage sizes; hence, for better visibility, only one reference rollout is shown. Please note the different y-axis ranges of the different panels. The complete set of results is shown in Figure SI.4. Figure 4: Yearly electricity generation by source under baseline assumptions

While the capacity and dispatch results already show that heat storage can help integrate renewable energy into the energy system, this effect is highlighted in Figure 5, which illustrates hourly electricity generation and heat pump operation. The figure depicts two exemplary weeks under baseline assumptions with a government rollout and two hours of heat storage. The diurnal fluctuations of solar PV generation are clearly visible, especially in the autumn week. In contrast, wind power generation has less regular yet longer variability patterns. In hours of low wind and solar PV generation, gas-fired power plants and imports cover the remaining residual load. Even with only two hours of heat storage capacity, heat pumps can align a substantial part of their electricity consumption with PV peak generation periods. This indicates that even small heat storage capacities already improve the integration of heat pumps into the system. Hours of electricity exports, storage charging, and heat pump use often coincide; these are also hours with relatively low prices. Conversely, heat pumps largely avoid drawing electricity from the grid during hours when imports take place, which often coincide with hours of low renewable generation and relatively high prices.

_Notes:_ Two exemplary weeks are shown for the government rollout of heat pumps with heat storage of two hours. Figure 5: Exemplary weeks of electricity generation, heat pump operation, and wholesale prices

In our model, heat pumps are operated in a way that minimizes system cost, which can be interpreted as if they were following wholesale market price signals. The presence of heat storage enables and increases their potential to do so. As visible in Figure 6, there is a strong alignment of heat pump electricity intake with relatively low residual load levels when heat pumps are equipped with heat storage. Heat pumps with no heat storage are inflexible electricity consumers, which directly follow the hourly heat demand profile. In Figure 6, this is visible in the parallel movement of the heat output (gray line, right y-axis) and the electricity demand in the case of no heat storage (red line, left y-axis). This changes when heat storage is added. Even a small two-hour heat storage makes heat pumps sufficiently flexible that they can adjust their demand to the overall system conditions to a considerable extent. If heat storage is expanded further, heat output and electricity intake are even less correlated.

_Notes:_ The figure shows the residual load, the heat output of heat pumps, and their electricity demand for different heat storage sizes in the baseline setting with a government rollout. Figure 6: Heat output, heat pump electric load with different storage sizes, and residual load

The shifting of electric loads through heat storage is also depicted in Figure 7, which shows the electricity demand of heat pumps depending on the size of the heat storage over the course of an entire year.
Panel 0 in Figure 7 shows the electricity demand of heat pumps without heat storage, which mirrors the heat demand (also shown in Figure SI.2). In the winter months, i.e., at the bottom and top of every panel, demand is higher than in the summer months. On all days of the year, heat demand during the first hours of the day is assumed to be zero. Moving from panel 0 to the right, we see that the electric load patterns change when heat pumps are operated more flexibly. Already with heat storage of two hours, heat pumps are used to integrate excess solar energy from the middle of the day. Furthermore, due to the flexibility enabled by heat storage, heat pumps can draw electricity in hours of no heat demand, such as at night, and hence smooth heat consumption peaks in the morning hours. For larger heat storage sizes, such as 24 and 168 hours, the electric load of heat pumps increasingly resembles the charging of a longer-duration storage asset, sometimes consuming excess electricity for extended periods and avoiding consumption later. While this mode of operation may not be realistic for small, decentralized heat pumps due to the limited potential for low-cost heat storage installations, such operational patterns appear more plausible for centralized heating solutions with larger and lower-cost heat storage options.

_Notes:_ The figure shows values from the baseline scenario with the government rollout. Figure 7: Heatmap of the electricity demand of heat pumps for different heat storage sizes

### 4.4 Heat storage reduces electricity sector costs

Our analysis focuses on the additional electricity sector costs caused by the heat pump expansion. We relate these costs to the additional heating energy provided (Figure 8). More heat pumps lead to additional costs for the electricity sector due to additional investments into generation and storage capacities and higher variable costs. We find that electricity sector costs increase by around four ct/kWh of additional heating energy provided in the government rollout scenario with inflexible heat pumps. This cost effect decreases with larger heat storage sizes. The relative decline in additional costs is largest when moving from no storage to a two-hour storage. With larger heat storage sizes, the decreases become smaller and are minimal between a day (24 hours) and a week (168 hours) of heat storage. This means that the additional value of long-duration storage compared to shorter-duration storage is relatively small in the modeled setting with an 80% renewable share in Germany. In other words, the marginal electricity sector cost savings decrease with larger heat storage.

The cost effects shown in Figure 8 do not include the installation costs of heat pumps and heat storage, but only the costs related to the electricity sector, such as investment and operational expenses of generation and electricity storage capacities. Therefore, we can interpret these figures as the opportunity costs of heat storage. We calculate the break-even overnight investment costs of heat storage that would be required to deliver overall system cost savings. We do so by relating the power sector cost differences between scenarios with different heat storage sizes to the respective storage energy capacity and deriving annualized overnight investment costs. For the latter, we assume that heat storage installations have a lifetime of 20 years, face an interest rate of four percent, and do not incur any variable or fixed operation and maintenance costs.
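To make this annuity logic concrete, the following minimal Python sketch reproduces the break-even calculation described above. The lifetime and interest rate are the values stated in the text; the yearly savings and storage capacity in the usage example are purely hypothetical placeholders, not model results.

```python
# Sketch of the break-even calculation described above (reader-side, not the
# authors' model code). Assumptions from the text: 20-year lifetime, 4%
# interest rate, no variable or fixed O&M costs.

def annuity_factor(interest_rate: float = 0.04, lifetime_years: int = 20) -> float:
    """Factor converting overnight investment costs into equal yearly payments."""
    r, n = interest_rate, lifetime_years
    return r / (1 - (1 + r) ** -n)  # ~0.0736 for r=0.04, n=20

def break_even_overnight_cost(yearly_savings_eur: float,
                              storage_capacity_kwh_th: float) -> float:
    """Overnight heat storage cost [EUR/kWh_th] at which the annualized
    investment exactly offsets the yearly power sector cost savings."""
    return (yearly_savings_eur / storage_capacity_kwh_th) / annuity_factor()

# Hypothetical placeholder numbers (illustration only): 100 million EUR of
# yearly power sector savings attributed to 17 GWh_th of heat storage.
print(f"{break_even_overnight_cost(100e6, 17e6):.0f} EUR/kWh_th")  # -> ~80
```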
As the marginal power sector cost savings decrease with larger heat storage capacities, the specific break-even costs of heat storage decrease even faster. For example, specific heat storage investment costs have to be below around 80 Euro/kWh_th in the government rollout with two hours of heat storage. This break-even cost decreases to 40 Euro/kWh_th in the case with six hours of storage and sharply declines to only about two Euro/kWh_th for a heat storage size of one week. In other words, long-duration heat storage would have to be very cheap. Results are qualitatively similar for the slow and fast rollouts. In the latter, break-even costs are slightly higher at around 94 Euro/kWh_th in the two-hour case, as the additional PV generation in this scenario increases the value of temporal flexibility. While the real-world costs of short-duration heat storage technologies are likely well below the range of two-hour break-even costs determined here, we consider it implausible that storage sizes of a day or more could be realized in combination with decentralized heat pumps at the required costs. Low-cost, long-duration heat storage appears much more plausible in combination with district heating networks and large-scale, centralized heat pumps, which is beyond the scope of our analysis.

_Notes:_ The gray bars depict the additional costs in ct/kWh_th of additional heating energy provided for different rollout scenarios and heat storage sizes (compared to the respective reference, left y-axis). The red lines show the break-even investment costs in Euro/kWh_th of heat storage (compared to the respective scenario without heat storage, right y-axis). Figure 8: Additional electricity sector costs and break-even investment costs of heat storage

### 4.5 Qualitative results also hold in sensitivity analyses

In addition to our scenario runs under baseline assumptions, in which we vary the rollout speed of heat pumps and the heat storage duration, we conduct several sensitivity analyses. These help us judge how strongly our results hinge on certain fundamental model assumptions. We alter the assumptions on the upper bounds of wind power expansion, the level of natural gas prices, and the possibility of electricity generation from coal. We also introduce a week of a stylized “renewable energy drought” with zero wind and solar availability. Table SI.5 provides an overview of all sensitivity analyses. In the following, we summarize the principal takeaways of these sensitivity analyses; an extensive description and discussion of their results, including additional figures, can be found in section SI.3.2.

_Notes:_ The figure shows the optimal dispatch in the reference rollout for different scenario assumptions in the case of a two-hour and a zero-hour heat storage (panel A). Changes relative to the respective reference for a government rollout are shown in panel B. Please note the different y-axis ranges of the different panels. Figure 9: Yearly electricity generation in different sensitivity analyses

In the baseline scenarios, we assume upper limits for on- and offshore wind power capacity expansion in Germany of 115 GW and 30 GW, respectively. This appears to be realistic and policy-relevant from a 2030 perspective: considering real-world constraints related to regulation, land availability, and public acceptance, unbounded wind power capacity expansion seems implausible.
Still, removing these limits (scenario no wind cap) generates complementary insights into a less constrained optimal solution. The results show an increase in onshore wind capacities at the expense of offshore wind and a slight reduction in PV capacity. This leads to higher onshore wind generation and less offshore wind dispatch (Figure 9). The seasonality of heating demand aligns well with wind power, and the additional system costs due to heating decrease slightly compared to the baseline.

Scenarios with higher gas prices (gas100 and gas150) show fewer gas-fired power plants and more solar PV capacities in the reference and the different rollout scenarios. Especially in gas150, additional capacities are mostly solar PV, as offshore wind is already at its limit in the reference. In parallel, dispatch sees reduced gas-fired generation both in the reference and in the rollout scenarios (Figure 9). The system costs per heating unit increase significantly due to more expensive natural gas (Figure SI.9).

Without coal-fired power plants (coal phase-out), gas-fired generation and electricity imports increase in the reference rollout. Yet, the dispatch effects of a heat pump rollout do not substantially differ from the baseline setting. Combining the coal phase-out with higher gas prices leads to effects similar to those in the scenarios gas100 and gas150.

As the share of variable renewable energy increases, the security of supply during prolonged periods with low renewable energy supply becomes an increasing concern [28, 29]. Therefore, we assess how a week of a severe renewable energy drought in Europe would affect our results. To simulate an extreme case of such a week, we artificially set wind and solar PV capacity factors to zero in all modeled countries during one winter week. Such a week-long renewable energy drought (scenario RE drought) requires substantially more firm capacity, which is provided by gas-fired power plants in the reference (Figure SI.8). The effects of a heat pump rollout on electricity generation and storage technologies are also larger than in the baseline, and we see higher cost increases. Yet, the overall yearly dispatch in the reference, and also the dispatch effects of a heat pump rollout, hardly change compared to the baseline.

Overall, our sensitivity analyses indicate that the key insights and results from the baseline scenarios hold true under varying assumptions. The addition of heat pumps, depending on the rollout speed, requires additional investments in offshore wind, solar PV, gas-fired plants, and short-duration storage to meet the renewable energy constraints. Unlimited wind power expansion offers some benefits, but the overall cost reductions are modest.

### 4.6 An ambitious rollout of heat pumps leads to large savings of overall system cost, natural gas use, and carbon emissions

Based on the power sector optimization results, we also examine the effects of different rollout speeds of heat pumps on natural gas usage and carbon emissions. We compare the reference rollout of 1.7 million heat pumps with 4.3 million additional heat pumps in the government rollout scenario and 8.3 million additional heat pumps in the fast rollout scenario. The underlying assumptions for the calculation of gas and emission savings are stated in Table 2. Table 3 summarizes the results.
Table 2: Cost and technical parameters for the savings calculation

| Parameter | | Value |
|---|---|---|
| Overnight investment costs [EUR/kW_th] | Air-sourced heat pumps | 850 |
| | Ground-sourced heat pumps | 1400 |
| | Gas boilers | 296 |
| Efficiencies | Open-cycle gas turbine | 0.4 |
| | Combined-cycle gas turbine | 0.542 |
| | Gas boilers | 0.9 |
| Technical lifetime of heat pumps [years] | | 20 |
| Interest rate | | 0.04 |
| Emission factor [t CO2_eq / MWh_th] | | 0.2 |

Under the assumption that each heat pump replaces one gas boiler with a thermal efficiency of 0.9 (i.e., 1 kWh of natural gas is transformed into 0.9 kWh of heat), additional heat pumps displace around 61 TWh_th of natural gas in the case of a government rollout and 196 TWh_th with a fast rollout (Table 3 and Table SI.4), compared to the reference rollout. At the same time, natural gas usage for electricity generation slightly increases in both scenarios, but this is by far overcompensated by the natural gas savings in the heating sector, leading to total savings of around 178 TWh_th in the fast rollout compared to the reference rollout. For the more moderate government rollout with lower gas prices, overall natural gas savings still amount to around 59 TWh_th. To put these numbers into perspective, 59 (178) TWh of natural gas corresponds to around seven (21) percent of Germany's overall natural gas consumption in 2022, or around a fifth (three-fifths) of private and commercial natural gas demand. In the scenarios with higher natural gas prices of 100 or 150 Euro per MWh, we find largely similar effects on overall natural gas usage.

We observe an increase in overall system cost savings with an increasing number of heat pumps and increasing gas prices. In this calculation, the overall system cost effects include the increase in power sector costs due to higher electricity demand, the total annualized overnight investment costs of the additional heat pumps, the savings in natural gas expenditures, the saved CO2 emission costs of gas heaters, as well as the avoided investment costs of the replaced natural gas boilers. As the investment and installation costs of heat pumps might even fall below the values cited here due to technical progress, our cost saving numbers can be interpreted as a lower bound. Overall system cost savings are between 1.0 and 4.6 billion Euro in the government and fast rollout scenarios, respectively, for a conservative natural gas price assumption of 50 Euro per MWh. Savings increase substantially with higher gas prices, up to 22.2 billion Euro per year in the fast rollout scenario with a gas price of 150 Euro per MWh.

Our overall system cost calculations depend on the assumption that every new heat pump substitutes a new gas boiler, which would otherwise have to be installed. We do not consider retiring existing gas boilers before the end of their lifetime. The extent to which an accelerated heat pump rollout would lead to a replacement of existing gas boilers that have not yet reached the end of their lifetime is unclear due to a lack of data on the age of existing gas boilers in the buildings modeled here. We calculate a counterfactual extreme case that assumes that the gas boilers replaced by heat pumps could have been used for another 20 years, which means we do not consider their investment costs in the calculation. This leads to smaller, but still positive, overall system cost savings below 0.1 billion Euro in the government rollout and 2.5 billion Euro in the fast rollout.
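For illustration, the net gas and emission savings reported in Table 3 below follow directly from the Table 2 parameters; this snippet is a reader-side sanity check under the stated assumptions, not the authors' calculation pipeline.

```python
# Cross-check of Table 3 using the emission factor from Table 2:
# 1 TWh_th of natural gas * 0.2 t CO2_eq/MWh_th = 0.2 million tons CO2_eq.

EMISSION_FACTOR_T_PER_MWH_TH = 0.2  # natural gas (Table 2)

def net_gas_savings_twh_th(displaced_by_heat_pumps: float,
                           extra_power_sector_use: float) -> float:
    """Boiler gas displaced by heat pumps minus extra gas burned for power."""
    return displaced_by_heat_pumps - extra_power_sector_use

def emission_savings_mt(gas_savings_twh_th: float) -> float:
    """Million tons of CO2_eq avoided by the net gas savings."""
    return gas_savings_twh_th * EMISSION_FACTOR_T_PER_MWH_TH

# Government rollout at a gas price of 50 Euro/MWh (values from Table 3):
net = net_gas_savings_twh_th(61.49, 2.58)  # -> 58.91 TWh_th
print(f"{net:.2f} TWh_th, {emission_savings_mt(net):.2f} Mt CO2_eq")  # -> 11.78
```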
The reduced consumption of natural gas correspondingly leads to lower greenhouse gas emissions. In a fast rollout scenario of heat pumps, emission savings in the range of 35-37 million tons of CO2_eq can be expected under the different gas price assumptions. For the government rollout, we can expect emission savings of around 11-12 million tons of CO2_eq. The latter corresponds to around 13% of German households' carbon emissions from buildings in 2022. Hence, an ambitious heat pump rollout, as described in this paper, could make a significant contribution to Germany's carbon emission reductions. A further expansion of heat pumps beyond 2030 would lead to even higher reductions in carbon emissions. Note that the assumed emission factor of natural gas of 0.2 t CO2_eq/MWh_th appears to be a conservative estimate, as it does not take methane leakage within the natural gas supply chain into consideration. Thus, the emission savings from switching to heat pumps presented here can be considered a lower bound and might be substantially higher because of methane leakage.

Table 3: Yearly savings of natural gas, CO2_eq emissions, and costs related to heat pumps

| Gas price [Euro/MWh] | | 50 | | 100 | | 150 | |
|---|---|---|---|---|---|---|---|
| Heat pump rollout | | Gov. | Fast | Gov. | Fast | Gov. | Fast |
| Natural gas displaced by additional heat pumps | TWh_th | -61.49 | -196.31 | -61.49 | -196.31 | -61.49 | -196.31 |
| Additional gas usage for electricity generation | TWh_th | +2.58 | +18.01 | +1.92 | +12.75 | +4.90 | +20.35 |
| Total gas savings | TWh_th | -58.91 | -178.30 | -59.57 | -183.56 | -56.59 | -175.96 |
| Total emissions savings | Mio t CO2_eq | -11.78 | -35.66 | -11.91 | -36.71 | -11.32 | -35.19 |
| Change in overall system costs | Billion EUR | -0.96 | -4.63 | -3.68 | -13.32 | -6.56 | -22.20 |

_Notes:_ Changes and savings are shown relative to the respective reference scenario. Gov. refers to the government rollout scenario of heat pumps.

## 5 Discussion and conclusion

As heat pumps are considered a key technology in the heating transition, their potential future impact on the electricity sector is of interest. We determine the effects of different rollout paths of decentralized heat pumps in Germany, combined with heat storage of different sizes, on the power sector. Under baseline assumptions, we find that an expansion of the German heat pump stock from 1.7 to ten million units would require additional investments of around 54-57 GW of solar PV capacity in a least-cost solution, depending on how much heat storage is available. These results are driven by the assumptions that the additional electricity consumption of heat pumps has to be covered by additional renewable electricity on an annual basis and that the expansion of wind power is limited to 115 GW (onshore) and 30 GW (offshore), respectively. For a slower rollout speed, which still achieves the German government's target of six million heat pumps by 2030, additional PV capacities of around 4-8 GW are needed. Our results suggest a moderate need for additional firm capacity in the form of gas-fired power plants and lithium-ion batteries in most rollout scenarios, which is most pronounced in the fast rollout scenario. More flexible heat pump operation facilitated by heat storage can partially relieve these additional capacity needs, which is in line with previous studies [30, 7]. The European interconnection also helps to integrate heat pumps into the power sector and to limit additional capacity needs.
This is in line with other findings [14], which also highlight the importance of interconnection, as large-scale heat pumps become more competitive if they can make use of renewable surpluses in other countries.

Already a small buffer heat storage with an energy capacity of two hours enables heat pumps to better align their electricity consumption with the residual load. This results in power system cost savings of up to 0.9 ct per kWh of provided heat (or around 20%) compared to a case with inflexible heat pumps. Costs further decrease with increasing heat storage, yet the marginal cost savings strongly decline with larger heat storage sizes. This suggests that heat storage mainly serves to smooth daily renewable energy fluctuations. For two-hour storage, it appears plausible that the costs of installing heat storage remain below the power sector benefits determined here. In contrast, long-duration heat storage would have to be very cheap to break even, which appears more plausible for large-scale thermal storage in district heating systems.

Sensitivity analyses show that the results are generally robust to changes in key scenario assumptions. Assuming unconstrained expansion potentials for wind power substantially reduces solar PV capacity deployment, since wind energy aligns better with heat demand [7], yet barely changes power sector costs. A complete coal phase-out in the electricity sector also does not have major effects on the impacts of accelerated heat pump rollouts on power sector capacities, dispatch, or costs. Higher natural gas prices have more substantial effects and, in particular, lead to higher additional power sector costs of additional heat pumps. Considering a week-long, pan-European renewable energy drought requires more firm capacity overall, and the rollout of heat pumps is accompanied by substantially higher solar PV investments in this case.

We further find that an accelerated replacement of gas boilers with heat pumps can bring yearly natural gas savings between around 57 and 184 TWh_th, depending on the rollout speed and gas prices, and already accounting for increased gas usage in the electricity sector. For instance, in a fast rollout to ten million units in 2030, the additional heat pumps could save more than half of the private and commercial natural gas demand in Germany, which corroborates related findings [6]. Overall yearly system cost savings depend, among other factors, on the natural gas price and range from around 1 to 22 billion Euro for different natural gas price assumptions. CO2 emissions decrease by around 11-37 million tons per year, corresponding to around 13-43% of German households' carbon emissions from buildings in 2022.

As with any model-based analysis, our study has limitations. For example, we implicitly assume perfect distribution and transmission grids within countries, which neglects any kind of grid congestion caused by heat pumps. In some distribution grid settings, the effect of heat pumps on grid congestion may be more severe than the impacts on system-wide generation capacities and dispatch modeled here. We also note that the hourly heat demand profiles used in our study are smoother than empirically measured heat pump operation patterns from the U.K. [31, 32]. A more “peaky” future heat demand could lead to higher load peaks and smaller flexibility potentials of heat pumps, which merits further investigation in future work.
In addition, our heat demand time series follow a synthetic test reference year approach, while the renewable electricity generation profiles and ambient temperatures come from actual weather years. This may lead to underestimating the system challenges in situations where very low renewable availability coincides with very low ambient temperatures and, accordingly, high heat demand. It is therefore advisable to use consistent electric and heat load data from the same weather years in future work. In addition, our approach of exogenously fixing the bioenergy capacity may lead to an underestimation of its flexibility potential. Without increasing the overall use of bioenergy, its conversion into electricity could become more concentrated in fewer hours to better complement variable wind and solar power. This would require a higher installed generation capacity (with lower full-load hours), as well as appropriate storage of biomass or biogas. Furthermore, Germany is not the only country pushing for an accelerated rollout of heat pumps. While we assume inflexible heat pumps outside Germany, future work could analyze rollouts in the whole of Europe in more detail to obtain more comprehensive insights into a wider European heating transition. Finally, our assumption of balanced charging for electric vehicles may not reflect reality. As the electric vehicle market evolves and charging infrastructure develops, there may be significant changes in charging behavior and the adoption of smart and bidirectional charging technologies, which may decrease the value of the flexibility provided by heat storage. Future research should explore more detailed modeling approaches that account for such potential changes. Likewise, smartly charged electric trucks could also be considered in future work [18].

Our results show that even relatively small heat storage capacities may already have substantially positive power system effects. While heat pumps in this analysis were operated either entirely inflexibly (with no heat storage) or in a perfectly system-oriented manner (with heat storage), future research could analyze the effects of other, and potentially more realistic, operating behaviors. Further, flexibly operating heat pumps requires the right incentives for consumers. Hence, from a policy perspective, it is important to make sure that electricity consumption can be measured and controlled on a continuous basis (sometimes referred to as “smart metering”) and that consumers have the possibility to choose electricity tariffs that reflect the dynamics of wholesale electricity markets. While very large heat storage sizes appear unrealistic for decentralized heat pumps, our results still serve as an indication of how larger, centralized heating systems with long-duration heat storage could operate.

In summary, we find the power sector impacts of an accelerated heat pump rollout in Germany to be moderate and manageable, even under the assumption that the electric load from heat pumps has to be met by a corresponding yearly increase in renewable electricity generation. If wind energy expansion is restricted, additional solar PV capacity can be deployed instead without substantially increasing the overall system costs, facilitated by the European interconnection. In general, operating heat pumps in a temporally flexible manner entails substantial power sector benefits.
Even relatively small heat storage already helps to reduce the additional needs for firm capacities or electricity storage induced by heat pumps and lowers power sector costs. To sum up, operating heat pumps in a temporally flexible manner is not strictly a “must-have” in the power sector modeled here, but it emerges as a desirable feature of the energy transition.

## Acknowledgments

We thank our colleague Adeline Guéret for supporting the calculations described in section 4.6. We further thank various colleagues of the Ariadne project for feedback on an earlier draft. We thank the participants of the IAEE 2023 Conference Milano, the Smart Energy Systems - International conference 2023 Copenhagen, and the Strommarkttreffen 2024/01 Cologne for their constructive feedback. We gratefully acknowledge financial support from the German Federal Ministry of Education and Research (BMBF) via the Kopernikus project Ariadne (FKZ 03SFK5N0, FKZ 03SFK5N0-2), as well as from the Federal Ministry of Labour and Social Affairs (BMAS) via the project FIS (FIS.03.00016.21).

## Author contributions

Conceptualization: AR, WS; Methodology: AR, CG, DK, WS; Software: AR, CG; Formal analysis: AR, DK, WS; Investigation: AR; Data Curation: AR, CG, DK, WS; Writing - Original Draft: AR, DK; Writing - Review & Editing: WS; Visualization: AR; Supervision: WS; Funding acquisition: WS

## Data availability

All input and results data used in this paper can be accessed here: https://gitlab.com/diw-evu/projects/heatpumps_2030

## Code availability

The model and analysis code used in this paper can be accessed here: https://gitlab.com/diw-evu/projects/heatpumps_2030

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## References

* [1] IEA “The Future of Heat Pumps”, 2022 URL: https://www.iea.org/reports/the-future-of-heat-pumps
* [2] Alexander Roth and Wolf-Peter Schill “Renewable heat” In _Open Energy Tracker_, 2023 URL: https://openenergytracker.org/en/docs/germany/heat/
* [3] Alexander Zerrahn and Wolf-Peter Schill “Long-run power storage requirements for high shares of renewables: review and a new model” In _Renewable and Sustainable Energy Reviews_ 79 Elsevier, 2017, pp. 1518–1534 DOI: 10.1016/j.rser.2016.11.098
* [4] Wolf-Peter Schill and Alexander Zerrahn “Long-run power storage requirements for high shares of renewables: Results and sensitivities” In _Renewable and Sustainable Energy Reviews_ 83 Elsevier, 2018, pp. 156–171 DOI: 10.1016/j.rser.2017.05.205
* [5] Carlos Gaete-Morales, Hendrik Kramer, Wolf-Peter Schill and Alexander Zerrahn “An open tool for creating battery-electric vehicle time series from empirical data, emobpy” In _Scientific Data_ 8.1, 2021, pp. 152 DOI: 10.1038/s41597-021-00932-9
* [6] Pietro P. Altermatt et al. “Replacing gas boilers with heat pumps is the fastest way to cut German gas consumption” In _Communications Earth & Environment_ 4.1, 2023, pp. 56 DOI: 10.1038/s43247-023-00715-7
* [7] Oliver Ruhnau, Lion Hirth and Aaron Praktiknjo “Heating with wind: Economics of heat pumps and variable renewables” In _Energy Economics_ 92, 2020, pp. 104967 DOI: 10.1016/j.eneco.2020.104967
* [8] Yi-kuang Chen, Ida Græsted Jensen, Jon Gustav Kirkerud and Torjus Folsland Bolkesjø “Impact of fossil-free decentralized heating on northern European renewable energy deployment and the power system” In _Energy_ 219, 2021, pp. 119576 DOI: 10.1016/j.energy.2020.119576
* [9] Jesus Lizana et al.
“A national data-based energy modelling to identify optimal heat storage capacity to support heating electrification” In _Energy_ 262, 2023, pp. 125298 DOI: 10.1016/j.energy.2022.125298
* [10] Karsten Hedegaard and Marie Münster “Influence of individual heat pumps on wind power integration – Energy system investments and operation” In _Energy Conversion and Management_ 75, 2013, pp. 673–684 DOI: 10.1016/j.enconman.2013.08.015
* [11] Georgios Papaefthymiou, Bernhard Hasche and Christian Nabe “Potential of Heat Pumps for Demand Side Management and Wind Power Integration in the German Electricity Market” In _IEEE Transactions on Sustainable Energy_ 3.4, 2012, pp. 636–642 DOI: 10.1109/TSTE.2012.2202132
* [12] Wolf-Peter Schill and Alexander Zerrahn “Flexible electricity use for heating in markets with renewable energy” In _Applied Energy_ 266, 2020, pp. 114571 DOI: 10.1016/j.apenergy.2020.114571
* [13] David Kröger, Jan Peper and Christian Rehtanz “Electricity market modeling considering a high penetration of flexible heating systems and electric vehicles” In _Applied Energy_ 331, 2023, pp. 120406 DOI: 10.1016/j.apenergy.2022.120406
* [14] Christiane Bernath, Gerda Deac and Frank Sensfuß “Influence of heat pumps on renewable electricity integration: Germany in a European context” In _Energy Strategy Reviews_ 26, 2019, pp. 100389 DOI: 10.1016/j.esr.2019.100389
* [15] Simon Hilpert “Effects of Decentral Heat Pump Operation on Electricity Storage Requirements in Germany” In _Energies_ 13.11, 2020, pp. 2878 DOI: 10.3390/en13112878
* [16] Alexander Roth and Wolf-Peter Schill “Geographical balancing of wind power decreases storage needs in a 100% renewable European power sector” In _iScience_ 26.7 Elsevier, 2023 DOI: 10.1016/j.isci.2023.107074
* [17] Dana Kirchem and Wolf-Peter Schill “Power sector effects of green hydrogen production in Germany” In _Energy Policy_ 182 Elsevier, 2023, pp. 113738 DOI: 10.1016/j.enpol.2023.113738
* [18] Carlos Gaete-Morales, Julius Jöhrens, Florian Heining and Wolf-Peter Schill “Power sector effects of alternative options for de-fossilizing heavy-duty vehicles–Go electric, and charge smartly” In _Cell Reports Sustainability_ 1, 2024, pp. 100123 DOI: 10.1016/j.crsus.2024.100123
* [19] Peter Remmen et al. “TEASER: an open tool for urban energy modelling of building stocks” In _Journal of Building Performance Simulation_ 11.1, 2018, pp. 84–98 DOI: 10.1080/19401493.2017.1283539
* [20] Dirk Müller et al. “AixLib-An open-source modelica library within the IEA-EBC annex 60 framework” In _BauSIM 2016_, 2016, pp. 3–9
* [21] SIA “Standard-Nutzungsbedingungen für die Energie-und Gebäudetechnik. Merkblatt” In _Zürich: Swiss Society of Engineers and Architects_, 2006
* [22] Matteo De Felice “ENTSO-E Pan-European Climatic Database (PECD 2021.3) in Parquet format” Zenodo, 2022 DOI: 10.5281/zenodo.7224854
* [23] Wolf-Peter Schill, Alexander Roth, Adeline Guéret and Felix Schmidt “Mixed Mid-Term Review for German Traffic Light Coalition in the Energy Transition; Significant Effort Needed to Achieve Targets”, 2023 URL: https://www.diw.de/de/diw_01.c.888280.de/publikationen/diw_focus/2023_0010/mixed_mid-term_review_for_german_traffic_light_coalition_in___rgy_transition__significant_effort_needed_to_achieve_targets.html
* [24] ENTSOE “TYNDP 2018.
Project Sheets”, 2018 URL: https://tyndp.entsoe.eu/tyndp2018/projects/projects
* [25] Bundesministerium für Wirtschaft und Klimaschutz (BMWK) “Fortschreibung der Nationalen Wasserstoffstrategie”, 2023
* [26] Martin Kittel, Dana Kirchem, Wolf-Peter Schill and Claudia Kemfert “National hydrogen strategy: Clear focus and consistent implementation necessary” In _DIW Weekly Report_ 13.40/42 Berlin: Deutsches Institut für Wirtschaftsforschung (DIW), 2023, pp. 269–278 DOI: 10.18723/diw_dwr:2023-40-1
* [27] Robert Pietzcker et al. “Notwendige CO2-Preise zum Erreichen des europäischen Klimaziels 2030”, 2021 DOI: 10.48485/PIK.2021.007
* [28] Damien Raynaud, Benoit Hingray, Baptiste François and Jean Dominique Creutin “Energy droughts from variable renewable energy sources in European climates” In _Renewable Energy_ 125 Elsevier, 2018, pp. 578–589 DOI: 10.1016/j.renene.2018.02.130
* [29] Martin Kittel and Wolf-Peter Schill “Measuring the Dunkelflaute: How (not) to analyze variable renewable energy shortage”, 2024 DOI: 10.48550/arXiv.2402.06758
* [30] Brecht Baeten, Frederik Rogiers and Lieve Helsen “Reduction of heat pump induced peak electricity use and required generation capacity through thermal energy storage and demand response” In _Applied Energy_ 195, 2017, pp. 184–195 DOI: 10.1016/j.apenergy.2017.03.055
* [31] S.D. Watson, K.J. Lomas and R.A. Buswell “How will heat pumps alter national half-hourly heat demands? Empirical modelling based on GB field trials” In _Energy and Buildings_ 238, 2021, pp. 110777 DOI: 10.1016/j.enbuild.2021.110777
* [32] Oliver Ruhnau, Lukas Lundström, Linus Dürr and Florian Hunecke “Empirical weather dependency of heat pump load: Disentangling the effects of heat demand and efficiency” In _2023 19th International Conference on the European Energy Market (EEM)_, 2023, pp. 1–5 DOI: 10.1109/EEM58374.2023.10161914
* [33] Carlos Gaete-Morales “emobpy: application for the German case” Zenodo, 2021 DOI: 10.5281/ZENODO.4514928
* [34] Alexander Zerrahn, Wolf-Peter Schill and Claudia Kemfert “On the economics of electrical storage for variable renewable energy sources” In _European Economic Review_ 108, 2018, pp. 259–279 DOI: 10.1016/j.euroecorev.2018.07.004
* [35] Bundesnetzagentur “Genehmigung des Szenariorahmens 2019-2030”, 2018 URL: https://www.netzentwicklungsplan.de/sites/default/files/paragraphs-files/Szenariorahmen_2019-2030_Genehmigung_0_0.pdf

## Appendix SI Supplemental Information

### SI.1 Additional information on the model

This section provides additional information regarding the model we use for our analysis.

#### SI.1.1 Heat pumps

Following [12], we model the coefficient of performance (COP) of heat pumps as follows:

$\text{COP}^{\text{hp}}_{h}=\eta^{\text{hp}}\frac{\text{temp}^{\text{sink}}+273.15^{\circ}\text{C}}{\text{temp}^{\text{sink}}-\text{temp}_{h}^{\text{source}}}$ (1)

We generally assume a sink temperature of 50°C. For ground-source heat pumps, we assume a constant source temperature $\text{temp}_{h}^{\text{source}}$ of 10°C for all hours. Assuming a dynamic efficiency parameter $\eta^{\text{ground}}$ of $0.45$, this renders a COP of 3.64 for ground-sourced heat pumps. For air-source heat pumps, the source temperature varies with the hourly ambient temperature. Using an efficiency parameter $\eta^{\text{air}}$ of $0.35$, this renders a COP of 2.83 in hours with an ambient temperature of 10°C, and a COP of 2.26 at 0°C.
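These COP values follow directly from Equation (1); a minimal sketch, using the efficiency parameters and the 50°C sink temperature stated above:

```python
# COP according to Equation (1): eta * T_sink[K] / (T_sink - T_source).
def cop(temp_sink_c: float, temp_source_c: float, eta: float) -> float:
    return eta * (temp_sink_c + 273.15) / (temp_sink_c - temp_source_c)

print(round(cop(50, 10, 0.45), 2))  # ground-source, constant 10 C source -> 3.64
print(round(cop(50, 10, 0.35), 2))  # air-source at 10 C ambient -> 2.83
print(round(cop(50, 0, 0.35), 2))   # air-source at 0 C ambient -> 2.26
```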
Compared to state-of-the-art heat pumps, these values may be considered conservative, resulting in inflated electricity consumption of heat pumps. Yet, we assume that heat pumps are also installed in older buildings where heating systems may require higher sink temperatures, which would decrease the COP, especially in the fast rollout scenario. As we assume a constant sink temperature of 50°C in all building types, this makes our COP assumptions appear less conservative.

#### SI.1.2 Electric vehicles

The analysis includes battery electric vehicle (BEV) time series created with the emobpy tool [5]. The underlying dataset [33] was created using data from the “Mobilität in Deutschland” survey, distinguishing between commuter and spontaneous drivers and incorporating various factors such as trip frequencies, distances, trip durations, departure times, charging station availability, and charging strategies, as well as the use of popular BEV models. The dataset encompasses multiple charging strategies. For this research, we have selected the “immediate-balanced” approach to reflect the electricity drawn from the grid. Under this charging strategy, the vehicles' batteries are charged upon arrival at charging stations, with a constant and often lower power rating than that of the charging station. This approach ensures that the BEV reaches a 100% state of charge just before commencing the next trip. The selected time series are scaled to represent the demand of 15 million battery electric vehicles, with an annual electricity demand of 36 TWh. Figure SI.1 depicts the electricity demand of BEV in an exemplary week. For more information regarding the construction of the BEV demand time series, we refer to previous work [33, 5].

Figure SI.1: Hourly average electricity demand of 15 million BEV for a representative week.

#### SI.1.3 Green hydrogen

In the analysis, the production of green hydrogen is only foreseen in Germany and modeled following the approach described in [34]. We assume that a given hydrogen demand $h2^{demand}$ of 28 TWh has to be covered by electrolysis over the course of a year (Equation 2). We assume a temporally flexible hydrogen demand and unlimited hydrogen storage. The electrolysis capacity is exogenously set to 10 GW in Germany (Equation 4). The conversion factor of electrolysis is 71%; hence, one kilowatt-hour (kWh) of electricity is transformed into 0.71 kWh of hydrogen.

$\displaystyle h2^{demand}=\sum_{h}H2^{prod}_{h}$ (2)
$\displaystyle H2^{prod}_{h}=H2^{elec}_{h}\times 0.71$ (3)
$\displaystyle H2^{elec}_{h}\leq INV^{H2}$ (4)
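A quick cross-check, consistent with the sector coupling figures in the main text, can be derived from Equations (2)-(4):

```python
# Reader-side cross-check of Equations (2)-(4), not part of the model code.
H2_DEMAND_TWH = 28.0   # yearly hydrogen demand (Equation 2)
EFFICIENCY = 0.71      # electricity-to-hydrogen conversion factor (Equation 3)
CAPACITY_GW = 10.0     # exogenous electrolysis capacity (Equation 4)

electricity_twh = H2_DEMAND_TWH / EFFICIENCY           # ~39.4 TWh of electricity
full_load_hours = electricity_twh * 1e3 / CAPACITY_GW  # ~3944 hours per year

# Matches the ~39 TWh of electricity demand and ~4000 full-load hours stated
# in the main text.
print(f"{electricity_twh:.1f} TWh el, {full_load_hours:.0f} full-load hours")
```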
### SI.2 Additional information on input data

In this section, we present the input data to our model in more detail. Table SI.1 provides information on the different building archetypes and their respective heating energy demand.

Table SI.1: Building archetypes and heating energy demand assumptions for Germany in 2030

| Type | Year of construction | Buildings [million] | Annual heating energy demand [kWh/m2] | Floor area [million m2] | Reference (air) | Reference (ground) | Slow (air) | Slow (ground) | Government (air) | Government (ground) | Fast (air) | Fast (ground) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| _One- & two-family houses_ | | | | | | | | | | | | |
| 1 | Before 1957 | 1.41 | 276 | 247 | 0.008 | 0.002 | 0.008 | 0.002 | 0.008 | 0.002 | 0.2768 | 0.0692 |
| 3 | 1958-1978 | 2.46 | 203 | 431 | 0.008 | 0.002 | 0.008 | 0.002 | 0.008 | 0.002 | 0.2768 | 0.0692 |
| 5 | 1979-1994 | 2.55 | 153 | 446 | 0.0136 | 0.0034 | 0.0136 | 0.0034 | 0.0136 | 0.0034 | 0.72 | 0.18 |
| 6 | 1995-2009 | 3.02 | 112 | 528 | 0.0488 | 0.0122 | 0.38288 | 0.09572 | 0.60048 | 0.15012 | 0.72 | 0.18 |
| 7 | 2010-2019 | 1.75 | 66 | 306 | 0.272 | 0.068 | 0.272 | 0.068 | 0.72 | 0.18 | 0.72 | 0.18 |
| 9 | After 2019 | 2.15 | 15 | 375 | 0.272 | 0.068 | 0.272 | 0.068 | 0.72 | 0.18 | 0.72 | 0.18 |
| _Multifamily houses_ | | | | | | | | | | | | |
| 2 | Before 1957 | 0.34 | 223 | 170 | 0.0104 | 0.0026 | 0.0104 | 0.0026 | 0.0104 | 0.0026 | 0.0104 | 0.0026 |
| 4 | 1958-1978 | 0.64 | 164 | 322 | 0.0104 | 0.0026 | 0.0104 | 0.0026 | 0.0104 | 0.0026 | 0.0104 | 0.0026 |
| 6 | 1979-1994 | 0.46 | 130 | 230 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | 1995-2009 | 0.47 | 103 | 239 | 0.0112 | 0.0028 | 0.0112 | 0.0028 | 0.0112 | 0.0028 | 0.0112 | 0.0028 |
| 10 | 2010-2019 | 0.36 | 51 | 181 | 0.128 | 0.032 | 0.128 | 0.032 | 0.128 | 0.032 | 0.128 | 0.032 |
| 12 | After 2019 | 0.46 | 11 | 232 | 0.128 | 0.032 | 0.128 | 0.032 | 0.128 | 0.032 | 0.128 | 0.032 |

_Notes:_ The last eight columns give the shares of heat pumps [%] (air- and ground-sourced) per rollout scenario.

Figure SI.2 depicts the heat demand in kWh/m2 for different housing classes, depending on the hour of the day and the day of the year.

_Notes:_ The number above every panel refers to the building archetypes, as defined in Table SI.1. Figure SI.2: Space heating demand

Table SI.2 provides an overview of the capacity assumptions and bounds of the different technologies in all countries.
Table SI.2: Assumptions on capacity bounds [GW]

| Technology | Germany | Austria | Belgium | Switzerland | Czech Republic | Denmark | France | Italy | Luxembourg | Netherlands | Poland |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Run-of-river hydro | 3.93 | 6.38 | 0.15 | 4.22 | 0.43 | 0.00 | 13.60 | 7.03 | 0.04 | 0.04 | 0.37 |
| Nuclear | 0.00 | 0.00 | 0.00 | 1.19 | 4.04 | 0.00 | 58.21 | 0.00 | 0.00 | 0.49 | 0.00 |
| Lignite | 0.00-9.30 (0.00) | 0.00 | 0.00 | 0.00 | 0.00-3.89 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00-6.32 |
| Hard coal | 0.00-9.80 (0.00) | 0.00 | 0.00-0.62 | 0.00 | 0.00-0.37 | 0.00-0.77 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00-9.88 |
| Natural gas (CCGT) | 0.00-$\infty$ | 0.00-2.82 | 0.00-7.61 | 0.00 | 0.00-1.35 | 0.00 | 0.00-6.55 | 0.00-38.67 | 0.00 | 0.00-8.65 | 0.00-5.00 |
| Natural gas (OCGT) | 0.00-$\infty$ | 0.00-0.59 | 0.00-1.08 | 0.00 | 0.00 | 0.00 | 0.00-0.88 | 0.00-5.40 | 0.00 | 0.00-0.64 | 0.00 |
| Oil | 0.00-1.20 | 0.00-0.17 | 0.00 | 0.00 | 0.00-0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| Other | 0.00-4.10 | 0.95 | 1.32 | 0.89 | 1.23 | 0.24 | 1.87 | 5.99 | 0.03 | 3.77 | 6.82 |
| Bioenergy | 6.00 | 0.60 | 0.21 | 1.20 | 1.06 | 0.67 | 2.56 | 4.93 | 0.05 | 0.54 | 1.41 |
| Onshore wind | 56.00-115.00 ($\infty$) | 10.00 | 5.93 | 1.25 | 3.00 | 5.48 | 44.11 | 19.05 | 0.35 | 8.30 | 11.28 |
| Offshore wind | 7.77-30.00 ($\infty$) | 0.00 | 4.30 | 0.00 | 0.00 | 4.78 | 3.00 | 0.60 | 0.00 | 6.72 | 0.90 |
| Solar PV | 59.00-$\infty$ | 15.00 | 13.92 | 11.00 | 10.50 | 4.75 | 42.63 | 49.33 | 0.25 | 15.46 | 12.19 |
| Lithium-ion batteries: power in/out | 0-$\infty$/0-$\infty$ in all countries | | | | | | | | | | |
| Lithium-ion batteries: energy [GWh] | 0-$\infty$ in all countries | | | | | | | | | | |
| Power-to-gas-to-power: power in/out | 0-$\infty$/0-$\infty$ in all countries | | | | | | | | | | |
| Power-to-gas-to-power: energy [GWh] | 0-$\infty$ in all countries | | | | | | | | | | |
| Open pumped hydro storage: power in/out | 1.86/2.14 | 5.33/5.61 | 0.00/0.00 | 1.89/2.46 | 0.60/0.65 | 0.00/0.00 | 1.85/1.85 | 2.22/3.62 | 0.00/0.00 | 0.00/0.00 | 0.17/0.22 |
| Open pumped hydro storage: energy [GWh] | 471.23 | 1746.66 | 0.00 | 1194.00 | 2.95 | 0.00 | 90 | 289.99 | 0.00 | 0.00 | 1.31 |
| Closed pumped hydro storage: power in/out | 7.17/7.01 | 0.45/0.45 | 1.23/1.31 | 1.90/1.90 | 0.64/0.69 | 0.00/0.00 | 1.95/1.95 | 4.17/4.17 | 0.00/0.00 | 0.00/0.00 | 1.49/1.33 |
| Closed pumped hydro storage: energy [GWh] | 391.59 | 3.60 | 5.80 | 56.00 | 3.70 | 0.00 | 10.00 | 61.20 | 0 | 0 | 6.35 |
| Reservoirs: power out | 0.82 | 2.80 | 0.00 | 8.53 | 0.52 | 0.00 | 9.8 | 8.77 | 0.00 | 0.00 | 0.42 |
| Reservoirs: energy [TWh] | 0.24 | 0.77 | 0.00 | 7.91 | 0.01 | 0.00 | 10.00 | 5.57 | 0.00 | 0.00 | 0.001 |
| Electrolysis | 10.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |

_Notes:_ Based on Bundesnetzagentur [35] and ENTSO-E [24]. If two numbers are connected with a hyphen, the model can choose endogenously in that range; otherwise, the value is fixed.
All values refer to the baseline assumption. In brackets, numbers are provided for the sensitivity analyses. All numbers are provided in GW, except for storage energy, which is given in GWh or TWh.

Table SI.3 provides an overview of the cost assumptions of the model.

Table SI.3: Cost and technology parameters

(a) Electricity storage

| Technology | Interest rate | Lifetime [years] | Availability | Overnight costs: energy [1000 EUR] | Overnight costs: charging power [1000 EUR] | Overnight costs: discharging power [1000 EUR] | Efficiency: charging | Efficiency: discharging | Marginal costs: charging [EUR] | Marginal costs: discharging [EUR] |
|---|---|---|---|---|---|---|---|---|---|---|
| Li-ion battery | 0.04 | 20 | 0.98 | 142 | 80 | 80 | 0.96 | 0.96 | 0.5 | 0.5 |
| Pumped hydro | 0.04 | 80 | 0.89 | 10 | 550 | 550 | 0.97 | 0.91 | 0.5 | 0.5 |
| Power-to-gas-to-power | 0.04 | 25 | 0.95 | 2 | 550 | 435 | 0.73 | 0.42 | 0.5 | 0.5 |

(b) Electricity generation

| Technology | Interest rate | Lifetime [years] | Availability | Overnight costs [1000 EUR] | Fixed costs [1000 EUR] | Efficiency | Carbon content [t/MWh] | Fuel costs [EUR/MWh] |
|---|---|---|---|---|---|---|---|---|
| Run-of-river | 0.04 | 50 | 1.00 | 3,000 | 30 | 0.90 | 0.00 | 0 |
| Nuclear | 0.04 | 40 | 0.91 | 6,000 | 30 | 0.34 | 0.00 | 3.4 |
| Lignite | 0.04 | 35 | 0.95 | 1,500 | 30 | 0.38 | 0.40 | 5.5 |
| Hard coal | 0.04 | 35 | 0.96 | 1,300 | 30 | 0.43 | 0.34 | 8.3 |
| CCGT | 0.04 | 25 | 0.96 | 800 | 20 | 0.54 | 0.20 | 30.0 |
| OCGT | 0.04 | 25 | 0.95 | 400 | 15 | 0.40 | 0.20 | 30.0 |
| Oil | 0.04 | 25 | 0.90 | 400 | 6.7 | 0.35 | 0.27 | 29.0 |
| Other | 0.04 | 30 | 0.90 | 1,500 | 30 | 0.35 | 0.35 | 18.1 |
| Bioenergy | 0.04 | 30 | 1.00 | 1,951 | 100 | 0.49 | 0.00 | 32.5 |
| Wind onshore | 0.04 | 25 | 1.00 | 1,182 | 35 | 1.00 | 0.00 | 0 |
| Wind offshore | 0.04 | 25 | 1.00 | 2,506 | 100 | 1.00 | 0.00 | 0 |
| Solar photovoltaic | 0.04 | 25 | 1.00 | 400 | 25 | 1.00 | 0.00 | 0 |

### SI.3 Additional results

#### SI.3.1 Baseline

In the following, we show further results of our baseline scenario runs. Figure SI.3 shows the complete optimal capacity choices of the model. The effects of additional heat storage of heat pumps on optimal battery storage energy capacities (lower row of panels, Figure SI.3) are pronounced. Inflexible heat pumps lead to an increased need for storage energy; battery energy storage increases by eight GWh in the government rollout (panel G) and by 49 GWh in the fast rollout (panel H). With a two-hour heat storage, this finding reverses, and the optimal energy capacity of batteries decreases by 17 GWh or 25 GWh, respectively. Beyond a two-hour heat storage, the effects are smaller. The results presented in Figure SI.3 show that especially short-duration electricity storage and heat buffer storage are substitutes.

_Notes:_ Absolute values in the reference scenarios (panels A and E) and changes to the respective reference scenarios depending on the rollout scenario and the size of heat pump storage. Values for Germany are depicted. Figure SI.3: Capacity investments under baseline assumptions

Figure SI.4 depicts the complete dispatch results for Germany, including all references and renewable curtailment.

_Notes:_ Absolute values in the reference scenarios (panels A, E, and I) and changes to the respective reference scenarios depending on the rollout scenario and the size of heat pump storage. Values for Germany are depicted. Figure SI.4: Total electricity generation under baseline assumptions

Figure SI.5 depicts the aggregated capacities in other countries and how they change with a heat pump rollout in Germany.
The introduction of heat pumps in Germany does not affect the aggregated power plant and storage portfolio of other countries on a large scale. In terms of power (discharge) capacities (upper row of Figure SI.5), the introduction leads only to minor changes, and even to reductions in battery discharge capacities for some heat storage sizes. Regarding storage energy, we see capacity decreases and increases, especially for long-duration storage; however, there is no consistent pattern suggesting that the expansion of heat pumps in Germany is mainly supported by generation and storage capacities outside Germany.

_Notes:_ Results are shown for the scenario under baseline assumptions with a fast rollout. Figure SI.5: Capacity changes in other countries due to an introduction of heat pumps in Germany

Figure SI.6 shows the profiles of heat output and electricity demand of heat pumps (for different heat storage sizes) and the residual load. The decoupling of heat output/demand and electricity consumption of heat pumps is easy to see, even for small heat storage sizes. The heat pumps aim to align their consumption with the hours of lowest residual demand, i.e., often around midday when the generation of electricity from PV is high.

_Notes:_ The figure shows the residual load, the heat output of heat pumps, and their electricity demand for different heat storage sizes in the baseline setting with a fast rollout. Figure SI.6: Heat output, heat pump electric load with different storage sizes, and residual load

Figure SI.7 shows the impact of additional heat pumps on residual load duration curves and how heat storage changes these. The peak load-increasing effect of heat pumps can be clearly seen when moving from the reference rollout to more ambitious rollouts. The peak load-reducing effect of heat storage is especially pronounced in the government and fast rollout scenarios.

_Notes:_ The figure shows residual load duration curves in Germany, depending on the rollout speed of heat pumps (columns) and their heat storage size (color). A residual load duration curve depicts hourly residual loads (load minus generation of renewable energies), sorted in descending order. Figure SI.7: Residual load duration curves (first 50 hours)

Table SI.4 provides an overview of the full results of natural gas and emissions savings.

Table SI.4: Full results of natural gas and emissions savings

| Gas price [Euro/MWh] | | 50 | | 100 | | 150 | |
|---|---|---|---|---|---|---|---|
| Heat pump rollout | | Gov. | Fast | Gov. | Fast | Gov. | Fast |
| Total gas displaced by heat pumps | TWh_th | -82.18 | -217.00 | -82.18 | -217.00 | -82.18 | -217.00 |
| Change in gas displaced by heat pumps | TWh_th | -61.49 | -196.31 | -61.49 | -196.31 | -61.49 | -196.31 |
| Total electricity produced from gas | TWh | 219.59 | 227.87 | 76.88 | 82.86 | 70.72 | 78.97 |
| Change in electricity produced from gas | TWh | +1.33 | +9.61 | +1.00 | +6.97 | +0.00 | +10.81 |
| Total gas use for electricity generation | TWh_th | 407.44 | 422.86 | 145.27 | 156.10 | 131.44 | 146.89 |
| Change in gas use for electricity | TWh_th | +2.58 | +18.01 | +1.92 | +12.75 | +4.90 | +20.35 |
| Gas savings | TWh_th | -58.91 | -178.30 | -59.57 | -183.56 | -56.59 | -175.96 |
| Emission savings | Mio t CO2_eq | -11.78 | -35.66 | -11.91 | -36.71 | -11.32 | -35.19 |
| Cost savings | Billion Euro | -0.96 | -4.63 | -3.68 | -13.32 | -6.56 | -22.20 |

_Notes:_ The changes shown in the table are calculated with respect to the corresponding reference scenario (with the same natural gas price).
#### SI.3.2 Sensitivity analyses

In addition to our baseline scenario runs, in which we vary the rollout speed of heat pumps and the heat storage duration, we conduct several sensitivity analyses. These help us judge how strongly our results hinge on certain fundamental model assumptions. Table SI.5 provides an overview of all sensitivity analyses conducted.

Table SI.5: Overview of sensitivity analyses

| | Scenario acronym | Description |
|---|---|---|
| 1 | no wind cap | No upper bound on on- and offshore wind capacities in Germany. |
| 2 | gas100 | Natural gas price at 100 Euro per MWh. |
| 3 | gas150 | Natural gas price at 150 Euro per MWh. |
| 4 | coal phase-out | No coal-fired plants allowed in Germany. |
| 5 | coal phase-out + gas100 | Combination of 2 and 4. |
| 6 | coal phase-out + gas150 | Combination of 3 and 4. |
| 7 | RE drought | All renewable energy capacity factors of all countries set to zero in one winter week. |
| 8 | RE drought + coal phase-out | Combination of 4 and 7. |

In the following, we briefly present the different sensitivity analyses and discuss their results in terms of capacity investments (Figure SI.8), dispatch (Figure 9), and additional system costs per heating energy provided (Figure SI.9).

_Notes:_ The figure shows optimal capacities in the reference rollout for different scenario assumptions in the case of flexible heat pumps (with two-hour storage) and inflexible heat pumps (with no storage) (panel A). Changes to the respective reference for a government rollout are shown in panel B. Panels C and D illustrate the respective results for storage energy. Please note the different y-axis ranges of the different panels. Figure SI.8: Capacity investments in different sensitivity analyses

##### No capacity expansion limit of wind energy (no wind cap)

In the baseline scenarios, we assume upper limits for on- and offshore wind power capacity expansion in Germany of 115 GW and 30 GW, respectively. This appears to be realistic and policy-relevant for 2030: given real-world constraints related to regulation, land availability, and public acceptance, unbounded wind power capacity expansion seems not feasible. However, in a sensitivity analysis, we drop this upper limit so that the expansion of on- and offshore wind power capacities is unconstrained, and we assess what a less constrained optimal solution looks like.

The removal of the upper bound for wind power leads to overall higher wind capacities and slightly lower PV capacity expansion, already in the reference rollout scenario (Figure SI.8, panel A). In particular, offshore wind is substituted by onshore wind due to its lower costs. These changes correspond with a higher yearly generation of onshore wind energy in the reference rollout scenario (Figure 9, panel A) compared to the baseline scenario. Given this reference, an additional rollout of heat pumps leads to a substantial expansion of onshore wind capacities, yet far fewer additional PV capacities (Figure SI.8, panel B) than in the baseline. In consequence, additional dispatch consists mainly of onshore instead of offshore wind energy (Figure 9, panel B). These results confirm that the availability of wind power aligns well with the seasonality of the heating demand. Optimal storage energy installations barely change in comparison to the baseline (Figure SI.8, panels C and D). Despite the relatively large shift between different generation technologies, additional system costs do not change much compared to the baseline setting (Figure SI.9).
This implies that, in case of binding wind power capacity limits, a rollout of heat pumps can also be combined with different solar PV capacity expansions at little additional cost.

##### Sustained high gas prices (gas100 and gas150)

Due to the Russian invasion of Ukraine, the natural gas supply structure of Europe has fundamentally changed. In the foreseeable future, Germany will not import Russian gas anymore but will rely on more costly imports of liquefied natural gas (LNG) from other regions. Although wholesale gas prices have fallen strongly since their peak levels of over 300 Euro per MWh in August 2022 and ranged between 30 and 40 Euro per MWh at the time of writing, it remains possible that new price spikes could arise in the future. In our set of baseline scenarios, we assume a natural gas price of 50 Euro per MWh. We introduce two alternative scenarios, gas100 and gas150, in which we assume natural gas prices of 100 or 150 Euro per MWh, respectively.

Higher gas prices barely alter the optimal capacity expansion in the reference rollout. Even a fast heat pump rollout leads to very similar capacity installations compared to baseline assumptions, with increased solar PV and reduced gas power plant capacities for a gas price of 150 Euro per MWh (Figure SI.8, panel B). For the gas150 scenario, we see that a considerable amount of long-duration storage is deployed, serving as a substitute for gas-fired capacity. Introducing heat pumps slightly reduces these capacities. Regarding yearly energy generation, higher gas prices lead, not surprisingly, to lower electricity generation by gas-fired power plants, in the reference rollout as well as in more ambitious rollouts. Notably, in the gas150 scenario, the additional electricity generated due to the heat pumps comes mainly from PV, as the capacity results already suggest. This outcome can be explained by the capacity limit of offshore wind power, which is already reached in the reference rollout. Additional power system costs per heating unit increase substantially compared to the baseline because of more expensive natural gas (Figure SI.9), suggesting that the model cannot fully substitute gas-fired power plants. Overall, we do not observe substantial changes compared to our scenarios under baseline assumptions with a lower gas price.

_Notes:_ The figure shows the additional system costs per heating energy provided (in ct/kWh) in different sensitivity scenarios for a government and fast rollout (with heat storage sizes of zero and two hours).

Figure SI.9: Additional power system costs of heating energy provided in different sensitivity analyses

##### Coal phase-out (coal phase-out)

In the baseline scenarios, we allow coal-fired power plants to generate electricity in 2030, in accordance with the currently planned German coal phase-out by 2038. However, the current governmental coalition agreed to “ideally bring forward” the coal phase-out to 2030. Although this agreement has not yet been translated into binding law, we aim to analyze the power sector consequences of an earlier coal phase-out combined with a faster heat pump rollout. In the reference rollout, coal-fired power plants that are present in the baseline scenario are mainly replaced by gas-fired plants (Figure SI.8, panel A), while capacities hardly differ otherwise. The additional capacities due to the heat pump rollout are very similar to our baseline scenarios (Figure SI.8, panel B).
In terms of dispatch, generation by coal-fired power plants in the reference rollout is mainly compensated by gas-fired (CCGT) plants, as well as by increased net imports. Expanding heat pumps leads to largely similar dispatch effects as in the baseline (Figure 9). Additional power system costs due to heat pumps are also very similar to the baseline (Figure SI.9).

We also combine the coal phase-out with higher gas prices (scenarios coal phase-out + gas100 and coal phase-out + gas150). In consequence, we see slightly higher solar PV capacity installations in the reference rollout. Additional capacities in the government rollout resemble those of the coal phase-out and gas150 scenarios: they are either very similar to the baseline scenarios, or more PV capacities are triggered in case of high gas prices (coal phase-out + gas150). As in scenario gas150, offshore wind power is already at its upper bound in scenario coal phase-out + gas150; hence, additional capacities are mainly PV. Also as in gas150, long-duration storage is installed. In terms of dispatch, results do not differ much from the baseline either. For the reference rollout, the missing coal-fired generation is partly compensated by net electricity imports. Yet, these net imports diminish with additional heat pumps in the fast rollout. For scenario coal phase-out + gas150, in line with the capacity results, additional generation is mainly PV. Overall, additional dispatch does not vary strongly between these sensitivity scenarios and the baseline. Yet, the combination of a coal phase-out and higher gas prices leads to considerably higher power system costs because of the higher production costs of gas-fired power plants.

##### A week of a renewable energy drought (RE drought)

As the share of variable renewable energy increases, the security of supply during prolonged periods with low renewable energy supply becomes an increasing concern. To simulate an extreme case of such a week, we artificially set wind and solar PV capacity factors to zero in all modeled countries during one winter week (scenario RE drought). As a consequence, substantially more gas-fired power plants are installed in the reference (Figure SI.8, panel A). Adding more heat pumps does not change the optimal capacities much compared to the baseline setting. We can see a capacity-reducing effect of heat storage when comparing the results of the scenarios with flexible and inflexible heat pumps. The results regarding generation (Figure 9) also do not differ much compared to the baseline. Additional system costs per unit of heat provided (Figure SI.9) are higher, and heat storage also reduces power sector costs in this scenario.

Combining this sensitivity with a coal phase-out (scenario RE drought + coal phase-out) does not alter the results substantially. The principal difference is that coal-fired power plants are replaced by gas-fired power plants. Note that long-duration electricity storage does not play much of a role in the renewable energy drought modeled here, as it would be more expensive than providing backup capacity with OCGT plants. This would change in case of substantially higher natural gas prices (compare sensitivity gas150), if the potential to build new gas-fired power plants were restricted, or if long-duration electricity storage became substantially cheaper than assumed here.

Figure SI.10 shows the electricity consumption pattern in a week of a renewable energy drought.
Despite a consistently positive residual load, we see stark differences in heat pump electricity demand depending on the heat storage size. As in a regular week, heat pumps shift their consumption to the periods with the lowest residual load; even with the smallest heat storage size, this is done to the largest extent possible.

_Notes:_ The figure shows the residual load, the heat output of heat pumps, and their electricity demand for different heat storage sizes in the RE drought setting with a fast rollout.

Figure SI.10: Heat output, heat pump electric load with different storage sizes, and residual load
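To make the load-shifting mechanism concrete, the following stylized sketch formulates it as a small linear program: a heat pump must cover a given hourly heat demand, a heat storage of limited size decouples heat output from electricity consumption, and consumption is pushed towards low-residual-load hours. All numbers are illustrative; this is not the power sector model used in this analysis.

```python
# Stylized heat pump dispatch with heat storage, as a linear program.
import numpy as np
from scipy.optimize import linprog

T = 24                                     # hours in the example horizon
rng = np.random.default_rng(0)
residual_load = rng.normal(40.0, 15.0, T)  # GW, stylized hourly residual load
heat_demand = np.ones(T)                   # GWh_th per hour, flat for simplicity
cop = 3.0                                  # coefficient of performance
q_max = 3.0                                # max heat output per hour (GWh_th)
soc_max = 2.0 * heat_demand.mean()         # two-hour heat storage

# Variables: heat output q_t (first T entries) and storage level soc_t (last T).
# Objective: minimise sum_t residual_load_t * (q_t / cop), i.e. consume
# electricity preferentially in low-residual-load hours.
c = np.concatenate([residual_load / cop, np.zeros(T)])

# Storage balance soc_t = soc_{t-1} + q_t - d_t, with an empty initial storage.
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = -1.0                      # -q_t
    A_eq[t, T + t] = 1.0                   # +soc_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0          # -soc_{t-1}
b_eq = -heat_demand

bounds = [(0.0, q_max)] * T + [(0.0, soc_max)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("electricity demand by hour (GW):", np.round(res.x[:T] / cop, 2))
```

With a larger `soc_max`, the optimiser concentrates consumption in ever fewer low-residual-load hours, which mirrors the peak-smoothing effect of heat storage discussed above.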
# Towards evolution of Deep Neural Networks through contrastive Self-Supervised learning

Adriano Vinhas University of Coimbra CISUC/LASI, DEI Coimbra, Portugal <EMAIL_ADDRESS>João Correia University of Coimbra CISUC/LASI, DEI Coimbra, Portugal <EMAIL_ADDRESS>Penousal Machado University of Coimbra CISUC/LASI, DEI Coimbra, Portugal <EMAIL_ADDRESS>

###### Abstract

Deep Neural Networks (DNNs) have been successfully applied to a wide range of problems. However, two main limitations are commonly pointed out. The first is that they require a long time to design. The second is that they heavily rely on labelled data, which can be costly and hard to obtain. To address the first problem, neuroevolution has proved to be a plausible option for automating the design of DNNs. As for the second problem, self-supervised learning has been used to leverage unlabelled data to learn representations. Our goal is to study how neuroevolution can help self-supervised learning bridge the gap to supervised learning in terms of performance. In this work, we propose a framework that is able to evolve deep neural networks using self-supervised learning. Our results on the CIFAR-10 dataset show that it is possible to evolve adequate neural networks while reducing the reliance on labelled data. Moreover, an analysis of the structure of the evolved networks suggests that the amount of labelled data fed to them has less effect on the structure of networks that learned via self-supervised learning than on individuals that relied on supervised learning.

###### Index Terms: Self-supervised learning, NeuroEvolution, Deep Learning, Evolutionary Machine Learning

## I Introduction

Deep Learning (DL) has demonstrated its power by achieving new state-of-the-art results, surpassing previous ones based on more traditional methods. Part of this success is due to the availability of large-scale labelled datasets. However, labels are often time-consuming and costly to obtain. Additionally, one could argue that label-based learning can be limiting, as the produced labels are not inherent in the data and, therefore, do not accurately relate to how animals, including humans, tend to learn. For instance, during early stages of life, young children can learn to discriminate between common categories (e.g. birds and dogs [1]) without being explicitly told that they have different labels. This encourages the idea that, at initial stages, learning is unsupervised and uses perceptual abilities to build representations of those categories [2].

Inspired by this rationale, Self-Supervised learning (SSL) has been growing in popularity due to its ability to leverage the vast amounts of unlabelled data available. Its goal is to learn helpful representations without requiring extrinsic labels. DNNs trained using SSL have obtained breakthrough results in fields like Natural Language Processing (NLP), overshadowing those obtained with supervised learning. However, the same progress is yet to be achieved in computer vision. We believe that there are two main reasons for this. The first is that most of the works that apply SSL to computer vision have used a limited range of network topologies. This is because the design of a DNN is a time-consuming task, as its hyperparameters are optimised following a trial-and-error process. The second is that SSL algorithms are manually designed too, which hampers the discovery of new approaches that might be better suited to the image domain.
Evolutionary Computation (EC) is another biologically inspired field, which borrows ideas from evolution theory. It has been used to automate the search for the best topology or other hyperparameters of a given DNN. The intersection of these two fields originated a subfield known as Neuroevolution (NE). Not only does NE help to automatically design the network that best suits a task, it also promotes the emergence of increasingly better solutions by following a bio-inspired approach.

In this paper, we hypothesise that combining SSL algorithms with NE promotes the emergence of DNNs that are useful for specific tasks (in our case, image classification). By merging these two areas, we believe that we can take the best of SSL and NE: reduce the need for labelled data and, at the same time, automate design aspects that impact the final DNNs. To achieve this, we propose a framework that performs Evolution of Deep Networks through Self Supervision (EvoDeNSS). The code is publicly available on GitHub111https://github.com/adrianovinhas/evodenss.

The remainder of the document is structured as follows. In Section II we survey related work. Next, in Section III we detail the developed approach, which is then followed by the undertaken experiments and respective results (Section IV). Finally, in Section V, conclusions are drawn and future work is addressed.

## II Related work

Our work intersects two main topics: 1) SSL and 2) NE. Therefore, we start by outlining the dynamics of this learning paradigm, followed by a review of NE works which attempt to learn representations without the use of extrinsic labels.

### II-A Self-Supervised learning (SSL)

SSL [3, 4] is a learning paradigm that uses the input data itself to create pseudo-labels that guide the learning process. Pseudo-labels are intrinsically generated from the relation between the original input and a modified version of it. This modification can come in the form of parts of the original input, a corrupted version of the input, or even different modalities of data. The goal of SSL is to create feature extractors by predicting the original inputs based on their modified counterparts.

SSL methods are composed of two stages. In the first one, a model is trained without relying on any labels and without explicitly taking the final goal into account. Instead, the model is trained to solve a task related to the one we are interested in, known as the pretext task, through which it learns to extract features based on pseudo-labels. The second stage, known as the downstream task, transfers the pretrained model by reusing the learned features to train another model that solves the actual task. This step follows a supervised learning setting, as the features extracted from labelled inputs are used to train the downstream task. An overview of the 2-step process in SSL is depicted in Figure 1.

Figure 1: Overview of the SSL process for an image classification problem.

### II-B Neuroevolution (NE)

NE [5, 6, 7] is a field that encompasses the application of Evolutionary Algorithms (EAs) to automate design aspects of DNNs. Within the specific context of SSL, the NE literature can be divided into the evolution of generative and contrastive methods. Regarding generative methods, David and Greental evolved the weights of an Auto-Encoder (AE) [8]. Their architecture is fixed, and the weights of the encoder are tied to those of the decoder, which substantially reduces the search space.
Lander and Shang [9] proposed a framework which allows more freedom in the evolution process by targeting both the weight and structural space. Assunção et al. evolved AEs that are not constrained to have the same structure or weights in the encoder and decoder [10], aiming to compress the data without compromising classification performance. Their fitness function aggregates several objectives into a single one via linear combination: 1) maximise the accuracy of the learned representations in an image classification task, 2) minimise the size of the chokepoint, and 3) minimise the number of layers in the decoder. Other works employed NE for the evolution of Variational Auto-Encoders (VAEs) [11]. For instance, Hajewski and Oliveira evolve VAEs using only fully connected layers [12]. The architecture of their evolved DNNs was flexible enough to vary the number of layers and the number of neurons within a layer. Chen et al. employed a block-based search by allowing any arrangement of layers within each block (without skip connections) [13], but the way blocks interact with each other is predefined; four blocks are defined in total. Fitness is calculated based on the training loss, which may cause generalisation issues due to overfitting.

Generative Adversarial Networks (GANs) [14] can be considered another form of SSL, given that a generator $G$ learns representations through a pretext task in which a discriminator is taught to distinguish images coming from the training data from those generated from the latent space. E-GAN [15] targeted generators in its evolutionary process and assumes that the discriminator will always achieve its optimal performance. Generators are evolved only through mutations and separately trained using three predefined objective functions, generating three new GANs. Gonzalez and Miikkulainen evolved loss functions adapted to GANs [16]. Individuals represent coefficients of Taylor expansions and are evolved using an Evolution Strategies (ES) algorithm. Costa et al. approached the evolution of GANs as a co-evolution problem [17], whereby generators and discriminators follow a representation similar to DeepNEAT [18] and individuals are evolved using a mutation operator only. Each GAN is the result of pairing each discriminator with each generator to calculate the fitness of each individual. This work was later extended to use different fitness functions. One of them is a multiobjective function that takes into account both FID and a novelty search score that measures diversity [19]. The other used a skill rating score [20] based on the Glicko-2 rating system [21].

Most modern SSL methods aim to learn representations by contrastive learning. The idea is to feed a model with multiple views of the same input and encourage it to represent contrasting versions of an input similarly, thereby promoting multi-view invariance. Even though contrastive methods obtain state-of-the-art results, the NE literature on automating design aspects learned with these methods is very scarce. In order to automate the design of the data augmentation component, Barrett et al. aimed to evolve the augmentation policy parameters to find the one that produces the best representations [22]. Outside the image domain, ELo [23] is a network that is trained to minimise a joint loss composed of losses on $M$ modalities, trained on $T$ self-supervised tasks. The final loss is divided into two components. One is the linear combination of all losses $L_{m,t},\,m\in M,\,t\in T$.
The other is a distillation loss $L_{d}$, which results from transferring knowledge from one modality to the main network. The coefficients associated with each of these losses are evolved through an ES algorithm. Finally, Wei et al. use NE to evolve DNN architectures that were trained using an SSL algorithm [24]. As a fitness metric, the authors rely on a surrogate model that predicts the performance of the individuals.

## III Proposed approach

In this section we detail the most relevant parts of our proposed neuroevolutionary framework. We start by describing the evolutionary engine, focusing on how the evaluation of individuals was adapted to the SSL case. Finally, the dataset partitioning process is specified with regard to both SSL and supervised learning.

### III-A Evolutionary Engine

EvoDeNSS is a neuroevolutionary framework that can be considered an extension of Fast-DENSER [25]. Therefore, we describe how Fast-DENSER works and which extensions were performed to enable individuals to be evaluated through an SSL algorithm. Fast-DENSER is a NE framework that allows the emergence of Convolutional Neural Networks (CNNs) using a $(1+\lambda)$-ES algorithm. The rationale behind this decision is to reduce the number of evaluations compared to its predecessor [26]. Fast-DENSER aims to run with a more restricted population size, hence limiting the number of evaluations needed to execute the NE algorithm. One of the concerns raised by the authors was the impact of this decision on the diversity of the evolved solutions. Diversity is an important factor in EAs, as it impacts the ability to converge towards optimal DNNs. To compensate for this possible diversity loss, weight-sharing mechanisms were not used. Instead, at each generation one parent is selected to create its descendants, but its weights are discarded. In the next generation, the parent is trained from scratch, using another set of initial weights. An overview of Fast-DENSER is depicted in Figure 2.

Figure 2: Overview of Fast-DENSER.

EvoDeNSS uses the same representation as Fast-DENSER. It is a two-level representation which can be seen as an array of modules at the macro level (modules can represent a layer or learning aspects), whereas at the micro level, each module uses a Dynamic Structured Grammatical Evolution (DSGE) representation [27], which allows the structure of each module to be constrained by context-free grammars. The flexibility of this approach biases the search space towards more favourable solutions by injecting a priori knowledge. In EvoDeNSS, fully-connected layers are not used for representation learning, meaning that we only consider layers within the context of learning features. The details of the grammars used in EvoDeNSS can be found in the public repository mentioned in Section I. Moreover, for the learning block we included the possibility of training DNNs using the LARS optimiser [28]. This decision was made because this optimiser was chosen by the authors of the SSL algorithm we use for evaluation purposes. More details about this algorithm can be found in Section III-B.

### III-B Evaluation Module

In order to evaluate each individual, the genotype needs to be converted into a phenotype, which in this case is a DNN in a format that is trainable with a Deep Learning framework.
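As a rough, hypothetical illustration of this genotype-to-phenotype conversion, the sketch below decodes a list of layer specifications into a trainable PyTorch module. The specification format is our own simplification for illustration, not the exact EvoDeNSS/DSGE encoding.

```python
# Decode a simplified genotype (a list of layer specs) into a PyTorch module.
import torch.nn as nn

def decode(genotype, in_channels=3):
    layers, c = [], in_channels
    for gene in genotype:
        if gene["type"] == "conv":
            layers.append(nn.Conv2d(c, gene["filters"], gene["kernel"], padding="same"))
            layers.append(nn.ReLU())
            c = gene["filters"]
        elif gene["type"] == "batch_norm":
            layers.append(nn.BatchNorm2d(c))
        elif gene["type"] == "pool":
            layers.append(nn.MaxPool2d(gene["kernel"]))
    return nn.Sequential(*layers)

# Example: a small convolutional feature extractor.
encoder = decode([
    {"type": "conv", "filters": 32, "kernel": 3},
    {"type": "batch_norm"},
    {"type": "pool", "kernel": 2},
])
```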
Given that we are focusing on an image classification problem, we define fitness as the ability of an evolved network to correctly classify the images it is fed. To this end, we compute fitness using the accuracy on a dedicated subset, created only for this purpose. The process of computing accuracy differs slightly depending on the chosen learning paradigm. In supervised learning, the model needs to be trained using labelled data (subject to a maximum training time) and then the accuracy is measured on the dedicated test set. As for SSL, the evolved network is first trained on the pretext task with unlabelled data to learn representations, also bounded by a maximum training time. Then, training for the downstream task takes place by applying a linear layer on top of the representations and feeding the network with labelled data. This linear layer is trained with fixed training parameters for a predefined number of epochs, meaning that no aspects of the downstream task are incorporated into evolution. In this scenario, the fitness metric reflects the quality of the learned representations, i.e., whether the evolved representations are good predictors. Therefore, we measure the accuracy on the downstream task by feeding in the dedicated test set data. The differences in calculating fitness for each learning paradigm are depicted in Figure 3.

(a) Fitness assignment under supervised learning (b) Fitness assignment under SSL

Figure 3: Overview of individual evaluation under different learning paradigms.

In order to train representations without using labels, we incorporated the Barlow Twins algorithm [29] into the proposed framework. The rationale of this method is to build representations from pairs of image views. Learning occurs by employing a loss function that maximises invariance and minimises redundancy between these representation pairs. An overview of the Barlow Twins method is depicted in Figure 4.

Figure 4: Barlow Twins algorithm

Each batch of original images from the dataset is augmented, generating two different batches, named $Y^{A}$ and $Y^{B}$. Both $Y^{A}$ and $Y^{B}$ contain unique image views that are created from the same set of original images. Image views are fed to an encoder network, which is in turn attached to a projector network. A projector network is commonly a multi-layer neural network composed of dense and/or batch normalisation layers only. The outputs of this process are two matrices that represent two batches of output vectors, $Z^{A}$ and $Z^{B}$. The cross-correlation matrix between $Z^{A}$ and $Z^{B}$ is calculated and then used to compute the loss. The rationale is that we want the same dimensions to be as correlated as possible, making the representations invariant to transformations. At the same time, different dimensions should be disentangled from each other; if they are not, two dimensions can be considered redundant. Therefore, the goal of the optimisation process is to bring the cross-correlation matrix as close as possible to the identity matrix, via Equation 1.

$L_{BT}=\underbrace{\sum_{i}(1-C_{i,i})^{2}}_{\text{invariance term}}+\lambda\cdot\underbrace{\sum_{i}\sum_{j\neq i}C_{i,j}^{2}}_{\text{redundancy reduction term}},$ (1)

In this loss function, $i$ and $j$ identify dimensions of the representation vector, $C$ represents the cross-correlation matrix, and $\lambda$ is a parameter that sets the importance of the redundancy reduction term.
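Equation 1 can be implemented compactly; the sketch below is our own illustrative PyTorch version, following the published algorithm [29], with `z_a` and `z_b` standing for the projector output batches $Z^{A}$ and $Z^{B}$ and `lam` defaulting to the $\lambda$ value later listed in Table I.

```python
# Illustrative Barlow Twins loss (Equation 1).
import torch

def barlow_twins_loss(z_a, z_b, lam=0.0078125):
    n, _ = z_a.shape
    # Standardise each dimension across the batch before cross-correlation.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = (z_a.T @ z_b) / n                           # D x D cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()  # invariance term
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # redundancy term
    return on_diag + lam * off_diag
```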
It should be noted that the representations used for downstream tasks are the ones produced by the encoder; hence, the projector network is only needed during the pretext task. Barlow Twins was chosen for these experiments due to the simplicity of its design compared to other algorithms. Besides, this algorithm is reported to work well with lower batch sizes than some of its peers, making it a suitable candidate in scenarios where extensive GPU memory resources are not available.

### III-C Dataset Partitioning

Two major factors were taken into account while partitioning the dataset. The first was that even though our fitness function is based on the accuracy metric, we need to ensure that the data used to measure the fitness and the data used to measure the final performance of the networks are disjoint. For instance, if we were to use the same test set for both fitness and performance purposes, even though individuals would not be trained on that data, the evolution process would still be guided by a measurement based on the test data. The second was that the data partitioning process had to be flexible enough to be used in both supervised and self-supervised scenarios.

Given that most common benchmark datasets come divided into train and test sets, we used only data from the train set during the evolutionary process, whereas the test set was only used at the end of it, to check the performance of the best DNN on unseen data. During the evolutionary process we divide the entire train set into three disjoint splits for training, validation, and testing purposes. From now onwards, we will refer to these splits as the evolutionary training set, evolutionary validation set, and evolutionary test set. In order to test scenarios in which the amount of labelled data is scarce, we downsampled the train split further, producing an even smaller training subset which we name the downsampled evolutionary training set. All divisions and downsampling operations produce balanced sets, i.e., each set contains the same number of instances per class (a minimal code sketch of this balanced splitting is given at the end of this subsection). The entire dataset partitioning process is described in Figure 5.

Figure 5: Set splits created by the dataset partitioning process. Coloured sets represent the ones used during the evolutionary process by supervised and/or self-supervised learning, whereas the test set is used at the end to check the final performance.

The splits that are used vary according to the learning paradigm chosen for the evolutionary process. In the supervised learning scenario, each candidate network is trained with the downsampled evolutionary training set. During the training of each individual, we use the evolutionary validation set to track the network loss. This allows us to set an early stopping criterion to prevent the network from overfitting. Finally, the evolutionary test set is used for fitness computation. In the SSL scenario, we use the evolutionary training set to train representations, without relying on its labels. Following previous SSL works [29], no evolutionary validation set is used in the pretext task, as it is harder to check whether the network is overfitting. This happens because 1) the pretext task is not the final task we are aiming to solve, and 2) the loss produced during the pretext task is derived only from inputs and their stochastically augmented versions (no labels are used as ground truth).
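As referenced above, the following is a minimal sketch (not the exact EvoDeNSS implementation) of producing balanced, disjoint splits plus a downsampled training subset: every class contributes the same share of instances to each split.

```python
# Class-balanced, disjoint dataset splits with optional downsampling.
import numpy as np

def balanced_split(labels, fractions, seed=0):
    """Return {split_name: indices}, class-balanced and disjoint."""
    rng = np.random.default_rng(seed)
    splits = {name: [] for name in fractions}
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        start = 0
        for name, frac in fractions.items():
            take = int(frac * len(idx))
            splits[name].extend(idx[start:start + take])
            start += take
    return {name: np.asarray(ids) for name, ids in splits.items()}

# 70/20/10 division of a CIFAR-10-like train set (labels are a stand-in here).
labels = np.repeat(np.arange(10), 5000)
splits = balanced_split(labels, {"evo_train": 0.7, "evo_val": 0.2, "evo_test": 0.1})
# Downsampled evolutionary training set: a balanced 10% of the training split.
down = balanced_split(labels[splits["evo_train"]], {"downsampled": 0.1})
down_idx = splits["evo_train"][down["downsampled"]]   # indices into the full set
```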
After representations are learned, the downstream task makes use of the downsampled evolutionary training set to train a network for the image classification problem to be solved. Finally, the evolutionary test set is applied to the trained network in order to compute the fitness of an individual. The splits used for each learning paradigm are described in Figure 6.

Figure 6: Set splits used, broken down by task and learning paradigm

## IV Experimentation

The evolutionary engine built for these experiments (version 2.0.0 on GitHub) uses Python 3.9. For the fitness assignment phase, the PyTorch library was adopted to create the corresponding phenotype models and optimisers. Models were trained on the CIFAR-10 dataset [30] using a single NVIDIA 3080 GPU.

### IV-A Setup

In order to validate the adopted methodology, we conducted experiments with two goals. First, we wanted to analyse how evolution using a supervised learning paradigm would compare against evolution using SSL. Second, we wanted to understand the impact of evolving DNNs when the amount of labelled data is scarce. Therefore, four different scenarios were tested:

* • Evolution using supervised learning and 100% of the labelled data for training;
* • Evolution using supervised learning and 10% of the labelled data for training;
* • Evolution using Barlow Twins and 100% of the labelled data for training (in the downstream task);
* • Evolution using Barlow Twins and 10% of the labelled data for training (in the downstream task).

For each of the aforementioned scenarios, 10 different runs were executed. Each run based on Barlow Twins took an average of 2 days to execute, whereas each supervised learning run took less than a day. The parameters used for these experiments are described in Table I.

TABLE I: Experimental parameters.

| EA Parameters | Supervised | SSL |
|---|---|---|
| Number of runs | 10 | 10 |
| Generations | 75 | 75 |
| Population size | 6 | 6 |
| Add layer rate | 0.25 | 0.25 |
| Remove layer rate | 0.25 | 0.25 |
| DSGE mutation rate | 0.15 | 0.15 |
| Macro mutation rate | 0.3 | 0.3 |
| Train longer | 0.03 | 0.03 |
| Dataset Parameters | Supervised | SSL |
| Evolutionary Training Set | 70% | 70% |
| Evolutionary Validation Set | 20% | - |
| Evolutionary Test Set | 10% | 30% |
| Learning Parameters | Supervised | SSL |
| $\lambda$ | - | 0.0078125 |
| Train time (mins) | 2 | 2 |
| Loss Function | Cross-Entropy | Barlow Twins |
| Downstream epochs | - | 30 |
| Downstream learning rate | - | 0.001 |
| Downstream optimiser | - | Adam |
| Downstream loss | - | Cross-Entropy |
| Data Augmentation Parameters | Supervised | SSL |
| Padding | 4 | 4 |
| Horizontal Flip | 0.5 | 0.5 |
| Color Jitter (Brightness) | - | 0.4 |
| Color Jitter (Contrast) | - | 0.4 |
| Color Jitter (Saturation) | - | 0.4 |
| Color Jitter (Hue) | - | 0.1 |
| Color Jitter (Probability) | - | 0.8 |
| Grayscale | - | 0.2 |

### IV-B Results

Firstly, we focus on analysing the ability of the NE algorithm to promote the emergence of increasingly better solutions, as well as the generalisation capabilities of the best DNNs. The plot in Figure 7 depicts how the fitness of the best individual changes throughout generations, averaged over the 10 runs.

Figure 7: Evolution of DNNs using the accuracy on the Evolutionary Test set as fitness. Results are averages of 10 runs.

From this figure, it is clear that all four test cases are able to evolve increasingly better solutions. When the evolved networks learn with supervised learning and without downsampling (Supervised 100), fitness converges faster towards more optimal regions, with the blue line reaching an average fitness of 0.85.
When compared to the self-supervised runs that use Barlow Twins, this is a significantly higher fitness value. This behaviour is expected, as learning without labels makes the evolution task notoriously more difficult. Although the combination that produces the solutions with the highest fitness is the Supervised 100 case, it has to be noted that under downsampling, the supervised scenario (Supervised 10) seems to reach a plateau, suggesting that no significant fitness improvements would happen if the evolution process were extended. Conversely, self-supervised evolution exhibits curves suggesting that further improvements could still occur if the evolutionary runs were extended for more generations. This is particularly evident in the case where DNNs are evolved without downsampling using an SSL paradigm (BT 100), as the evolution curve does not flatten at the same pace as the Supervised 100 line. Another point worth mentioning is that SSL-guided evolution seems to be more resilient to the scarcity of labelled data than its supervised counterpart, given that both red lines remain close to each other throughout generations, compared to the blue lines that represent the supervised learning scenarios. This suggests that the evolved solutions are able to leverage unlabelled data to learn representations. Additionally, when the amount of labelled data is limited, self-supervised evolution is able to reach the same fitness levels as its supervised counterpart, despite the extra difficulty of the task.

Besides the evolution analysis, we computed the accuracy of the best individual of each run on the original test set. These values are summarised in Table II and show the ability of the evolved networks to generalise to unseen data. Among the self-supervised runs, the best one achieves a test accuracy of 77.41%, using Barlow Twins with 100% of the labelled data. Moreover, the average performance of the best solutions evolved with SSL was not affected by using fewer labelled samples in the downstream task, and it also obtained results comparable to the evolution runs guided by supervised learning using 10% of the labelled data, showing how competitive SSL-based evolution can be when labelled data is limited.

TABLE II: Test accuracies (%) of the best networks produced by each testing scenario.

| | BT 10% | Supervised 10% | BT 100% | Supervised 100% |
|---|---|---|---|---|
| Mean | 66.86 $\pm$ 3.53 | 66.53 $\pm$ 2.83 | 68.35 $\pm$ 5.73 | 84.67 $\pm$ 1.67 |
| Best run | 71.63 | 69.59 | 77.41 | 86.89 |

The second part of our results involves a structural analysis of the best individuals whose average fitness was depicted previously. To this end, Figure 8 shows the average number of layers of the best individuals throughout generations. In the SSL case, we consider the number of layers to be those used to train representations plus the final layer trained during the downstream task, thus excluding the projector network.

Figure 8: Number of layers of the best individuals throughout generations. Results are averages of 10 runs.

Although we acknowledge that a direct comparison between the total number of layers in the Barlow Twins and supervised scenarios cannot be made in a fair manner, we can still analyse the impact of data scarcity on each of the learning paradigms separately.
This is due to the fact that the downstream task is not evolved (only one fixed layer is used); hence, we do not promote the emergence of fully connected layers under the same circumstances. Nevertheless, we can still observe that networks evolved using Barlow Twins show a similar total number of layers as evolution proceeds, which can be seen by both red lines staying very close to each other during the evolutionary process. That is not the case in the supervised scenario. The Supervised 100 line actually starts with an average number of layers (5) very similar to that of Supervised 10 (4.8). However, by the last generation there is a clear discrepancy, as Supervised 100 reaches an average of 11.7 layers compared to 8.2 for the Supervised 10 scenario.

A deeper insight into the number of layers, broken down by type, is given in Figure 9. This plot shows a common trend in the number of convolutional layers: regardless of the learning paradigm and the percentage of labelled data used, the best individuals tend to hold an increasing number of convolutional layers throughout generations. This shows the importance of convolutional layers in promoting the emergence of increasingly better individuals. Another detail that emerges from Figure 9 concerns the number of batch normalisation layers (green line). In Figures 9(c) and 9(d), one can see that the green lines and the blue lines (convolutional layers) grow together, suggesting that when the evolved networks learn through supervised learning, the trend in the number of batch normalisation layers is correlated with the number of convolutional layers (even more visible in Figure 9(c)). On the other hand, if one performs the same analysis for Barlow Twins, looking at Figures 9(a) and 9(b), a different behaviour is observed (particularly in Figure 9(a)), as the gap between the number of convolutional and batch normalisation layers seems to grow throughout generations. This is an interesting result, as the grammars used in supervised learning and Barlow Twins contain the same derivation rules for the definition of convolutional and batch normalisation layers.

(a) Barlow Twins with 10% of labelled data. (b) Barlow Twins with 100% of labelled data. (c) Supervised learning with 10% of labelled data. (d) Supervised learning with 100% of labelled data.

Figure 9: Number of layers of the best individuals evolved using different combinations of learning paradigm and % of labelled data. Results are averages of 10 runs.

## V Conclusions and Future Work

Designing DNN architectures and preparing the necessary datasets can be time-consuming. Moreover, in order to feed these models with data, one needs to label the data, which adds extra complexity to an already laborious task. This highlights the importance of reducing the amount of human intervention in this process. The current work aims to alleviate this problem by combining ideas from NE with SSL, so that aspects of DNN architecture design can be automated while reducing the reliance on labelled data to train these models. We developed a neuroevolutionary framework that evolves DNNs which can learn either through supervised learning or through the Barlow Twins algorithm. In the latter case, we leverage unlabelled data to learn representations and then train a single dense layer on top of them using a percentage of the labelled data.
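For concreteness, the single-dense-layer evaluation mentioned above can be sketched as a linear probe over a frozen encoder. The code below is our own illustration, not the EvoDeNSS implementation, using the downstream settings from Table I (30 epochs, Adam, learning rate 0.001, cross-entropy); `feat_dim` and `loader` are assumed to be supplied by the caller.

```python
# Illustrative linear probe: train one dense layer on frozen representations.
import torch
import torch.nn as nn

def train_probe(encoder, loader, feat_dim, n_classes=10, epochs=30):
    encoder.eval()                                # freeze the encoder
    for p in encoder.parameters():
        p.requires_grad = False
    probe = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = encoder(x).flatten(1)     # representations
            loss = loss_fn(probe(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```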
In order to assess the impact of the chosen learning paradigm within the NE context, we evolve DNNs that are taught with both the supervised learning and SSL paradigms. We also vary the amount of labelled data that is fed to the evolved networks (10% or 100%). Our results confirm that, similarly to evolution guided by supervised learning, evolution can also be guided by self-supervised learning. However, given the additional constraints imposed by the unavailability of labels, the optimisation of networks is a harder task. Despite this, evolved networks that learn in a self-supervised manner are able to compete with their supervised counterparts when the amount of labelled data is limited, even outperforming them in some cases. Additionally, a structural analysis of the evolved DNNs shows that the ones that learned via the Barlow Twins algorithm tend to be structurally more similar (in terms of number of layers) regardless of the percentage of labelled data, when compared to networks that learned in a supervised manner.

In order to better understand the impact of NE within a SSL context, there are other components of our proposed framework which could be targeted by EC. For instance, we plan to evolve components such as the projector network, as projectors are reported to help improve the performance of the downstream task; however, the reason why this works and their exact role are yet to be uncovered. Chen et al. concluded that using the layer before the projector as the representation offered better downstream performance [31], whereas Ma et al. suggest that the projector seeks uniformity by maximising the distance between dissimilar samples [32]. Similarly, we plan to evolve data augmentation aspects within the Barlow Twins algorithm, as they play a vital role during the pretext task by setting the difficulty of the task to be solved. The network used during the downstream task can also be evolved to help the EA converge towards more optimal solutions. On a different line of thought, we believe that modifying the search space by adopting a cell-based approach could benefit both the quality of the evolved networks and the time it takes to evolve them.

## Acknowledgments

This work is funded by the FCT - Foundation for Science and Technology, I.P./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020.

## References

* [1] P. C. Quinn, P. D. Eimas, and S. L. Rosenkrantz, “Evidence for representations of perceptually similar natural categories by 3-month-old and 4-month-old infants,” Perception, vol. 22, no. 4, pp. 463–475, 1993.
* [2] E. Orhan, V. Gupta, and B. M. Lake, “Self-supervised learning through the eyes of a child,” Advances in Neural Information Processing Systems, vol. 33, pp. 9960–9971, 2020.
* [3] V. R. de Sa, “Learning classification with unlabeled data,” Advances in Neural Information Processing Systems, pp. 112–112, 1994.
* [4] C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual representation learning by context prediction,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430, 2015.
* [5] D. Floreano, P. Dürr, and C. Mattiussi, “Neuroevolution: from architectures to learning,” Evolutionary Intelligence, vol. 1, pp. 47–62, 2008.
* [6] D. J. Montana, L. Davis, et al., “Training feedforward neural networks using genetic algorithms,” in IJCAI, vol. 89, pp. 762–767, 1989.
* [7] S. A. Harp and T. Samad, “Genetic optimization of self-organizing feature maps,” in IJCNN-91-Seattle International Joint Conference on Neural Networks, vol. 1, pp. 341–346, IEEE, 1991.
* [8] O. E. David and I. Greental, “Genetic algorithms for evolving deep neural networks,” in Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 1451–1452, 2014.
* [9] S. Lander and Y. Shang, “EvoAE: a new evolutionary method for training autoencoders for deep learning networks,” in 2015 IEEE 39th Annual Computer Software and Applications Conference, vol. 2, pp. 790–795, IEEE, 2015.
* [10] F. Assunção, D. Sereno, N. Lourenço, P. Machado, and B. Ribeiro, “Automatic evolution of autoencoders for compressed representations,” in 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8, IEEE, 2018.
* [11] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16, 2014, Conference Track Proceedings (Y. Bengio and Y. LeCun, eds.), 2014.
* [12] J. Hajewski and S. Oliveira, “An evolutionary approach to variational autoencoders,” in 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0071–0077, IEEE, 2020.
* [13] X. Chen, Y. Sun, M. Zhang, and D. Peng, “Evolving deep convolutional variational autoencoders for image classification,” IEEE Transactions on Evolutionary Computation, vol. 25, no. 5, pp. 815–829, 2020.
* [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, eds.), vol. 27, Curran Associates, Inc., 2014.
* [15] C. Wang, C. Xu, X. Yao, and D. Tao, “Evolutionary generative adversarial networks,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 6, pp. 921–934, 2019.
* [16] S. Gonzalez, M. Kant, and R. Miikkulainen, “Evolving GAN formulations for higher quality image synthesis,” arXiv preprint arXiv:2102.08578, 2021.
* [17] V. Costa, N. Lourenço, and P. Machado, “Coevolution of generative adversarial networks,” in Applications of Evolutionary Computation: 22nd International Conference, EvoApplications 2019, Held as Part of EvoStar 2019, Leipzig, Germany, April 24–26, 2019, Proceedings 22, pp. 473–487, Springer, 2019.
* [18] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, and B. Hodjat, “Evolving deep neural networks,” in Artificial Intelligence in the Age of Neural Networks and Brain Computing (R. Kozma, C. Alippi, Y. Choe, and F. C. Morabito, eds.), pp. 293–312, Amsterdam: Elsevier, 2019.
* [19] V. Costa, N. Lourenço, J. Correia, and P. Machado, “Exploring the evolution of GANs through quality diversity,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference, pp. 297–305, 2020.
* [20] V. Costa, N. Lourenço, J. Correia, and P. Machado, “Using skill rating as fitness on the evolution of GANs,” in Applications of Evolutionary Computation: 23rd European Conference, EvoApplications 2020, Held as Part of EvoStar 2020, Seville, Spain, April 15–17, 2020, Proceedings 23, pp. 562–577, Springer, 2020.
* [21] M. E. Glickman, “Example of the Glicko-2 system,” Boston University, vol. 28, 2012.
* [22] N. Barrett, Z. Sadeghi, and S. Matwin, “Evolutionary augmentation policy optimization for self-supervised learning,” arXiv preprint arXiv:2303.01584, 2023.
* [23] A. Piergiovanni, A. Angelova, and M. S. Ryoo, “Evolving losses for unsupervised video representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 133–142, 2020.
* [24] C. Wei, Y. Tang, C. Niu, H. Hu, Y. Wang, and J. Liang, “Self-supervised representation learning for evolutionary neural architecture search,” IEEE Computational Intelligence Magazine, vol. 16, no. 3, pp. 33–49, 2021.
* [25] F. Assunção, N. Lourenço, P. Machado, and B. Ribeiro, “Fast DENSER: Efficient deep neuroevolution,” in Genetic Programming (L. Sekanina, T. Hu, N. Lourenço, H. Richter, and P. García-Sánchez, eds.), (Cham), Springer International Publishing, 2019.
* [26] F. Assunção, N. Lourenço, P. Machado, and B. Ribeiro, “DENSER: deep evolutionary network structured representation,” Genetic Programming and Evolvable Machines, vol. 20, pp. 5–35, 2019.
* [27] N. Lourenço, F. Assunção, F. B. Pereira, E. Costa, and P. Machado, “Structured grammatical evolution: a dynamic approach,” Handbook of Grammatical Evolution, pp. 137–161, 2018.
* [28] Y. You, I. Gitman, and B. Ginsburg, “Large batch training of convolutional networks,” arXiv preprint arXiv:1708.03888, 2017.
* [29] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, “Barlow Twins: Self-supervised learning via redundancy reduction,” in International Conference on Machine Learning, pp. 12310–12320, PMLR, 2021.
* [30] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” 2009.
* [31] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in International Conference on Machine Learning, pp. 1597–1607, PMLR, 2020.
* [32] J. Ma, T. Hu, and W. Wang, “Deciphering the projection head: Representation evaluation self-supervised learning,” arXiv preprint arXiv:2301.12189, 2023.